
Ashok - An OSINT Recon Tool, A.K.A. Swiss Army Knife

By: Zion3R


Reconnaissance is the first phase of penetration testing, which means gathering information before any real attacks are planned. Ashok is an incredibly fast recon tool for penetration testers, specially designed for the reconnaissance phase. In Ashok-v1.1 you can find the advanced Google dorker and the Wayback crawling machine.



Main Features

- Wayback Crawler Machine
- Google Dorking without limits
- Github Information Grabbing
- Subdomain Identifier
- Cms/Technology Detector With Custom Headers

Installation

~> git clone https://github.com/ankitdobhal/Ashok
~> cd Ashok
~> python3.7 -m pip install -r requirements.txt

How to use Ashok?

A detailed usage guide is available in the Usage section of the wiki.

A brief index of options is given below:

Docker

Ashok can be launched using a lightweight Python3.8-Alpine Docker image.

$ docker pull powerexploit/ashok-v1.2
$ docker container run -it powerexploit/ashok-v1.2 --help


    Credits



    HardeningMeter - Open-Source Python Tool Carefully Designed To Comprehensively Assess The Security Hardening Of Binaries And Systems

    By: Zion3R


    HardeningMeter is an open-source Python tool carefully designed to comprehensively assess the security hardening of binaries and systems. Its robust capabilities include thorough checks of various binary exploitation protection mechanisms, including Stack Canary, RELRO, randomizations (ASLR, PIC, PIE), None Exec Stack, Fortify, ASAN, and the NX bit. This tool is suitable for all types of binaries and provides accurate information about the hardening status of each binary, identifying those that deserve attention and those with robust security measures. HardeningMeter supports all Linux distributions and machine-readable output; the results can be printed to the screen in a table format or exported to a CSV file. (For more information, see the Documentation.md file.)


    Execute Scanning Example

    Scan the '/usr/bin' directory, the '/usr/sbin/newusers' file, the system and export the results to a csv file.

    python3 HardeningMeter.py -f /bin/cp -s

    Installation Requirements

    Before installing HardeningMeter, make sure your machine has the following:

    1. The readelf and file commands
    2. Python 3
    3. pip
    4. tabulate

    pip install tabulate

    Install HardeningMeter

    The very latest developments can be obtained via git.

    Clone or download the project files (no compilation or installation is required)

    git clone https://github.com/OfriOuzan/HardeningMeter

    Arguments

    -f --file

    Specify the files you want to scan; the argument can take more than one file, separated by spaces.

    -d --directory

    Specify the directory you want to scan; the argument takes one directory and scans all ELF files recursively.

    -e --external

    Specify whether you want to add external checks (False by default).

    -m --show_missing

    Prints, in order, only those files that are missing security hardening mechanisms and need extra attention.

    -s --system

    Specify if you want to scan the system hardening methods.

    -c --csv_format

    Specify if you want to save the results to a CSV file (results are printed as a table to stdout by default).
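
    Putting these arguments together, an illustrative invocation (the directory path below is a placeholder for your own target, not taken from the tool's documentation) might look like:

    python3 HardeningMeter.py -d /usr/bin -m -s -c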

    Results

    HardeningMeter's results are printed as a table and consist of 3 different states:

    - (X) - This state indicates that the binary hardening mechanism is disabled.
    - (V) - This state indicates that the binary hardening mechanism is enabled.
    - (-) - This state indicates that the binary hardening mechanism is not relevant in this particular case.

    Notes

    When the default language on Linux is not English, make sure to add "LC_ALL=C" before calling the script.
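
    For example, on a machine with a non-English locale, the scanning example above would become:

    LC_ALL=C python3 HardeningMeter.py -f /bin/cp -s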



    Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

    By: Zion3R


    Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proof of concepts, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration to recon beyond secrets. We realized we required capabilities that were "secret-agnostic", and had enough flexibility to capture false-positives that still provided offensive value.

    Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc.) from publicly accessible Postman entities, such as:

    • Workspaces
    • Collections
    • Requests
    • Users
    • Teams

    Installation

    python3 -m pip install porch-pirate

    Using the client

    The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords to be used that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.

    Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.

    • --globals
    • --collections
    • --requests
    • --urls
    • --dump
    • --raw
    • --curl

    Simple Search

    porch-pirate -s "coca-cola.com"

    Get Workspace Globals

    By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

    Dump Workspace

    When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

    Automatic Search and Globals Extraction

    Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

    porch-pirate -s "shopify" --globals

    Automatic Search Dump

    Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

    porch-pirate -s "coca-cola.com" --dump

    Extract URLs from Workspace

    A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls

    Automatic URL Extraction

    Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

    porch-pirate -s "coca-cola.com" --urls

    Show Collections in a Workspace

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

    Show Workspace Requests

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

    Show raw JSON

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

    Show Entity Information

    porch-pirate -w WORKSPACE_ID
    porch-pirate -c COLLECTION_ID
    porch-pirate -r REQUEST_ID
    porch-pirate -u USERNAME/TEAMNAME

    Convert Request to Curl

    Porch Pirate can build curl requests when provided with a request ID for easier testing.

    porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

    Use a proxy

    porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

    Using as a library

    Searching

    p = porchpirate()
    print(p.search('coca-cola.com'))

    Get Workspace Collections

    p = porchpirate()
    print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Dumping a Workspace

    # imports needed for a standalone run; the module name is assumed to
    # mirror the porchpirate class used in these examples
    import json
    from porchpirate import porchpirate

    p = porchpirate()
    collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
    for collection in collections['data']:
        requests = collection['requests']
        for r in requests:
            request_data = p.request(r['id'])
            print(request_data)

    Grabbing a Workspace's Globals

    p = porchpirate()
    print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
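
    Combining the calls above, a minimal sketch of a search-then-globals workflow (similar in spirit to the recursive_globals_from_search.py example, using the same imports as the dump example) could look like the following. The shape of the search response is an assumption here; the keys used below are illustrative and should be checked against the JSON the client actually returns.

    import json
    from porchpirate import porchpirate

    p = porchpirate()
    # the 'data' and 'id' keys below are assumed, not documented behaviour
    results = json.loads(p.search('coca-cola.com'))
    workspaces = results.get('data', []) if isinstance(results, dict) else results
    for entry in workspaces:
        workspace_id = entry.get('id') if isinstance(entry, dict) else None
        if workspace_id:
            print(p.workspace_globals(workspace_id))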

    Other Examples

    Other library usage examples can be located in the examples directory, which contains the following examples:

    • dump_workspace.py
    • format_search_results.py
    • format_workspace_collections.py
    • format_workspace_globals.py
    • get_collection.py
    • get_collections.py
    • get_profile.py
    • get_request.py
    • get_statistics.py
    • get_team.py
    • get_user.py
    • get_workspace.py
    • recursive_globals_from_search.py
    • request_to_curl.py
    • search.py
    • search_by_page.py
    • workspace_collections.py


    CloudGrappler - A purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure

    By: Zion3R


    Permiso: https://permiso.io
    Read our release blog: https://permiso.io/blog/cloudgrappler-a-powerful-open-source-threat-detection-tool-for-cloud-environments

    CloudGrappler is a purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure.


    Notes

    To optimize your utilization of CloudGrappler, we recommend using shorter time ranges when querying for results. This approach enhances efficiency and accelerates the retrieval of information, ensuring a more seamless experience with the tool.

    Required Packages

    pip3 install -r requirements.txt

    Cloning cloudgrep locally

    To clone the cloudgrep repository locally, run the clone.sh file. Alternatively, you can manually clone the repository into the same directory where CloudGrappler was cloned.

    chmod +x clone.sh
    ./clone.sh

    Input

    This tool offers a CLI (Command Line Interface). As such, here we review its use:

    Example 1 - Running the tool with default queries file

    Define the scanning scope inside data_sources.json file based on your cloud infrastructure configuration. The following example showcases a structured data_sources.json file for both AWS and Azure environments:

    Note

    Modifying the source inside the queries.json file to a wildcard character (*) will scan the corresponding query across both AWS and Azure environments.

    {
        "AWS": [
            {
                "bucket": "cloudtrail-logs-00000000-ffffff",
                "prefix": [
                    "testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03",
                    "testTrails/AWSLogs/00000000/CloudTrail/us-west-1/2024/03/04"
                ]
            },
            {
                "bucket": "aws-kosova-us-east-1-00000000"
            }
        ],
        "AZURE": [
            {
                "accountname": "logs",
                "container": [
                    "cloudgrappler"
                ]
            }
        ]
    }

    Run command

    python3 main.py

    Example 2 - Permiso Intel Use Case

    python3 main.py -p

    [+] Running GetFileDownloadUrls.*secrets_ for AWS 
    [+] Threat Actor: LUCR3
    [+] Severity: MEDIUM
    [+] Description: Review use of CloudShell. Permiso seldom witnesses use of CloudShell outside of known attackers. This however may be a part of your normal business use case.

    Example 3 - Generate report

    python3 main.py -p -jo

    reports
    └── json
        β”œβ”€β”€ AWS
        β”‚   └── 2024-03-04 01:01 AM
        β”‚       └── cloudtrail-logs-00000000-ffffff--
        β”‚           └── testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03
        β”‚               └── GetFileDownloadUrls.*secrets_.json
        └── AZURE
            └── 2024-03-04 01:01 AM
                └── logs
                    └── cloudgrappler
                        └── okta_key.json

    Example 4 - Filtering logs based on date or time

    python3 main.py -p -sd 2024-02-15 -ed 2024-02-16

    Example 5 - Manually adding queries and data source types

    python3 main.py -q "GetFileDownloadUrls.*secret", "UpdateAccessKey" -s '*'

    Example 6 - Running the tool with your own queries file

    python3 main.py -f new_file.json

    Running in your Cloud and Authentication cloudgrep

    AWS

    Your system will need access to the S3 bucket. For example, if you are running on your laptop, you will need to configure the AWS CLI. If you are running on an EC2, an Instance Profile is likely the best choice.

    If you run on an EC2 instance in the same region as the S3 bucket with a VPC endpoint for S3 you can avoid egress charges. You can authenticate in a number of ways.

    Azure

    The simplest way to authenticate with Azure is to first run:

    az login

    This will open a browser window and prompt you to login to Azure.



    Attack Surface Management vs. Vulnerability Management

    Attack surface management (ASM) and vulnerability management (VM) are often confused, and while they overlap, they’re not the same. The main difference between attack surface management and vulnerability management is in their scope: vulnerability management checks a list of known assets, while attack surface management assumes you have unknown assets and so begins with discovery. Let’s look at

    Sr2T - Converts Scanning Reports To A Tabular Format

    By: Zion3R


    Scanning reports to tabular (sr2t)

    This tool takes a scanning tool's output file, and converts it to a tabular format (CSV, XLSX, or text table). This tool can process output from the following tools:

    1. Nmap (XML);
    2. Nessus (XML);
    3. Nikto (XML);
    4. Dirble (XML);
    5. Testssl (JSON);
    6. Fortify (FPR).

    Rationale

    This tool can offer a human-readable, tabular format which you can tie to any observations you have drafted in your report. Why? Because then your reviewers can tell that you, the pentester, investigated all found open ports, and looked at all scanning reports.

    Dependencies

    1. argparse (dev-python/argparse);
    2. prettytable (dev-python/prettytable);
    3. python (dev-lang/python);
    4. xlsxwriter (dev-python/xlsxwriter).

    Install

    Using Pip:

    pip install --user sr2t

    Usage

    You can use sr2t in two ways:

    • When installed as a package, call the installed script: sr2t --help.
    • When Git cloned, call the package directly from the root of the Git repository: python -m src.sr2t --help
    $ sr2t --help
    usage: sr2t [-h] [--nessus NESSUS [NESSUS ...]] [--nmap NMAP [NMAP ...]]
    [--nikto NIKTO [NIKTO ...]] [--dirble DIRBLE [DIRBLE ...]]
    [--testssl TESTSSL [TESTSSL ...]]
    [--fortify FORTIFY [FORTIFY ...]] [--nmap-state NMAP_STATE]
    [--nmap-services] [--no-nessus-autoclassify]
    [--nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE]
    [--nessus-tls-file NESSUS_TLS_FILE]
    [--nessus-x509-file NESSUS_X509_FILE]
    [--nessus-http-file NESSUS_HTTP_FILE]
    [--nessus-smb-file NESSUS_SMB_FILE]
    [--nessus-rdp-file NESSUS_RDP_FILE]
    [--nessus-ssh-file NESSUS_SSH_FILE]
    [--nessus-min-severity NESSUS_MIN_SEVERITY]
    [--nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH]
    [--nessus-sort-by NESSUS_SORT_BY]
    [--nikto-description-width NIKTO_DESCRIPTION_WIDTH]
    [--fortify-details] [--annotation-width ANNOTATION_WIDTH]
    [-oC OUTPUT_CSV] [-oT OUTPUT_TXT] [-oX OUTPUT_XLSX]
    [-oA OUTPUT_ALL]

    Converting scanning reports to a tabular format

    optional arguments:
    -h, --help show this help message and exit
    --nmap-state NMAP_STATE
    Specify the desired state to filter (e.g.
    open|filtered).
    --nmap-services Specify to output a supplemental list of detected
    services.
    --no-nessus-autoclassify
    Specify to not autoclassify Nessus results.
    --nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE
    Specify to override a custom Nessus autoclassify YAML
    file.
    --nessus-tls-file NESSUS_TLS_FILE
    Specify to override a custom Nessus TLS findings YAML
    file.
    --nessus-x509-file NESSUS_X509_FILE
    Specify to override a custom Nessus X.509 findings
    YAML file.
    --nessus-http-file NESSUS_HTTP_FILE
    Specify to override a custom Nessus HTTP findings YAML
    file.
    --nessus-smb-file NESSUS_SMB_FILE
    Specify to override a custom Nessus SMB findings YAML
    file.
    --nessus-rdp-file NESSUS_RDP_FILE
    Specify to override a custom Nessus RDP findings YAML
    file.
    --nessus-ssh-file NESSUS_SSH_FILE
    Specify to override a custom Nessus SSH findings YAML
    file.
    --nessus-min-severity NESSUS_MIN_SEVERITY
    Specify the minimum severity to output (e.g. 1).
    --nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH
    Specify the width of the pluginid column (e.g. 30).
    --nessus-sort-by NESSUS_SORT_BY
    Specify to sort output by ip-address, port, plugin-id,
    plugin-name or severity.
    --nikto-description-width NIKTO_DESCRIPTION_WIDTH
    Specify the width of the description column (e.g. 30).
    --fortify-details Specify to include the Fortify abstracts, explanations
    and recommendations for each vulnerability.
    --annotation-width ANNOTATION_WIDTH
    Specify the width of the annotation column (e.g. 30).
    -oC OUTPUT_CSV, --output-csv OUTPUT_CSV
    Specify the output CSV basename (e.g. output).
    -oT OUTPUT_TXT, --output-txt OUTPUT_TXT
    Specify the output TXT file (e.g. output.txt).
    -oX OUTPUT_XLSX, --output-xlsx OUTPUT_XLSX
    Specify the output XLSX file (e.g. output.xlsx). Only
    for Nessus at the moment
    -oA OUTPUT_ALL, --output-all OUTPUT_ALL
    Specify the output basename to output to all formats
    (e.g. output).

    specify at least one:
    --nessus NESSUS [NESSUS ...]
    Specify (multiple) Nessus XML files.
    --nmap NMAP [NMAP ...]
    Specify (multiple) Nmap XML files.
    --nikto NIKTO [NIKTO ...]
    Specify (multiple) Nikto XML files.
    --dirble DIRBLE [DIRBLE ...]
    Specify (multiple) Dirble XML files.
    --testssl TESTSSL [TESTSSL ...]
    Specify (multiple) Testssl JSON files.
    --fortify FORTIFY [FORTIFY ...]
    Specify (multiple) HP Fortify FPR files.

    Example

    A few examples

    Nessus

    To produce an XLSX format:

    $ sr2t --nessus example/nessus.nessus --no-nessus-autoclassify -oX example.xlsx

    To produce a text tabular format to stdout:

    $ sr2t --nessus example/nessus.nessus
    +---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
    | host | port | plugin id | plugin name | severity | annotations |
    +---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
    | 192.168.142.4 | 3389 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
    | 192.168.142.4 | 443 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
    | 192.168.142.4 | 3389 | 18405 | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2 | X |
    | 192.168.142.4 | 3389 | 30218 | Terminal Services Encryption Level is not FIPS-140 Compliant | 1 | X |
    | 192.168.142.4 | 3389 | 57690 | Terminal Services Encryption Level is Medium or Low | 2 | X |
    | 192.168.142.4 | 3389 | 58453 | Terminal Services Doesn't Use Network Level Authentication (NLA) Only | 2 | X |
    | 192.168.142.4 | 3389 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
    | 192.168.142.4 | 443 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
    | 192.168.142.4 | 3389 | 35291 | SSL Certificate Signed Using Weak Hashing Algorithm | 2 | X |
    | 192.168.142.4 | 3389 | 57582 | SSL Self-Signed Certificate | 2 | X |
    | 192.168.142.4 | 3389 | 51192 | SSL Certificate Cannot Be Trusted | 2 | X |
    | 192.168.142.2 | 3389 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
    | 192.168.142.2 | 443 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
    | 192.168.142.2 | 3389 | 18405 | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2 | X |
    | 192.168.142.2 | 3389 | 30218 | Terminal Services Encryption Level is not FIPS-140 Compliant | 1 | X |
    | 192.168.142.2 | 3389 | 57690 | Terminal Services Encryption Level is Medium or Low | 2 | X |
    | 192.168.142.2 | 3389 | 58453 | Terminal Services Doesn't Use Network Level Authentication (NLA) Only | 2 | X |
    | 192.168.142.2 | 3389 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
    | 192.168.142.2 | 443 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
    | 192.168.142.2 | 3389 | 35291 | SSL Certificate Signed Using Weak Hashing Algorithm | 2 | X |
    | 192.168.142.2 | 3389 | 57582 | SSL Self-Signed Certificate | 2 | X |
    | 192.168.142.2 | 3389 | 51192 | SSL Certificate Cannot Be Trusted | 2 | X |
    | 192.168.142.2 | 445 | 57608 | SMB Signing not required | 2 | X |
    +---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+

    Or to output a CSV file:

    $ sr2t --nessus example/nessus.nessus -oC example
    $ cat example_nessus.csv
    host,port,plugin id,plugin name,severity,annotations
    192.168.142.4,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
    192.168.142.4,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
    192.168.142.4,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
    192.168.142.4,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
    192.168.142.4,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
    192.168.142.4,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
    192.168.142.4,3389,45411,SSL Certificate with Wrong Hostname,2,X
    192.168.142.4,443,45411,SSL Certificate with Wrong Hostname,2,X
    192.168.142.4,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
    192.168.142.4,3389,57582,SSL Self-Signed Certificate,2,X
    192.168.142.4,3389,51192,SSL Certificate Cannot Be Trusted,2,X
    192.168.142.2,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
    192.168.142.2,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
    192.168.142.2,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
    192.168.142.2,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
    192.168.142.2,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
    192.168.142.2,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
    192.168.142.2,3389,45411,SSL Certificate with Wrong Hostname,2,X
    192.168.142.2,443,45411,SSL Certificate with Wrong Hostname,2,X
    192.168.142.2,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
    192.168.142.2,3389,57582,SSL Self-Signed Certificate,2,X
    192.168.142.2,3389,51192,SSL Certificate Cannot Be Trusted,2,X
    192.168.142.2,445,57608,SMB Signing not required,2,X

    Nmap

    To produce an XLSX format:

    $ sr2t --nmap example/nmap.xml -oX example.xlsx

    To produce a text tabular format to stdout:

    $ sr2t --nmap example/nmap.xml --nmap-services
    Nmap TCP:
    +-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
    | | 53 | 80 | 88 | 135 | 139 | 389 | 445 | 3389 | 5800 | 5900 |
    +-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
    | 192.168.23.78 | X | | X | X | X | X | X | X | | |
    | 192.168.27.243 | | | | X | X | | X | X | X | X |
    | 192.168.99.164 | | | | X | X | | X | X | X | X |
    | 192.168.228.211 | | X | | | | | | | | |
    | 192.168.171.74 | | | | X | X | | X | X | X | X |
    +-----------------+----+----+----+-----+-----+-----+-----+------+------+------+

    Nmap Services:
    +-----------------+------+-------+---------------+-------+
    | ip address | port | proto | service | state |
    +-----------------+------+-------+---------------+-------+
    | 192.168.23.78 | 53 | tcp | domain | open |
    | 192.168.23.78 | 88 | tcp | kerberos-sec | open |
    | 192.168.23.78 | 135 | tcp | msrpc | open |
    | 192.168.23.78 | 139 | tcp | netbios-ssn | open |
    | 192.168.23.78 | 389 | tcp | ldap | open |
    | 192.168.23.78 | 445 | tcp | microsoft-ds | open |
    | 192.168.23.78 | 3389 | tcp | ms-wbt-server | open |
    | 192.168.27.243 | 135 | tcp | msrpc | open |
    | 192.168.27.243 | 139 | tcp | netbios-ssn | open |
    | 192.168.27.243 | 445 | tcp | microsoft-ds | open |
    | 192.168.27.243 | 3389 | tcp | ms-wbt-server | open |
    | 192.168.27.243 | 5800 | tcp | vnc-http | open |
    | 192.168.27.243 | 5900 | tcp | vnc | open |
    | 192.168.99.164 | 135 | tcp | msrpc | open |
    | 192.168.99.164 | 139 | tcp | netbios-ssn | open |
    | 192.168.99.164 | 445 | tcp | microsoft-ds | open |
    | 192.168.99.164 | 3389 | tcp | ms-wbt-server | open |
    | 192.168.99.164 | 5800 | tcp | vnc-http | open |
    | 192.168.99.164 | 5900 | tcp | vnc | open |
    | 192.168.228.211 | 80 | tcp | http | open |
    | 192.168.171.74 | 135 | tcp | msrpc | open |
    | 192.168.171.74 | 139 | tcp | netbios-ssn | open |
    | 192.168.171.74 | 445 | tcp | microsoft-ds | open |
    | 192.168.171.74 | 3389 | tcp | ms-wbt-server | open |
    | 192.168.171.74 | 5800 | tcp | vnc-http | open |
    | 192.168.171.74 | 5900 | tcp | vnc | open |
    +-----------------+------+-------+---------------+-------+

    Or to output a CSV file:

    $ sr2t --nmap example/nmap.xml -oC example
    $ cat example_nmap_tcp.csv
    ip address,53,80,88,135,139,389,445,3389,5800,5900
    192.168.23.78,X,,X,X,X,X,X,X,,
    192.168.27.243,,,,X,X,,X,X,X,X
    192.168.99.164,,,,X,X,,X,X,X,X
    192.168.228.211,,X,,,,,,,,
    192.168.171.74,,,,X,X,,X,X,X,X

    Nikto

    To produce an XLSX format:

    $ sr2t --nikto example/nikto.xml -oX example/nikto.xlsx

    To produce a text tabular format to stdout:

    $ sr2t --nikto example/nikto.xml
    +----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
    | target ip | target hostname | target port | description | annotations |
    +----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
    | 192.168.178.10 | 192.168.178.10 | 80 | The anti-clickjacking X-Frame-Options header is not present. | X |
    | 192.168.178.10 | 192.168.178.10 | 80 | The X-XSS-Protection header is not defined. This header can hint to the user | X |
    | | | | agent to protect against some forms of XSS | |
    | 192.168.178.10 | 192.168.178.10 | 80 | The X-Content-Type-Options header is not set. This could allow the user agent to | X |
    | | | | render the content of the site in a different fashion to the MIME type | |
    +----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+

    Or to output a CSV file:

    $ sr2t --nikto example/nikto.xml -oC example
    $ cat example_nikto.csv
    target ip,target hostname,target port,description,annotations
    192.168.178.10,192.168.178.10,80,The anti-clickjacking X-Frame-Options header is not present.,X
    192.168.178.10,192.168.178.10,80,"The X-XSS-Protection header is not defined. This header can hint to the user
    agent to protect against some forms of XSS",X
    192.168.178.10,192.168.178.10,80,"The X-Content-Type-Options header is not set. This could allow the user agent to
    render the content of the site in a different fashion to the MIME type",X

    Dirble

    To produce an XLSX format:

    $ sr2t --dirble example/dirble.xml -oX example.xlsx

    To produce a text tabular format to stdout:

    $ sr2t --dirble example/dirble.xml
    +-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
    | url | code | content len | is directory | is listable | found from listable | redirect url | annotations |
    +-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
    | http://example.org/flv | 0 | 0 | false | false | false | | X |
    | http://example.org/hire | 0 | 0 | false | false | false | | X |
    | http://example.org/phpSQLiteAdmin | 0 | 0 | false | false | false | | X |
    | http://example.org/print_order | 0 | 0 | false | false | false | | X |
    | http://example.org/putty | 0 | 0 | false | false | false | | X |
    | http://example.org/receipts | 0 | 0 | false | false | false | | X |
    +-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+

    Or to output a CSV file:

    $ sr2t --dirble example/dirble.xml -oC example
    $ cat example_dirble.csv
    url,code,content len,is directory,is listable,found from listable,redirect url,annotations
    http://example.org/flv,0,0,false,false,false,,X
    http://example.org/hire,0,0,false,false,false,,X
    http://example.org/phpSQLiteAdmin,0,0,false,false,false,,X
    http://example.org/print_order,0,0,false,false,false,,X
    http://example.org/putty,0,0,false,false,false,,X
    http://example.org/receipts,0,0,false,false,false,,X

    Testssl

    To produce an XLSX format:

    $ sr2t --testssl example/testssl.json -oX example.xlsx

    To produce a text tabular format to stdout:

    $ sr2t --testssl example/testssl.json
    +-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
    | ip address | port | BREACH | No HSTS | No PFS | No TLSv1.3 | RC4 | TLSv1.0 | TLSv1.1 | Wildcard |
    +-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
    | rc4-md5.badssl.com/104.154.89.105 | 443 | X | X | X | X | X | X | X | X |
    +-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+

    Or to output a CSV file:

    $ sr2t --testssl example/testssl.json -oC example
    $ cat example_testssl.csv
    ip address,port,BREACH,No HSTS,No PFS,No TLSv1.3,RC4,TLSv1.0,TLSv1.1,Wildcard
    rc4-md5.badssl.com/104.154.89.105,443,X,X,X,X,X,X,X,X

    Fortify

    To produce an XLSX format:

    $ sr2t --fortify example/fortify.fpr -oX example.xlsx

    To produce a text tabular format to stdout:

    $ sr2t --fortify example/fortify.fpr
    +--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
    | | type | subtype | severity | confidence | annotations |
    +--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
    | example1/web.xml:135:135 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
    | example2/web.xml:150:150 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
    | example3/web.xml:109:109 | J2EE Misconfiguration | Incomplete Error Handling | 3.0 | 5.0 | X |
    | example4/web.xml:108:108 | J2EE Misconfiguration | Incomplete Error Handling | 3.0 | 5.0 | X |
    | example5/web.xml:166:166 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
    | example6/web.xml:2:2 | J2EE Misconfiguration | Excessive Session Timeout | 3.0 | 5.0 | X |
    | example7/web.xml:162:162 | J2EE Misconfiguration | Missing Authentication Method | 3.0 | 5.0 | X |
    +--------------------------+-----------------------+-------------------------------+----------+------------+-------------+

    Or to output a CSV file:

    $ sr2t --fortify example/fortify.fpr -oC example
    $ cat example_fortify.csv
    ,type,subtype,severity,confidence,annotations
    example1/web.xml:135:135,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
    example2/web.xml:150:150,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
    example3/web.xml:109:109,J2EE Misconfiguration,Incomplete Error Handling,3.0,5.0,X
    example4/web.xml:108:108,J2EE Misconfiguration,Incomplete Error Handling,3.0,5.0,X
    example5/web.xml:166:166,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
    example6/web.xml:2:2,J2EE Misconfiguration,Excessive Session Timeout,3.0,5.0,X
    example7/web.xml:162:162,J2EE Misconfiguration,Missing Authentication Method,3.0,5.0,X

    Donate

    • WOW: WW4L3VCX11zWgKPX51TRw2RENe8STkbCkh5wTV4GuQnbZ1fKYmPFobZhEfS1G9G3vwjBhzioi3vx8JgBx2xLxe4N1gtJee8Mp


    GitHub Launches AI-Powered Autofix Tool to Assist Devs in Patching Security Flaws

    GitHub on Wednesday announced that it's making available a feature called code scanning autofix in public beta for all Advanced Security customers to provide targeted recommendations in an effort to avoid introducing new security issues. "Powered by GitHub Copilot and CodeQL, code scanning autofix covers more than 90% of alert types in JavaScript, Typescript, Java, and

    CVE-2024-23897 - Jenkins <= 2.441 & <= LTS 2.426.2 PoC And Scanner

    By: Zion3R


    Exploitation and scanning tool specifically designed for Jenkins versions <= 2.441 & <= LTS 2.426.2. It leverages CVE-2024-23897 to assess and exploit vulnerabilities in Jenkins instances.


    Usage

    Ensure you have the necessary permissions to scan and exploit the target systems. Use this tool responsibly and ethically.

    python CVE-2024-23897.py -t <target> -p <port> -f <file>

    or

    python CVE-2024-23897.py -i <input_file> -f <file>

    Parameters:

    - -t or --target: Specify the target IP(s). Supports single IP, IP range, comma-separated list, or CIDR block.
    - -i or --input-file: Path to input file containing hosts in the format of http://1.2.3.4:8080/ (one per line).
    - -o or --output-file: Export results to file (optional).
    - -p or --port: Specify the port number. Default is 8080 (optional).
    - -f or --file: Specify the file to read on the target system.


    Changelog

    [27th January 2024] - Feature Request
    • Added scanning/exploiting via input file with hosts (-i INPUT_FILE).
    • Added export to file (-o OUTPUT_FILE).

    [26th January 2024] - Initial Release
    • Initial release.

    Contributing

    Contributions are welcome. Please feel free to fork, modify, and make pull requests or report issues.


    Author

    Alexander Hagenah - URL - Twitter


    Disclaimer

    This tool is meant for educational and professional purposes only. Unauthorized scanning and exploiting of systems is illegal and unethical. Always ensure you have explicit permission to test and exploit any systems you target.



    RepoReaper - An Automated Tool Crafted To Meticulously Scan And Identify Exposed .Git Repositories Within Specified Domains And Their Subdomains

    By: Zion3R


    RepoReaper is a precision tool designed to automate the identification of exposed .git repositories across a list of domains and subdomains. By processing a user-provided text file with domain names, RepoReaper systematically checks each for publicly accessible .git files. This enables rapid assessment and protection against information leaks, making RepoReaper an essential resource for security teams and web developers.


    Features
    • Automated scanning of domains and subdomains for exposed .git repositories.
    • Streamlines the detection of sensitive data exposures.
    • User-friendly command-line interface.
    • Ideal for security audits and Bug Bounty.

    Installation

    Clone the repository and install the required dependencies:

    git clone https://github.com/YourUsername/RepoReaper.git
    cd RepoReaper
    pip install -r requirements.txt
    chmod +x RepoReaper.py

    Usage

    RepoReaper is executed from the command line and will prompt for the path to a file containing a list of domains or subdomains to be scanned.

    To start RepoReaper, simply run:

    ./RepoReaper.py
    or
    python3 RepoReaper.py

    Upon execution, RepoReaper will ask for the path to the file containing the domains or subdomains:

    Enter the path of the file containing domains

    Provide the path to your text file when prompted. The file should contain one domain or subdomain per line, like so:

    example.com
    subdomain.example.com
    anotherdomain.com

    RepoReaper will then proceed to scan the provided domains or subdomains for exposed .git repositories and report its findings.
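
    Conceptually, the check RepoReaper automates boils down to probing each host for a readable .git path. A minimal sketch of that idea (not RepoReaper's actual code; the /.git/HEAD probe shown is only one possible check) looks like this:

    import requests

    def git_exposed(domain: str) -> bool:
        # an exposed repository typically serves /.git/HEAD containing a "ref:" line
        url = f"https://{domain}/.git/HEAD"
        try:
            r = requests.get(url, timeout=5, allow_redirects=False)
        except requests.RequestException:
            return False
        return r.status_code == 200 and r.text.strip().startswith("ref:")

    for d in ["example.com", "subdomain.example.com", "anotherdomain.com"]:
        print(d, git_exposed(d))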


    Disclaimer

    This tool is intended for educational purposes and security research only. The user assumes all responsibility for any damages or misuse resulting from its use.



    SwaggerSpy - Automated OSINT On SwaggerHub

    By: Zion3R


    SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.


    What is Swagger?

    Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.


    About SwaggerHub

    SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.


    Why OSINT on SwaggerHub?

    Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:

    1. Developer Oversights: Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.

    2. Security Best Practices: OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.

    3. Preventing Data Leaks: By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.

    4. Risk Mitigation: Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.

    5. Compliance and Privacy: Many industries have stringent compliance requirements regarding the protection of sensitive data. OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.

    6. Educational Opportunities: Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.

    By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.


    How SwaggerSpy Works

    SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.
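
    As a rough illustration of that approach (not SwaggerSpy's actual code; the patterns and the helper below are made up for the example), fetching a published API definition and grepping it with a couple of secret-style regexes could look like this:

    import re
    import urllib.request

    PATTERNS = {
        "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
        "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S{16,}"),
    }

    def scan_definition(url: str) -> None:
        # download the Swagger/OpenAPI document as plain text and apply each pattern
        text = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        for name, pattern in PATTERNS.items():
            for match in pattern.findall(text):
                print(f"[{name}] {match}")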


    Getting Started

    To use SwaggerSpy, follow these steps:

    1. Installation: Clone the SwaggerSpy repository and install the required dependencies.

       git clone https://github.com/UndeadSec/SwaggerSpy.git
       cd SwaggerSpy
       pip install -r requirements.txt

    2. Usage: Run SwaggerSpy with the target search terms (more accurate with domains).

       python swaggerspy.py searchterm

    3. Results: SwaggerSpy will generate a report containing OSINT findings, including information about the API, endpoints, and secrets.

    Disclaimer

    SwaggerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.


    Contribution

    Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.


    About the Author

    SwaggerSpy is developed and maintained by Alisson Moretto (UndeadSec)

    I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.


    TODO

    Regular Expressions Enhancement
    • [ ] Review and improve existing regular expressions.
    • [ ] Ensure that regular expressions adhere to best practices.
    • [ ] Check for any potential optimizations in the regex patterns.
    • [ ] Test regular expressions with various input scenarios for accuracy.
    • [ ] Document any complex or non-trivial regex patterns for better understanding.
    • [ ] Explore opportunities to modularize or break down complex patterns.
    • [ ] Verify the regular expressions against the latest specifications or requirements.
    • [ ] Update documentation to reflect any changes made to the regular expressions.

    License

    SwaggerSpy is licensed under the MIT License. See the LICENSE file for details.


    Thanks

    Special thanks to @Liodeus for providing project inspiration through swaggerHole.



    AzSubEnum - Azure Service Subdomain Enumeration

    By: Zion3R


    AzSubEnum is a specialized subdomain enumeration tool tailored for Azure services. This tool is designed to meticulously search and identify subdomains associated with various Azure services. Through a combination of techniques and queries, AzSubEnum delves into the Azure domain structure, systematically probing and collecting subdomains related to a diverse range of Azure services.


    How it works?

    AzSubEnum operates by leveraging DNS resolution techniques and systematic permutation methods to unveil subdomains associated with Azure services such as Azure App Services, Storage Accounts, Azure Databases (including MSSQL, Cosmos DB, and Redis), Key Vaults, CDN, Email, SharePoint, Azure Container Registry, and more. Its functionality extends to comprehensively scanning different Azure service domains to identify associated subdomains.

    With this tool, users can conduct thorough subdomain enumeration within Azure environments, aiding security professionals, researchers, and administrators in gaining insights into the expansive landscape of Azure services and their corresponding subdomains.
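
    A stripped-down sketch of the permutation-plus-DNS-resolution idea described above (illustrative only; AzSubEnum itself covers many more services and runs checks concurrently) might look like this:

    import socket

    base = "retailcorp"                            # base name, as passed via -b
    permutations = ["", "dev", "prod", "backup"]   # illustrative permutation words
    suffixes = {
        "Storage (blob)": "blob.core.windows.net",
        "App Services": "azurewebsites.net",
        "Key Vault": "vault.azure.net",
    }

    for service, suffix in suffixes.items():
        for word in permutations:
            host = f"{base}{word}.{suffix}" if word else f"{base}.{suffix}"
            try:
                socket.gethostbyname(host)         # resolves only if the subdomain exists
                print(f"[+] {service}: {host}")
            except socket.gaierror:
                pass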


    Why did I create this?

    During my learning journey on Azure AD exploitation, I discovered that the Azure subdomain tool, Invoke-EnumerateAzureSubDomains from NetSPI, was unable to run on my Debian PowerShell. Consequently, I created a crude implementation of that tool in Python.


    Usage
    ➜  AzSubEnum git:(main) βœ— python3 azsubenum.py --help
    usage: azsubenum.py [-h] -b BASE [-v] [-t THREADS] [-p PERMUTATIONS]

    Azure Subdomain Enumeration

    options:
    -h, --help show this help message and exit
    -b BASE, --base BASE Base name to use
    -v, --verbose Show verbose output
    -t THREADS, --threads THREADS
    Number of threads for concurrent execution
    -p PERMUTATIONS, --permutations PERMUTATIONS
    File containing permutations

    Basic enumeration:

    python3 azsubenum.py -b retailcorp --thread 10

    Using permutation wordlists:

    python3 azsubenum.py -b retailcorp --thread 10 --permutation permutations.txt

    With verbose output:

    python3 azsubenum.py -b retailcorp --thread 10 --permutation permutations.txt --verbose




    SqliSniper - Advanced Time-based Blind SQL Injection Fuzzer For HTTP Headers

    By: Zion3R


    SqliSniper is a robust Python tool designed to detect time-based blind SQL injections in HTTP request headers. It enhances the security assessment process by rapidly scanning and identifying potential vulnerabilities using multi-threading, ensuring speed and efficiency. Unlike other scanners, SqliSniper is designed to eliminate false positives through response-time checks and to send alerts upon detection via its built-in Discord notification functionality.


    Key Features

    • Time-Based Blind SQL Injection Detection: Pinpoints potential SQL injection vulnerabilities in HTTP headers.
    • Multi-Threaded Scanning: Offers faster scanning capabilities through concurrent processing.
    • Discord Notifications: Sends alerts via Discord webhook for detected vulnerabilities.
    • False Positive Checks: Implements response time analysis to differentiate between true positives and false alarms.
    • Custom Payload and Headers Support: Allows users to define custom payloads and headers for targeted scanning.

    Installation

    git clone https://github.com/danialhalo/SqliSniper.git
    cd SqliSniper
    chmod +x sqlisniper.py
    pip3 install -r requirements.txt

    Usage

    This will display help for the tool. Here are all the options it supports.

    ubuntu:~/sqlisniper$ ./sqlisniper.py -h


    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•— β–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•—β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
    β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—
    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β–ˆβ–ˆβ•— β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•
    β•šβ•β•β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–„β–„ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β•šβ•β•β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β• β–ˆβ–ˆβ•”β•β•β• β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—
    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β•šβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘
    β•šβ•β•β•β•β•β•β• β•šβ•β•β–€β–€β•β• β•šβ•β•β•β•β•β•β•β•šβ•β• β•šβ•β•β•β•β•β•β•β•šβ•β• β•šβ•β•β•β•β•šβ•β•β•šβ•β• β•šβ•β•β•β•β•β•β•β•šβ•β• β•šβ•β•

    -: By Muhammad Danial :-

    usage: sqlisniper.py [-h] [-u URL] [-r URLS_FILE] [-p] [--proxy PROXY] [--payload PAYLOAD] [--single-payload SINGLE_PAYLOAD] [--discord DISCORD] [--headers HEADERS]
    [--threads THREADS]

    Detect SQL injection by sending malicious queries

    options:
    -h, --help show this help message and exit
    -u URL, --url URL Single URL for the target
    -r URLS_FILE, --urls_file URLS_FILE
    File containing a list of URLs
    -p, --pipeline Read from pipeline
    --proxy PROXY Proxy for intercepting requests (e.g., http://127.0.0.1:8080)
    --payload PAYLOAD File containing malicious payloads (default is payloads.txt)
    --single-payload SINGLE_PAYLOAD
    Single payload for testing
    --discord DISCORD Discord Webhook URL
    --headers HEADERS File containing headers (default is headers.txt)
    --threads THREADS Number of threads

    Running SqliSniper

    Single Url Scan

    The URL can be provided with the -u flag for a single-site scan.

    ./sqlisniper.py -u http://example.com

    File Input

    The -r flag allows SqliSniper to read a file containing multiple URLs for simultaneous scanning.

    ./sqlisniper.py -r url.txt

    Piping URLs

    SqliSniper can also work with pipeline input using the -p flag.

    cat url.txt | ./sqlisniper.py -p

    The pipeline feature facilitates seamless integration with other tools. For instance, you can utilize tools like subfinder and httpx, and then pipe their output to SqliSniper for mass scanning.

    subfinder -silent -d google.com | sort -u | httpx -silent | ./sqlisniper.py -p

    Scanning with custom payloads

    By default, SqliSniper uses the payloads.txt file. However, the --payload flag can be used to provide a custom payloads file.

    ./sqlisniper.py -u http://example.com --payload mssql_payloads.txt

    While using a custom payloads file, ensure that you substitute the sleep time with %__TIME_OUT__%. SqliSniper dynamically adjusts the sleep time iteratively to mitigate potential false positives. The payloads file should look like this:

    ubuntu:~/sqlisniper$ cat payloads.txt 
    0\"XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR\"Z
    "0"XOR(if(now()=sysdate()%2Csleep(%__TIME_OUT__%)%2C0))XOR"Z"
    0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z
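
    For context, the time-based check these payloads rely on works roughly as sketched below (illustrative only, not SqliSniper's implementation; the delay threshold and header handling are simplified):

    import time
    import requests

    def looks_injectable(url: str, header: str, payload_template: str, delay: int = 5) -> bool:
        # substitute the placeholder with a concrete sleep duration
        payload = payload_template.replace("%__TIME_OUT__%", str(delay))

        start = time.time()
        requests.get(url, timeout=30)                      # baseline request
        baseline = time.time() - start

        start = time.time()
        requests.get(url, headers={header: payload}, timeout=30 + delay)
        elapsed = time.time() - start

        # a response delayed by roughly the injected sleep suggests the payload executed
        return elapsed - baseline >= delay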

    Scanning with Single Payloads

    If you want to test with only a single payload, the --single-payload flag can be used. Make sure to replace the sleep time with %__TIME_OUT__%.

    ./sqlisniper.py -r url.txt --single-payload "0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z"

    Scanning Custom Header

    Headers are read from the headers.txt file. To scan a custom header, save the custom HTTP request header in the headers.txt file.

    ubuntu:~/sqlisniper$ cat headers.txt 
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
    X-Forwarded-For: 127.0.0.1

    Sending Discord Alert Notifications

    SqliSniper also offers Discord alert notifications, enhancing its functionality by providing real-time alerts through Discord webhooks. This feature proves invaluable during large-scale scans, allowing prompt notifications upon detection.

    ./sqlisniper.py -r url.txt --discord <web_hookurl>

    Multi-Threading

    Threads can be defined with the --threads flag.

     ./sqlisniper.py -r url.txt --threads 10

    Note: It is crucial to consider that employing a higher number of threads might lead to potential false positives or overlooking valid issues. Due to the nature of time-based SQL injection, it is recommended to use a lower thread count for more accurate detection.


    SqliSniper is made in Python with lots of <3 by @Muhammad Danial.



    BucketLoot - An Automated S3-compatible Bucket Inspector

    By: Zion3R


    BucketLoot is an automated S3-compatible Bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text.

    The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.

    BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens / Access Keys initially in order to run the scan. The tool will scrape a maximum of 1000 files that are returned in the XML response; if the storage bucket contains more than 1000 entries which the user would like to run the scanner on, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.

    Features

    Secret Scanning

    Scans for 80+ unique RegEx signatures that can help in uncovering secret exposures, tagged with their severity, from the misconfigured storage bucket. Users have the ability to modify or add their own signatures in the regexes.json file. If you believe you have any cool signatures which might be helpful for others too and could be flagged at scale, go ahead and make a PR!

    Sensitive File Checks

    Accidental sensitive file leakages are a big problem that affects the security posture of individuals and organisations. BucketLoot comes with an 80+ unique RegEx signatures list in vulnFiles.json which allows users to flag these sensitive files based on file names or extensions.

    Dig Mode

    Want to quickly check if any target website is using a misconfigured bucket that is leaking secrets or any other sensitive data? Dig Mode allows you to pass non-S3 targets and lets the tool scrape URLs from the response body for scanning.

    Asset Extraction

    Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs/Subdomains and Domains that could be present in an exposed storage bucket, enabling you to have a chance of discovering hidden endpoints, thus giving you an edge over the other traditional recon tools.

    Searching

    The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.

    To know more about our Attack Surface Management platform, check out NVADR.



    CATSploit - An Automated Penetration Testing Tool Using Cyber Attack Techniques Scoring

    By: Zion3R


    CATSploit is an automated penetration testing tool using the Cyber Attack Techniques Scoring (CATS) method that can be used without a pentester. Currently, pentesters implicitly make the selection of suitable attack techniques for the target systems to be attacked. CATSploit uses system configuration information such as OS, open ports, and software versions collected by a scanner and calculates score values for capture (eVc) and detectability (eVd) of each attack technique for the target system. By selecting the highest score values, it is possible to select the most appropriate attack technique for the target system without hack knack (a professional pentester's skill).

    CATSploit automatically performs penetration tests in the following sequence:

    1. Information gathering and prior information input. First, information about the target systems is gathered. CATSploit supports nmap and OpenVAS to gather information about target systems. CATSploit also supports prior information about target systems if you have it.

    2. Calculating score values of attack techniques. Using the information obtained in the previous phase and the attack techniques database, evaluation values of capture (eVc) and detectability (eVd) of each attack technique are calculated. For each target computer, the values of each attack technique are calculated.

    3. Selection of attack techniques using scores and creation of attack scenarios. Attack techniques are selected and attack scenarios created according to pre-defined policies. For example, for a policy that prioritizes being hard to detect, the attack techniques with the lowest eVd (detectability score) will be selected.

    4. Execution of the attack scenario. CATSploit executes the attack techniques according to the attack scenario constructed in the previous phase. CATSploit uses Metasploit as a framework and the Metasploit API to execute the actual attacks.


    Prerequisites

    CATSploit has the following prerequisites:

    • Kali Linux 2023.2a

    Installation

    Metasploit, Nmap and OpenVAS are assumed to be installed with the Kali distribution.

    Installing CATSploit

    To install the latest version of CATSploit, please use the following commands:

    Cloning and setup
    $ git clone https://github.com/catsploit/catsploit.git
    $ cd catsploit
    $ git clone https://github.com/catsploit/cats-helper.git
    $ sudo ./setup.sh

    Editing configuration file

    CATSploit is a server-client configuration, and the server reads the configuration JSON file at startup. In config.json, the following fields should be modified for your environment.

    • DBMS
      • dbname: database name created for CATSploit
      • user: username of PostgreSQL
  • password: password of PostgreSQL
      • host: If you are using a database on a remote host, specify the IP address of the host
    • SCENARIO
      • generator.maxscenarios: Maximum number of scenarios to calculate (*)
    • ATTACKPF
      • msfpassword: password of MSFRPCD
  • openvas.user: username of OpenVAS
  • openvas.password: password of OpenVAS
  • openvas.maxhosts: Maximum number of hosts to be tested at the same time (*)
  • openvas.maxchecks: Maximum number of test items to be tested at the same time (*)
    • ATTACKDB
  • attack_db_dir: Path to the folder where AttackSteps are stored

    (*) Adjust the number according to the specs of your machine.
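
For reference, the fields above map onto a nested JSON document. The snippet below writes an illustrative config.json with placeholder values; the exact key layout is an assumption, so check it against the sample configuration file shipped with CATSploit.

import json

# Illustrative placeholder values only; adjust for your environment.
config = {
    "DBMS": {
        "dbname": "catsploit",
        "user": "postgres",
        "password": "CHANGE_ME",
        "host": "localhost",
    },
    "SCENARIO": {"generator.maxscenarios": 30},
    "ATTACKPF": {
        "msfpassword": "CHANGE_ME",
        "openvas.user": "admin",
        "openvas.password": "CHANGE_ME",
        "openvas.maxhosts": 5,
        "openvas.maxchecks": 5,
    },
    "ATTACKDB": {"attack_db_dir": "/path/to/AttackSteps"},
}

with open("config.json", "w") as fh:
    json.dump(config, fh, indent=2)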

    Usage

    To start the server, execute the following command:

    $ python cats_server.py -c [CONFIG_FILE]

    Next, prepare another console, start the client program, and initiate a connection to the server.

    $ python catsploit.py -s [SOCKET_PATH]

    After successfully connecting to the server and initializing it, the session will start.

       _________  ___________       __      _ __
    / ____/ |/_ __/ ___/____ / /___ (_) /_
    / / / /| | / / \__ \/ __ \/ / __ \/ / __/
    / /___/ ___ |/ / ___/ / /_/ / / /_/ / / /_
    \____/_/ |_/_/ /____/ .___/_/\____/_/\__/
    /_/

    [*] Connecting to cats-server
    [*] Done.
    [*] Initializing server
    [*] Done.
    catsploit>

The client can execute a variety of commands. Each command can be run with the -h option to display the format of its arguments.

    usage: [-h] {host,scenario,scan,plan,attack,post,reset,help,exit} ...

    positional arguments:
    {host,scenario,scan,plan,attack,post,reset,help,exit}

    options:
    -h, --help show this help message and exit

    I've posted the commands and options below as well for reference.

    host list:
    show information about the hosts
    usage: host list [-h]
    options:
    -h, --help show this help message and exit

    host detail:
    show more information about one host
    usage: host detail [-h] host_id
    positional arguments:
    host_id ID of the host for which you want to show information
    options:
    -h, --help show this help message and exit

    scenario list:
    show information about the scenarios
    usage: scenario list [-h]
    options:
    -h, --help show this help message and exit

    scenario detail:
    show more information about one scenario
    usage: scenario detail [-h] scenario_id
    positional arguments:
    scenario_id ID of the scenario for which you want to show information
    options:
    -h, --help show this help message and exit

    scan:
    run network-scan and security-scan
usage: scan [-h] [--port PORT] target_host [target_host ...]
    positional arguments:
    target_host IP address to be scanned
    options:
    -h, --help show this help message and exit
    --port PORT ports to be scanned

    plan:
    planning attack scenarios
    usage: plan [-h] src_host_id dst_host_id
    positional arguments:
    src_host_id originating host
    dst_host_id target host
    options:
    -h, --help show this help message and exit

    attack:
    execute attack scenario
    usage: attack [-h] scenario_id
    positional arguments:
    scenario_id ID of the scenario you want to execute

    options:
    -h, --help show this help message and exit

    post find-secret:
    find confidential information files that can be performed on the pwned host
    usage: post find-secret [-h] host_id
    positional arguments:
    host_id ID of the host for which you want to find confidential information
options:
    -h, --help show this help message and exit

    reset:
    reset data on the server
    usage: reset [-h] {system} ...
    positional arguments:
    {system} reset system
    options:
    -h, --help show this help message and exit

    exit:
    exit CATSploit
    usage: exit [-h]
    options:
    -h, --help show this help message and exit

    Examples

In this example, we use CATSploit to scan the network, plan the attack scenario, and execute the attack.

    catsploit> scan 192.168.0.0/24
    Network Scanning ... 100%
    [*] Total 2 hosts were discovered.
    Vulnerability Scanning ... 100%
    [*] Total 14 vulnerabilities were discovered.
    catsploit> host list
    ┏━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┓
    ┃ hostID ┃ IP ┃ Hostname ┃ Platform ┃ Pwned ┃
┑━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━┩
    β”‚ attacker β”‚ 0.0.0.0 β”‚ kali β”‚ kali 2022.4 β”‚ True β”‚
    β”‚ h_exbiy6 β”‚ 192.168.0.10 β”‚ β”‚ Linux 3.10 - 4.11 β”‚ False β”‚
    β”‚ h_nhqyfq β”‚ 192.168.0.20 β”‚ β”‚ Microsoft Windows 7 SP1 β”‚ False β”‚
└──────────┴────────────────┴──────────┴──────────────────────────────────┴───────┘


    catsploit> host detail h_exbiy6
    ┏━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━┓
    ┃ hostID ┃ IP ┃ Hostname ┃ Platform ┃ Pwned ┃
    ┑━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━┩
    β”‚ h_exbiy6 β”‚ 192.168.0.10 β”‚ ubuntu β”‚ ubuntu 14.04 β”‚ False β”‚
└──────────┴──────────────┴──────────┴──────────────┴───────┘

    [IP address]
    ┏━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━┳━━━━━━━━━━━━┓
    ┃ ipv4 ┃ ipv4mask ┃ ipv6 ┃ ipv6prefix ┃
    ┑━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━╇━━━━━━━━━━━━┩
    β”‚ 192.168.0.10 β”‚ β”‚ β”‚ β”‚
└──────────────┴──────────┴──────┴────────────┘

    [Open ports]
    ┏━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
    ┃ ip ┃ proto ┃ port ┃ service ┃ product ┃ version ┃
    ┑━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 21 β”‚ ftp β”‚ ProFTPD β”‚ 1.3.5 β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ ssh β”‚ OpenSSH β”‚ 6.6.1p1 Ubuntu 2ubuntu2.10 β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ http β”‚ Apache httpd β”‚ 2.4.7 β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 445 β”‚ netbios-ssn β”‚ Samba smbd β”‚ 3.X - 4.X β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ ipp β”‚ CUPS β”‚ 1.7 β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

    [Vulnerabilities]
    ┏━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┓
    ┃ ip ┃ proto ┃ port ┃ vuln_name ┃ cve ┃
    ┑━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━┩
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 0 β”‚ TCP Timestamps Information Disclosure β”‚ N/A β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 21 β”‚ FTP Unencrypted Cleartext Login β”‚ N/A β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak MAC Algorithm(s) Supported (SSH) β”‚ N/A β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak Encryption Algorithm(s) Supported (SSH) β”‚ N/A β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak Host Key Algorithm(s) (SSH) β”‚ N/A β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak Key Exchange (KEX) Algorithm(s) Supported (SSH) β”‚ N/A β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Test HTTP dangerous methods β”‚ N/A β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Drupal Core SQLi Vulnerability (SA-CORE-2014-005) - Active Check β”‚ CVE-2014-3704 β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Drupal Coder RCE Vulnerability (SA-CONTRIB-2016-039) - Active Check β”‚ N/A β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Sensitive File Disclosure (HTTP) β”‚ N/A β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Unprotected Web App / Device Installers (HTTP) β”‚ N/A β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Cleartext Transmission of Sensitive Information via HTTP β”‚ N/A β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ jQuery < 1.9.0 XSS Vulnerability β”‚ CVE-2012-6708 β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ jQuery < 1.6.3 XSS Vulnerability β”‚ CVE-2011-4969 β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Drupal 7.0 Information Disclosure Vulnerability - Active Check β”‚ CVE-2011-3730 β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS β”‚ CVE-2016-2183 β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS β”‚ CVE-2016-6329 β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS β”‚ CVE-2020-12872 β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection β”‚ CVE-2011-3389 β”‚
    β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection β”‚ CVE-2015-0204 β”‚
└──────────────┴───────┴──────┴─────────────────────────────────────────────────────────────────────┴────────────────┘

    [Users]
    ┏━━━━━━━━━━━┳━━━━━━━┓
    ┃ user name ┃ group ┃
    ┑━━━━━━━━━━━╇━━━━━━━┩
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜


    catsploit> plan attacker h_exbiy6
    Planning attack scenario...100%
    [*] Done. 15 scenarios was planned.
    [*] To check each scenario, try 'scenario list' and/or 'scenario detail'.
    catsploit> scenario list
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
    ┃ scenario id ┃ src host ip ┃ target host ip ┃ eVc ┃ eVd ┃ steps ┃ first attack step ┃
┑━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
    β”‚ 3d3ivc β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 1.0 β”‚ 32.0 β”‚ 1 β”‚ exploit/multi/http/jenkins_s… β”‚
    β”‚ 5gnsvh β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 1.0 β”‚ 53.76 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
    β”‚ 6nlxyc β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 48.32 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
β”‚ 8jos4z β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.7 β”‚ 72.8 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
    β”‚ 8kmmts β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 32.0 β”‚ 1 β”‚ exploit/multi/elasticsearch/… β”‚
    β”‚ agjmma β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 24.0 β”‚ 1 β”‚ exploit/windows/http/managee… β”‚
    β”‚ joglhf β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 70.0 β”‚ 60.0 β”‚ 1 β”‚ auxiliary/scanner/ssh/ssh_lo… β”‚
    β”‚ rmgrof β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 100.0 β”‚ 32.0 β”‚ 1 β”‚ exploit/multi/http/drupal_dr… β”‚
    β”‚ xuowzk β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 24.0 β”‚ 1 β”‚ exploit/multi/http/struts_dm… β”‚
    β”‚ yttv51 β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.01 β”‚ 53.76 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
    β”‚ znv76x β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.01 β”‚ 53.76 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

    catsploit> scenario detail rmgrof
    ┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┓
    ┃ src host ip ┃ target host ip ┃ eVc ┃ eVd ┃
    ┑━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━┩
    β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 100.0 β”‚ 32.0 β”‚
└─────────────┴────────────────┴───────┴──────┘

    [Steps]
    ┏━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┓
    ┃ # ┃ step ┃ params ┃
┑━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━┩
    β”‚ 1 β”‚ exploit/multi/http/drupal_drupageddon β”‚ RHOSTS: 192.168.0.10 β”‚
    β”‚ β”‚ β”‚ LHOST: 192.168.10.100 β”‚
    β””β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜


    catsploit> attack rmgrof
    > ~> ~
    > Metasploit Console Log
    > ~
    > ~
    [+] Attack scenario succeeded!


    catsploit> exit
    Bye.

    Disclaimer

All information and code is provided solely for educational purposes and/or for testing your own systems.

    Contact

For any inquiries, please contact us at the following email address:

    catsploit@nk.MitsubishiElectric.co.jp



    BlueBunny - BLE Based C2 For Hak5's Bash Bunny

    By: Zion3R


A C2 solution that communicates directly over Bluetooth Low Energy with your Bash Bunny Mark II.
Send your Bash Bunny all the instructions it needs, just over the air.

    Overview

    Structure


    Installation & Start

1. Install required dependencies
pip install pygatt "pygatt[GATTTOOL]"

Make sure BlueZ is installed and gatttool is usable

sudo apt install bluez
2. Download BlueBunny's repository (and switch into the correct folder)
git clone https://github.com/90N45-d3v/BlueBunny
cd BlueBunny/C2
3. Start the C2 server
sudo python c2-server.py
4. Plug your Bash Bunny with the BlueBunny payload into the target machine (payload at: BlueBunny/payload.txt).
5. Visit your C2 server from your browser on localhost:1472 and connect your Bash Bunny (your Bash Bunny will light up green when it's ready to pair).

    Manual communication with the Bash Bunny through Python

    You can use BlueBunny's BLE backend and communicate with your Bash Bunny manually.

    Example Code

    # Import the backend (BlueBunny/C2/BunnyLE.py)
    import BunnyLE

    # Define the data to send
    data = "QUACK STRING I love my Bash Bunny"
# Define the type of the data to send ("cmd" or "payload") (payload data will be temporarily written to a file, to execute multiple commands like in a payload script file)
    d_type = "cmd"

    # Initialize BunnyLE
    BunnyLE.init()

    # Connect to your Bash Bunny
    bb = BunnyLE.connect()

    # Send the data and let it execute
    BunnyLE.send(bb, data, d_type)

    Troubleshooting

    Connecting your Bash Bunny doesn't work? Try the following instructions:

    • Try connecting a few more times
    • Check if your bluetooth adapter is available
    • Restart the system your C2 server is running on
    • Check if your Bash Bunny is running the BlueBunny payload properly
• How far away from your Bash Bunny are you? Is the environment (distance, interference, etc.) still suitable for typical BLE connections?

    Bugs within BlueZ

The Bluetooth stack used is well known, but also quite buggy. If starting the connection with your Bash Bunny does not work, it is probably a temporary problem caused by BlueZ. Below are some kinds of errors that can be caused by these temporary bugs. They usually disappear at the latest after rebooting the C2's operating system, so don't be surprised if they show up.

    • Timeout after 5.0 seconds
    • Unknown error while scanning for BLE devices

    Working on...

    • Remote shell access
    • BLE exfiltration channel
    • Improved connecting process

    Additional information

As mentioned, BlueZ, the basis for the Bluetooth part of BlueBunny, is somewhat bug-prone. If you encounter any non-temporary bugs when connecting to the Bash Bunny, or any other bugs or difficulties in the BlueBunny project, you are always welcome to contact me, whether with a problem, an idea or solution, or just some nice feedback.



    Scaling Security Operations with Automation

    In an increasingly complex and fast-paced digital landscape, organizations strive to protect themselves from various security threats. However, limited resources often hinder security teams when combatting these threats, making it difficult to keep up with the growing number of security incidents and alerts. Implementing automation throughout security operations helps security teams alleviate

    Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

    By: Zion3R


Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic" and flexible enough to capture false positives that still provided offensive value.

    Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:

    • Workspaces
    • Collections
    • Requests
    • Users
    • Teams

    Installation

    python3 -m pip install porch-pirate

    Using the client

The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords that typically maximize results. These methodologies can be found on our blog: Plundering Postman with Porch Pirate.

Porch Pirate supports the following arguments, which can be applied to collections, workspaces, or users.

    • --globals
    • --collections
    • --requests
    • --urls
    • --dump
    • --raw
    • --curl

    Simple Search

    porch-pirate -s "coca-cola.com"

    Get Workspace Globals

    By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

    Dump Workspace

    When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

    Automatic Search and Globals Extraction

    Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

    porch-pirate -s "shopify" --globals

    Automatic Search Dump

    Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

    porch-pirate -s "coca-cola.com" --dump

    Extract URLs from Workspace

    A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls

    Automatic URL Extraction

    Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

    porch-pirate -s "coca-cola.com" --urls

    Show Collections in a Workspace

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

    Show Workspace Requests

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

    Show raw JSON

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

    Show Entity Information

    porch-pirate -w WORKSPACE_ID
    porch-pirate -c COLLECTION_ID
    porch-pirate -r REQUEST_ID
    porch-pirate -u USERNAME/TEAMNAME

    Convert Request to Curl

    Porch Pirate can build curl requests when provided with a request ID for easier testing.

    porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

    Use a proxy

    porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

    Using as a library

    Searching

from porchpirate import porchpirate  # client class provided by the porch-pirate package

p = porchpirate()
print(p.search('coca-cola.com'))

    Get Workspace Collections

    p = porchpirate()
    print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Dumping a Workspace

import json

from porchpirate import porchpirate  # as above

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)

    Grabbing a Workspace's Globals

    p = porchpirate()
    print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Other Examples

    Other library usage examples can be located in the examples directory, which contains the following examples:

    • dump_workspace.py
    • format_search_results.py
    • format_workspace_collections.py
    • format_workspace_globals.py
    • get_collection.py
    • get_collections.py
    • get_profile.py
    • get_request.py
    • get_statistics.py
    • get_team.py
    • get_user.py
    • get_workspace.py
    • recursive_globals_from_search.py
    • request_to_curl.py
    • search.py
    • search_by_page.py
    • workspace_collections.py


    Iac-Scan-Runner - Service That Scans Your Infrastructure As Code For Common Vulnerabilities

    By: Zion3R


    Service that scans your Infrastructure as Code for common vulnerabilities.

• Tool name: IaC Scan Runner
• Docker image: xscanner/runner
• PyPI package: iac-scan-runner
• Documentation: docs
• Contact us: xopera@xlab.si

    Purpose and description

The IaC Scan Runner is a REST API service used to scan IaC (Infrastructure as Code) packages and perform various code checks in order to find possible vulnerabilities and improvements. Explore the docs for more info.

    Running

    This section explains how to run the REST API.

    Run with Docker

    You can run the REST API using a public xscanner/runner Docker image as follows:

    # run IaC Scan Runner REST API in a Docker container and 
    # navigate to localhost:8080/swagger or localhost:8080/redoc
    $ docker run --name iac-scan-runner -p 8080:80 xscanner/runner

    Or you can build the image locally and run it as follows:

    # build Docker container (it will take some time) 
    $ docker build -t iac-scan-runner .
    # run IaC Scan Runner REST API in a Docker container and
    # navigate to localhost:8080/swagger or localhost:8080/redoc
    $ docker run --name iac-scan-runner -p 8080:80 iac-scan-runner

    Run from CLI

    To run using the IaC Scan Runner CLI:

    # install the CLI
    $ python3 -m venv .venv && . .venv/bin/activate
    (.venv) $ pip install iac-scan-runner
    # print OpenAPI specification
    (.venv) $ iac-scan-runner openapi
    # install prerequisites
    (.venv) $ iac-scan-runner install
    # run IaC Scan Runner REST API
    (.venv) $ iac-scan-runner run

    Run from source

    To run locally from source:

    # Export env variables 
    export MONGODB_CONNECTION_STRING=mongodb://localhost:27017
    export SCAN_PERSISTENCE=enabled
    export USER_MANAGEMENT=enabled

    # Setup MongoDB
    $ docker run --name mongodb -p 27017:27017 mongo

    # install prerequisites
    $ python3 -m venv .venv && . .venv/bin/activate
    (.venv) $ pip install -r requirements.txt
    (.venv) $ ./install-checks.sh
    # run IaC Scan Runner REST API (add --reload flag to apply code changes on the way)
    (.venv) $ uvicorn src.iac_scan_runner.api:app

    Usage and examples

    This part will show one of the possible deployments and short examples on how to use API calls.

    Firstly we will clone the iac scan runner repository and run the API.

    $ git clone https://github.com/xlab-si/iac-scan-runner.git
    $ docker compose up

    After this is done you can use different API endpoints by calling localhost:8000. You can also navigate to localhost:8000/swagger or localhost:8000/redoc and test all the API endpoints there. In this example, we will use curl for calling API endpoints.

1. Let's create a project named test.
curl -X 'POST' \
'http://0.0.0.0:8000/project?creator_id=test' \
-H 'accept: application/json' \
-d ''

A project id will be returned to us. For this example the project id is 1e7b2a91-2896-40fd-8d53-83db56088026.

2. For example, let's say we want to run all checks except ansible-lint. Let's disable it.
curl -X 'PUT' \
'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/checks/ansible-lint/disable' \
-H 'accept: application/json'
3. Now that the project is configured, we can simply choose the files we want to scan and zip them. For IaC-Scan-Runner to work, files are expected to be compressed archives (usually zip files). In this case the response type will be json, but it is possible to change it to html. Please change YOUR.zip to the path of your file.
    curl -X 'POST' \
    'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/scan?scan_response_type=json' \
    -H 'accept: application/json' \
    -H 'Content-Type: multipart/form-data' \
    -F 'iac=@YOUR.zip;type=application/zip'

    That is it.

    Extending the scan workflow with new check tools

At a certain point, it might be necessary to include new check tools within the scan workflow in order to provide wider coverage of IaC standards and project types. Therefore, this subsection identifies and describes the sequence of steps required for that purpose. For now, the steps have to be performed manually as described, but it is planned to automate this procedure in the future via the API and to provide a user-friendly interface that helps the user import new tools into the catalogue that makes up the scan workflow. The following steps have to be taken in order to extend the scan workflow with a new tool.

Step 1 – Adding a tool-specific class to the checks directory. First, a new tool-specific Python class has to be added to the checks directory inside IaC Scan Runner's source code: iac-scan-runner/src/iac_scan_runner/checks/new_tool.py
The class of a new tool inherits the existing Check class, which provides a generalization of scan workflow tools. Moreover, it is necessary to implement the following methods:

1. def configure(self, config_filename: Optional[str], secret: Optional[SecretStr])
2. def run(self, directory: str)

The first one supplies the tool-specific parameters needed to set the tool up (such as passwords, client ids and tokens), while the second one specifies how the tool itself is invoked via API or CLI and how its raw output is returned (see the sketch below).
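
A minimal sketch of such a class is shown below. The import path of the Check base class, its constructor arguments, and returning the raw output as a string are assumptions based on the description above, so confirm the real signatures in the repository before using this.

# iac-scan-runner/src/iac_scan_runner/checks/new_tool.py (illustrative sketch, not part of the project)
import subprocess
from typing import Optional

from pydantic import SecretStr                      # SecretStr as referenced in configure() above
from iac_scan_runner.checks.check import Check      # assumed import path for the Check base class


class NewToolCheck(Check):
    def __init__(self):
        # name and description arguments are assumed from the Check generalization
        super().__init__("new_tool", "Example check tool")
        self._token = None

    def configure(self, config_filename: Optional[str], secret: Optional[SecretStr]):
        # store tool-specific parameters (passwords, client ids, tokens)
        if secret is not None:
            self._token = secret.get_secret_value()

    def run(self, directory: str) -> str:
        # invoke the tool via its CLI and return the raw output
        result = subprocess.run(["new_tool", "--scan", directory],
                                capture_output=True, text=True)
        return result.stdout + result.stderr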

Step 2 – Adding the check tool class instance within the ScanRunner constructor. Once the new class derived from Check is added to the IaC Scan Runner's source code, the source code of its main class, ScanRunner, also has to be modified. The tool-specific class first has to be imported, then a new check tool-specific class instance is created and added to the dictionary of IaC checks inside def init_checks(self).

A. Importing the check tool class:

    from iac_scan_runner.checks.tfsec import TfsecCheck

B. Creating a new instance of the check tool object inside init_checks:

    """Initiate predefined check objects"""
    new_tool = NewToolCheck()

C. Adding it to the self.iac_checks dictionary inside init_checks:

    self.iac_checks = {
        new_tool.name: new_tool,
        …
    }

Step 3 – Adding the check tool to the compatibility matrix inside the Compatibility class. Inside the file src/iac_scan_runner/compatibility.py, the dictionary that represents the compatibility matrix should be extended as well. There are two possible cases: a) a new file type is added as a key, together with a list of relevant tools as its value, or b) the new tool is added to the compatibility list of an existing file type.

    compatibility_matrix = {
        "new_type": ["new_tool_1", "new_tool_2"],
        …
        "old_typeK": ["tool_1", … "tool_N", "new_tool_3"]
    }

Step 4 – Providing support for result summarization. Finally, the last step in the sequence of required modifications is to modify the ResultsSummary class (src/iac_scan_runner/results_summary.py). Specifically, code has to be appended to its summarize_outcome method that looks for tool-specific strings identifying whether the check passed or failed. Inside the loop that traverses the compatible checks, the following if-else structure should be included for each new tool:

            if check == "new_tool":
    if outcome.find("Check pass string") > -1:
    self.outcomes[check]["status"] = "Passed"
    return "Passed"
    else:
    self.outcomes[check]["status"] = "Problems"
    return "Problems"

    Contact

    You can contact the xOpera team by sending an email to xopera@xlab.si.

    Acknowledgement

    This project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 101000162 (PIACERE).



    Deepsecrets - Secrets Scanner That Understands Code

    By: Zion3R


    Yet another tool - why?

    Existing tools don't really "understand" code. Instead, they mostly parse texts.

    DeepSecrets expands classic regex-search approaches with semantic analysis, dangerous variable detection, and more efficient usage of entropy analysis. Code understanding supports 500+ languages and formats and is achieved by lexing and parsing - techniques commonly used in SAST tools.

    DeepSecrets also introduces a new way to find secrets: just use hashed values of your known secrets and get them found plain in your code.

The under-the-hood story is told in the articles here: https://hackernoon.com/modernizing-secrets-scanning-part-1-the-problem


    Mini-FAQ after release :)

    Pff, is it still regex-based?

Yes and no. Of course, it uses regexes and finds typed secrets like any other tool. But language understanding (the lexing stage) and variable detection also use regexes under the hood. So regexes are an instrument, not a problem.

    Why don't you build true abstract syntax trees? It's academically more correct!

DeepSecrets tries to keep a balance between complexity and effectiveness. Building a true AST is a pretty complex thing and simply overkill for our specific task. So the tool still follows the generic SAST way of code analysis but optimizes the AST part using a different approach.

    I'd like to build my own semantic rules. How do I do that?

Only through the code at the moment. Formalizing the rules and moving them into a flexible, user-controlled ruleset is planned.

    I still have a question

    Feel free to communicate with the maintainer

    Installation

    From Github via pip

    $ pip install git+https://github.com/avito-tech/deepsecrets.git

    From PyPi

    $ pip install deepsecrets

    Scanning

    The easiest way:

    $ deepsecrets --target-dir /path/to/your/code --outfile report.json

    This will run a scan against /path/to/your/code using the default configuration:

    • Regex checks by the built-in ruleset
    • Semantic checks (variable detection, entropy checks)

    Report will be saved to report.json

    Fine-tuning

    Run deepsecrets --help for details.

    Basically, you can use your own ruleset by specifying --regex-rules. Paths to be excluded from scanning can be set via --excluded-paths.

    Building rulesets

    Regex

    The built-in ruleset for regex checks is located in /deepsecrets/rules/regexes.json. You're free to follow the format and create a custom ruleset.

    HashedSecret

    Example ruleset for regex checks is located in /deepsecrets/rules/regexes.json. You're free to follow the format and create a custom ruleset.

    Contributing

    Under the hood

    There are several core concepts:

    • File
    • Tokenizer
    • Token
    • Engine
    • Finding
    • ScanMode

    File

    Just a pythonic representation of a file with all needed methods for management.

    Tokenizer

A component able to break the content of a file into pieces - Tokens - by its own logic. There are three types of tokenizers available:

    • FullContentTokenizer: treats all content as a single token. Useful for regex-based search.
    • PerWordTokenizer: breaks given content by words and line breaks.
    • LexerTokenizer: uses language-specific smarts to break code into semantically correct pieces with additional context for each token.

    Token

    A string with additional information about its semantic role, corresponding file, and location inside it.

    Engine

    A component performing secrets search for a single token by its own logic. Returns a set of Findings. There are three engines available:

    • RegexEngine: checks tokens' values through a special ruleset
    • SemanticEngine: checks tokens produced by the LexerTokenizer using additional context - variable names and values
    • HashedSecretEngine: checks tokens' values by hashing them and trying to find coinciding hashes inside a special ruleset

    Finding

    This is a data structure representing a problem detected inside code. Features information about the precise location inside a file and a rule that found it.

    ScanMode

    This component is responsible for the scan process.

    • Defines the scope of analysis for a given work directory respecting exceptions
    • Allows declaring a PerFileAnalyzer - the method called against each file, returning a list of findings. The primary usage is to initialize necessary engines, tokenizers, and rulesets.
    • Runs the scan: a multiprocessing pool analyzes every file in parallel.
    • Prepares results for output and outputs them.

    The current implementation has a CliScanMode built by the user-provided config through the cli args.
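
To make the concepts above concrete, here is a toy illustration of how a Token, an Engine, and a Finding fit together. The class names mirror the concepts described in this section, but the code is not DeepSecrets' actual API.

import re
from dataclasses import dataclass

@dataclass
class Token:
    value: str   # the string itself
    file: str    # corresponding file
    line: int    # location inside it

@dataclass
class Finding:
    token: Token
    rule: str    # the rule that found it

class RegexEngine:
    """Checks a token's value against a ruleset of regexes (toy version)."""
    def __init__(self, rules):
        self.rules = {name: re.compile(rx) for name, rx in rules.items()}

    def search(self, token):
        return [Finding(token, name) for name, rx in self.rules.items() if rx.search(token.value)]

engine = RegexEngine({"aws-access-key-id": r"AKIA[0-9A-Z]{16}"})
token = Token('aws_key = "AKIAIOSFODNN7EXAMPLE"', "config.py", 3)
for finding in engine.search(token):
    print(finding.rule, finding.token.file, finding.token.line)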

    Local development

The project is meant to be developed using VSCode and the 'Remote Containers' feature.

    Steps:

    1. Clone the repository
    2. Open the cloned folder with VSCode
    3. Agree with 'Reopen in container'
    4. Wait until the container is built and necessary extensions are installed
    5. You're ready


    MemTracer - Memory Scaner

    By: Zion3R


MemTracer is a tool that offers live memory analysis capabilities, allowing digital forensics practitioners to discover and investigate stealthy attack traces hidden in memory. MemTracer is implemented in Python and aims to detect reflectively loaded native .NET Framework Dynamic-Link Libraries (DLLs). This is achieved by looking for the following abnormal memory region characteristics:

• The state of the memory pages in each memory region, specifically the MEM_COMMIT flag, which marks pages that are committed for virtual memory use.
• The type of pages in the region. The MEM_MAPPED page type indicates that the memory pages within the region are mapped into the view of a section.
• The memory protection of the region. PAGE_READWRITE protection indicates that the memory region is readable and writable, which happens when the Assembly.Load(byte[]) method is used to load a module into memory.
• The memory region contains a PE header.

The tool starts by scanning the running processes and analyzing the characteristics of their allocated memory regions to detect symptoms of reflective DLL loading. Suspicious memory regions identified as DLL modules are dumped for further analysis and investigation (a sketch of these region checks follows the list below).
Furthermore, the tool features the following options:

    • Dump the compromised process.
    • Export a JSON file that provides information about the compromised process, such as the process name, ID, path, size, and base address.
    • Search for specific loaded module by name.
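
The region checks listed above can be sketched with the Windows memory APIs via ctypes. This is an illustrative sketch, not MemTracer's actual code: it walks a process's regions with VirtualQueryEx and reports committed, mapped, read-write regions that start with an 'MZ' (PE) header. It only runs on Windows with sufficient privileges, and the PID at the bottom is just an example.

import ctypes
import ctypes.wintypes as wt

# Windows constants referenced in the characteristics above
MEM_COMMIT = 0x00001000
MEM_MAPPED = 0x00040000
PAGE_READWRITE = 0x04
PROCESS_QUERY_INFORMATION = 0x0400
PROCESS_VM_READ = 0x0010

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [("BaseAddress", ctypes.c_void_p),
                ("AllocationBase", ctypes.c_void_p),
                ("AllocationProtect", wt.DWORD),
                ("RegionSize", ctypes.c_size_t),
                ("State", wt.DWORD),
                ("Protect", wt.DWORD),
                ("Type", wt.DWORD)]

kernel32 = ctypes.windll.kernel32

def suspicious_regions(pid):
    """Yield (base, size) for committed, mapped, read-write regions that start with a PE header."""
    handle = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, pid)
    if not handle:
        return
    mbi, address = MEMORY_BASIC_INFORMATION(), 0
    while kernel32.VirtualQueryEx(handle, ctypes.c_void_p(address),
                                  ctypes.byref(mbi), ctypes.sizeof(mbi)):
        if mbi.State == MEM_COMMIT and mbi.Type == MEM_MAPPED and mbi.Protect == PAGE_READWRITE:
            header = ctypes.create_string_buffer(2)
            read = ctypes.c_size_t(0)
            if kernel32.ReadProcessMemory(handle, ctypes.c_void_p(mbi.BaseAddress), header,
                                          2, ctypes.byref(read)) and header.raw[:2] == b"MZ":
                yield mbi.BaseAddress, mbi.RegionSize  # candidate reflectively loaded module
        address += mbi.RegionSize
    kernel32.CloseHandle(handle)

for base, size in suspicious_regions(1234):  # 1234 is an example PID
    print(hex(base), size)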

    Example

    python.exe memScanner.py [-h] [-r] [-m MODULE]
    -h, --help show this help message and exit
    -r, --reflectiveScan Looking for reflective DLL loading
-m MODULE, --module MODULE Looking for a specific loaded DLL

The script needs administrator privileges in order to inspect all processes.



    YoroTrooper: Researchers Warn of Kazakhstan's Stealthy Cyber Espionage Group

    A relatively new threat actor known asΒ YoroTrooperΒ is likely made up of operators originating from Kazakhstan. The assessment, which comes from Cisco Talos, is based on their fluency in Kazakh and Russian, use of Tenge to pay for operating infrastructure, and very limited targeting of Kazakhstani entities, barring the government's Anti-Corruption Agency. "YoroTrooper attempts to obfuscate the

    DakshSCRA - Source Code Review Assist

    By: Zion3R


    Daksh SCRA (Source Code Review Assist) tool is built to enhance the efficiency of the source code review process, providing a well-structured and organized approach for code reviewers.

Rather than indiscriminately flagging everything as a potential issue, Daksh SCRA promotes thoughtful analysis, urging the investigation and confirmation of potential problems. This mitigates the scramble to tag every potential concern as a bug, cutting back on the confusion and time wasted on false positives.

What sets Daksh SCRA apart is this emphasis on avoiding unnecessary bug tagging: potential issues are investigated and confirmed before being tagged as bugs, which reduces the false positives that often consume valuable time and resources, and makes the code review process more productive and efficient.


    Debut

    Daksh SCRA was initially introduced during a source code review training session I conducted at Black Hat USA 2022 (August 6 - 9), where it was subtly presented to a specific audience. However, this introduction was carried out with a low-profile approach, avoiding any major announcements.

    While this tool was quietly published on GitHub after the 2022 training, its official public debut took place at Black Hat USA 2023 in Las Vegas.

    Features and Functionalities

    Distinctive Features (Multiple World’s First)

    • Identifies Areas of Interest in Source Code: Encourage focused investigation and confirmation rather than indiscriminately labeling everything as a bug.

    • Identifies Areas of Interest in File Paths (World’s First): Recognises patterns in file paths to pinpoint relevant sections for review.

    • Software-Level Reconnaissance to Identify Technologies Utilised: Identifies project technologies, enabling code reviewers to conduct precise scans with appropriate rules.

    • Automated Scientific Effort Estimation for Code Review (World’s First): Providing a measurable approach for estimating efforts required for a code review process.

The tool has progressed beyond its early stages and reached a functional state that is quite usable and delivers on its promised capabilities. Nevertheless, active enhancements are currently underway, and multiple new features and improvements are expected to be added in the upcoming months.

    Additionally, the tool offers the following functionalities:

• Options to use platform-specific rules for finding areas of interest
• Options to extend or add new rules for any new or existing languages
• Generates reports in text, HTML and PDF formats for inspection

    Refer to the wiki for the tool setup and usage details - https://github.com/coffeeandsecurity/DakshSCRA/wiki

    Feel free to contribute towards updating or adding new rules and future development.

    If you find any bugs, report them to d3basis.m0hanty@gmail.com.

    Tool Setup

    Pre-requisites

    Python3 and all the libraries listed in requirements.txt

    Setting up environment to run this tool

    1. Setup a virtual environment

    $ pip install virtualenv

    $ virtualenv -p python3 {name-of-virtual-env} // Create a virtualenv
    Example: virtualenv -p python3 venv

    $ source {name-of-virtual-env}/bin/activate // To activate virtual environment you just created
    Example: source venv/bin/activate

    After running the activate command you should see the name of your virtual env at the beginning of your terminal like this: (venv) $

    2. Ensure all required libraries are installed within the virtual environment

    You must run the below command after activating the virtual environment as mentioned in the previous steps.

    pip install -r requirements.txt

    Once the above step successfully installs all the required libraries, refer to the following tool usage commands to run the tool.

    Tool Usage

$ python3 dakshscra.py -h // To view available options and arguments

    usage: dakshscra.py [-h] [-r RULE_FILE] [-f FILE_TYPES] [-v] [-t TARGET_DIR] [-l {R,RF}] [-recon] [-estimate]

    options:
    -h, --help show this help message and exit
    -r RULE_FILE Specify platform specific rule name
    -f FILE_TYPES Specify file types to scan
    -v Specify verbosity level {'-v', '-vv', '-vvv'}
    -t TARGET_DIR Specify target directory path
    -l {R,RF}, --list {R,RF}
    List rules [R] OR rules and filetypes [RF]
    -recon Detects platform, framework and programming language used
    -estimate Estimate efforts required for code review

    Example Usage

    $ python3 dakshscra.py // To view tool usage along with examples

    Examples:
    # '-f' is optional. If not specified, it will default to the corresponding filetypes of the selected rule.
dakshscra.py -r php -t /source_dir_path

    # To override default settings, other filetypes can be specified with '-f' option.
dakshscra.py -r php -f dotnet -t /path_to_source_dir
dakshscra.py -r php -f custom -t /path_to_source_dir

    # Perform reconnaissance and rule based scanning if '-recon' used with '-r' option.
dakshscra.py -recon -r php -t /path_to_source_dir

    # Perform only reconnaissance if '-recon' used without the '-r' option.
dakshscra.py -recon -t /path_to_source_dir

    # Verbosity: '-v' is default, '-vvv' will display all rules check within each rule category.
dakshscra.py -r php -vv -t /path_to_source_dir


    Supported RULE_FILE: dotnet, java, php, javascript
Supported FILE_TYPES: dotnet, php, java, custom, allfiles

    Reports

    The tool generates reports in three formats: HTML, PDF, and TEXT. Although the HTML and PDF reports are still being improved, they are currently in a reasonably good state. With each subsequent iteration, these reports will continue to be refined and improved even further.

    Scanning (Areas of Security Concerns) Report

    HTML Report:
    • DakshSCRA/reports/html/report.html
    PDF Report:
    • DakshSCRA/reports/html/report.pdf
    RAW TEXT Based Reports:
    • Areas of Interest - Identified Patterns : DakshSCRA/reports/text/areas_of_interest.txt
    • Areas of Interest - Project Files: DakshSCRA/reports/text/filepaths_aoi.txt
    • Identified Project Files: DakshSCRA/runtime/filepaths.txt

    Reconnaissance (Recon) Report

    • Reconnaissance Summary: /reports/text/recon.txt

    Note: Currently, the reconnaissance report is created in a text format. However, in upcoming releases, the plan is to incorporate it into the vulnerability scanning report, which will be available in both HTML and PDF formats.

    Code Review Effort Estimation Report

    • Effort estimation report: /reports/html/estimation.html

    Note: At present, the effort estimation for the source code review is in its early stages. It is considered experimental and will be developed and refined through several iterations. Improvements will be made over multiple releases, as the formula and the concept are new and require time to be honed to achieve accuracy or reasonable estimation.

    Currently, the report is generated in HTML format. However, in future releases, there are plans to also provide it in PDF format.



    Golddigger - Search Files For Gold

    By: Zion3R


Gold Digger is a simple tool that helps quickly discover sensitive information in files recursively. It was originally written to assist in rapidly searching through files obtained during a penetration test.


    Installation

    Gold Digger requires Python3.

    virtualenv -p python3 .
    source bin/activate
    python dig.py --help

    Usage

    usage: dig.py [-h] [-e EXCLUDE] [-g GOLD] -d DIRECTORY [-r RECURSIVE] [-l LOG]

    optional arguments:
    -h, --help show this help message and exit
    -e EXCLUDE, --exclude EXCLUDE
    JSON file containing extension exclusions
    -g GOLD, --gold GOLD JSON file containing the gold to search for
    -d DIRECTORY, --directory DIRECTORY
    Directory to search for gold
    -r RECURSIVE, --recursive RECURSIVE
    Search directory recursively?
    -l LOG, --log LOG Log file to save output

    Example Usage

Gold Digger will recursively go through all folders and files in search of content matching items listed in the gold.json file. Additionally, you can leverage an exclusion file called exclusions.json for skipping files matching specific extensions. Provide the root folder via the --directory flag.

    An example structure could be:

    ~/Engagements/CustomerName/data/randomfiles/
    ~/Engagements/CustomerName/data/randomfiles2/
    ~/Engagements/CustomerName/data/code/

You would provide the following command to search all three directories:

    python dig.py --gold gold.json --exclude exclusions.json --directory ~/Engagements/CustomerName/data/ --log Customer_2022-123_gold.log
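
Conceptually, the search boils down to walking the directory tree and applying each "gold" regex to every file whose extension is not excluded. The sketch below shows that idea with hypothetical patterns and extensions; it is not Gold Digger's own code, and the real patterns live in gold.json and exclusions.json.

import os
import re

# Hypothetical patterns standing in for gold.json entries.
GOLD = {"aws-access-key-id": re.compile(r"AKIA[0-9A-Z]{16}"),
        "private-key-header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----")}
EXCLUDED_EXTENSIONS = {".png", ".jpg", ".zip"}  # stand-in for exclusions.json

def dig(root):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in EXCLUDED_EXTENSIONS:
                continue
            path = os.path.join(dirpath, name)
            try:
                text = open(path, "r", errors="ignore").read()
            except OSError:
                continue
            for label, pattern in GOLD.items():
                for match in pattern.finditer(text):
                    print(f"{path}: {label}: {match.group(0)}")

dig(os.path.expanduser("~/Engagements/CustomerName/data"))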

    Results

The tool will create a log file containing the scanning results. Due to the nature of using regular expressions, there may be numerous false positives. Despite this, the tool has been proven to increase productivity when processing thousands of files.

    Shout-outs

Shout out to @d1vious for releasing git-wild-hunt (https://github.com/d1vious/git-wild-hunt)! Most of the regexes in Gold Digger come from this amazing project.



    Kubestroyer - Kubernetes Exploitation Tool

    By: Zion3R

    Kubestroyer

Kubestroyer aims to exploit Kubernetes cluster misconfigurations and to be the Swiss army knife of your Kubernetes pentests.


    About The Project

Kubestroyer is a Golang exploitation tool that aims to take advantage of Kubernetes cluster misconfigurations.

The tool scans known Kubernetes ports that can be exposed, and exploits them.

    Getting Started

    To get a local copy up and running, follow these simple example steps.

    Prerequisites

    • Go 1.19
      wget https://go.dev/dl/go1.19.4.linux-amd64.tar.gz
      tar -C /usr/local -xzf go1.19.4.linux-amd64.tar.gz

    Installation

    Use prebuilt binary

    or

    Using go install command :

    $ go install github.com/Rolix44/Kubestroyer@latest

    or

    build from source:

    1. Clone the repo
      $ git clone https://github.com/Rolix44/Kubestroyer.git
    2. build the binary
      $ go build -o Kubestroyer cmd/kubestroyer/main.go 

    Usage

• -t / --target: Target (IP, domain or file). Mandatory. Example: -t localhost,127.0.0.1 or -t ./domain.txt
• --node-scan: Enable node port scanning (ports 30000 to 32767). Optional. Example: -t localhost --node-scan
• --anon-rce: RCE using Kubelet API anonymous auth. Optional. Example: -t localhost --anon-rce
• -x: Command to execute when using RCE (displays the service account token by default). Optional. Example: -t localhost --anon-rce -x "ls -al"

    Currently supported features

    • Target

      • List of multiple targets
      • Input file as target
    • Scanning

      • Known ports scan
      • Node port scan (30000 to 32767)
      • Port description
    • Vulnerabilities

  • Anon RCE on Kubelet
        • Choose command to execute

    Roadmap

    • Choose the pod for anon RCE
    • Etcd exploit
    • Kubelet read-only API parsing for information disclosure

    See the open issues for a full list of proposed features (and known issues).

    Contributing

    Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

    If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

    1. Fork the Project
    2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
    3. Commit your Changes (git commit -m 'Add some AmazingFeature')
    4. Push to the Branch (git push origin feature/AmazingFeature)
    5. Open a Pull Request

    License

    Distributed under the MIT License. See LICENSE.txt for more information.

    Contact

    Rolix - @Rolix_cy - rolixcy@protonmail.com

    Project Link: https://github.com/Rolix44/Kubestroyer



    Kubei - A Flexible Kubernetes Runtime Scanner


Kubei is a vulnerability scanning tool that allows users to get an accurate and immediate risk assessment of their Kubernetes clusters. Kubei scans all images that are being used in a Kubernetes cluster, including images of application pods and system pods. It doesn't scan entire image registries and doesn't require preliminary integration with CI/CD pipelines.
It is a configurable tool which allows users to define the scope of the scan (target namespaces), the speed, and the vulnerability severity levels of interest.
It provides a graphical UI which allows the viewer to identify where and what should be replaced in order to mitigate the discovered vulnerabilities.

    Prerequisites
    1. A Kubernetes cluster is ready, and kubeconfig ( ~/.kube/config) is properly configured for the target cluster.

    Required permissions
    1. Read secrets in cluster scope. This is required for getting image pull secrets for scanning private image repositories.
    2. List pods in cluster scope. This is required for calculating the target pods that need to be scanned.
    3. Create jobs in cluster scope. This is required for creating the jobs that will scan the target pods in their namespaces.

    Configurations
    The file deploy/kubei.yaml is used to deploy and configure Kubei on your cluster.
    1. Set the scan scope. Set the IGNORE_NAMESPACES env variable to ignore specific namespaces. Set TARGET_NAMESPACE to scan a specific namespace, or leave empty to scan all namespaces.
    2. Set the scan speed. Expedite scanning by running parallel scanners. Set the MAX_PARALLELISM env variable for the maximum number of simultaneous scanners.
    3. Set severity level threshold. Vulnerabilities with severity level higher than or equal to SEVERITY_THRESHOLD threshold will be reported. Supported levels are Unknown, Negligible, Low, Medium, High, Critical, Defcon1. Default is Medium.
    4. Set the delete job policy. Set the DELETE_JOB_POLICY env variable to define whether or not to delete completed scanner jobs. Supported values are:
      • All - All jobs will be deleted.
      • Successful - Only successful jobs will be deleted (default).
      • Never - Jobs will never be deleted.

    Usage
    1. Run the following command to deploy Kubei on the cluster:
      kubectl apply -f https://raw.githubusercontent.com/Portshift/kubei/master/deploy/kubei.yaml
    2. Run the following command to verify that Kubei is up and running:
      kubectl -n kubei get pod -lapp=kubei
3. Then, port-forward into the Kubei webapp via the following command:
      kubectl -n kubei port-forward $(kubectl -n kubei get pods -lapp=kubei -o jsonpath='{.items[0].metadata.name}') 8080
    4. In your browser, navigate to http://localhost:8080/view/ , and then click 'GO' to run a scan.
    5. To check the state of Kubei, and the progress of ongoing scans, run the following command:
      kubectl -n kubei logs $(kubectl -n kubei get pods -lapp=kubei -o jsonpath='{.items[0].metadata.name}')
    6. Refresh the page (http://localhost:8080/view/) to update the results.


    Running Kubei with an external HTTP/HTTPS proxy
    Uncomment and configure the proxy env variables for the Clair and Kubei deployments in deploy/kubei.yaml.

    Limitations
1. Supports Kubernetes Image Manifest V2, Schema 2 (https://docs.docker.com/registry/spec/manifest-v2-2/). It will fail to scan on earlier versions.
    2. The CVE database will update once a day.


    FirebaseExploiter - Vulnerability Discovery Tool That Discovers Firebase Database Which Are Open And Can Be Exploitable


FirebaseExploiter is a vulnerability discovery tool that finds Firebase databases which are open and exploitable. It is primarily built for mass hunting in bug bounties and for penetration testing.

    Features

    • Mass vulnerability scanning from list of hosts
    • Custom JSON data in exploit.json to upload during exploit
    • Custom URI path for exploit

    Usage

    This will display help for the CLI tool. Here are all the required arguments it supports.

    Installation

FirebaseExploiter was built using go1.19. Make sure you use the latest version of Go to install it successfully. Run the following command to install the latest version:

    go install -v github.com/securebinary/firebaseExploiter@latest

    Running FirebaseExploiter

    To scan a specific domain to check for Insecure Firebase DB.

    To exploit a Firebase DB to write your own JSON document in it.

    Create your own exploit.json file in proper JSON format to exploit vulnerable Firebase DBs.

    Checking the exploited URL to verify the vulnerability.

    Adding custom path for exploiting Firebase DBs.

    Mass scanning for Insecure Firebase Databases from list of target hosts.

    Exploiting vulnerable Firebase DBs from the list of target hosts.

    License

FirebaseExploiter is made with love by the SecureBinary team. Any tweaks / community contributions are welcome.


UDPX - Fast And Lightweight, UDPX Is A Single-Packet UDP Scanner Written In Go That Supports The Discovery Of Over 45 Services With The Ability To Add Custom Ones


    Fast and lightweight, UDPX is a single-packet UDP scanner written in Go that supports the discovery of over 45 services with the ability to add custom ones. It is easy to use and portable, and can be run on Linux, Mac OS, and Windows. Unlike internet-wide scanners like zgrab2 and zmap, UDPX is designed for portability and ease of use.

• It is fast. It can scan a whole /16 network in ~20 seconds for a single service.
• You don't need to install libpcap or any other dependencies.
• Can run on Linux, macOS, and Windows, or on your NetHunter if you build it for ARM.
• Customizable. You can add your own probes and test for even more protocols.
• Stores results in JSONL format.
• Also scans domain names.

    How it works

Scanning UDP ports is very different from scanning TCP: you may or may not get a result back from probing a UDP port, as UDP is a connectionless protocol. UDPX implements a single-packet approach: a protocol-specific packet is sent to the defined service (port) and the scanner waits for a response. The limit is set to 500 ms by default and can be changed with the -w flag. If the service sends a packet back within this time, it is certain that it is indeed listening on that port, and it is reported as open.

A typical technique is to send 0-byte UDP packets to each port on the target machine. If we receive an "ICMP Port Unreachable" message, the port is closed. If a UDP response is received to the probe (unusual), the port is open. If we get no response at all, the state is open or filtered, meaning that the port is either open or packet filters are blocking the communication. This method is not implemented in UDPX, as it adds no value (UDPX tests only for specific protocols).
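
The single-packet idea can be illustrated in a few lines of Python. This is a conceptual sketch rather than UDPX's Go implementation; the NTP request bytes are a standard mode-3 client packet used here as an example probe, and the 500 ms timeout mirrors the default described above.

import socket

def probe_udp(host, port, payload, timeout=0.5):
    """Send one protocol-specific datagram and report 'open' only if a reply arrives."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)                   # 500 ms default, like UDPX's -w flag
    try:
        s.sendto(payload, (host, port))
        data, _ = s.recvfrom(4096)
        return "open", data[:32]            # a response proves the service is listening
    except socket.timeout:
        return "open|filtered", b""         # no reply: cannot distinguish open from filtered
    finally:
        s.close()

# Minimal NTP client request (mode 3, version 3) as an example probe payload.
print(probe_udp("pool.ntp.org", 123, b"\x1b" + 47 * b"\x00"))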

    Usage

    Concurrency: By default, concurrency is set to 32 connections only (so you don't crash anything). If you have a lot of hosts to scan, you can set it to 128 or 256 connections. Based on your hardware, connection stability, and ulimit (on *nix), you can run 512 or more concurrent connections, but this is not recommended.

    To scan a single IP:

    udpx -t 1.1.1.1

    To scan a CIDR with maximum of 128 connections and timeout of 1000 ms:

    udpx -t 1.2.3.4/24 -c 128 -w 1000

    To scan targets from file with maximum of 128 connections for only specific service:

    udpx -tf targets.txt -c 128 -s ipmi

    Target can be:

    • IP address
    • CIDR
    • Domain

    IPv6 is supported.

If you want to store the results, use the flag -o [filename]. Output is in JSONL format, as can be seen below:

    {"address":"45.33.32.156","hostname":"scanme.nmap.org","port":123,"service":"ntp","response_data":"JAME6QAAAEoAAA56LU9vp+d2ZPwOYIyDxU8jS3GxUvM="}

    Options


    __ ______ ____ _ __
    / / / / __ \/ __ \ |/ /
    / / / / / / / /_/ / /
    / /_/ / /_/ / ____/ |
    \____/_____/_/ /_/|_|
    v1.0.2-beta, by @nullt3r

    Usage of ./udpx-linux-amd64:
    -c int
    Maximum number of concurrent connections (default 32)
    -nr
    Do not randomize addresses
    -o string
    Output file to write results
    -s string
    Scan only for a specific service, one of: ard, bacnet, bacnet_rpm, chargen, citrix, coap, db, db, digi1, digi2, digi3, dns, ipmi, ldap, mdns, memcache, mssql, nat_port_mapping, natpmp, netbios, netis, ntp, ntp_monlist, openvpn, pca_nq, pca_st, pcanywhere, portmap, qotd, rdp, ripv, sentinel, sip, snmp1, snmp2, snmp3, ssdp, tftp, ubiquiti, ubiquiti_discovery_v1, ubiquiti_discovery_v2, upnp, valve, wdbrpc, wsd, wsd_malformed, xdmcp, kerberos, ike
    -sp
    Show received packets (only first 32 bytes)
    -t string
    IP/CIDR to scan
    -tf string
    File containing IPs/CIDRs to scan
    -w int
    Maximum time to wait for a response (socket timeout) in ms (default 500)

    Building

    You can grab prebuilt binaries in the release section. If you want to build UDPX from source, follow these steps:

    From git:

    git clone https://github.com/nullt3r/udpx
    cd udpx
    go build ./cmd/udpx

    You can find the binary in the current directory.

    Or via go:

    go install -v github.com/nullt3r/udpx/cmd/udpx@latest

    After that, you can find the binary in $HOME/go/bin/udpx. If you want, move binary to /usr/local/bin/ so you can call it directly.

    Supported services

UDPX supports more than 45 services. The most interesting are:

    • ipmi
    • snmp
    • ike
    • tftp
    • openvpn
    • kerberos
    • ldap

    The complete list of supported services:

    • ard
    • bacnet
    • bacnet_rpm
    • chargen
    • citrix
    • coap
    • db
    • db
    • digi1
    • digi2
    • digi3
    • dns
    • ipmi
    • ldap
    • mdns
    • memcache
    • mssql
    • nat_port_mapping
    • natpmp
    • netbios
    • netis
    • ntp
    • ntp_monlist
    • openvpn
    • pca_nq
    • pca_st
    • pcanywhere
    • portmap
    • qotd
    • rdp
    • ripv
    • sentinel
    • sip
    • snmp1
    • snmp2
    • snmp3
    • ssdp
    • tftp
    • ubiquiti
    • ubiquiti_discovery_v1
    • ubiquiti_discovery_v2
    • upnp
    • valve
    • wdbrpc
    • wsd
    • wsd_malformed
    • xdmcp
    • kerberos
    • ike

    How to add your own probe?

Please send a feature request with the protocol name and port, and I will make it happen. Or add it on your own: the file pkg/probes/probes.go contains all available payloads. Specify the protocol name, port, and packet data (hex-encoded).

{
    Name:     "ike",
    Payloads: []string{"5b5e64c03e99b51100000000000000000110020000000000000001500000013400000001000000010000012801010008030000240101"},
    Port:     []int{500, 4500},
},

    Credits

    Disclaimer

    I am not responsible for any damages. You are responsible for your own actions. Scanning or attacking targets without prior mutual consent can be illegal.

    License

    UDPX is distributed under MIT License.



    CMLoot - Find Interesting Files Stored On (System Center) Configuration Manager (SCCM/CM) SMB Shares


CMLoot was created to easily find interesting files stored on System Center Configuration Manager (SCCM/CM) SMB shares. The shares are used for distributing software to Windows clients in Windows enterprise environments and can contain scripts/configuration files with passwords, certificates (pfx), etc. Most SCCM deployments are configured to allow all users to read the files on the shares; sometimes access is limited to computer accounts.

The Content Library of SCCM/CM has a "complex" (annoying) file structure which CMLoot will untangle for you: https://techcommunity.microsoft.com/t5/configuration-manager-archive/understanding-the-configuration-manager-content-library/ba-p/273349

Essentially, the DataLib folder contains .INI files named after the original filename plus .INI. Each .INI file contains a hash of the file, and the file itself is stored in the FileLib as <folder named after the first 4 characters of the hash>\<full hash>.
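For example (hypothetical hash value), a descriptor at DataLib\SC100001.1\x86\MigApp.xml.INI whose hash entry is D41D8CD98F00B204E9800998ECF8427E would point to the actual file stored at FileLib\D41D\D41D8CD98F00B204E9800998ECF8427E.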


    CM Access Accounts

It is possible to apply access control to packages in CM. This however only protects the folder for the file descriptor (DataLib), not the actual file itself. During inventory, CMLoot will record any package that it can't access (access denied) to the file _noaccess.txt. Invoke-CMLootHunt can then use this file to enumerate the actual files that the access control is trying to protect.

    OPSEC

Microsoft Defender for Endpoint (EDR) or other security mechanisms might trigger because the script parses a lot of files over SMB.

    HOWTO

Find CM servers by searching for them in Active Directory or by fetching this registry key on a workstation with System Center installed:

    (Get-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\SMS\DP -Name ManagementPoints).ManagementPoints

    There may be multiple CM servers deployed and they can contain different files so be sure to find all of them.

    Then you need to create an inventory file which is just a text file containing references to file descriptors (.INI). The following command will parse all .INI files on the SCCM server to create a list of files available.

    PS> Invoke-CMLootInventory -SCCMHost sccm01.domain.local -Outfile sccmfiles.txt

    Then use the inventory file created above to download files of interest:

Select files using GridView (mileage may vary with large inventory files):

    PS> Invoke-CMLootDownload -InventoryFile .\sccmfiles.txt -GridSelect

Download a single file by copying a line from the inventory text:

    PS> Invoke-CMLootDownload -SingleFile \\sccm\SCCMContentLib$\DataLib\SC100001.1\x86\MigApp.xml

    Download all files with a certain file extension:

    PS> Invoke-CMLootDownload -InventoryFile .\sccmfiles.txt -Extension ps1

Files will by default download to CMLootOut in the folder from which you execute the script; this can be changed with the -OutFolder parameter. Files are saved in the format <folder: file extension>\<first 4 chars of hash>_<original filename>.

    Hunt for files that CMLootInventory found inaccessible:

    Invoke-CMLootHunt -SCCMHost sccm -NoAccessFile sccmfiles_noaccess.txt

    Bulk extract MSI files:

    Invoke-CMLootExtract -Path .\CMLootOut\msi

    DEMO

    Run inventory, scanning available files:

    Select files using GridSelect:

    Download all extensions:

    Hunt "inaccessible" files and MSI extract:

    Author

    Tomas Rzepka / WithSecure



    Fingerprintx - Standalone Utility For Service Discovery On Open Ports!



fingerprintx is a utility similar to httpx that also supports fingerprinting services such as RDP, SSH, MySQL, PostgreSQL, Kafka, etc. fingerprintx can be used alongside port scanners like Naabu to fingerprint a set of ports identified during a port scan. For example, an engineer may wish to scan an IP range and then rapidly fingerprint the service running on all the discovered ports.


    Features

    • Fast fingerprinting of exposed services
    • Application layer service discovery
    • Plays nicely with other command line tools
    • Automatic metadata collection from identified services

    Supported Protocols:

    SERVICE TRANSPORT SERVICE TRANSPORT
    HTTP TCP REDIS TCP
    SSH TCP MQTT3 TCP
    MODBUS TCP VNC TCP
    TELNET TCP MQTT5 TCP
    FTP TCP RSYNC TCP
    SMB TCP RPC TCP
    DNS TCP OracleDB TCP
    SMTP TCP RTSP TCP
    PostgreSQL TCP MQTT5 TCP (TLS)
    RDP TCP HTTPS TCP (TLS)
    POP3 TCP SMTPS TCP (TLS)
    KAFKA TCP MQTT3 TCP (TLS)
    MySQL TCP RDP TCP (TLS)
    MSSQL TCP POP3S TCP (TLS)
    LDAP TCP LDAPS TCP (TLS)
    IMAP TCP IMAPS TCP (TLS)
    SNMP UDP Kafka TCP (TLS)
    OPENVPN UDP NETBIOS-NS UDP
    IPSEC UDP DHCP UDP
    STUN UDP NTP UDP
    DNS UDP

    Installation

    From Github

    go install github.com/praetorian-inc/fingerprintx/cmd/fingerprintx@latest

    From source (go version > 1.18)

    $ git clone git@github.com:praetorian-inc/fingerprintx.git
    $ cd fingerprintx

    # with go version > 1.18
    $ go build ./cmd/fingerprintx
    $ ./fingerprintx -h

    Docker

    $ git clone git@github.com:praetorian-inc/fingerprintx.git
    $ cd fingerprintx

    # build
    docker build -t fingerprintx .

    # and run it
    docker run --rm fingerprintx -h
    docker run --rm fingerprintx -t praetorian.com:80 --json

    Usage

    fingerprintx -h

    The -h option will display all of the supported flags for fingerprintx.

    Usage:
    fingerprintx [flags]
    TARGET SPECIFICATION:
    Requires a host and port number or ip and port number. The port is assumed to be open.
    HOST:PORT or IP:PORT
    EXAMPLES:
    fingerprintx -t praetorian.com:80
    fingerprintx -l input-file.txt
    fingerprintx --json -t praetorian.com:80,127.0.0.1:8000

    Flags:
    --csv output format in csv
    -f, --fast fast mode
    -h, --help help for fingerprintx
    --json output format in json
    -l, --list string input file containing targets
    -o, --output string output file
    -t, --targets strings target or comma separated target list
    -w, --timeout int timeout (milliseconds) (default 500)
    -U, --udp run UDP plugins
    -v, --verbose verbose mode

    The fast mode will only attempt to fingerprint the default service associated with that port for each target. For example, if praetorian.com:8443 is the input, only the https plugin would be run. If https is not running on praetorian.com:8443, there will be NO output. Why do this? It's a quick way to fingerprint most of the services in a large list of hosts (think the 80/20 rule).
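For example, a fast-mode check of a single target using the documented -t and -f flags might look like:

$ fingerprintx -t praetorian.com:8443 -f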

    Running Fingerprintx

    With one target:

    $ fingerprintx -t 127.0.0.1:8000
    http://127.0.0.1:8000

    By default, the output is in the form: SERVICE://HOST:PORT. To get more detailed service output specify JSON with the --json flag:

    $ fingerprintx -t 127.0.0.1:8000 --json
    {"ip":"127.0.0.1","port":8000,"service":"http","transport":"tcp","metadata":{"responseHeaders":{"Content-Length":["1154"],"Content-Type":["text/html; charset=utf-8"],"Date":["Mon, 19 Sep 2022 18:23:18 GMT"],"Server":["SimpleHTTP/0.6 Python/3.10.6"]},"status":"200 OK","statusCode":200,"version":"SimpleHTTP/0.6 Python/3.10.6"}}

    Pipe in output from another program (like naabu):

    $ naabu 127.0.0.1 -silent 2>/dev/null | fingerprintx
    http://127.0.0.1:8000
    ftp://127.0.0.1:21

    Run with an input file:

    $ cat input.txt | fingerprintx
    http://praetorian.com:80
    telnet://telehack.com:23

    # or if you prefer
    $ fingerprintx -l input.txt
    http://praetorian.com:80
    telnet://telehack.com:23

    With more metadata output:

    Why Not Nmap?

Nmap is the standard for network scanning. Why use fingerprintx instead of nmap? The two main reasons are:

    • fingerprintx works smarter, not harder: the first plugin run against a server with port 8080 open is the http plugin. The default service approach cuts down scanning time in the best case. Most of the time the services running on port 80, 443, 22 are http, https, and ssh -- so that's what fingerprintx checks first.
    • fingerprintx supports json output with the --json flag. Nmap supports numerous output options (normal, xml, grep), but they are often hard to parse and script appropriately. fingerprintx supports json output which eases integration with other tools in processing pipelines.

    Notes

    • Why do you have a third_party folder that imports the Go cryptography libraries?
      • Good question! The ssh fingerprinting module identifies the various cryptographic options supported by the server when collecting metadata during the handshake process. This makes use of a few unexported functions, which is why the Go cryptography libraries are included here with an export.go file.
    • Fingerprintx is not designed to identify open ports on the target systems and assumes that every target:port input is open. If none of the ports are open there will be no output as there are no services running on the targets.
    • How does this compare to zgrab2?
      • The zgrab2 command line usage (and use case) is slightly different than fingerprintx. For zgrab2, the protocol must be specified ahead of time: echo praetorian.com | zgrab2 http -p 8000, which assumes you already know what is running there. For fingerprintx, that is not the case: echo praetorian.com:8000 | fingerprintx. The "application layer" protocol scanning approach is very similar.

    Acknowledgements

    fingerprintx is the work of a lot of people, including our great intern class of 2022. Here is a list of contributors so far:



    PortexAnalyzerGUI - Graphical Interface For PortEx, A Portable Executable And Malware Analysis Library



    Graphical interface for PortEx, a Portable Executable and Malware Analysis Library

    Download

    Releases page

    Features

    • Header information from: MSDOS Header, Rich Header, COFF File Header, Optional Header, Section Table
    • PE Structures: Import Section, Resource Section, Export Section, Debug Section
    • Scanning for file format anomalies
    • Visualize file structure, local entropies and byteplot, and save it as PNG
    • Calculate Shannon Entropy, Imphash, MD5, SHA256, Rich and RichPV hash
    • Overlay and overlay signature scanning
    • Version information and manifest
    • Icon extraction and saving as PNG
    • Customized signature scanning via Yara. Internal signature scans using PEiD signatures and an internal filetype scanner.

    Supported OS and JRE

I test this program on Linux and Windows, but it should work on any OS with JRE version 9 or higher.

    Future

    I will be including more and more features that PortEx already provides.

    These features include among others:

    • customized visualization
    • extraction and conversion of icons to .ICO files
    • dumping of sections, overlay, resources
    • export reports to txt, json, csv

    Some of these features are already provided by PortexAnalyzer CLI version, which you can find here: PortexAnalyzer CLI

    Donations

    I develop PortEx and PortexAnalyzer as a hobby in my free time. If you like it, please consider buying me a coffee: https://ko-fi.com/struppigel

    Author

    Karsten Hahn

    Twitter: @Struppigel

    Mastodon: struppigel@infosec.exchange

    Youtube: MalwareAnalysisForHedgehogs

    License

    License



    Ator - Authentication Token Obtain and Replace Extender


    The plugin is created to help automated scanning using Burp in the following scenarios:

    1. Access/Refresh token
    2. Token replacement in XML,JSON body
    3. Token replacement in cookies
  The above can be achieved using complex macros, session rules, or a custom extender in some scenarios, but the rules become tricky and do not work in scenarios where the replacement text is JSON or XML.

    Key advantages:

1. In-memory token replacement avoids the duplicate login requests required by both custom extenders and macros/session rules.
    2. Easy UX to help obtain data (from response) and replace data (in requests) using regex. This helps achieve complex scenarios where response body is JSON, XML and the request text is also JSON, XML, form data etc.
3. Scan speed - the scan speed increases considerably because there are no extra login requests. There is a "Trigger Request", which is the error condition (it can also include a regex) that determines when the login requests are triggered. The error condition can include, for example, (response code = 401 and body contains "Unauthorized request").

    The inspiration for the plugin is from ExtendedMacro plugin: https://github.com/FrUh/ExtendedMacro

    Blogs

1. Authentication Token Obtain and Replace (ATOR) Burp Plugin - Part 1 - Single step login sequence and single token extraction
2. Authentication Token Obtain and Replace (ATOR) Burp Plugin - Part 2 - Multi step login sequence and multiple extraction

    Getting Started

    1. Install Java and Maven
    2. Clone the repository
    3. Run the "mvn clean install" command in cloned repo of where pom.xml is present
    4. Take the generated jar with dependencies from the target folder

    Prerequisites

1. Make sure a Java environment is set up on your machine.
2. Configure Burp Suite to listen to the proxy traffic.
3. Configure the Java environment from the Extender tab of Burp.

For usage with a test application, install the Tiredful testing application from https://github.com/payatu/Tiredful-API.

    Steps

    1. Identify the request which provides the error
    2. Identify the Error Pattern (details in section below)
    3. Obtain the data from the response using regex (see sample regex values)
    4. Replace this data on the request (use same regex as step 3 along with the variable name)

    Error Pattern:

    Totally there are 4 different ways you can specify the error condition.

    1. Status Code: 401, 400
    2. Error in Body: give any text from the body content (Example: Access token expired)
    3. Error in Header: give any text from header(Example: Unauthorized)
    4. Free Form: use this to give multiple condition (st=400 && bd=Access token expired || hd=Unauthorized)

    Regex with samples

    1. Use Authorization: Bearer \w* to match Authorization: Bearer AXXFFPPNSUSSUSSNSUSN
    2. Use Authorization: Bearer ([\w+_-.]*) to match Authorization: Bearer AXX-F+FPPNS.USSUSSNSUSN

    Break down into end to end tests

    1. Finding the Invalid request:
      • http://HOST:PORT/api/v1/exams/MQ==/ with invalid Bearer token.
    2. Identifying Error Pattern:
      • The above request will give you 401, here error condition is Status Code = 401
    3. Match regex with request data
      • Authorization: Bearer \w* - this regex will match access token which is passed.
    4. Replacement - How to replace
  • Replace the matched text (step 3 regex) with the extracted value (extraction configuration is discussed below; say the variable name is "token")
      • Authorization: Bearer token - extracted token will be replaced.

    Usage with test application

    Idea : Record the Tiredful application request in BURP, configure the ATOR extender, check whether token is replaced by ATOR.

    1. Open the testing application in browser which you configured with BURP
      • Generate a token from http://HOST:PORT/handle-user-token/
  • Send the request http://HOST:PORT/api/v1/exams/MQ==/ passing the Authorization Bearer token (get it from the above step)
    2. Add the ATOR jar file as a extender in BURP
3. Right click on the request (/handle-user-token) in Proxy history and send it to the Authentication Token Obtain and Replace extender
4. Add a new entry in the Extraction configuration by selecting the "access_token" value and give it the name "token" (it may be any name). Note: for this application, one request is enough to generate a token. A token can also be generated after multiple requests.
    5. TRIGGER CONDITION:
      • Macro steps will get executed if the condition is matched.
      • After execution of steps, replace the incoming request by taking values from "Pattern" and "Replacement Area" if specified.
      • For our testing,
        • Error condition is 401(Status Code)
        • Pattern is "Authorization: Bearer \w*" (Specify the regex Pattern how you want to replace with extraction values)
    • Replacement Area is "Authorization: Bearer <NAME which you gave in STEP 4>"
      • Click on "Add" Button.
6. For this example, one replacement is enough to make the incoming request valid, but you can add multiple replacements for a single condition.
    7. Hit the invalid request from Repeater and check the req/res flows in either FLOW/Logger++
  • An invalid Bearer token (http://HOST:PORT/api/v1/exams/MQ==/) from Repeater results in a 401 response.
  • The extender will match this condition, start running the recorded steps, and extract the "access_token".
  • It replaces the access token (from step ii) in the actual request (from Repeater), making this invalid request valid.
  • In the Repeater console, you see a 200 OK response.
    8. Do the Step7 again and check the flow
  • This time the extender will not invoke the steps because the existing token is valid, so it uses that.

    Built With

    • SWING - Used to add panel

    Contributing

    Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

    Versioning

    v1.0

    Authors

    Authors from Synopsys - Ashwath Reddy (@ka3hk) and Manikandan Rajappan (@rmanikdn)

    License

    This software is released by Synopsys under the MIT license.

    Acknowledgments

    • https://github.com/FrUh/ExtendedMacro ExtendedMacro was a great start - we have modified the UI to handle more complex scenarios. We have also fixed bugs and improved speed by replacing tokens in memory.

    Demo Video

    ATOR v2.0.0:

The UI panel was split into 4 different configurations. Check out the code from v2 or use the executable from v2/bin.

1. Error Condition - Find the error condition req/res and add a trigger condition [can be status code / text in body content / text in header]. Multiple conditions can also be added.
2. Obtain Token - Find all the req/res needed to get the token. It can be a single request or multiple requests (do the replacement accordingly).
3. Error Condition Replacement - Mark the trigger condition and also mark the place in the request where the replacement needs to be made (map the extraction).
4. Preview - Dry run it before configuring it for a scan.


    Faraday - Open Source Vulnerability Management Platform


    Security has two difficult tasks: designing smart ways of getting new information, and keeping track of findings to improve remediation efforts. With Faraday, you may focus on discovering vulnerabilities while we help you with the rest. Just use it in your terminal and get your work organized on the run. Faraday was made to let you take advantage of the available tools in the community in a truly multiuser way.

Faraday aggregates and normalizes the data you load, allowing you to explore it in different visualizations that are useful to managers and analysts alike.

    To read about the latest features check out the release notes!


    Install

    Docker-compose

The easiest way to get Faraday up and running is using our docker-compose:

    $ wget https://raw.githubusercontent.com/infobyte/faraday/master/docker-compose.yaml
    $ docker-compose up

    If you want to customize, you can find an example config over here Link

    Docker

    You need to have a Postgres running first.

     $ docker run \
    -v $HOME/.faraday:/home/faraday/.faraday \
    -p 5985:5985 \
    -e PGSQL_USER='postgres_user' \
    -e PGSQL_HOST='postgres_ip' \
    -e PGSQL_PASSWD='postgres_password' \
    -e PGSQL_DBNAME='postgres_db_name' \
    faradaysec/faraday:latest

    PyPi

    $ pip3 install faradaysec
    $ faraday-manage initdb
    $ faraday-server

    Binary Packages (Debian/RPM)

    You can find the installers on our releases page

    $ sudo apt install faraday-server_amd64.deb
    # Add your user to the faraday group
    $ faraday-manage initdb
    $ sudo systemctl start faraday-server

    Add your user to the faraday group and then run

    Source

    If you want to run directly from this repo, this is the recommended way:

    $ pip3 install virtualenv
    $ virtualenv faraday_venv
    $ source faraday_venv/bin/activate
    $ git clone git@github.com:infobyte/faraday.git
    $ pip3 install .
    $ faraday-manage initdb
    $ faraday-server

    Check out our documentation for detailed information on how to install Faraday in all of our supported platforms

    For more information about the installation, check out our Installation Wiki.

You can now go to http://localhost:5985 in your browser and log in with "faraday" as the username and the password given by the installation process.

    Getting Started

    Learn about Faraday holistic approach and rethink vulnerability management.

    Integrating faraday in your CI/CD

    Setup Bandit and OWASP ZAP in your pipeline

    Setup Bandit, OWASP ZAP and SonarQube in your pipeline

    Faraday Cli

Faraday-cli is our command line client, providing easy access to the console tools and letting you work in Faraday directly from the terminal!

This is a great way to automate scans, integrate Faraday into your CI/CD pipeline, or just get metrics from a workspace.

    $ pip3 install faraday-cli

    Check our faraday-cli repo

    Check out the documentation here.


    Faraday Agents

    Faraday Agents Dispatcher is a tool that gives Faraday the ability to run scanners or tools remotely from the platform and get the results.

    Plugins

Connect your favorite tools through our plugins. Right now there are more than 80 supported tools, among which you will find:


    Missing your favorite one? Create a Pull Request!

    There are two Plugin types:

    Console plugins which interpret the output of the tools you execute.

$ faraday-cli tool run \"nmap www.exampledomain.com\"
💻 Processing Nmap command
Starting Nmap 7.80 ( https://nmap.org ) at 2021-02-22 14:13 -03
Nmap scan report for www.exampledomain.com (10.196.205.130)
Host is up (0.17s latency).
rDNS record for 10.196.205.130: 10.196.205.130.bc.example.com
Not shown: 996 filtered ports
PORT STATE SERVICE
80/tcp open http
443/tcp open https
2222/tcp open EtherNetIP-1
3306/tcp closed mysql
Nmap done: 1 IP address (1 host up) scanned in 11.12 seconds
⬆ Sending data to workspace: test
✔ Done

Report plugins allow you to import previously generated artifacts like XMLs and JSONs.

    faraday-cli tool report burp.xml

Creating custom plugins is super easy. Read more about Plugins.

    API

You can access our API directly; check out the documentation here.

    Links



    DNSrecon-gui - DNSrecon Tool With GUI For Kali Linux


DNSRecon is a DNS scanning and enumeration tool written in Python which allows you to perform different tasks, such as enumeration of standard records for a defined domain (A, NS, SOA, and MX) and top-level domain expansion for a defined domain.

    With this graph-oriented user interface, the different records of a specific domain can be observed, classified and ordered in a simple way.

    Install

    git clone https://github.com/micro-joan/dnsrecon-gui
    cd dnsrecon-gui/
    chmod +x run.sh
    ./run.sh

After executing the application launcher, you need to have all the components installed; the launcher will check them one by one, and for any component that is not installed it will show you the command you must enter to install it:


    Use

When the tool is ready to use, the installer will give you a URL that you must open in the browser in a private window. Every time you do a search you will have to open a new private window or clear your browser cache to refresh the graphics.

    Tools

Service | Functions | Status
Text2MindMap | Convert text to mindmap | ✅ Free
dnsenum | DNS information gathering | ✅ Free

My website: https://microjoan.com
My blog: https://darkhacking.es/
Buy me a coffee: https://www.buymeacoffee.com/microjoan

    DISCLAIMER

This toolkit contains materials that can be potentially damaging or dangerous for social media. Refer to the laws in your province/country before accessing, using, or in any other way utilizing this in a wrong way.

    This Tool is made for educational purposes only. Do not attempt to violate the law with anything contained here. If this is your intention, then Get the hell out of here!


    EAST - Extensible Azure Security Tool - Documentation


Extensible Azure Security Tool (later referred to as EAST) is a tool for assessing Azure and, to some extent, Azure AD security controls. The primary use case of EAST is security data collection for evaluation in Azure assessments. This information (JSON content) can then be used in various reporting tools, which we use to further correlate and investigate the data.


    This tool is licensed under MIT license.




    Collaborators

    Release notes

    • Preview branch introduced

      Changes:

  • Installation now accounts for use of Azure Cloud Shell's updated version in regards to dependencies (Cloud Shell now has Node.js v16 installed)

      • Checking of Databricks cluster types as per advisory

    • Audits Databricks clusters for potential privilege elevation - this control typically requires permissions on the Databricks cluster
  • Content.json now has key- and content-based sorting. This enables delta checks with git diff HEAD^1 ¹, as content.json has a predetermined order of results

  ¹ Word of caution: if you want to check deltas of content.json, then content.json will need to be "unignored" in .gitignore, exposing results to any upstream you might have configured.

  Use this feature with caution, and ensure you don't have a public upstream set for the branch you are using this feature for

• Change of programming patterns to avoid possible race conditions with larger datasets. This mostly involves changing var to let in for await-style loops


    Important

Current status of the tool is beta.
• Fixes, updates etc. are done on a "best effort" basis, with no guarantee of time, or quality, of the possible fix applied
• We do some additional tuning before using EAST in our daily work, such as applying various run and environment restrictions, besides familiarizing ourselves with the environment in question. Thus we currently recommend that EAST is run only in test environments, and with read-only permissions.
  • All the calls in the service are largely to Azure cloud IPs, so it should work well in hardened environments where outbound IP restrictions are applied. This reduces the risk of this tool containing malicious packages which could "phone home" without also having C2 in Azure.
    • Essentially, running it in read-only mode reduces a lot of the risk associated with possibly compromised NPM packages (Google "compromised NPM")
    • Bugs etc.: you can protect your environment against certain mistakes in this code by running the tool with reader-only permissions
• A lot of the code is "AS IS": meaning it has been serving only the purpose of creating a certain result; a lot of cleaning up and modularizing remains to be done
• There are no tests at the moment, apart from certain manual checks that are run after changes to main.js and various more advanced controls.
• The control descriptions at this stage are not the final product, so giving feedback on them, while appreciated, is not the focus of the tooling at this stage
• As the name implies, we use it as a tool to evaluate environments. It is not meant to be run unmonitored for the time being, and should not be run in any internet-exposed service that accepts incoming connections.
• Documentation could be described as incomplete for the time being
• EAST is mostly focused on PaaS resources, as most of our Azure assessments focus on this resource type
• No input sanitization is performed on launch params, as it is always assumed that the input of these parameters is controlled. That being said, the tool uses exec() extensively - while I have not reviewed all paths, I believe that achieving shellcode execution is trivial. This tool does not assume hostile input, so the recommendation is that you don't paste launch arguments into the command line without reviewing them first.

    Tool operation

Dependencies

To reduce the amount of code, the following dependencies are used for operation and aesthetics (kudos to the maintainers of these fantastic packages):

Package | Aesthetics/Operation | License
axios | ✅ | MIT
yargs | ✅ | MIT
jsonwebtoken | ✅ | MIT
chalk | ✅ | MIT
js-beautify | ✅ | MIT

Other dependencies for running the tool (if you are planning to run this in Azure Cloud Shell you don't need to install Azure CLI):

    • This tool does not include or distribute Microsoft Azure CLI, but rather uses it when it has been installed on the source system (Such as Azure Cloud Shell, which is primary platform for running EAST)

    Azure Cloud Shell (BASH) or applicable Linux Distro / WSL

• AZ CLI (AZCLI USE) - install: curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
• Node.js runtime 14 (Node.js runtime for EAST) - install with NVM

    Controls

    EAST provides three categories of controls: Basic, Advanced, and Composite

    The machine readable control looks like this, regardless of the type (Basic/advanced/composite):

    {
    "name": "fn-sql-2079",
    "resource": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourcegroups/rg-fn-2079/providers/microsoft.web/sites/fn-sql-2079",
    "controlId": "managedIdentity",
    "isHealthy": true,
    "id": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourcegroups/rg-fn-2079/providers/microsoft.web/sites/fn-sql-2079",
    "Description": "\r\n Ensure The Service calls downstream resources with managed identity",
    "metadata": {
    "principalId": {
    "type": "SystemAssigned",
    "tenantId": "033794f5-7c9d-4e98-923d-7b49114b7ac3",
    "principalId": "cb073f1e-03bc-440e-874d-5ed3ce6df7f8"
    },
    "roles": [{
    "role": [{
    "properties": {
    "roleDefinitionId": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c",
    "principalId": "cb073f1e-03b c-440e-874d-5ed3ce6df7f8",
    "scope": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourceGroups/RG-FN-2079",
    "createdOn": "2021-12-27T06:03:09.7052113Z",
    "updatedOn": "2021-12-27T06:03:09.7052113Z",
    "createdBy": "4257db31-3f22-4c0f-bd57-26cbbd4f5851",
    "updatedBy": "4257db31-3f22-4c0f-bd57-26cbbd4f5851"
    },
    "id": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourceGroups/RG-FN-2079/providers/Microsoft.Authorization/roleAssignments/ada69f21-790e-4386-9f47-c9b8a8c15674",
    "type": "Microsoft.Authorization/roleAssignments",
    "name": "ada69f21-790e-4386-9f47-c9b8a8c15674",
    "RoleName": "Contributor"
    }]
    }]
    },
    "category": "Access"
    },

    Basic

Basic controls include checks on the initial ARM object for simple "toggle on/off" boolean settings of the service in question.

    Example: Azure Container Registry adminUser

    acr_adminUser


    Portal EAST

if (item.properties?.adminUserEnabled == false) { returnObject.isHealthy = true }

    Advanced

Advanced controls include checks beyond the initial ARM object, often invoking new requests to get further information about the resource in scope and its relation to other services.

    Example: Role Assignments

Besides checking the role assignments of the subscription, an additional check is performed via Azure AD Conditional Access reporting for MFA, verifying that privileged accounts are not protected only by passwords (SPNs with client secrets).

    Example: Azure Data Factory

    ADF_pipeLineRuns

Azure Data Factory pipeline mapping combines pipelines -> activities -> data targets together and then checks for secrets leaked in the logs via the run history of said activities.



    Composite

Composite controls combine two or more control results from the pipeline in order to form one or more new controls. Using composites solves two use cases for EAST:

1. You can't guarantee the order in which control results are returned in the pipeline
2. You need to return more than one control result from a single check

    Example: composite_resolve_alerts

    1. Get alerts from Microsoft Cloud Defender on subscription check
    2. Form new controls per resourceProvider for alerts

    Reporting

EAST is not focused on providing automated report generation; it mostly provides JSON files with control and evaluation status. The idea is to use separate tooling to create reports, which is fairly trivial to automate via markdown creation scripts and tools such as Pandoc.

• While the focus is not on reporting, this repo includes example automation for report creation with pandoc to ease reading of the results in a single document format.

While this tool does not distribute pandoc, pandoc can be used for creating the reports; thus the following citation is added: https://github.com/jgm/pandoc/blob/master/CITATION.cff

    cff-version: 1.2.0
    title: Pandoc
    message: "If you use this software, please cite it as below."
    type: software
    url: "https://github.com/jgm/pandoc"
    authors:
    - given-names: John
    family-names: MacFarlane
    email: jgm@berkeley.edu
    orcid: 'https://orcid.org/0000-0003-2557-9090'
    - given-names: Albert
    family-names: Krewinkel
    email: tarleb+github@moltkeplatz.de
    orcid: '0000-0002-9455-0796'
    - given-names: Jesse
    family-names: Rosenthal
    email: jrosenthal@jhu.edu

    Running EAST scan

This part has a guide on how to run EAST either in Bash on Linux or in Bash on Azure Cloud Shell (obviously Cloud Shell is Linux too, but it does not require that you have your own Linux box).

⚠️ If you are running the tool in Cloud Shell, you might need to reapply some of the installations again, as Cloud Shell does not persist various session settings.

    Fire and forget prerequisites on cloud shell

    curl -o- https://raw.githubusercontent.com/jsa2/EAST/preview/sh/initForuse.sh | bash;

    jump to next step

Detailed Prerequisites (this is if you opted not to do the "fire and forget" version)

    Prerequisites

    git clone https://github.com/jsa2/EAST --branch preview
    cd EAST;
    npm install

    Pandoc installation on cloud shell

    # Get pandoc for reporting (first time only)
    wget "https://github.com/jgm/pandoc/releases/download/2.17.1.1/pandoc-2.17.1.1-linux-amd64.tar.gz";
    tar xvzf "pandoc-2.17.1.1-linux-amd64.tar.gz" --strip-components 1 -C ~

    Installing pandoc on distros that support APT

    # Get pandoc for reporting (first time only)
    sudo apt install pandoc

    Login Az CLI and run the scan

    # Relogin is required to ensure token cache is placed on session on cloud shell

    az account clear
    az login

    #
    cd EAST
    # replace the subid below with your subscription ID!
    subId=6193053b-408b-44d0-b20f-4e29b9b67394
    #
    node ./plugins/main.js --batch=10 --nativescope=true --roleAssignments=true --helperTexts=true --checkAad=true --scanAuditLogs --composites --subInclude=$subId


    Generate report

    cd EAST; node templatehelpers/eastReports.js --doc

    • If you want to include all Azure Security Benchmark results in the report

    cd EAST; node templatehelpers/eastReports.js --doc --asb

    Export report from cloud shell

    pandoc -s fullReport2.md -f markdown -t docx --reference-doc=pandoc-template.docx -o fullReport2.docx


Azure DevOps (experimental): there is an Azure DevOps control for dumping pipeline logs. You can specify the control run with the following example:

    node ./plugins/main.js --batch=10 --nativescope=true --roleAssignments=true --helperTexts=true --checkAad=true --scanAuditLogs --composites --subInclude=$subId --azdevops "organizationName"

    Licensing

    Community use

    • Share relevant controls across multiple environments as community effort

    Company use

• Companies have the possibility to develop company-specific controls which apply to company-specific work. Companies can then control these implementations by deciding to share, or not share, them based on the operating principles of that company.

    Non IPR components

• Code logic and functions are under the MIT license. Since code logic and functions are already based on open-source components and vendor APIs, it does not make sense to restrict something that is already based on open source.

If you use this tool as part of your commercial effort, we only require that you follow the very relaxed terms of the MIT license.

    Read license

    Tool operation documentation

    Principles

    AZCLI USE

    Existing tooling enhanced with Node.js runtime

Use the rich and maintained context of Microsoft Azure CLI login and commands with a Node.js control flow which supplies enhanced REST requests and maps results to a schema.

    • This tool does not include or distribute Microsoft Azure CLI, but rather uses it when it has been installed on the source system (Such as Azure Cloud Shell, which is primary platform for running EAST)

    Speedup

View more details

✅ Using the Node.js runtime as orchestrator utilises Node's asynchronous nature, allowing batching of requests. Batching of requests utilizes the full extent of Azure Resource Manager's incredible speed.

✅ Compared to running requests one by one, the speedup can be up to 10x when Node executes a batch of requests instead of a single request at a time.

    Parameters reference

    Example:

    node ./plugins/main.js --batch=10 --nativescope --roleAssignments --helperTexts=true --checkAad --scanAuditLogs --composites --shuffle --clearTokens
Param | Description | Default if undefined
--nativescope | Currently mandatory parameter | no values
--shuffle | Can help with throttling. Shuffles the resource list to reduce the possibility of the resource provider throttling threshold being met | no values
--roleAssignments | Checks controls as per microsoft.authorization | no values
--includeRG | Checks controls with ResourceGroups as per microsoft.authorization | no values
--checkAad | Checks controls as per microsoft.azureactivedirectory | no values
--subInclude | Defines subscription scope; requires subscription ID(s) | no default; if not defined, will enumerate all subscriptions the user has access to
--namespace | Text filter which matches the full, or part of, the resource ID (example: /microsoft.storage/storageaccounts matches all storage accounts in the scope) | optional parameter
--notIncludes | Text filter which matches the full, or part of, the resource ID; matches are excluded (example: /microsoft.storage/storageaccounts excludes all storage accounts in the scope) | optional parameter
--batch | Size of batch interval between throttles | 5
--wait | Size of batch interval between throttles | 1500
--scanAuditLogs | Optional parameter. When defined in hours, toggles Azure Activity Log scanning for weak authentication events (defined in: scanAuditLogs) | 24h
--composites | Read composites | no values
--clearTokens | Clears tokens in the session folder; use this if you get authorization errors, or have just changed to another az login account (use az account clear if you want to clear the AZ CLI cache too) | no values
--tag | Filter all results at the end based on a single tag, e.g. --tag=svc=aksdev | no values
--ignorePreCheck | Use this option when used with browser delegated tokens | no values
--helperTexts | Will append text descriptions from general to manual controls | no values
--reprocess | Will update results in the existing content.json. Useful for incremental runs | no values
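For example (a sketch only; adjust the flags and subscription ID to your environment), a run scoped to storage accounts in a single subscription could combine the documented parameters like this:

node ./plugins/main.js --batch=10 --nativescope=true --subInclude=$subId --namespace=/microsoft.storage/storageaccounts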

    Parameters reference for example report:

    node templatehelpers/eastReports.js --asb 
Param | Description | Default if undefined
--asb | Gets all ASB results available to users | no values
--policy | Gets all Policy results available to users | no values
--doc | Prints pandoc string for export to console | no values

    (Highly experimental) Running in restricted environments where only browser use is available

    Read here Running in restricted environments

    Developing controls

    Developer guide including control flow description is here dev-guide.md

    Updates and examples

Auditing Microsoft.Web provider (Functions and web apps)

✅ Check roles that are assigned to the function's managed identity in Azure AD and all Azure subscriptions the audit account has access to
✅ Relation mapping: check which key vaults the function uses across all subscriptions the audit account has access to
✅ Check if Azure AD authentication is enabled
✅ Check that generation of access tokens to the API requires assignment (.appRoleAssignmentRequired)
✅ Audit bindings
• Function or Azure AD authentication enabled
• Count and type of triggers

✅ Check if SCM and FTP endpoints are secured


Azure RBAC baseline authorization

⚠️ Detect principals in privileged subscription roles protected only by password-based single-factor authentication.
• Checks for users without MFA policies applied for a set of conditions
• Checks for service principals protected only by a password (as opposed to using a certificate credential, workload federation and/or a workload identity CA policy)

    Maps to App Registration Best Practices

• An unused credential on an application can result in a security breach. While it's convenient to use password secrets as a credential, we strongly recommend that you use x509 certificates as the only credential type for getting tokens for your application.

✅ State healthy - User result example

    { 
    "subscriptionName": "EAST -msdn",
    "friendlyName": "joosua@thx138.onmicrosoft.com",
    "mfaResults": {
    "oid": "138ac68f-d8a7-4000-8d41-c10ff26a9097",
    "appliedPol": [{
    "GrantConditions": "challengeWithMfa",
    "policy": "baseline",
    "oid": "138ac68f-d8a7-4000-8d41-c10ff26a9097"
    }],
    "checkType": "mfa"
    },
    "basicAuthResults": {
    "oid": "138ac68f-d8a7-4000-8d41-c10aa26a9097",
    "appliedPol": [{
    "GrantConditions": "challengeWithMfa",
    "policy": "baseline",
    "oid": "138ac68f-d8a7-4000-8d41-c10aa26a9097"
    }],
    "checkType": "basicAuth"
    },
}

⚠️ State unhealthy - Application principal example

    { 
    "subscriptionName": "EAST - HoneyPot",
    "friendlyName": "thx138-kvref-6193053b-408b-44d0-b20f-4e29b9b67394",
    "creds": {
    "@odata.context": "https://graph.microsoft.com/beta/$metadata#servicePrincipals(id,displayName,appId,keyCredentials,passwordCredentials,servicePrincipalType)/$entity",
    "id": "babec804-037d-4caf-946e-7a2b6de3a45f",
    "displayName": "thx138-kvref-6193053b-408b-44d0-b20f-4e29b9b67394",
    "appId": "5af1760e-89ff-46e4-a968-0ac36a7b7b69",
    "servicePrincipalType": "Application",
    "keyCredentials": [],
    "passwordCredentials": [],
    "OnlySingleFactor": [{
    "customKeyIdentifier": null,
    "endDateTime": "2023-10-20T06:54:59.2014093Z",
    "keyId": "7df44f81-a52c-4fd6-b704-4b046771f85a",
    "startDateTime": "2021-10-20T06:54:59.2014093Z",
    "secretText": null,
    "hint": nu ll,
    "displayName": null
    }],
    "StrongSingleFactor": []
    }
    }

    Contributing

    Following methods work for contributing for the time being:

    1. Submit a pull request with code / documentation change
2. Submit an issue
  • an issue can be a:
  • ⚠️ Problem (issue)
  • Feature request
  • ❔ Question

    Other

    1. By default EAST tries to work with the current depedencies - Introducing new (direct) depedencies is not directly encouraged with EAST. If such vital depedency is introduced, then review licensing of such depedency, and update readme.md - depedencies
      • There is nothing to prevent you from creating your own fork of EAST with your own depedencies


    Octopii - An AI-powered Personal Identifiable Information (PII) Scanner


    Octopii is an open-source AI-powered Personal Identifiable Information (PII) scanner that can look for image assets such as Government IDs, passports, photos and signatures in a directory.


    Working

    Octopii uses Tesseract's Optical Character Recognition (OCR) and Keras' Convolutional Neural Networks (CNN) models to detect various forms of personal identifiable information that may be leaked on a publicly facing location. This is done in the following steps:

    1. Importing and cleaning image(s)

    The image is imported via OpenCV and Python Imaging Library (PIL) and is cleaned, deskewed and rotated for scanning.

    2. Performing image classification and Optical Character Recognition (OCR)

    A directory is looped over and searched for images. These images are scanned for unique features via the image classifier (done by comparing it to a trained model), along with OCR for finding substrings within the image. This may have one of the following outcomes:

    • Best case (score >=90): The image is sent into the image classifier algorithm to be scanned for features such as an ISO/IEC 7810 card specification, colors, location of text, photos, holograms etc. If it is successfully classified as a type of PII, OCR is performed on it looking for particular words and strings as a final check. When both of these are confirmed, the result from Octopii is extremely reliable.

    • Average case (score >=50): The image is partially/incorrectly identified by the image classifier algorithm, but an OCR check finds contradicting substrings and reclassifies it.

    • Worst case (score >=0): The image is only identified by the image classifier algorithm but an OCR scan returns no results.

    • Incorrect classification: False positives due to a very small model or OCR list may incorrectly classify PIIs, giving inaccurate results.

    As a final verification method, images are scanned for certain strings to verify the accuracy of the model.

The accuracy of the scan can be determined via the confidence scores in the output. If all the mentioned conditions are met, a score of 100.0 is returned.

    To train the model, data can also be fed into the model_generator.py script, and the newly improved h5 file can be used.

    Usage

    1. Install all dependencies via pip install -r requirements.txt.
    2. Install the Tesseract helper locally via sudo apt install tesseract-ocr -y (for Ubuntu/Debian).
    3. To run Octopii, type python3 octopii.py <location name>, for example python3 octopii.py pii_list/
    python3 octopii.py <location to scan> <additional flags>

    Octopii currently supports local scanning and scanning S3 directories and open directory listings via their URLs.
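For instance, the same syntax applies to a remote location (hypothetical bucket URL):

python3 octopii.py https://example-bucket.s3.amazonaws.com/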

    Example

    Contributing

    Open-source projects like these thrive on community support. Since Octopii relies heavily on machine learning and optical character recognition, contributions are much appreciated. Here's how to contribute:

    1. Fork

    Fork the official repository at https://github.com/redhuntlabs/octopii

    2. Understand

    There are 3 files in the models/ directory. - The keras_models.h5 file is the Keras h5 model that can be obtained from Google's Teachable Machine or via Keras in Python. - The labels.txt file contains the list of labels corresponding to the index that the model returns. - The ocr_list.json file consists of keywords to search for during an OCR scan, as well as other miscellaneous information such as country of origin, regular expressions etc.

    Generating models via Teachable Machine

    Since our current dataset is quite small, we could benefit from a large Keras model of international PII for this project. If you do not have expertise in Keras, Google provides an extremely easy to use model generator called the Teachable Machine. To use it:

    • Visit https://teachablemachine.withgoogle.com/train and select 'Image Project' β†’ 'Standard Image Model'.
• A few classes are visible. Rename the class to an asset type you'd like to upload, such as "German Passport" or "California Driver License".
    • Add images by clicking the 'Upload' button and upload some image assets. Note: images have to be square

    Tip: segregate your image assets into folders with the folder name being the same as the class name. You can then drag and drop a folder into the upload dialog.

    • Click '+ Add a class' at the bottom of the page to add more classes with data and repeat. You can make the classes more specific, such as "Goa Driver License Old Format".

Note: Only upload images that match the class name; for example, the German Passport class must have German Passport pictures. Uploading the wrong data to the wrong class will confuse the machine learning algorithms.

    • Verify the classes and images one last time. Once you're ready, click on the 'Train Model' button. You can increase the epoch size (such as 5000) to improve model accuracy.
    • To test, you can test the model by clicking the Input dropdown and selecting 'File', then uploading a sample image.
    • Once you're ready, click the 'Export Model' button. In the dialog that pops up, select the 'Tensorflow' tab (not Tensorflow.js) and select the 'Keras' radio button, then click 'Download my model' to export the newly generated model. Extract the downloaded zip file and paste the keras_model.h5 file and labels.txt file into the models/ directory in Octopii.

    The images used for the model above are not visible to us since they're in a proprietary format. You can use both dummy and actual PII. Make sure they are square-ish in image size.

    Updating OCR list

    Once you generate models using Teachable Machine, you can improve Octopii's accuracy via OCR. To do this:

    • Open the existing ocr_list.json file. Create a JSONObject with the key having the same name as the asset class. NOTE: The key name must be exactly the same as the asset class name from Teachable Machine.
    • For the keywords, use as many unique terms from your asset as possible, such as "Income Tax Department". Store them in a JSONArray.
    • (Advanced) you can also add regexes for things like ID numbers and MRZ on passports if they are unique enough. Use https://regex101.com to test your regexes before adding them.
• Save/overwrite the existing ocr_list.json file (a minimal example entry is sketched below).
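A minimal entry might look like the following sketch (the class name, keys, keywords and regex shown here are illustrative placeholders; mirror the structure of the existing ocr_list.json rather than these exact field names):

{
    "Example Driver License": {
        "keywords": ["Department of Motor Vehicles", "Driver License"],
        "regexes": ["DL[0-9]{8}"]
    }
}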

    3. Edit

    You can replace each file you modify in the models/ directory after you create or edit them via the above methods.

    4. Pull request

    Submit a pull request from your forked repo and we'll pick it up and replace our current model with it if the changes are large enough.

    Note: Please take the following steps to ensure quality

    • Make sure the model returns extremely accurate results by testing it locally first.
    • Use proper text casing for label names in both the Keras model and ocr_list.json.
    • Make sure all JSON is valid with appropriate character escapes with no duplicate keys, regexes or keywords.
    • For country names, please use the ISO 3166-1 alpha-2 code of the country.

    Credits

    License

    MIT License

    (c) Copyright 2022 RedHunt Labs Private Limited

    Author: Owais Shaikh



    Why Vulnerability Scanning is Critical for SOC 2

    SOC 2 may be a voluntary standard, but for today's security-conscious business, it's a minimal requirement when considering a SaaS provider. Compliance can be a long and complicated process, but a scanner likeΒ IntruderΒ makes it easy to tick the vulnerability management box. Security is critical for all organisations, including those that outsource key business operations to third parties like

    Penetration Testing or Vulnerability Scanning? What's the Difference?

    Pentesting and vulnerability scanning are often confused for the same service. The problem is, business owners often use one when they really need the other. Let's dive in and explain the differences. People frequently confuse penetration testing and vulnerability scanning, and it's easy to see why. Both look for weaknesses in your IT infrastructure by exploring your systems in the same way an

    Smap - A Drop-In Replacement For Nmap Powered By Shodan.Io


Smap is a replica of Nmap which uses shodan.io's free API for port scanning. It takes the same command line arguments as Nmap and produces the same output, which makes it a drop-in replacement for Nmap.


    Features

    • Scans 200 hosts per second
    • Doesn't require any account/api key
    • Vulnerability detection
    • Supports all nmap's output formats
    • Service and version fingerprinting
    • Makes no contact to the targets

    Installation

    Binaries

    You can download a pre-built binary from here and use it right away.

    Manual

    go install -v github.com/s0md3v/smap/cmd/smap@latest

    Confused or something not working? For more detailed instructions, click here

    AUR package

    Smap is available on AUR as smap-git (builds from source) and smap-bin (pre-built binary).

    Homebrew/Mac

    Smap is also available on Homebrew.

    brew update
    brew install smap

    Usage

    Smap takes the same arguments as Nmap but options other than -p, -h, -o*, -iL are ignored. If you are unfamiliar with Nmap, here's how to use Smap.

    Specifying targets

    smap 127.0.0.1 127.0.0.2

    You can also use a list of targets, separated by newlines.

    smap -iL targets.txt

    Supported formats

    1.1.1.1          // IPv4 address
    example.com      // hostname
    178.23.56.0/8    // CIDR

    Output

    Smap supports 6 output formats, which can be used with the -o* options as follows:

    smap example.com -oX output.xml

    If you want to print the output to terminal, use hyphen (-) as filename.
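
    For example, the following invocation (illustrative; example.com is just a placeholder target) prints the default Nmap-style output straight to the terminal:

    smap example.com -oN -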

    Supported formats

    oX    // nmap's xml format
    oG    // nmap's greppable format
    oN    // nmap's default format
    oA    // output in all 3 formats above at once
    oP    // IP:PORT pairs separated by newlines
    oS    // custom smap format
    oJ    // json

    Note: Since Nmap doesn't scan/display vulnerabilities and tags, that data is not available in nmap's formats. Use -oS to view that info.

    Specifying ports

    Smap scans these 1237 ports by default. If you want to display results for certain ports, use the -p option.

    smap -p21-30,80,443 -iL targets.txt

    Considerations

    Since Smap simply fetches existing port data from shodan.io, it is super fast, but there's more to it. You should use Smap if:

    You want

    • vulnerability detection
    • a super fast port scanner
    • results for most common ports (top 1237)
    • no connections to be made to the targets

    You are okay with

    • not being able to scan IPv6 addresses
    • results being up to 7 days old
    • a few false negatives


    Trufflehog - Find Credentials All Over The Place

    TruffleHog

    Find leaked credentials.


    Join The Slack

    Have questions? Feedback? Jump in slack and hang out with us

    https://join.slack.com/t/trufflehog-community/shared_invite/zt-pw2qbi43-Aa86hkiimstfdKH9UCpPzQ

    Demo


    docker run -it -v "$PWD:/pwd" trufflesecurity/trufflehog:latest github --org=trufflesecurity

    What's new in v3?

    TruffleHog v3 is a complete rewrite in Go with many new powerful features.

    • We've added over 700 credential detectors that support active verification against their respective APIs.
    • We've also added native support for scanning GitHub, GitLab, filesystems, and S3.
    • Instantly verify private keys against millions of github users and billions of TLS certificates using our Driftwood technology.

    What is credential verification?

    For every potential credential that is detected, we've painstakingly implemented programmatic verification against the API that we think it belongs to. Verification eliminates false positives. For example, the AWS credential detector performs a GetCallerIdentity API call against the AWS API to verify if an AWS credential is active.
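
    As a usage sketch based on the flags documented later in this README, you can ask TruffleHog to report only findings that passed verification by adding the --only-verified flag (the target below is the public test_keys repository referenced elsewhere in this README):

    trufflehog git https://github.com/trufflesecurity/test_keys --only-verified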

    Installation

    Several options:

    1. Go

    git clone https://github.com/trufflesecurity/trufflehog.git

    cd trufflehog; go install

    2. Release binaries

    3. Docker

    Note: Apple M1 hardware users should run with docker run --platform linux/arm64 for better performance.

    Most users

    docker run -it -v "$PWD:/pwd" trufflesecurity/trufflehog:latest github --repo https://github.com/trufflesecurity/test_keys

    Apple M1 users

    The linux/arm64 image runs better on the M1 than the amd64 image. Even better is running the native darwin binary that is available, but there is no container image for that.

    docker run --platform linux/arm64 -it -v "$PWD:/pwd" trufflesecurity/trufflehog:latest github --repo https://github.com/trufflesecurity/test_keys 

    4. Pip (help wanted)

    It's possible to distribute binaries in pip wheels.

    Here is an example of a project that does it.

    Help with setting up this packaging would be appreciated!

    5. Brew

    brew tap trufflesecurity/trufflehog
    brew install trufflehog

    Usage

    TruffleHog has a sub-command for each source of data that you may want to scan:

    • git
    • github
    • gitlab
    • S3
    • filesystem
    • syslog
    • file and stdin (coming soon)

    Each sub-command has options that you can see by passing the -h flag to it:

    $ trufflehog git --help
    usage: TruffleHog git [<flags>] <uri>

    Find credentials in git repositories.

    Flags:
          --help                     Show context-sensitive help (also try --help-long and --help-man).
          --debug                    Run in debug mode
          --version                  Prints trufflehog version.
      -j, --json                     Output in JSON format.
          --json-legacy              Use the pre-v3.0 JSON format. Only works with git, gitlab, and github sources.
          --concurrency=1            Number of concurrent workers.
          --no-verification          Don't verify the results.
          --only-verified            Only output verified results.
          --print-avg-detector-time  Print the average time spent on each detector.
          --no-update                Don't check for updates.
      -i, --include-paths=INCLUDE-PATHS
                                     Path to file with newline separated regexes for files to include in scan.
      -x, --exclude-paths=EXCLUDE-PATHS
                                     Path to file with newline separated regexes for files to exclude in scan.
          --since-commit=SINCE-COMMIT
                                     Commit to start scan from.
          --branch=BRANCH            Branch to scan.
          --max-depth=MAX-DEPTH      Maximum depth of commits to scan.
          --allow                    No-op flag for backwards compat.
          --entropy                  No-op flag for backwards compat.
          --regex                    No-op flag for backwards compat.

    Args:
      <uri>  Git repository URL. https:// or file:// schema expected.

    For example, to scan a git repository, start with

    $ trufflehog git https://github.com/trufflesecurity/trufflehog.git

    Exit Codes:

    • 0: No errors and no results were found.
    • 1: An error was encountered. Sources may not have completed scans.
    • 183: No errors were encountered, but results were found. Will only be returned if the --fail flag is used (see the sketch below).
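
    A minimal shell sketch of how these exit codes might be used in a CI step (illustrative only; the target is the public test_keys repository used elsewhere in this README):

    trufflehog git https://github.com/trufflesecurity/test_keys --only-verified --fail
    status=$?
    if [ "$status" -eq 183 ]; then
        echo "Verified secrets were found, failing the build"
        exit 1
    fi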

    Scanning an organization

    Try scanning an entire GitHub organization with the following:

    docker run -it -v "$PWD:/pwd" trufflesecurity/trufflehog:latest github --org=trufflesecurity

    TruffleHog OSS Github Action

    - name: TruffleHog OSS
      uses: trufflesecurity/trufflehog@main
      with:
        # Repository path
        path:
        # Start scanning from here (usually main branch).
        base:
        # Scan commits until here (usually dev branch).
        head: # optional

    The TruffleHog OSS Github Action can be used to scan a range of commits for leaked credentials. The action will fail if any results are found.

    For example, to scan the contents of pull requests you could use the following workflow:

    name: Leaked Secrets Scan
    on: [pull_request]
    jobs:
      TruffleHog:
        runs-on: ubuntu-latest
        steps:
          - name: Checkout code
            uses: actions/checkout@v3
            with:
              fetch-depth: 0
          - name: TruffleHog OSS
            uses: trufflesecurity/trufflehog@v3.4.3
            with:
              path: ./
              base: ${{ github.event.repository.default_branch }}
              head: HEAD

    Contributors

    This project exists thanks to all the people who contribute. [Contribute].

    Contributing

    Contributions are very welcome! Please see our contribution guidelines first.

    We no longer accept contributions to TruffleHog v2, but that code is available in the v2 branch.

    Adding new secret detectors

    We have published some documentation and tooling to get started on adding new secret detectors. Let's improve detection together!

    License Change

    Since v3.0, TruffleHog is released under an AGPL 3 license, included in LICENSE. TruffleHog v3.0 uses none of the previous codebase, but care was taken to preserve backwards compatibility on the command line interface. The work prior to this release is still available, licensed under GPL 2.0, in the history of this repository and the previous package releases and tags. A completed CLA is required for us to accept contributions going forward.


