FreshRSS

✇ KitPloit - PenTest Tools!

Secator - The Pentester's Swiss Knife

By: Zion3R — September 22nd 2024 at 11:30


secator is a task and workflow runner used for security assessments. It supports dozens of well-known security tools and it is designed to improve productivity for pentesters and security researchers.


Features

  • Curated list of commands

  • Unified input options

  • Unified output schema

  • CLI and library usage

  • Distributed options with Celery

  • Scales from simple tasks to complex workflows

  • Customizable


Supported tools

secator integrates the following tools:

Name | Description | Category
httpx | Fast HTTP prober. | http
cariddi | Fast crawler and endpoint secrets / API keys / tokens matcher. | http/crawler
gau | Offline URL crawler (Alien Vault, The Wayback Machine, Common Crawl, URLScan). | http/crawler
gospider | Fast web spider written in Go. | http/crawler
katana | Next-generation crawling and spidering framework. | http/crawler
dirsearch | Web path discovery. | http/fuzzer
feroxbuster | Simple, fast, recursive content discovery tool written in Rust. | http/fuzzer
ffuf | Fast web fuzzer written in Go. | http/fuzzer
h8mail | Email OSINT and breach hunting tool. | osint
dnsx | Fast and multi-purpose DNS toolkit designed for running DNS queries. | recon/dns
dnsxbrute | Fast and multi-purpose DNS toolkit designed for running DNS queries (bruteforce mode). | recon/dns
subfinder | Fast subdomain finder. | recon/dns
fping | Find alive hosts on local networks. | recon/ip
mapcidr | Expand CIDR ranges into IPs. | recon/ip
naabu | Fast port discovery tool. | recon/port
maigret | Hunt for user accounts across many websites. | recon/user
gf | A wrapper around grep to avoid typing common patterns. | tagger
grype | A vulnerability scanner for container images and filesystems. | vuln/code
dalfox | Powerful XSS scanning tool and parameter analyzer. | vuln/http
msfconsole | CLI to access and work with the Metasploit Framework. | vuln/http
wpscan | WordPress security scanner. | vuln/multi
nmap | Vulnerability scanner using NSE scripts. | vuln/multi
nuclei | Fast and customisable vulnerability scanner based on a simple YAML-based DSL. | vuln/multi
searchsploit | Exploit searcher. | exploit/search

Feel free to request new tools to be added by opening an issue, but please check that the tool complies with our selection criteria before doing so. If it doesn't but you still want to integrate it into secator, you can plug it in (see the dev guide).

Installation

Installing secator

Pipx
pipx install secator
Pip
pip install secator
Bash
wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh
Docker
docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator --help
The volume mount -v is necessary to save all secator reports to your host machine, and --net=host is recommended to grant full access to the host network. You can alias this command to make it easier to run:
alias secator="docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator"
Now you can run secator as if it were installed on bare metal:
secator --help
Docker Compose
git clone https://github.com/freelabz/secator
cd secator
docker-compose up -d
docker-compose exec secator secator --help

Note: If you chose the Bash, Docker or Docker Compose installation methods, you can skip the next sections and go straight to Usage.

Installing languages

secator uses external tools, so you might need to install the languages those tools are written in if they are not already present on your system.

We provide utilities to install required languages if you don't manage them externally:

Go
secator install langs go
Ruby
secator install langs ruby

Installing tools

secator does not install any of the external tools it supports by default.

We provide utilities to install or update each supported tool which should work on all systems supporting apt:

All tools
secator install tools
Specific tools
secator install tools <TOOL_NAME>
For instance, to install `httpx`, use:
secator install tools httpx

Please make sure you are using the latest available versions for each tool before you run secator or you might run into parsing / formatting issues.

Installing addons

secator ships with a minimal set of dependencies.

There are several addons available for secator:

worker Add support for Celery workers (see Distributed runs with Celery: https://docs.freelabz.com/in-depth/distributed-runs-with-celery).
secator install addons worker
google Add support for Google Drive exporter (`-o gdrive`).
secator install addons google
mongodb Add support for MongoDB driver (`-driver mongodb`).
secator install addons mongodb
redis Add support for Redis backend (Celery).
secator install addons redis
dev Add development tools like `coverage` and `flake8` required for running tests.
secator install addons dev
trace Add tracing tools like `memray` and `pyinstrument` required for tracing functions.
secator install addons trace
build Add `hatch` for building and publishing the PyPI package.
secator install addons build
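
For example, a minimal distributed setup might look like this (a sketch, assuming the worker and redis addons are installed and a Redis broker is reachable at its default address):

# Terminal 1: start a Celery worker that picks up queued tasks
secator worker

# Terminal 2: run any task as usual; it is now dispatched to the worker
secator x httpx testphp.vulnweb.com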

Install CVEs

secator makes remote API calls to https://cve.circl.lu/ to get in-depth information about the CVEs it encounters. We provide a subcommand to download all known CVEs locally so that future lookups are made from disk instead:

secator install cves

Checking installation health

To figure out which languages or tools are installed on your system (along with their version):

secator health

Usage

secator --help


Usage examples

Run a fuzzing task (ffuf):

secator x ffuf http://testphp.vulnweb.com/FUZZ

Run a url crawl workflow:

secator w url_crawl http://testphp.vulnweb.com

Run a host scan:

secator s host mydomain.com

And more... To list all the tasks, workflows, and scans you can use:

secator x --help
secator w --help
secator s --help
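
Because the output schema is unified across tools, results can also be exported in machine-readable formats. A minimal sketch (the json exporter name is an assumption here; only the gdrive exporter is documented in the addons section above):

secator x httpx testphp.vulnweb.com -o json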

Learn more

To go deeper with secator, check out:

  • Our complete documentation
  • Our getting started tutorial video
  • Our Medium post
  • Follow us on social media: @freelabz on Twitter and @FreeLabz on YouTube



✇ KitPloit - PenTest Tools!

Ashok - An OSINT Recon Tool, A.K.A. Swiss Army Knife

By: Zion3R — June 26th 2024 at 12:30


Reconnaissance is the first phase of penetration testing, which means gathering information before any real attacks are planned. Ashok is an incredibly fast recon tool for penetration testers, designed specifically for the reconnaissance phase. Ashok-v1.1 adds an advanced Google dorker and a Wayback crawling machine.



Main Features

- Wayback Crawler Machine
- Google Dorking without limits
- Github Information Grabbing
- Subdomain Identifier
- CMS/Technology Detector With Custom Headers

Installation

~> git clone https://github.com/ankitdobhal/Ashok
~> cd Ashok
~> python3.7 -m pip install -r requirements.txt

How to use Ashok?

A detailed usage guide is available in the Usage section of the Wiki.

A short index of options is given below:

Docker

Ashok can be launched using a lightweight Python3.8-Alpine Docker image.

$ docker pull powerexploit/ashok-v1.2
$ docker container run -it powerexploit/ashok-v1.2 --help





    ✇ KitPloit - PenTest Tools!

    X-Recon - A Utility For Detecting Webpage Inputs And Conducting XSS Scans

    By: Zion3R — June 5th 2024 at 12:30

    A utility for identifying web page inputs and conducting XSS scanning.


    Features:

    • Subdomain Discovery: Retrieves relevant subdomains for the target website and consolidates them into a whitelist. These subdomains can be utilized during the scraping process.

    • Site-wide Link Discovery: Collects all links throughout the website based on the provided whitelist and the specified max_depth.

    • Form and Input Extraction: Identifies all forms and inputs found within the extracted links, generating a JSON output. This JSON output serves as a foundation for leveraging the XSS scanning capability of the tool.

    • XSS Scanning: Once the start recon option returns a custom JSON containing the extracted entries, X-Recon can initiate the XSS vulnerability testing process and furnish you with the desired results!



    Note:

    The scanning functionality currently does not work on SPA (Single Page Application) web applications; we have only tested it on websites developed with PHP, with remarkable results. We plan to add support for SPAs in the future.




    Note:

    This tool maintains an up-to-date list of file extensions that it skips during the exploration process. The default list includes common file types such as images, stylesheets, and scripts (".css", ".js", ".mp4", ".zip", ".png", ".svg", ".jpeg", ".webp", ".jpg", ".gif"). You can customize this list to better suit your needs by editing the setting.json file.

    Installation

    $ git clone https://github.com/joshkar/X-Recon
    $ cd X-Recon
    $ python3 -m pip install -r requirements.txt
    $ python3 xr.py

    Target For Test:

    You can use this address in the Get URL section

      http://testphp.vulnweb.com


    ✇ The Hacker News

    China-Linked Hackers Used ROOTROT Webshell in MITRE Network Intrusion

    By: Newsroom — May 7th 2024 at 12:55
    The MITRE Corporation has offered more details into the recently disclosed cyber attack, stating that the first evidence of the intrusion now dates back to December 31, 2023. The attack, which came to light last month, singled out MITRE's Networked Experimentation, Research, and Virtualization Environment (NERVE) through the exploitation of two Ivanti Connect Secure zero-day
    ✇ KitPloit - PenTest Tools!

    Sicat - The Useful Exploit Finder

    By: Zion3R — April 9th 2024 at 12:30

    Introduction

    SiCat is an advanced exploit search tool designed to identify and gather information about exploits from both open sources and local repositories effectively. With a focus on cybersecurity, SiCat allows users to quickly search online, finding potential vulnerabilities and relevant exploits for ongoing projects or systems.

    SiCat's main strength lies in its ability to traverse both online and local resources to collect information about relevant exploitations. This tool aids cybersecurity professionals and researchers in understanding potential security risks, providing valuable insights to enhance system security.



    Installation

    git clone https://github.com/justakazh/sicat.git && cd sicat

    pip install -r requirements.txt

    Usage


    ~$ python sicat.py --help

    Command Line Options:

    Command | Description
    -h | Show help message and exit
    -k KEYWORD | Keyword to search for
    -kv KEYWORD_VERSION | Version of the keyword
    -nm | Identify via nmap output
    --nvd | Use NVD as info source
    --packetstorm | Use PacketStorm as info source
    --exploitdb | Use ExploitDB as info source
    --exploitalert | Use ExploitAlert as info source
    --msfmodule | Use Metasploit as info source
    -o OUTPUT | Path to save output to
    -ot OUTPUT_TYPE | Output file type: json or html

    Examples

    From keyword


    python sicat.py -k telerik --exploitdb --msfmodule

    From nmap output


    nmap --open -sV localhost -oX nmap_out.xml
    python sicat.py -nm nmap_out.xml --packetstorm
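
    Options can be combined. For instance, a sketch that searches several sources for a keyword and version and writes an HTML report (flags taken from the table above; the keyword and version are illustrative):

    python sicat.py -k wordpress -kv 5.8 --nvd --exploitdb -o ./report -ot html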

    To-do

    • [ ] Input from nmap results via a pipeline
    • [ ] Nmap multiple-host support
    • [ ] Search NSE Script
    • [ ] Search by PORT

    Contribution

    I'm aware that perfection is elusive in coding. If you come across any bugs, feel free to contribute by fixing the code or suggesting new features. Your input is always welcomed and valued.



    ✇ The Hacker News

    Russia Hackers Using TinyTurla-NG to Breach European NGO's Systems

    By: Newsroom — March 21st 2024 at 16:03
    The Russia-linked threat actor known as Turla infected several systems belonging to an unnamed European non-governmental organization (NGO) in order to deploy a backdoor called TinyTurla-NG (TTNG). "The attackers compromised the first system, established persistence and added exclusions to antivirus products running on these endpoints as part of their preliminary post-compromise actions," Cisco
    ✇ The Hacker News

    From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

    By: Newsroom — March 19th 2024 at 13:55
    Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News.
    ✇ KitPloit - PenTest Tools!

    Dorkish - Chrome Extension Tool For OSINT & Recon

    By: Zion3R — March 16th 2024 at 11:30


    During the reconnaissance phase or when doing OSINT, we often use Google dorking and Shodan, and thus the idea of Dorkish was born.
    Dorkish is a Chrome extension that facilitates custom dork creation for Google and Shodan using the builder, and it offers prebuilt dorks for efficient reconnaissance and OSINT engagements.


    Installation And Setup

    1- Clone the repository

    git clone https://github.com/yousseflahouifi/dorkish.git

    2- Go to chrome://extensions/ and enable Developer mode in the top right corner.
    3- Click the Load unpacked extension button and select the dorkish folder.

    Note: For Firefox users, you can find the extension here: https://addons.mozilla.org/en-US/firefox/addon/dorkish/

    Features

    Google dorking

    • Builder with keywords to filter your Google search results.
    • Prebuilt dorks for bug bounty programs.
    • Prebuilt dorks used during the reconnaissance phase in bug bounty.
    • Prebuilt dorks for exposed files and directories.
    • Prebuilt dorks for login and sign-up portals.
    • Prebuilt dorks for cyber security jobs.

    Shodan dorking

    • Builder with filter keywords used in Shodan.
    • Variety of prebuilt dorks to find IoT, network infrastructure, cameras, ICS, databases, etc.

    Usage

    Once you have found or built the dork you need, simply click it and click search. This will direct you to the desired search engine, Shodan or Google, with the specific dork you've entered. Then, you can explore and enjoy the results that match your query.

    TODO

    • Add more useful dorks and categories
    • Fix some bugs
    • Add a search bar to search through the results
    • Possibly add some LLM models to build dorks

    Notes

    I have built some dorks and used some public resources to gather others; here are a few: - https://github.com/lothos612/shodan - https://github.com/TakSec/google-dorks-bug-bounty

    Warning

    • I am not responsible for any damage caused by using the tool


    ✇ KitPloit - PenTest Tools!

    swaggerHole - A Python3 Script Searching For Secrets On SwaggerHub

    By: Zion3R — February 24th 2024 at 11:30


    Introduction 

    This tool is made to automate the process of retrieving secrets in public APIs on SwaggerHub (https://app.swaggerhub.com/search). This tool is multithreaded and a pipe mode is available :)

    Requirements

    • python3 (sudo apt install python3)
    • pip3 (sudo apt install python3-pip)

    Installation

    pip3 install swaggerhole

    or clone this repository and run:

    git clone https://github.com/Liodeus/swaggerHole.git
    pip3 install .

    Usage

    [swaggerHole ASCII art banner]

    usage: swaggerhole [-h] [-s SEARCH] [-o OUT] [-t THREADS] [-j] [-q] [-du] [-de]

    optional arguments:
    -h, --help show this help message and exit
    -s SEARCH, --search SEARCH
    Term to search
    -o OUT, --out OUT Output directory
    -t THREADS, --threads THREADS
    Threads number (Default 25)
    -j, --json Json output
    -q, --quiet Remove banner
    -du, --deactivate_url
    Deactivate the URL filtering
    -de, --deactivate_email
    Deactivate the email filtering

    Search for secrets about a domain

    swaggerHole -s test.com

    echo test.com | swaggerHole

    Search for secrets about a domain and output to json

    swaggerHole -s test.com --json

    echo test.com | swaggerHole --json

    Search for secrets about a domain and do it fast :)

    swaggerHole -s test.com -t 100

    echo test.com | swaggerHole -t 100
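
    Pipe mode also composes with ordinary shell loops to scan a list of domains (plain shell around the documented pipe mode; domains.txt is illustrative):

    cat domains.txt | while read domain; do
        echo "$domain" | swaggerHole -q --json
    done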

    Output explanation

    Normal output

     `Finding_Type - Finding - [Swagger_Name][Date_Last_Update][Line:Number]` 

    Json output

     `{"Finding_Type": Finding, "File": File_path, "Date": Date_Last_Update, "Line": Number}` 

    Deactivate url/email 

    Using -du or -de removes the filtering done by the tool. There are more false positives with these options.
    ✇ The Hacker News

    Chinese Hackers Operate Undetected in U.S. Critical Infrastructure for Half a Decade

    By: Newsroom — February 8th 2024 at 13:05
    The U.S. government on Wednesday said the Chinese state-sponsored hacking group known as Volt Typhoon had been embedded into some critical infrastructure networks in the country for at least five years. Targets of the threat actor include communications, energy, transportation, and water and wastewater systems sectors in the U.S. and Guam. "Volt Typhoon's choice of targets and pattern
    ✇ The Hacker News

    After FBI Takedown, KV-Botnet Operators Shift Tactics in Attempt to Bounce Back

    By: Newsroom — February 7th 2024 at 15:11
    The threat actors behind the KV-botnet made "behavioral changes" to the malicious network as U.S. law enforcement began issuing commands to neutralize the activity. KV-botnet is the name given to a network of compromised small office and home office (SOHO) routers and firewall devices across the world, with one specific cluster acting as a covert data transfer system for other Chinese
    ✇ KitPloit - PenTest Tools!

    BucketLoot - An Automated S3-compatible Bucket Inspector

    By: Zion3R — January 29th 2024 at 11:30


    BucketLoot is an automated S3-compatible Bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text.

    The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.

    BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens / access keys initially in order to run the scan. The tool will scrape a maximum of 1000 files that are returned in the XML response; if the storage bucket contains more than 1000 entries the user would like to scan, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.

    Features

    Secret Scanning

    Scans for 80+ unique RegEx signatures that can help uncover secret exposures, tagged with their severity, in the misconfigured storage bucket. Users can modify or add their own signatures in the regexes.json file. If you believe you have any cool signatures that might be helpful for others too and could be flagged at scale, go ahead and make a PR!

    Sensitive File Checks

    Accidental sensitive file leakages are a big problem that affects the security posture of individuals and organisations. BucketLoot comes with a list of 80+ unique RegEx signatures in vulnFiles.json, which allows users to flag these sensitive files based on file names or extensions.

    Dig Mode

    Want to quickly check if any target website is using a misconfigured bucket that is leaking secrets or other sensitive data? Dig Mode allows you to pass non-S3 targets and lets the tool scrape URLs from the response body for scanning.

    Asset Extraction

    Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs, subdomains and domains that may be present in an exposed storage bucket, giving you a chance to discover hidden endpoints and an edge over traditional recon tools.

    Searching

    The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.

    To know more about our Attack Surface Management platform, check out NVADR.



    ✇ KitPloit - PenTest Tools!

    Rayder - A Lightweight Tool For Orchestrating And Organizing Your Bug Hunting Recon / Pentesting Command-Line Workflows

    By: Zion3R — January 23rd 2024 at 11:30


    Rayder is a command-line tool designed to simplify the orchestration and execution of workflows. It allows you to define a series of modules in a YAML file, each consisting of commands to be executed. Rayder helps you automate complex processes, making it easy to streamline repetitive modules and execute them parallelly if the commands do not depend on each other.


    Installation

    To install Rayder, ensure you have Go (1.16 or higher) installed on your system. Then, run the following command:

    go install github.com/devanshbatham/rayder@v0.0.4

    Usage

    Rayder offers a straightforward way to execute workflows defined in YAML files. Use the following command:

    rayder -w path/to/workflow.yaml

    Workflow Configuration

    A workflow is defined in a YAML file with the following structure:

    vars:
      VAR_NAME: value
      # Add more variables...

    parallel: true|false
    modules:
      - name: task-name
        cmds:
          - command-1
          - command-2
          # Add more commands...
        silent: true|false
      # Add more modules...

    Using Variables in Workflows

    Rayder allows you to use variables in your workflow configuration, making it easy to parameterize your commands and achieve more flexibility. You can define variables in the vars section of your workflow YAML file. These variables can then be referenced within your command strings using double curly braces ({{}}).

    Defining Variables

    To define variables, add them to the vars section of your workflow YAML file:

    vars:
      VAR_NAME: value
      ANOTHER_VAR: another_value
      # Add more variables...

    Referencing Variables in Commands

    You can reference variables within your command strings using double curly braces ({{}}). For example, if you defined a variable OUTPUT_DIR, you can use it like this:

    modules:
      - name: example-task
        cmds:
          - echo "Output directory {{OUTPUT_DIR}}"

    Supplying Variables via the Command Line

    You can also supply values for variables via the command line when executing your workflow. Use the format VARIABLE_NAME=value to provide values for specific variables. For example:

    rayder -w path/to/workflow.yaml VAR_NAME=new_value ANOTHER_VAR=updated_value

    If you don't provide values for variables via the command line, Rayder will automatically apply default values defined in the vars section of your workflow YAML file.

    Remember that variables supplied via the command line will override the default values defined in the YAML configuration.

    Example

    Example 1:

    Here's an example of how you can define, reference, and supply variables in your workflow configuration:

    vars:
      ORG: "example.org"
      OUTPUT_DIR: "results"

    modules:
      - name: example-task
        cmds:
          - echo "Organization {{ORG}}"
          - echo "Output directory {{OUTPUT_DIR}}"

    When executing the workflow, you can provide values for ORG and OUTPUT_DIR via the command line like this:

    rayder -w path/to/workflow.yaml ORG=custom_org OUTPUT_DIR=custom_results_dir

    This will override the default values and use the provided values for these variables.

    Example 2:

    Here's an example workflow configuration tailored for reverse whois recon and processing the root domains into subdomains, resolving them and checking which ones are alive:

    vars:
      ORG: "Acme, Inc"
      OUTPUT_DIR: "results-dir"

    parallel: false
    modules:
      - name: reverse-whois
        silent: false
        cmds:
          - mkdir -p {{OUTPUT_DIR}}
          - revwhoix -k "{{ORG}}" > {{OUTPUT_DIR}}/root-domains.txt

      - name: finding-subdomains
        cmds:
          - xargs -I {} -a {{OUTPUT_DIR}}/root-domains.txt echo "subfinder -d {} -o {}.out" | quaithe -workers 30
        silent: false

      - name: cleaning-subdomains
        cmds:
          - cat *.out > {{OUTPUT_DIR}}/root-subdomains.txt
          - rm *.out
        silent: true

      - name: resolving-subdomains
        cmds:
          - cat {{OUTPUT_DIR}}/root-subdomains.txt | dnsx -silent -threads 100 -o {{OUTPUT_DIR}}/resolved-subdomains.txt
        silent: false

      - name: checking-alive-subdomains
        cmds:
          - cat {{OUTPUT_DIR}}/resolved-subdomains.txt | httpx -silent -threads 100 -o {{OUTPUT_DIR}}/alive-subdomains.txt
        silent: false

    To execute the above workflow, run the following command:

    rayder -w path/to/reverse-whois.yaml ORG="Yelp, Inc" OUTPUT_DIR=results

    Parallel Execution

    The parallel field in the workflow configuration determines whether modules should be executed in parallel or sequentially. Setting parallel to true allows modules to run concurrently, making it suitable for modules with no dependencies. When set to false, modules will execute one after another.
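
    For instance, a minimal sketch of a parallel workflow with two independent modules (module names and commands are illustrative placeholders):

    parallel: true
    modules:
      - name: fetch-a
        cmds:
          - echo "module A runs concurrently"
        silent: false

      - name: fetch-b
        cmds:
          - echo "module B runs concurrently"
        silent: false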

    Workflows

    Explore a collection of sample workflows and examples in the Rayder workflows repository. Stay tuned for more additions!

    Inspiration

    The inspiration for this project comes from the awesome Taskfile project.



    ✇ KitPloit - PenTest Tools!

    Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

    By: Zion3R — January 22nd 2024 at 11:30


    Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and it supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.


    Extracted Details:

    Uscrapper extracts the following details from the provided website:

    • Email Addresses: Displays email addresses found on the website.
    • Social Media Links: Displays links to various social media platforms found on the website.
    • Author Names: Displays the names of authors associated with the website.
    • Geolocations: Displays geolocation information associated with the website.
    • Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers, and usernames.

    What's New?

    Uscrapper 2.0:

    • Introduced multiple modules to bypass anti-web-scraping techniques.
    • Introduced Crawl and Scrape: an advanced crawl-and-scrape module to scrape websites from within.
    • Implemented multithreading to make these processes faster.

    Installation Steps:

    git clone https://github.com/z0m31en7/Uscrapper.git
    cd Uscrapper/install/ 
    chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

    Usage:

    To run Uscrapper, use the following command-line syntax:

    python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]


    Arguments:

    • -h, --help: Show the help message and exit.
    • -u URL, --url URL: Specify the URL of the website to extract details from.
    • -c INT, --crawl INT: Specify the number of links to crawl.
    • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
    • -O, --generate-report: Generate a report file containing the extracted details.
    • -ns, --nonstrict: Display non-strict usernames during extraction.
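
    Putting the arguments together, a typical run might look like this (the URL and values are illustrative):

    python Uscrapper-v2.0.py -u https://example.com -c 30 -t 10 -O -ns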

    Note:

    • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

    • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

    • To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.

    Contribution:

    Want a new feature to be added?

    • Make a pull request with all the necessary details and it will be merged after a review.
    • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


    ✇ KitPloit - PenTest Tools!

    CloudRecon - Finding assets from certificates

    By: Zion3R — January 16th 2024 at 11:30


    CloudRecon

    Finding assets from certificates! Scan the web! Tool presented @DEFCON 31


    Install

    You must have CGO enabled, and may have to install gcc to run CloudRecon.

    sudo apt install gcc
    go install github.com/g0ldencybersec/CloudRecon@latest

    Description

    CloudRecon

    CloudRecon is a suite of tools for red teamers and bug hunters to find ephemeral and development assets in their campaigns and hunts.

    Often, target organizations stand up cloud infrastructure that is not tied to their ASN or related to known infrastructure. Many times these assets are development sites, IT product portals, etc. Sometimes they don't have domains at all, but many still need HTTPS.

    CloudRecon is a suite of tools to scan IP addresses or CIDRs (e.g., cloud providers' IP ranges) and find these hidden gems for testers by inspecting SSL certificates.

    The tool suite is three parts in GO:

    Scrape - A LIVE-running tool to inspect the ranges for a keyword in SSL certs' CN and SAN fields in real time.

    Store - A tool to retrieve IPs' certs and download all their Orgs, CNs, and SANs, so you can have your OWN crt.sh database.

    Retr - A tool to parse and search through the downloaded certs for keywords.

    Usage

    MAIN

    Usage: CloudRecon scrape|store|retr [options]

    -h Show the program usage message

    Subcommands:

    cloudrecon scrape - Scrape given IPs and output CNs & SANs to stdout
    cloudrecon store - Scrape and collect Orgs,CNs,SANs in local db file
    cloudrecon retr - Query local DB file for results

    SCRAPE

    scrape [options] -i <IPs/CIDRs or File>
    -a Add this flag if you want to see all output including failures
    -c int
    How many goroutines running concurrently (default 100)
    -h print usage!
    -i string
    Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE" )
    -p string
    TLS ports to check for certificates (default "443")
    -t int
    Timeout for TLS handshake (default 4)

    STORE

    store [options] -i <IPs/CIDRs or File>
    -c int
    How many goroutines running concurrently (default 100)
    -db string
    String of the DB you want to connect to and save certs! (default "certificates.db")
    -h print usage!
    -i string
    Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE")
    -p string
    TLS ports to check for certificates (default "443")
    -t int
    Timeout for TLS handshake (default 4)

    RETR

    retr [options]
    -all
    Return all the rows in the DB
    -cn string
    String to search for in common name column, returns like-results (default "NONE")
    -db string
    String of the DB you want to connect to and save certs! (default "certificates.db")
    -h print usage!
    -ip string
    String to search for in IP column, returns like-results (default "NONE")
    -num
    Return the Number of rows (results) in the DB (By IP)
    -org string
    String to search for in Organization column, returns like-results (default "NONE")
    -san string
    String to search for in common name column, returns like-results (default "NONE")
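
    An end-to-end sketch combining the three subcommands (the IPs, file names and keywords are illustrative):

    # Live-scrape a CIDR and watch CNs & SANs on stdout
    cloudrecon scrape -i 192.0.2.0/24

    # Save Orgs, CNs and SANs from a list of ranges into a local DB, then query it
    cloudrecon store -i ranges.txt -db certificates.db
    cloudrecon retr -db certificates.db -cn "example"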


    ✇ KitPloit - PenTest Tools!

    WebCopilot - An Automation Tool That Enumerates Subdomains Then Filters Out Xss, Sqli, Open Redirect, Lfi, Ssrf And Rce Parameters And Then Scans For Vulnerabilities

    By: Zion3R — January 10th 2024 at 11:30


    WebCopilot is an automation tool designed to enumerate subdomains of the target and detect bugs using different open-source tools.

    The script first enumerates all the subdomains of the given target domain using assetfinder, sublist3r, subfinder, amass, findomain, hackertarget, riddler and crt.sh, then does active subdomain enumeration using gobuster with a SecLists wordlist. It then filters out the live subdomains using dnsx, extracts titles of the subdomains using httpx, and scans for subdomain takeover using subjack. Next it uses gauplus and waybackurls to crawl all the endpoints of the given subdomains, and uses gf patterns to filter out xss, lfi, ssrf, sqli, open redirect and rce parameters from those subdomains. It then scans for vulnerabilities on the subdomains using different open-source tools (like kxss, dalfox, openredirex, nuclei, etc.). Finally, it prints the results of the scan and saves all output in a specified directory.



    Usage

    g!2m0:~ webcopilot -h
                 
    ──────▄▀▄─────▄▀▄
    ─────▄█░░▀▀▀▀▀░░█▄
    ─▄▄──█░░░░░░░░░░░█──▄▄
    █▄▄█─█░░▀░░┬░░▀░░█─█▄▄█
    ██╗░░░░░░░██╗███████╗██████╗░░█████╗░░█████╗░██████╗░██╗██╗░░░░░░█████╗░████████╗
    ░██║░░██╗░░██║██╔════╝██╔══██╗██╔══██╗██╔══██╗██╔══██╗██║██║░░░░░██╔══██╗╚══██╔══╝
    ░╚██╗████╗██╔╝█████╗░░██████╦╝██║░░╚═╝██║░░██║██████╔╝██║██║░░░░░██║░░██║░░░██║░░░
    ░░████╔═████║░██╔══╝░░██╔══██╗██║░░██╗██║░░██║██╔═══╝░██║██║ ░░░░██║░░██║░░░██║░░░
    ░░╚██╔╝░╚██╔╝░███████╗██████╦╝╚█████╔╝╚█████╔╝██║░░░░░██║███████╗╚█████╔╝░░░██║░░░
    ░░░╚═╝░░░╚═╝░░╚══════╝╚═════╝░░╚════╝ ░╚════╝░╚═╝░░░░░╚═╝╚══════╝░╚════╝░░░░╚═╝░░░
    [●] @h4r5h1t.hrs | G!2m0

    Usage:
    webcopilot -d <target>
    webcopilot -d <target> -s
    webcopilot [-d target] [-o output destination] [-t threads] [-b blind server URL] [-x exclude domains]

    Flags:
    -d Add your target [Required]
    -o To save outputs in folder [Default: domain.com]
    -t Number of threads [Default: 100]
    -b Add your server for BXSS [Default: False]
    -x Exclude out of scope domains [Default: False]
    -s Run only Subdomain Enumeration [Default: False]
    -h Show this help message

    Example: webcopilot -d domain.com -o domain -t 333 -x exclude.txt -b testServer.xss
    Use https://xsshunter.com/ or https://interact.projectdiscovery.io/ to get your server

    Installing WebCopilot

    WebCopilot requires git to install successfully. Run the following command as root to install webcopilot:

    git clone https://github.com/h4r5h1t/webcopilot && cd webcopilot/ && chmod +x webcopilot install.sh && mv webcopilot /usr/bin/ && ./install.sh

    Tools Used:

    SubFinder, Sublist3r, Findomain, gf, OpenRedireX, dnsx, sqlmap, gobuster, assetfinder, httpx, kxss, qsreplace, Nuclei, dalfox, anew, jq, aquatone, urldedupe, Amass, gauplus, waybackurls, crlfuzz

    Running WebCopilot

    To run the tool on a target, just use the following command.

    g!2m0:~ webcopilot -d bugcrowd.com

    The -o flag can be used to specify an output directory.

    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd

    The -s flag can be used to run only subdomain enumeration (active + passive, also getting titles & screenshots).

    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -s 

    The -t flag can be used to add threads to your scan for faster results.

    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 

    The -b flag can be used for blind XSS (OOB); you can get your server from xsshunter or interact.

    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -b testServer.xss

    The -x flag can be used to exclude out-of-scope domains.

    g!2m0:~ echo out.bugcrowd.com > excludeDomain.txt
    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -x excludeDomain.txt -b testServer.xss

    Example

    The default options look like this:

    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd
                                    ──────▄▀▄─────▄▀▄
    ─────▄█░░▀▀▀▀▀░░█▄
    ─▄▄──█░░░░░░░░░░░█──▄▄
    █▄▄█─█░░▀░░┬░░▀░░█─█▄▄█
    ██╗░░░░░░░██╗███████╗██████╗░░█████╗░░█████╗░██████╗░██╗██╗░░░░░░█████╗░████████╗
    ░██║░░██╗░░██║██╔════╝██╔══██╗██╔══██╗██╔══██╗██╔══██╗██║██║░░░░░██╔══██╗╚══██╔══╝
    ░╚██╗████╗██╔╝█████╗░░██████╦╝██║░░╚═╝██║░░██║██████╔╝██║██║░░░░░██║░░██║░░░██║░░░
    ░░████╔═████║░██╔══╝░░██╔══██╗██║░░██╗██║░░██║██╔═══╝░██║██║░░░░░██║░░██║░░░██║░░░
    ░░╚██╔╝░╚██╔╝░███████╗██████╦╝╚█████╔╝╚█████╔╝██║░░░░░██║███████╗╚█████╔╝░░░██║░░░
    ░░░╚═╝░░░╚═╝░░╚══════╝╚═════╝░░╚════╝░░╚════╝░╚═╝░░░░░╚═╝╚══════╝░╚════╝░░░░╚═╝░░░
    [●] @h4r5h1t.hrs | G!2m0


    [❌] Warning: Use with caution. You are responsible for your own actions.
    [❌] Developers assume no liability and are not responsible for any misuse or damage caused by this tool.


    Target: bugcrowd.com
    Output: /home/gizmo/targets/bugcrowd
    Threads: 100
    Server: False
    Exclude: False
    Mode: Running all Enumeration
    Time: 30-08-2021 15:10:00

    [!] Please wait while scanning...

    [●] Subdomain Scanning is in progress: Scanning subdomains of bugcrowd.com
    [●] Subdomain Scanned - [assetfinder✔] Subdomain Found: 34
    [●] Subdomain Scanned - [sublist3r✔] Subdomain Found: 29
    [●] Subdomain Scanned - [subfinder✔] Subdomain Found: 54
    [●] Subdomain Scanned - [amass✔] Subdomain Found: 43
    [●] Subdomain Scanned - [findomain✔] Subdomain Found: 27

    [●] Active Subdomain Scanning is in progress:
    [!] Please be patient. This may take a while...
    [●] Active Subdomain Scanned - [gobuster✔] Subdomain Found: 11
    [●] Active Subdomain Scanned - [amass✔] Subdomain Found: 0

    [●] Subdomain Scanning: Filtering out of scope subdomains
    [●] Subdomain Scanning: Filtering alive subdomains
    [●] Subdomain Scanning: Getting titles of valid subdomains
    [●] Visual inspection of subdomains is completed. Check: /subdomains/aquatone/

    [●] Scanning Completed for Subdomains of bugcrowd.com Total: 43 | Alive: 30

    [●] Endpoints Scanning Completed for Subdomains of bugcrowd.com Total: 11032
    [●] Vulnerabilities Scanning is in progress: Getting all vulnerabilities of bugcrowd.com
    [●] Vulnerabilities Scanned - [XSS✔] Found: 0
    [●] Vulnerabilities Scanned - [SQLi✔] Found: 0
    [●] Vulnerabilities Scanned - [LFI✔] Found: 0
    [●] Vulnerabilities Scanned - [CRLF✔] Found: 0
    [●] Vulnerabilities Scanned - [SSRF✔] Found: 0
    [●] Vulnerabilities Scanned - [Sensitive Data✔] Found: 0
    [●] Vulnerabilities Scanned - [Open redirect✔] Found: 0
    [●] Vulnerabilities Scanned - [Subdomain Takeover✔] Found: 0
    [●] Vulnerabilities Scanned - [Nuclei✔] Found: 0
    [●] Vulnerabilities Scanning Completed for Subdomains of bugcrowd.com Check: /vulnerabilities/


    ▒█▀▀█ █▀▀ █▀▀ █░░█ █░░ ▀▀█▀▀
    ▒█▄▄▀ █▀▀ ▀▀█ █░░█ █░░ ░░█░░
    ▒█░▒█ ▀▀▀ ▀▀▀ ░▀▀▀ ▀▀▀ ░░▀░░

    [+] Subdomains of bugcrowd.com
    [+] Subdomains Found: 0
    [+] Subdomains Alive: 0
    [+] Endpoints: 11032
    [+] XSS: 0
    [+] SQLi: 0
    [+] Open Redirect: 0
    [+] SSRF: 0
    [+] CRLF: 0
    [+] LFI: 0
    [+] Sensitive Data: 0
    [+] Subdomain Takeover: 0
    [+] Nuclei: 0

    Acknowledgement

    WebCopilot is inspired by Garud & Pinaak by ROX4R.

    Thanks to the authors of the tools & wordlists used in this script.

    @aboul3la @tomnomnom @lc @hahwul @projectdiscovery @maurosoria @shelld3v @devanshbatham @michenriksen @defparam @projectdiscovery @bp0lr @ameenmaali @sqlmapproject @dwisiswant0 @OWASP @OJ @Findomain @danielmiessler @1ndianl33t @ROX4R

    Warning: Developers assume no liability and are not responsible for any misuse or damage caused by this tool. Please use it with caution, because you are responsible for your own actions.


    ✇ The Hacker News

    CERT-UA Uncovers New Malware Wave Distributing OCEANMAP, MASEPIE, STEELHOOK

    By: Newsroom — December 29th 2023 at 10:41
    The Computer Emergency Response Team of Ukraine (CERT-UA) has warned of a new phishing campaign orchestrated by the Russia-linked APT28 group to deploy previously undocumented malware such as OCEANMAP, MASEPIE, and STEELHOOK to harvest sensitive information. The activity, which was detected by the agency between December 15 and 25, 2023, targeted Ukrainian
    ✇ KitPloit - PenTest Tools!

    PySQLRecon - Offensive MSSQL Toolkit Written In Python, Based Off SQLRecon

    By: Zion3R — December 19th 2023 at 11:30


    PySQLRecon is a Python port of the awesome SQLRecon project by @sanjivkawa. See the commands section for a list of capabilities.


    Install

    PySQLRecon can be installed with pip3 install pysqlrecon or by cloning this repository and running pip3 install .

    Commands

    All of the main modules from SQLRecon have equivalent commands. Commands noted with [PRIV] require elevated privileges or sysadmin rights to run. Alternatively, commands marked with [NORM] can likely be run by normal users and do not require elevated privileges.

    Support for impersonation ([I]) or execution on linked servers ([L]) is denoted at the end of the command description.

    adsi         [PRIV] Obtain ADSI creds from ADSI linked server [I,L]
    agentcmd     [PRIV] Execute a system command using agent jobs [I,L]
    agentstatus  [PRIV] Enumerate SQL agent status and jobs [I,L]
    checkrpc     [NORM] Enumerate RPC status of linked servers [I,L]
    clr          [PRIV] Load and execute .NET assembly in a stored procedure [I,L]
    columns      [NORM] Enumerate columns within a table [I,L]
    databases    [NORM] Enumerate databases on a server [I,L]
    disableclr   [PRIV] Disable CLR integration [I,L]
    disableole   [PRIV] Disable OLE automation procedures [I,L]
    disablerpc   [PRIV] Disable RPC and RPC Out on linked server [I]
    disablexp    [PRIV] Disable xp_cmdshell [I,L]
    enableclr    [PRIV] Enable CLR integration [I,L]
    enableole    [PRIV] Enable OLE automation procedures [I,L]
    enablerpc    [PRIV] Enable RPC and RPC Out on linked server [I]
    enablexp     [PRIV] Enable xp_cmdshell [I,L]
    impersonate  [NORM] Enumerate users that can be impersonated
    info         [NORM] Gather information about the SQL server
    links        [NORM] Enumerate linked servers [I,L]
    olecmd       [PRIV] Execute a system command using OLE automation procedures [I,L]
    query        [NORM] Execute a custom SQL query [I,L]
    rows         [NORM] Get the count of rows in a table [I,L]
    search       [NORM] Search a table for a column name [I,L]
    smb          [NORM] Coerce NetNTLM auth via xp_dirtree [I,L]
    tables       [NORM] Enumerate tables within a database [I,L]
    users        [NORM] Enumerate users with database access [I,L]
    whoami       [NORM] Gather logged in user, mapped user and roles [I,L]
    xpcmd        [PRIV] Execute a system command using xp_cmdshell [I,L]

    Usage

    PySQLRecon has global options (available to any command), with some commands introducing additional flags. All global options must be specified before the command name:

    pysqlrecon [GLOBAL_OPTS] COMMAND [COMMAND_OPTS]

    View global options:

    pysqlrecon --help

    View command specific options:

    pysqlrecon [GLOBAL_OPTS] COMMAND --help

    Change the database being authenticated to, or the one used in certain PySQLRecon commands (query, tables, columns, rows), with the --database flag.

    Target execution of a PySQLRecon command on a linked server (instead of the SQL server being authenticated to) using the --link flag.

    Impersonate a user account while running a PySQLRecon command with the --impersonate flag.

    --link and --impersonate are incompatible.

    Development

    pysqlrecon uses Poetry to manage dependencies. Install from source and set up for development with:

    git clone https://github.com/tw1sm/pysqlrecon
    cd pysqlrecon
    poetry install
    poetry run pysqlrecon --help

    Adding a Command

    PySQLRecon is easily extensible - see the template and instructions in resources.

    TODO

    • Add SQLRecon SCCM commands
    • Add Azure SQL DB support?




    ✇ The Hacker News

    Russian SVR-Linked APT29 Targets JetBrains TeamCity Servers in Ongoing Attacks

    By: Newsroom — December 14th 2023 at 10:32
    Threat actors affiliated with the Russian Foreign Intelligence Service (SVR) have targeted unpatched JetBrains TeamCity servers in widespread attacks since September 2023. The activity has been tied to a nation-state group known as APT29, which is also tracked as BlueBravo, Cloaked Ursa, Cozy Bear, Midnight Blizzard (formerly Nobelium), and The Dukes. It's notable for the supply chain
    ✇ KitPloit - PenTest Tools!

    Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

    By: Zion3R — December 5th 2023 at 11:30


    Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic", with enough flexibility to capture false positives that still provided offensive value.

    Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:

    • Workspaces
    • Collections
    • Requests
    • Users
    • Teams

    Installation

    python3 -m pip install porch-pirate

    Using the client

    The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords that can typically maximize results. These methodologies can be found on our blog: Plundering Postman with Porch Pirate.

    Porch Pirate supports the following arguments, which can be applied to collections, workspaces, or users:

    • --globals
    • --collections
    • --requests
    • --urls
    • --dump
    • --raw
    • --curl

    Simple Search

    porch-pirate -s "coca-cola.com"

    Get Workspace Globals

    By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

    Dump Workspace

    When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

    Automatic Search and Globals Extraction

    Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

    porch-pirate -s "shopify" --globals

    Automatic Search Dump

    Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

    porch-pirate -s "coca-cola.com" --dump

    Extract URLs from Workspace

    A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls
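
    For example (assuming results are printed to stdout), redirect them into a file for your fuzzer of choice:

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls > urls.txt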

    Automatic URL Extraction

    Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

    porch-pirate -s "coca-cola.com" --urls

    Show Collections in a Workspace

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

    Show Workspace Requests

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

    Show raw JSON

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

    Show Entity Information

    porch-pirate -w WORKSPACE_ID
    porch-pirate -c COLLECTION_ID
    porch-pirate -r REQUEST_ID
    porch-pirate -u USERNAME/TEAMNAME

    Convert Request to Curl

    Porch Pirate can build curl requests when provided with a request ID for easier testing.

    porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

    Use a proxy

    porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

    Using as a library

    Searching

    from porchpirate import porchpirate

    p = porchpirate()
    print(p.search('coca-cola.com'))

    Get Workspace Collections

    p = porchpirate()
    print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Dumping a Workspace

    from porchpirate import porchpirate
    import json

    p = porchpirate()
    collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
    for collection in collections['data']:
        requests = collection['requests']
        for r in requests:
            request_data = p.request(r['id'])
            print(request_data)

    Grabbing a Workspace's Globals

    p = porchpirate()
    print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Other Examples

    Other library usage examples can be located in the examples directory, which contains the following examples:

    • dump_workspace.py
    • format_search_results.py
    • format_workspace_collections.py
    • format_workspace_globals.py
    • get_collection.py
    • get_collections.py
    • get_profile.py
    • get_request.py
    • get_statistics.py
    • get_team.py
    • get_user.py
    • get_workspace.py
    • recursive_globals_from_search.py
    • request_to_curl.py
    • search.py
    • search_by_page.py
    • workspace_collections.py


    ✇ The Hacker News

    New Threat Actor 'AeroBlade' Emerges in Espionage Attack on U.S. Aerospace

    By: Newsroom — December 5th 2023 at 07:55
    A previously undocumented threat actor has been linked to a cyber attack targeting an aerospace organization in the U.S. as part of what's suspected to be a cyber espionage mission. The BlackBerry Threat Research and Intelligence team is tracking the activity cluster as AeroBlade. Its origin is currently unknown and it's not clear if the attack was successful. "The actor used spear-phishing
    ✇ KitPloit - PenTest Tools!

    OSINT-Framework - OSINT Framework

    By: Zion3R — November 25th 2023 at 11:30


    OSINT framework focused on gathering information from free tools or resources. The intention is to help people find free OSINT resources. Some of the sites included might require registration or offer more data for $$$, but you should be able to get at least a portion of the available information for no cost.

    I originally created this framework with an information security point of view. Since then, the response from other fields and disciplines has been incredible. I would love to be able to include any other OSINT resources, especially from fields outside of infosec. Please let me know about anything that might be missing!

    Please visit the framework at the link below and good hunting!


    https://osintframework.com

    Legend

    (T) - Indicates a link to a tool that must be installed and run locally
    (D) - Google Dork, for more information: Google Hacking
    (R) - Requires registration
    (M) - Indicates a URL that contains the search term and the URL itself must be edited manually

    For Update Notifications

    Follow me on Twitter: @jnordine - https://twitter.com/jnordine
    Watch or star the project on Github: https://github.com/lockfale/osint-framework

    Suggestions, Comments, Feedback

    Feedback or new tool suggestions are extremely welcome! Please feel free to submit a pull request or open an issue on github or reach out on Twitter.

    Contribute with a GitHub Pull Request

    For new resources, please ensure that the site is available for public and free use.

    1. Update the arf.json file in the format shown below. If this isn't the first entry for a folder, add a comma to the last closing brace of the previous entry.
    2. Submit a pull request!
    3. Thank you!

    OSINT Framework Website

    https://osintframework.com

    Happy Hunting!



    ✇ KitPloit - PenTest Tools!

    Goblob - A Fast Enumeration Tool For Publicly Exposed Azure Storage Blobs

    By: Zion3R — November 15th 2023 at 11:30


    Goblob is a lightweight and fast enumeration tool designed to aid in the discovery of sensitive information exposed publicly in Azure blobs, which can be useful for various research purposes such as vulnerability assessments, penetration testing, and reconnaissance.

    Warning. Goblob will issue individual goroutines for each container name to check in each storage account, limited only by the maximum number of concurrent goroutines specified in the -goroutines flag. This implementation can exhaust bandwidth pretty quickly in most cases with the default wordlist, or potentially cost you a lot of money if you're using the tool in a cloud environment. Make sure you understand what you are doing before running the tool.


    Installation

    go install github.com/Macmod/goblob@latest

    Usage

    To use goblob simply run the following command:

    $ ./goblob <storageaccountname>

    Where <storageaccountname> is the target storage account to enumerate public Azure blob storage URLs on.

    You can also specify a list of storage account names to check:

    $ ./goblob -accounts accounts.txt

    By default, the tool will use a list of common Azure Blob Storage container names to construct potential URLs. However, you can also specify a custom list of container names using the -containers option. For example:

    $ ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt

    The tool also supports outputting the results to a file using the -output option:

    $ ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt -output results.txt

    If you want to provide accounts to test via stdin you can also omit -accounts (or the account name) entirely:

    $ cat accounts.txt | ./goblob

    Wordlists

    Goblob comes bundled with basic wordlists that can be used with the -containers option.

    Optional Flags

    Goblob provides several flags that can be tuned in order to improve the enumeration process:

    • -goroutines=N - Maximum number of concurrent goroutines to allow (default: 5000).
    • -blobs=true - Report the URL of each blob instead of the URL of the containers (default: false).
    • -verbose=N - Set verbosity level (default: 1, min: 0, max: 3).
    • -maxpages=N - Maximum number of container pages to traverse looking for blobs (default: 20; set to -1 to disable the limit, or to 0 to avoid listing blobs at all and just check whether the container is public)
    • -timeout=N - Timeout for HTTP requests (seconds, default: 90)
    • -maxidleconns=N - MaxIdleConns transport parameter for HTTP client (default: 100)
    • -maxidleconnsperhost=N - MaxIdleConnsPerHost transport parameter for HTTP client (default: 10)
    • -maxconnsperhost=N - MaxConnsPerHost transport parameter for HTTP client (default: 0)
    • -skipssl=true - Skip SSL verification (default: false)
    • -invertsearch=true - Enumerate accounts for each container instead of containers for each account (default: false)

    For instance, if you just want to find publicly exposed containers using large lists of storage accounts and container names, you should use -maxpages=0 to prevent the goroutines from paginating the results. Then run it again on the set of results you found with -blobs=true and -maxpages=-1 to actually get the URLs of the blobs.

    If, on the other hand, you want to test a small list of very popular container names against a large set of storage accounts, you might want to try -invertsearch=true with -maxpages=0, in order to see the public accounts for each container name instead of the container names for each storage account.

    You may also want to try changing -goroutines, -timeout, -maxidleconns, -maxidleconnsperhost, -maxconnsperhost and -skipssl in order to make the best use of your bandwidth and find results faster.
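
    To make the two-pass approach described above concrete, here is a hedged sketch (the file names public-containers.txt, hit-accounts.txt and popular-containers.txt are hypothetical; derive the second from the first pass's output):

    # Pass 1: only check container visibility (no blob listing), with tamed concurrency
    $ ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt -goroutines=500 -maxpages=0 -output public-containers.txt

    # Pass 2: re-run against the accounts that produced hits, now listing actual blob URLs
    $ ./goblob -accounts hit-accounts.txt -blobs=true -maxpages=-1 -output blob-urls.txt

    # Variant: test a short list of popular container names against many accounts
    $ ./goblob -accounts accounts.txt -containers popular-containers.txt -invertsearch=true -maxpages=0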

    Experiment with the flags to find what works best for you ;-)

    Example

    [Demo: a fast enumeration of publicly exposed Azure Storage blobs]

    Contributing

    Contributions are welcome by opening an issue or by submitting a pull request.

    TODO

    • Check blob domain for NXDOMAIN before trying wordlist to save bandwidth (maybe)
    • Improve default parameters for better performance

    Wordcloud

    An interesting visualization of popular container names found in my experiments with the tool:


    If you want to know more about my experiments and the subject in general, take a look at my article:



    ☐ ☆ ✇ KitPloit - PenTest Tools!

    CloudPulse - AWS Cloud Landscape Search Engine

    By: Zion3R — October 28th 2023 at 11:30


    During the reconnaissance phase, an attacker searches for any information about their target to build a profile that will later help them identify possible ways to get into an organization.
    CloudPulse is a powerful tool that simplifies and enhances the analysis of SSL certificate data. It leverages the extensive repository of SSL certificates obtained from the AWS EC2 machines available at Trickest Cloud. With CloudPulse, security researchers can efficiently explore SSL certificate details, uncover potential vulnerabilities, and gather valuable insights for a variety of security-related tasks.


    It simplifies security assessments with a user-friendly interface and allows you to effortlessly find a company's assets on the AWS cloud:

    • IPs
    • Subdomains
    • Domains associated with a target
    • Organization names
    • Origin IPs

    1- Download CloudPulse:

    git clone https://github.com/yousseflahouifi/CloudPulse
    cd CloudPulse/

    2- Run docker compose:

    docker-compose up -d

    3- Run the script.py script:

    docker-compose exec web python script.py

    4- Now go to http://<host>:8000/search and enjoy the search engine

    Alternatively, to set CloudPulse up manually without Docker Compose:

    1- Download CloudPulse:

    git clone https://github.com/yousseflahouifi/CloudPulse
    cd CloudPulse/

    2- Set up a virtual environment:

    python3 -m venv myenv
    source myenv/bin/activate

    3- Install the packages from the requirements.txt file:

    pip install -r requirements.txt

    4- Run an instance of Elasticsearch using Docker:

    docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" elasticsearch:6.6.1

    5- Update script.py and the settings file to point to the host 'localhost':

    #script.py
    es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

    #se/settings.py
    ELASTICSEARCH_DSL = {
        'default': {
            'hosts': 'localhost:9200'
        },
    }

    6- Run script.py to index the data in Elasticsearch:

    python script.py

    7- Run the app:

    python manage.py runserver 0:8000
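
    Before searching, you can optionally confirm that the data was indexed; this quick check assumes the default single-node Elasticsearch from step 4 and uses its standard _cat API:

    curl -s 'http://localhost:9200/_cat/indices?v'

    A non-zero docs.count for the index created by script.py indicates the indexing step completed.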

    Included in the CloudPulse repository is a sample data.csv file containing close to 4,000 records, which provides a glimpse of the tool's capabilities. For the full dataset, visit the Trickest Cloud repository, clone the data, and update the data.csv file (the full dataset contains close to 9 million records).

    As an example, searching for .mil data gives:

    Searching for tesla as an example gives:

    CloudPulse heavily depends on the data.csv file, which is a sample dataset extracted from the larger collection maintained by Trickest. While the sample dataset provides valuable insights, the tool's full potential is realized when used in conjunction with the complete dataset, which is accessible in the Trickest repository here.
    Users are encouraged to refer to the Trickest dataset for a more comprehensive and up-to-date analysis.



    ☐ ☆ ✇ KitPloit - PenTest Tools!

    DakshSCRA - Source Code Review Assist

    By: Zion3R — October 9th 2023 at 11:30


    Daksh SCRA (Source Code Review Assist) is a tool built to enhance the efficiency of the source code review process, providing a well-structured and organized approach for code reviewers.

    Rather than indiscriminately flagging everything as a potential issue, Daksh SCRA promotes thoughtful analysis, urging the investigation and confirmation of potential problems. This approach mitigates the scramble to tag every potential concern as a bug, cutting back on the confusion and wasted time spent on false positives.

    What sets Daksh SCRA apart is its emphasis on avoiding unnecessary bug tagging. Unlike conventional methods, it advocates for thorough investigation and confirmation of potential issues before tagging them as bugs. This approach helps mitigate the issue of false positives, which often consume valuable time and resources, thereby fostering a more productive and efficient code review process.


    Debut

    Daksh SCRA was initially introduced during a source code review training session I conducted at Black Hat USA 2022 (August 6 - 9), where it was subtly presented to a specific audience. However, this introduction was carried out with a low-profile approach, avoiding any major announcements.

    While this tool was quietly published on GitHub after the 2022 training, its official public debut took place at Black Hat USA 2023 in Las Vegas.

    Features and Functionalities

    Distinctive Features (Multiple World’s First)

    • Identifies Areas of Interest in Source Code: Encourages focused investigation and confirmation rather than indiscriminately labeling everything as a bug.

    • Identifies Areas of Interest in File Paths (World’s First): Recognises patterns in file paths to pinpoint relevant sections for review.

    • Software-Level Reconnaissance to Identify Technologies Utilised: Identifies project technologies, enabling code reviewers to conduct precise scans with appropriate rules.

    • Automated Scientific Effort Estimation for Code Review (World’s First): Provides a measurable approach for estimating the effort required for a code review process.

    Although this tool has progressed beyond its early stages, it has reached a functional state that is quite usable and delivers on its promised capabilities. Nevertheless, active enhancements are currently underway, and there are multiple new features and improvements expected to be added in the upcoming months.

    Additionally, the tool offers the following functionalities:

    • Options to use platform-specific rules for finding areas of interest
    • Options to extend or add new rules for any new or existing languages
    • Generates reports in text, HTML and PDF formats for inspection

    Refer to the wiki for the tool setup and usage details - https://github.com/coffeeandsecurity/DakshSCRA/wiki

    Feel free to contribute towards updating or adding new rules and future development.

    If you find any bugs, report them to d3basis.m0hanty@gmail.com.

    Tool Setup

    Pre-requisites

    Python3 and all the libraries listed in requirements.txt

    Setting up environment to run this tool

    1. Setup a virtual environment

    $ pip install virtualenv

    $ virtualenv -p python3 {name-of-virtual-env} // Create a virtualenv
    Example: virtualenv -p python3 venv

    $ source {name-of-virtual-env}/bin/activate // To activate virtual environment you just created
    Example: source venv/bin/activate

    After running the activate command you should see the name of your virtual env at the beginning of your terminal like this: (venv) $

    2. Ensure all required libraries are installed within the virtual environment

    You must run the below command after activating the virtual environment as mentioned in the previous steps.

    pip install -r requirements.txt

    Once the above step successfully installs all the required libraries, refer to the following tool usage commands to run the tool.

    Tool Usage

    $ python3 dakshscra.py -h // To view available options and arguments

    usage: dakshscra.py [-h] [-r RULE_FILE] [-f FILE_TYPES] [-v] [-t TARGET_DIR] [-l {R,RF}] [-recon] [-estimate]

    options:
    -h, --help show this help message and exit
    -r RULE_FILE Specify platform specific rule name
    -f FILE_TYPES Specify file types to scan
    -v Specify verbosity level {'-v', '-vv', '-vvv'}
    -t TARGET_DIR Specify target directory path
    -l {R,RF}, --list {R,RF}
    List rules [R] OR rules and filetypes [RF]
    -recon Detects platform, framework and programming language used
    -estimate Estimate efforts required for code review

    Example Usage

    $ python3 dakshscra.py // To view tool usage along with examples

    Examples:
    # '-f' is optional. If not specified, it will default to the corresponding filetypes of the selected rule.
    dakshscra.py -r php -t /source_dir_path

    # To override default settings, other filetypes can be specified with the '-f' option.
    dakshscra.py -r php -f dotnet -t /path_to_source_dir
    dakshscra.py -r php -f custom -t /path_to_source_dir

    # Perform reconnaissance and rule-based scanning if '-recon' is used with the '-r' option.
    dakshscra.py -recon -r php -t /path_to_source_dir

    # Perform only reconnaissance if '-recon' is used without the '-r' option.
    dakshscra.py -recon -t /path_to_source_dir

    # Verbosity: '-v' is default, '-vvv' will display all rule checks within each rule category.
    dakshscra.py -r php -vv -t /path_to_source_dir


    Supported RULE_FILE: dotnet, java, php, javascript
    Supported FILE_TYPES: dotnet, php, java, custom, allfiles
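
    For example, before picking a rule for a scan, you can list what is available with the documented '-l' option (a quick sketch):

    $ python3 dakshscra.py -l R      // List supported rules only
    $ python3 dakshscra.py -l RF     // List rules along with their filetypes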

    Reports

    The tool generates reports in three formats: HTML, PDF, and TEXT. Although the HTML and PDF reports are still being improved, they are currently in a reasonably good state. With each subsequent iteration, these reports will continue to be refined and improved even further.

    Scanning (Areas of Security Concerns) Report

    HTML Report:
    • DakshSCRA/reports/html/report.html
    PDF Report:
    • DakshSCRA/reports/html/report.pdf
    RAW TEXT Based Reports:
    • Areas of Interest - Identified Patterns : DakshSCRA/reports/text/areas_of_interest.txt
    • Areas of Interest - Project Files: DakshSCRA/reports/text/filepaths_aoi.txt
    • Identified Project Files: DakshSCRA/runtime/filepaths.txt

    Reconnaissance (Recon) Report

    • Reconnaissance Summary: /reports/text/recon.txt

    Note: Currently, the reconnaissance report is created in a text format. However, in upcoming releases, the plan is to incorporate it into the vulnerability scanning report, which will be available in both HTML and PDF formats.

    Code Review Effort Estimation Report

    • Effort estimation report: /reports/html/estimation.html

    Note: At present, the effort estimation for the source code review is in its early stages. It is considered experimental and will be developed and refined through several iterations. Improvements will be made over multiple releases, as the formula and the concept are new and require time to be honed to achieve accuracy or reasonable estimation.

    Currently, the report is generated in HTML format. However, in future releases, there are plans to also provide it in PDF format.



    ☐ ☆ ✇ KitPloit - PenTest Tools!

    Nodesub - Command-Line Tool For Finding Subdomains In Bug Bounty Programs

    By: Zion3R — October 3rd 2023 at 11:30


    Nodesub is a command-line tool for finding subdomains in bug bounty programs. It supports various subdomain enumeration techniques and provides flexible options for customization.


    Features

    • Perform subdomain enumeration using CIDR notation (Support input list).
    • Perform subdomain enumeration using ASN (Support input list).
    • Perform subdomain enumeration using a list of domains.

    Installation

    To install Nodesub, use the following command:

    npm install -g nodesub

    NOTE:

    • Edit the file ~/.config/nodesub/config.ini

    Usage

    nodesub -h

    This will display help for the tool. Here are all the switches it supports.

    Examples
    • Enumerate subdomains for a single domain:

       nodesub -u example.com
    • Enumerate subdomains for a list of domains from a file:

       nodesub -l domains.txt
    • Perform subdomain enumeration using CIDR:

      node nodesub.js -c 192.168.0.0/24 -o subdomains.txt

      node nodesub.js -c CIDR.txt -o subdomains.txt

    • Perform subdomain enumeration using ASN:

      node nodesub.js -a AS12345 -o subdomains.txt
      node nodesub.js -a ASN.txt -o subdomains.txt
    • Enable recursive subdomain enumeration and output the results to a JSON file:

       nodesub -u example.com -r -o output.json -f json

    Output

    The tool provides various output formats for the results, including:

    • Text (txt)
    • JSON (json)
    • CSV (csv)
    • PDF (pdf)

    The output file contains the resolved subdomains, failed resolved subdomains, or all subdomains based on the options chosen.
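
    For instance, mirroring the JSON example above, the other listed formats should be reachable by changing the -f value (the CSV invocation below is an assumption, not a documented example):

    nodesub -u example.com -r -o output.csv -f csv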



    ☐ ☆ ✇ KitPloit - PenTest Tools!

    AtlasReaper - A Command-Line Tool For Reconnaissance And Targeted Write Operations On Confluence And Jira Instances

    By: Zion3R — September 26th 2023 at 11:30

     


    AtlasReaper is a command-line tool developed for offensive security purposes, primarily focused on reconnaissance of Confluence and Jira. It also provides various features that can be helpful for tasks such as credential farming and social engineering. The tool is written in C#.


    Blog post: Sowing Chaos and Reaping Rewards in Confluence and Jira

    [AtlasReaper ASCII-art banner, art credit: @werdhaihai]

    Usage

    AtlasReaper uses commands, subcommands, and options. The format for executing commands is as follows:

    .\AtlasReaper.exe [command] [subcommand] [options]

    Replace [command], [subcommand], and [options] with the appropriate values based on the action you want to perform. For more information about each command or subcommand, use the -h or --help option.

    Below is a list of available commands and subcommands:

    Commands

    Each command has subcommands for interacting with the specific product.

    • confluence
    • jira

    Subcommands

    Confluence

    • confluence attach - Attach a file to a page.
    • confluence download - Download an attachment.
    • confluence embed - Embed a 1x1 pixel image to perform farming attacks.
    • confluence link - Add a link to a page.
    • confluence listattachments - List attachments.
    • confluence listpages - List pages in Confluence.
    • confluence listspaces - List spaces in Confluence.
    • confluence search - Search Confluence.

    Jira

    • jira addcomment - Add a comment to an issue.
    • jira attach - Attach a file to an issue.
    • jira createissue - Create a new issue.
    • jira download - Download attachment(s) from an issue.
    • jira listattachments - List attachments on an issue.
    • jira listissues - List issues in Jira.
    • jira listprojects - List projects in Jira.
    • jira listusers - List Atlassian users.
    • jira searchissues - Search issues in Jira.

    Common Commands

    • help - Display more information on a specific command.

    Examples

    Here are a few examples of how to use AtlasReaper:

    • Search for a keyword in Confluence with wildcard search:

      .\AtlasReaper.exe confluence search --query "http*example.com*" --url $url --cookie $cookie

    • Attach a file to a page in Confluence:

      .\AtlasReaper.exe confluence attach --page-id "12345" --file "C:\path\to\file.exe" --url $url --cookie $cookie

    • Create a new issue in Jira:

      .\AtlasReaper.exe jira createissue --project "PROJ" --issue-type Task --message "I can't access this link from my host" --url $url --cookie $cookie

    Authentication

    Confluence and Jira can be configured to allow anonymous access. You can check this by omitting the -c/--cookie option from the commands.

    In the event authentication is required, you can dump cookies from a user's browser with SharpChrome or another similar tool.

    1. .\SharpChrome.exe cookies /showall

    2. Look for any cookies scoped to *.atlassian.net named cloud.session.token or tenant.session.token
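
    Putting the two steps together, a minimal session might look like the following sketch (the URL and cookie values are hypothetical placeholders; listspaces is one of the documented subcommands):

    $url = "https://example.atlassian.net"
    $cookie = "cloud.session.token=eyJhbGciOi..."   # dumped via SharpChrome
    .\AtlasReaper.exe confluence listspaces --url $url --cookie $cookie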

    Limitations

    Please note the following limitations of AtlasReaper:

    • The tool has not been thoroughly tested in all environments, so it's possible to encounter crashes or unexpected behavior. Efforts have been made to minimize these issues, but caution is advised.
    • AtlasReaper uses the cloud.session.token or tenant.session.token, which can be obtained from a user's browser. Alternatively, it can use anonymous access if permitted. (API tokens or other authentication methods are not currently supported.)
    • For write operations, the username associated with the user session token (or "anonymous") will be listed.

    Contributing

    If you encounter any issues or have suggestions for improvements, please feel free to contribute by submitting a pull request or opening an issue in the AtlasReaper repo.



    ☐ ☆ ✇ KitPloit - PenTest Tools!

    Poastal - The Email OSINT Tool

    By: Zion3R — August 25th 2023 at 12:30


    Poastal is an email OSINT tool that provides valuable information on any email address. With Poastal, you can easily input an email address and it will quickly answer several questions, providing you with crucial information.


    Features

    • Determine the name of the person who has the email.
    • Check if the email is deliverable or not.
    • Find out if the email is disposable or not.
    • Identify if the email is considered spam.
    • Check if the email is registered on popular platforms such as Facebook, Twitter, Snapchat, Parler, Rumble, MeWe, Imgur, Adobe, Wordpress, and Duolingo.

    Usage

    Make sure you have the requirements installed.

    pip install -r requirements.txt

    Navigate to the backend folder and run poastal.py to start the Flask app, which listens on port 8080.

    python poastal.py

    Open index.html in the root directory to use the UI.

    Enter an email address and see the results.

    Test with example@gmail.com.

    There's a new GitHub module: if you open up github.py, you'll see a section that asks you to replace the placeholder value with your own API key.

    Feedback

    I hope you find Poastal to be a valuable tool for your OSINT investigations. If you have any feedback or suggestions on how we can improve Poastal, please let me know. I'm always looking for ways to improve this tool to better serve the OSINT community.



    ☐ ☆ ✇ KitPloit - PenTest Tools!

    Xsubfind3R - A CLI Utility To Find Domain'S Known Subdomains From Curated Passive Online Sources

    By: Zion3R — August 19th 2023 at 12:30


    xsubfind3r is a command-line interface (CLI) utility to find domain's known subdomains from curated passive online sources.


    Features

    • Fetches domains from curated passive sources to maximize results.

    • Supports stdin and stdout for easy integration into workflows.

    • Cross-Platform (Windows, Linux & macOS).

    Installation

    Install release binaries (Without Go Installed)

    Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

    • ...with wget:

       wget https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
    • ...or, with curl:

       curl -OL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz

    ...then, extract the binary:

    tar xf xsubfind3r-<version>-linux-amd64.tar.gz

    TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

    curl -sL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz | tar -xzv

    NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xsubfind3r executable.

    ...move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

    sudo mv xsubfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

    Install source (With Go Installed)

    Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

    go install ...

    go install -v github.com/hueristiq/xsubfind3r/cmd/xsubfind3r@latest

    go build ... the development Version

    • Clone the repository

       git clone https://github.com/hueristiq/xsubfind3r.git 
    • Build the utility

       cd xsubfind3r/cmd/xsubfind3r && \
      go build .
    • Move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

       sudo mv xsubfind3r /usr/local/bin/

      NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

    NOTE: While the development version is a good way to take a peek at xsubfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

    Post Installation

    xsubfind3r will work right after installation. However, BeVigil, Chaos, Fullhunt, Github, Intelligence X and Shodan require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xsubfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, one of which will be used.

    Example config.yaml:

    version: 0.3.0
    sources:
        - alienvault
        - anubis
        - bevigil
        - chaos
        - commoncrawl
        - crtsh
        - fullhunt
        - github
        - hackertarget
        - intelx
        - shodan
        - urlscan
        - wayback
    keys:
        bevigil:
            - awA5nvpKU3N8ygkZ
        chaos:
            - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39asdsd54bbc1aabb208c9acfb
        fullhunt:
            - 0d9652ce-516c-4315-b589-9b241ee6dc24
        github:
            - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
            - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
        intelx:
            - 2.intelx.io:00000000-0000-0000-0000-000000000000
        shodan:
            - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
        urlscan:
            - d4c85d34-e425-446e-d4ab-f5a3412acbe8

    Usage

    To display help message for xsubfind3r use the -h flag:

    xsubfind3r -h

    help message:


    [xsubfind3r ASCII-art banner] v0.3.0

    USAGE:
    xsubfind3r [OPTIONS]

    INPUT:
    -d, --domain string[] target domains
    -l, --list string target domains' list file path

    SOURCES:
    --sources bool list supported sources
    -u, --sources-to-use string[] comma(,) separated sources to use
    -e, --sources-to-exclude string[] comma(,) separated sources to exclude

    OPTIMIZATION:
    -t, --threads int number of threads (default: 50)

    OUTPUT:
    --no-color bool disable colored output
    -o, --output string output subdomains' file path
    -O, --output-directory string output subdomains' directory path
    -v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

    CONFIGURATION:
    -c, --configuration string configuration file path (default: ~/.hueristiq/xsubfind3r/config.yaml)
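
    A couple of hedged usage examples built only from the documented flags above:

    # enumerate a single domain
    xsubfind3r -d hackerone.com

    # enumerate a list of domains and save the results to a file
    xsubfind3r -l domains.txt -o subdomains.txt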

    Contribution

    Issues and Pull Requests are welcome! Check out the contribution guidelines.

    Licensing

    This utility is distributed under the MIT license.



    ☐ ☆ ✇ KitPloit - PenTest Tools!

    InfoHound - An OSINT Tool To Extract A Large Amount Of Data Given A Web Domain Name

    By: Zion3R — August 16th 2023 at 20:58


    During the reconnaissance phase, an attacker searches for any information about their target to build a profile that will later help them identify possible ways to get into an organization. InfoHound performs passive analysis techniques (which do not interact directly with the target) using OSINT to extract a large amount of data given a web domain name. This tool will retrieve emails, people, files, subdomains, usernames and URLs that will be later analyzed to extract even more valuable information.


    Infohound architecture

    Installation

    git clone https://github.com/xampla/InfoHound.git
    cd InfoHound/infohound
    mv infohound_config.sample.py infohound_config.py
    cd ..
    docker-compose up -d

    You must add API keys inside the infohound_config.py file

    Default modules

    InfoHound has two different types of modules: those that retrieve data and those that analyse it to extract more relevant information.

    Retrieval modules

    • Get Whois Info: Get relevant information from the Whois register.
    • Get DNS Records: This task queries the DNS.
    • Get Subdomains: This task uses the Alienvault OTX API, CRT.sh, and HackerTarget as data sources to discover cached subdomains.
    • Get Subdomains From URLs: Once some tasks have been performed, the URLs table will have a lot of entries. This task will check all the URLs to find new subdomains.
    • Get URLs: It searches all URLs cached by Wayback Machine and saves them into the database. This will later help to discover other data entities like files or subdomains.
    • Get Files from URLs: It loops through the URLs database table to find files and store them in the Files database table for later analysis. The files that will be retrieved are: doc, docx, ppt, pptx, pps, ppsx, xls, xlsx, odt, ods, odg, odp, sxw, sxc, sxi, pdf, wpd, svg, indd, rdp, ica, zip, rar
    • Find Email: It looks for emails using queries to Google and Bing.
    • Find People from Emails: Once some emails have been found, it can be useful to discover the person behind them. Also, it finds usernames from those people.
    • Find Emails From URLs: Sometimes, the discovered URLs can contain sensitive information. This task retrieves all the emails from URL paths.
    • Execute Dorks: It will execute the dorks defined in the dorks folder. Remember to group the dorks by categories (filename) to understand their objectives.
    • Find Emails From Dorks: By default, InfoHound has some dorks defined to discover emails. This task will look for them in the results obtained from dork execution.

    Analysis

    • Check Subdomains Take-Over: It performs some checks to determine if a subdomain can be taken over.
    • Check If Domain Can Be Spoofed: It checks if a domain, from the emails InfoHound has discovered, can be spoofed. This could be used by attackers to impersonate a person and send emails as him/her.
    • Get Profiles From Usernames: This task uses the discovered usernames from each person to find profiles from services or social networks where that username exists. This is performed using the Maigret tool. It is worth noting that although a profile with the same username is found, it does not necessarily mean it belongs to the person being analyzed.
    • Download All Files: Once files have been stored in the Files database table, this task will download them to the "download_files" folder.
    • Get Metadata: Using exiftool, this task will extract all the metadata from the downloaded files and save it to the database.
    • Get Emails From Metadata: As some metadata can contain emails, this task will retrieve all of them and save them to the database.
    • Get Emails From Files Content: Usually, emails can be included in corporate files, so this task will retrieve all the emails from the downloaded files' content.
    • Find Registered Services using Emails: It is possible to find services or social networks where an email has been used to create an account. This task will check if an email InfoHound has discovered has an account in Twitter, Adobe, Facebook, Imgur, Mewe, Parler, Rumble, Snapchat, Wordpress, and/or Duolingo.
    • Check Breach: This task checks the Firefox Monitor service to see if an email has been found in a data breach. Although it is a free service, it has a limitation of 10 queries per day. If a Leak-Lookup API key is set, it also checks it.

    Custom modules

    InfoHound lets you create custom modules; you just need to add your script inside infohound/tool/custom_modules. One custom module has been added as an example: it uses the Holehe tool to check if the previously discovered emails are attached to an account on sites like Twitter, Instagram, Imgur and more than 120 others.

    Inspired by



    ☐ ☆ ✇ KitPloit - PenTest Tools!

    Chaos - Origin IP Scanning Utility Developed With ChatGPT

    By: Zion3R — August 10th 2023 at 12:30


    chaos is an 'origin' IP scanner developed by RST in collaboration with ChatGPT. It is a niche utility with an intended audience of mostly penetration testers and bug hunters.

    An origin-IP is a term-of-art expression describing the final public IP destination for websites that are publicly served via 3rd parties. If you'd like to understand more about why anyone might be interested in Origin-IPs, please check out our blog post.

    chaos was rapidly prototyped from idea to functional proof-of-concept in less than 24 hours using our principles of DevOps with ChatGPT.

    usage: chaos.py [-h] -f FQDN -i IP [-a AGENT] [-C] [-D] [-j JITTER] [-o OUTPUT] [-p PORTS] [-P] [-r] [-s SLEEP] [-t TIMEOUT] [-T] [-v] [-x] 
    [ASCII-art banner: CHAOS - CHAtgpt Origin-ip Scanner]

    Origin IP Scanner developed with ChatGPT
    cha*os (n): complete disorder and confusion
    (ver: 0.9.4)


    Features

    • Threaded for performance gains
    • Real-time status updates and progress bars, nice for large scans ;)
    • Flexible user options for various scenarios & constraints
    • Dataset reduction for improved scan times
    • Easy to use CSV output

    Installation

    1. Download / clone / unzip / whatever
    2. cd path/to/chaos
    3. pip3 install -U pip setuptools virtualenv
    4. virtualenv env
    5. source env/bin/activate
    6. (env) pip3 install -U -r ./requirements.txt
    7. (env) ./chaos.py -h

    Options

    -h, --help            show this help message and exit
    -f FQDN, --fqdn FQDN Path to FQDN file (one FQDN per line)
    -i IP, --ip IP IP address(es) for HTTP requests (Comma-separated IPs, IP networks, and/or files with IP/network per line)
    -a AGENT, --agent AGENT
    User-Agent header value for requests
    -C, --csv Append CSV output to OUTPUT_FILE.csv
    -D, --dns Perform fwd/rev DNS lookups on FQDN/IP values prior to request; no impact to testing queue
    -j JITTER, --jitter JITTER
    Add a 0-N second randomized delay to the sleep value
    -o OUTPUT, --output OUTPUT
    Append console output to FILE
    -p PORTS, --ports PORTS
    Comma-separated list of TCP ports to use (default: "80,443")
    -P, --no-prep Do not pre-scan each IP/port with `GET /` using `Host: {IP:Port}` header to eliminate unresponsive hosts
    -r, --randomize Randomize(ish) the order IPs/ports are tested
    -s SLEEP, --sleep SLEEP
    Add N seconds before thread completes
    -t TIMEOUT, --timeout TIMEOUT
    Wait N seconds for an unresponsive host
    -T, --test Test-mode; don't send requests
    -v, --verbose Enable verbose output
    -x, --singlethread Single threaded execution; for 1-2 core systems; default threads=(cores-1) if cores>2

    Examples

    Localhost Testing

    Launch python HTTP server

    % python3 -u -m http.server 8001
    Serving HTTP on :: port 8001 (http://[::]:8001/) ...

    Launch ncat as HTTP on a port detected as SSL; use a loop because --keep-open can hang

    % while true; do ncat -lvp 8443 -c 'printf "HTTP/1.0 204 Plaintext OK\n\n<html></html>\n"'; done
    Ncat: Version 7.94 ( https://nmap.org/ncat )
    Ncat: Listening on [::]:8443
    Ncat: Listening on 0.0.0.0:8443

    Also launch ncat as SSL on a port that will default to HTTP detection

    % while true; do ncat --ssl -lvp 8444 -c 'printf "HTTP/1.0 202 OK\n\n<html></html>\n"'; done    
    Ncat: Version 7.94 ( https://nmap.org/ncat )
    Ncat: Generating a temporary 2048-bit RSA key. Use --ssl-key and --ssl-cert to use a permanent one.
    Ncat: SHA-1 fingerprint: 0208 1991 FA0D 65F0 608A 9DAB A793 78CB A6EC 27B8
    Ncat: Listening on [::]:8444
    Ncat: Listening on 0.0.0.0:8444

    Prepare an FQDN file:

    % cat ../test_localhost_fqdn.txt 
    www.example.com
    localhost.example.com
    localhost.local
    localhost
    notreally.arealdomain

    Prepare an IP file / list:

    % cat ../test_localhost_ips.txt 
    127.0.0.1
    127.0.0.0/29
    not_an_ip_addr
    -6.a
    =4.2
    ::1

    Run the scan

    • Note an IPv6 network added to IPs on the CLI
    • -p to specify the ports we are listening on
    • -x for single threaded run to give our ncat servers time to restart
    • -s0.2 short sleep for our ncat servers to restart
    • -t1 to timeout after 1 second
    % ./chaos.py -f ../test_localhost_fqdn.txt -i ../test_localhost_ips.txt,::1/126 -p 8001,8443,8444 -x -s0.2 -t1   
    2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost.local
    2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost
    2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: notreally.arealdomain
    2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block =4.2
    2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block -6.a
    2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block not_an_ip_addr
    2023-06-21 12:48:33 [INFO] * ---- <META> ---- *
    2023-06-21 12:48:33 [INFO] * Version: 0.9.4
    2023-06-21 12:48:33 [INFO] * FQDN file: ../test_localhost_fqdn.txt
    2023-06-21 12:48:33 [INFO] * FQDNs loaded: ['www.example.com', 'localhost.example.com']
    2023-06-21 12:48:33 [INFO] * IP input value(s): ../test_localhost_ips.txt,::1/126
    2023-06-21 12:48:33 [INFO] * Addresses parsed from IP inputs: 12
    2023-06-21 12:48:33 [INFO] * Port(s): 8001,8443,8444
    2023-06-21 12:48:33 [INFO] * Thread(s): 1
    2023-06-21 12:48:33 [INFO] * Sleep value: 0.2
    2023-06-21 12:48:33 [INFO] * Timeout: 1.0
    2023-06-21 12:48:33 [INFO] * User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36 ch4*0s/0.9.4
    2023-06-21 12:48:33 [INFO] * ---- </META> ---- *
    2023-06-21 12:48:33 [INFO] 36 unique address/port addresses for testing
    Prep Tests: 100%|████████████████████████████████████████████████████████████████████████████████| 36/36 [00:29<00:00, 1.20it/s]
    2023-06-21 12:49:03 [INFO] 9 IP/ports verified, reducing test dataset from 72 entries
    2023-06-21 12:49:03 [INFO] 18 pending tests remain after pre-testing
    2023-06-21 12:49:03 [INFO] Queuing 18 threads
    ++RCVD++ (200 OK) www.example.com @ :::8001
    ++RCVD++ (204 Plaintext OK) www.example.com @ :::8443
    ++RCVD++ (202 OK) www.example.com @ :::8444
    ++RCVD++ (200 OK) www.example.com @ ::1:8001
    ++RCVD++ (204 Plaintext OK) www.example.com @ ::1:8443
    ++RCVD++ (202 OK) www.example.com @ ::1:8444
    ++RCVD++ (200 OK) www.example.com @ 127.0.0.1:8001
    ++RCVD++ (204 Plaintext OK) www.example.com @ 127.0.0.1:8443
    ++RCVD++ (202 OK) www.example.com @ 127.0.0.1:8444
    ++RCVD++ (200 OK) localhost.example.com @ :::8001
    ++RCVD++ (204 Plaintext OK) localhost.example.com @ :::8443
    ++RCVD++ (202 OK) localhost.example.com @ :::8444
    ++RCVD++ (200 OK) localhost.example.com @ ::1:8001
    ++RCVD++ (204 Plaintext OK) localhost.example.com @ ::1:8443
    ++RCVD++ (202 OK) localhost.example.com @ ::1:8444
    ++RCVD++ (200 OK) localhost.example.com @ 127.0.0.1:8001
    ++RCVD++ (204 Plaintext OK) localhost.example.com @ 127.0.0.1:8443
    ++RCVD++ (202 OK) localhost.example.com @ 127.0.0.1:8444
    Origin Scan: 100%|████████████████████████████████████████████████████████████████████████████████| 18/18 [00:06<00:00, 2.76it/s]
    2023-06-21 12:49:09 [RSLT] Results from 5 FQDNs:
    ::1
    ::1:8444 => (202 / OK)
    ::1:8443 => (204 / Plaintext OK)
    ::1:8001 => (200 / OK)

    127.0.0.1
    127.0.0.1:8001 => (200 / OK)
    127.0.0.1:8443 => (204 / Plaintext OK)
    127.0.0.1:8444 => (202 / OK)

    ::
    :::8001 => (200 / OK)
    :::8443 => (204 / Plaintext OK)
    :::8444 => (202 / OK)

    www.example.com
    :::8001 => (200 / OK)
    :::8443 => (204 / Plaintext OK)
    :::8444 => (202 / OK)
    ::1:8001 => (200 / OK)
    ::1:8443 => (204 / Plaintext OK)
    ::1:8444 => (202 / OK)
    127.0.0.1:8001 => (200 / OK)
    127.0.0.1:8443 => (204 / Plaintext OK)
    127.0.0.1:8444 => (202 / OK)

    localhost.example.com
    :::8001 => (200 / OK)
    :::8443 => (204 / Plaintext OK)
    :::8444 => (202 / OK)
    ::1:8001 => (200 / OK)
    ::1:8443 => (204 / Plaintext OK)
    ::1:8444 => (202 / OK)
    127.0.0.1:8001 => (200 / OK)
    127.0.0.1:8443 => (204 / Plaintext OK)
    127.0.0.1:8444 => (202 / OK)


    rst@r57 chaos %

    Test & Verbose localhost

    -T runs in test mode (do everything except send requests)

    -v verbose option provides additional output
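
    For example, a dry run that validates the FQDN/IP inputs without sending any requests, combining the documented -T and -v flags with the files prepared earlier (a sketch, not captured tool output):

    % ./chaos.py -f ../test_localhost_fqdn.txt -i ../test_localhost_ips.txt -p 8001,8443,8444 -T -v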


    Known Defects

    • HTTP/HTTPS detection is not ideal
    • Need option to adjust CSV newline delimiter
    • Need options to adjust where long strings / many lines are truncated
    • Try to figure out why we marked requests v2.x as required ;)
    • Options for very-verbose / quiet
    • Stagger thread launch when we're using sleep / jitter
    • Search for meta-refresh in 200 responses
    • Content-Location header for 201s ?
    • Improve thread name generation so we have the right number of unique names
    • Sanity check on IPv6 netmasks to prevent scans that outlive the sun?
    • TBD?

    Related Links

    Disclaimers

    • Copyright (C) 2023 RST
    • This software is distributed on an "AS IS" basis, without express or implied warranties of any kind
    • This software is intended for research and/or authorized testing; it is your responsibility to ensure you are authorized to use this software in any way
    • By using this software you acknowledge that you are responsible for your actions and assume all liability for any direct, indirect, or other damages


    ☐ ☆ ✇ KitPloit - PenTest Tools!

    Xurlfind3R - A CLI Utility To Find Domain'S Known URLs From Curated Passive Online Sources

    By: Zion3R — August 9th 2023 at 12:30


    xurlfind3r is a command-line interface (CLI) utility to find domain's known URLs from curated passive online sources.


    Features

    Installation

    Install release binaries (Without Go Installed)

    Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

    • ...with wget:

       wget https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
    • ...or, with curl:

       curl -OL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz

    ...then, extract the binary:

    tar xf xurlfind3r-<version>-linux-amd64.tar.gz

    TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

    curl -sL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz | tar -xzv

    NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xurlfind3r executable.

    ...move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

    sudo mv xurlfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

    Install source (With Go Installed)

    Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

    go install ...

    go install -v github.com/hueristiq/xurlfind3r/cmd/xurlfind3r@latest

    go build ... the development Version

    • Clone the repository

       git clone https://github.com/hueristiq/xurlfind3r.git 
    • Build the utility

       cd xurlfind3r/cmd/xurlfind3r && \
      go build .
    • Move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

       sudo mv xurlfind3r /usr/local/bin/

      NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

    NOTE: While the development version is a good way to take a peek at xurlfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

    Post Installation

    xurlfind3r will work right after installation. However, BeVigil, Github and Intelligence X require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xurlfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, one of which will be used.

    Example config.yaml:

    version: 0.2.0
    sources:
        - bevigil
        - commoncrawl
        - github
        - intelx
        - otx
        - urlscan
        - wayback
    keys:
        bevigil:
            - awA5nvpKU3N8ygkZ
        github:
            - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
            - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
        intelx:
            - 2.intelx.io:00000000-0000-0000-0000-000000000000
        urlscan:
            - d4c85d34-e425-446e-d4ab-f5a3412acbe8

    Usage

    To display help message for xurlfind3r use the -h flag:

    xurlfind3r -h

    help message:

    [xurlfind3r ASCII-art banner] v0.2.0

    USAGE:
    xurlfind3r [OPTIONS]

    TARGET:
    -d, --domain string (sub)domain to match URLs

    SCOPE:
    --include-subdomains bool match subdomain's URLs

    SOURCES:
    -s, --sources bool list sources
    -u, --use-sources string sources to use (default: bevigil,commoncrawl,github,intelx,otx,urlscan,wayback)
    --skip-wayback-robots bool with wayback, skip parsing robots.txt snapshots
    --skip-wayback-source bool with wayback, skip parsing source code snapshots

    FILTER & MATCH:
    -f, --filter string regex to filter URLs
    -m, --match string regex to match URLs

    OUTPUT:
    --no-color bool no color mode
    -o, --output string output URLs file path
    -v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

    CONFIGURATION:
    -c, --configuration string configuration file path (default: ~/.hueristiq/xurlfind3r/config.yaml)

    Examples

    Basic

    xurlfind3r -d hackerone.com --include-subdomains

    Filter Regex

    # filter images
    xurlfind3r -d hackerone.com --include-subdomains -f '^https?://[^/]*?/.*\.(jpg|jpeg|png|gif|bmp)(\?[^\s]*)?$'

    Match Regex

    # match js URLs
    xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$'
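
    Either result set can be written straight to a file with the documented -o flag, e.g.:

    # match js URLs and save them for later analysis
    xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$' -o js_urls.txt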

    Contributing

    Issues and Pull Requests are welcome! Check out the contribution guidelines.

    Licensing

    This utility is distributed under the MIT license.



    ☐ ☆ ✇ Krebs on Security

    Who and What is Behind the Malware Proxy Service SocksEscort?

    By: BrianKrebs — July 25th 2023 at 21:20

    Researchers this month uncovered a two-year-old Linux-based remote access trojan dubbed AVrecon that enslaves Internet routers into a botnet that bilks online advertisers and performs password-spraying attacks. Now new findings reveal that AVrecon is the malware engine behind a 12-year-old service called SocksEscort, which rents hacked residential and small business devices to cybercriminals looking to hide their true location online.

    Image: Lumen’s Black Lotus Labs.

    In a report released July 12, researchers at Lumen’s Black Lotus Labs called the AVrecon botnet “one of the largest botnets targeting small-office/home-office (SOHO) routers seen in recent history,” and a crime machine that has largely evaded public attention since first being spotted in mid-2021.

    “The malware has been used to create residential proxy services to shroud malicious activity such as password spraying, web-traffic proxying and ad fraud,” the Lumen researchers wrote.

    Malware-based anonymity networks are a major source of unwanted and malicious web traffic directed at online retailers, Internet service providers (ISPs), social networks, email providers and financial institutions. And a great many of these “proxy” networks are marketed primarily to cybercriminals seeking to anonymize their traffic by routing it through an infected PC, router or mobile device.

    Proxy services can be used in a legitimate manner for several business purposes — such as price comparisons or sales intelligence — but they are massively abused for hiding cybercrime activity because they make it difficult to trace malicious traffic to its original source. Proxy services also let users appear to be getting online from nearly anywhere in the world, which is useful if you’re a cybercriminal who is trying to impersonate someone from a specific place.

    Spur.us, a startup that tracks proxy services, told KrebsOnSecurity that the Internet addresses Lumen tagged as the AVrecon botnet’s “Command and Control” (C2) servers all tie back to a long-running proxy service called SocksEscort.

    SocksEscort[.]com, is what’s known as a “SOCKS Proxy” service. The SOCKS (or SOCKS5) protocol allows Internet users to channel their Web traffic through a proxy server, which then passes the information on to the intended destination. From a website’s perspective, the traffic of the proxy network customer appears to originate from a rented/malware-infected PC tied to a residential ISP customer, not from the proxy service customer.

    The SocksEscort home page says its services are perfect for people involved in automated online activity that often results in IP addresses getting blocked or banned, such as Craigslist and dating scams, search engine results manipulation, and online surveys.

    Spur tracks SocksEscort as a malware-based proxy offering, which means the machines doing the proxying of traffic for SocksEscort customers have been infected with malicious software that turns them into a traffic relay. Usually, these users have no idea their systems are compromised.

    Spur says the SocksEscort proxy service requires customers to install a Windows based application in order to access a pool of more than 10,000 hacked devices worldwide.

    “We created a fingerprint to identify the call-back infrastructure for SocksEscort proxies,” Spur co-founder Riley Kilmer said. “Looking at network telemetry, we were able to confirm that we saw victims talking back to it on various ports.”

    According to Kilmer, AVrecon is the malware that gives SocksEscort its proxies.

    “When Lumen released their report and IOCs [indicators of compromise], we queried our system for which proxy service call-back infrastructure overlapped with their IOCs,” Kilmer continued. “The second stage C2s they identified were the same as the IPs we labeled for SocksEscort.”

    Lumen’s research team said the purpose of AVrecon appears to be stealing bandwidth – without impacting end-users – in order to create a residential proxy service to help launder malicious activity and avoid attracting the same level of attention from Tor-hidden services or commercially available VPN services.

    “This class of cybercrime activity threat may evade detection because it is less likely than a crypto-miner to be noticed by the owner, and it is unlikely to warrant the volume of abuse complaints that internet-wide brute-forcing and DDoS-based botnets typically draw,” Lumen’s Black Lotus researchers wrote.

    Preserving bandwidth for both customers and victims was a primary concern for SocksEscort in July 2022, when 911S5 — at the time the world’s largest known malware proxy network — got hacked and imploded just days after being exposed in a story here. Kilmer said after 911’s demise, SocksEscort closed its registration for several months to prevent an influx of new users from swamping the service.

    Danny Adamitis, principal information security researcher at Lumen and co-author of the report on AVrecon, confirmed Kilmer’s findings, saying the C2 data matched up with what Spur was seeing for SocksEscort dating back to September 2022.

    Adamitis said that on July 13 — the day after Lumen published research on AVrecon and started blocking any traffic to the malware’s control servers — the people responsible for maintaining the botnet reacted quickly to transition infected systems over to a new command and control infrastructure.

    “They were clearly reacting and trying to maintain control over components of the botnet,” Adamitis said. “Probably, they wanted to keep that revenue stream going.”

    Frustratingly, Lumen was not able to determine how the SOHO devices were being infected with AVrecon. Some possible avenues of infection include exploiting weak or default administrative credentials on routers, and outdated, insecure firmware that has known, exploitable security vulnerabilities.

    WHO’S BEHIND SOCKSESCORT?

    KrebsOnSecurity briefly visited SocksEscort last year and promised a follow-up on the history and possible identity of its proprietors. A review of the earliest posts about this service on Russian cybercrime forums suggests the 12-year-old malware proxy network is tied to a Moldovan company that also offers VPN software on the Apple Store and elsewhere.

    SocksEscort began in 2009 as “super-socks[.]com,” a Russian-language service that sold access to thousands of compromised PCs that could be used to proxy traffic. Someone who picked the nicknames “SSC” and “super-socks” and email address “michvatt@gmail.com” registered on multiple cybercrime forums and began promoting the proxy service.

    According to DomainTools.com, the apparently related email address “michdomain@gmail.com” was used to register SocksEscort[.]com, super-socks[.]com, and a few other proxy-related domains, including ip-score[.]com, segate[.]org, seproxysoft[.]com, and vipssc[.]us. Cached versions of both super-socks[.]com and vipssc[.]us show these sites sold the same proxy service, and both displayed the letters “SSC” prominently at the top of their homepages.

    Image: Archive.org. Page translation from Russian via Google Translate.

    According to cyber intelligence firm Intel 471, the very first “SSC” identity registered on the cybercrime forums happened in 2009 at the Russian language hacker community Antichat, where SSC asked fellow forum members for help in testing the security of a website they claimed was theirs: myiptest[.]com, which promised to tell visitors whether their proxy address was included on any security or anti-spam block lists.

    Myiptest[.]com is no longer responding, but a cached copy of it from Archive.org shows that for about four years it included in its HTML source a Google Analytics code of US-2665744, which was also present on more than a dozen other websites.

    Most of the sites that once bore that Google tracking code are no longer online, but nearly all of them centered around services that were similar to myiptest[.]com, such as abuseipdb[.]com, bestiptest[.]com, checkdnslbl[.]com, dnsbltools[.]com and dnsblmonitor[.]com.

    Each of these services were designed to help visitors quickly determine whether the Internet address they were visiting the site from was listed by any security firms as spammy, malicious or phishous. In other words, these services were designed so that proxy service users could easily tell if their rented Internet address was still safe to use for online fraud.

    Another domain with the Google Analytics code US-2665744 was sscompany[.]net. An archived copy of the site says SSC stands for “Server Support Company,” which advertised outsourced solutions for technical support and server administration.

    Leaked copies of the hacked Antichat forum indicate the SSC identity registered on the forum using the IP address 71.229.207.214. That same IP was used to register the nickname “Deem3n®,” a prolific poster on Antichat between 2005 and 2009 who served as a moderator on the forum.

    There was a Deem3n® user on the webmaster forum Searchengines.guru whose signature in their posts says they run a popular community catering to programmers in Moldova called sysadmin[.]md, and that they were a systems administrator for sscompany[.]net.

    That same Google Analytics code is also now present on the homepages of wiremo[.]co and a VPN provider called HideIPVPN[.]com.

    Wiremo sells software and services to help website owners better manage their customer reviews. Wiremo’s Contact Us page lists a “Server Management LLC” in Wilmington, DE as the parent company. Server Management LLC is currently listed in Apple’s App Store as the owner of a “free” VPN app called HideIPVPN.

    “The best way to secure the transmissions of your mobile device is VPN,” reads HideIPVPN’s description on the Apple Store. “Now, we provide you with an even easier way to connect to our VPN servers. We will hide your IP address, encrypt all your traffic, secure all your sensitive information (passwords, mail credit card details, etc.) form [sic] hackers on public networks.”

    When asked about the company’s apparent connection to SocksEscort, Wiremo responded, “We do not control this domain and no one from our team is connected to this domain.” Wiremo did not respond when presented with the findings in this report.

    ☐ ☆ ✇ KitPloit - PenTest Tools!

    Artemis - A Modular Web Reconnaissance Tool And Vulnerability Scanner

    By: Zion3R — June 29th 2023 at 12:30


    A modular web reconnaissance tool and vulnerability scanner based on Karton (https://github.com/CERT-Polska/karton).

    The Artemis project has been initiated by the KN Cyber science club of Warsaw University of Technology and is currently being maintained by CERT Polska.

    Artemis is experimental software, under active development - use at your own risk.

    Features

    For an up-to-date list of features, please refer to the documentation.

    Development

    Tests

    To run the tests, use:

    ./scripts/test

    Code formatting

    Artemis uses pre-commit to run linters and format the code. pre-commit is executed on CI to verify that the code is formatted properly.

    To run it locally, use:

    pre-commit run --all-files

    To setup pre-commit so that it runs before each commit, use:

    pre-commit install

    Building the docs

    To build the documentation, use:

    cd docs
    python3 -m venv venv
    . venv/bin/activate
    pip install -r requirements.txt
    make html

    How do I write my own module?

    Please refer to the documentation.
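The underlying pattern is a Karton consumer: a class that declares an identity and task filters and implements a processing callback. The sketch below uses plain Karton conventions only; Artemis layers its own base classes and helpers on top of this, so treat the names here as illustrative and consult the Artemis documentation for the real module interface.

import re

from karton.core import Karton, Task

class ExampleModule(Karton):
    # Unique name under which this consumer registers with the Karton broker.
    identity = "example-module"
    # Only consume tasks whose headers match these filters (values illustrative).
    filters = [{"type": "service", "service": "http"}]

    def process(self, task: Task) -> None:
        # "host" is a hypothetical payload key used purely for illustration.
        host = task.get_payload("host")
        self.log.info("Scanning %s", host)

if __name__ == "__main__":
    # Reads the broker configuration (e.g. karton.ini) and starts consuming.
    ExampleModule().loop()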

    Contributing

Contributions are welcome! We appreciate both ideas for new Artemis modules (added as GitHub issues) and pull requests with new modules or code improvements.

However obvious it may seem, we kindly remind you that by contributing to Artemis you agree that the BSD 3-Clause License shall apply to your input automatically, without the need for any additional declarations to be made.



    ☐ ☆ ✇ KitPloit - PenTest Tools!

    ReconAIzer - A Burp Suite Extension To Add OpenAI (GPT) On Burp And Help You With Your Bug Bounty Recon To Discover Endpoints, Params, URLs, Subdomains And More!

    By: Zion3R — June 28th 2023 at 12:30


    ReconAIzer is a powerful Jython extension for Burp Suite that leverages OpenAI to help bug bounty hunters optimize their recon process. This extension automates various tasks, making it easier and faster for security researchers to identify and exploit vulnerabilities.
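For readers new to Burp's Python support, a Jython extension is simply a script defining a BurpExtender class that implements Burp's extender interfaces. The skeleton below is an illustrative sketch of that structure, not ReconAIzer's actual source; the extension name, tab caption, and empty panel are placeholders.

from burp import IBurpExtender, ITab
from javax.swing import JPanel

class BurpExtender(IBurpExtender, ITab):
    # Burp calls this once when the extension is loaded.
    def registerExtenderCallbacks(self, callbacks):
        self._callbacks = callbacks
        callbacks.setExtensionName("Example extension")  # placeholder name
        self._panel = JPanel()       # placeholder UI for the dedicated tab
        callbacks.addSuiteTab(self)  # registers the ITab methods below

    # ITab implementation: caption and Swing component for the new tab.
    def getTabCaption(self):
        return "Example"

    def getUiComponent(self):
        return self._panel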

Once installed, ReconAIzer adds a contextual menu and a dedicated tab to display the results:


    Prerequisites

    • Burp Suite
    • Jython Standalone Jar

    Installation

    Follow these steps to install the ReconAIzer extension on Burp Suite:

    Step 1: Download Jython

    1. Download the latest Jython Standalone Jar from the official website: https://www.jython.org/download
    2. Save the Jython Standalone Jar file in a convenient location on your computer.

    Step 2: Configure Jython in Burp Suite

    1. Open Burp Suite.
    2. Go to the "Extensions" tab.
    3. Click on the "Extensions settings" sub-tab.
    4. Under "Python Environment," click on the "Select file..." button next to "Location of the Jython standalone JAR file."
    5. Browse to the location where you saved the Jython Standalone Jar file in Step 1 and select it.
    6. Wait for the "Python Environment" status to change to "Jython (version x.x.x) successfully loaded," where x.x.x represents the Jython version.

    Step 3: Download and Install ReconAIzer

    1. Download the latest release of ReconAIzer
    2. Open Burp Suite
    3. Go back to the "Extensions" tab in Burp Suite.
    4. Click the "Add" button.
    5. In the "Add extension" dialog, select "Python" as the "Extension type."
    6. Click on the "Select file..." button next to "Extension file" and browse to the location where you saved the ReconAIzer.py file in Step 3.1. Select the file and click "Open."
    7. Make sure the "Load" checkbox is selected and click the "Next" button.
    8. Wait for the extension to be loaded. You should see a message in the "Output" section stating that the ReconAIzer extension has been successfully loaded.

    Congratulations! You have successfully installed the ReconAIzer extension in Burp Suite. You can now start using it to enhance your bug bounty hunting experience.

Once that is done, you must configure your OpenAI API key in the "Config" tab under the "ReconAIzer" tab.

    Feel free to suggest prompts improvements or anything you would like to see on ReconAIzer!

    Happy bug hunting!



    ☐ ☆ ✇ KitPloit - PenTest Tools!

    NTLMRecon - A Tool For Performing Light Brute-Forcing Of HTTP Servers To Identify Commonly Accessible NTLM Authentication Endpoints

    By: Zion3R — May 7th 2023 at 12:30


    NTLMRecon is a Golang version of the original NTLMRecon utility written by Sachin Kamath (AKA pwnfoo). NTLMRecon can be leveraged to perform brute forcing against a targeted webserver to identify common application endpoints supporting NTLM authentication. This includes endpoints such as the Exchange Web Services endpoint which can often be leveraged to bypass multi-factor authentication.

The tool supports collecting metadata from exposed NTLM authentication endpoints, including the computer name, Active Directory domain name, and Active Directory forest name. This information can be obtained without prior authentication by sending an NTLM NEGOTIATE_MESSAGE packet to the server and examining the NTLM CHALLENGE_MESSAGE returned by the targeted server. We have also published a blog post alongside this tool discussing some of the motivations behind its development and how we are approaching more advanced metadata collection within Chariot.
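To make the handshake concrete, here is a minimal Python sketch of the probe described above. NTLMRecon itself is written in Go, so this illustrates the technique rather than the tool's code; the target URL is a placeholder, and the sketch assumes the server replies with a single "WWW-Authenticate: NTLM <base64>" header.

import base64
import struct

import requests  # third-party: pip install requests

# The standard anonymous NEGOTIATE_MESSAGE token (no domain or workstation).
NEGOTIATE = "TlRMTVNTUAABAAAAB4IIAAAAAAAAAAAAAAAAAAAAAAA="

# AV_PAIR identifiers from MS-NLMP mapped to the field names NTLMRecon reports.
AV_NAMES = {
    1: "netbiosComputerName",
    2: "netbiosDomainName",
    3: "dnsComputerName",
    4: "dnsDomainName",
    5: "forestName",
}

def probe(url):
    resp = requests.get(url, headers={"Authorization": "NTLM " + NEGOTIATE})
    data = base64.b64decode(resp.headers["WWW-Authenticate"].split(" ", 1)[1])
    # TargetInfo length and offset sit at fixed offsets in the CHALLENGE_MESSAGE.
    info_len, = struct.unpack_from("<H", data, 40)
    info_off, = struct.unpack_from("<I", data, 44)
    result, pos = {}, info_off
    while pos < info_off + info_len:
        av_id, av_len = struct.unpack_from("<HH", data, pos)
        pos += 4
        if av_id == 0:  # MsvAvEOL terminates the AV_PAIR list
            break
        if av_id in AV_NAMES:
            result[AV_NAMES[av_id]] = data[pos:pos + av_len].decode("utf-16-le")
        pos += av_len
    return result

print(probe("https://autodiscover.contoso.com/EWS/"))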


    Why build a new version of this capability?

We wanted to perform brute-forcing and automated identification of exposed NTLM authentication endpoints within Chariot, our external attack surface management and continuous automated red teaming platform. Our primary backend scanning infrastructure is written in Golang, and we didn't want to download and shell out to the Python NTLMRecon utility to collect this information. We also wanted more control over the level of detail of the information we collect.

    Installation

    The following command can be leveraged to install the NTLMRecon utility. Alternatively, you may download a precompiled version of the binary from the releases tab in GitHub.

    go install github.com/praetorian-inc/NTLMRecon/cmd/NTLMRecon@latest

    Usage

    The following command can be leveraged to invoke the NTLM recon utility and discover exposed NTLM authentication endpoints:

    NTLMRecon -t https://autodiscover.contoso.com

    The following command can be leveraged to invoke the NTLM recon utility and discover exposed NTLM endpoints while outputting collected metadata in a JSON format:

    NTLMRecon -t https://autodiscover.contoso.com -o json

    Example JSON Output

    Below is an example JSON output with the data we collect from the NTLM CHALLENGE_MESSAGE returned by the server:

{
  "url": "https://autodiscover.contoso.com/EWS/",
  "ntlm": {
    "netbiosComputerName": "MSEXCH1",
    "netbiosDomainName": "CONTOSO",
    "dnsDomainName": "na.contoso.local",
    "dnsComputerName": "msexch1.na.contoso.local",
    "forestName": "contoso.local"
  }
}

➜ ~ NTLMRecon -t https://adfs.contoso.com -o json | jq
{
  "url": "https://adfs.contoso.com/adfs/services/trust/2005/windowstransport",
  "ntlm": {
    "netbiosComputerName": "MSFED1",
    "netbiosDomainName": "CONTOSO",
    "dnsDomainName": "corp.contoso.com",
    "dnsComputerName": "MSEXCH1.corp.contoso.com",
    "forestName": "contoso.com"
  }
}
➜ ~ NTLMRecon -t https://autodiscover.contoso.com
https://autodiscover.contoso.com/Autodiscover
https://autodiscover.contoso.com/Autodiscover/AutodiscoverService.svc/root
https://autodiscover.contoso.com/Autodiscover/Autodiscover.xml
https://autodiscover.contoso.com/EWS/
https://autodiscover.contoso.com/OAB/
https://autodiscover.contoso.com/Rpc/
➜ ~

    Potential Additional Features

    Our methodology when developing this tool has targeted the most barebones version of the desired capability for the initial release. The goal for this project was to create an initial tool we could integrate into Chariot and then allow community contributions and feedback to drive additional tooling improvements or functionality. Below are some ideas for additional functionality which could be added to NTLMRecon:

• Concurrency and Performance Improvements: Concurrency and performance could be improved further. Currently, the tool makes HTTP requests sequentially, waiting for each request to complete before issuing the next.
    • Batch Scanning Functionality: Another idea would be to extend the NTLMRecon utility to accept a list of hosts from standard input. One usage scenario for this could be an attacker running a combination of “subfinder | httpx | NTLMRecon” to enumerate HTTP servers and then identify NTLM authentication endpoints that are exposed externally across an entire attack surface.
    • One-off Data Collection Capability: A user may wish to perform one-off data collection targeting a specific endpoint which is currently not supported by NTLMRecon.
• User-Agent Randomization or Control: A user may wish to randomize the user-agents used to make requests. Alternatively, when targeting Microsoft Exchange servers, presenting the user-agent of a mobile client or a legacy third-party email client can sometimes allow requests to the /EWS/Exchange.asmx endpoint through.

    References

    [1] https://www.praetorian.com/blog/automating-the-discovery-of-ntlm-authentication-endpoints/



    ☐ ☆ ✇ KitPloit - PenTest Tools!

    Scriptkiddi3 - Streamline Your Recon And Vulnerability Detection Process With SCRIPTKIDDI3, A Recon And Initial Vulnerability Detection Tool Built Using Shell Script And Open Source Tools

    By: noreply@blogger.com (Unknown) — April 17th 2023 at 12:30


    Streamline your recon and vulnerability detection process with SCRIPTKIDDI3, A recon and initial vulnerability detection tool built using shell script and open source tools.


    Introducing SCRIPTKIDDI3, a powerful recon and initial vulnerability detection tool for Bug Bounty Hunters. Built using a variety of open-source tools and a shell script, SCRIPTKIDDI3 allows you to quickly and efficiently run a scan on the target domain and identify potential vulnerabilities.

SCRIPTKIDDI3 begins by performing recon on the target system, collecting information such as subdomains and running services with nuclei. It then uses this information to scan for known vulnerabilities and potential attack vectors, alerting you to any high-risk issues that may need to be addressed.

    In addition, SCRIPTKIDDI3 also includes features for identifying misconfigurations and insecure default settings with nuclei templates, helping you ensure that your systems are properly configured and secure.

    SCRIPTKIDDI3 is an essential tool for conducting thorough and effective recon and vulnerability assessments. Let's Find Bugs with SCRIPTKIDDI3

    [Thanks ChatGPT for the Description]


How it Works?

    This tool mainly performs 3 tasks

    1. Effective Subdomain Enumeration from Various Tools
    2. Get URLs with open HTTP and HTTPS service.
3. Run Nuclei and other scans on the previous output.

So basically, this is an automation script for your initial recon in bug bounty.

    Install SCRIPTKIDDI3

SCRIPTKIDDI3 requires different tools to run successfully. Run the following command to install the latest version with all requirements:

    git clone https://github.com/thecyberneh/scriptkiddi3.git
    cd scriptkiddi3
    bash installer.sh

    Usage

    scriptkiddi3 -h

    This will display help for the tool. Here are all the switches it supports.

    [ABOUT:]
    Streamline your recon and vulnerability detection process with SCRIPTKIDDI3,
    A recon and initial vulnerability detection tool built using shell script and open source tools.


    [Usage:]
    scriptkiddi3 [MODE] [FLAGS]
    scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


    [MODES:]
    ['-m'/'--mode']
    Available Options for MODE:
    SUB | sub | SUBDOMAIN | subdomain Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
    URL | url Run scriptkiddi3 in URL ENUMERATION mode
    EXP | exp | EXPLOIT | exploit Run scriptkiddi3 in Full Exploitation mode


Features of EXPLOIT mode: subdomain enumeration, URL Enumeration,
Vulnerability Detection with Nuclei,
and Scan for SUBDOMAIN TAKEOVER

    [FLAGS:]
    [TARGET:] -d, --domain target domain to scan

    [CONFIG:] -c, --config path of your configuration file for subfinder

    [HELP:] -h, --help to get help menu

    [UPDATE:] -u, --update to update tool

    [Examples:]
    Run scriptkiddi3 in full Exploitation mode
    scriptkiddi3 -m EXP -d target.com


    Use your own CONFIG file for subfinder
    scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


    Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
    scriptkiddi3 -m SUB -d target.com


    Run scriptkiddi3 in URL ENUMERATION mode
scriptkiddi3 -m URL -d target.com

    MODES

    1. FULL EXPLOITATION MODE

    Run SCRIPTKIDDI3 in FULL EXPLOITATION MODE

      scriptkiddi3 -m EXP -d target.com

    FULL EXPLOITATION MODE contains following functions

    • Effective Subdomain Enumeration with different services and open source tools
    • Effective URL Enumeration ( HTTP and HTTPs service )
    • Run Vulnerability Detection with Nuclei
    • Subdomain Takeover Test on previous results

    2. SUBDOMAIN ENUMERATION MODE

    Run scriptkiddi3 in SUBDOMAIN ENUMERATION MODE

      scriptkiddi3 -m SUB -d target.com

    SUBDOMAIN ENUMERATION MODE contains following functions

    • Effective Subdomain Enumeration with different services and open source tools
• You can use this mode if you only want to get subdomains from this tool; in other words, it automates subdomain enumeration across different tools

    3. URL ENUMERATION MODE

    Run scriptkiddi3 in URL ENUMERATION MODE

      scriptkiddi3 -m URL -d target.com

    URL ENUMERATION MODE contains following functions

• Same features as SUBDOMAIN ENUMERATION MODE, but also identifies which subdomains expose HTTP or HTTPS services

    Using your own CONFIG File for subfinder

      scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml

You can also provide your own CONFIG file with your API keys for subdomain enumeration with subfinder.

Updating the tool to the latest version

You can run the following command to update the tool:

      scriptkiddi3 -u

    An Example of config.yaml

binaryedge:
  - 0bf8919b-aab9-42e4-9574-d3b639324597
  - ac244e2f-b635-4581-878a-33f4e79a2c13
censys:
  - ac244e2f-b635-4581-878a-33f4e79a2c13:dd510d6e-1b6e-4655-83f6-f347b363def9
certspotter: []
passivetotal:
  - sample-email@user.com:sample_password
securitytrails: []
shodan:
  - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
github:
  - ghp_lkyJGU3jv1xmwk4SDXavrLDJ4dl2pSJMzj4X
  - ghp_gkUuhkIYdQPj13ifH4KA3cXRn8JD2lqir2d4
zoomeye:
  - zoomeye_username:zoomeye_password

    For Developers

    If you have ideas for new functionality or modes that you would like to see in this tool, you can always submit a pull request (PR) to contribute your changes.

If you have any other queries, you can always contact me on Twitter (@thecyberneh).

    Credits

    I would like to express my gratitude to all of the open source projects that have made this tool possible and have made recon tasks easier to accomplish.



    ☐ ☆ ✇ KitPloit - PenTest Tools!

DataSurgeon - Quickly Extracts IPs, Email Addresses, Hashes, Files, Credit Cards, Social Security Numbers And More From Text

    By: noreply@blogger.com (Unknown) — March 7th 2023 at 11:30

     DataSurgeon (ds) is a versatile tool designed for incident response, penetration testing, and CTF challenges. It allows for the extraction of various types of sensitive information including emails, phone numbers, hashes, credit cards, URLs, IP addresses, MAC addresses, SRV DNS records and a lot more!

    • Supports Windows, Linux and MacOS

    Extraction Features

    • Emails
    • Files
    • Phone numbers
    • Credit Cards
• Google API Private Key IDs
    • Social Security Numbers
    • AWS Keys
    • Bitcoin wallets
• URLs
    • IPv4 Addresses and IPv6 addresses
    • MAC Addresses
    • SRV DNS Records
    • Extract Hashes
      • MD4 & MD5
      • SHA-1, SHA-224, SHA-256, SHA-384, SHA-512
      • SHA-3 224, SHA-3 256, SHA-3 384, SHA-3 512
      • MySQL 323, MySQL 41
      • NTLM
      • bcrypt
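All of the extraction types above come down to pattern matching over an input stream. As a rough illustration of the idea (DataSurgeon itself is written in Rust and covers far more data types), a Python sketch handling just two of them might look like this:

import re
import sys

# Illustrative patterns for two data types only.
PATTERNS = {
    "email": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "ipv4_address": re.compile(
        r"\b(?:(?:25[0-5]|2[0-4]\d|1?\d?\d)\.){3}(?:25[0-5]|2[0-4]\d|1?\d?\d)\b"
    ),
}

for line in sys.stdin:
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(line):
            # Print "identifier: match", similar to ds output before --hide.
            print("%s: %s" % (name, match))

You would pipe data in the same way, e.g. cat test.txt | python3 extract_sketch.py (the script name is hypothetical).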

    Want more?

    Please read the contributing guidelines here

    Quick Install

Install Rust and Git

    Linux

    wget -O - https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | bash

    Windows

Enter the line below in an elevated PowerShell window.

    IEX (New-Object Net.WebClient).DownloadString("https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.ps1")

    Relaunch your terminal and you will be able to use ds from the command line.

    Mac

    curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | sh

    Command Line Arguments



    Video Guide

    Examples

Extracting Files From a Remote Website

Here I use wget to make a request to stackoverflow, then forward the body text to ds. The -F option lists all files found. --clean removes any extra text that might have been returned (such as extra HTML). The result is then piped to uniq, which removes duplicate entries.

     wget -qO - https://www.stackoverflow.com | ds -F --clean | uniq


    Extracting Mac Addresses From an Output File

Here I am pulling all MAC addresses found in autodeauth's log file using the -m query. The --hide option hides the identifier string in front of each result; in this case 'mac_address: ' is hidden from the output. The -T option checks the same line multiple times for matches; normally, when a match is found, the tool moves on to the next line rather than checking it again.

    $ ./ds -m -T --hide -f /var/log/autodeauth/log     
    2023-02-26 00:28:19 - Sending 500 deauth frames to network: BC:2E:48:E5:DE:FF -- PrivateNetwork
    2023-02-26 00:35:22 - Sending 500 deauth frames to network: 90:58:51:1C:C9:E1 -- TestNet

    Reading all files in a directory

The line below reads all files in the current directory recursively. The -D option displays the filename (-f is required for the filename to display) and -e searches for emails.

    $ find . -type f -exec ds -f {} -CDe \;


    Speed Tests

When no specific query is provided, ds searches through all possible types of data, which is SIGNIFICANTLY slower than using individual queries. The slowest query is --files. It's also slightly faster to use cat to pipe the data to ds.

Below is the elapsed time when processing a 5GB test file generated by ds-test. Each test was run 3 times and the average time was recorded.

    Computer Specs

Processor: Intel(R) Core(TM) i5-10400F CPU @ 2.90GHz, 2904 MHz, 6 Core(s), 12 Logical Processor(s)
RAM: 12.0 GB (11.9 GB usable)

    Searching all data types

Command                                 Speed
cat test.txt | ds -t                    00h:02m:04s
ds -t -f test.txt                       00h:02m:05s
cat test.txt | ds -t -o output.txt      00h:02m:06s

    Using specific queries

Command                                 Speed         Query Count
cat test.txt | ds -t -6                 00h:00m:12s   1
cat test.txt | ds -t -i -m              00h:00m:22s   2
cat test.txt | ds -tF6c                 00h:00m:32s   3

    Project Goals

    • JSON and CSV output
• Untar/unzip and a directory searching mode
    • Base64 Detection and decoding


    ☑ ☆ ✇ ToolsWatch.org – The Hackers Arsenal Tools Portal

    Recon Village @ DEFCON 2018 (Hackathon)

    By: MaxiSoler — July 28th 2018 at 21:24
ToolsWatch likes open source tools; for that reason we will participate in the Recon Village @ DEF CON 2018 as part of the jury. Maxi Soler will be there 🙂 Recon Village is an Open Space with Talks,...
