
SQLMC - Check All Urls Of A Domain For SQL Injections

By: Zion3R


SQLMC (SQL Injection Massive Checker) is a tool designed to scan a domain for SQL injection vulnerabilities. It crawls the given URL up to a specified depth, checks each link for SQL injection vulnerabilities, and reports its findings.

Features

  • Scans a domain for SQL injection vulnerabilities
  • Crawls the given URL up to a specified depth
  • Checks each link for SQL injection vulnerabilities
  • Reports vulnerabilities along with server information and depth

Installation

  1. Install the required dependencies:

pip3 install sqlmc

Usage

Run sqlmc with the following command-line arguments:

  • -u, --url: The URL to scan (required)
  • -d, --depth: The depth to scan (required)
  • -o, --output: The output file to save the results

Example usage:

sqlmc -u http://example.com -d 2

Replace http://example.com with the URL you want to scan and 2 with the desired depth of the scan. You can also specify an output file using the -o or --output flag followed by the desired filename.
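
For example, to scan to depth 2 and save the results to a file:

sqlmc -u http://example.com -d 2 -o results.txt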

The tool will then perform the scan and display the results.

ToDo

  • Check for multiple GET params
  • Better injection checker trigger methods

License

This project is licensed under the GNU Affero General Public License v3.0.



mapXplore - Allow Exporting The Information Downloaded With Sqlmap To A Relational Database Like Postgres And Sqlite

By: Zion3R


mapXplore is a modular application that imports data extracted by sqlmap into a PostgreSQL or SQLite database.

Its main features are:

  • Import of information extracted by sqlmap into PostgreSQL or SQLite for subsequent querying.
  • Sanitizes the information on import, decoding or transforming unreadable data into a readable form.
  • Searches all tables for desired information, such as passwords and usernames.
  • Automatic export of information stored in base64, such as:

    • Word, Excel, PowerPoint files
    • .zip files
    • Text files or plain text information
    • Images
  • Filter tables and columns by criteria.

  • Filter by different types of hash functions without requiring prior conversion.
  • Export relevant information to Excel or HTML

Installation

Requirements

  • python-3.11
git clone https://github.com/daniel2005d/mapXplore
cd mapXplore
pip install -r requirements.txt

Usage

It is a modular application consisting of the following modules:

  • config: handles configuration, such as the database engine to use and import paths.
  • import: imports and processes the information extracted by sqlmap.
  • query: the main module, which filters and extracts the required information.
    • Filter by tables
    • Filter by columns
    • Filter by one or more words
    • Filter by one or more hash types, including:
      • MD5
      • SHA1
      • SHA256
      • SHA3
      • ....

Startup

Allows loading a default configuration at the start of the program

python engine.py [--config config.json]




SqliSniper - Advanced Time-based Blind SQL Injection Fuzzer For HTTP Headers

By: Zion3R


SqliSniper is a robust Python tool designed to detect time-based blind SQL injections in HTTP request headers. It enhances the security assessment process by rapidly scanning and identifying potential vulnerabilities using multi-threading, ensuring speed and efficiency. Unlike other scanners, SqliSniper is designed to eliminate false positives through response-time analysis, and it can send alerts upon detection via the built-in Discord notification functionality.


Key Features

  • Time-Based Blind SQL Injection Detection: Pinpoints potential SQL injection vulnerabilities in HTTP headers.
  • Multi-Threaded Scanning: Offers faster scanning capabilities through concurrent processing.
  • Discord Notifications: Sends alerts via Discord webhook for detected vulnerabilities.
  • False Positive Checks: Implements response time analysis to differentiate between true positives and false alarms.
  • Custom Payload and Headers Support: Allows users to define custom payloads and headers for targeted scanning.

Installation

git clone https://github.com/danialhalo/SqliSniper.git
cd SqliSniper
chmod +x sqlisniper.py
pip3 install -r requirements.txt

Usage

This will display help for the tool. Here are all the options it supports.

ubuntu:~/sqlisniper$ ./sqlisniper.py -h


β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•— β–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•—β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—
β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β–ˆβ–ˆβ•— β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•
β•šβ•β•β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–„β–„ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β•šβ•β•β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β• β–ˆβ–ˆβ•”β•β•β• β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—
β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β•šβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘
β•šβ•β•β•β•β•β•β• β•šβ•β•β–€β–€β•β• β•šβ•β•β•β•β•β•β•β•šβ•β• β•šβ•β•β•β•β•β•β•β•šβ•β• β•šβ•β•β•β•β•šβ•β•β•šβ•β• β•šβ•β•β•β•β•β•β•β•šβ•β• β•šβ•β•

-: By Muhammad Danial :-

usage: sqlisniper.py [-h] [-u URL] [-r URLS_FILE] [-p] [--proxy PROXY] [--payload PAYLOAD] [--single-payload SINGLE_PAYLOAD] [--discord DISCORD] [--headers HEADERS]
[--threads THREADS]

Detect SQL injection by sending malicious queries

options:
-h, --help show this help message and exit
-u URL, --url URL Single URL for the target
-r URLS_FILE, --urls_file URLS_FILE
File containing a list of URLs
-p, --pipeline Read from pipeline
--proxy PROXY Proxy for intercepting requests (e.g., http://127.0.0.1:8080)
--payload PAYLOAD File containing malicious payloads (default is payloads.txt)
--single-payload SINGLE_PAYLOAD
Single payload for testing
--discord DISCORD Discord Webhook URL
--headers HEADERS File containing headers (default is headers.txt)
--threads THREADS Number of threads

Running SqliSniper

Single URL Scan

The URL can be provided with the -u flag for a single-site scan:

./sqlisniper.py -u http://example.com

File Input

The -r flag allows SqliSniper to read a file containing multiple URLs for simultaneous scanning.

./sqlisniper.py -r url.txt

Piping URLs

SqliSniper can also work with pipeline input via the -p flag:

cat url.txt | ./sqlisniper.py -p

The pipeline feature facilitates seamless integration with other tools. For instance, you can utilize tools like subfinder and httpx, and then pipe their output to SqliSniper for mass scanning.

subfinder -silent -d google.com | sort -u | httpx -silent | ./sqlisniper.py -p

Scanning with custom payloads

By default, SqliSniper uses the payloads.txt file. However, the --payload flag can be used to provide a custom payloads file:

./sqlisniper.py -u http://example.com --payload mssql_payloads.txt

While using the custom payloads file, ensure that you substitute the sleep time with %__TIME_OUT__%. SqliSniper dynamically adjusts the sleep time iteratively to mitigate potential false positives. The payloads file should look like this.

ubuntu:~/sqlisniper$ cat payloads.txt 
0\"XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR\"Z
"0"XOR(if(now()=sysdate()%2Csleep(%__TIME_OUT__%)%2C0))XOR"Z"
0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z

Scanning with Single Payloads

If you want to test with only a single payload, the --single-payload flag can be used. Make sure to replace the sleep time with %__TIME_OUT__%:

./sqlisniper.py -r url.txt --single-payload "0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z"

Scanning Custom Header

Headers are read from the headers.txt file. To scan a custom header, save the custom HTTP request header in the headers.txt file:

ubuntu:~/sqlisniper$ cat headers.txt 
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
X-Forwarded-For: 127.0.0.1
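
To point SqliSniper at a different headers file, the --headers flag from the help output can be used (the filename here is just an example):

./sqlisniper.py -u http://example.com --headers custom_headers.txt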

Sending Discord Alert Notifications

SqliSniper also offers Discord alert notifications, enhancing its functionality by providing real-time alerts through Discord webhooks. This feature proves invaluable during large-scale scans, allowing prompt notifications upon detection.

./sqlisniper.py -r url.txt --discord <web_hookurl>

Multi-Threading

Threads can be defined with the --threads flag:

 ./sqlisniper.py -r url.txt --threads 10

Note: It is crucial to consider that employing a higher number of threads might lead to potential false positives or overlooked valid issues. Due to the nature of time-based SQL injection, it is recommended to use a lower thread count for more accurate detection.


SqliSniper is made in Python with lots of <3 by @Muhammad Danial.



Logsensor - A Powerful Sensor Tool To Discover Login Panels, And POST Form SQLi Scanning

By: Zion3R


A powerful sensor tool to discover login panels and perform POST-form SQLi scanning.

Features

  • Login panel scanning for multiple hosts
  • Proxy compatibility (http, https)
  • Login panel scanning is done with multiprocessing, so the script is very fast at scanning many URLs

A quick tutorial and screenshots are shown at the bottom, along with project contribution tips.


Installation

git clone https://github.com/Mr-Robert0/Logsensor.git
cd Logsensor && sudo chmod +x logsensor.py install.sh
pip install -r requirements.txt
./install.sh


Quick Tutorial

1. Multiple hosts scanning to detect login panels

  • You can increase the threads (default 30)
  • Only run the login detector module:
python3 logsensor.py -f <subdomains-list> 
python3 logsensor.py -f <subdomains-list> -t 50
python3 logsensor.py -f <subdomains-list> --login

2. Targeted SQLi form scanning

  • You can provide a specific login panel URL with the --sqli or -s flag to run only the SQLi form scanning module
  • Turn on the proxy to see the requests
  • Customize the username input name of the login panel with its actual name (default "username"):
python logsensor.py -u www.example.com/login --sqli 
python logsensor.py -u www.example.com/login -s --proxy http://127.0.0.1:8080
python logsensor.py -u www.example.com/login -s --inputname email

View help

python logsensor.py --help

usage: logsensor.py [-h --help] [--file ] [--url ] [--proxy] [--login] [--sqli] [--threads]

optional arguments:
-u , --url Target URL (e.g. http://example.com/ )
-f , --file Select a target hosts list file (e.g. list.txt )
--proxy Proxy (e.g. http://127.0.0.1:8080)
-l, --login run only Login panel Detector Module
-s, --sqli run only POST Form SQLi Scanning Module with provided Login panels Urls
-n , --inputname Customize actual username input for SQLi scan (e.g. 'username' or 'email')
-t , --threads Number of threads (default 30)
-h, --help Show this help message and exit


Development

TODO

  1. Add "POST form SQLi (time-based) scanning" and check for delays
  2. Fuzz URL paths so as not to miss any login panels


KnowsMore - A Swiss Army Knife Tool For Pentesting Microsoft Active Directory (NTLM Hashes, BloodHound, NTDS And DCSync)

By: Zion3R


KnowsMore officially supports Python 3.8+.

Main features

  • Import NTLM Hashes from .ntds output txt file (generated by CrackMapExec or secretsdump.py)
  • Import NTLM Hashes from NTDS.dit and SYSTEM
  • Import Cracked NTLM hashes from hashcat output file
  • Import BloodHound ZIP or JSON file
  • BloodHound importer (import JSON to Neo4J without BloodHound UI)
  • Analyse password quality (length, lower case, upper case, digits, special characters and latin characters)
  • Analyse password similarity with company and user names
  • Search for users, passwords and hashes
  • Export all cracked credentials direct to BloodHound Neo4j Database as 'owned object'
  • Other amazing features...

Getting stats

knowsmore --stats

This command will produce several statistics about the passwords, like the output below.

KnowsMore v0.1.4 by Helvio Junior
Active Directory, BloodHound, NTDS hashes and Password Cracks correlation tool
https://github.com/helviojunior/knowsmore

[+] Startup parameters
command line: knowsmore --stats
module: stats
database file: knowsmore.db

[+] start time 2023-01-11 03:59:20
[?] General Statistics
+-------+----------------+-------+
| top | description | qty |
|-------+----------------+-------|
| 1 | Total Users | 95369 |
| 2 | Unique Hashes | 74299 |
| 3 | Cracked Hashes | 23177 |
| 4 | Cracked Users | 35078 |
+-------+----------------+-------+

[?] General Top 10 passwords
+-------+-------------+-------+
| top | password | qty |
|-------+-------------+-------|
| 1 | password | 1111 |
| 2 | 123456 | 824 |
| 3 | 123456789 | 815 |
| 4 | guest | 553 |
| 5 | qwerty | 329 |
| 6 | 12345678 | 277 |
| 7 | 111111 | 268 |
| 8 | 12345 | 202 |
| 9 | secret | 170 |
| 10 | sec4us | 165 |
+-------+-------------+-------+

[?] Top 10 weak passwords by company name similarity
+-------+--------------+---------+----------------------+-------+
| top | password | score | company_similarity | qty |
|-------+--------------+---------+----------------------+-------|
| 1 | company123 | 7024 | 80 | 1111 |
| 2 | Company123 | 5209 | 80 | 824 |
| 3 | company | 3674 | 100 | 553 |
| 4 | Company@10 | 2080 | 80 | 329 |
| 5 | company10 | 1722 | 86 | 268 |
| 6 | Company@2022 | 1242 | 71 | 202 |
| 7 | Company@2024 | 1015 | 71 | 165 |
| 8 | Company2022 | 978 | 75 | 157 |
| 9 | Company10 | 745 | 86 | 116 |
| 10 | Company21 | 707 | 86 | 110 |
+-------+--------------+---------+----------------------+-------+

Installation

Simple

pip3 install --upgrade knowsmore

Note: If you face problems with dependency versions, check the virtual env file.

Execution Flow

There is no obligatory order for importing data, but to get better correlation we suggest the following execution flow:

  1. Create database file
  2. Import BloodHound files
    1. Domains
    2. GPOs
    3. OUs
    4. Groups
    5. Computers
    6. Users
  3. Import NTDS file
  4. Import cracked hashes
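
Taken together, the flow maps to the following commands, each of which is detailed in the sections below:

knowsmore --create-db
knowsmore --bloodhound --import-data ~/Desktop/client.zip
knowsmore --ntlm-hash --import-ntds ~/Desktop/client_name.ntds
knowsmore --ntlm-hash --company clientCompanyName --import-cracked ~/Desktop/cracked.txt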

Create database file

All data is stored in a SQLite database.

knowsmore --create-db

Importing BloodHound files

We can import full BloodHound files into KnowsMore, correlate the data, and sync it to the Neo4j BloodHound database. This way you can use KnowsMore alone to import JSON files directly into the Neo4j database instead of using the extremely slow BloodHound user interface.

# Bloodhound ZIP File
knowsmore --bloodhound --import-data ~/Desktop/client.zip

# Bloodhound JSON File
knowsmore --bloodhound --import-data ~/Desktop/20220912105336_users.json

Note: KnowsMore can import both BloodHound ZIP and JSON files, but we recommend the ZIP file, because KnowsMore will automatically order the files for better data correlation.

Sync data to Neo4j BloodHound database

# Bloodhound ZIP File
knowsmore --bloodhound --sync 10.10.10.10:7687 -d neo4j -u neo4j -p 12345678

Note: The KnowsMore implementation of the bloodhound-importer was inspired by the Fox-IT BloodHound Import implementation. We implemented several changes to save all data in the KnowsMore SQLite database and then do an incremental sync to the Neo4j database. This strategy brings several benefits, such as being at least 10x faster than the original BloodHound user interface.

Importing NTDS file

Option 1

Note: Import hashes and clear-text passwords directly from NTDS.dit and SYSTEM registry

knowsmore --secrets-dump -target LOCAL -ntds ~/Desktop/ntds.dit -system ~/Desktop/SYSTEM

Option 2

Note: First use secretsdump to extract the NTDS hashes with the command below:

secretsdump.py -ntds ntds.dit -system system.reg -hashes lmhash:ntlmhash LOCAL -outputfile ~/Desktop/client_name

After that, import the generated file:

knowsmore --ntlm-hash --import-ntds ~/Desktop/client_name.ntds

Generating a custom wordlist

knowsmore --word-list -o "~/Desktop/Wordlist/my_custom_wordlist.txt" --batch --name company_name

Importing cracked hashes

Cracking hashes

First extract all hashes to a txt file

# Extract NTLM hashes to file
knowsmore --ntlm-hash --export-hashes "~/Desktop/ntlm_hash.txt"

# Or, extract NTLM hashes from NTDS file
cat ~/Desktop/client_name.ntds | cut -d ':' -f4 > ntlm_hashes.txt

In order to crack the hashes, I usually use hashcat with the commands below:

# Wordlist attack
hashcat -m 1000 -a 0 -O -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" "~/Desktop/Wordlist/*"

# Mask attack
hashcat -m 1000 -a 3 -O --increment --increment-min 4 -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" ?a?a?a?a?a?a?a?a

Importing hashcat output file

knowsmore --ntlm-hash --company clientCompanyName --import-cracked ~/Desktop/cracked.txt

Note: Change clientCompanyName to the name of your company.

Wipe sensitive data

As passwords and their hashes are extremely sensitive data, there is a module to replace the clear-text passwords and their respective hashes.

Note: This command will keep all generated statistics and imported user data.

knowsmore --wipe

BloodHound Mark as owned

One User

During an assessment you may find (in several ways) a user's password, which you can add to the KnowsMore database:

knowsmore --user-pass --username administrator --password Sec4US@2023

# or adding the company name

knowsmore --user-pass --username administrator --password Sec4US@2023 --company sec4us

Integrate all cracked credentials into the Neo4j BloodHound database

knowsmore --bloodhound --mark-owned 10.10.10.10 -d neo4j -u neo4j -p 123456

For a remote connection, make sure that the Neo4j database server accepts remote connections. Change the line below in the config file /etc/neo4j/neo4j.conf and restart the service.

server.bolt.listen_address=0.0.0.0:7687


HBSQLI - Automated Tool For Testing Header Based Blind SQL Injection

By: Zion3R


HBSQLI is an automated command-line tool for performing header-based blind SQL injection attacks on web applications. It automates the process of detecting header-based blind SQL injection vulnerabilities, making it easier for security researchers, penetration testers and bug bounty hunters to test the security of web applications.


Disclaimer:

This tool is intended for authorized penetration testing and security assessment purposes only. Any unauthorized or malicious use of this tool is strictly prohibited and may result in legal action.

The authors and contributors of this tool do not take any responsibility for any damage, legal issues, or other consequences caused by the misuse of this tool. The use of this tool is solely at the user's own risk.

Users are responsible for complying with all applicable laws and regulations regarding the use of this tool, including but not limited to, obtaining all necessary permissions and consents before conducting any testing or assessment.

By using this tool, users acknowledge and accept these terms and conditions and agree to use this tool in accordance with all applicable laws and regulations.

Installation

Install HBSQLI with the following steps:

$ git clone https://github.com/SAPT01/HBSQLI.git
$ cd HBSQLI
$ pip3 install -r requirements.txt

Usage/Examples

usage: hbsqli.py [-h] [-l LIST] [-u URL] -p PAYLOADS -H HEADERS [-v]

options:
-h, --help show this help message and exit
-l LIST, --list LIST To provide list of urls as an input
-u URL, --url URL To provide single url as an input
-p PAYLOADS, --payloads PAYLOADS
To provide payload file having Blind SQL Payloads with delay of 30 sec
-H HEADERS, --headers HEADERS
To provide header file having HTTP Headers which are to be injected
-v, --verbose Run on verbose mode

For Single URL:

$ python3 hbsqli.py -u "https://target.com" -p payloads.txt -H headers.txt -v

For List of URLs:

$ python3 hbsqli.py -l urls.txt -p payloads.txt -H headers.txt -v

Modes

There are two modes: verbose, which shows the whole process and the status of each test performed, and non-verbose, which just prints the vulnerable URLs on the screen. To enable verbose mode, add -v to your command.

Notes

  • You can use the provided payload file or a custom payload file; just remember that the delay in each payload should be set to 30 seconds (see the example after these notes).

  • You can use the provided headers file or add more custom headers to that file as needed.
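
As an illustration only (this is a typical MySQL time-based payload, not one taken from the project's payload file), a payload with the required 30-second delay might look like:

'XOR(if(now()=sysdate(),sleep(30),0))XOR'Z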



ICMPWatch - ICMP Packet Sniffer

By: Zion3R


ICMP Packet Sniffer is a Python program that allows you to capture and analyze ICMP (Internet Control Message Protocol) packets on a network interface. It provides detailed information about the captured packets, including source and destination IP addresses, MAC addresses, ICMP type, payload data, and more. The program can also store the captured packets in a SQLite database and save them in a pcap format.


Features

  • Capture and analyze ICMP Echo Request and Echo Reply packets.
  • Display detailed information about each ICMP packet, including source and destination IP addresses, MAC addresses, packet size, ICMP type, and payload content.
  • Save captured packet information to a text file.
  • Store captured packet information in an SQLite database.
  • Save captured packets to a PCAP file for further analysis.
  • Support for custom packet filtering based on source and destination IP addresses.
  • Colorful console output using ANSI escape codes.
  • User-friendly command-line interface.

Requirements

  • Python 3.7+
  • scapy 2.4.5 or higher
  • colorama 0.4.4 or higher

Installation

  1. Clone this repository:
git clone https://github.com/HalilDeniz/ICMPWatch.git
  2. Install the required dependencies:
pip install -r requirements.txt

Usage

python ICMPWatch.py [-h] [-v] [-t TIMEOUT] [-f FILTER] [-o OUTPUT] [--type {0,8}] [--src-ip SRC_IP] [--dst-ip DST_IP] -i INTERFACE [-db] [-c CAPTURE]
  • -v or --verbose: Show verbose packet details.
  • -t or --timeout: Sniffing timeout in seconds (default is 300 seconds).
  • -f or --filter: BPF filter for packet sniffing (default is "icmp").
  • -o or --output: Output file to save captured packets.
  • --type: ICMP packet type to filter (0: Echo Reply, 8: Echo Request).
  • --src-ip: Source IP address to filter.
  • --dst-ip: Destination IP address to filter.
  • -i or --interface: Network interface to capture packets (required).
  • -db or --database: Store captured packets in an SQLite database.
  • -c or --capture: Capture file to save packets in pcap format.

Press Ctrl+C to stop the sniffing process.

Examples

  • Capture ICMP packets on the "eth0" interface:
python icmpwatch.py -i eth0
  • Sniff ICMP traffic on interface "eth0" and save the results to a file:
python icmpwatch.py -i eth0 -o icmp_results.txt
  • Filtering by Source and Destination IP:
python icmpwatch.py -i eth0 --src-ip 192.168.1.10 --dst-ip 192.168.1.20
  • Filtering ICMP Echo Requests:
python icmpwatch.py -i eth0 --type 8
  • Saving Captured Packets
python icmpwatch.py -i eth0 -c captured_packets.pcap
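
These options can be combined in a single run; for instance, to capture only Echo Requests from one source, store them in the database, and also write a pcap file (all flags as documented above):

python icmpwatch.py -i eth0 --type 8 --src-ip 192.168.1.10 -db -c captured_packets.pcap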


Grepmarx - A Source Code Static Analysis Platform For AppSec Enthusiasts


Grepmarx is a web application providing a single platform to quickly understand, analyze and identify vulnerabilities in possibly large and unknown code bases.

Features

SAST (Static Analysis Security Testing) capabilities:

  • Multiple languages support: C/C++, C#, Go, HTML, Java, Kotlin, JavaScript, TypeScript, OCaml, PHP, Python, Ruby, Bash, Rust, Scala, Solidity, Terraform, Swift
  • Multiple frameworks support: Spring, Laravel, Symfony, Django, Flask, Node.js, jQuery, Express, Angular...
  • 1600+ existing analysis rules
  • Easily extend analysis rules using Semgrep syntax: https://semgrep.dev/editor
  • Manage rules in rule packs to tailor code scanning

SCA (Software Composition Analysis) capabilities:

  • Multiple package-dependency formats support: NPM, Maven, Gradle, Composer, pip, Gopkg, Gem, Cargo, NuPkg, CSProj, PubSpec, Cabal, Mix, Conan, Clojure, Docker, GitHub Actions, Jenkins HPI, Kubernetes
  • SBOM (Software Bill-of-Materials) generation (CycloneDX compliant)

Extra

  • Analysis workbench designed to efficiently browse scan results
  • Scan code that doesn't compile
  • Comprehensive LOC (Lines of Code) counter
  • Inspector: automatic application features discovery
  • ... and a Dark Mode

Screenshots

Scan customization Β· Analysis workbench Β· Rule pack edition

Execution

Grepmarx is provided with a configuration to be executed in Docker and Gunicorn.

Docker execution


Make sure you have docker-compose installed on the system, and that the docker daemon is running. The application can then easily be executed in a docker container. The steps:

Get the code

$ git clone https://github.com/Orange-Cyberdefense/grepmarx.git
$ cd grepmarx

Start the app in Docker

$ sudo docker-compose pull && sudo docker-compose build && sudo docker-compose up -d

Visit http://localhost:5000 in your browser. The app should be up & running.

Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.

Gunicorn


Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX. A supervisor configuration file is provided to start it along with the required Celery worker (used for security scans queuing).

Install using pip

$ pip install gunicorn supervisor

Start the app using gunicorn binary

$ supervisord -c supervisord.conf

Visit http://localhost:8001 in your browser. The app should be up & running.

Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.

Build from sources

Get the code

$ git clone https://github.com/Orange-Cyberdefense/grepmarx.git
$ cd grepmarx

Install virtualenv modules

$ virtualenv env
$ source env/bin/activate

Install Python modules

$ # SQLite Database (Development)
$ pip3 install -r requirements.txt
$ # OR with PostgreSQL connector (Production)
$ # pip install -r requirements-pgsql.txt

Install additional requirements

# Dependency scan (cdxgen / depscan) requirements
$ sudo apt install npm openjdk-17-jdk maven gradle golang composer
$ sudo npm install -g @cyclonedx/cdxgen
$ pip install appthreat-depscan

A Redis server is required to queue security scans. Install the redis package with your favorite distro package manager, then:

$ redis-server

Set the FLASK_APP environment variable

$ export FLASK_APP=run.py
$ # Set up the DEBUG environment
$ # export FLASK_ENV=development

Start the celery worker process

$ celery -A app.celery_worker.celery worker --pool=prefork --loglevel=info --detach

Start the application (development mode)

$ # --host=0.0.0.0 - expose the app on all network interfaces (default 127.0.0.1)
$ # --port=5000 - specify the app port (default 5000)
$ flask run --host=0.0.0.0 --port=5000

Access grepmarx in browser: http://127.0.0.1:5000/

Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.

Credits & Links



Grepmarx - Provided by Orange Cyberdefense.



Waf-Bypass - Check Your WAF Before An Attacker Does


WAF Bypass Tool is an open source tool to analyze the security of any WAF for false positives and false negatives using predefined and customizable payloads. Check your WAF before an attacker does. WAF Bypass Tool is developed by the Nemesida WAF team with the participation of the community.


How to run

It is forbidden to use this tool for illegal purposes. Don't break the law. We are not responsible for possible risks associated with the use of this software.

Run from Docker

The latest waf-bypass is always available via Docker Hub. It can easily be pulled via the following commands:

# docker pull nemesida/waf-bypass
# docker run nemesida/waf-bypass --host='example.com'

Run source code from GitHub

# git clone https://github.com/nemesida-waf/waf_bypass.git /opt/waf-bypass/
# python3 -m pip install -r /opt/waf-bypass/requirements.txt
# python3 /opt/waf-bypass/main.py --host='example.com'

Options

  • '--proxy' (--proxy='http://proxy.example.com:3128') - allows you to specify a proxy to connect through instead of connecting to the host directly.

  • '--header' (--header 'Authorization: Basic YWRtaW46YWRtaW4=' --header 'X-TOKEN: ABCDEF') - allows you to specify an HTTP header to send with all requests (e.g. for authentication). Multiple use is allowed.

  • '--user-agent' (--user-agent 'MyUserAgent 1/1') - allows you to specify the HTTP User-Agent to send with all requests, except when the User-Agent is set by the payload ("USER-AGENT").

  • '--block-code' (--block-code='403' --block-code='222') - allows you to specify the HTTP status code to expect when a request is blocked by the WAF (default is 403). Multiple use is allowed.

  • '--threads' (--threads=15) - allows you to specify the number of parallel scan threads (default is 10).

  • '--timeout' (--timeout=10) - allows you to specify a request processing timeout in seconds (default is 30).

  • '--json-format' - displays the results in JSON format (useful for integrating the tool with security platforms).

  • '--details' - displays the False Positive and False Negative payloads. Not available in JSON format.

  • '--exclude-dir' - excludes a payload directory (--exclude-dir='SQLi' --exclude-dir='XSS'). Multiple use is allowed.
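
Several of these options can be combined in a single run, for example:

# python3 /opt/waf-bypass/main.py --host='example.com' --proxy='http://proxy.example.com:3128' --block-code='403' --block-code='222' --threads=15 --details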

Payloads

Depending on the purpose, payloads are located in the appropriate folders:

  • FP - False Positive payloads
  • API - API testing payloads
  • CM - Custom HTTP Method payloads
  • GraphQL - GraphQL testing payloads
  • LDAP - LDAP Injection etc. payloads
  • LFI - Local File Include payloads
  • MFD - multipart/form-data payloads
  • NoSQLi - NoSQL injection payloads
  • OR - Open Redirect payloads
  • RCE - Remote Code Execution payloads
  • RFI - Remote File Inclusion payloads
  • SQLi - SQL injection payloads
  • SSI - Server-Side Includes payloads
  • SSRF - Server-side request forgery payloads
  • SSTI - Server-Side Template Injection payloads
  • UWA - Unwanted Access payloads
  • XSS - Cross-Site Scripting payloads

Write your own payloads

When compiling a payload, the following zones, method and options are used:

  • URL - request's path
  • ARGS - request's query
  • BODY - request's body
  • COOKIE - request's cookie
  • USER-AGENT - request's user-agent
  • REFERER - request's referer
  • HEADER - request's header
  • METHOD - request's method
  • BOUNDARY - specifies the contents of the request's boundary. Applicable only to payloads in the MFD directory.
  • ENCODE - specifies the type of payload encoding (Base64, HTML-ENTITY, UTF-16) in addition to the encoding for the payload. Multiple values are indicated with a space (e.g. Base64 UTF-16). Applicable only to the ARGS, BODY, COOKIE and HEADER zones. Not applicable to payloads in the API and MFD directories. Not compatible with the JSON option.
  • JSON - specifies that the request's body should be in JSON format
  • BLOCKED - specifies that the request should be blocked (FN testing) or not (FP)

Except for some cases described below, the zones are independent of each other and are tested separately (thus, if 2 zones are specified, the script will send 2 requests, checking each zone in turn).

For the zones you can use the %RND% suffix, which generates an arbitrary string of 6 letters and numbers (e.g.: param%RND%=my_payload or param=%RND% OR A%RND%B).

You can create your own payloads. To do this, create your own folder in the '/payload/' directory, or place the payload in an existing one (e.g. '/payload/XSS'). The allowed data format is JSON.
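
As a sketch only (the exact JSON key names below are an assumption for illustration, not taken from the project documentation), a simple payload entry combining the zones above might look like:

{"payload": {"ARGS": "' OR 1=1 --", "METHOD": "GET", "BLOCKED": true}}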

API directory

API testing payloads located in this directory are automatically appended with a header 'Content-Type: application/json'.

MFD directory

For MFD (multipart/form-data) payloads located in this directory, you must specify the BODY (required) and BOUNDARY (optional). If BOUNDARY is not set, it will be generated automatically; in this case, only the payload itself must be specified in the BODY, without additional data such as '... Content-Disposition: form-data; ...'.

If a BOUNDARY is specified, then the content of the BODY must be formatted in accordance with the RFC; this allows multiple payloads in the BODY, separated by the BOUNDARY.

Other zones are allowed in this directory (e.g. URL, ARGS, etc.). Regardless of the zone, the header 'Content-Type: multipart/form-data; boundary=...' will be added to all requests.



Wifi_Db - Script To Parse Aircrack-ng Captures To A SQLite Database


Script to parse Aircrack-ng captures into a SQLite database and extract useful information like handshakes (in hashcat 22000 format), MGT identities, interesting relations between APs, clients and their probes, WPS information, and a global view of all the APs seen.

(wifi_db ASCII banner)
by r4ulcl

Features

  • Displays if a network is cloaked (hidden) even if you have the ESSID.
  • Shows a detailed table of connected clients and their respective APs.
  • Identifies client probes connected to APs, providing insight into potential security risks using rogue APs.
  • Extracts handshakes for use with hashcat, facilitating password cracking.
  • Displays identity information from enterprise networks, including the EAP method used for authentication.
  • Generates a summary of each AP group by ESSID and encryption, giving an overview of the security status of nearby networks.
  • Provides a WPS info table for each AP, detailing information about the Wi-Fi Protected Setup configuration of the network.
  • Logs all instances when a client or AP has been seen with the GPS data and timestamp, enabling location-based analysis.
  • Upload captures by folder or file. This option supports the use of wildcards (*) to select multiple files or folders.
  • Docker version in Docker Hub to avoid dependencies.
  • Obfuscated mode for demonstrations and conferences.
  • Possibility to add static GPS data.

Install

From DockerHub (RECOMMENDED)

docker pull r4ulcl/wifi_db

Manual installation

Debian based systems (Ubuntu, Kali, Parrot, etc.)

Dependencies:

  • python3
  • python3-pip
  • tshark
  • hcxtools
sudo apt install tshark
sudo apt install python3 python3-pip

git clone https://github.com/ZerBea/hcxtools.git
cd hcxtools
make
sudo make install
cd ..

Installation

git clone https://github.com/r4ulcl/wifi_db
cd wifi_db
pip3 install -r requirements.txt

Arch

Dependencies:

  • python3
  • python3-pip
  • tshark
  • hcxtools
sudo pacman -S wireshark-qt
sudo pacman -S python-pip python

git clone https://github.com/ZerBea/hcxtools.git
cd hcxtools
make
sudo make install
cd ..

Installation

git clone https://github.com/r4ulcl/wifi_db
cd wifi_db
pip3 install -r requirements.txt

Usage

Scan with airodump-ng

Run airodump-ng saving the output with -w:

sudo airodump-ng wlan0mon -w scan --manufacturer --wps --gpsd

Create the SQLite database using Docker

#Folder with captures
CAPTURESFOLDER=/home/user/wifi

# Output database
touch db.SQLITE

docker run -t -v $PWD/db.SQLITE:/db.SQLITE -v $CAPTURESFOLDER:/captures/ r4ulcl/wifi_db
  • -v $PWD/db.SQLITE:/db.SQLITE: to save the output in the db.SQLITE file in the current folder
  • -v $CAPTURESFOLDER:/captures/: to share the captures folder with the Docker container

Create the SQLite database using manual installation

Once the capture is created, we can create the database by importing the capture. To do this, pass the name of the capture without the file extension.

python3 wifi_db.py scan-01

If we have multiple captures, we can load the folder containing them directly. With -d we can rename the output database:

python3 wifi_db.py -d database.sqlite scan-folder

Open database

The database can be opened with any SQLite client (e.g. sqlitebrowser).

Below is an example of a ProbeClientsConnected table.

Arguments

usage: wifi_db.py [-h] [-v] [--debug] [-o] [-t LAT] [-n LON] [--source [{aircrack-ng,kismet,wigle}]] [-d DATABASE] capture [capture ...]

positional arguments:
capture capture folder or file with extensions .csv, .kismet.csv, .kismet.netxml, or .log.csv. If no extension is provided, all types will
be added. This option supports the use of wildcards (*) to select multiple files or folders.

options:
-h, --help show this help message and exit
-v, --verbose increase output verbosity
--debug increase output verbosity to debug
-o, --obfuscated Obfuscate MAC and BSSID with AA:BB:CC:XX:XX:XX-defghi (WARNING: replaces the whole database)
-t LAT, --lat LAT insert a fake latitude in the new elements
-n LON, --lon LON insert a fake longitude in the new elements
--source [{aircrack-ng,kismet,wigle}]
source from capture data (default: aircrack-ng)
-d DATABASE, --database DATABASE
output database; if it exists, append to the given database (default name: db.SQLITE)

Kismet

TODO

Wigle

TODO

Database

wifi_db contains several tables to store information related to wireless network traffic captured by airodump-ng. The tables are as follows:

  • AP: This table stores information about the access points (APs) detected during the captures, including their MAC address (bssid), network name (ssid), whether the network is cloaked (cloaked), manufacturer (manuf), channel (channel), frequency (frequency), carrier (carrier), encryption type (encryption), and total packets received from this AP (packetsTotal). The table uses the MAC address as a primary key.

  • Client: This table stores information about the wireless clients detected during the captures, including their MAC address (mac), network name (ssid), manufacturer (manuf), device type (type), and total packets received from this client (packetsTotal). The table uses the MAC address as a primary key.

  • SeenClient: This table stores information about the clients seen during the captures, including their MAC address (mac), time of detection (time), tool used to capture the data (tool), signal strength (signal_rssi), latitude (lat), longitude (lon), altitude (alt). The table uses the combination of MAC address and detection time as a primary key, and has a foreign key relationship with the Client table.

  • Connected: This table stores information about the wireless clients that are connected to an access point, including the MAC address of the access point (bssid) and the client (mac). The table uses a combination of access point and client MAC addresses as a primary key, and has foreign key relationships with both the AP and Client tables.

  • WPS: This table stores information about access points that have Wi-Fi Protected Setup (WPS) enabled, including their MAC address (bssid), network name (wlan_ssid), WPS version (wps_version), device name (wps_device_name), model name (wps_model_name), model number (wps_model_number), configuration methods (wps_config_methods), and keypad configuration methods (wps_config_methods_keypad). The table uses the MAC address as a primary key, and has a foreign key relationship with the AP table.

  • SeenAp: This table stores information about the access points seen during the captures, including their MAC address (bssid), time of detection (time), tool used to capture the data (tool), signal strength (signal_rssi), latitude (lat), longitude (lon), altitude (alt), and timestamp (bsstimestamp). The table uses the combination of access point MAC address and detection time as a primary key, and has a foreign key relationship with the AP table.

  • Probe: This table stores information about the probes sent by clients, including the client MAC address (mac), network name (ssid), and time of probe (time). The table uses a combination of client MAC address and network name as a primary key, and has a foreign key relationship with the Client table.

  • Handshake: This table stores information about the handshakes captured during the captures, including the MAC address of the access point (bssid), the client (mac), the file name (file), and the hashcat format (hashcat). The table uses a combination of access point and client MAC addresses, and file name as a primary key, and has foreign key relationships with both the AP and Client tables.

  • Identity: This table represents EAP (Extensible Authentication Protocol) identities and methods used in wireless authentication. The bssid and mac fields are foreign keys that reference the AP and Client tables, respectively. Other fields include the identity and method used in the authentication process.

Views

  • ProbeClients: This view selects the MAC address of the probe, the manufacturer and type of the client device, the total number of packets transmitted by the client, and the SSID of the probe. It joins the Probe and Client tables on the MAC address and orders the results by SSID.

  • ConnectedAP: This view selects the BSSID of the connected access point, the SSID of the access point, the MAC address of the connected client device, and the manufacturer of the client device. It joins the Connected, AP, and Client tables on the BSSID and MAC address, respectively, and orders the results by BSSID.

  • ProbeClientsConnected: This view selects the BSSID and SSID of the connected access point, the MAC address of the probe, the manufacturer and type of the client device, the total number of packets transmitted by the client, and the SSID of the probe. It joins the Probe, Client, and ConnectedAP tables on the MAC address of the probe, and filters the results to exclude probes that are connected to the same SSID that they are probing. The results are ordered by the SSID of the probe.

  • HandshakeAP: This view selects the BSSID of the access point, the SSID of the access point, the MAC address of the client device that performed the handshake, the manufacturer of the client device, the file containing the handshake, and the hashcat output. It joins the Handshake, AP, and Client tables on the BSSID and MAC address, respectively, and orders the results by BSSID.

  • HandshakeAPUnique: This view selects the BSSID of the access point, the SSID of the access point, the MAC address of the client device that performed the handshake, the manufacturer of the client device, the file containing the handshake, and the hashcat output. It joins the Handshake, AP, and Client tables on the BSSID and MAC address, respectively, and filters the results to exclude handshakes that were not cracked by hashcat. The results are grouped by SSID and ordered by BSSID.

  • IdentityAP: This view selects the BSSID of the access point, the SSID of the access point, the MAC address of the client device that performed the identity request, the manufacturer of the client device, the identity string, and the method used for the identity request. It joins the Identity, AP, and Client tables on the BSSID and MAC address, respectively, and orders the results by BSSID.

  • SummaryAP: This view selects the SSID, the count of access points broadcasting the SSID, the encryption type, the manufacturer of the access point, and whether the SSID is cloaked. It groups the results by SSID and orders them by the count of access points in descending order.
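
Once captures are imported, these views can be queried with any SQLite client; for example, with the sqlite3 command-line shell against the default database name:

sqlite3 db.SQLITE "SELECT * FROM ProbeClientsConnected LIMIT 10;"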

TODO

  • Aircrack-ng

  • All in 1 file (and separately)

  • Kismet

  • Wigle

  • install

  • parse all files in folder -f --folder

  • Fix Extended errors, tildes, etc (fixed in aircrack-ng 1.6)

  • Support bash multi files: "capture*-1*"

  • Script to delete client or AP from DB (mac). - (Whitelist)

  • Whitelist to avoid adding MACs to the DB (file whitelist.txt: add MACs, create DB)

  • Overwrite if there is new info (old ESSID='', New ESSID='WIFI')

  • Table Handshakes and PMKID

  • Hashcat hash format 22000

  • Table files, if file exists skip (full path)

  • Get HTTP POST passwords

  • DNS queries


This program is a continuation of a part of: https://github.com/T1GR3S/airo-heat

Author

  • RaΓΊl Calvo Laorden (@r4ulcl)

License

GNU General Public License v3.0



SQLiDetector - Helps You To Detect SQL Injection "Error Based" By Sending Multiple Requests With 14 Payloads And Checking For 152 Regex Patterns For Different Databases


Simple Python script, supported with a BurpBounty profile, that helps you to detect SQL injection "error based" by sending multiple requests with 14 payloads and checking for 152 regex patterns for different databases.

+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
| S|Q|L|i| |D|e|t|e|c|t|o|r|
| Coded By: Eslam Akl @eslam3kll & Khaled Nassar @knassar702
| Version: 1.0.0
| Blog: eslam3kl.medium.com
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-


Description

The main idea of the tool is to scan for error-based SQL injection by using different payloads like:

'123
''123
`123
")123
"))123
`)123
`))123
'))123
')123"123
[]123
""123
'"123
"'123
\123

And matching against 152 error regex patterns for different databases.
Source: https://github.com/sqlmapproject/sqlmap/blob/master/data/xml/errors.xml

How does it work?

It's very simple, just organize your steps as follows

  1. Use your subdomain grabber script or tools.
  2. Pass all collected subdomains to httpx or httprobe to get only live subs.
  3. Use your links and URLs tools to grab all waybackurls like waybackurls, gau, gauplus, etc.
  4. Use URO tool to filter them and reduce the noise.
  5. Grep to get all the links that contain parameters only. You can use Grep or GF tool.
  6. Pass the final URLs file to the tool, and it will test them.

The final schema of the URLs that you pass to the tool must look like these:

https://aykalam.com?x=test&y=fortest
http://test.com?parameter=ayhaga
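
As a sketch of steps 2-5 (the flags are the usual ones for these tools, and gf assumes a SQLi pattern set is installed; none of this comes from this README):

cat subdomains.txt | httpx -silent | gau | uro | gf sqli > urls.txt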

Installation and Usage

Just run the following command to install the required libraries.

~/eslam3kl/SQLiDetector# pip3 install -r requirements.txt 

To run the tool itself.

# cat urls.txt
http://testphp.vulnweb.com/artists.php?artist=1

# python3 sqlidetector.py -h
usage: sqlidetector.py [-h] -f FILE [-w WORKERS] [-p PROXY] [-t TIMEOUT] [-o OUTPUT]
A simple tool to detect SQL errors
optional arguments:
-h, --help show this help message and exit
-f FILE, --file FILE File of the urls
-w WORKERS, --workers WORKERS Number of threads
-p PROXY, --proxy PROXY Proxy host
-t TIMEOUT, --timeout TIMEOUT Connection timeout
-o OUTPUT, --output OUTPUT Output file

# python3 sqlidetector.py -f urls.txt -w 50 -o output.txt -t 10

BurpBounty Module

I've created a BurpBounty profile that uses the same payloads and injects them at multiple positions, like:

  • Parameter name
  • Parameter value
  • Headers
  • Paths

I think it's more effective and will be helpful for POST requests, which you can't test using the Python script.

How does it test the parameter?

What's the difference between this tool and any other one? If we have a link like https://example.com?file=aykalam&username=eslam3kl, we have 2 parameters, so the tool creates 2 possible vulnerable URLs.

  1. It will work for every payload like the following:
https://example.com?file=123'&username=eslam3kl
https://example.com?file=aykalam&username=123'
  2. It will send a request for every link and check if one of the patterns exists, using regex.
  3. For any vulnerable link, it will save it to a separate file for every process.

Upcoming updates

  • Output json option.
  • Adding proxy option.
  • Adding threads to increase the speed.
  • Adding progress bar.
  • Adding more payloads.
  • Adding BurpBounty Profile.
  • Inject the payloads in the parameter name itself.

If you want to contribute, feel free to do that. You're welcome :)

Thanks to

Thanks to Mohamed El-Khayat and Orwa for the amazing payloads and ideas. Follow them and you will learn more:

https://twitter.com/Mohamed87Khayat
https://twitter.com/GodfatherOrwa

Stay in touch <3

LinkedIn | Blog | Twitter



Fortinet and Zoho Urge Customers to Patch Enterprise Software Vulnerabilities

Fortinet has warned of a high-severity flaw affecting multiple versions of FortiADC application delivery controller that could lead to the execution of arbitrary code. "An improper neutralization of special elements used in an OS command vulnerability in FortiADC may allow an authenticated attacker with access to the web GUI to execute unauthorized code or commands via specifically crafted HTTP

Neton - Tool For Getting Information From Internet Connected Sandboxes


Neton is a tool for getting information from Internet-connected sandboxes. It is composed of an agent and a web interface that displays the collected information.
The Neton agent gets information from the systems on which it runs and exfiltrates it via HTTPS to the web server.

Some of the information it collects:

  • Operating system and hardware information
  • Find files on mounted drives
  • List unsigned Microsoft drivers
  • Run SharpEDRChecker
  • Run Pafish
  • Run Al-Khaser
  • Detect hooks
  • Take screenshots of the desktop

All this information can be used to improve Red Team artifacts or to learn how sandboxes work and improve them.


Images

Deployment

NetonWeb

  1. Install (with virtualenv):
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
  2. Configure the database:
python3 manage.py migrate
python3 manage.py makemigrations core
python3 manage.py migrate core
  3. Create a user:
python3 manage.py createsuperuser

Launch (test)

python3 manage.py runserver

Launch (prod)

  • Generate the certificates and store them in the certs folder:
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout server.key -out server.crt

Launch gunicorn:

./launch_prod.sh

Agent

Build solution with Visual Studio. The agent configuration can be done from the Program.cs class.

  • url variable: URL where the information will be exfiltrated (NetonWeb's).
  • sandboxId variable: identifier of the sandbox where the samples are uploaded.
  • wave variable: a way of organising the different times the samples are sent.

Sample data

In the sample data folder there is a sqlite database with several samples collected from the following services:

  • Virustotal
  • Tria.ge
  • Metadefender
  • Hybrid Analysis
  • Any.run
  • Intezer Analyze
  • Pikker
  • AlienVault OTX
  • Threat.Zone

To access the sample information, copy the sqlite file to the NetonWeb folder and run the application.

Credentials:

  • User: raccoon
  • Password: jAmb.Abj3.j11pmMa




Researchers Reported Critical SQLi and Access Flaws in Zendesk Analytics Service

Cybersecurity researchers have disclosed details of now-patched flaws in Zendesk Explore that could have been exploited by an attacker to gain unauthorized access to information from customer accounts that have the feature turned on. "Before it was patched, the flaw would have allowed threat actors to access conversations, email addresses, tickets, comments, and other information from Zendesk

22-Year-Old Vulnerability Reported in Widely Used SQLite Database Library

A high-severity vulnerability has been disclosed in the SQLite database library, which was introduced as part of a code change dating all the way back to October 2000 and could enable attackers to crash or control programs. Tracked as CVE-2022-35737 (CVSS score: 7.5), the 22-year-old issue affects SQLite versions 1.0.12 through 3.39.1, and has been addressed in version 3.39.2 released on July 21

Pinecone - A WLAN Red Team Framework


Pinecone is a WLAN network auditing tool, suitable for red team usage. It is extensible via modules, and it is designed to run on Debian-based operating systems. Pinecone is specially oriented to be used with a Raspberry Pi, as a portable wireless auditing box.

This tool is designed for educational and research purposes only. Only use it with explicit permission.


Installation

For running Pinecone, you need a Debian-based operating system (it has been tested on Raspbian, Raspberry Pi Desktop and Kali Linux). Pinecone has the following requirements:

  • Python 3.5+. Your distribution probably comes with Python 3 already installed; if not, it can be installed using apt-get install python3.
  • dnsmasq (tested with version 2.76). Can be installed using apt-get install dnsmasq.
  • hostapd-wpe (tested with version 2.6). Can be installed using apt-get install hostapd-wpe. If your distribution repository does not have a hostapd-wpe package, you can either try to install it using a Kali Linux repository pre-compiled package, or compile it from its source code.

After installing the necessary packages, you can install the Python packages requirements for Pinecone using pip3 install -r requirements.txt in the project root folder.

Usage

For starting Pinecone, execute python3 pinecone.py from within the project root folder:

root@kali:~/pinecone# python pinecone.py 
[i] Database file: ~/pinecone/db/database.sqlite
pinecone >

Pinecone is controlled via a Metasploit-like command-line interface. You can type help to get the list of available commands, or help 'command' to get more information about a specific command:

pinecone > help

Documented commands (type help <topic>):
========================================
alias help load pyscript set shortcuts use
edit history py quit shell unalias

Undocumented commands:
======================
back run stop

pinecone > help use
Usage: use module [-h]

Interact with the specified module.

positional arguments:
module module ID

optional arguments:
-h, --help show this help message and exit

Use the command use 'moduleID' to activate a Pinecone module. You can use Tab auto-completion to see the list of current loaded modules:

pinecone > use 
attack/deauth daemon/hostapd-wpe report/db2json scripts/infrastructure/ap
daemon/dnsmasq discovery/recon scripts/attack/wpa_handshake
pinecone > use discovery/recon
pcn module(discovery/recon) >

Every module has options, which can be seen by typing help run or run --help when a module is activated. Most modules have default values for their options (check them before running):

pcn module(discovery/recon) > help run
usage: run [-h] [-i INTERFACE]

optional arguments:
-h, --help show this help message and exit
-i INTERFACE, --iface INTERFACE
monitor mode capable WLAN interface (default: wlan0)

When a module is activated, you can use the run [options...] command to start its functionality. The modules provide feedback of their execution state:

pcn script(attack/wpa_handshake) > run -s TEST_SSID
[i] Sending 64 deauth frames to all clients from AP 00:11:22:33:44:55 on channel 1...
................................................................
Sent 64 packets.
[i] Monitoring for 10 secs on channel 1 WPA handshakes between all clients and AP 00:11:22:33:44:55...

If a module runs in the background (for example, scripts/infrastructure/ap), you can stop it using the stop command while it is running.

When you are done using a module, you can deactivate it by using the back command. You can also activate another module issuing the use command again.

Shell commands may be executed with the command shell or the ! shortcut:

pinecone > !ls
LICENSE modules module_template.py pinecone pinecone.py README.md requirements.txt TODO.md

Currently, Pinecone's reconnaissance SQLite database is stored in the db/ directory inside the project root folder. All the temporary files that Pinecone needs are stored in the tmp/ directory, also under the project root folder.



LiveTargetsFinder - Generates Lists Of Live Hosts And URLs For Targeting, Automating The Usage Of MassDNS, Masscan And Nmap To Filter Out Unreachable Hosts And Gather Service Information


Generates lists of live hosts and URLs for targeting, automating the usage of MassDNS, Masscan and nmap to filter out unreachable hosts

Given an input file of domain names, this script will automate the usage of MassDNS to filter out unresolvable hosts, and then pass the results on to Masscan to confirm that the hosts are reachable and on which ports. The script will then generate a list of full URLs to be used for further targeting (passing into tools like gobuster or dirsearch, or making HTTP requests), a list of reachable domain names, and a list of reachable IP addresses. As an optional last step, you can run an nmap version scan on this reduced host list, verifying that the earlier reachable hosts are up, and gathering service information from their open ports.


Overview

This script is especially useful for large domain sets, such as subdomain enumerations gathered from an apex domain with thousands of subdomains. With these large lists, an nmap scan would simply take too long. The goal here is to first use the less accurate, but much faster, MassDNS to quickly reduce the size of your input list by removing unresolvable domains. Then, Masscan will be able to take the output from MassDNS, and further confirm that the hosts are reachable, and on which ports. The script will then parse these results and generate lists of the live hosts discovered.

Now, the list of hosts should be reduced enough to be suitable for further scanning/testing. If you want to go a step further, you can tell the script to run an nmap scan on the list of reachable hosts, which should take a more reasonable amount of time with the shorter list of hosts. After running nmap, any false positives given from Masscan will be filtered out. Raw nmap output will be stored in the regular nmap XML format, and additional information from the version detection will be added to a SQLite database.
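Sketched as a manual pipeline, the script automates roughly the following (it invokes these tools itself with the hardcoded arguments listed in full under Usage below; the intermediate file names here are illustrative):

# 1. Quickly discard unresolvable domains with MassDNS
./massdns/bin/massdns -c 25 -o J -r ./massdns/lists/resolvers.txt -s 100 -w massdnsOutput -t A targetHosts

# 2. The script parses the MassDNS results into an IP list, then Masscan confirms reachability and open ports
./masscan/bin/masscan -iL ipFile -oD masscanOutput --open-only --max-rate 5000 -p80,443 --max-retries 10

# 3. (Optional) Version-scan the reduced host list with nmap
nmap --script http-server-header.nse,http-devframework.nse,http-headers -sV -T4 -p80,443 -oX output.xml -iL liveHosts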

Installation

If using the nmap scan option, this tool assumes that you already have nmap installed.

Note: Running the install script is only needed if you do not already have MassDNS and Masscan installed, or if you would like to reinstall them inside this repo. If you do not run the script, you can provide the paths to the respective executables as arguments. The script additionally expects that the resolvers list included with MassDNS be located at {massDNS_directory}/lists/resolvers.txt.

git clone https://github.com/allyomalley/LiveTargetsFinder.git
cd LiveTargetsFinder
sudo pip3 install -r requirements.txt

(OPTIONAL)

chmod +x install_deps.sh
./install_deps.sh

If you do not already have MassDNS and Masscan installed, and would prefer to install them yourself, see the documentation for instructions:

MassDNS

Masscan

I have only tested this script on macOS and Linux - the python script itself should work on a Windows machine, though I believe the installation for MassDNS and Masscan will differ.

Usage

python3 liveTargetsFinder.py [domainList] [options]
  • --target-list: Input file containing the list of domains, e.g. google.com. Required.
  • --massdns-path: Path to the MassDNS executable, if non-default. Default: ./massdns/bin/massdns. Optional.
  • --masscan-path: Path to the Masscan executable, if non-default. Default: ./masscan/bin/masscan. Optional.
  • --nmap: Run an nmap version detection scan on the gathered live hosts. Disabled by default. Optional.
  • --db-path: If using the --nmap option, the path to the database to write to or append to (created if it does not exist). Default: output/liveTargetsFinder.sqlite3. Optional.
  • Note that the Masscan and MassDNS settings are hardcoded inside liveTargetsFinder.py. Feel free to edit them (lines 87 + 97).
  • Since this tool was designed with very large lists in mind, I tweaked many of the settings to try to balance speed, accuracy, and network constraints - these can all be adjusted to suit your needs and bandwidth.
  • The default Masscan settings only scan ports 80 and 443.
  • On the MassDNS side, -s (--hashmap-size) in particular was chosen for performance reasons - you will likely be able to increase this.
    • Full MassDNS arguments:
      • -c 25 -o J -r ./massdns/lists/resolvers.txt -s 100 -w massdnsOutput -t A targetHosts
      • Documentation
  • Another setting of note is the --max-rate argument for Masscan - you will likely want to adjust this.
    • Full Masscan arguments:
      • -iL ipFile -oD masscanOutput --open-only --max-rate 5000 -p80,443 --max-retries 10
      • Documentation
  • The default nmap settings only scan ports 80 and 443, using -T4 timing and a few NSE scripts.
    • Full nmap arguments:
      • --script http-server-header.nse,http-devframework.nse,http-headers -sV -T4 -p80,443 -oX {output.xml}

Example

If you ran the install script:

python3 liveTargetsFinder.py --target-list victim_domains.txt

If you did NOT run the install script:

python3 liveTargetsFinder.py --target-list victim_domains.txt --massdns-path ../massdns/bin/massdns --masscan-path ../masscan/bin/masscan 

Perform an nmap scan and write to or append to the default DB path (output/liveTargetsFinder.sqlite3):

python3 liveTargetsFinder.py --target-list victim_domains.txt --nmap

Perform an nmap scan and write to or append to the specified database:

python3 liveTargetsFinder.py --target-list victim_domains.txt --nmap --db-path serviceinfo_victim.sqlite3
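If you want to inspect the resulting database directly, the sqlite3 CLI works fine; since the schema is not documented here, listing the tables and their definitions first is a safe start (default DB path assumed):

sqlite3 output/liveTargetsFinder.sqlite3 ".tables"
sqlite3 output/liveTargetsFinder.sqlite3 ".schema"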

Output

Input: victimDomains.txt

  • output/victimDomains_targetUrls.txt: List of reachable, live URLs (e.g. https://github.com, http://github.com)
  • output/victimDomains_domains_alive.txt: List of live domain names (e.g. github.com, google.com)
  • output/victimDomains_ips_alive.txt: List of live IP addresses (e.g. 10.1.0.200, 52.3.1.166)
  • Supplied or default DB path: SQLite database storing the live hosts and information about their running services
  • output/victimDomains_massdns.txt: Raw MassDNS output, in ndjson format
  • output/victimDomains_masscan.txt: Raw Masscan output, in ndjson format
  • output/victimDomains_nmap.txt: Raw nmap output, in XML format
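As noted in the overview, the URL list is ready to feed into other tooling. For example, a simple loop over the generated URLs with gobuster (the wordlist path is an assumption; substitute your own):

# Run a directory scan against each live URL produced by the script
while read -r url; do
  gobuster dir -u "$url" -w /usr/share/wordlists/dirb/common.txt
done < output/victimDomains_targetUrls.txt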

