mapXplore is a modular application that imports data extracted by sqlmap into a PostgreSQL or SQLite database.
Its main features are:
Automatic export of information stored in base64, such as:
Filter tables and columns by criteria.
git clone https://github.com/daniel2005d/mapXplore
cd mapXplore
pip install -r requirements.txt
It is a modular application, and consists of the following:
Allows loading a default configuration at the start of the program
python engine.py [--config config.json]
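After an import you can sanity-check the target database with a generic query. A minimal sketch for the SQLite case; the database path is an assumption, use whatever your configuration points to:

```python
import sqlite3

# Path to the SQLite database mapXplore was configured to write to (assumption).
conn = sqlite3.connect("mapxplore.db")

# List the imported tables and their row counts.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
for table in tables:
    count = conn.execute(f'SELECT COUNT(*) FROM "{table}"').fetchone()[0]
    print(f"{table}: {count} rows")

conn.close()
```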
KnowsMore officially supports Python 3.8+.
knowsmore --stats
This command will produce several statistics about the passwords, like the output below:
KnowsMore v0.1.4 by Helvio Junior
Active Directory, BloodHound, NTDS hashes and Password Cracks correlation tool
https://github.com/helviojunior/knowsmore
[+] Startup parameters
command line: knowsmore --stats
module: stats
database file: knowsmore.db
[+] start time 2023-01-11 03:59:20
[?] General Statistics
+-------+----------------+-------+
| top | description | qty |
|-------+----------------+-------|
| 1 | Total Users | 95369 |
| 2 | Unique Hashes | 74299 |
| 3 | Cracked Hashes | 23177 |
| 4 | Cracked Users | 35078 |
+-------+----------------+-------+
[?] General Top 10 passwords
+-------+-------------+-------+
| top | password | qty |
|-------+-------------+-------|
| 1 | password | 1111 |
| 2 | 123456 | 824 |
| 3 | 123456789 | 815 |
| 4 | guest | 553 |
| 5 | qwerty | 329 |
| 6 | 12345678 | 277 |
| 7 | 111111 | 268 |
| 8 | 12345 | 202 |
| 9 | secret | 170 |
| 10 | sec4us | 165 |
+-------+-------------+-------+
[?] Top 10 weak passwords by company name similarity
+-------+--------------+---------+----------------------+-------+
| top | password | score | company_similarity | qty |
|-------+--------------+---------+----------------------+-------|
| 1 | company123 | 7024 | 80 | 1111 |
| 2 | Company123 | 5209 | 80 | 824 |
| 3 | company | 3674 | 100 | 553 |
| 4 | Company@10 | 2080 | 80 | 329 |
| 5 | company10 | 1722 | 86 | 268 |
| 6 | Company@2022 | 1242 | 71 | 202 |
| 7 | Company@2024 | 1015 | 71 | 165 |
| 8 | Company2022 | 978 | 75 | 157 |
| 9 | Company10 | 745 | 86 | 116 |
| 10 | Company21 | 707 | 86 | 110 |
+-------+--------------+---------+----------------------+-------+
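The company_similarity column is a string-similarity score between each cracked password and the client's company name. KnowsMore's exact scoring is not shown here, but the idea can be illustrated with Python's difflib (an approximation, not the tool's algorithm):

```python
from difflib import SequenceMatcher

def company_similarity(password, company):
    """Rough 0-100 similarity between a password and a company name."""
    ratio = SequenceMatcher(None, password.lower(), company.lower()).ratio()
    return round(ratio * 100)

print(company_similarity("company123", "company"))    # high similarity
print(company_similarity("Company@2022", "company"))  # lower, but still notable
```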
pip3 install --upgrade knowsmore
Note: If you face problems with dependency versions, check the Virtual ENV file.
There is no mandatory order for importing data, but to get better data correlation we suggest the following execution flow:
All data are stored in a SQLite database.
knowsmore --create-db
We can import full BloodHound files into KnowsMore, correlate the data, and sync it to the Neo4j BloodHound database. This way you can use KnowsMore alone to import the JSON files directly into the Neo4j database instead of using the extremely slow BloodHound user interface.
# Bloodhound ZIP File
knowsmore --bloodhound --import-data ~/Desktop/client.zip
# Bloodhound JSON File
knowsmore --bloodhound --import-data ~/Desktop/20220912105336_users.json
Note: KnowsMore can import both BloodHound ZIP files and JSON files, but we recommend using the ZIP file, because KnowsMore will automatically order the files for better data correlation.
# Sync imported data to the Neo4j database
knowsmore --bloodhound --sync 10.10.10.10:7687 -d neo4j -u neo4j -p 12345678
Note: The KnowsMore implementation of the bloodhound-importer was inspired by the Fox-IT BloodHound Import implementation. We implemented several changes to save all data in the KnowsMore SQLite database and afterwards perform an incremental sync to the Neo4j database. This strategy has several benefits, such as being at least 10x faster than the original BloodHound user interface.
Note: Import hashes and clear-text passwords directly from NTDS.dit and the SYSTEM registry hive.
knowsmore --secrets-dump -target LOCAL -ntds ~/Desktop/ntds.dit -system ~/Desktop/SYSTEM
Note: Alternatively, first use secretsdump to extract the NTDS hashes with the command below.
secretsdump.py -ntds ntds.dit -system system.reg -hashes lmhash:ntlmhash LOCAL -outputfile ~/Desktop/client_name
After that, import the result:
knowsmore --ntlm-hash --import-ntds ~/Desktop/client_name.ntds
knowsmore --word-list -o "~/Desktop/Wordlist/my_custom_wordlist.txt" --batch --name company_name
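The --word-list module builds a custom wordlist around the company name. The exact mutations KnowsMore applies are not documented here, but the kind of candidates it targets (compare the weak-password table above) can be sketched as follows, as an illustration only:

```python
from datetime import date

def company_candidates(company):
    """Generate simple company-name password variants (illustrative, not KnowsMore's list)."""
    bases = {company.lower(), company.capitalize()}
    suffixes = ["", "123", "10", "21", "@10"] + [
        f"@{year}" for year in range(date.today().year - 2, date.today().year + 1)
    ]
    return sorted(base + suffix for base in bases for suffix in suffixes)

print(company_candidates("company"))
```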
First, extract all hashes to a text file:
# Extract NTLM hashes to file
knowsmore --ntlm-hash --export-hashes "~/Desktop/ntlm_hash.txt"
# Or, extract NTLM hashes from NTDS file
cat ~/Desktop/client_name.ntds | cut -d ':' -f4 > ntlm_hashes.txt
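The .ntds file written by secretsdump is colon-separated (domain\user:RID:LM hash:NT hash:::), which is why the cut command above takes field 4. An equivalent extraction in Python, as a small sketch:

```python
# Extract the NT hashes (4th colon-separated field) from a secretsdump .ntds file.
with open("client_name.ntds") as src, open("ntlm_hashes.txt", "w") as dst:
    for line in src:
        parts = line.strip().split(":")
        if len(parts) >= 4:
            dst.write(parts[3] + "\n")
```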
To crack the hashes, I usually use hashcat with the commands below:
# Wordlist attack
hashcat -m 1000 -a 0 -O -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" "~/Desktop/Wordlist/*"
# Mask attack
hashcat -m 1000 -a 3 -O --increment --increment-min 4 -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" ?a?a?a?a?a?a?a?a
knowsmore --ntlm-hash --company clientCompanyName --import-cracked ~/Desktop/cracked.txt
Note: Change clientCompanyName to the name of your company.
As passwords and their hashes are extremely sensitive data, there is a module to replace the clear-text passwords and their respective hashes.
Note: This command will keep all generated statistics and imported user data.
knowsmore --wipe
During an assessment you can find user passwords in several ways, and you can add them to the KnowsMore database:
knowsmore --user-pass --username administrator --password Sec4US@2023
# or adding the company name
knowsmore --user-pass --username administrator --password Sec4US@2023 --company sec4us
Integrate all cracked credentials into the Neo4j BloodHound database:
knowsmore --bloodhound --mark-owned 10.10.10.10 -d neo4j -u neo4j -p 123456
For remote connections, make sure that the Neo4j database server accepts remote connections. Change the line below in the config file /etc/neo4j/neo4j.conf and restart the service.
server.bolt.listen_address=0.0.0.0:7687
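Once remote access is enabled, a quick way to confirm the Bolt endpoint is reachable before running the sync is the official Python driver (pip install neo4j). A minimal check; the host and credentials below match the example sync command and are placeholders:

```python
from neo4j import GraphDatabase

# Placeholders: use the same host/credentials you pass to knowsmore --bloodhound --sync.
driver = GraphDatabase.driver("bolt://10.10.10.10:7687", auth=("neo4j", "12345678"))
with driver.session() as session:
    print(session.run("RETURN 1 AS ok").single()["ok"])  # prints 1 if the connection works
driver.close()
```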
ICMP Packet Sniffer is a Python program that allows you to capture and analyze ICMP (Internet Control Message Protocol) packets on a network interface. It provides detailed information about the captured packets, including source and destination IP addresses, MAC addresses, ICMP type, payload data, and more. The program can also store the captured packets in a SQLite database and save them in a pcap format.
git clone https://github.com/HalilDeniz/ICMPWatch.git
pip install -r requirements.txt
python ICMPWatch.py [-h] [-v] [-t TIMEOUT] [-f FILTER] [-o OUTPUT] [--type {0,8}] [--src-ip SRC_IP] [--dst-ip DST_IP] -i INTERFACE [-db] [-c CAPTURE]
-v or --verbose: Show verbose packet details.
-t or --timeout: Sniffing timeout in seconds (default is 300 seconds).
-f or --filter: BPF filter for packet sniffing (default is "icmp").
-o or --output: Output file to save captured packets.
--type: ICMP packet type to filter (0: Echo Reply, 8: Echo Request).
--src-ip: Source IP address to filter.
--dst-ip: Destination IP address to filter.
-i or --interface: Network interface to capture packets (required).
-db or --database: Store captured packets in an SQLite database.
-c or --capture: Capture file to save packets in pcap format.
Press Ctrl+C to stop the sniffing process.
python icmpwatch.py -i eth0
python icmpwatch.py -i eth0 -o icmp_results.txt
python icmpwatch.py -i eth0 --src-ip 192.168.1.10 --dst-ip 192.168.1.20
python icmpwatch.py -i eth0 --type 8
python icmpwatch.py -i eth0 -c captured_packets.pcap
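For readers who want to see the underlying technique rather than the full tool, the core of ICMP sniffing can be reproduced in a few lines of Scapy. This is a minimal sketch, not ICMPWatch's actual implementation; it requires root privileges and the scapy package:

```python
from scapy.all import sniff, IP, ICMP

def show(pkt):
    # Print source/destination addresses and the ICMP type (8 = Echo Request, 0 = Echo Reply).
    if pkt.haslayer(IP) and pkt.haslayer(ICMP):
        print(f"{pkt[IP].src} -> {pkt[IP].dst}  type={pkt[ICMP].type}")

# BPF filter "icmp" keeps only ICMP traffic; adjust iface to your interface.
sniff(iface="eth0", filter="icmp", prn=show, store=False)
```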
Grepmarx is a web application providing a single platform to quickly understand, analyze and identify vulnerabilities in possibly large and unknown code bases.
SAST (Static Application Security Testing) capabilities:
SCA (Software Composition Analysis) capabilities:
Extra
Screenshots: Scan customization, Analysis workbench, and Rule pack edition.
Grepmarx is provided with a configuration to be executed in Docker and Gunicorn.
Make sure you have docker-compose installed on the system and the Docker daemon is running. The application can then easily be executed in a Docker container. The steps:
Get the code
$ git clone https://github.com/Orange-Cyberdefense/grepmarx.git
$ cd grepmarx
Start the app in Docker
$ sudo docker-compose pull && sudo docker-compose build && sudo docker-compose up -d
Visit http://localhost:5000 in your browser. The app should be up & running.
Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.
Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX. A supervisor configuration file is provided to start it along with the required Celery worker (used for queuing security scans).
Install using pip
$ pip install gunicorn supervisor
Start the app using gunicorn binary
$ supervisord -c supervisord.conf
Visit http://localhost:8001 in your browser. The app should be up & running.
Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.
Get the code
$ git clone https://github.com/Orange-Cyberdefense/grepmarx.git
$ cd grepmarx
Install virtualenv modules
$ virtualenv env
$ source env/bin/activate
Install Python modules
$ # SQLite Database (Development)
$ pip3 install -r requirements.txt
$ # OR with PostgreSQL connector (Production)
$ # pip install -r requirements-pgsql.txt
Install additional requirements
# Dependency scan (cdxgen / depscan) requirements
$ sudo apt install npm openjdk-17-jdk maven gradle golang composer
$ sudo npm install -g @cyclonedx/cdxgen
$ pip install appthreat-depscan
A Redis server is required to queue security scans. Install the redis package with your favorite distro package manager, then:
$ redis-server
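You can confirm that Redis is up and reachable before starting the Celery worker. A minimal check, assuming the default localhost:6379 settings and the redis Python package:

```python
import redis

# Ping the local Redis server used for queuing security scans (default settings assumed).
r = redis.Redis(host="localhost", port=6379)
print(r.ping())  # True if the server is reachable
```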
Set the FLASK_APP environment variable
$ export FLASK_APP=run.py
$ # Set up the DEBUG environment
$ # export FLASK_ENV=development
Start the celery worker process
$ celery -A app.celery_worker.celery worker --pool=prefork --loglevel=info --detach
Start the application (development mode)
$ # --host=0.0.0.0 - expose the app on all network interfaces (default 127.0.0.1)
$ # --port=5000 - specify the app port (default 5000)
$ flask run --host=0.0.0.0 --port=5000
Access grepmarx in browser: http://127.0.0.1:5000/
Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.
Grepmarx - Provided by Orange Cyberdefense.
Script to parse Aircrack-ng captures into a SQLite database and extract useful information like handshakes (in hashcat 22000 format), MGT identities, interesting relations between APs, clients and their probes, WPS information, and a global view of all the APs seen.
_ __ _ _ _
__ __(_) / _|(_) __| || |__
\ \ /\ / /| || |_ | | / _` || '_ \
\ V V / | || _|| | | (_| || |_) |
\_/\_/ |_||_| |_| _____ \__,_||_.__/
|_____|
by r4ulcl
docker pull r4ulcl/wifi_db
Dependencies:
sudo apt install tshark
sudo apt install python3 python3-pip
git clone https://github.com/ZerBea/hcxtools.git
cd hcxtools
make
sudo make install
cd ..
Installation
git clone https://github.com/r4ulcl/wifi_db
cd wifi_db
pip3 install -r requirements.txt
Dependencies:
sudo pacman -S wireshark-qt
sudo pacman -S python-pip python
git clone https://github.com/ZerBea/hcxtools.git
cd hcxtools
make
sudo make install
cd ..
Installation
git clone https://github.com/r4ulcl/wifi_db
cd wifi_db
pip3 install -r requirements.txt
Run airodump-ng saving the output with -w:
sudo airodump-ng wlan0mon -w scan --manufacturer --wps --gpsd
#Folder with captures
CAPTURESFOLDER=/home/user/wifi
# Output database
touch db.SQLITE
docker run -t -v $PWD/db.SQLITE:/db.SQLITE -v $CAPTURESFOLDER:/captures/ r4ulcl/wifi_db
-v $PWD/db.SQLITE:/db.SQLITE: to save the output in the db.SQLITE file in the current folder.
-v $CAPTURESFOLDER:/captures/: to share the folder with the captures with the Docker container.
Once the capture is created, we can create the database by importing the capture. To do this, give the name of the capture without the extension.
python3 wifi_db.py scan-01
If we have multiple captures, we can load the folder that contains them directly. With -d we can rename the output database.
python3 wifi_db.py -d database.sqlite scan-folder
The database can be opened with any SQLite client; the ProbeClientsConnected view, for example, shows which clients are probing which networks.
usage: wifi_db.py [-h] [-v] [--debug] [-o] [-t LAT] [-n LON] [--source [{aircrack-ng,kismet,wigle}]] [-d DATABASE] capture [capture ...]
positional arguments:
capture capture folder or file with extensions .csv, .kismet.csv, .kismet.netxml, or .log.csv. If no extension is provided, all types will
be added. This option supports the use of wildcards (*) to select multiple files or folders.
options:
-h, --help show this help message and exit
-v, --verbose increase output verbosity
--debug increase output verbosity to debug
-o, --obfuscated Obfuscate MAC and BSSID with AA:BB:CC:XX:XX:XX-defghi (WARNING: replace all database)
-t LAT, --lat LAT insert a fake lat in the new elements
  -n LON, --lon LON insert a fake lon in the new elements
--source [{aircrack-ng,kismet,wigle}]
source from capture data (default: aircrack-ng)
-d DATABASE, --database DATABASE
output database, if exist append to the given database (default name: db.SQLITE)
TODO
TODO
wifi_db contains several tables to store information related to wireless network traffic captured by airodump-ng. The tables are as follows:
`AP`: This table stores information about the access points (APs) detected during the captures, including their MAC address (`bssid`), network name (`ssid`), whether the network is cloaked (`cloaked`), manufacturer (`manuf`), channel (`channel`), frequency (`frequency`), carrier (`carrier`), encryption type (`encryption`), and total packets received from this AP (`packetsTotal`). The table uses the MAC address as a primary key.
`Client`: This table stores information about the wireless clients detected during the captures, including their MAC address (`mac`), network name (`ssid`), manufacturer (`manuf`), device type (`type`), and total packets received from this client (`packetsTotal`). The table uses the MAC address as a primary key.
`SeenClient`: This table stores information about the clients seen during the captures, including their MAC address (`mac`), time of detection (`time`), tool used to capture the data (`tool`), signal strength (`signal_rssi`), latitude (`lat`), longitude (`lon`), and altitude (`alt`). The table uses the combination of MAC address and detection time as a primary key, and has a foreign key relationship with the `Client` table.
`Connected`: This table stores information about the wireless clients that are connected to an access point, including the MAC address of the access point (`bssid`) and of the client (`mac`). The table uses the combination of access point and client MAC addresses as a primary key, and has foreign key relationships with both the `AP` and `Client` tables.
`WPS`: This table stores information about access points that have Wi-Fi Protected Setup (WPS) enabled, including their MAC address (`bssid`), network name (`wlan_ssid`), WPS version (`wps_version`), device name (`wps_device_name`), model name (`wps_model_name`), model number (`wps_model_number`), configuration methods (`wps_config_methods`), and keypad configuration methods (`wps_config_methods_keypad`). The table uses the MAC address as a primary key, and has a foreign key relationship with the `AP` table.
`SeenAp`: This table stores information about the access points seen during the captures, including their MAC address (`bssid`), time of detection (`time`), tool used to capture the data (`tool`), signal strength (`signal_rssi`), latitude (`lat`), longitude (`lon`), altitude (`alt`), and timestamp (`bsstimestamp`). The table uses the combination of access point MAC address and detection time as a primary key, and has a foreign key relationship with the `AP` table.
`Probe`: This table stores information about the probes sent by clients, including the client MAC address (`mac`), network name (`ssid`), and time of the probe (`time`). The table uses the combination of client MAC address and network name as a primary key, and has a foreign key relationship with the `Client` table.
`Handshake`: This table stores information about the handshakes captured during the captures, including the MAC address of the access point (`bssid`), of the client (`mac`), the file name (`file`), and the hashcat format (`hashcat`). The table uses the combination of access point and client MAC addresses and file name as a primary key, and has foreign key relationships with both the `AP` and `Client` tables.
`Identity`: This table represents EAP (Extensible Authentication Protocol) identities and methods used in wireless authentication. The `bssid` and `mac` fields are foreign keys that reference the `AP` and `Client` tables, respectively. Other fields include the identity and method used in the authentication process.
`ProbeClients`: This view selects the MAC address of the probe, the manufacturer and type of the client device, the total number of packets transmitted by the client, and the SSID of the probe. It joins the `Probe` and `Client` tables on the MAC address and orders the results by SSID.
`ConnectedAP`: This view selects the BSSID of the connected access point, the SSID of the access point, the MAC address of the connected client device, and the manufacturer of the client device. It joins the `Connected`, `AP`, and `Client` tables on the BSSID and MAC address, respectively, and orders the results by BSSID.
`ProbeClientsConnected`: This view selects the BSSID and SSID of the connected access point, the MAC address of the probe, the manufacturer and type of the client device, the total number of packets transmitted by the client, and the SSID of the probe. It joins the `Probe`, `Client`, and `ConnectedAP` tables on the MAC address of the probe, and filters the results to exclude probes that are connected to the same SSID they are probing. The results are ordered by the SSID of the probe.
`HandshakeAP`: This view selects the BSSID of the access point, the SSID of the access point, the MAC address of the client device that performed the handshake, the manufacturer of the client device, the file containing the handshake, and the hashcat output. It joins the `Handshake`, `AP`, and `Client` tables on the BSSID and MAC address, respectively, and orders the results by BSSID.
`HandshakeAPUnique`: This view selects the BSSID of the access point, the SSID of the access point, the MAC address of the client device that performed the handshake, the manufacturer of the client device, the file containing the handshake, and the hashcat output. It joins the `Handshake`, `AP`, and `Client` tables on the BSSID and MAC address, respectively, and filters the results to exclude handshakes that were not cracked by hashcat. The results are grouped by SSID and ordered by BSSID.
`IdentityAP`: This view selects the BSSID of the access point, the SSID of the access point, the MAC address of the client device that performed the identity request, the manufacturer of the client device, the identity string, and the method used for the identity request. It joins the `Identity`, `AP`, and `Client` tables on the BSSID and MAC address, respectively, and orders the results by BSSID.
`SummaryAP`: This view selects the SSID, the count of access points broadcasting the SSID, the encryption type, the manufacturer of the access point, and whether the SSID is cloaked. It groups the results by SSID and orders them by the count of access points in descending order.
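Because everything ends up in a plain SQLite file, the views above can be queried directly. A minimal sketch using the view names described above and the default database name:

```python
import sqlite3

conn = sqlite3.connect("db.SQLITE")  # default output database name

# Summary of SSIDs seen, most common first (see the SummaryAP view description above).
for row in conn.execute("SELECT * FROM SummaryAP LIMIT 10"):
    print(row)

# Clients probing for networks other than the one they are connected to.
for row in conn.execute("SELECT * FROM ProbeClientsConnected LIMIT 10"):
    print(row)

conn.close()
```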
Aircrack-ng
All in 1 file (and separately)
Kismet
Wigle
install
parse all files in folder -f --folder
Fix Extended errors, tildes, etc (fixed in aircrack-ng 1.6)
Support bash multi files: "capture*-1*"
Script to delete client or AP from DB (mac). - (Whitelist)
Whitelist to avoid adding MACs to the DB (file whitelist.txt, add MACs, create DB)
Overwrite if there is new info (old ESSID='', New ESSID='WIFI')
Table Handshakes and PMKID
Hashcat hash format 22000
Table files, if file exists skip (full path)
Get HTTP POST passwords
DNS queries
This program is a continuation of a part of: https://github.com/T1GR3S/airo-heat
GNU General Public License v3.0