
Wappalyzer-Next - Python library that uses Wappalyzer extension (and its fingerprints) to detect technologies

By: Unknown


This project is a command-line tool and Python library that uses the Wappalyzer browser extension (and its fingerprints) to detect technologies. Other projects that emerged after the discontinuation of the official open-source project use outdated fingerprints and lack accuracy on dynamic web apps; this project bypasses those limitations.


Installation

Before installing wappalyzer, you will need to install Firefox and geckodriver. Below are detailed steps for setting up geckodriver, but you may also use Google/YouTube for help.

Setting up geckodriver

Step 1: Download GeckoDriver

  1. Visit the official GeckoDriver releases page on GitHub:
    https://github.com/mozilla/geckodriver/releases
  2. Download the version compatible with your system:
  • For Windows: geckodriver-vX.XX.X-win64.zip
  • For macOS: geckodriver-vX.XX.X-macos.tar.gz
  • For Linux: geckodriver-vX.XX.X-linux64.tar.gz
  3. Extract the downloaded file to a folder of your choice.

Step 2: Add GeckoDriver to the System Path

To ensure Selenium can locate the GeckoDriver executable:

  • Windows:
  1. Move geckodriver.exe to a directory (e.g., C:\WebDrivers\).
  2. Add this directory to the system's PATH:
  • Open Environment Variables.
  • Under System Variables, find and select the Path variable, then click Edit.
  • Click New and enter the directory path where geckodriver.exe is stored.
  • Click OK to save.
  • macOS/Linux:
  1. Move the geckodriver file to /usr/local/bin/ or another directory in your PATH:
sudo mv geckodriver /usr/local/bin/
  2. Ensure /usr/local/bin/ is in your PATH.
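Once geckodriver is on your PATH, you can sanity-check the setup before running a scan. The snippet below is not part of wappalyzer-next, just a quick convenience check using Python's standard library:

# Quick check (not part of wappalyzer-next) that Firefox and geckodriver
# are discoverable on the PATH that Selenium will use.
import shutil

for binary in ("firefox", "geckodriver"):
    location = shutil.which(binary)
    print(f"{binary}: {location or 'NOT FOUND - add it to your PATH'}")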

Install as a command-line tool

pipx install wappalyzer

Install as a library

To use it as a library, install it with pip inside an isolated environment, e.g. a venv or a Docker container. You may also pass --break-system-packages to do a 'regular' install, but it is not recommended.

Install with docker

Steps

  1. Clone the repository:
git clone https://github.com/s0md3v/wappalyzer-next.git
cd wappalyzer-next
  2. Build and run with Docker Compose:
docker compose up -d
  3. Scan URLs using the Docker container:
  • Scan a single URL:
docker compose run --rm wappalyzer -i https://example.com
  • Scan a URL and export the results to JSON:
docker compose run --rm wappalyzer -i https://example.com -oJ output.json

For Users

Some common usage examples are given below; refer to the list of all options for more information.

  • Scan a single URL: wappalyzer -i https://example.com
  • Scan multiple URLs from a file: wappalyzer -i urls.txt -t 10
  • Scan with authentication: wappalyzer -i https://example.com -c "sessionid=abc123; token=xyz789"
  • Export results to JSON: wappalyzer -i https://example.com -oJ results.json

Options

Note: For accuracy use 'full' scan type (default). 'fast' and 'balanced' do not use browser emulation.

  • -i: Input URL or file containing URLs (one per line)
  • --scan-type: Scan type (default: 'full')
    • fast: Quick HTTP-based scan (sends 1 request)
    • balanced: HTTP-based scan with more requests
    • full: Complete scan using the Wappalyzer extension
  • -t, --threads: Number of concurrent threads (default: 5)
  • -oJ: JSON output file path
  • -oC: CSV output file path
  • -oH: HTML output file path
  • -c, --cookie: Cookie header string for authenticated scans

For Developers

The Python library is available on PyPI as wappalyzer and can be imported under the same name.

Using the Library

The main function you'll interact with is analyze():

from wappalyzer import analyze

# Basic usage
results = analyze('https://example.com')

# With options
results = analyze(
    url='https://example.com',
    scan_type='full',  # 'fast', 'balanced', or 'full'
    threads=3,
    cookie='sessionid=abc123'
)

analyze() Function Parameters

  • url (str): The URL to analyze
  • scan_type (str, optional): Type of scan to perform
    • 'fast': Quick HTTP-based scan
    • 'balanced': HTTP-based scan with more requests
    • 'full': Complete scan including JavaScript execution (default)
  • threads (int, optional): Number of threads for parallel processing (default: 3)
  • cookie (str, optional): Cookie header string for authenticated scans
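To scan several URLs with the library, you can call analyze() once per URL and merge the results, since each call returns a dictionary keyed by the URL. The sketch below only uses the parameters documented above; the URL list is a placeholder:

from wappalyzer import analyze

# Placeholder URLs; each analyze() call returns {url: {technology: details}}
urls = ["https://example.com", "https://example.org"]

all_results = {}
for url in urls:
    all_results.update(analyze(url=url, scan_type="full", threads=3))

for url, technologies in all_results.items():
    print(url, "->", ", ".join(technologies) or "nothing detected")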

Return Value

Returns a dictionary with the URL as key and detected technologies as value:

{
    "https://github.com": {
        "Amazon S3": {"version": "", "confidence": 100, "categories": ["CDN"], "groups": ["Servers"]},
        "lit-html": {"version": "1.1.2", "confidence": 100, "categories": ["JavaScript libraries"], "groups": ["Web development"]},
        "React Router": {"version": "6", "confidence": 100, "categories": ["JavaScript frameworks"], "groups": ["Web development"]}
    },
    "https://google.com": {},
    "https://example.com": {}
}
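Because the result is a plain nested dictionary, it can be consumed directly. For example, a small loop (illustrative, not part of the library) that prints each detection with its version and categories:

from wappalyzer import analyze

results = analyze("https://example.com")  # {url: {technology: details}}

for url, technologies in results.items():
    for name, details in technologies.items():
        version = details.get("version") or "unknown"
        categories = ", ".join(details.get("categories", []))
        print(f"{url}: {name} (version: {version}) [{categories}]")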

FAQ

Why use Firefox instead of Chrome?

Firefox extensions are .xpi files which are essentially zip files. This makes it easier to extract data and slightly modify the extension to make this tool work.
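Because an .xpi is just a zip archive, its contents can be listed with Python's standard zipfile module; the file path below is hypothetical:

import zipfile

# Hypothetical path to a downloaded Wappalyzer .xpi; any .xpi can be opened this way
with zipfile.ZipFile("wappalyzer.xpi") as xpi:
    for name in xpi.namelist():
        print(name)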

What is the difference between 'fast', 'balanced', and 'full' scan types?

  • fast: Sends a single HTTP request to the URL. Doesn't use the extension.
  • balanced: Sends additional HTTP requests to .js files and /robots.txt, and does DNS queries. Doesn't use the extension.
  • full: Uses the official Wappalyzer extension to scan the URL in a headless browser.


Huntr-Com-Bug-Bounties-Collector - Keep Watching New Bug Bounty (Vulnerability) Postings

By: Zion3R


A collector that keeps watch for new bug bounty (vulnerability) postings on huntr.com.


Requirements
  • Chrome with GUI (if you encounter trouble with script execution, check the status of the VM's GPU features, if available)
  • Chrome WebDriver

Preview
# python3 main.py

*2024-02-20 16:14:47.836189*

1. Arbitrary File Reading due to Lack of Input Filepath Validation
- Feb 6th 2024 / High (CVE-2024-0964)
- gradio-app/gradio
- https://huntr.com/bounties/25e25501-5918-429c-8541-88832dfd3741/

2. View Barcode Image leads to Remote Code Execution
- Jan 31st 2024 / Critical (CVE: Not yet)
- dolibarr/dolibarr
- https://huntr.com/bounties/f0ffd01e-8054-4e43-96f7-a0d2e652ac7e/

(delimiter-based file database)

# vim feeds.db

1|2024-02-20 16:17:40.393240|7fe14fd58ca2582d66539b2fe178eeaed3524342|CVE-2024-0964|https://huntr.com/bounties/25e25501-5918-429c-8541-88832dfd3741/
2|2024-02-20 16:17:40.393987|c6b84ac808e7f229a4c8f9fbd073b4c0727e07e1|CVE: Not yet|https://huntr.com/bounties/f0ffd01e-8054-4e43-96f7-a0d2e652ac7e/
3|2024-02-20 16:17:40.394582|7fead9658843919219a3b30b8249700d968d0cc9|CVE: Not yet|https://huntr.com/bounties/d6cb06dc-5d10-4197-8f89-847c3203d953/
4|2024-02-20 16:17:40.395094|81fecdd74318ce7da9bc29e81198e62f3225bd44|CVE: Not yet|https://huntr.com/bounties/d875d1a2-7205-4b2b-93cf-439fa4c4f961/
5|2024-02-20 16:17:40.395613|111045c8f1a7926174243db403614d4a58dc72ed|CVE: Not yet|https://huntr.com/bounties/10e423cd-7051-43fd-b736-4e18650d0172/
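Since feeds.db is a plain pipe-delimited file (id|timestamp|hash|CVE|URL), it can be read back without any database driver. A minimal parsing sketch, assuming the five-field format shown above:

# Parse the pipe-delimited feeds.db shown above: id|timestamp|hash|CVE|URL
with open("feeds.db", encoding="utf-8") as db:
    for line in db:
        line = line.strip()
        if not line:
            continue
        entry_id, timestamp, digest, cve, url = line.split("|", 4)
        print(f"[{entry_id}] {cve} -> {url}")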

Notes
  • This code is designed to parse HTML elements from huntr.com, so it may not function correctly if the HTML page structure changes.
  • In case of errors during parsing, exception handling has been included; if the script doesn't work as expected, please inspect the HTML source for any changes.
  • In a typical cloud environment, the script may not function properly within virtual machines (VMs).


Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

By: Zion3R


Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
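The extraction itself comes down to running regular expressions over the fetched page source. The sketch below only illustrates that idea with a simple email pattern; it is not Uscrapper's actual code and assumes the requests package is installed:

import re
import requests  # assumed to be installed; used only for this illustration

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(url):
    """Fetch a page and return the unique email addresses found in its HTML."""
    html = requests.get(url, timeout=10).text
    return set(EMAIL_RE.findall(html))

print(extract_emails("https://example.com"))  # placeholder URL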


Extracted Details:

Uscrapper extracts the following details from the provided website:

  • Email Addresses: Displays email addresses found on the website.
  • Social Media Links: Displays links to various social media platforms found on the website.
  • Author Names: Displays the names of authors associated with the website.
  • Geolocations: Displays geolocation information associated with the website.
  • Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers, and usernames.

What's New?:

Uscrapper 2.0:

  • Introduced multiple modules to bypass anti-web-scraping techniques.
  • Introducing Crawl and Scrape: an advanced crawl-and-scrape module to scrape websites from within.
  • Implemented Multithreading to make these processes faster.

Installation Steps:

git clone https://github.com/z0m31en7/Uscrapper.git
cd Uscrapper/install/ 
chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

Usage:

To run Uscrapper, use the following command-line syntax:

python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]


Arguments:

  • -h, --help: Show the help message and exit.
  • -u URL, --url URL: Specify the URL of the website to extract details from.
  • -c INT, --crawl INT: Specify the number of links to crawl.
  • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
  • -O, --generate-report: Generate a report file containing the extracted details.
  • -ns, --nonstrict: Display non-strict usernames during extraction.

Note:

  • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

  • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

  • To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower (a rough sketch of this approach follows below).
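For reference, a Selenium-driven fetch of the kind used to get past basic anti-scraping checks looks roughly like the sketch below; this is a simplified illustration, not Uscrapper's implementation:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # run without a visible browser window
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")  # placeholder URL
    html = driver.page_source  # rendered HTML after JavaScript has run
    print(len(html), "characters of rendered HTML")
finally:
    driver.quit()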

Contribution:

Want a new feature to be added?

  • Make a pull request with all the necessary details and it will be merged after a review.
  • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


โŒ