
TruffleHog Explorer - A User-Friendly Web-Based Tool To Visualize And Analyze Data Extracted Using TruffleHog

By: Unknown


Welcome to TruffleHog Explorer, a user-friendly web-based tool to visualize and analyze data extracted using TruffleHog. TruffleHog is one of the most powerful open-source tools for secrets discovery, classification, validation, and analysis. In this context, a secret refers to a credential a machine uses to authenticate itself to another machine. This includes API keys, database passwords, private encryption keys, and more.

With an improved UI/UX, powerful filtering options, and export capabilities, this tool helps security professionals efficiently review potential secrets and credentials found in their repositories.

โš ๏ธ This dashboard has been tested only with GitHub TruffleHog JSON outputs. Expect updates soon to support additional formats and platforms.

You can use the online version here: TruffleHog Explorer


🚀 Features

  • Intuitive UI/UX: Beautiful pastel theme with smooth navigation.
  • Powerful Filtering:
    • Filter findings by repository, detector type, and uploaded file.
    • Flexible date range selection with a calendar picker.
    • Verification status categorization for effective review.
    • Advanced search capabilities for faster identification.
  • Batch Operations:
    • Verify or reject multiple findings with a single click.
    • Toggle visibility of rejected results for a streamlined view.
    • Bulk processing to manage large datasets efficiently.
  • Export Capabilities:
    • Export verified secrets or filtered findings effortlessly.
    • Save and load session backups for continuity.
    • Generate reports in multiple formats (JSON, CSV).
  • Dynamic Sorting:
    • Sort results by repository, date, or verification status.
    • Customizable sorting preferences for a personalized experience.

📥 Installation & Usage

1. Clone the Repository

$ git clone https://github.com/yourusername/trufflehog-explorer.git
$ cd trufflehog-explorer

2. Open the index.html

Simply open the index.html file in your preferred web browser.

$ open index.html
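
To produce input for the dashboard, run TruffleHog with JSON output and save the findings to a file (a minimal example, assuming TruffleHog v3's --json flag; the repository URL is a placeholder):

$ trufflehog git https://github.com/org/repo --json > findings.json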

📂 How to Use

  1. Upload TruffleHog JSON Findings:
    • Click on the "Load Data" section and select your .json files from TruffleHog output.
    • Multiple files are supported.
  2. Apply Filters:
    • Choose filters such as repository, detector type, and verification status.
    • Utilize the date range picker to narrow down findings.
    • Leverage the search function to locate specific findings quickly.
  3. Review Findings:
    • Click on a finding to expand and view its details.
    • Use the action buttons to verify or reject findings.
    • Add comments and annotations for better tracking.
  4. Export Results:
    • Export verified or filtered findings for reporting.
    • Save session data for future review and analysis.
  5. Save Your Progress:
    • Save your session and resume later without losing any progress.
    • Automatic backup feature to prevent data loss.

Happy Securing! 🔒



Telegram-Checker - A Python Tool For Checking Telegram Accounts Via Phone Numbers Or Usernames

By: Unknown


Enhanced version of bellingcat's Telegram Phone Checker!

A Python script to check Telegram accounts using phone numbers or usernames.


✨ Features

  • 🔍 Check single or multiple phone numbers and usernames
  • 📝 Import numbers from text file
  • 📸 Auto-download profile pictures
  • 💾 Save results as JSON
  • 🔐 Secure credential storage
  • 📊 Detailed user information

🚀 Installation

  1. Clone the repository:
git clone https://github.com/unnohwn/telegram-checker.git
cd telegram-checker
  2. Install required packages:
pip install -r requirements.txt

📦 Requirements

Contents of requirements.txt:

telethon
rich
click
python-dotenv

Or install packages individually:

pip install telethon rich click python-dotenv

โš™๏ธ Configuration

The first time you run the script, you'll need:

  • Telegram API credentials (get from https://my.telegram.org/apps)
  • Your Telegram phone number, including the country code (with a leading +)
  • A verification code (sent to your Telegram)

💻 Usage

Run the script:

python telegram_checker.py

Choose from the options:

  1. Check phone numbers from input
  2. Check phone numbers from file
  3. Check usernames from input
  4. Check usernames from file
  5. Clear saved credentials
  6. Exit
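
Under the hood, phone-number lookups of this kind are typically done with Telethon by briefly importing the number as a contact. A minimal sketch follows (illustrative only, not this tool's actual code; the credentials and number are placeholders):

from telethon.sync import TelegramClient
from telethon.tl.functions.contacts import ImportContactsRequest, DeleteContactsRequest
from telethon.tl.types import InputPhoneContact

API_ID = 12345              # placeholder: api_id from https://my.telegram.org/apps
API_HASH = "your_api_hash"  # placeholder

# Logs in interactively on first run, then reuses the saved session file.
with TelegramClient("checker_session", API_ID, API_HASH) as client:
    contact = InputPhoneContact(client_id=0, phone="+15551234567",
                                first_name="tmp", last_name="")
    result = client(ImportContactsRequest([contact]))
    if result.users:
        user = result.users[0]
        print(user.id, user.username, user.first_name)
        client(DeleteContactsRequest(id=[user]))  # remove the temporary contact
    else:
        print("No Telegram account found for that number.")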

📂 Output

Results are saved in:

  • results/ - JSON files with detailed information
  • profile_photos/ - Downloaded profile pictures

โš ๏ธ Note

This tool is for educational purposes only. Please respect Telegram's terms of service and user privacy.

📄 License

MIT License



Telegram-Scraper - A Powerful Python Script That Allows You To Scrape Messages And Media From Telegram Channels Using The Telethon Library

By: Unknown


A powerful Python script that allows you to scrape messages and media from Telegram channels using the Telethon library. Features include real-time continuous scraping, media downloading, and data export capabilities.


Features 🚀

  • Scrape messages from multiple Telegram channels
  • Download media files (photos, documents)
  • Real-time continuous scraping
  • Export data to JSON and CSV formats
  • SQLite database storage
  • Resume capability (saves progress)
  • Media reprocessing for failed downloads
  • Progress tracking
  • Interactive menu interface

Prerequisites 📋

Before running the script, you'll need:

  • Python 3.7 or higher
  • Telegram account
  • API credentials from Telegram

Required Python packages

pip install -r requirements.txt

Contents of requirements.txt:

telethon
aiohttp
asyncio

Getting Telegram API Credentials 🔑

  1. Visit https://my.telegram.org/auth
  2. Log in with your phone number
  3. Click on "API development tools"
  4. Fill in the form:
    • App title: Your app name
    • Short name: Your app short name
    • Platform: Can be left as "Desktop"
    • Description: Brief description of your app
  5. Click "Create application"
  6. You'll receive:
    • api_id: A number
    • api_hash: A string of letters and numbers

Keep these credentials safe; you'll need them to run the script!

Setup and Running 🔧

  1. Clone the repository:
git clone https://github.com/unnohwn/telegram-scraper.git
cd telegram-scraper
  2. Install requirements:
pip install -r requirements.txt
  3. Run the script:
python telegram-scraper.py
  4. On first run, you'll be prompted to enter:
    • Your API ID
    • Your API Hash
    • Your phone number (with country code) or a bot token; if prompted a second time, choose the phone number option
    • Verification code (sent to your Telegram)

Initial Scraping Behavior 🕒

When scraping a channel for the first time, please note:

  • The script will attempt to retrieve the entire channel history, starting from the oldest messages (a Telethon sketch of this walk follows below)
  • Initial scraping can take several minutes or even hours, depending on:
    • The total number of messages in the channel
    • Whether media downloading is enabled
    • The size and number of media files
    • Your internet connection speed
    • Telegram's rate limiting
  • The script uses pagination and maintains state, so if interrupted, it can resume from where it left off
  • Progress percentage is displayed in real-time to track the scraping status
  • Messages are stored in the database as they are scraped, so you can start analyzing available data even before the scraping is complete
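
A minimal Telethon sketch of that oldest-first, resumable walk (illustrative only; the real script's state handling differs, and "channelname" plus the credentials are placeholders):

from telethon.sync import TelegramClient

API_ID = 12345              # placeholder credentials
API_HASH = "your_api_hash"

with TelegramClient("scraper_session", API_ID, API_HASH) as client:
    last_id = 0  # the real tool would load this from its SQLite state
    # reverse=True walks from the oldest message forward; min_id lets an
    # interrupted run resume after the last message id it stored.
    for message in client.iter_messages("channelname", reverse=True, min_id=last_id):
        print(message.id, message.date, (message.text or "")[:60])
        last_id = message.id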

Usage 📝

The script provides an interactive menu with the following options:

  • [A] Add new channel
    • Enter the channel ID or username
  • [R] Remove channel
    • Remove a channel from the scraping list
  • [S] Scrape all channels
    • One-time scraping of all configured channels
  • [M] Toggle media scraping
    • Enable/disable downloading of media files
  • [C] Continuous scraping
    • Real-time monitoring of channels for new messages
  • [E] Export data
    • Export to JSON and CSV formats
  • [V] View saved channels
    • List all saved channels
  • [L] List account channels
    • List all channels (with IDs) for the account
  • [Q] Quit

Channel IDs 📢

You can use either:

  • Channel username (e.g., channelname)
  • Channel ID (e.g., -1001234567890)

Data Storage 💾

Database Structure

Data is stored in SQLite databases, one per channel:

  • Location: ./channelname/channelname.db
  • Table: messages
    • id: Primary key
    • message_id: Telegram message ID
    • date: Message timestamp
    • sender_id: Sender's Telegram ID
    • first_name: Sender's first name
    • last_name: Sender's last name
    • username: Sender's username
    • message: Message text
    • media_type: Type of media (if any)
    • media_path: Local path to downloaded media
    • reply_to: ID of replied message (if any)
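
A hedged reconstruction of that per-channel schema, written as the equivalent sqlite3 call (the actual CREATE statement in telegram-scraper.py may differ):

import sqlite3

conn = sqlite3.connect("channelname/channelname.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS messages (
        id INTEGER PRIMARY KEY,
        message_id INTEGER,
        date TEXT,
        sender_id INTEGER,
        first_name TEXT,
        last_name TEXT,
        username TEXT,
        message TEXT,
        media_type TEXT,
        media_path TEXT,
        reply_to INTEGER
    )
""")
conn.commit()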

Media Storage 📁

Media files are stored in:

  • Location: ./channelname/media/
  • Files are named using the message ID or original filename

Exported Data 📊

Data can be exported in two formats:

  1. CSV: ./channelname/channelname.csv
    • Human-readable spreadsheet format
    • Easy to import into Excel/Google Sheets
  2. JSON: ./channelname/channelname.json
    • Structured data format
    • Ideal for programmatic processing

Features in Detail 🔍

Continuous Scraping

The continuous scraping feature ([C] option) allows you to:

  • Monitor channels in real-time
  • Automatically download new messages
  • Download media as it's posted
  • Run indefinitely until interrupted (Ctrl+C)
  • Maintain state between runs

Media Handling

The script can download:

  • Photos
  • Documents
  • Other media types supported by Telegram

It automatically retries failed downloads and skips existing files to avoid duplicates.

Error Handling 🛠️

The script includes:

  • Automatic retry mechanism for failed media downloads
  • State preservation in case of interruption
  • Flood control compliance
  • Error logging for failed operations

Limitations ⚠️

  • Respects Telegram's rate limits
  • Can only access public channels or channels you're a member of
  • Media download size limits apply as per Telegram's restrictions

Contributing 🤝

Contributions are welcome! Please feel free to submit a Pull Request.

License 📄

This project is licensed under the MIT License - see the LICENSE file for details.

Disclaimer ⚖️

This tool is for educational purposes only. Make sure to:

  • Respect Telegram's Terms of Service
  • Obtain necessary permissions before scraping
  • Use responsibly and ethically
  • Comply with data protection regulations



Telegram-Story-Scraper - A Python Script That Allows You To Automatically Scrape And Download Stories From Your Telegram Friends

By: Unknown


A Python script that allows you to automatically scrape and download stories from your Telegram friends using the Telethon library. The script continuously monitors and saves both photos and videos from stories, along with their metadata.


Important Note About Story Access ⚠️

Due to Telegram API restrictions, this script can only access stories from:

  • Users you have added to your friend list
  • Users whose privacy settings allow you to view their stories

This is a limitation of Telegram's API and cannot be bypassed.

Features 🚀

  • Automatically scrapes all available stories from your Telegram friends
  • Downloads both photos and videos from stories
  • Stores metadata in SQLite database
  • Exports data to Excel spreadsheet
  • Real-time monitoring with customizable intervals
  • Timestamps are stored in UTC+2
  • Maintains record of previously downloaded stories
  • Resume capability
  • Automatic retry mechanism

Prerequisites 📋

Before running the script, you'll need:

  • Python 3.7 or higher
  • Telegram account
  • API credentials from Telegram
  • Friends on Telegram whose stories you want to track

Required Python packages

pip install -r requirements.txt

Contents of requirements.txt:

telethon
openpyxl
schedule

Getting Telegram API Credentials 🔑

  1. Visit https://my.telegram.org/auth
  2. Log in with your phone number
  3. Click on "API development tools"
  4. Fill in the form:
    • App title: Your app name
    • Short name: Your app short name
    • Platform: Can be left as "Desktop"
    • Description: Brief description of your app
  5. Click "Create application"
  6. You'll receive:
    • api_id: A number
    • api_hash: A string of letters and numbers

Keep these credentials safe; you'll need them to run the script!

Setup and Running 🔧

  1. Clone the repository:
git clone https://github.com/unnohwn/telegram-story-scraper.git
cd telegram-story-scraper
  2. Install requirements:
pip install -r requirements.txt
  3. Run the script:
python TGSS.py
  4. On first run, you'll be prompted to enter:
    • Your API ID
    • Your API Hash
    • Your phone number (with country code)
    • Verification code (sent to your Telegram)
    • Checking interval in seconds (default is 60)

How It Works 🔄

The script:

  1. Connects to your Telegram account
  2. Periodically checks for new stories from your friends
  3. Downloads any new stories (photos/videos)
  4. Stores metadata in a SQLite database
  5. Exports information to an Excel file
  6. Runs continuously until interrupted (Ctrl+C)
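
The periodic check maps naturally onto the schedule package from requirements.txt; a minimal sketch of such a polling loop (illustrative; TGSS.py's actual loop and function names may differ):

import time

import schedule  # listed in requirements.txt

def check_stories():
    # hypothetical stand-in for the scrape pass described in steps 2-5
    print("Checking friends' stories...")

schedule.every(60).seconds.do(check_stories)  # 60 seconds is the default interval
while True:
    schedule.run_pending()
    time.sleep(1)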

Data Storage 💾

Database Structure (stories.db)

SQLite database containing:

  • user_id: Telegram user ID of the story creator
  • story_id: Unique story identifier
  • timestamp: When the story was posted (UTC+2)
  • filename: Local filename of the downloaded media

CSV and Excel Export (stories_export.csv/xlsx)

Export file containing the same information as the database, useful for:

  • Easy viewing of story metadata
  • Filtering and sorting
  • Data analysis
  • Sharing data with others

Media Storage 📁

  • Photos are saved as: {user_id}_{story_id}.jpg
  • Videos are saved with their original extension: {user_id}_{story_id}.{extension}
  • All media files are saved in the script's directory

Features in Detail 🔍

Continuous Monitoring

  • Customizable checking interval (default: 60 seconds)
  • Runs continuously until manually stopped
  • Maintains state between runs
  • Avoids duplicate downloads

Media Handling

  • Supports both photos and videos
  • Automatically detects media type
  • Preserves original quality
  • Generates unique filenames

Error Handling 🛠️

The script includes:

  • Automatic retry mechanism for failed downloads
  • Error logging for failed operations
  • Connection error handling
  • State preservation in case of interruption

Limitations ⚠️

  • Subject to Telegram's rate limits
  • Stories must be currently active (not expired)
  • Media download size limits apply as per Telegram's restrictions

Contributing 🤝

Contributions are welcome! Please feel free to submit a Pull Request.

License 📄

This project is licensed under the MIT License - see the LICENSE file for details.

Disclaimer ⚖️

This tool is for educational purposes only. Make sure to:

  • Respect Telegram's Terms of Service
  • Obtain necessary permissions before scraping
  • Use responsibly and ethically
  • Comply with data protection regulations
  • Respect user privacy



gitGRAB - This Tool Is Designed To Interact With The GitHub API And Retrieve Specific User Details, Repository Information, And Commit Emails For A Given User

By: Unknown


This tool is designed to interact with the GitHub API and retrieve specific user details, repository information, and commit emails for a given user.


Install Requests

pip install requests

Execute the program

python3 gitgrab.py
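
A hedged sketch of the kind of GitHub REST API lookup the tool performs (illustrative only, not gitgrab.py's actual code; "octocat" is an example account):

import requests

USER = "octocat"  # example account
# List the user's public repos, then harvest commit author emails.
repos = requests.get(f"https://api.github.com/users/{USER}/repos", timeout=10).json()
emails = set()
for repo in repos[:5]:  # a few repos only, to respect unauthenticated rate limits
    commits = requests.get(
        f"https://api.github.com/repos/{USER}/{repo['name']}/commits",
        params={"author": USER, "per_page": 30},
        timeout=10,
    ).json()
    for c in commits:
        if isinstance(c, dict) and c.get("commit"):
            emails.add(c["commit"]["author"]["email"])
print(emails)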



PIP-INTEL - OSINT and Cyber Intelligence Tool

By: Unknown

 


Pip-Intel is a powerful tool designed for OSINT (Open Source Intelligence) and cyber intelligence gathering activities. It consolidates various open-source tools into a single user-friendly interface, simplifying the data collection and analysis processes for researchers and cybersecurity professionals.

Pip-Intel utilizes pip-installable Python packages to gather information from various data points. This tool is equipped with the capability to collect detailed information through email addresses, phone numbers, IP addresses, and social media accounts. It offers a wide range of functionalities including email-based OSINT operations, phone number-based inquiries, geolocating IP addresses, social media and user analyses, and even dark web searches.




Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

By: Zion3R


Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.


Extracted Details:

Uscrapper extracts the following details from the provided website:

  • Email Addresses: Displays email addresses found on the website.
  • Social Media Links: Displays links to various social media platforms found on the website.
  • Author Names: Displays the names of authors associated with the website.
  • Geolocations: Displays geolocation information associated with the website.
  • Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers, and usernames.
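
To give a flavor of the regex side of this extraction, a minimal email-matching sketch (illustrative only; Uscrapper's actual patterns are more involved):

import re

# A simple email pattern of the kind such scrapers rely on (illustrative).
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

html = '<p>Contact: alice@example.com or <a href="mailto:bob@example.org">Bob</a></p>'
print(EMAIL_RE.findall(html))  # ['alice@example.com', 'bob@example.org']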

What's New?:

Uscrapper 2.0:

  • Introduced multiple modules to bypass anti-web-scraping techniques.
  • Introduced Crawl and Scrape: an advanced crawl-and-scrape module to scrape websites from within.
  • Implemented multithreading to make these processes faster.

Installation Steps:

git clone https://github.com/z0m31en7/Uscrapper.git
cd Uscrapper/install/ 
chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

Usage:

To run Uscrapper, use the following command-line syntax:

python Uscrapper-v2.0.py [-h] [-u URL] [-c INT] [-t THREADS] [-O] [-ns]


Arguments:

  • -h, --help: Show the help message and exit.
  • -u URL, --url URL: Specify the URL of the website to extract details from.
  • -c INT, --crawl INT: Specify the number of links to crawl.
  • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
  • -O, --generate-report: Generate a report file containing the extracted details.
  • -ns, --nonstrict: Display non-strict usernames during extraction.

Note:

  • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

  • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

  • To bypass some anti-web-scraping methods, we have used Selenium, which can make the overall process slower.

Contribution:

Want a new feature to be added?

  • Make a pull request with all the necessary details, and it will be merged after a review.
  • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare

By: Zion3R


CloakQuest3r is a powerful Python tool meticulously crafted to uncover the true IP address of websites safeguarded by Cloudflare, a widely adopted web security and performance enhancement service. Its core mission is to accurately discern the actual IP address of web servers that are concealed behind Cloudflare's protective shield. Subdomain scanning is employed as a key technique in this pursuit. This tool is an invaluable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures.


Key Features:

  • Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets.

  • Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains.

  • Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables scanning of a substantial list of subdomains without significantly extending the execution time.

  • Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing.

With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers.

Limitations

  • Still in the development phase; sometimes it can't detect the real IP.

  • CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement.

  1. False Negatives: CloakQuest3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures.

  2. Dynamic Environments: Websites' infrastructure and configurations can change over time. The tool may not capture these changes, potentially leading to outdated information.

  3. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare.

This tool is a Proof of Concept and is for Educational Purposes Only.

How to Use:

  1. Run CloakQuest3r with a single command-line argument: the target domain you want to analyze.

     git clone https://github.com/spyboy-productions/CloakQuest3r.git
    cd CloakQuest3r
    pip3 install -r requirements.txt
    python cloakquest3r.py example.com
  2. The tool will check if the website is using Cloudflare (see the sketch after these steps). If not, it will inform you that subdomain scanning is unnecessary.

  3. If Cloudflare is detected, CloakQuest3r will scan for subdomains and identify their real IP addresses.

  4. You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan.

  5. Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing.
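
The Cloudflare check in step 2 can be approximated with a quick DNS/header probe; a minimal sketch (an assumption about the approach, not the tool's actual code):

import socket

import requests

domain = "example.com"  # placeholder target
ip = socket.gethostbyname(domain)
server = requests.get(f"https://{domain}", timeout=10).headers.get("Server", "")
print(f"{domain} resolves to {ip}; Server header: {server!r}")
if server.lower() == "cloudflare":
    print("Fronted by Cloudflare: the resolved IP is an edge node, not the origin.")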

CloakQuest3r simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets.

Run It Online:

Run it online on replit.com: https://replit.com/@spyb0y/CloakQuest3r



Holehe - Tool To Check If An Email Is Used On Different Sites Like Twitter Or Instagram, And Retrieve Information From Sites With The Forgotten Password Function

By: Zion3R

Holehe Online Version

Summary

Efficiently finding registered accounts from emails.

Holehe checks if an email is attached to an account on sites like Twitter, Instagram, Imgur and more than 120 others.


Installation

With PyPI

pip3 install holehe

With Github

git clone https://github.com/megadose/holehe.git
cd holehe/
python3 setup.py install

Quick Start

Holehe can be run from the CLI and rapidly embedded within existing python applications.

CLI Example

holehe test@gmail.com

Python Example

import trio
import httpx

from holehe.modules.social_media.snapchat import snapchat


async def main():
    email = "test@gmail.com"
    out = []
    client = httpx.AsyncClient()

    await snapchat(email, client, out)

    print(out)
    await client.aclose()

trio.run(main)

Module Output

For each module, data is returned in a standard dictionary with the following JSON-equivalent format:

{
    "name": "example",
    "rateLimit": false,
    "exists": true,
    "emailrecovery": "ex****e@gmail.com",
    "phoneNumber": "0*******78",
    "others": null
}
  • rateLimit : Lets you know if you've been rate-limited.
  • exists : If an account exists for the email on that service.
  • emailrecovery : Sometimes partially obfuscated recovery emails are returned.
  • phoneNumber : Sometimes partially obfuscated recovery phone numbers are returned.
  • others : Any extra info.

Rate limit? Change your IP.

Maltego Transform : Holehe Maltego

Thank you to :

Donations

For BTC Donations : 1FHDM49QfZX6pJmhjLE5tB2K6CaTLMZpXZ

License

GNU General Public License v3.0

Built for educational purposes only.

Modules

Name | Domain | Method | Frequent Rate Limit
---- | ------ | ------ | -------------------
aboutme | about.me | register | ✘
adobe | adobe.com | password recovery | ✘
amazon | amazon.com | login | ✘
amocrm | amocrm.com | register | ✘
anydo | any.do | login | ✔
archive | archive.org | register | ✘
armurerieauxerre | armurerie-auxerre.com | register | ✘
atlassian | atlassian.com | register | ✘
axonaut | axonaut.com | register | ✘
babeshows | babeshows.co.uk | register | ✘
badeggsonline | badeggsonline.com | register | ✘
biosmods | bios-mods.com | register | ✘
biotechnologyforums | biotechnologyforums.com | register | ✘
bitmoji | bitmoji.com | login | ✘
blablacar | blablacar.com | register | ✔
blackworldforum | blackworldforum.com | register | ✔
blip | blip.fm | register | ✔
blitzortung | forum.blitzortung.org | register | ✘
bluegrassrivals | bluegrassrivals.com | register | ✘
bodybuilding | bodybuilding.com | register | ✘
buymeacoffee | buymeacoffee.com | register | ✔
cambridgemt | discussion.cambridge-mt.com | register | ✘
caringbridge | caringbridge.org | register | ✘
chinaphonearena | chinaphonearena.com | register | ✘
clashfarmer | clashfarmer.com | register | ✔
codecademy | codecademy.com | register | ✔
codeigniter | forum.codeigniter.com | register | ✘
codepen | codepen.io | register | ✘
coroflot | coroflot.com | register | ✘
cpaelites | cpaelites.com | register | ✘
cpahero | cpahero.com | register | ✘
cracked_to | cracked.to | register | ✔
crevado | crevado.com | register | ✔
deliveroo | deliveroo.com | register | ✔
demonforums | demonforums.net | register | ✔
devrant | devrant.com | register | ✘
diigo | diigo.com | register | ✘
discord | discord.com | register | ✘
docker | docker.com | register | ✘
dominosfr | dominos.fr | register | ✔
ebay | ebay.com | login | ✔
ello | ello.co | register | ✘
envato | envato.com | register | ✘
eventbrite | eventbrite.com | login | ✘
evernote | evernote.com | login | ✘
fanpop | fanpop.com | register | ✘
firefox | firefox.com | register | ✘
flickr | flickr.com | login | ✘
freelancer | freelancer.com | register | ✘
freiberg | drachenhort.user.stunet.tu-freiberg.de | register | ✘
garmin | garmin.com | register | ✔
github | github.com | register | ✘
google | google.com | register | ✔
gravatar | gravatar.com | other | ✘
hubspot | hubspot.com | login | ✘
imgur | imgur.com | register | ✔
insightly | insightly.com | login | ✘
instagram | instagram.com | register | ✔
issuu | issuu.com | register | ✘
koditv | forum.kodi.tv | register | ✘
komoot | komoot.com | register | ✔
laposte | laposte.fr | register | ✘
lastfm | last.fm | register | ✘
lastpass | lastpass.com | register | ✘
mail_ru | mail.ru | password recovery | ✘
mybb | community.mybb.com | register | ✘
myspace | myspace.com | register | ✘
nattyornot | nattyornotforum.nattyornot.com | register | ✘
naturabuy | naturabuy.fr | register | ✘
ndemiccreations | forum.ndemiccreations.com | register | ✘
nextpvr | forums.nextpvr.com | register | ✘
nike | nike.com | register | ✘
nimble | nimble.com | register | ✘
nocrm | nocrm.io | register | ✘
nutshell | nutshell.com | register | ✘
odnoklassniki | ok.ru | password recovery | ✘
office365 | office365.com | other | ✔
onlinesequencer | onlinesequencer.net | register | ✘
parler | parler.com | login | ✘
patreon | patreon.com | login | ✔
pinterest | pinterest.com | register | ✘
pipedrive | pipedrive.com | register | ✘
plurk | plurk.com | register | ✘
pornhub | pornhub.com | register | ✘
protonmail | protonmail.ch | other | ✘
quora | quora.com | register | ✘
rambler | rambler.ru | register | ✘
redtube | redtube.com | register | ✘
replit | replit.com | register | ✔
rocketreach | rocketreach.co | register | ✘
samsung | samsung.com | register | ✘
seoclerks | seoclerks.com | register | ✘
sevencups | 7cups.com | register | ✔
smule | smule.com | register | ✔
snapchat | snapchat.com | login | ✘
soundcloud | soundcloud.com | register | ✘
sporcle | sporcle.com | register | ✘
spotify | spotify.com | register | ✔
strava | strava.com | register | ✘
taringa | taringa.net | register | ✔
teamleader | teamleader.com | register | ✘
teamtreehouse | teamtreehouse.com | register | ✘
tellonym | tellonym.me | register | ✘
thecardboard | thecardboard.org | register | ✘
therianguide | forums.therian-guide.com | register | ✘
thevapingforum | thevapingforum.com | register | ✘
tumblr | tumblr.com | register | ✘
tunefind | tunefind.com | register | ✔
twitter | twitter.com | register | ✘
venmo | venmo.com | register | ✔
vivino | vivino.com | register | ✘
voxmedia | voxmedia.com | register | ✘
vrbo | vrbo.com | register | ✘
vsco | vsco.co | register | ✘
wattpad | wattpad.com | register | ✔
wordpress | wordpress | login | ✘
xing | xing.com | register | ✘
xnxx | xnxx.com | register | ✔
xvideos | xvideos.com | register | ✘
yahoo | yahoo.com | login | ✔
zoho | zoho.com | login | ✔


Xsubfind3R - A CLI Utility To Find A Domain's Known Subdomains From Curated Passive Online Sources

By: Zion3R


xsubfind3r is a command-line interface (CLI) utility to find a domain's known subdomains from curated passive online sources.


Features

  • Fetches domains from curated passive sources to maximize results.

  • Supports stdin and stdout for easy integration into workflows.

  • Cross-Platform (Windows, Linux & macOS).

Installation

Install release binaries (Without Go Installed)

Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

  • ...with wget:

     wget https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
  • ...or, with curl:

     curl -OL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz

...then, extract the binary:

tar xf xsubfind3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

curl -sL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz | tar -xzv

NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xsubfind3r executable.

...move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

sudo mv xsubfind3r /usr/local/bin/

NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

Install source (With Go Installed)

Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

go install ...

go install -v github.com/hueristiq/xsubfind3r/cmd/xsubfind3r@latest

go build ... the development Version

  • Clone the repository

     git clone https://github.com/hueristiq/xsubfind3r.git 
  • Build the utility

     cd xsubfind3r/cmd/xsubfind3r && \
    go build .
  • Move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

     sudo mv xsubfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

NOTE: While the development version is a good way to take a peek at xsubfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

Post Installation

xsubfind3r will work right after installation. However, BeVigil, Chaos, Fullhunt, Github, Intelligence X and Shodan require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xsubfind3r/config.yaml file - created upon first run - and use the YAML format. Multiple API keys can be specified for each of these sources, one of which will be used.

Example config.yaml:

version: 0.3.0
sources:
    - alienvault
    - anubis
    - bevigil
    - chaos
    - commoncrawl
    - crtsh
    - fullhunt
    - github
    - hackertarget
    - intelx
    - shodan
    - urlscan
    - wayback
keys:
    bevigil:
        - awA5nvpKU3N8ygkZ
    chaos:
        - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39asdsd54bbc1aabb208c9acfb
    fullhunt:
        - 0d9652ce-516c-4315-b589-9b241ee6dc24
    github:
        - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
        - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
    intelx:
        - 2.intelx.io:00000000-0000-0000-0000-000000000000
    shodan:
        - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
    urlscan:
        - d4c85d34-e425-446e-d4ab-f5a3412acbe8

Usage

To display the help message for xsubfind3r, use the -h flag:

xsubfind3r -h

help message:


[xsubfind3r ASCII art banner] v0.3.0

USAGE:
  xsubfind3r [OPTIONS]

INPUT:
 -d, --domain string[]              target domains
 -l, --list string                  target domains' list file path

SOURCES:
     --sources bool                 list supported sources
 -u, --sources-to-use string[]      comma(,) separated sources to use
 -e, --sources-to-exclude string[]  comma(,) separated sources to exclude

OPTIMIZATION:
 -t, --threads int                  number of threads (default: 50)

OUTPUT:
     --no-color bool                disable colored output
 -o, --output string                output subdomains' file path
 -O, --output-directory string      output subdomains' directory path
 -v, --verbosity string             debug, info, warning, error, fatal or silent (default: info)

CONFIGURATION:
 -c, --configuration string         configuration file path (default: ~/.hueristiq/xsubfind3r/config.yaml)
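
For example, to enumerate a domain with 50 threads and save the results to a file (a hedged example composed from the flags above; example.com is a placeholder):

xsubfind3r -d example.com -t 50 -o example.com.subdomains.txt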

Contribution

Issues and Pull Requests are welcome! Check out the contribution guidelines.

Licensing

This utility is distributed under the MIT license.



Xurlfind3R - A CLI Utility To Find A Domain's Known URLs From Curated Passive Online Sources

By: Zion3R


xurlfind3r is a command-line interface (CLI) utility to find a domain's known URLs from curated passive online sources.


Features

Installation

Install release binaries (Without Go Installed)

Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

  • ...with wget:

     wget https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
  • ...or, with curl:

     curl -OL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz

...then, extract the binary:

tar xf xurlfind3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

curl -sL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz | tar -xzv

NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xurlfind3r executable.

...move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

sudo mv xurlfind3r /usr/local/bin/

NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

Install source (With Go Installed)

Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

go install ...

go install -v github.com/hueristiq/xurlfind3r/cmd/xurlfind3r@latest

go build ... the development Version

  • Clone the repository

     git clone https://github.com/hueristiq/xurlfind3r.git 
  • Build the utility

     cd xurlfind3r/cmd/xurlfind3r && \
    go build .
  • Move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

     sudo mv xurlfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

NOTE: While the development version is a good way to take a peek at xurlfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

Post Installation

xurlfind3r will work right after installation. However, BeVigil, Github and Intelligence X require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xurlfind3r/config.yaml file - created upon first run - and use the YAML format. Multiple API keys can be specified for each of these sources, one of which will be used.

Example config.yaml:

version: 0.2.0
sources:
    - bevigil
    - commoncrawl
    - github
    - intelx
    - otx
    - urlscan
    - wayback
keys:
    bevigil:
        - awA5nvpKU3N8ygkZ
    github:
        - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
        - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
    intelx:
        - 2.intelx.io:00000000-0000-0000-0000-000000000000
    urlscan:
        - d4c85d34-e425-446e-d4ab-f5a3412acbe8

Usage

To display the help message for xurlfind3r, use the -h flag:

xurlfind3r -h

help message:

[xurlfind3r ASCII art banner] v0.2.0

USAGE:
  xurlfind3r [OPTIONS]

TARGET:
 -d, --domain string              (sub)domain to match URLs

SCOPE:
     --include-subdomains bool    match subdomain's URLs

SOURCES:
 -s, --sources bool               list sources
 -u, --use-sources string         sources to use (default: bevigil,commoncrawl,github,intelx,otx,urlscan,wayback)
     --skip-wayback-robots bool   with wayback, skip parsing robots.txt snapshots
     --skip-wayback-source bool   with wayback, skip parsing source code snapshots

FILTER & MATCH:
 -f, --filter string              regex to filter URLs
 -m, --match string               regex to match URLs

OUTPUT:
     --no-color bool              no color mode
 -o, --output string              output URLs file path
 -v, --verbosity string           debug, info, warning, error, fatal or silent (default: info)

CONFIGURATION:
 -c, --configuration string       configuration file path (default: ~/.hueristiq/xurlfind3r/config.yaml)

Examples

Basic

xurlfind3r -d hackerone.com --include-subdomains

Filter Regex

# filter images
xurlfind3r -d hackerone.com --include-subdomains -f '^https?://[^/]*?/.*\.(jpg|jpeg|png|gif|bmp)(\?[^\s]*)?$'

Match Regex

# match js URLs
xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$'
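
The match flag combines naturally with file output; for example, to keep only JavaScript URLs (a hedged example composed from the flags above):

xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$' -o hackerone.com.js-urls.txt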

Contributing

Issues and Pull Requests are welcome! Check out the contribution guidelines.

Licensing

This utility is distributed under the MIT license.



โŒ