
TruffleHog Explorer - A User-Friendly Web-Based Tool To Visualize And Analyze Data Extracted Using TruffleHog

By: Unknown


Welcome to TruffleHog Explorer, a user-friendly web-based tool to visualize and analyze data extracted using TruffleHog. TruffleHog is one of the most powerful open-source tools for secrets discovery, classification, validation, and analysis. In this context, a secret refers to a credential a machine uses to authenticate itself to another machine: API keys, database passwords, private encryption keys, and more.

With an improved UI/UX, powerful filtering options, and export capabilities, this tool helps security professionals efficiently review potential secrets and credentials found in their repositories.
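To make the triage concrete, here is a minimal, hedged sketch of the kind of filtering the dashboard performs, assuming TruffleHog v3's newline-delimited JSON output (field names such as DetectorName and Verified come from that format; adjust them if your output differs, and the path is a placeholder):

import json

def load_findings(path):
    """Parse TruffleHog --json output (one JSON object per line)."""
    findings = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                findings.append(json.loads(line))
    return findings

findings = load_findings("trufflehog_output.json")  # placeholder path

# Keep only findings TruffleHog could validate against the live service.
verified = [f for f in findings if f.get("Verified")]

# Group by detector (AWS, GitHub PAT, ...) for review.
by_detector = {}
for f in verified:
    by_detector.setdefault(f.get("DetectorName", "unknown"), []).append(f)

for detector, items in sorted(by_detector.items()):
    print(f"{detector}: {len(items)} verified finding(s)")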

⚠️ This dashboard has been tested only with GitHub TruffleHog JSON outputs. Expect updates soon to support additional formats and platforms.

You can use the online version here: TruffleHog Explorer


πŸš€ Features

  • Intuitive UI/UX: Beautiful pastel theme with smooth navigation.
  • Powerful Filtering:
    • Filter findings by repository, detector type, and uploaded file.
    • Flexible date range selection with a calendar picker.
    • Verification status categorization for effective review.
    • Advanced search capabilities for faster identification.
  • Batch Operations:
    • Verify or reject multiple findings with a single click.
    • Toggle visibility of rejected results for a streamlined view.
    • Bulk processing to manage large datasets efficiently.
  • Export Capabilities:
    • Export verified secrets or filtered findings effortlessly.
    • Save and load session backups for continuity.
    • Generate reports in multiple formats (JSON, CSV).
  • Dynamic Sorting:
    • Sort results by repository, date, or verification status.
    • Customizable sorting preferences for a personalized experience.

πŸ“₯ Installation & Usage

1. Clone the Repository

$ git clone https://github.com/yourusername/trufflehog-explorer.git
$ cd trufflehog-explorer

2. Open the index.html

Simply open the index.html file in your preferred web browser.

$ open index.html

πŸ“‚ How to Use

  1. Upload TruffleHog JSON Findings:
    • Click on the "Load Data" section and select your .json files from TruffleHog output.
    • Multiple files are supported.
  2. Apply Filters:
    • Choose filters such as repository, detector type, and verification status.
    • Utilize the date range picker to narrow down findings.
    • Leverage the search function to locate specific findings quickly.
  3. Review Findings:
    • Click on a finding to expand and view its details.
    • Use the action buttons to verify or reject findings.
    • Add comments and annotations for better tracking.
  4. Export Results:
    • Export verified or filtered findings for reporting.
    • Save session data for future review and analysis.
  5. Save Your Progress:
    • Save your session and resume later without losing any progress.
    • Automatic backup feature to prevent data loss.

Happy Securing! πŸ”’



Telegram-Checker - A Python Tool For Checking Telegram Accounts Via Phone Numbers Or Usernames

By: Unknown


Enhanced version of bellingcat's Telegram Phone Checker!

A Python script to check Telegram accounts using phone numbers or usernames.


✨ Features

  • πŸ” Check single or multiple phone numbers and usernames
  • πŸ“ Import numbers from text file
  • πŸ“Έ Auto-download profile pictures
  • πŸ’Ύ Save results as JSON
  • πŸ” Secure credential storage
  • πŸ“Š Detailed user information

πŸš€ Installation

  1. Clone the repository:

git clone https://github.com/unnohwn/telegram-checker.git
cd telegram-checker

  2. Install required packages:

pip install -r requirements.txt

πŸ“¦ Requirements

Contents of requirements.txt:

telethon
rich
click
python-dotenv

Or install packages individually:

pip install telethon rich click python-dotenv

βš™οΈ Configuration

The first time you run the script, you'll need:

  • Telegram API credentials (get them from https://my.telegram.org/apps)
  • Your Telegram phone number, including the + country code
  • A verification code (sent to your Telegram)
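For reference, the underlying lookup works roughly like the contact-import technique used by bellingcat's original checker. The sketch below is illustrative, not the script's own code; the credentials and phone number are placeholders:

from telethon.sync import TelegramClient
from telethon.tl.functions.contacts import ImportContactsRequest, DeleteContactsRequest
from telethon.tl.types import InputPhoneContact

api_id = 12345              # placeholder
api_hash = "your_api_hash"  # placeholder

with TelegramClient("checker_session", api_id, api_hash) as client:
    # Temporarily import the number as a contact to see if an account exists.
    contact = InputPhoneContact(client_id=0, phone="+15551234567",
                                first_name="tmp", last_name="")
    imported = client(ImportContactsRequest([contact]))
    if imported.users:
        user = imported.users[0]
        print(user.id, user.username, user.first_name)
        client(DeleteContactsRequest(id=[user.id]))  # clean up the temp contact
    else:
        print("No Telegram account found for that number.")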

πŸ’» Usage

Run the script:

python telegram_checker.py

Choose from the options:

  1. Check phone numbers from input
  2. Check phone numbers from file
  3. Check usernames from input
  4. Check usernames from file
  5. Clear saved credentials
  6. Exit

πŸ“‚ Output

Results are saved in:

  • results/ - JSON files with detailed information
  • profile_photos/ - Downloaded profile pictures

⚠️ Note

This tool is for educational purposes only. Please respect Telegram's terms of service and user privacy.

πŸ“„ License

MIT License



Telegram-Scraper - A Powerful Python Script That Allows You To Scrape Messages And Media From Telegram Channels Using The Telethon Library

By: Unknown


A powerful Python script that allows you to scrape messages and media from Telegram channels using the Telethon library. Features include real-time continuous scraping, media downloading, and data export capabilities.


Features πŸš€

  • Scrape messages from multiple Telegram channels
  • Download media files (photos, documents)
  • Real-time continuous scraping
  • Export data to JSON and CSV formats
  • SQLite database storage
  • Resume capability (saves progress)
  • Media reprocessing for failed downloads
  • Progress tracking
  • Interactive menu interface

Prerequisites πŸ“‹

Before running the script, you'll need:

  • Python 3.7 or higher
  • Telegram account
  • API credentials from Telegram

Required Python packages

pip install -r requirements.txt

Contents of requirements.txt:

telethon
aiohttp
asyncio

Getting Telegram API Credentials πŸ”‘

  1. Visit https://my.telegram.org/auth
  2. Log in with your phone number
  3. Click on "API development tools"
  4. Fill in the form:
    • App title: Your app name
    • Short name: Your app short name
    • Platform: Can be left as "Desktop"
    • Description: Brief description of your app
  5. Click "Create application"
  6. You'll receive:
    • api_id: A number
    • api_hash: A string of letters and numbers

Keep these credentials safe; you'll need them to run the script!

Setup and Running πŸ”§

  1. Clone the repository:

git clone https://github.com/unnohwn/telegram-scraper.git
cd telegram-scraper

  2. Install requirements:

pip install -r requirements.txt

  3. Run the script:

python telegram-scraper.py

  4. On first run, you'll be prompted to enter:
    • Your API ID
    • Your API Hash
    • Your phone number (with country code) or a bot token; when prompted a second time, use the phone number option
    • Verification code (sent to your Telegram)

Initial Scraping Behavior πŸ•’

When scraping a channel for the first time, please note:

  • The script will attempt to retrieve the entire channel history, starting from the oldest messages
  • Initial scraping can take several minutes or even hours, depending on:
    • The total number of messages in the channel
    • Whether media downloading is enabled
    • The size and number of media files
    • Your internet connection speed
    • Telegram's rate limiting
  • The script uses pagination and maintains state, so if interrupted, it can resume from where it left off (see the sketch below)
  • Progress percentage is displayed in real-time to track the scraping status
  • Messages are stored in the database as they are scraped, so you can start analyzing available data even before the scraping is complete
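A minimal sketch of how such resumable scraping is typically done with Telethon (illustrative only, not the script's actual code; the channel name and credentials are placeholders):

import sqlite3
from telethon.sync import TelegramClient

api_id, api_hash = 12345, "your_api_hash"  # placeholders

db = sqlite3.connect("channelname/channelname.db")
db.execute("CREATE TABLE IF NOT EXISTS messages "
           "(message_id INTEGER PRIMARY KEY, date TEXT, message TEXT)")

# Resume point: the newest message ID we already have.
last_id = db.execute("SELECT COALESCE(MAX(message_id), 0) FROM messages").fetchone()[0]

with TelegramClient("scraper_session", api_id, api_hash) as client:
    # reverse=True walks the history oldest-to-newest, starting after
    # offset_id, so an interrupted run picks up where it left off.
    for msg in client.iter_messages("channelname", reverse=True, offset_id=last_id):
        db.execute("INSERT OR IGNORE INTO messages VALUES (?, ?, ?)",
                   (msg.id, msg.date.isoformat() if msg.date else None, msg.text))
        db.commit()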

Usage πŸ“

The script provides an interactive menu with the following options:

  • [A] Add new channel
    • Enter the channel ID or channelname
  • [R] Remove channel
    • Remove a channel from the scraping list
  • [S] Scrape all channels
    • One-time scraping of all configured channels
  • [M] Toggle media scraping
    • Enable/disable downloading of media files
  • [C] Continuous scraping
    • Real-time monitoring of channels for new messages
  • [E] Export data
    • Export to JSON and CSV formats
  • [V] View saved channels
    • List all saved channels
  • [L] List account channels
    • List all channels with IDs for the account
  • [Q] Quit

Channel IDs πŸ“’

You can use either:

  • Channel username (e.g., channelname)
  • Channel ID (e.g., -1001234567890)

Data Storage πŸ’Ύ

Database Structure

Data is stored in SQLite databases, one per channel:

  • Location: ./channelname/channelname.db
  • Table: messages
    • id: Primary key
    • message_id: Telegram message ID
    • date: Message timestamp
    • sender_id: Sender's Telegram ID
    • first_name: Sender's first name
    • last_name: Sender's last name
    • username: Sender's username
    • message: Message text
    • media_type: Type of media (if any)
    • media_path: Local path to downloaded media
    • reply_to: ID of replied message (if any)
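That column list corresponds to roughly the following schema; this is a sketch inferred from the list above, not the script's literal DDL:

import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS messages (
    id          INTEGER PRIMARY KEY,  -- local primary key
    message_id  INTEGER,              -- Telegram message ID
    date        TEXT,                 -- message timestamp
    sender_id   INTEGER,              -- sender's Telegram ID
    first_name  TEXT,
    last_name   TEXT,
    username    TEXT,
    message     TEXT,                 -- message text
    media_type  TEXT,                 -- type of media, if any
    media_path  TEXT,                 -- local path to downloaded media
    reply_to    INTEGER               -- ID of replied-to message, if any
)
"""

with sqlite3.connect("channelname/channelname.db") as db:
    db.execute(SCHEMA)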

Media Storage πŸ“

Media files are stored in:

  • Location: ./channelname/media/
  • Files are named using the message ID or original filename

Exported Data πŸ“Š

Data can be exported in two formats:

  1. CSV: ./channelname/channelname.csv
    • Human-readable spreadsheet format
    • Easy to import into Excel/Google Sheets
  2. JSON: ./channelname/channelname.json
    • Structured data format
    • Ideal for programmatic processing
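A minimal export sketch under the layout described above (illustrative, not the script's own code):

import csv
import json
import sqlite3

channel = "channelname"  # placeholder
db = sqlite3.connect(f"{channel}/{channel}.db")
db.row_factory = sqlite3.Row
rows = [dict(r) for r in db.execute("SELECT * FROM messages ORDER BY message_id")]

# CSV: human-readable, spreadsheet-friendly.
with open(f"{channel}/{channel}.csv", "w", newline="", encoding="utf-8") as fh:
    if rows:
        writer = csv.DictWriter(fh, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

# JSON: structured, for programmatic processing.
with open(f"{channel}/{channel}.json", "w", encoding="utf-8") as fh:
    json.dump(rows, fh, ensure_ascii=False, indent=2)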

Features in Detail πŸ”

Continuous Scraping

The continuous scraping feature ([C] option) allows you to:

  • Monitor channels in real-time
  • Automatically download new messages
  • Download media as it's posted
  • Run indefinitely until interrupted (Ctrl+C)
  • Maintain state between runs

Media Handling

The script can download:

  • Photos
  • Documents
  • Other media types supported by Telegram

It automatically retries failed downloads and skips existing files to avoid duplicates.

Error Handling πŸ› οΈ

The script includes:

  • Automatic retry mechanism for failed media downloads
  • State preservation in case of interruption
  • Flood control compliance
  • Error logging for failed operations

Limitations ⚠️

  • Respects Telegram's rate limits
  • Can only access public channels or channels you're a member of
  • Media download size limits apply as per Telegram's restrictions

Contributing 🀝

Contributions are welcome! Please feel free to submit a Pull Request.

License πŸ“„

This project is licensed under the MIT License - see the LICENSE file for details.

Disclaimer βš–οΈ

This tool is for educational purposes only. Make sure to:

  • Respect Telegram's Terms of Service
  • Obtain necessary permissions before scraping
  • Use responsibly and ethically
  • Comply with data protection regulations



Telegram-Story-Scraper - A Python Script That Allows You To Automatically Scrape And Download Stories From Your Telegram Friends

By: Unknown


A Python script that allows you to automatically scrape and download stories from your Telegram friends using the Telethon library. The script continuously monitors and saves both photos and videos from stories, along with their metadata.


Important Note About Story Access ⚠️

Due to Telegram API restrictions, this script can only access stories from:

  • Users you have added to your friend list
  • Users whose privacy settings allow you to view their stories

This is a limitation of Telegram's API and cannot be bypassed.

Features πŸš€

  • Automatically scrapes all available stories from your Telegram friends
  • Downloads both photos and videos from stories
  • Stores metadata in SQLite database
  • Exports data to Excel spreadsheet
  • Real-time monitoring with customizable intervals
  • Timestamps are recorded in UTC+2
  • Maintains record of previously downloaded stories
  • Resume capability
  • Automatic retry mechanism

Prerequisites πŸ“‹

Before running the script, you'll need:

  • Python 3.7 or higher
  • Telegram account
  • API credentials from Telegram
  • Friends on Telegram whose stories you want to track

Required Python packages

pip install -r requirements.txt

Contents of requirements.txt:

telethon
openpyxl
schedule

Getting Telegram API Credentials πŸ”‘

  1. Visit https://my.telegram.org/auth
  2. Log in with your phone number
  3. Click on "API development tools"
  4. Fill in the form:
    • App title: Your app name
    • Short name: Your app short name
    • Platform: Can be left as "Desktop"
    • Description: Brief description of your app
  5. Click "Create application"
  6. You'll receive:
    • api_id: A number
    • api_hash: A string of letters and numbers

Keep these credentials safe; you'll need them to run the script!

Setup and Running πŸ”§

  1. Clone the repository:

git clone https://github.com/unnohwn/telegram-story-scraper.git
cd telegram-story-scraper

  2. Install requirements:

pip install -r requirements.txt

  3. Run the script:

python TGSS.py

  4. On first run, you'll be prompted to enter:
    • Your API ID
    • Your API Hash
    • Your phone number (with country code)
    • Verification code (sent to your Telegram)
    • Checking interval in seconds (default is 60)

How It Works πŸ”„

The script:

  1. Connects to your Telegram account
  2. Periodically checks for new stories from your friends
  3. Downloads any new stories (photos/videos)
  4. Stores metadata in a SQLite database
  5. Exports information to an Excel file
  6. Runs continuously until interrupted (Ctrl+C)
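A hedged sketch of that polling loop, using the schedule package from requirements.txt (the function body is a placeholder for steps 2 through 5):

import time

import schedule

def check_for_new_stories():
    # Placeholder for steps 2-5: fetch friends' active stories, download
    # new media, record metadata in stories.db, refresh the Excel export.
    print("checking for new stories...")

interval = 60  # seconds; the script prompts for this on first run
schedule.every(interval).seconds.do(check_for_new_stories)

while True:  # step 6: runs until interrupted (Ctrl+C)
    schedule.run_pending()
    time.sleep(1)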

Data Storage πŸ’Ύ

Database Structure (stories.db)

SQLite database containing:

  • user_id: Telegram user ID of the story creator
  • story_id: Unique story identifier
  • timestamp: When the story was posted (UTC+2)
  • filename: Local filename of the downloaded media

CSV and Excel Export (stories_export.csv/xlsx)

Export file containing the same information as the database, useful for:

  • Easy viewing of story metadata
  • Filtering and sorting
  • Data analysis
  • Sharing data with others

Media Storage πŸ“

  • Photos are saved as: {user_id}_{story_id}.jpg
  • Videos are saved with their original extension: {user_id}_{story_id}.{extension}
  • All media files are saved in the script's directory

Features in Detail πŸ”

Continuous Monitoring

  • Customizable checking interval (default: 60 seconds)
  • Runs continuously until manually stopped
  • Maintains state between runs
  • Avoids duplicate downloads

Media Handling

  • Supports both photos and videos
  • Automatically detects media type
  • Preserves original quality
  • Generates unique filenames

Error Handling πŸ› οΈ

The script includes:

  • Automatic retry mechanism for failed downloads
  • Error logging for failed operations
  • Connection error handling
  • State preservation in case of interruption

Limitations ⚠️

  • Subject to Telegram's rate limits
  • Stories must be currently active (not expired)
  • Media download size limits apply as per Telegram's restrictions

Contributing 🀝

Contributions are welcome! Please feel free to submit a Pull Request.

License πŸ“„

This project is licensed under the MIT License - see the LICENSE file for details.

Disclaimer βš–οΈ

This tool is for educational purposes only. Make sure to:

  • Respect Telegram's Terms of Service
  • Obtain necessary permissions before scraping
  • Use responsibly and ethically
  • Comply with data protection regulations
  • Respect user privacy



gitGRAB - This Tool Is Designed To Interact With The GitHub API And Retrieve Specific User Details, Repository Information, And Commit Emails For A Given User

By: Unknown


This tool is designed to interact with the GitHub API and retrieve specific user details, repository information, and commit emails for a given user.


Install Requests

pip install requests

Execute the program

python3 gitgrab.py
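The description above maps onto three standard GitHub REST API lookups. The sketch below is illustrative, not the tool's own code; unauthenticated requests are rate-limited, and the username is a placeholder:

import requests

API = "https://api.github.com"
user = "octocat"  # placeholder username

# 1. User details.
profile = requests.get(f"{API}/users/{user}", timeout=10).json()
print(profile.get("name"), profile.get("location"), profile.get("created_at"))

# 2. Repository information.
for repo in requests.get(f"{API}/users/{user}/repos", timeout=10).json():
    print(repo["full_name"], repo.get("stargazers_count"))

# 3. Commit emails, as they surface in public push events.
emails = set()
for event in requests.get(f"{API}/users/{user}/events/public", timeout=10).json():
    if event.get("type") == "PushEvent":
        for commit in event.get("payload", {}).get("commits", []):
            email = commit.get("author", {}).get("email", "")
            if email and not email.endswith("users.noreply.github.com"):
                emails.add(email)
print(sorted(emails))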



PIP-INTEL - OSINT and Cyber Intelligence Tool

By: Unknown



Pip-Intel is a powerful tool designed for OSINT (Open Source Intelligence) and cyber intelligence gathering activities. It consolidates various open-source tools into a single user-friendly interface, simplifying the data collection and analysis processes for researchers and cybersecurity professionals.

Pip-Intel utilizes Python-written pip packages to gather information from various data points. This tool is equipped with the capability to collect detailed information through email addresses, phone numbers, IP addresses, and social media accounts. It offers a wide range of functionalities including email-based OSINT operations, phone number-based inquiries, geolocating IP addresses, social media and user analyses, and even dark web searches.




Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

By: Zion3R


Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various kinds of personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on a webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
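As a rough illustration of the regex-extraction idea (the patterns below are simplified stand-ins, not Uscrapper's own, and will miss edge cases):

import re

import requests

html = requests.get("https://example.com", timeout=10).text  # placeholder URL

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")
SOCIAL_RE = re.compile(
    r"https?://(?:www\.)?(?:twitter|facebook|linkedin|instagram)\.com/[\w./-]+")

print("Emails:", sorted(set(EMAIL_RE.findall(html))))
print("Phones:", sorted(set(PHONE_RE.findall(html))))
print("Social:", sorted(set(SOCIAL_RE.findall(html))))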


Extracted Details:

Uscrapper extracts the following details from the provided website:

  • Email Addresses: Displays email addresses found on the website.
  • Social Media Links: Displays links to various social media platforms found on the website.
  • Author Names: Displays the names of authors associated with the website.
  • Geolocations: Displays geolocation information associated with the website.
  • Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers, and usernames.

What's New?

Uscrapper 2.0:

  • Introduced multiple modules to bypass anti-web-scraping techniques.
  • Introduced Crawl and Scrape: an advanced crawl-and-scrape module to scrape websites from within.
  • Implemented multithreading to make these processes faster.

Installation Steps:

git clone https://github.com/z0m31en7/Uscrapper.git
cd Uscrapper/install/ 
chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

Usage:

To run Uscrapper, use the following command-line syntax:

python Uscrapper-v2.0.py [-h] [-u URL] [-c INT] [-t THREADS] [-O] [-ns]


Arguments:

  • -h, --help: Show the help message and exit.
  • -u URL, --url URL: Specify the URL of the website to extract details from.
  • -c INT, --crawl INT: Specify the number of links to crawl.
  • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
  • -O, --generate-report: Generate a report file containing the extracted details.
  • -ns, --nonstrict: Display non-strict usernames during extraction.

Note:

  • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

  • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

  • To bypass some anti-web-scraping methods, we have used Selenium, which can make the overall process slower.

Contribution:

Want a new feature to be added?

  • Make a pull request with all the necessary details and it will be merged after a review.
  • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare

By: Zion3R


CloakQuest3r is a powerful Python tool meticulously crafted to uncover the true IP address of websites safeguarded by Cloudflare, a widely adopted web security and performance enhancement service. Its core mission is to accurately discern the actual IP address of web servers that are concealed behind Cloudflare's protective shield. Subdomain scanning is employed as a key technique in this pursuit. This tool is an invaluable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures.


Key Features:

  • Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets.

  • Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains.

  • Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables scanning of a substantial list of subdomains without significantly extending the execution time.

  • Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing.

With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers.
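The core technique can be sketched in a few lines: resolve candidate subdomains and flag any IP that falls outside Cloudflare's published ranges. The snippet below is illustrative, not the tool's own code; the CIDR list is deliberately truncated (see https://www.cloudflare.com/ips/ for the full set) and the wordlist is a placeholder:

import socket
from ipaddress import ip_address, ip_network

# Truncated, illustrative subset of Cloudflare's published ranges.
CLOUDFLARE_RANGES = [ip_network(c) for c in (
    "104.16.0.0/13", "172.64.0.0/13", "173.245.48.0/20",
)]

def behind_cloudflare(ip):
    return any(ip_address(ip) in net for net in CLOUDFLARE_RANGES)

domain = "example.com"  # placeholder target
for sub in ("www", "mail", "dev", "staging", "ftp"):  # placeholder wordlist
    host = f"{sub}.{domain}"
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        continue
    status = "cloudflare" if behind_cloudflare(ip) else "possible real IP"
    print(f"{host:30} {ip:16} {status}")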

Limitation

  • Still in the development phase; sometimes it can't detect the real IP.

  • CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement.

  1. False Negatives: CloakQuest3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures.

  2. Dynamic Environments: Websites' infrastructure and configurations can change over time. The tool may not capture these changes, potentially leading to outdated information.

  3. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare.

This tool is a Proof of Concept and is for Educational Purposes Only.

How to Use:

  1. Run CloakQuest3r with a single command-line argument: the target domain you want to analyze.

     git clone https://github.com/spyboy-productions/CloakQuest3r.git
     cd CloakQuest3r
     pip3 install -r requirements.txt
     python cloakquest3r.py example.com

  2. The tool will check if the website is using Cloudflare. If not, it will inform you that subdomain scanning is unnecessary.

  3. If Cloudflare is detected, CloakQuest3r will scan for subdomains and identify their real IP addresses.

  4. You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan.

  5. Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing.

CloakQuest3r simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets.

Run It Online:

Run it online on replit.com : https://replit.com/@spyb0y/CloakQuest3r



Holehe - Tool To Check If The Mail Is Used On Different Sites Like Twitter, Instagram And Will Retrieve Information On Sites With The Forgotten Password Function

By: Zion3R

Holehe Online Version

Summary

Efficiently finding registered accounts from emails.

Holehe checks if an email is attached to an account on sites like Twitter, Instagram, Imgur, and more than 120 others.


Installation

With PyPI

pip3 install holehe

With Github

git clone https://github.com/megadose/holehe.git
cd holehe/
python3 setup.py install

Quick Start

Holehe can be run from the CLI and rapidly embedded within existing Python applications.

CLI Example

holehe test@gmail.com

Python Example

import trio
import httpx

from holehe.modules.social_media.snapchat import snapchat


async def main():
    email = "test@gmail.com"
    out = []
    client = httpx.AsyncClient()

    await snapchat(email, client, out)

    print(out)
    await client.aclose()

trio.run(main)

Module Output

For each module, data is returned in a standard dictionary with the following JSON-equivalent format:

{
    "name": "example",
    "rateLimit": false,
    "exists": true,
    "emailrecovery": "ex****e@gmail.com",
    "phoneNumber": "0*******78",
    "others": null
}

  • rateLimit: Lets you know if you've been rate-limited.
  • exists: Whether an account exists for the email on that service.
  • emailrecovery: Sometimes a partially obfuscated recovery email is returned.
  • phoneNumber: Sometimes a partially obfuscated recovery phone number is returned.
  • others: Any extra info.
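Since each module appends one such dict to its out list, aggregating results across modules is a simple filter on these fields. A small, self-contained post-processing sketch (the results list is illustrative data in the format above):

results = [
    {"name": "example", "rateLimit": False, "exists": True,
     "emailrecovery": "ex****e@gmail.com", "phoneNumber": "0*******78",
     "others": None},
    {"name": "othersite", "rateLimit": True, "exists": False,
     "emailrecovery": None, "phoneNumber": None, "others": None},
]

hits = [r["name"] for r in results if r["exists"] and not r["rateLimit"]]
rate_limited = [r["name"] for r in results if r["rateLimit"]]

print("Account found on:", ", ".join(hits) or "none")
if rate_limited:
    print("Re-check after an IP change:", ", ".join(rate_limited))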

Rate limit? Change your IP.

Maltego Transform: Holehe Maltego

Thank you to:

Donations

For BTC Donations : 1FHDM49QfZX6pJmhjLE5tB2K6CaTLMZpXZ

License

GNU General Public License v3.0

Built for educational purposes only.

Modules

Name Domain Method Frequent Rate Limit
aboutme about.me register ✘
adobe adobe.com password recovery ✘
amazon amazon.com login ✘
amocrm amocrm.com register ✘
anydo any.do login βœ”
archive archive.org register ✘
armurerieauxerre armurerie-auxerre.com register ✘
atlassian atlassian.com register ✘
axonaut axonaut.com register ✘
babeshows babeshows.co.uk register ✘
badeggsonline badeggsonline.com register ✘
biosmods bios-mods.com register ✘
biotechnologyforums biotechnologyforums.com register ✘
bitmoji bitmoji.com login ✘
blablacar blablacar.com register βœ”
blackworldforum blackworldforum.com register βœ”
blip blip.fm register βœ”
blitzortung forum.blitzortung.org register ✘
bluegrassrivals bluegrassrivals.com register ✘
bodybuilding bodybuilding.com register ✘
buymeacoffee buymeacoffee.com register βœ”
cambridgemt discussion.cambridge-mt.com register ✘
caringbridge caringbridge.org register ✘
chinaphonearena chinaphonearena.com register ✘
clashfarmer clashfarmer.com register βœ”
codecademy codecademy.com register βœ”
codeigniter forum.codeigniter.com register ✘
codepen codepen.io register ✘
coroflot coroflot.com register ✘
cpaelites cpaelites.com register ✘
cpahero cpahero.com register ✘
cracked_to cracked.to register βœ”
crevado crevado.com register βœ”
deliveroo deliveroo.com register βœ”
demonforums demonforums.net register βœ”
devrant devrant.com register ✘
diigo diigo.com register ✘
discord discord.com register ✘
docker docker.com register ✘
dominosfr dominos.fr register βœ”
ebay ebay.com login βœ”
ello ello.co register ✘
envato envato.com register ✘
eventbrite eventbrite.com login ✘
evernote evernote.com login ✘
fanpop fanpop.com register ✘
firefox firefox.com register ✘
flickr flickr.com login ✘
freelancer freelancer.com register ✘
freiberg drachenhort.user.stunet.tu-freiberg.de register ✘
garmin garmin.com register βœ”
github github.com register ✘
google google.com register βœ”
gravatar gravatar.com other ✘
hubspot hubspot.com login ✘
imgur imgur.com register βœ”
insightly insightly.com login ✘
instagram instagram.com register βœ”
issuu issuu.com register ✘
koditv forum.kodi.tv register ✘
komoot komoot.com register βœ”
laposte laposte.fr register ✘
lastfm last.fm register ✘
lastpass lastpass.com register ✘
mail_ru mail.ru password recovery ✘
mybb community.mybb.com register ✘
myspace myspace.com register ✘
nattyornot nattyornotforum.nattyornot.com register ✘
naturabuy naturabuy.fr register ✘
ndemiccreations forum.ndemiccreations.com register ✘
nextpvr forums.nextpvr.com register ✘
nike nike.com register ✘
nimble nimble.com register ✘
nocrm nocrm.io register ✘
nutshell nutshell.com register ✘
odnoklassniki ok.ru password recovery ✘
office365 office365.com other βœ”
onlinesequencer onlinesequencer.net register ✘
parler parler.com login ✘
patreon patreon.com login βœ”
pinterest pinterest.com register ✘
pipedrive pipedrive.com register ✘
plurk plurk.com register ✘
pornhub pornhub.com register ✘
protonmail protonmail.ch other ✘
quora quora.com register ✘
rambler rambler.ru register ✘
redtube redtube.com register ✘
replit replit.com register βœ”
rocketreach rocketreach.co register ✘
samsung samsung.com register ✘
seoclerks seoclerks.com register ✘
sevencups 7cups.com register βœ”
smule smule.com register βœ”
snapchat snapchat.com login ✘
soundcloud soundcloud.com register ✘
sporcle sporcle.com register ✘
spotify spotify.com register βœ”
strava strava.com register ✘
taringa taringa.net register βœ”
teamleader teamleader.com register ✘
teamtreehouse teamtreehouse.com register ✘
tellonym tellonym.me register ✘
thecardboard thecardboard.org register ✘
therianguide forums.therian-guide.com register ✘
thevapingforum thevapingforum.com register ✘
tumblr tumblr.com register ✘
tunefind tunefind.com register βœ”
twitter twitter.com register ✘
venmo venmo.com register βœ”
vivino vivino.com register ✘
voxmedia voxmedia.com register ✘
vrbo vrbo.com register ✘
vsco vsco.co register ✘
wattpad wattpad.com register βœ”
wordpress wordpress login ✘
xing xing.com register ✘
xnxx xnxx.com register βœ”
xvideos xvideos.com register ✘
yahoo yahoo.com login βœ”
zoho zoho.com login βœ”


Xsubfind3R - A CLI Utility To Find A Domain's Known Subdomains From Curated Passive Online Sources

By: Zion3R


xsubfind3r is a command-line interface (CLI) utility to find a domain's known subdomains from curated passive online sources.


Features

  • Fetches domains from curated passive sources to maximize results.

  • Supports stdin and stdout for easy integration into workflows.

  • Cross-Platform (Windows, Linux & macOS).

Installation

Install release binaries (Without Go Installed)

Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

  • ...with wget:

     wget https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
  • ...or, with curl:

     curl -OL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz

...then, extract the binary:

tar xf xsubfind3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

curl -sL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz | tar -xzv

NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xsubfind3r executable.

...move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

sudo mv xsubfind3r /usr/local/bin/

NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

Install from source (With Go Installed)

Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

go install ...

go install -v github.com/hueristiq/xsubfind3r/cmd/xsubfind3r@latest

go build ... the development version

  • Clone the repository

     git clone https://github.com/hueristiq/xsubfind3r.git 
  • Build the utility

     cd xsubfind3r/cmd/xsubfind3r && \
    go build .
  • Move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

     sudo mv xsubfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

NOTE: While the development version is a good way to take a peek at xsubfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

Post Installation

xsubfind3r will work right after installation. However, BeVigil, Chaos, Fullhunt, GitHub, Intelligence X, and Shodan require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xsubfind3r/config.yaml file, created upon first run, and use the YAML format. Multiple API keys can be specified for each of these sources, and one of them will be used.

Example config.yaml:

version: 0.3.0
sources:
    - alienvault
    - anubis
    - bevigil
    - chaos
    - commoncrawl
    - crtsh
    - fullhunt
    - github
    - hackertarget
    - intelx
    - shodan
    - urlscan
    - wayback
keys:
    bevigil:
        - awA5nvpKU3N8ygkZ
    chaos:
        - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39asdsd54bbc1aabb208c9acfb
    fullhunt:
        - 0d9652ce-516c-4315-b589-9b241ee6dc24
    github:
        - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
        - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
    intelx:
        - 2.intelx.io:00000000-0000-0000-0000-000000000000
    shodan:
        - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
    urlscan:
        - d4c85d34-e425-446e-d4ab-f5a3412acbe8

Usage

To display the help message for xsubfind3r, use the -h flag:

xsubfind3r -h

help message:


[ASCII art banner: xsubfind3r v0.3.0]

USAGE:
xsubfind3r [OPTIONS]

INPUT:
-d, --domain string[] target domains
-l, --list string target domains' list file path

SOURCES:
--sources bool list supported sources
-u, --sources-to-use string[] comma(,) separeted sources to use
-e, --sources-to-exclude string[] comma(,) separeted sources to exclude

OPTIMIZATION:
-t, --threads int number of threads (default: 50)

OUTPUT:
--no-color bool disable colored output
-o, --output string output subdomains' file path
-O, --output-directory string output subdomains' directory path
-v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

CONFIGURATION:
-c, --configuration string configuration file path (default: ~/.hueristiq/xsubfind3r/config.yaml)

Contribution

Issues and Pull Requests are welcome! Check out the contribution guidelines.

Licensing

This utility is distributed under the MIT license.



Xurlfind3R - A CLI Utility To Find A Domain's Known URLs From Curated Passive Online Sources

By: Zion3R


xurlfind3r is a command-line interface (CLI) utility to find a domain's known URLs from curated passive online sources.


Features

Installation

Install release binaries (Without Go Installed)

Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

  • ...with wget:

     wget https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
  • ...or, with curl:

     curl -OL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz

...then, extract the binary:

tar xf xurlfind3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

curl -sL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz | tar -xzv

NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xurlfind3r executable.

...move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

sudo mv xurlfind3r /usr/local/bin/

NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

Install from source (With Go Installed)

Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

go install ...

go install -v github.com/hueristiq/xurlfind3r/cmd/xurlfind3r@latest

go build ... the development version

  • Clone the repository

     git clone https://github.com/hueristiq/xurlfind3r.git 
  • Build the utility

     cd xurlfind3r/cmd/xurlfind3r && \
    go build .
  • Move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

     sudo mv xurlfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

NOTE: While the development version is a good way to take a peek at xurlfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

Post Installation

xurlfind3r will work right after installation. However, BeVigil, GitHub, and Intelligence X require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xurlfind3r/config.yaml file, created upon first run, and use the YAML format. Multiple API keys can be specified for each of these sources, and one of them will be used.

Example config.yaml:

version: 0.2.0
sources:
    - bevigil
    - commoncrawl
    - github
    - intelx
    - otx
    - urlscan
    - wayback
keys:
    bevigil:
        - awA5nvpKU3N8ygkZ
    github:
        - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
        - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
    intelx:
        - 2.intelx.io:00000000-0000-0000-0000-000000000000
    urlscan:
        - d4c85d34-e425-446e-d4ab-f5a3412acbe8

Usage

To display the help message for xurlfind3r, use the -h flag:

xurlfind3r -h

help message:

[ASCII art banner: xurlfind3r v0.2.0]

USAGE:
xurlfind3r [OPTIONS]

TARGET:
-d, --domain string (sub)domain to match URLs

SCOPE:
--include-subdomains bool match subdomain's URLs

SOURCES:
-s, --sources bool list sources
-u, --use-sources string sources to use (default: bevigil,commoncrawl,github,intelx,otx,urlscan,wayback)
--skip-wayback-robots bool with wayback, skip parsing robots.txt snapshots
--skip-wayback-source bool with wayback, skip parsing source code snapshots

FILTER & MATCH:
-f, --filter string regex to filter URLs
-m, --match string regex to match URLs

OUTPUT:
--no-color bool no color mode
-o, --output string output URLs file path
-v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

CONFIGURATION:
-c, --configuration string configuration file path (default: ~/.hueristiq/xurlfind3r/config.yaml)

Examples

Basic

xurlfind3r -d hackerone.com --include-subdomains

Filter Regex

# filter images
xurlfind3r -d hackerone.com --include-subdomains -f '^https?://[^/]*?/.*\.(jpg|jpeg|png|gif|bmp)(\?[^\s]*)?$'

Match Regex

# match js URLs
xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$'

Contributing

Issues and Pull Requests are welcome! Check out the contribution guidelines.

Licensing

This utility is distributed under the MIT license.


