API-s-for-OSINT - List Of APIs For Gathering Information About Phone Numbers, Addresses, Domains Etc


APIs For OSINT

This is a collection of APIs that are useful for automating various OSINT tasks.

Thank you for following me! https://cybdetective.com


IoT/IP Search engines

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Shodan | https://developer.shodan.io | Search engine for Internet-connected hosts and devices | from $59/month |
| Netlas.io | https://netlas-api.readthedocs.io/en/latest/ | Search engine for Internet-connected hosts and devices. Read more at Netlas CookBook | Partly FREE |
| Fofa.so | https://fofa.so/static_pages/api_help | Search engine for Internet-connected hosts and devices | ??? |
| Censys.io | https://censys.io/api | Search engine for Internet-connected hosts and devices | Partly FREE |
| Hunter.how | https://hunter.how/search-api | Search engine for Internet-connected hosts and devices | Partly FREE |
| Fullhunt.io | https://api-docs.fullhunt.io/#introduction | Search engine for Internet-connected hosts and devices | Partly FREE |
| IPQuery.io | https://ipquery.io | API for IP information such as IP risk, geolocation data, and ASN details | FREE |
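Most of these engines expose simple REST endpoints. As an illustration, a minimal Shodan host lookup with the `requests` library might look like this (the API key is a placeholder):

```python
import requests

API_KEY = "YOUR_SHODAN_KEY"  # placeholder; get a key from your Shodan account

# Host lookup: open ports, banners, and geolocation for an IP
resp = requests.get("https://api.shodan.io/shodan/host/8.8.8.8",
                    params={"key": API_KEY}, timeout=30)
resp.raise_for_status()
host = resp.json()
print(host.get("org"), host.get("country_name"))
for service in host.get("data", []):
    print(service["port"], service.get("product", ""))
```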

    Universal OSINT APIs

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Social Links | https://sociallinks.io/products/sl-api | Email info lookup, phone info lookup, individual and company profiling, social media tracking, dark web monitoring and more. Code example of using this API for face search in this repo | PAID. Price per request |

    Phone Number Lookup and Verification

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Numverify | https://numverify.com | Global phone number validation & lookup JSON API. Supports 232 countries. | 250 requests FREE |
| Twilio | https://www.twilio.com/docs/lookup/api | Retrieve additional information about a phone number | Free, or $0.01 per request for caller lookup |
| Plivo | https://www.plivo.com/lookup/ | Determine carrier, number type, format, and country for any phone number worldwide | from $0.04 per request |
| GetContact | https://github.com/kovinevmv/getcontact | Find info about a user by phone number | from $6.89/month for 100 requests |
| Veriphone | https://veriphone.io/ | Phone number validation & carrier lookup | 1000 requests/month FREE |
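A minimal sketch of a Numverify validation call (the access key is a placeholder):

```python
import requests

resp = requests.get("http://apilayer.net/api/validate",
                    params={"access_key": "YOUR_NUMVERIFY_KEY",  # placeholder
                            "number": "14158586273"},
                    timeout=30)
data = resp.json()
print(data.get("valid"), data.get("carrier"),
      data.get("line_type"), data.get("country_name"))
```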

    Address/ZIP codes lookup

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Global Address | https://rapidapi.com/adminMelissa/api/global-address/ | Easily verify, check, or look up addresses | FREE |
| US Street Address | https://smartystreets.com/docs/cloud/us-street-api | Validate and append data for any US postal address | FREE |
| Google Maps Geocoding API | https://developers.google.com/maps/documentation/geocoding/overview | Convert addresses (like "1600 Amphitheatre Parkway, Mountain View, CA") into geographic coordinates | $0.005 per request |
| Postcoder | https://postcoder.com/address-lookup | Find an address by postcode | Β£130/5000 requests |
| Zipcodebase | https://zipcodebase.com | Look up postal codes, calculate distances, and much more | 5000 requests FREE |
| Openweathermap geocoding API | https://openweathermap.org/api/geocoding-api | Get geographic coordinates (lat, lon) from a location name (city or area name) | 60 calls/minute, 1,000,000 calls/month |
| DistanceMatrix | https://distancematrix.ai/product | Calculate, evaluate, and plan your routes | $1.25-$2 per 1000 elements |
| Geotagging API | https://geotagging.ai/ | Predict geolocation from text | Freemium |

    People and documents verification

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Appruve | https://appruve.co | Verify the identities of individuals and businesses, and connect to financial account data across Africa | Paid |
| Onfido | https://onfido.com | Onfido Document Verification lets your users scan a photo ID from any device, then checks that it's genuine. Combined with Biometric Verification, it's a seamless way to anchor an account to the real identity of a customer. | Paid |
| Surepass.io | https://surepass.io/passport-id-verification-api/ | Passport, photo ID, and driver license verification in India | Paid |

    Business/Entity search

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| OpenCorporates | https://api.opencorporates.com | Company information | Paid, price upon request |
| LinkedIn company search API | https://docs.microsoft.com/en-us/linkedin/marketing/integrations/community-management/organizations/company-search?context=linkedin%2Fcompliance%2Fcontext&tabs=http | Find companies using keywords, industry, location, and other criteria | FREE |
| Mattermark | https://rapidapi.com/raygorodskij/api/Mattermark/ | Get company and investor information | free 14-day trial, from $49/month |

    Domain/DNS/IP lookup

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| API OSINT DS | https://github.com/davidonzo/apiosintDS | Collect info about IPv4s/FQDNs/URLs and file hashes in MD5, SHA1, or SHA256 | FREE |
| InfoDB API | https://www.ipinfodb.com/api | Returns the location of an IP address (country, region, city, zipcode, latitude, longitude) and the associated timezone in XML, JSON, or plain text | FREE |
| Domainsdb.info | https://domainsdb.info | Registered domain name search | FREE |
| BGPView | https://bgpview.docs.apiary.io/# | View all sorts of analytics data about the current state and structure of the internet | FREE |
| DNSCheck | https://www.dnscheck.co/api | Monitor the status of both individual DNS records and groups of related DNS records | up to 10 DNS records/FREE |
| Cloudflare Trace | https://github.com/fawazahmed0/cloudflare-trace-api | Get IP address, timestamp, user agent, country code, IATA, HTTP version, TLS/SSL version & more | FREE |
| Host.io | https://host.io/ | Get info about a domain | FREE |

    Mobile Apps Endpoints

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| BeVigil OSINT API | https://bevigil.com/osint-api | Provides access to millions of asset footprint data points, including domain intel, cloud services, API information, and third-party assets extracted from millions of mobile apps continuously uploaded and scanned by users on bevigil.com | 50 credits FREE / $50 per 1000 credits |

    Scraping

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| WebScraping.AI | https://webscraping.ai/ | Web scraping API with built-in proxies and JS rendering | FREE |
| ZenRows | https://www.zenrows.com/ | Web scraping API that bypasses anti-bot solutions, with JS rendering and rotating proxies | FREE |

    Whois

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| WhoisFreaks | https://whoisfreaks.com/ | Well-parsed and structured domain WHOIS data for all domain names, registrars, countries, and TLDs since the birth of the internet | $19/5000 requests |
| WhoisXMLApi | https://whois.whoisxmlapi.com | Gathers a variety of domain ownership and registration data points from a comprehensive WHOIS database | 500 requests/month FREE |
| IP2WHOIS | https://www.ip2whois.com/developers-api | Get detailed info about a domain | 500 requests/month FREE |

    GEO IP

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Ipstack | https://ipstack.com | Detect country, region, city, and zip code | FREE |
| Ipgeolocation.io | https://ipgeolocation.io | Provides country, city, state, province, local currency, latitude and longitude, company detail, ISP lookup, language, zip code, country calling code, time zone, current time, sunset and sunrise time, moonset and moonrise | 30,000 requests/month FREE |
| IPInfoDB | https://ipinfodb.com/api | Free geolocation tools and APIs for country, region, city, and time zone lookup by IP address | FREE |
| IP API | https://ip-api.com/ | Free domain/IP geolocation info | FREE |
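The free tier of ip-api.com needs no key at all, which makes it a quick first stop; a minimal sketch (free endpoint is HTTP only and rate limited):

```python
import requests

resp = requests.get("http://ip-api.com/json/8.8.8.8", timeout=30)
geo = resp.json()
print(geo.get("country"), geo.get("regionName"),
      geo.get("city"), geo.get("isp"))
```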

    Wi-fi lookup

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Mylnikov API | https://www.mylnikov.org | Public API implementation of a Wi-Fi geolocation database | FREE |
| Wigle | https://api.wigle.net/ | Get location and other information by SSID | FREE |

    Network

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| PeeringDB | https://www.peeringdb.com/apidocs/ | Database of networks and the go-to location for interconnection data | FREE |
| PacketTotal | https://packettotal.com/api.html | Analyze .pcap files | FREE |

    Finance

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Binlist.net | https://binlist.net/ | Get information about a bank by BIN | FREE |
| FDIC Bank Data API | https://banks.data.fdic.gov/docs/ | Institutions, locations, and history events | FREE |
| Amdoren | https://www.amdoren.com/currency-api/ | Free currency API with over 150 currencies | FREE |
| VATComply.com | https://www.vatcomply.com/documentation | Exchange rates, geolocation, and VAT number validation | FREE |
| Alpaca | https://alpaca.markets/docs/api-documentation/api-v2/market-data/alpaca-data-api-v2/ | Realtime and historical market data on all US equities and ETFs | FREE |
| Swiftcodesapi | https://swiftcodesapi.com | Verify the validity of a bank SWIFT code or IBAN account number | $39/month for 4000 SWIFT lookups |
| IBANAPI | https://ibanapi.com | Validate an IBAN number and get bank account information from it | Freemium / $10 Starter plan |

    Email

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| EVA | https://eva.pingutil.com/ | Measure email deliverability & quality | FREE |
| Mailboxlayer | https://mailboxlayer.com/ | Simple REST API measuring email deliverability & quality | 100 requests FREE, $14.49 for 5000 requests/month |
| EmailCrawlr | https://emailcrawlr.com/ | Get key information about company websites. Find all email addresses associated with a domain. Get social accounts associated with an email. Verify email address deliverability. | 200 requests FREE, $40 for 5000 requests |
| Voila Norbert | https://www.voilanorbert.com/api/ | Find anyone's email address and ensure your emails reach real people | from $49/month |
| Kickbox | https://open.kickbox.com/ | Email verification API | FREE |
| FachaAPI | https://api.facha.dev/ | Check whether an email domain is a temporary (disposable) email domain | FREE |

    Names/Surnames

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Genderize.io | https://genderize.io | Predicts how likely a given name is to be male or female and shows the name's popularity | 1000 names/day FREE |
| Agify.io | https://agify.io | Predicts the age of a person given their name | 1000 names/day FREE |
| Nationalize.io | https://nationalize.io | Predicts the nationality of a person given their name | 1000 names/day FREE |
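All three services share the same minimal `?name=` query interface; a quick sketch (no API key needed for the free tier):

```python
import requests

name = "peter"
for service in ("genderize", "agify", "nationalize"):
    # each service answers a plain GET with a small JSON object
    resp = requests.get(f"https://api.{service}.io",
                        params={"name": name}, timeout=30)
    print(service, resp.json())
```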

    Pastebin/Leaks

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| HaveIBeenPwned | https://haveibeenpwned.com/API/v3 | List pwned accounts (email addresses and usernames) | $3.50/month |
| Psbdmp.ws | https://psbdmp.ws/api | Search Pastebin dumps | $9.95 per 10,000 requests |
| LeakPeek | https://psbdmp.ws/api | Search leak databases | $9.99 per 4 weeks, unlimited access |
| BreachDirectory.com | https://breachdirectory.com/api_documentation | Search a domain in data breach databases | FREE |
| Leak-Lookup | https://leak-lookup.com/api | Search domain, email address, full name, IP address, phone, password, or username in leak databases | 10 requests FREE |
| BreachDirectory.org | https://rapidapi.com/rohan-patra/api/breachdirectory/pricing | Search domain, email address, full name, IP address, phone, password, or username in leak databases (possible to view password hashes) | 50 requests/month FREE |
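For instance, the HaveIBeenPwned v3 API authenticates with an `hibp-api-key` header, requires a user agent, and returns 404 when an account is clean; a minimal sketch (the key is a placeholder):

```python
import requests

headers = {
    "hibp-api-key": "YOUR_HIBP_KEY",  # paid key, placeholder here
    "user-agent": "osint-example",    # HIBP requires a user agent
}
account = "test@example.com"
resp = requests.get(
    f"https://haveibeenpwned.com/api/v3/breachedaccount/{account}",
    headers=headers, params={"truncateResponse": "false"}, timeout=30)
if resp.status_code == 404:
    print("No breach found")
else:
    for breach in resp.json():
        print(breach["Name"], breach.get("BreachDate"))
```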

    Archives

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Wayback Machine API (Memento API, CDX Server API, Wayback Availability JSON API) | https://archive.org/help/wayback_api.php | Retrieve information about Wayback capture data | FREE |
| TROVE (Australian Web Archive) API | https://trove.nla.gov.au/about/create-something/using-api | Retrieve information about TROVE capture data | FREE |
| Archive-it API | https://support.archive-it.org/hc/en-us/articles/115001790023-Access-Archive-It-s-Wayback-index-with-the-CDX-C-API | Retrieve information about Archive-It capture data | FREE |
| UK Web Archive API | https://ukwa-manage.readthedocs.io/en/latest/#api-reference | Retrieve information about UK Web Archive capture data | FREE |
| Arquivo.pt API | https://github.com/arquivo/pwa-technologies/wiki/Arquivo.pt-API | Full-text search and access to preserved web content and related metadata; it is also possible to search by URL, accessing all versions of preserved web content. The API returns a JSON object. | FREE |
| Library Of Congress archive API | https://www.loc.gov/apis/ | Provides structured data about Library of Congress collections | FREE |
| BotsArchive | https://botsarchive.com/docs.html | JSON-formatted details about Telegram bots available in the database | FREE |
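For example, the Wayback Availability JSON API returns the closest archived snapshot for a URL; a minimal sketch:

```python
import requests

resp = requests.get("https://archive.org/wayback/available",
                    params={"url": "example.com", "timestamp": "20200101"},
                    timeout=30)
snap = resp.json().get("archived_snapshots", {}).get("closest")
if snap:
    print(snap["timestamp"], snap["url"])
```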

    Hashes decrypt/encrypt

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| MD5 Decrypt | https://md5decrypt.net/en/Api/ | Search for decrypted hashes in the database | €1.99/day |

    Crypto

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| BTC.com | https://btc.com/btc/adapter?type=api-doc | Get information about addresses and transactions | FREE |
| Blockchair | https://blockchair.com | Explore data stored on 17 blockchains (BTC, ETH, Cardano, Ripple, etc.) | $0.33-$1 per 1000 calls |
| BitcoinAbuse | https://www.bitcoinabuse.com/api-docs | Look up bitcoin addresses that have been linked to criminal activity | FREE |
| BitcoinWhosWho | https://www.bitcoinwhoswho.com/api | Scam reports on bitcoin addresses | FREE |
| Etherscan | https://etherscan.io/apis | Ethereum explorer API | FREE |
| apilayer coinlayer | https://coinlayer.com | Real-time cryptocurrency exchange rates | FREE |
| BlockFacts | https://blockfacts.io/ | Real-time crypto data from multiple exchanges via a single unified API, and much more | FREE |
| Brave NewCoin | https://bravenewcoin.com/developers | Real-time and historic crypto data from more than 200 exchanges | FREE |
| WorldCoinIndex | https://www.worldcoinindex.com/apiservice | Cryptocurrency prices | FREE |
| WalletLabels | https://www.walletlabels.xyz/docs | Labels for 7.5 million Ethereum wallets | FREE |
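As an example from this group, a minimal Etherscan balance query (the API key is a placeholder, the address is just an example):

```python
import requests

resp = requests.get("https://api.etherscan.io/api",
                    params={"module": "account", "action": "balance",
                            "address": "0xde0B295669a9FD93d5F28D9Ec85E40f4cb697BAe",
                            "tag": "latest",
                            "apikey": "YOUR_ETHERSCAN_KEY"},  # placeholder
                    timeout=30)
wei = int(resp.json()["result"])
print(wei / 1e18, "ETH")  # balances are returned in wei
```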

    Malware

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| VirusTotal | https://developers.virustotal.com/reference | Analyze files and URLs | Public API is FREE |
| AbuseIPDB | https://docs.abuseipdb.com/#introduction | IP/domain/URL reputation | FREE |
| AlienVault Open Threat Exchange (OTX) | https://otx.alienvault.com/api | IP/domain/URL reputation | FREE |
| Phisherman | https://phisherman.gg | IP/domain/URL reputation | FREE |
| URLScan.io | https://urlscan.io/about-api/ | Scan and analyze URLs | FREE |
| Web of Trust | https://support.mywot.com/hc/en-us/sections/360004477734-API- | IP/domain/URL reputation | FREE |
| Threat Jammer | https://threatjammer.com/docs/introduction-threat-jammer-user-api | IP/domain/URL reputation | ??? |
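As a sketch of a typical reputation lookup, the VirusTotal v3 API authenticates with an `x-apikey` header (the key below is a placeholder):

```python
import requests

headers = {"x-apikey": "YOUR_VT_KEY"}  # free public API key, placeholder
resp = requests.get("https://www.virustotal.com/api/v3/domains/example.com",
                    headers=headers, timeout=30)
stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
print(stats)  # e.g. counts of harmless/suspicious/malicious verdicts
```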

    Face Search

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Search4faces | https://search4faces.com/api.html | Search for people in social networks by facial image | $21 per 1000 requests |

Face Detection

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Face++ | https://www.faceplusplus.com/face-detection/ | Detect and locate human faces within an image and return high-precision face bounding boxes; Face⁺⁺ also allows you to store metadata of each detected face for future use | from 0.03 per call |
| BetaFace | https://www.betafaceapi.com/wpa/ | Scans uploaded image files or image URLs, finds faces, and analyzes them. The API also provides verification (face comparison) and identification (face search) services, and can maintain multiple user-defined recognition databases (namespaces). | 50 images/day FREE, from €0.15 per request |

Reverse Image Search

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Google Reverse Image Search API | https://github.com/SOME-1HING/google-reverse-image-api/ | A simple API built using Node.js and Express.js that performs a Google reverse image search for a given image URL | FREE (unofficial) |
| TinEyeAPI | https://services.tineye.com/TinEyeAPI | Verify images, moderate user-generated content, track images and brands, check copyright compliance, deploy fraud detection solutions, identify stock photos, confirm the uniqueness of an image | from $200/5000 searches |
| Bing Images Search API | https://www.microsoft.com/en-us/bing/apis/bing-image-search-api | With Bing Image Search API v7, help users scour the web for images. Results include thumbnails, full image URLs, publishing website info, image metadata, and more. | 1,000 requests/month FREE |
| MRISA | https://github.com/vivithemage/mrisa | MRISA (Meta Reverse Image Search API) is a RESTful API which takes an image URL, does a reverse Google image search, and returns a JSON array with the search results | FREE (unofficial) |
| PicImageSearch | https://github.com/kitUIN/PicImageSearch | Aggregator for different reverse image search APIs | FREE (unofficial) |

AI Geolocation

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Geospy | https://api.geospy.ai/ | Estimate the location of an uploaded photo | Access by request |
| Picarta | https://picarta.ai/api | Estimate the location of an uploaded photo | 100 requests/day FREE |

    Social Media and Messengers

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Twitch | https://dev.twitch.tv/docs/v5/reference | | |
| YouTube Data API | https://developers.google.com/youtube/v3 | | |
| Reddit | https://www.reddit.com/dev/api/ | | |
| Vkontakte | https://vk.com/dev/methods | | |
| Twitter API | https://developer.twitter.com/en | | |
| Linkedin API | https://docs.microsoft.com/en-us/linkedin/ | | |
| All Facebook and Instagram APIs | https://developers.facebook.com/docs/ | | |
| Whatsapp Business API | https://www.whatsapp.com/business/api | | |
| Telegram and Telegram Bot API | https://core.telegram.org | | |
| Weibo API | https://open.weibo.com/wiki/APIζ–‡ζ‘£/en | | |
| XING | https://dev.xing.com/partners/job_integration/api_docs | | |
| Viber | https://developers.viber.com/docs/api/rest-bot-api/ | | |
| Discord | https://discord.com/developers/docs | | |
| Odnoklassniki | https://ok.ru/apiok | | |
| Blogger | https://developers.google.com/blogger/ | The Blogger API allows client applications to view and update Blogger content | FREE |
| Disqus | https://disqus.com/api/docs/auth/ | Communicate with Disqus data | FREE |
| Foursquare | https://developer.foursquare.com/ | Interact with Foursquare users and places (geolocation-based check-ins, photos, tips, events, etc.) | FREE |
| HackerNews | https://github.com/HackerNews/API | Social news for CS and entrepreneurship | FREE |
| Kakao | https://developers.kakao.com/ | Kakao Login, Share on KakaoTalk, social plugins, and more | FREE |
| Line | https://developers.line.biz/ | Line Login, Share on Line, social plugins, and more | FREE |
| TikTok | https://developers.tiktok.com/doc/login-kit-web | Fetch user info and a user's video posts on the TikTok platform | FREE |
| Tumblr | https://www.tumblr.com/docs/en/api/v2 | Read and write Tumblr data | FREE |

    UNOFFICIAL APIs

⚠️ WARNING: Use with caution! Accounts may be permanently blocked for using unofficial APIs.

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| TikTok | https://github.com/davidteather/TikTok-Api | Unofficial TikTok API wrapper in Python | FREE |
| Google Trends | https://github.com/suryasev/unofficial-google-trends-api | Unofficial Google Trends API | FREE |
| YouTube Music | https://github.com/sigma67/ytmusicapi | Unofficial API for YouTube Music | FREE |
| Duolingo | https://github.com/KartikTalwar/Duolingo | Unofficial Duolingo API (can gather info about users) | FREE |
| Steam | https://github.com/smiley/steamapi | An unofficial object-oriented Python library for accessing the Steam Web API | FREE |
| Instagram | https://github.com/ping/instagram_private_api | Instagram private API | FREE |
| Discord | https://github.com/discordjs/discord.js | JavaScript library for interacting with the Discord API | FREE |
| Zhihu | https://github.com/syaning/zhihu-api | Unofficial API for Zhihu | FREE |
| Quora | https://github.com/csu/quora-api | Unofficial API for Quora | FREE |
| DnsDumpster | https://github.com/PaulSec/API-dnsdumpster.com | Unofficial Python API for DnsDumpster | FREE |
| PornHub | https://github.com/sskender/pornhub-api | Unofficial API for PornHub in Python | FREE |
| Skype | https://github.com/ShyykoSerhiy/skyweb | Unofficial Skype API for Node.js via the 'Skype (HTTP)' protocol | FREE |
| Google Search | https://github.com/aviaryan/python-gsearch | Unofficial Google Search API for Python with no external dependencies | FREE |
| Airbnb | https://github.com/nderkach/airbnb-python | Python wrapper around the Airbnb API (unofficial) | FREE |
| Medium | https://github.com/enginebai/PyMedium | Unofficial Medium Python Flask API and SDK | FREE |
| Facebook | https://github.com/davidyen1124/Facebot | Powerful unofficial Facebook API | FREE |
| Linkedin | https://github.com/tomquirk/linkedin-api | Unofficial Linkedin API for Python | FREE |
| Y2mate | https://github.com/Simatwa/y2mate-api | Unofficial Y2mate API for Python | FREE |
| Livescore | https://github.com/Simatwa/livescore-api | Unofficial Livescore API for Python | FREE |

    Search Engines

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Google Custom Search JSON API | https://developers.google.com/custom-search/v1/overview | Search in Google | 100 requests FREE |
| Serpstack | https://serpstack.com/ | Google search results as JSON | FREE |
| Serpapi | https://serpapi.com | Google, Baidu, Yandex, Yahoo, DuckDuckGo, Bing, and many other search results | $50/5000 searches/month |
| Bing Web Search API | https://www.microsoft.com/en-us/bing/apis/bing-web-search-api | Search in Bing (plus instant answers and location) | 1000 transactions/month FREE |
| WolframAlpha API | https://products.wolframalpha.com/api/pricing/ | Short answers, conversations, calculators, and much more | from $25 per 1000 queries |
| DuckDuckGo Instant Answers API | https://duckduckgo.com/api | An API for some Instant Answers, not for full search results | FREE |
| Memex Marginalia | https://memex.marginalia.nu/projects/edge/api.gmi | An API for a new privacy-focused search engine | FREE |
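For instance, the DuckDuckGo Instant Answers API can be queried without a key; a minimal sketch:

```python
import requests

resp = requests.get("https://api.duckduckgo.com/",
                    params={"q": "OSINT", "format": "json", "no_html": 1},
                    timeout=30)
data = resp.json()
print(data.get("AbstractText") or data.get("Answer") or "no instant answer")
```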

    News analyze

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| MediaStack | https://mediastack.com/ | News article search results in JSON | 500 requests/month FREE |

    Darknet

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Darksearch.io | https://darksearch.io/apidoc | Search websites in the .onion zone | FREE |
| Onion Lookup | https://onion.ail-project.org/ | onion-lookup is a service for checking the existence of Tor hidden services and retrieving their associated metadata; it relies on a private AIL instance to obtain the metadata | FREE |

    Torrents/file sharing

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Jackett | https://github.com/Jackett/Jackett | API to automate searching across different torrent trackers | FREE |
| Torrents API PY | https://github.com/Jackett/Jackett | Unofficial API for 1337x, Piratebay, Nyaasi, Torlock, Torrent Galaxy, Zooqle, Kickass, Bitsearch, MagnetDL, Libgen, YTS, Limetorrent, TorrentFunk, Glodls, Torre | FREE |
| Torrent Search API | https://github.com/Jackett/Jackett | API for a torrent search engine with Extratorrents, Piratebay, and ISOhunt | 500 queries/day FREE |
| Torrent search api | https://github.com/JimmyLaurent/torrent-search-api | Yet another Node torrent scraper (supports iptorrents, torrentleech, torrent9, torrentz2, 1337x, thepiratebay, Yggtorrent, TorrentProject, Eztv, Yts, LimeTorrents) | FREE |
| Torrentinim | https://github.com/sergiotapia/torrentinim | Very low memory-footprint, self-hosted, API-only torrent search engine. Sonarr + Radarr compatible; native support for Linux, Mac, and Windows. | FREE |

    Vulnerabilities

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| National Vulnerability Database CVE Search API | https://nvd.nist.gov/developers/vulnerabilities | Get basic information about a CVE and its history | FREE |
| OpenCVE API | https://docs.opencve.io/api/cve/ | Get basic information about a CVE | FREE |
| CVEDetails API | https://www.cvedetails.com/documentation/apis | Get basic information about a CVE | partly FREE (?) |
| CVESearch API | https://docs.cvesearch.com/ | Get basic information about a CVE | by request |
| KEVin API | https://kevin.gtfkd.com/ | API for accessing CISA's Known Exploited Vulnerabilities Catalog (KEV) and CVE data | FREE |
| Vulners.com API | https://vulners.com | Get basic information about a CVE | FREE for personal use |
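As a sketch of how such lookups work, the NVD CVE API (version 2.0) can be queried by CVE ID without authentication:

```python
import requests

resp = requests.get("https://services.nvd.nist.gov/rest/json/cves/2.0",
                    params={"cveId": "CVE-2021-44228"}, timeout=30)
vuln = resp.json()["vulnerabilities"][0]["cve"]
print(vuln["id"], vuln["descriptions"][0]["value"][:120])
```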

    Flights

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Aviation Stack | https://aviationstack.com | Get information about flights, aircraft, and airlines | FREE |
| OpenSky Network | https://opensky-network.org/apidoc/index.html | Free real-time ADS-B aviation data | FREE |
| AviationAPI | https://docs.aviationapi.com/ | FAA aeronautical charts and publications, airport information, and airport weather | FREE |
| FachaAPI | https://api.facha.dev | Aircraft details and live positioning API | FREE |

    Webcams

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Windy Webcams API | https://api.windy.com/webcams/docs | Get a list of available webcams for a country, city, or geographic coordinates | FREE with limits, or €9,990 without limits |

Regex

| Name | Link | Description | Price |
| --- | --- | --- | --- |
| Autoregex | https://autoregex.notion.site/AutoRegex-API-Documentation-97256bad2c114a6db0c5822860214d3a | Convert an English phrase to a regular expression | from $3.49/month |

    API testing tools

| Name | Link |
| --- | --- |
| API Guessr (detect an API by auth key or token) | https://api-guesser.netlify.app/ |
| REQBIN Online REST & SOAP API Testing Tool | https://reqbin.com |
| ExtendsClass Online REST Client | https://extendsclass.com/rest-client-online.html |
| Codebeautify.org Online API Test | https://codebeautify.org/api-test |
| SyncWith Google Sheets add-on; link more than 1000 APIs with a spreadsheet | https://workspace.google.com/u/0/marketplace/app/syncwith_crypto_binance_coingecko_airbox/449644239211?hl=ru&pann=sheets_addon_widget |
| Talend API Tester Google Chrome extension | https://workspace.google.com/u/0/marketplace/app/syncwith_crypto_binance_coingecko_airbox/449644239211?hl=ru&pann=sheets_addon_widget |
| Michael Bazzell's API search tools | https://inteltechniques.com/tools/API.html |

    Curl converters (tools that help to write code using API queries)

| Name | Link |
| --- | --- |
| Convert curl commands to Python, JavaScript, PHP, R, Go, C#, Ruby, Rust, Elixir, Java, MATLAB, Dart, CFML, Ansible URI, or JSON | https://curlconverter.com |
| Curl-to-PHP: instantly convert curl commands to PHP code | https://incarnate.github.io/curl-to-php/ |
| Curl to PHP online (Codebeautify) | https://codebeautify.org/curl-to-php-online |
| Curl to JavaScript fetch | https://kigiri.github.io/fetch/ |
| Curl to JavaScript fetch (Scrapingbee) | https://www.scrapingbee.com/curl-converter/javascript-fetch/ |
| Curl to C# converter | https://curl.olsh.me |
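For instance, a converter such as curlconverter.com turns a curl command into equivalent Python `requests` code; a hand-written illustration (the endpoint and token are hypothetical):

```python
# curl -H "Authorization: Bearer TOKEN" "https://api.example.com/v1/items?limit=10"
# converts to roughly the following Python:
import requests

headers = {"Authorization": "Bearer TOKEN"}  # hypothetical token
params = {"limit": "10"}

resp = requests.get("https://api.example.com/v1/items",  # hypothetical endpoint
                    headers=headers, params=params)
print(resp.status_code, resp.json())
```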

    Create your own API

| Name | Link |
| --- | --- |
| Sheety: create an API from a Google Sheet | https://sheety.co/ |
| Postman: platform for creating your own API | https://www.postman.com |
| Retool: REST API generator | https://retool.com/api-generator/ |
| Beeceptor: REST API mocking and intercepting in seconds (no coding) | https://beeceptor.com |

    Distribute your own API

| Name | Link |
| --- | --- |
| RapidAPI: market your API to millions of developers | https://rapidapi.com/solution/api-provider/ |
| Apilayer: API marketplace | https://apilayer.com |

    API Keys Info

| Name | Link | Description |
| --- | --- | --- |
| Keyhacks | https://github.com/streaak/keyhacks | A repository showing quick ways to check whether API keys leaked via a bug bounty program are still valid |
| All about APIKey | https://github.com/daffainfo/all-about-apikey | Detailed information about API keys / OAuth tokens for different services (description, request, response, regex, example) |
| API Guessr | https://api-guesser.netlify.app/ | Enter an API key and find out which service it belongs to |

    API directories

    If you don't find what you need, try searching these directories.

| Name | Link |
| --- | --- |
| APIDOG ApiHub | https://apidog.com/apihub/ |
| Rapid APIs collection | https://rapidapi.com/collections |
| API Ninjas | https://api-ninjas.com/api |
| APIs Guru | https://apis.guru/ |
| APIs List | https://apislist.com/ |
| API Context Directory | https://apicontext.com/api-directory/ |
| Any API | https://any-api.com/ |
| Public APIs Github repo | https://github.com/public-apis/public-apis |

How to learn to work with REST APIs?

If you don't know how to work with REST APIs, I recommend checking out the Netlas API guide I wrote for Netlas.io.

    Netlas Cookbook

It explains briefly and accessibly how to automate requests in different programming languages (with a focus on Python and Bash) and how to process the resulting JSON data.





Maryam - Open-source Intelligence (OSINT) Framework



    OWASP Maryam is a modular open-source framework based on OSINT and data gathering. It is designed to provide a robust environment to harvest data from open sources and search engines quickly and thoroughly.


    Installation

    Supported OS

    • Linux
    • FreeBSD
    • Darwin
    • OSX
Install from PyPI:

$ pip install maryam

    Alternatively, you can install the latest version with the following command (Recommended):

    pip install git+https://github.com/saeeddhqan/maryam.git

    Usage

# Using dns_search. --max means use all resources; --api shows the results as JSON.
# -t sets the number of threads (multi-threading).
maryam -e dns_search -d ibm.com -t 5 --max --api --form
# Using youtube. -q sets the query.
maryam -e youtube -q "<QUERY>"
maryam -e google -q "<QUERY>"
maryam -e dnsbrute -d domain.tld
# Show framework modules
maryam -e show modules
# Set framework options
maryam -e set proxy ..
maryam -e set agent ..
maryam -e set timeout ..
# Run the web API
maryam -e web api 127.0.0.1 1313

    Contribution

Here is a starting point: the Development Guide. You can add a new search engine to the util classes, or use the existing search engines to write a new module. The best way to learn to write a new module is to study the current modules.

    Roadmap

    • Write a language model based search

    Links

    OWASP

    Wiki

    Install

    Modules Guide

    Development Guide

    To report bugs, requests, or any other issues please create an issue.



    TruffleHog Explorer - A User-Friendly Web-Based Tool To Visualize And Analyze Data Extracted Using TruffleHog



Welcome to TruffleHog Explorer, a user-friendly web-based tool to visualize and analyze data extracted using TruffleHog. TruffleHog is one of the most powerful open-source tools for secrets discovery, classification, validation, and analysis. In this context, a secret refers to a credential a machine uses to authenticate itself to another machine. This includes API keys, database passwords, private encryption keys, and more.

    With an improved UI/UX, powerful filtering options, and export capabilities, this tool helps security professionals efficiently review potential secrets and credentials found in their repositories.

    ⚠️ This dashboard has been tested only with GitHub TruffleHog JSON outputs. Expect updates soon to support additional formats and platforms.

You can use the online version here: TruffleHog Explorer


    πŸš€ Features

    • Intuitive UI/UX: Beautiful pastel theme with smooth navigation.
    • Powerful Filtering:
    • Filter findings by repository, detector type, and uploaded file.
    • Flexible date range selection with a calendar picker.
    • Verification status categorization for effective review.
    • Advanced search capabilities for faster identification.
    • Batch Operations:
    • Verify or reject multiple findings with a single click.
    • Toggle visibility of rejected results for a streamlined view.
    • Bulk processing to manage large datasets efficiently.
    • Export Capabilities:
    • Export verified secrets or filtered findings effortlessly.
    • Save and load session backups for continuity.
    • Generate reports in multiple formats (JSON, CSV).
    • Dynamic Sorting:
    • Sort results by repository, date, or verification status.
    • Customizable sorting preferences for a personalized experience.

    πŸ“₯ Installation & Usage

    1. Clone the Repository

    $ git clone https://github.com/yourusername/trufflehog-explorer.git
    $ cd trufflehog-explorer

    2. Open the index.html

    Simply open the index.html file in your preferred web browser.

    $ open index.html

    πŸ“‚ How to Use

    1. Upload TruffleHog JSON Findings:
    2. Click on the "Load Data" section and select your .json files from TruffleHog output.
    3. Multiple files are supported.
    4. Apply Filters:
    5. Choose filters such as repository, detector type, and verification status.
    6. Utilize the date range picker to narrow down findings.
    7. Leverage the search function to locate specific findings quickly.
    8. Review Findings:
    9. Click on a finding to expand and view its details.
    10. Use the action buttons to verify or reject findings.
    11. Add comments and annotations for better tracking.
    12. Export Results:
    13. Export verified or filtered findings for reporting.
    14. Save session data for future review and analysis.
    15. Save Your Progress:
    16. Save your session and resume later without losing any progress.
    17. Automatic backup feature to prevent data loss.

    Happy Securing! πŸ”’



    PANO - Advanced OSINT Investigation Platform Combining Graph Visualization, Timeline Analysis, And AI Assistance To Uncover Hidden Connections In Data



    PANO is a powerful OSINT investigation platform that combines graph visualization, timeline analysis, and AI-powered tools to help you uncover hidden connections and patterns in your data.

    Getting Started

1. Clone the repository:

git clone https://github.com/ALW1EZ/PANO.git
cd PANO

2. Run the application:
   • Linux: ./start_pano.sh
   • Windows: start_pano.bat

The startup script will automatically:
• Check for updates
• Set up the Python environment
• Install dependencies
• Launch PANO

To use the Email Lookup transform, you need to log in with GHunt first. After starting PANO via the starter scripts:

1. Activate the venv manually:
   • Linux: source venv/bin/activate
   • Windows: call venv\Scripts\activate
2. See how to log in here

    πŸ’‘ Quick Start Guide

    1. Create Investigation: Start a new investigation or load an existing one
    2. Add Entities: Drag entities from the sidebar onto the graph
    3. Discover Connections: Use transforms to automatically find relationships
    4. Analyze: Use timeline and map views to understand patterns
    5. Save: Export your investigation for later use

    πŸ” Features

    πŸ•ΈοΈ Core Functionality

    • Interactive Graph Visualization
    • Drag-and-drop entity creation
    • Multiple layout algorithms (Circular, Hierarchical, Radial, Force-Directed)
    • Dynamic relationship mapping
    • Visual node and edge styling

    • Timeline Analysis

    • Chronological event visualization
    • Interactive timeline navigation
    • Event filtering and grouping
    • Temporal relationship analysis

    • Map Integration

    • Geographic data visualization
    • Location-based analysis
    • Interactive mapping features
    • Coordinate plotting and tracking

    🎯 Entity Management

    • Supported Entity Types
    • πŸ“§ Email addresses
    • πŸ‘€ Usernames
    • 🌐 Websites
    • πŸ–ΌοΈ Images
    • πŸ“ Locations
    • ⏰ Events
    • πŸ“ Text content
    • πŸ”§ Custom entity types

    πŸ”„ Transform System

    • Email Analysis
    • Google account investigation
    • Calendar event extraction
    • Location history analysis
    • Connected services discovery

    • Username Analysis

    • Cross-platform username search
    • Social media profile discovery
    • Platform correlation
    • Web presence analysis

    • Image Analysis

    • Reverse image search
    • Visual content analysis
    • Metadata extraction
    • Related image discovery

    πŸ€– AI Integration

    • PANAI
    • Natural language investigation assistant
    • Automated entity extraction and relationship mapping
    • Pattern recognition and anomaly detection
    • Multi-language support
    • Context-aware suggestions
    • Timeline and graph analysis

    🧩 Core Components

    πŸ“¦ Entities

    Entities are the fundamental building blocks of PANO. They represent distinct pieces of information that can be connected and analyzed:

    • Built-in Types
    • πŸ“§ Email: Email addresses with service detection
    • πŸ‘€ Username: Social media and platform usernames
    • 🌐 Website: Web pages with metadata
    • πŸ–ΌοΈ Image: Images with EXIF and analysis
    • πŸ“ Location: Geographic coordinates and addresses
    • ⏰ Event: Time-based occurrences
    • πŸ“ Text: Generic text content

    • Properties System

    • Type-safe property validation
    • Automatic property getters
    • Dynamic property updates
    • Custom property types
    • Metadata support

    ⚑ Transforms

    Transforms are automated operations that process entities to discover new information and relationships:

    • Operation Types
    • πŸ” Discovery: Find new entities from existing ones
    • πŸ”— Correlation: Connect related entities
    • πŸ“Š Analysis: Extract insights from entity data
    • 🌐 OSINT: Gather open-source intelligence
    • πŸ”„ Enrichment: Add data to existing entities

    • Features

    • Async operation support
    • Progress tracking
    • Error handling
    • Rate limiting
    • Result validation

    πŸ› οΈ Helpers

    Helpers are specialized tools with dedicated UIs for specific investigation tasks:

    • Available Helpers
    • πŸ” Cross-Examination: Analyze statements and testimonies
    • πŸ‘€ Portrait Creator: Generate facial composites
    • πŸ“Έ Media Analyzer: Advanced image processing and analysis
    • πŸ” Base Searcher: Search near places of interest
    • πŸ”„ Translator: Translate text between languages

    • Helper Features

    • Custom Qt interfaces
    • Real-time updates
    • Graph integration
    • Data visualization
    • Export capabilities

    πŸ‘₯ Contributing

    We welcome contributions! To contribute to PANO:

1. Fork the repository at https://github.com/ALW1EZ/PANO/
2. Make your changes in your fork
3. Test your changes thoroughly
4. Create a Pull Request to our main branch
5. In your PR description, include:
   • What the changes do
   • Why you made these changes
   • Any testing you've done
   • Screenshots if applicable

    Note: We use a single main branch for development. All pull requests should be made directly to main.

    πŸ“– Development Guide

### System Requirements

• Operating System: Windows or Linux
• Python 3.11+
• PySide6 for GUI
• Internet connection for online features

### Custom Entities

Entities are the core data structures in PANO. Each entity represents a piece of information with specific properties and behaviors. To create a custom entity:

1. Create a new file in the `entities` folder (e.g., `entities/phone_number.py`)
2. Implement your entity class:
from dataclasses import dataclass
from typing import ClassVar, Dict, Any
from .base import Entity

@dataclass
class PhoneNumber(Entity):
    name: ClassVar[str] = "Phone Number"
    description: ClassVar[str] = "A phone number entity with country code and validation"

    def init_properties(self):
        """Initialize phone number properties"""
        self.setup_properties({
            "number": str,
            "country_code": str,
            "carrier": str,
            "type": str,  # mobile, landline, etc.
            "verified": bool
        })

    def update_label(self):
        """Update the display label"""
        self.label = self.format_label(["country_code", "number"])
### Custom Transforms

Transforms are operations that process entities and generate new insights or relationships. To create a custom transform:

1. Create a new file in the `transforms` folder (e.g., `transforms/phone_lookup.py`)
2. Implement your transform class:
from dataclasses import dataclass
from typing import ClassVar, List
from .base import Transform
from entities.base import Entity
from entities.phone_number import PhoneNumber
from entities.location import Location
from ui.managers.status_manager import StatusManager

@dataclass
class PhoneLookup(Transform):
    name: ClassVar[str] = "Phone Number Lookup"
    description: ClassVar[str] = "Lookup phone number details and location"
    input_types: ClassVar[List[str]] = ["PhoneNumber"]
    output_types: ClassVar[List[str]] = ["Location"]

    async def run(self, entity: PhoneNumber, graph) -> List[Entity]:
        if not isinstance(entity, PhoneNumber):
            return []

        status = StatusManager.get()
        operation_id = status.start_loading("Phone Lookup")

        try:
            # Your phone number lookup logic here
            # Example: query an API for phone number details
            location = Location(properties={
                "country": "Example Country",
                "region": "Example Region",
                "carrier": "Example Carrier",
                "source": "PhoneLookup transform"
            })

            return [location]

        except Exception as e:
            status.set_text(f"Error during phone lookup: {str(e)}")
            return []

        finally:
            status.stop_loading(operation_id)
### Custom Helpers

Helpers are specialized tools that provide additional investigation capabilities through a dedicated UI interface. To create a custom helper:

1. Create a new file in the `helpers` folder (e.g., `helpers/data_analyzer.py`)
2. Implement your helper class:
from PySide6.QtWidgets import (
    QWidget, QVBoxLayout, QHBoxLayout, QPushButton,
    QTextEdit, QLabel, QComboBox
)
from .base import BaseHelper
from qasync import asyncSlot

class DummyHelper(BaseHelper):
    """A dummy helper for testing"""

    name = "Dummy Helper"
    description = "A dummy helper for testing"

    def setup_ui(self):
        """Initialize the helper's user interface"""
        # Create input text area
        self.input_label = QLabel("Input:")
        self.input_text = QTextEdit()
        self.input_text.setPlaceholderText("Enter text to process...")
        self.input_text.setMinimumHeight(100)

        # Create operation selector
        operation_layout = QHBoxLayout()
        self.operation_label = QLabel("Operation:")
        self.operation_combo = QComboBox()
        self.operation_combo.addItems(["Uppercase", "Lowercase", "Title Case"])
        operation_layout.addWidget(self.operation_label)
        operation_layout.addWidget(self.operation_combo)

        # Create process button
        self.process_btn = QPushButton("Process")
        self.process_btn.clicked.connect(self.process_text)

        # Create output text area
        self.output_label = QLabel("Output:")
        self.output_text = QTextEdit()
        self.output_text.setReadOnly(True)
        self.output_text.setMinimumHeight(100)

        # Add widgets to main layout
        self.main_layout.addWidget(self.input_label)
        self.main_layout.addWidget(self.input_text)
        self.main_layout.addLayout(operation_layout)
        self.main_layout.addWidget(self.process_btn)
        self.main_layout.addWidget(self.output_label)
        self.main_layout.addWidget(self.output_text)

        # Set dialog size
        self.resize(400, 500)

    @asyncSlot()
    async def process_text(self):
        """Process the input text based on selected operation"""
        text = self.input_text.toPlainText()
        operation = self.operation_combo.currentText()

        if operation == "Uppercase":
            result = text.upper()
        elif operation == "Lowercase":
            result = text.lower()
        else:  # Title Case
            result = text.title()

        self.output_text.setPlainText(result)

    πŸ“„ License

    This project is licensed under the Creative Commons Attribution-NonCommercial (CC BY-NC) License.

You are free to:
• βœ… Share: Copy and redistribute the material
• βœ… Adapt: Remix, transform, and build upon the material

Under these terms:
• ℹ️ Attribution: You must give appropriate credit
• 🚫 NonCommercial: No commercial use
• πŸ”“ No additional restrictions

    πŸ™ Acknowledgments

    Special thanks to all library authors and contributors who made this project possible.

    πŸ‘¨β€πŸ’» Author

    Created by ALW1EZ with AI ❀️



    Telegram-Checker - A Python Tool For Checking Telegram Accounts Via Phone Numbers Or Usernames



    Enhanced version of bellingcat's Telegram Phone Checker!

A Python script to check Telegram accounts using phone numbers or usernames.


    ✨ Features

    • πŸ” Check single or multiple phone numbers and usernames
    • πŸ“ Import numbers from text file
    • πŸ“Έ Auto-download profile pictures
    • πŸ’Ύ Save results as JSON
    • πŸ” Secure credential storage
    • πŸ“Š Detailed user information

    πŸš€ Installation

1. Clone the repository:

git clone https://github.com/unnohwn/telegram-checker.git
cd telegram-checker

2. Install required packages:

pip install -r requirements.txt

    πŸ“¦ Requirements

    Contents of requirements.txt:

    telethon
    rich
    click
    python-dotenv

    Or install packages individually:

    pip install telethon rich click python-dotenv

    βš™οΈ Configuration

The first time you run the script, you'll need:
• Telegram API credentials (get them from https://my.telegram.org/apps)
• Your Telegram phone number, including the country code with +
• A verification code (sent to your Telegram)

    πŸ’» Usage

    Run the script:

    python telegram_checker.py

Choose from the options:

1. Check phone numbers from input
2. Check phone numbers from file
3. Check usernames from input
4. Check usernames from file
5. Clear saved credentials
6. Exit
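Under the hood, checkers like this typically import the number as a temporary contact via Telethon and inspect the returned users. A minimal sketch of that approach (not this tool's exact code; the credentials and number are placeholders):

```python
from telethon.sync import TelegramClient
from telethon.tl.functions.contacts import ImportContactsRequest, DeleteContactsRequest
from telethon.tl.types import InputPhoneContact

api_id = 123456               # placeholder; use your own API credentials
api_hash = "your_api_hash"    # placeholder

with TelegramClient("session", api_id, api_hash) as client:
    contact = InputPhoneContact(client_id=0, phone="+15551234567",
                                first_name="probe", last_name="")
    result = client(ImportContactsRequest([contact]))
    if result.users:
        user = result.users[0]
        print(user.id, user.username, user.first_name)
        client(DeleteContactsRequest(id=[user]))  # remove the temporary contact
    else:
        print("No Telegram account found for that number")
```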

    πŸ“‚ Output

Results are saved in:
• results/ - JSON files with detailed information
• profile_photos/ - Downloaded profile pictures

    ⚠️ Note

    This tool is for educational purposes only. Please respect Telegram's terms of service and user privacy.

    πŸ“„ License

    MIT License



    Telegram-Scraper - A Powerful Python Script That Allows You To Scrape Messages And Media From Telegram Channels Using The Telethon Library



    A powerful Python script that allows you to scrape messages and media from Telegram channels using the Telethon library. Features include real-time continuous scraping, media downloading, and data export capabilities.


    Features πŸš€

    • Scrape messages from multiple Telegram channels
    • Download media files (photos, documents)
    • Real-time continuous scraping
    • Export data to JSON and CSV formats
    • SQLite database storage
    • Resume capability (saves progress)
    • Media reprocessing for failed downloads
    • Progress tracking
    • Interactive menu interface

    Prerequisites πŸ“‹

    Before running the script, you'll need:

    • Python 3.7 or higher
    • Telegram account
    • API credentials from Telegram

    Required Python packages

    pip install -r requirements.txt

    Contents of requirements.txt:

    telethon
    aiohttp
    asyncio

    Getting Telegram API Credentials πŸ”‘

    1. Visit https://my.telegram.org/auth
    2. Log in with your phone number
    3. Click on "API development tools"
    4. Fill in the form:
    5. App title: Your app name
    6. Short name: Your app short name
    7. Platform: Can be left as "Desktop"
    8. Description: Brief description of your app
    9. Click "Create application"
    10. You'll receive:
    11. api_id: A number
    12. api_hash: A string of letters and numbers

    Keep these credentials safe, you'll need them to run the script!

    Setup and Running πŸ”§

1. Clone the repository:

git clone https://github.com/unnohwn/telegram-scraper.git
cd telegram-scraper

2. Install requirements:

pip install -r requirements.txt

3. Run the script:

python telegram-scraper.py
4. On first run, you'll be prompted to enter:
   • Your API ID
   • Your API Hash
   • Your phone number (with country code)
   • Your phone number again or a bot token; use the phone number option when prompted the second time
   • Verification code (sent to your Telegram)

    Initial Scraping Behavior πŸ•’

    When scraping a channel for the first time, please note:

• The script will attempt to retrieve the entire channel history, starting from the oldest messages
• Initial scraping can take several minutes or even hours, depending on:
  • The total number of messages in the channel
  • Whether media downloading is enabled
  • The size and number of media files
  • Your internet connection speed
  • Telegram's rate limiting
• The script uses pagination and maintains state, so if interrupted, it can resume from where it left off (see the sketch after this list)
• Progress percentage is displayed in real-time to track the scraping status
• Messages are stored in the database as they are scraped, so you can start analyzing available data even before the scraping is complete
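A minimal Telethon sketch of this resume-friendly pagination (not the tool's exact code; credentials and channel name are placeholders):

```python
from telethon.sync import TelegramClient

api_id = 123456               # placeholder credentials
api_hash = "your_api_hash"

last_seen_id = 0  # persist this value (e.g. in SQLite) to resume an interrupted run

with TelegramClient("session", api_id, api_hash) as client:
    # reverse=True walks from the oldest message forward;
    # min_id skips everything already stored, giving resume capability
    for message in client.iter_messages("channelname", reverse=True,
                                        min_id=last_seen_id):
        print(message.id, message.date, (message.text or "")[:80])
        last_seen_id = message.id
```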

    Usage πŸ“

    The script provides an interactive menu with the following options:

    • [A] Add new channel
    • Enter the channel ID or channelname
    • [R] Remove channel
    • Remove a channel from scraping list
    • [S] Scrape all channels
    • One-time scraping of all configured channels
    • [M] Toggle media scraping
    • Enable/disable downloading of media files
    • [C] Continuous scraping
    • Real-time monitoring of channels for new messages
    • [E] Export data
    • Export to JSON and CSV formats
    • [V] View saved channels
    • List all saved channels
    • [L] List account channels
List all channels with IDs for the account
    • [Q] Quit

    Channel IDs πŸ“’

You can use either:
• Channel username (e.g., channelname)
• Channel ID (e.g., -1001234567890)

    Data Storage πŸ’Ύ

    Database Structure

Data is stored in SQLite databases, one per channel:

• Location: ./channelname/channelname.db
• Table: messages
  • id: Primary key
  • message_id: Telegram message ID
  • date: Message timestamp
  • sender_id: Sender's Telegram ID
  • first_name: Sender's first name
  • last_name: Sender's last name
  • username: Sender's username
  • message: Message text
  • media_type: Type of media (if any)
  • media_path: Local path to downloaded media
  • reply_to: ID of replied message (if any)

    Media Storage πŸ“

Media files are stored in:
• Location: ./channelname/media/
• Files are named using the message ID or original filename

    Exported Data πŸ“Š

Data can be exported in two formats:

1. CSV: ./channelname/channelname.csv
   • Human-readable spreadsheet format
   • Easy to import into Excel/Google Sheets
2. JSON: ./channelname/channelname.json
   • Structured data format
   • Ideal for programmatic processing

    Features in Detail πŸ”

    Continuous Scraping

The continuous scraping feature ([C] option) allows you to:
• Monitor channels in real-time
• Automatically download new messages
• Download media as it's posted
• Run indefinitely until interrupted (Ctrl+C)
• Maintain state between runs

    Media Handling

The script can download:
• Photos
• Documents
• Other media types supported by Telegram

It automatically retries failed downloads and skips existing files to avoid duplicates.

    Error Handling πŸ› οΈ

The script includes:
• Automatic retry mechanism for failed media downloads
• State preservation in case of interruption
• Flood control compliance
• Error logging for failed operations

    Limitations ⚠️

    • Respects Telegram's rate limits
    • Can only access public channels or channels you're a member of
    • Media download size limits apply as per Telegram's restrictions

    Contributing 🀝

    Contributions are welcome! Please feel free to submit a Pull Request.

    License πŸ“„

    This project is licensed under the MIT License - see the LICENSE file for details.

    Disclaimer βš–οΈ

This tool is for educational purposes only. Make sure to:
• Respect Telegram's Terms of Service
• Obtain necessary permissions before scraping
• Use responsibly and ethically
• Comply with data protection regulations



    Telegram-Story-Scraper - A Python Script That Allows You To Automatically Scrape And Download Stories From Your Telegram Friends



    A Python script that allows you to automatically scrape and download stories from your Telegram friends using the Telethon library. The script continuously monitors and saves both photos and videos from stories, along with their metadata.


    Important Note About Story Access ⚠️

Due to Telegram API restrictions, this script can only access stories from:
• Users you have added to your friend list
• Users whose privacy settings allow you to view their stories

    This is a limitation of Telegram's API and cannot be bypassed.

    Features πŸš€

    • Automatically scrapes all available stories from your Telegram friends
    • Downloads both photos and videos from stories
    • Stores metadata in SQLite database
    • Exports data to Excel spreadsheet
    • Real-time monitoring with customizable intervals
• Timestamps are set to UTC+2
    • Maintains record of previously downloaded stories
    • Resume capability
    • Automatic retry mechanism

    Prerequisites πŸ“‹

    Before running the script, you'll need:

    • Python 3.7 or higher
    • Telegram account
    • API credentials from Telegram
    • Friends on Telegram whose stories you want to track

    Required Python packages

    pip install -r requirements.txt

    Contents of requirements.txt:

    telethon
    openpyxl
    schedule

    Getting Telegram API Credentials πŸ”‘

    1. Visit https://my.telegram.org/auth
    2. Log in with your phone number
    3. Click on "API development tools"
    4. Fill in the form:
    5. App title: Your app name
    6. Short name: Your app short name
    7. Platform: Can be left as "Desktop"
    8. Description: Brief description of your app
    9. Click "Create application"
    10. You'll receive:
    11. api_id: A number
    12. api_hash: A string of letters and numbers

    Keep these credentials safe, you'll need them to run the script!

    Setup and Running πŸ”§

1. Clone the repository:

git clone https://github.com/unnohwn/telegram-story-scraper.git
cd telegram-story-scraper

2. Install requirements:

pip install -r requirements.txt

3. Run the script:

python TGSS.py
4. On first run, you'll be prompted to enter:
   • Your API ID
   • Your API Hash
   • Your phone number (with country code)
   • Verification code (sent to your Telegram)
   • Checking interval in seconds (default is 60)

    How It Works πŸ”„

The script:
1. Connects to your Telegram account
2. Periodically checks for new stories from your friends
3. Downloads any new stories (photos/videos)
4. Stores metadata in a SQLite database
5. Exports information to an Excel file
6. Runs continuously until interrupted (Ctrl+C)

    Data Storage πŸ’Ύ

    Database Structure (stories.db)

SQLite database containing:
• user_id: Telegram user ID of the story creator
• story_id: Unique story identifier
• timestamp: When the story was posted (UTC+2)
• filename: Local filename of the downloaded media

    CSV and Excel Export (stories_export.csv/xlsx)

Export file containing the same information as the database, useful for:
• Easy viewing of story metadata
• Filtering and sorting
• Data analysis
• Sharing data with others

    Media Storage πŸ“

    • Photos are saved as: {user_id}_{story_id}.jpg
    • Videos are saved with their original extension: {user_id}_{story_id}.{extension}
    • All media files are saved in the script's directory

    Features in Detail πŸ”

    Continuous Monitoring

    • Customizable checking interval (default: 60 seconds)
    • Runs continuously until manually stopped
    • Maintains state between runs
    • Avoids duplicate downloads

    Media Handling

    • Supports both photos and videos
    • Automatically detects media type
    • Preserves original quality
    • Generates unique filenames

    Error Handling πŸ› οΈ

The script includes:
• Automatic retry mechanism for failed downloads
• Error logging for failed operations
• Connection error handling
• State preservation in case of interruption

    Limitations ⚠️

    • Subject to Telegram's rate limits
    • Stories must be currently active (not expired)
    • Media download size limits apply as per Telegram's restrictions

    Contributing 🀝

    Contributions are welcome! Please feel free to submit a Pull Request.

    License πŸ“„

    This project is licensed under the MIT License - see the LICENSE file for details.

    Disclaimer βš–οΈ

This tool is for educational purposes only. Make sure to:
• Respect Telegram's Terms of Service
• Obtain necessary permissions before scraping
• Use responsibly and ethically
• Comply with data protection regulations
• Respect user privacy



    gitGRAB - This Tool Is Designed To Interact With The GitHub API And Retrieve Specific User Details, Repository Information, And Commit Emails For A Given User



    This tool is designed to interact with the GitHub API and retrieve specific user details, repository information, and commit emails for a given user.


    Install Requests

    pip install requests

    Execute the program

    python3 gitgrab.py
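
The script itself isn't reproduced in this post, but the three lookups it performs map onto public GitHub REST endpoints. A minimal unauthenticated illustration (real use should send an API token, since the anonymous rate limit is low; error handling is omitted):

import requests

BASE = "https://api.github.com"
user = "octocat"  # example target

profile = requests.get(f"{BASE}/users/{user}").json()
repos = requests.get(f"{BASE}/users/{user}/repos").json()

emails = set()
for repo in repos:
    # Each commit object exposes the email the author configured in git
    for c in requests.get(f"{BASE}/repos/{user}/{repo['name']}/commits").json():
        emails.add(c["commit"]["author"]["email"])

print(profile.get("name"), sorted(emails))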



    Snoop - OSINT Tool For Research Social Media Accounts By Username

    By: Unknown


    OSINT Tool for research social media accounts by username


    Install Requests

pip install requests

Install BeautifulSoup

pip install beautifulsoup4

Execute the program

python3 snoop.py
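
Snoop's site modules aren't shown here, but username-enumeration tools of this kind generally request candidate profile URLs and inspect the responses. A bare-bones sketch of that idea (the site list and URL patterns are illustrative, not Snoop's own):

import requests

SITES = {
    "GitHub": "https://github.com/{}",
    "Reddit": "https://www.reddit.com/user/{}",
}

def check(username):
    for name, pattern in SITES.items():
        url = pattern.format(username)
        r = requests.get(url, timeout=10)
        # A 200 usually means the profile exists; a 404 usually means it doesn't
        print(f"{name}: {'found' if r.status_code == 200 else 'not found'} ({url})")

check("someusername")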



    DockerSpy - DockerSpy Searches For Images On Docker Hub And Extracts Sensitive Information Such As Authentication Secrets, Private Keys, And More

    By: Unknown


    DockerSpy searches for images on Docker Hub and extracts sensitive information such as authentication secrets, private keys, and more.


    What is Docker?

    Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization technology. Containers allow developers to package an application and its dependencies into a single, portable unit that can run consistently across various computing environments. Docker simplifies the development and deployment process by ensuring that applications run the same way regardless of where they are deployed.

    About Docker Hub

    Docker Hub is a cloud-based repository where developers can store, share, and distribute container images. It serves as the largest library of container images, providing access to both official images created by Docker and community-contributed images. Docker Hub enables developers to easily find, download, and deploy pre-built images, facilitating rapid application development and deployment.

    Why OSINT on Docker Hub?

    Open Source Intelligence (OSINT) on Docker Hub involves using publicly available information to gather insights and data from container images and repositories hosted on Docker Hub. This is particularly important for identifying exposed secrets for several reasons:

    1. Security Audits: By analyzing Docker images, organizations can uncover exposed secrets such as API keys, authentication tokens, and private keys that might have been inadvertently included. This helps in mitigating potential security risks.

    2. Incident Prevention: Proactively searching for exposed secrets in Docker images can prevent security breaches before they happen, protecting sensitive information and maintaining the integrity of applications.

    3. Compliance: Ensuring that container images do not expose secrets is crucial for meeting regulatory and organizational security standards. OSINT helps verify that no sensitive information is unintentionally disclosed.

    4. Vulnerability Assessment: Identifying exposed secrets as part of regular security assessments allows organizations to address these vulnerabilities promptly, reducing the risk of exploitation by malicious actors.

    5. Enhanced Security Posture: Continuously monitoring Docker Hub for exposed secrets strengthens an organization's overall security posture, making it more resilient against potential threats.

    Utilizing OSINT on Docker Hub to find exposed secrets enables organizations to enhance their security measures, prevent data breaches, and ensure the confidentiality of sensitive information within their containerized applications.

    How DockerSpy Works

    DockerSpy obtains information from Docker Hub and uses regular expressions to inspect the content for sensitive information, such as secrets.
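
DockerSpy's code isn't included in this post, but the described approach (pull public data from Docker Hub, regex-scan it) can be pictured with a toy version that scans only repository descriptions; DockerSpy itself inspects image contents. The endpoint and the single AWS-key pattern are illustrative assumptions, not DockerSpy's configuration:

import re
import requests

AWS_KEY = re.compile(r"AKIA[0-9A-Z]{16}")  # one well-known secret pattern

def scan_descriptions(query):
    # Docker Hub's public search endpoint (assumed shape, for illustration)
    url = "https://hub.docker.com/v2/search/repositories/"
    for repo in requests.get(url, params={"query": query}).json().get("results", []):
        text = repo.get("short_description") or ""
        for match in AWS_KEY.findall(text):
            print(repo.get("repo_name"), "->", match)

scan_descriptions("backup")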

    Getting Started

    To use DockerSpy, follow these steps:

1. Installation: Clone the DockerSpy repository and install the required dependencies.
git clone https://github.com/UndeadSec/DockerSpy.git && cd DockerSpy && make
2. Usage: Run DockerSpy from terminal.
dockerspy

    Custom Configurations

To customize DockerSpy configurations, edit the following files:

• Regular Expressions
• Ignored File Extensions

    Disclaimer

    DockerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.

    Contribution

    Contributions to DockerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.

    About the Author

    DockerSpy is developed and maintained by Alisson Moretto (UndeadSec)

    I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.

    Consider following me:



    Thanks

    Special thanks to @akaclandestine



    PIP-INTEL - OSINT and Cyber Intelligence Tool

    By: Unknown



Pip-Intel is a powerful tool designed for OSINT (Open Source Intelligence) and cyber intelligence gathering activities. It consolidates various open-source tools into a single user-friendly interface, simplifying the data collection and analysis processes for researchers and cybersecurity professionals.

    Pip-Intel utilizes Python-written pip packages to gather information from various data points. This tool is equipped with the capability to collect detailed information through email addresses, phone numbers, IP addresses, and social media accounts. It offers a wide range of functionalities including email-based OSINT operations, phone number-based inquiries, geolocating IP addresses, social media and user analyses, and even dark web searches.




    C2-Tracker - Live Feed Of C2 Servers, Tools, And Botnets

    By: Zion3R


Free-to-use IOC feed for various tools/malware. It started out with just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool, and there is an all.txt.

The feed should update daily. I'm actively working on making the backend more reliable.


    Honorable Mentions

    Many of the Shodan queries have been sourced from other CTI researchers:

    Huge shoutout to them!

    Thanks to BertJanCyber for creating the KQL query for ingesting this feed

    And finally, thanks to Y_nexro for creating C2Live in order to visualize the data

    What do I track?

    Running Locally

    If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY

    echo SHODAN_API_KEY=API_KEY >> ~/.bashrc
    bash
    python3 -m pip install -r requirements.txt
    python3 tracker.py
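
Since the feed is built from Shodan queries, a private collector boils down to the official shodan library's search call. A minimal sketch (the query string is only an example, not one of the tracker's curated queries):

import os
import shodan

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

# The real tracker maintains a curated query list per tool/malware family
results = api.search('product:"Cobalt Strike Beacon"')

for ip in sorted({match["ip_str"] for match in results["matches"]}):
    print(ip)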

    Contributing

I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set any hard guidelines around what can be submitted; just know that fidelity is paramount (a high ratio of true positives to false positives is the focus).

    References



    Cloud_Enum - Multi-cloud OSINT Tool. Enumerate Public Resources In AWS, Azure, And Google Cloud

    By: Zion3R


    Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.

    Currently enumerates the following:

    Amazon Web Services: - Open / Protected S3 Buckets - awsapps (WorkMail, WorkDocs, Connect, etc.)

    Microsoft Azure: - Storage Accounts - Open Blob Storage Containers - Hosted Databases - Virtual Machines - Web Apps

    Google Cloud Platform - Open / Protected GCP Buckets - Open / Protected Firebase Realtime Databases - Google App Engine sites - Cloud Functions (enumerates project/regions with existing functions, then brute forces actual function names) - Open Firebase Apps


    See it in action in Codingo's video demo here.


    Usage

    Setup

Several non-standard libraries are required to support threaded HTTP requests and DNS lookups. You'll need to install the requirements as follows:

    pip3 install -r ./requirements.txt

    Running

    The only required argument is at least one keyword. You can use the built-in fuzzing strings, but you will get better results if you supply your own with -m and/or -b.

    You can provide multiple keywords by specifying the -k argument multiple times.

    Keywords are mutated automatically using strings from enum_tools/fuzz.txt or a file you provide with the -m flag. Services that require a second-level of brute forcing (Azure Containers and GCP Functions) will also use fuzz.txt by default or a file you provide with the -b flag.
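
To make the mutation step concrete, here is a rough sketch of the kind of expansion such tools perform before checking candidate names (the combination patterns below are illustrative assumptions; cloud_enum's exact logic lives in its source):

def mutate(keywords, mutations):
    # Combine each keyword with each mutation string in common naming styles
    candidates = set(keywords)
    for kw in keywords:
        for m in mutations:
            candidates.update({f"{kw}{m}", f"{m}{kw}", f"{kw}-{m}", f"{m}-{kw}"})
    return sorted(candidates)

# mutate(["somecompany"], ["dev", "backup"]) yields candidates such as
# "somecompany-dev", "dev-somecompany", "somecompanybackup", ...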

    Let's say you were researching "somecompany" whose website is "somecompany.io" that makes a product called "blockchaindoohickey". You could run the tool like this:

    ./cloud_enum.py -k somecompany -k somecompany.io -k blockchaindoohickey

HTTP scraping and DNS lookups use 5 threads each by default. You can try increasing this, but eventually the cloud providers will rate limit you. Here is an example increasing the thread count to 10.

    ./cloud_enum.py -k keyword -t 10

    IMPORTANT: Some resources (Azure Containers, GCP Functions) are discovered per-region. To save time scanning, there is a "REGIONS" variable defined in cloudenum/azure_regions.py and cloudenum/gcp_regions.py that is set by default to use only 1 region. You may want to look at these files and edit them to be relevant to your own work.

    Complete Usage Details

    usage: cloud_enum.py [-h] -k KEYWORD [-m MUTATIONS] [-b BRUTE]

    Multi-cloud enumeration utility. All hail OSINT!

    optional arguments:
    -h, --help show this help message and exit
    -k KEYWORD, --keyword KEYWORD
    Keyword. Can use argument multiple times.
    -kf KEYFILE, --keyfile KEYFILE
    Input file with a single keyword per line.
    -m MUTATIONS, --mutations MUTATIONS
    Mutations. Default: enum_tools/fuzz.txt
    -b BRUTE, --brute BRUTE
    List to brute-force Azure container names. Default: enum_tools/fuzz.txt
    -t THREADS, --threads THREADS
    Threads for HTTP brute-force. Default = 5
    -ns NAMESERVER, --nameserver NAMESERVER
    DNS server to use in brute-force.
    -l LOGFILE, --logfile LOGFILE
    Will APPEND found items to specified file.
    -f FORMAT, --format FORMAT
    Format for log file (text,json,csv - defaults to text)
    --disable-aws Disable Amazon checks.
    --disable-azure Disable Azure checks.
    --disable-gcp Disable Google checks.
    -qs, --quickscan Disable all mutations and second-level scans

    Thanks

So far, I have borrowed from:

• Some of the permutations from GCPBucketBrute



    Dorkish - Chrome Extension Tool For OSINT & Recon

    By: Zion3R


During the reconnaissance phase or when doing OSINT, we often use Google dorking and Shodan; hence the idea of Dorkish.
Dorkish is a Chrome extension tool that facilitates custom dork creation for Google and Shodan using the builder, and it offers prebuilt dorks for efficient reconnaissance and OSINT engagements.


    Installation And Setup

    1- Clone the repository

    git clone https://github.com/yousseflahouifi/dorkish.git

2- Go to chrome://extensions/ and enable Developer mode in the top right corner.
3- Click on the Load unpacked extension button and select the dorkish folder.

Note: For Firefox users, you can find the extension here: https://addons.mozilla.org/en-US/firefox/addon/dorkish/

    Features

    Google dorking

• Builder with keywords to filter your Google search results.
• Prebuilt dorks for bug bounty programs.
• Prebuilt dorks used during the reconnaissance phase in bug bounty.
• Prebuilt dorks for exposed files and directories.
• Prebuilt dorks for login and signup portals.
• Prebuilt dorks for cyber security jobs.

    Shodan dorking

• Builder with filter keywords used in Shodan.
• Variety of prebuilt dorks to find IoT, network infrastructure, cameras, ICS, databases, etc.

    Usage

    Once you have found or built the dork you need, simply click it and click search. This will direct you to the desired search engine, Shodan or Google, with the specific dork you've entered. Then, you can explore and enjoy the results that match your query.

    TODO

• Add more useful dorks and categories
    • Fix some bugs
    • Add a search bar to search through the results
    • Might add some LLM models to build dorks

    Notes

I have built some dorks and gathered others from public resources; here are a few:

• https://github.com/lothos612/shodan
• https://github.com/TakSec/google-dorks-bug-bounty

    Warning

    • I am not responsible for any damage caused by using the tool


    DarkGPT - An OSINT Assistant Based On GPT-4-200K Designed To Perform Queries On Leaked Databases, Thus Providing An Artificial Intelligence Assistant That Can Be Useful In Your Traditional OSINT Processes

    By: Zion3R


    DarkGPT is an artificial intelligence assistant based on GPT-4-200K designed to perform queries on leaked databases. This guide will help you set up and run the project on your local environment.


    Prerequisites

    Before starting, make sure you have Python installed on your system. This project has been tested with Python 3.8 and higher versions.

    Environment Setup

    1. Clone the Repository

    First, you need to clone the GitHub repository to your local machine. You can do this by executing the following command in your terminal:

git clone https://github.com/luijait/DarkGPT.git
cd DarkGPT

2. Configure Environment Variables

You will need to set up some environment variables for the script to work correctly. Copy the .env.example file to a new file named .env and fill in your values:

    DEHASHED_API_KEY="your_dehashed_api_key_here"

3. Install Dependencies

    This project requires certain Python packages to run. Install them by running the following command:

pip install -r requirements.txt

4. Run the project:

python3 main.py



    SwaggerSpy - Automated OSINT On SwaggerHub

    By: Zion3R


    SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.


    What is Swagger?

    Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.


    About SwaggerHub

    SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.


    Why OSINT on SwaggerHub?

    Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:

    1. Developer Oversights: Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.

    2. Security Best Practices: OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.

    3. Preventing Data Leaks: By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.

    4. Risk Mitigation: Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.

    5. Compliance and Privacy: Many industries have stringent compliance requirements regarding the protection of sensitive data. OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.

    6. Educational Opportunities: Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.

    By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.


    How SwaggerSpy Works

    SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.
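
SwaggerSpy's source isn't reproduced here, but the described flow (query SwaggerHub, regex-scan the returned API definitions) can be sketched roughly as below. The registry endpoint shape and the single generic pattern are assumptions for illustration, not SwaggerSpy's actual configuration:

import re
import requests

# Illustrative catch-all pattern; SwaggerSpy maintains its own regex set
GENERIC_KEY = re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S{8,}")

def scan(search_term):
    listing = requests.get(
        "https://api.swaggerhub.com/apis", params={"query": search_term}
    ).json()
    for api in listing.get("apis", []):
        for prop in api.get("properties", []):
            if prop.get("type") == "Swagger":
                spec = requests.get(prop["url"]).text  # the raw API definition
                for m in GENERIC_KEY.finditer(spec):
                    print(api.get("name"), "->", m.group(0))

scan("example.com")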


    Getting Started

    To use SwaggerSpy, follow these steps:

1. Installation: Clone the SwaggerSpy repository and install the required dependencies.
git clone https://github.com/UndeadSec/SwaggerSpy.git
cd SwaggerSpy
pip install -r requirements.txt
2. Usage: Run SwaggerSpy with the target search terms (more accurate with domains).
python swaggerspy.py searchterm
3. Results: SwaggerSpy will generate a report containing OSINT findings, including information about the API, endpoints, and secrets.

    Disclaimer

    SwaggerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.


    Contribution

    Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.


    About the Author

    SwaggerSpy is developed and maintained by Alisson Moretto (UndeadSec)

    I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.


    TODO

    Regular Expressions Enhancement
    • [ ] Review and improve existing regular expressions.
    • [ ] Ensure that regular expressions adhere to best practices.
    • [ ] Check for any potential optimizations in the regex patterns.
    • [ ] Test regular expressions with various input scenarios for accuracy.
    • [ ] Document any complex or non-trivial regex patterns for better understanding.
    • [ ] Explore opportunities to modularize or break down complex patterns.
    • [ ] Verify the regular expressions against the latest specifications or requirements.
    • [ ] Update documentation to reflect any changes made to the regular expressions.

    License

    SwaggerSpy is licensed under the MIT License. See the LICENSE file for details.


    Thanks

    Special thanks to @Liodeus for providing project inspiration through swaggerHole.



    Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

    By: Zion3R


Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.


    Extracted Details:

    Uscrapper extracts the following details from the provided website:

    • Email Addresses: Displays email addresses found on the website.
    • Social Media Links: Displays links to various social media platforms found on the website.
    • Author Names: Displays the names of authors associated with the website.
    • Geolocations: Displays geolocation information associated with the website.
• Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers, and usernames.
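
The extraction itself is regex-driven; here is a stripped-down illustration of the email/phone portion of that idea (the patterns are simplified examples, not Uscrapper's own):

import re
import requests

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def extract(url):
    html = requests.get(url, timeout=10).text
    return {"emails": set(EMAIL.findall(html)), "phones": set(PHONE.findall(html))}

print(extract("https://example.com"))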

What's New?

    Uscrapper 2.0:

• Introduced multiple modules to bypass anti-web-scraping techniques.
• Introduced Crawl and Scrape: an advanced crawl-and-scrape module to scrape websites from within.
• Implemented multithreading to make these processes faster.

    Installation Steps:

    git clone https://github.com/z0m31en7/Uscrapper.git
    cd Uscrapper/install/ 
    chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

    Usage:

    To run Uscrapper, use the following command-line syntax:

    python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]


    Arguments:

    • -h, --help: Show the help message and exit.
    • -u URL, --url URL: Specify the URL of the website to extract details from.
• -c INT, --crawl INT: Specify the number of links to crawl.
    • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
    • -O, --generate-report: Generate a report file containing the extracted details.
    • -ns, --nonstrict: Display non-strict usernames during extraction.

    Note:

    • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

    • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

• To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.

    Contribution:

    Want a new feature to be added?

    • Make a pull request with all the necessary details and it will be merged after a review.
    • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


    Pantheon - Insecure Camera Parser

    By: Zion3R


    Pantheon is a GUI application that allows users to display information regarding network cameras in various countries as well as an integrated live-feed for non-protected cameras.

    Functionalities

Pantheon allows users to execute an API crawler. There was originally functionality without the use of any APIs (like Insecam), but Google's TOS kept getting in the way of the original scraping mechanism.


    Installation

    1. git clone https://github.com/josh0xA/Pantheon.git
    2. cd Pantheon
    3. pip3 install -r requirements.txt
      Execution: python3 pantheon.py
• Note: I will later add a GUI installer to make it fully independent of a CLI

    Windows

    • You can just follow the steps above or download the official package here.
    • Note, the PE binary of Pantheon was put together using pyinstaller, so Windows Defender might get a bit upset.

    Ubuntu

    • First, complete steps 1, 2 and 3 listed above.
    • chmod +x distros/ubuntu_install.sh
    • ./distros/ubuntu_install.sh

    Debian and Kali Linux

    • First, complete steps 1, 2 and 3 listed above.
    • chmod +x distros/debian-kali_install.sh
    • ./distros/debian-kali_install.sh

    MacOS

    • The regular installation steps above should suffice. If not, open up an issue.

    Usage

    (Enter) on a selected IP:Port to establish a Pantheon webview of the camera. (Use this at your own risk)

    (Left-click) on a selected IP:Port to view the geolocation of the camera.
    (Right-click) on a selected IP:Port to view the HTTP data of the camera (Ctrl+Left-click for Mac).

    Adjust the map as you please to see the markers.

    • Also note that this app is far from perfect and not every link that shows up is a live-feed, some are login pages (Do NOT attempt to login).

    Ethical Notice

The developer of this program, Josh Schiavone, is not responsible for misuse of this data-gathering tool. Pantheon simply provides information that can be indexed by any modern search engine. Do not try to establish unauthorized access to live feeds that are password protected - that is illegal. Furthermore, if you do choose to use Pantheon to view a live feed, do so at your own risk. Pantheon was developed for educational purposes only. For further information, please visit: https://joshschiavone.com/panth_info/panth_ethical_notice.html

    Licence

    MIT License
    Copyright (c) Josh Schiavone



    CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare

    By: Zion3R


    CloakQuest3r is a powerful Python tool meticulously crafted to uncover the true IP address of websites safeguarded by Cloudflare, a widely adopted web security and performance enhancement service. Its core mission is to accurately discern the actual IP address of web servers that are concealed behind Cloudflare's protective shield. Subdomain scanning is employed as a key technique in this pursuit. This tool is an invaluable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures.


    Key Features:

    • Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets.

    • Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains.

    • Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables scanning of a substantial list of subdomains without significantly extending the execution time.

    • Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing.

    With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers.
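
The core idea can be pictured as: resolve candidate subdomains, then flag any that resolve outside Cloudflare's published address space. A simplified sketch of that one technique (the ranges below are a subset of Cloudflare's published IPv4 ranges and the wordlist is a stand-in; this is not CloakQuest3r's actual code):

import socket
from ipaddress import ip_address, ip_network

# Subset of Cloudflare's published ranges (see www.cloudflare.com/ips/)
CLOUDFLARE = [ip_network(n) for n in ("104.16.0.0/13", "172.64.0.0/13", "173.245.48.0/20")]

def behind_cloudflare(ip):
    return any(ip_address(ip) in net for net in CLOUDFLARE)

def scan(domain, words=("www", "mail", "dev", "staging")):
    for w in words:
        host = f"{w}.{domain}"
        try:
            ip = socket.gethostbyname(host)
        except socket.gaierror:
            continue  # subdomain does not resolve
        if not behind_cloudflare(ip):
            print(f"possible real IP: {host} -> {ip}")

scan("example.com")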

    Limitation

- Still in the development phase; sometimes it can't detect the real IP.

- CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement.

1. False Negatives: CloakQuest3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures.

2. Dynamic Environments: Websites' infrastructure and configurations can change over time. The tool may not capture these changes, potentially leading to outdated information.

3. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare.

    This tool is a Proof of Concept and is for Educational Purposes Only.

    How to Use:

1. Run CloakQuest3r with a single command-line argument: the target domain you want to analyze.

       git clone https://github.com/spyboy-productions/CloakQuest3r.git
      cd CloakQuest3r
      pip3 install -r requirements.txt
      python cloakquest3r.py example.com
    2. The tool will check if the website is using Cloudflare. If not, it will inform you that subdomain scanning is unnecessary.

3. If Cloudflare is detected, CloakQuest3r will scan for subdomains and identify their real IP addresses.

    4. You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan.

    5. Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing.

CloakQuest3r simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets.

    Run It Online:

    Run it online on replit.com : https://replit.com/@spyb0y/CloakQuest3r



    Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

    By: Zion3R


Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic", and had enough flexibility to capture false positives that still provided offensive value.

    Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:

    • Workspaces
    • Collections
    • Requests
    • Users
    • Teams

    Installation

    python3 -m pip install porch-pirate

    Using the client

The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords to be used that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.

    Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.

    • --globals
    • --collections
    • --requests
    • --urls
    • --dump
    • --raw
    • --curl

    Simple Search

    porch-pirate -s "coca-cola.com"

    Get Workspace Globals

    By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

    Dump Workspace

    When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

    Automatic Search and Globals Extraction

    Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

    porch-pirate -s "shopify" --globals

    Automatic Search Dump

    Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

    porch-pirate -s "coca-cola.com" --dump

    Extract URLs from Workspace

    A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls

    Automatic URL Extraction

    Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

    porch-pirate -s "coca-cola.com" --urls

    Show Collections in a Workspace

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

    Show Workspace Requests

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

    Show raw JSON

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

    Show Entity Information

    porch-pirate -w WORKSPACE_ID
    porch-pirate -c COLLECTION_ID
    porch-pirate -r REQUEST_ID
    porch-pirate -u USERNAME/TEAMNAME

    Convert Request to Curl

    Porch Pirate can build curl requests when provided with a request ID for easier testing.

    porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

    Use a proxy

    porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

    Using as a library

    Searching

from porchpirate import porchpirate

p = porchpirate()
print(p.search('coca-cola.com'))

    Get Workspace Collections

from porchpirate import porchpirate

p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Dumping a Workspace

import json

from porchpirate import porchpirate

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)

    Grabbing a Workspace's Globals

from porchpirate import porchpirate

p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Other Examples

    Other library usage examples can be located in the examples directory, which contains the following examples:

    • dump_workspace.py
    • format_search_results.py
    • format_workspace_collections.py
    • format_workspace_globals.py
    • get_collection.py
    • get_collections.py
    • get_profile.py
    • get_request.py
    • get_statistics.py
    • get_team.py
    • get_user.py
    • get_workspace.py
    • recursive_globals_from_search.py
    • request_to_curl.py
    • search.py
    • search_by_page.py
    • workspace_collections.py


    OSINT-Framework - OSINT Framework

    By: Zion3R


    OSINT framework focused on gathering information from free tools or resources. The intention is to help people find free OSINT resources. Some of the sites included might require registration or offer more data for $$$, but you should be able to get at least a portion of the available information for no cost.

    I originally created this framework with an information security point of view. Since then, the response from other fields and disciplines has been incredible. I would love to be able to include any other OSINT resources, especially from fields outside of infosec. Please let me know about anything that might be missing!

    Please visit the framework at the link below and good hunting!


    https://osintframework.com

    Legend

    (T) - Indicates a link to a tool that must be installed and run locally
    (D) - Google Dork, for more information: Google Hacking
    (R) - Requires registration
    (M) - Indicates a URL that contains the search term and the URL itself must be edited manually

    For Update Notifications

    Follow me on Twitter: @jnordine - https://twitter.com/jnordine
    Watch or star the project on Github: https://github.com/lockfale/osint-framework

    Suggestions, Comments, Feedback

Feedback or new tool suggestions are extremely welcome! Please feel free to submit a pull request or open an issue on GitHub, or reach out on Twitter.

    Contribute with a GitHub Pull Request

    For new resources, please ensure that the site is available for public and free use.

1. Update the arf.json file in the format shown below. If this isn't the first entry for a folder, add a comma to the last closing brace of the previous entry.
2. Submit a pull request!
3. Thank you!
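
The arf.json entry format isn't reproduced in this post. For orientation, entries follow a simple nested node shape along these lines (treat the exact fields as a guide and mirror the neighboring entries in the existing file before submitting):

{
  "name": "Example Category",
  "type": "folder",
  "children": [
    { "name": "Example Tool (T)", "type": "url", "url": "https://example.com" }
  ]
}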

    OSINT Framework Website

    https://osintframework.com

    Happy Hunting!



    Poastal - The Email OSINT Tool

    By: Zion3R


    Poastal is an email OSINT tool that provides valuable information on any email address. With Poastal, you can easily input an email address and it will quickly answer several questions, providing you with crucial information.


    Features

    • Determine the name of the person who has the email.
    • Check if the email is deliverable or not.
    • Find out if the email is disposable or not.
    • Identify if the email is considered spam.
    • Check if the email is registered on popular platforms such as Facebook, Twitter, Snapchat, Parler, Rumble, MeWe, Imgur, Adobe, Wordpress, and Duolingo.
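
Poastal's own checks aren't shown in this post, but the deliverability test in tools of this kind usually starts with a DNS MX lookup on the mail domain. A minimal sketch of that one idea using dnspython (an illustration of the general technique, not Poastal's code):

import dns.resolver  # pip install dnspython

def has_mx(email):
    # A domain without MX records is very unlikely to accept mail
    domain = email.rsplit("@", 1)[-1]
    try:
        return len(dns.resolver.resolve(domain, "MX")) > 0
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False

print(has_mx("example@gmail.com"))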

    Usage

    Make sure you have the requirements installed.

    pip install -r requirements.txt

Navigate to the backend folder and run poastal.py to start the Flask app. The app listens on port 8080.

    python poastal.py

    Open index.html in the root directory to use the UI.

    Enter an email address and see the results.

    Test with example@gmail.com.

    There's a new GitHub module.

    If you open up github.py you'll see a section that asks you to replace it with your own API key.

    Feedback

    I hope you find Poastal to be a valuable tool for your OSINT investigations. If you have any feedback or suggestions on how we can improve Poastal, please let me know. I'm always looking for ways to improve this tool to better serve the OSINT community.



    Holehe - Tool To Check If The Mail Is Used On Different Sites Like Twitter, Instagram And Will Retrieve Information On Sites With The Forgotten Password Function

    By: Zion3R

    Holehe Online Version

    Summary

    Efficiently finding registered accounts from emails.

    Holehe checks if an email is attached to an account on sites like twitter, instagram, imgur and more than 120 others.


    Installation

    With PyPI

    pip3 install holehe

    With Github

    git clone https://github.com/megadose/holehe.git
    cd holehe/
    python3 setup.py install

    Quick Start

Holehe can be run from the CLI and rapidly embedded within existing Python applications.

CLI Example

    holehe test@gmail.com

Python Example

    import trio
    import httpx

    from holehe.modules.social_media.snapchat import snapchat


async def main():
    email = "test@gmail.com"
    out = []
    client = httpx.AsyncClient()

    await snapchat(email, client, out)

    print(out)
    await client.aclose()

trio.run(main)

    Module Output

For each module, data is returned in a standard dictionary with the following JSON-equivalent format:

{
    "name": "example",
    "rateLimit": false,
    "exists": true,
    "emailrecovery": "ex****e@gmail.com",
    "phoneNumber": "0*******78",
    "others": null
}
• rateLimit: Lets you know if you've been rate-limited.
• exists: If an account exists for the email on that service.
• emailrecovery: Sometimes partially obfuscated recovery emails are returned.
• phoneNumber: Sometimes partially obfuscated recovery phone numbers are returned.
• others: Any extra info.

    Rate limit? Change your IP.

    Maltego Transform : Holehe Maltego

    Thank you to :

    Donations

    For BTC Donations : 1FHDM49QfZX6pJmhjLE5tB2K6CaTLMZpXZ

     License

    GNU General Public License v3.0

    Built for educational purposes only.

    Modules

    Name Domain Method Frequent Rate Limit
    aboutme about.me register ✘
    adobe adobe.com password recovery ✘
    amazon amazon.com login ✘
    amocrm amocrm.com register ✘
    anydo any.do login βœ”
    archive archive.org register ✘
    armurerieauxerre armurerie-auxerre.com register ✘
    atlassian atlassian.com register ✘
    axonaut axonaut.com register ✘
    babeshows babeshows.co.uk register ✘
    badeggsonline badeggsonline.com register ✘
    biosmods bios-mods.com register ✘
    biotechnologyforums biotechnologyforums.com register ✘
    bitmoji bitmoji.com login ✘
    blablacar blablacar.com register βœ”
    blackworldforum blackworldforum.com register βœ”
    blip blip.fm register βœ”
    blitzortung forum.blitzortung.org register ✘
    bluegrassrivals bluegrassrivals.com register ✘
    bodybuilding bodybuilding.com register ✘
    buymeacoffee buymeacoffee.com register βœ”
    cambridgemt discussion.cambridge-mt.com register ✘
    caringbridge caringbridge.org register ✘
    chinaphonearena chinaphonearena.com register ✘
    clashfarmer clashfarmer.com register βœ”
    codecademy codecademy.com register βœ”
    codeigniter forum.codeigniter.com register ✘
    codepen codepen.io register ✘
    coroflot coroflot.com register ✘
    cpaelites cpaelites.com register ✘
    cpahero cpahero.com register ✘
    cracked_to cracked.to register βœ”
    crevado crevado.com register βœ”
    deliveroo deliveroo.com register βœ”
    demonforums demonforums.net register βœ”
    devrant devrant.com register ✘
    diigo diigo.com register ✘
    discord discord.com register ✘
    docker docker.com register ✘
    dominosfr dominos.fr register βœ”
    ebay ebay.com login βœ”
    ello ello.co register ✘
    envato envato.com register ✘
    eventbrite eventbrite.com login ✘
    evernote evernote.com login ✘
    fanpop fanpop.com register ✘
    firefox firefox.com register ✘
    flickr flickr.com login ✘
    freelancer freelancer.com register ✘
    freiberg drachenhort.user.stunet.tu-freiberg.de register ✘
    garmin garmin.com register βœ”
    github github.com register ✘
    google google.com register βœ”
    gravatar gravatar.com other ✘
    hubspot hubspot.com login ✘
    imgur imgur.com register βœ”
    insightly insightly.com login ✘
    instagram instagram.com register βœ”
    issuu issuu.com register ✘
    koditv forum.kodi.tv register ✘
    komoot komoot.com register βœ”
    laposte laposte.fr register ✘
    lastfm last.fm register ✘
    lastpass lastpass.com register ✘
    mail_ru mail.ru password recovery ✘
    mybb community.mybb.com register ✘
    myspace myspace.com register ✘
    nattyornot nattyornotforum.nattyornot.com register ✘
    naturabuy naturabuy.fr register ✘
    ndemiccreations forum.ndemiccreations.com register ✘
    nextpvr forums.nextpvr.com register ✘
    nike nike.com register ✘
    nimble nimble.com register ✘
    nocrm nocrm.io register ✘
    nutshell nutshell.com register ✘
    odnoklassniki ok.ru password recovery ✘
    office365 office365.com other βœ”
    onlinesequencer onlinesequencer.net register ✘
    parler parler.com login ✘
    patreon patreon.com login βœ”
    pinterest pinterest.com register ✘
    pipedrive pipedrive.com register ✘
    plurk plurk.com register ✘
    pornhub pornhub.com register ✘
    protonmail protonmail.ch other ✘
    quora quora.com register ✘
    rambler rambler.ru register ✘
    redtube redtube.com register ✘
    replit replit.com register βœ”
    rocketreach rocketreach.co register ✘
    samsung samsung.com register ✘
    seoclerks seoclerks.com register ✘
    sevencups 7cups.com register βœ”
    smule smule.com register βœ”
    snapchat snapchat.com login ✘
    soundcloud soundcloud.com register ✘
    sporcle sporcle.com register ✘
    spotify spotify.com register βœ”
    strava strava.com register ✘
    taringa taringa.net register βœ”
    teamleader teamleader.com register ✘
    teamtreehouse teamtreehouse.com register ✘
    tellonym tellonym.me register ✘
    thecardboard thecardboard.org register ✘
    therianguide forums.therian-guide.com register ✘
    thevapingforum thevapingforum.com register ✘
    tumblr tumblr.com register ✘
    tunefind tunefind.com register βœ”
    twitter twitter.com register ✘
    venmo venmo.com register βœ”
    vivino vivino.com register ✘
    voxmedia voxmedia.com register ✘
    vrbo vrbo.com register ✘
    vsco vsco.co register ✘
    wattpad wattpad.com register βœ”
    wordpress wordpress login ✘
    xing xing.com register ✘
    xnxx xnxx.com register βœ”
    xvideos xvideos.com register ✘
    yahoo yahoo.com login βœ”
    zoho zoho.com login βœ”


    Xsubfind3R - A CLI Utility To Find Domain'S Known Subdomains From Curated Passive Online Sources

    By: Zion3R


    xsubfind3r is a command-line interface (CLI) utility to find domain's known subdomains from curated passive online sources.


    Features

    • Fetches domains from curated passive sources to maximize results.

    • Supports stdin and stdout for easy integration into workflows.

    • Cross-Platform (Windows, Linux & macOS).

    Installation

    Install release binaries (Without Go Installed)

    Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

    • ...with wget:

       wget https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
    • ...or, with curl:

       curl -OL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz

    ...then, extract the binary:

    tar xf xsubfind3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

    curl -sL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz | tar -xzv

    NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xsubfind3r executable.

    ...move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

    sudo mv xsubfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

    Install source (With Go Installed)

    Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

    go install ...

    go install -v github.com/hueristiq/xsubfind3r/cmd/xsubfind3r@latest

go build ... the development version

    • Clone the repository

       git clone https://github.com/hueristiq/xsubfind3r.git 
    • Build the utility

       cd xsubfind3r/cmd/xsubfind3r && \
      go build .
    • Move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

       sudo mv xsubfind3r /usr/local/bin/

      NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

    NOTE: While the development version is a good way to take a peek at xsubfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

    Post Installation

xsubfind3r will work right after installation. However, BeVigil, Chaos, Fullhunt, GitHub, Intelligence X and Shodan require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xsubfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, from which one will be used.

    Example config.yaml:

    version: 0.3.0
    sources:
    - alienvault
    - anubis
    - bevigil
    - chaos
    - commoncrawl
    - crtsh
    - fullhunt
    - github
    - hackertarget
    - intelx
    - shodan
    - urlscan
    - wayback
    keys:
    bevigil:
    - awA5nvpKU3N8ygkZ
    chaos:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39asdsd54bbc1aabb208c9acfb
    fullhunt:
    - 0d9652ce-516c-4315-b589-9b241ee6dc24
    github:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
    - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
    intelx:
    - 2.intelx.io:00000000-0000-0000-0000-000000000000
    shodan:
    - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
    urlscan:
    - d4c85d34-e425-446e-d4ab-f5a3412acbe8

    Usage

    To display help message for xsubfind3r use the -h flag:

    xsubfind3r -h

    help message:


    _ __ _ _ _____
    __ _____ _ _| |__ / _(_)_ __ __| |___ / _ __
    \ \/ / __| | | | '_ \| |_| | '_ \ / _` | |_ \| '__|
    > <\__ \ |_| | |_) | _| | | | | (_| |___) | |
    /_/\_\___/\__,_|_.__/|_| |_|_| |_|\__,_|____/|_| v0.3.0

    USAGE:
    xsubfind3r [OPTIONS]

    INPUT:
    -d, --domain string[] target domains
    -l, --list string target domains' list file path

    SOURCES:
    --sources bool list supported sources
    -u, --sources-to-use string[] comma(,) separeted sources to use
    -e, --sources-to-exclude string[] comma(,) separeted sources to exclude

    OPTIMIZATION:
    -t, --threads int number of threads (default: 50)

    OUTPUT:
    --no-color bool disable colored output
    -o, --output string output subdomains' file path
    -O, --output-directory string output subdomains' directory path
    -v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

    CONFIGURATION:
    -c, --configuration string configuration file path (default: ~/.hueristiq/xsubfind3r/config.yaml)
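
For example, combining the documented flags, a typical run enumerates a single domain and writes the results to a file:

xsubfind3r -d example.com -o example.com_subdomains.txt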

    Contribution

    Issues and Pull Requests are welcome! Check out the contribution guidelines.

    Licensing

    This utility is distributed under the MIT license.



    Xurlfind3R - A CLI Utility To Find Domain'S Known URLs From Curated Passive Online Sources

    By: Zion3R


    xurlfind3r is a command-line interface (CLI) utility to find domain's known URLs from curated passive online sources.


    Features

    Installation

    Install release binaries (Without Go Installed)

    Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

    • ...with wget:

       wget https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
    • ...or, with curl:

       curl -OL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz

    ...then, extract the binary:

    tar xf xurlfind3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

    curl -sL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz | tar -xzv

    NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xurlfind3r executable.

    ...move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

    sudo mv xurlfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

    Install source (With Go Installed)

    Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

    go install ...

    go install -v github.com/hueristiq/xurlfind3r/cmd/xurlfind3r@latest

go build ... the development version

    • Clone the repository

       git clone https://github.com/hueristiq/xurlfind3r.git 
    • Build the utility

       cd xurlfind3r/cmd/xurlfind3r && \
      go build .
    • Move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

       sudo mv xurlfind3r /usr/local/bin/

      NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

    NOTE: While the development version is a good way to take a peek at xurlfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

    Post Installation

xurlfind3r will work right after installation. However, BeVigil, GitHub and Intelligence X require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xurlfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, from which one will be used.

    Example config.yaml:

    version: 0.2.0
    sources:
    - bevigil
    - commoncrawl
    - github
    - intelx
    - otx
    - urlscan
    - wayback
    keys:
    bevigil:
    - awA5nvpKU3N8ygkZ
    github:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
    - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
    intelx:
    - 2.intelx.io:00000000-0000-0000-0000-000000000000
    urlscan:
    - d4c85d34-e425-446e-d4ab-f5a3412acbe8

    Usage

    To display help message for xurlfind3r use the -h flag:

    xurlfind3r -h

    help message:

                     _  __ _           _ _____      
    __ ___ _ _ __| |/ _(_)_ __ __| |___ / _ __
    \ \/ / | | | '__| | |_| | '_ \ / _` | |_ \| '__|
    > <| |_| | | | | _| | | | | (_| |___) | |
    /_/\_\\__,_|_| |_|_| |_|_| |_|\__,_|____/|_| v0.2.0

    USAGE:
    xurlfind3r [OPTIONS]

    TARGET:
    -d, --domain string (sub)domain to match URLs

    SCOPE:
    --include-subdomains bool match subdomain's URLs

    SOURCES:
    -s, --sources bool list sources
    -u, --use-sources string sources to use (default: bevigil,commoncrawl,github,intelx,otx,urlscan,wayback)
    --skip-wayback-robots bool with wayback, skip parsing robots.txt snapshots
    --skip-wayback-source bool with wayback , skip parsing source code snapshots

    FILTER & MATCH:
    -f, --filter string regex to filter URLs
    -m, --match string regex to match URLs

    OUTPUT:
    --no-color bool no color mode
    -o, --output string output URLs file path
    -v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

    CONFIGURATION:
    -c, --configuration string configuration file path (default: ~/.hueristiq/xurlfind3r/config.yaml)

    Examples

    Basic

    xurlfind3r -d hackerone.com --include-subdomains

    Filter Regex

# filter images
xurlfind3r -d hackerone.com --include-subdomains -f '^https?://[^/]*?/.*\.(jpg|jpeg|png|gif|bmp)(\?[^\s]*)?$'

    Match Regex

    # match js URLs
    xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$'

    Contributing

    Issues and Pull Requests are welcome! Check out the contribution guidelines.

    Licensing

    This utility is distributed under the MIT license.



    LinkedInDumper - Tool To Dump Company Employees From LinkedIn API

    By: Zion3R

Python 3 script to dump company employees from LinkedIn API

    Description

    LinkedInDumper is a Python 3 script that dumps employee data from the LinkedIn social networking platform.

The results contain first name, last name, position (title), location and a user's profile link. Only 2 API calls are required to retrieve all employees if the company does not have more than 10 employees. Otherwise, we have to paginate through the API results. With the --email-format CLI flag one can define a Python string format to auto-generate email addresses based on the retrieved first and last name.
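
Those --email-format patterns are ordinary Python str.format strings in which {0} is the first name and {1} is the last name, so their effect can be checked directly:

# Pattern [2] from the usage text below: first initial + last name
fmt = "{0[0]}.{1}@example.com"
print(fmt.format("john", "doe"))  # -> j.doe@example.com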


    Requirements

    LinkedInDumper talks with the unofficial LinkedIn Voyager API, which requires authentication. Therefore, you must have a valid LinkedIn user account. To keep it simple, LinkedInDumper just expects a cookie value provided by you. Doing it this way, even 2FA protected accounts are supported. Furthermore, you are tasked to provide a LinkedIn company URL to dump employees from.

    Retrieving LinkedIn Cookie

1. Sign in to www.linkedin.com and retrieve your li_at session cookie value, e.g. via the browser's developer tools
2. Specify the cookie value either persistently in the Python script's li_at variable or temporarily at runtime via the CLI flag --cookie (see the sketch after this list for keeping the cookie out of your shell history)
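
One way to avoid leaking the cookie into your shell history is to pass it through an environment variable; a minimal, hypothetical wrapper (LI_AT is an assumed variable name, not part of the tool):

import os
import subprocess

# Read the li_at cookie from an environment variable set beforehand,
# e.g. `export LI_AT=...`, and hand it to the script via --cookie.
cookie = os.environ["LI_AT"]
subprocess.run(
    [
        "python3", "linkedindumper.py",
        "--url", "https://www.linkedin.com/company/apple",
        "--cookie", cookie,
    ],
    check=True,
)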

    Retrieving LinkedIn Company URL

    1. Search your target company on Google Search or directly on LinkedIn
    2. The LinkedIn company URL should look something like this: https://www.linkedin.com/company/apple

    Usage

usage: linkedindumper.py [-h] --url <linkedin-url> [--cookie <cookie>] [--quiet] [--include-private-profiles] [--email-format EMAIL_FORMAT]

options:
  -h, --help            show this help message and exit
  --url <linkedin-url>  A LinkedIn company url - https://www.linkedin.com/company/<company>
  --cookie <cookie>     LinkedIn 'li_at' session cookie
  --quiet               Show employee results only
  --include-private-profiles
                        Show private accounts too
  --email-format        Python string format for emails; for example:
                        [1] john.doe@example.com > '{0}.{1}@example.com'
                        [2] j.doe@example.com > '{0[0]}.{1}@example.com'
                        [3] jdoe@example.com > '{0[0]}{1}@example.com'
                        [4] doe@example.com > '{1}@example.com'
                        [5] john@example.com > '{0}@example.com'
                        [6] jd@example.com > '{0[0]}{1[0]}@example.com'

    Example 1 - Docker Run

    docker run --rm l4rm4nd/linkedindumper:latest --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'

    Example 2 - Native Python

    # install dependencies
    pip install -r requirements.txt

    python3 linkedindumper.py --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'

    Outputs

The script returns employee data as semicolon-separated values (CSV-like):

[LinkedInDumper ASCII art banner] by LRVT

    [i] Company Name: apple
    [i] Company X-ID: 162479
    [i] LN Employees: 1000 employees found
    [i] Dumping Date: 17/10/2022 13:55:06
    [i] Email Format: {0}.{1}@apple.de
    Firstname;Lastname;Email;Position;Gender;Location;Profile
    Katrin;Honauer;katrin.honauer@apple.com;Software Engineer at Apple;N/A;Heidelberg;https://www.linkedin.com/in/katrin-honauer
    Raymond;Chen;raymond.chen@apple.com;Recruiting at Apple;N/A;Austin, Texas Metropolitan Area;https://www.linkedin.com/in/raytherecruiter

    [i] Successfully crawled 2 unique apple employee(s). Hurray ^_-
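
Since the output is semicolon-separated, it can be post-processed with Python's csv module. A minimal sketch, assuming you ran with --quiet and redirected the output (header line included) into a file named employees.csv:

import csv

# Parse LinkedInDumper's semicolon-separated output (header as shown above).
with open("employees.csv", newline="") as fh:
    for row in csv.DictReader(fh, delimiter=";"):
        print(row["Firstname"], row["Lastname"], row["Email"])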

    Limitations

LinkedIn only returns the first 1,000 search results when harvesting contact information. You may also need a LinkedIn Premium account once you have reached the maximum number of profile queries allowed for a free LinkedIn account.

Furthermore, not all employee profiles are public. The results vary depending on the LinkedIn account you use and whether you are connected with employees of the target company. It is therefore sometimes not possible to retrieve the first name, last name and profile URL of certain employee accounts. The script does not display such profiles by default, as they contain placeholder values such as "LinkedIn" as first name and "Member" as last name. If you want to include these private profiles, use the CLI flag --include-private-profiles. Although such accounts are private, their position (title) and location can still be obtained; only the first name, last name and profile URL are hidden.

Finally, LinkedIn users are free to choose how they name their profiles. An account name can therefore contain all sorts of things, such as salutations, abbreviations, emojis, middle names etc. I tried my best to strip some of this nonsense, but it is not a complete solution to the general problem. Note that this script does not use the official LinkedIn API; it gathers information from the unofficial Voyager API.


