
Ashok - An OSINT Recon Tool, A.K.A. Swiss Army Knife

By: Zion3R


Reconnaissance is the first phase of penetration testing: gathering information before any real attacks are planned. Ashok is an incredibly fast recon tool for penetration testers, designed specifically for the reconnaissance phase. Ashok-v1.1 adds an advanced Google dorker and a Wayback crawling machine.



Main Features

- Wayback Crawler Machine
- Google Dorking without limits
- Github Information Grabbing
- Subdomain Identifier
- Cms/Technology Detector With Custom Headers
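
The Wayback crawler feature is driven by the Internet Archive. As a rough illustration of the idea (this is not Ashok's own code), a minimal Python sketch that lists archived URLs for a domain via the public CDX API:

import requests

def wayback_urls(domain):
    # Query the Internet Archive CDX API for archived URLs of a domain.
    resp = requests.get(
        "https://web.archive.org/cdx/search/cdx",
        params={"url": f"{domain}/*", "output": "json",
                "fl": "original", "collapse": "urlkey"},
        timeout=30,
    )
    resp.raise_for_status()
    rows = resp.json()
    return [row[0] for row in rows[1:]]  # first row is the header

for url in wayback_urls("example.com")[:20]:
    print(url)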

Installation

~> git clone https://github.com/ankitdobhal/Ashok
~> cd Ashok
~> python3.7 -m pip install -r requirements.txt

How to use Ashok?

A detailed usage guide is available in the Usage section of the Wiki.

A short index of options is given below:

Docker

Ashok can be launched using a lightweight Python3.8-Alpine Docker image.

$ docker pull powerexploit/ashok-v1.2
$ docker container run -it powerexploit/ashok-v1.2 --help


    Credits



    MasterParser - Powerful DFIR Tool Designed For Analyzing And Parsing Linux Logs

    By: Zion3R


    What is MasterParser?

    MasterParser is a robust Digital Forensics and Incident Response (DFIR) tool meticulously crafted for the analysis of Linux logs within the /var/log directory. Specifically designed to expedite the investigative process for security incidents on Linux systems, MasterParser scans supported logs, such as auth.log, extracting critical details including SSH logins, user creations, event names, IP addresses and much more. The tool's generated summary presents this information in a clear and concise format, enhancing efficiency and accessibility for Incident Responders. Beyond its immediate utility for DFIR teams, MasterParser proves invaluable to the broader InfoSec and IT community, contributing significantly to the swift and comprehensive assessment of security events on Linux platforms.
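
    As a rough illustration of the kind of extraction MasterParser performs (the tool itself is written in PowerShell; this Python sketch and its file name are assumptions, not the tool's code), pulling accepted SSH logins out of an auth.log:

    import re

    # Match syslog lines like: "Accepted password for root from 1.2.3.4 port 22 ssh2"
    PATTERN = re.compile(r"Accepted \w+ for (?P<user>\S+) from (?P<ip>\S+) port \d+")

    with open("auth.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = PATTERN.search(line)
            if m:
                print(f"SSH login: user={m.group('user')} ip={m.group('ip')}")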


    MasterParser Wallpapers

    Love MasterParser as much as we do? Dive into the fun and jazz up your screen with our exclusive MasterParser wallpaper! Click the link below and get ready to add a splash of excitement to your device! Download Wallpaper

    Supported Logs Format

    This is the list of supported log formats within the /var/log directory that MasterParser can analyze. In future updates, MasterParser will support additional log formats for analysis.

    Supported Log Formats List
    auth.log

    Feature & Log Format Requests:

    If you wish to propose the addition of a new feature / log format, kindly submit your request by creating an issue: Click here to create a request

    How To Use?

    How To Use - Text Guide

    1. From this GitHub repository, press "<> Code" and then press "Download ZIP".
    2. From "MasterParser-main.zip", extract the folder "MasterParser-main" to your Desktop.
    3. Open a PowerShell terminal and navigate to the "MasterParser-main" folder.
    # How to navigate to the "MasterParser-main" folder from the PS terminal
    PS C:\> cd "C:\Users\user\Desktop\MasterParser-main\"
    4. Now you can execute the tool; for example, to see the tool's command menu, do this:
    # How to show the MasterParser menu
    PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Menu
    5. To run the tool, put all your /var/log/* logs into the 01-Logs folder, and execute the tool like this:
    # How to run MasterParser
    PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Start
    6. That's it, enjoy the tool!

    How To Use - Video Guide

    https://github.com/YosfanEilay/MasterParser/assets/132997318/d26b4b3f-7816-42c3-be7f-7ee3946a2c70

    MasterParser Social Media Publications

    Social Media Posts
    1. First Tool Post
    2. First Tool Story Publication By Help Net Security
    3. Second Tool Story Publication By Forensic Focus
    4. MasterParser featured in Help Net Security: 20 Essential Open-Source Cybersecurity Tools That Save You Time


    MemTracer - Memory Scanner

    By: Zion3R


    MemTracer is a tool that offers live memory analysis capabilities, allowing digital forensic practitioners to discover and investigate stealthy attack traces hidden in memory. MemTracer is implemented in Python and aims to detect reflectively loaded native .NET framework Dynamic-Link Libraries (DLLs). This is achieved by looking for the following abnormal memory region characteristics:

    • The state of memory page flags in each memory region, specifically the MEM_COMMIT flag, which is used to reserve memory pages for virtual memory use.
    • The type of pages in the region. The MEM_MAPPED page type indicates that the memory pages within the region are mapped into the view of a section.
    • The memory protection of the region. PAGE_READWRITE protection indicates that the memory region is readable and writable, which happens if the Assembly.Load(byte[]) method is used to load a module into memory.
    • The memory region contains a PE header.
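
    A minimal sketch of that region test in Python using ctypes (Windows only; simplified, and not necessarily how MemTracer itself is written -- the PID below is a placeholder):

    import ctypes
    import ctypes.wintypes as wt

    MEM_COMMIT, MEM_MAPPED, PAGE_READWRITE = 0x1000, 0x40000, 0x04
    PROCESS_QUERY_INFORMATION, PROCESS_VM_READ = 0x0400, 0x0010

    class MEMORY_BASIC_INFORMATION(ctypes.Structure):
        _fields_ = [("BaseAddress", ctypes.c_void_p),
                    ("AllocationBase", ctypes.c_void_p),
                    ("AllocationProtect", wt.DWORD),
                    ("PartitionId", wt.WORD),
                    ("RegionSize", ctypes.c_size_t),
                    ("State", wt.DWORD),
                    ("Protect", wt.DWORD),
                    ("Type", wt.DWORD)]

    def is_suspicious(handle, address):
        # Apply the four indicators: committed, mapped, read-write, PE header.
        mbi = MEMORY_BASIC_INFORMATION()
        if not ctypes.windll.kernel32.VirtualQueryEx(
                handle, ctypes.c_void_p(address),
                ctypes.byref(mbi), ctypes.sizeof(mbi)):
            return False
        if (mbi.State != MEM_COMMIT or mbi.Type != MEM_MAPPED
                or mbi.Protect != PAGE_READWRITE):
            return False
        buf = (ctypes.c_char * 2)()
        read = ctypes.c_size_t(0)
        ctypes.windll.kernel32.ReadProcessMemory(
            handle, ctypes.c_void_p(address), buf, 2, ctypes.byref(read))
        return buf.raw == b"MZ"  # a PE header starts with "MZ"

    handle = ctypes.windll.kernel32.OpenProcess(
        PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, 1234)  # placeholder PID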

    The tool starts by scanning the running processes and analyzing the characteristics of their allocated memory regions to detect reflective DLL loading symptoms. Suspicious memory regions which are identified as DLL modules are dumped for further analysis and investigation.
    Furthermore, the tool features the following options:

    • Dump the compromised process.
    • Export a JSON file that provides information about the compromised process, such as the process name, ID, path, size, and base address.
    • Search for specific loaded module by name.

    Example

    python.exe memScanner.py [-h] [-r] [-m MODULE]
    -h, --help show this help message and exit
    -r, --reflectiveScan Looking for reflective DLL loading
    -m MODULE, --module MODULE Looking for specific loaded DLL

    The script needs administrator privileges in order to inspect all processes.



    Seekr - A Multi-Purpose OSINT Toolkit With A Neat Web-Interface


    A multi-purpose toolkit for gathering and managing OSINT-Data with a neat web-interface.


    Introduction

    Seekr is a multi-purpose toolkit for gathering and managing OSINT-data with a sleek web interface. The backend is written in Go and offers a wide range of features for data collection, organization, and analysis. Whether you're a researcher, investigator, or just someone looking to gather information, seekr makes it easy to find and manage the data you need. Give it a try and see how it can streamline your OSINT workflow!

    Check the wiki for setup guide, etc.

    Why use seekr over my current tool?

    Seekr combines note-taking and OSINT in one application. Seekr can be used alongside your current tools. Seekr is designed with OSINT in mind and optimized for real-world use cases.

    Key features

    • Database for OSINT targets
    • GitHub to email
    • Account cards for each person in the database
    • Account discovery integrating with the account cards
    • Predefined commonly used fields in the database
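
    The "GitHub to email" feature refers to recovering email addresses a user exposes in public commit metadata. A minimal sketch of that technique (seekr itself is written in Go; this Python version using the public GitHub events API is an illustration, not seekr's code):

    import requests

    def emails_from_events(username):
        # Public push events expose commit author emails, when not hidden.
        resp = requests.get(
            f"https://api.github.com/users/{username}/events/public", timeout=30)
        resp.raise_for_status()
        emails = set()
        for event in resp.json():
            for commit in event.get("payload", {}).get("commits", []):
                emails.add(commit["author"]["email"])
        return emails

    print(emails_from_events("octocat"))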

    Getting Started - Installation

    Windows

    Download the latest exe here

    Linux (stable)

    Download the latest stable binary here

    Linux (unstable)

    To install seekr on Linux, simply run:

    git clone https://github.com/seekr-osint/seekr
    cd seekr
    go run main.go

    Now open the web interface in your browser of choice.

    Run on NixOS

    Seekr is built with NixOS in mind and therefore supports Nix flakes. To run seekr on NixOS, run the following commands.

    nix shell github:seekr-osint/seekr
    seekr

    Integrating seekr into your current workflow

    journey
        title How to integrate seekr into your current workflow.
        section Initial Research
            Create a person in seekr: 100: seekr
            Simple web research: 100: Known tools
            Account scan: 100: seekr
        section Deeper account investigation
            Investigate the accounts: 100: seekr, Known tools
            Keep notes: 100: seekr
        section Deeper Web research
            Deep web research: 100: Known tools
            Keep notes: 100: seekr
        section Finishing the report
            Export the person with seekr: 100: seekr
            Done.: 100

    Feedback

    We would love to hear from you. Tell us your opinion of seekr: where do we need to improve? You can do this by opening an issue, or by telling others about your experience in a blog post or elsewhere.

    Legal Disclaimer

    This tool is intended for legitimate and lawful use only. It is provided for educational and research purposes, and should not be used for any illegal or malicious activities, including doxxing. Doxxing is the practice of researching and broadcasting private or identifying information about an individual without their consent, and can be illegal. The creators and contributors of this tool will not be held responsible for any misuse or damage caused by this tool. By using this tool, you agree to use it only for lawful purposes and to comply with all applicable laws and regulations. It is the responsibility of the user to ensure compliance with all relevant laws and regulations in the jurisdiction in which they operate. Misuse of this tool may result in criminal and/or civil prosecution.



    Leaktopus - Keep Your Source Code Under Control

    Keep your source code under control.

    Key Features

    • Plug&Play - one line installation with Docker.

    • Scan various sources containing a set of keywords, e.g. ORGANIZATION-NAME.com.

      Currently supports:

      • GitHub
        • Repositories
        • Gists (coming soon)
      • Paste sites (e.g., PasteBin) (coming soon)
    • Filter results with a built-in heuristic engine.

    • Enhance results with IOLs (Indicators Of Leak):

      • Secrets in the found sources (including Git repos' commit history)
      • URIs (Including indication of your organization's domains)
      • Emails (Including indication of your organization's email addresses)
      • Contributors
      • Sensitive keywords (e.g., canary token, internal domains)
    • Allows ignoring public sources (e.g., "junk" repositories by web crawlers).

    • OOTB ignore list of common "junk" sources.

    • Acknowledge a leak, and only get notified if the source has been modified since the previous scan.

    • Built-in ELK to search for data in leaks (including full index of Git repositories with IOLs).

    • Notify on new leaks

      • MS Teams Webhook.
      • Slack Bot.
      • Cortex XSOAR® (by Palo Alto Networks) Integration (WIP).

    Technology Stack

    • Fully Dockerized.
    • API-first Python Flask backend.
    • Decoupled Vue.js (3.x) frontend.
    • SQLite DB.
    • Async tasks with Celery + Redis queues.

    Prerequisites

    • Docker-Compose

    Installation

    • Clone the repository
    • Create a local .env file
      cd Leaktopus
      cp .env.example .env
    • Edit .env according to your local setup (see the internal comments).
    • Run Leaktopus
      docker-compose up -d
    • Initiate the installation sequence by accessing the installation API. Just open http://{LEAKTOPUS_HOST}:8000/api/install in your browser.
    • Check that the API is up and running at http://{LEAKTOPUS_HOST}:8000/up
    • The UI should be available at http://{LEAKTOPUS_HOST}:8080

    Using a GitHub App

    In addition to the basic personal access token option, Leaktopus supports GitHub App authentication. Using a GitHub App is recommended due to the increased rate limits.

    1. To use GitHub App authentication, you need to create a GitHub App and install it on your organization/account. See GitHub's documentation for more details.

    2. After creating the app, you need to set the following environment variables:

      • GITHUB_USE_APP=True
      • GITHUB_APP_ID
      • GITHUB_INSTALLATION_ID - The installation id can be found in your app installation.
      • GITHUB_APP_PRIVATE_KEY_PATH (defaults to /app/private-key.pem)
    3. Mount the private key file to the container (see docker-compose.yml for an example). ./leaktopus_backend/private-key.pem:/app/private-key.pem

    * Note that GITHUB_ACCESS_TOKEN will be ignored if GITHUB_USE_APP is set to True.
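
    Under the hood, GitHub App authentication exchanges an app JWT for a short-lived installation token. A minimal sketch of that exchange (not Leaktopus's actual code; it assumes the PyJWT and cryptography packages, and the IDs shown are placeholders):

    import time
    import jwt        # pip install pyjwt cryptography
    import requests

    APP_ID = "12345"             # placeholder
    INSTALLATION_ID = "678910"   # placeholder

    with open("private-key.pem", "rb") as fh:
        private_key = fh.read()

    now = int(time.time())
    app_jwt = jwt.encode(
        {"iat": now - 60, "exp": now + 540, "iss": APP_ID},
        private_key, algorithm="RS256")

    resp = requests.post(
        f"https://api.github.com/app/installations/{INSTALLATION_ID}/access_tokens",
        headers={"Authorization": f"Bearer {app_jwt}",
                 "Accept": "application/vnd.github+json"},
        timeout=30)
    resp.raise_for_status()
    print(resp.json()["token"])  # short-lived installation access token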

    Updating Leaktopus

    If you wish to update your Leaktopus version (pulling a newer version), just follow the steps below.

    • Pull the latest version.
      git pull
    • Rebuild Docker images (data won't be deleted).
      # Force image recreation
      docker-compose up --force-recreate --build
    • Run the DB update by calling its API (this may be required after some updates). http://{LEAKTOPUS_HOST}/api/updatedb

    Results Filtering Heuristic Engine

    The built-in heuristic engine filters the search results to reduce false positives based on:

    • Content:
      • More than X emails containing non-organizational domains.
      • More than X URIs containing non-organizational domains.
    • Metadata:
      • More than X stars.
      • More than X forks.
    • Sources ignore list.
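
    A toy illustration of that filtering logic (the thresholds and field names here are invented for the example; Leaktopus's engine is more involved):

    # Toy version of the filtering idea; thresholds are illustrative only.
    MAX_FOREIGN_EMAILS = 10
    MAX_FOREIGN_URIS = 20
    MAX_STARS = 100
    MAX_FORKS = 50

    def is_likely_false_positive(result, org_domain):
        # Count emails/URIs that do not belong to the organization's domain.
        def foreign(values):
            return sum(1 for v in values if org_domain not in v)
        return (foreign(result["emails"]) > MAX_FOREIGN_EMAILS
                or foreign(result["uris"]) > MAX_FOREIGN_URIS
                or result["stars"] > MAX_STARS
                or result["forks"] > MAX_FORKS)

    result = {"emails": ["dev@gmail.com"], "uris": ["https://acme.com/x"],
              "stars": 3, "forks": 0}
    print(is_likely_false_positive(result, "acme.com"))  # False -> keep for review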

    API Documentation

    OpenAPI documentation is available at http://{LEAKTOPUS_HOST}:8000/apidocs.

    Leaktopus Services

    Service            Port   Mandatory/Optional
    Backend (API)      8000   Mandatory
    Backend (Worker)   N/A    Mandatory
    Redis              6379   Mandatory
    Frontend           8080   Optional
    Elasticsearch      9200   Optional
    Logstash           5000   Optional
    Kibana             5601   Optional

    The above can be customized by using a custom docker-compose.yml file.

    Security Notes

    As of now, Leaktopus does not provide any authentication mechanism. Make sure that you are not exposing it to the world, and do your best to restrict access to your Leaktopus instance(s).

    Contributing

    Contributions are very welcome.

    Please follow our contribution guidelines and documentation.



    HTTPLoot - An Automated Tool Which Can Simultaneously Crawl, Fill Forms, Trigger Error/Debug Pages And "Loot" Secrets Out Of The Client-Facing Code Of Sites


    An automated tool which can simultaneously crawl, fill forms, trigger error/debug pages and "loot" secrets out of the client-facing code of sites.


    Usage

    To use the tool, you can grab any one of the pre-built binaries from the Releases section of the repository. If you want to build the source code yourself, you will need Go > 1.16 to build it. Simply running go build will output a usable binary for you.

    Additionally, you will need two JSON files (lootdb.json and regexes.json) along with the binary, which you can get from the repo itself. Once you have all 3 files in the same folder, you can go ahead and fire up the tool.
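
    The "looting" itself boils down to running secret-detection regexes over fetched page source. A rough Python sketch of the idea (HTTPLoot is written in Go, and the single AWS-key pattern below merely stands in for the regexes.json set):

    import re
    import requests

    # One well-known pattern standing in for the full regexes.json set.
    PATTERNS = {"aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}")}

    def loot(url):
        # Fetch the client-facing source and scan it for secret-shaped strings.
        body = requests.get(url, timeout=10).text
        for name, pattern in PATTERNS.items():
            for match in pattern.findall(body):
                print(f"{url}: {name}: {match}")

    loot("https://example.com/")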

    Video demo:


    Here is the help usage of the tool:

    $ ./httploot --help
    _____
    )=(
    / \ H T T P L O O T
    ( $ ) v0.1
    \___/

    [+] HTTPLoot by RedHunt Labs - A Modern Attack Surface (ASM) Management Company
    [+] Author: Pinaki Mondal (RHL Research Team)
    [+] Continuously Track Your Attack Surface using https://redhuntlabs.com/nvadr.

    Usage of ./httploot:
    -concurrency int
    Maximum number of sites to process concurrently (default 100)
    -depth int
    Maximum depth limit to traverse while crawling (default 3)
    -form-length int
    Length of the string to be randomly generated for filling form fields (default 5)
    -form-string string
    Value with which the tool will auto-fill forms, strings will be randomly generated if no value is supplied
    -input-file string
    Path of the input file containing domains to process
    -output-file string
    CSV output file path to write the results to (default "httploot-results.csv")
    -parallelism int
    Number of URLs per site to crawl parallely (default 15)
    -submit-forms
    Whether to auto-submit forms to trigger debug pages
    -timeout int
    The default timeout for HTTP requests (default 10)
    -user-agent string
    User agent to use during HTTP requests (default "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:98.0) Gecko/20100101 Firefox/98.0")
    -verify-ssl
    Verify SSL certificates while making HTTP requests
    -wildcard-crawl
    Allow crawling of links outside of the domain being scanned

    Concurrent scanning

    There are two flags which help with the concurrent scanning:

    • -concurrency: Specifies the maximum number of sites to process concurrently.
    • -parallelism: Specifies the number of links per site to crawl in parallel.

    Both -concurrency and -parallelism are crucial to the performance of the tool and the reliability of its results.
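
    The model is easy to picture as two nested concurrency limits. A small sketch in Python (HTTPLoot itself is Go; the semaphore pattern below just mirrors what the two flags bound):

    import asyncio

    CONCURRENCY = 100   # mirrors -concurrency: sites processed at once
    PARALLELISM = 15    # mirrors -parallelism: URLs per site at once

    async def crawl_url(url):
        await asyncio.sleep(0.01)  # stand-in for an HTTP request

    async def crawl_site(site, urls, site_slots):
        async with site_slots:
            url_slots = asyncio.Semaphore(PARALLELISM)

            async def bounded(u):
                async with url_slots:
                    await crawl_url(u)

            await asyncio.gather(*(bounded(u) for u in urls))

    async def main(sites):
        site_slots = asyncio.Semaphore(CONCURRENCY)
        await asyncio.gather(*(
            crawl_site(s, [f"{s}/page{i}" for i in range(30)], site_slots)
            for s in sites))

    asyncio.run(main([f"https://site{i}.example" for i in range(5)]))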

    Crawling

    The crawl depth can be specified using the -depth flag. The integer value supplied is the maximum chain depth of links to crawl from each site.

    An important flag -wildcard-crawl can be used to specify whether to crawl URLs outside the domain in scope.

    NOTE: Using this flag might lead to infinite crawling in worst case scenarios if the crawler finds links to other domains continuously.

    Filling forms

    If you want the tool to scan for debug pages, you need to specify the -submit-forms argument. This will direct the tool to autosubmit forms and try to trigger error/debug pages once a tech stack has been identified successfully.

    If the -submit-forms flag is enabled, you can control the string to be submitted in the form fields. The -form-string flag specifies the string to be submitted, while -form-length controls the length of the string to be randomly generated and filled into the forms.

    Network tuning

    Flags like:

    • -timeout - specifies the HTTP timeout of requests.
    • -user-agent - specifies the user-agent to use in HTTP requests.
    • -verify-ssl - specifies whether or not to verify SSL certificates.

    Input/Output

    The input file to read can be specified using the -input-file argument. You can specify a file path containing a list of URLs to scan with the tool. The -output-file flag can be used to specify the result output file path, which by default is a file called httploot-results.csv.

    Further Details

    Further details about the research which led to the development of the tool can be found on our RedHunt Labs Blog.

    License & Version

    The tool is licensed under the MIT license. See LICENSE.

    Currently the tool is at v0.1.

    Credits

    The RedHunt Labs Research Team would like to extend credits to the creators & maintainers of shhgit for the regular expressions provided by them in their repository.

    To know more about our Attack Surface Management platform, check out NVADR.



    Octosuite - Advanced Github OSINT Framework


    A framework for gathering OSINT on GitHub users, repositories and organizations


    Wiki

    Refer to the Wiki for installation instructions, in addition to all other documentation.

    Features

    • Fetches an organization's profile information
    • Fetches an organization's events
    • Returns an organization's repositories
    • Returns an organization's public members
    • Fetches a repository's information
    • Returns a repository's contributors
    • Returns a repository's languages
    • Fetches a repository's stargazers
    • Fetches a repository's forks
    • Fetches a repository's releases
    • Returns a list of files in a specified path of a repository
    • Fetches a user's profile information
    • Returns a user's gists
    • Returns organizations that a user owns/belongs to
    • Fetches a user's events
    • Fetches a list of users followed by the target
    • Fetches a user's followers
    • Checks if user A follows user B
    • Checks if a user is a public member of an organization
    • Returns a user's subscriptions
    • Searches users
    • Searches repositories
    • Searches topics
    • Searches issues
    • Searches commits
    • Automatically logs network activity (.logs folder)
    • User can view, read and delete logs
    • ...And more
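
    All of these lookups map onto GitHub's public REST API. For instance, fetching an organization's profile comes down to a single call (a sketch of the underlying request, not Octosuite's own code):

    import requests

    def org_profile(org):
        # GET /orgs/{org} returns the organization's public profile.
        resp = requests.get(f"https://api.github.com/orgs/{org}",
                            headers={"Accept": "application/vnd.github+json"},
                            timeout=30)
        resp.raise_for_status()
        return resp.json()

    profile = org_profile("github")
    print(profile["name"], profile["public_repos"])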

    Note

    Octosuite automatically logs network and user activity for each session; the logs are saved by date and time in the .logs folder.



    Legitify - Detect And Remediate Misconfigurations And Security Risks Across All Your GitHub Assets


    Strengthen the security posture of your GitHub organization!
    Detect and remediate misconfigurations, security and compliance issues across all your GitHub assets with ease


    Installation

    1. You can download the latest legitify release from https://github.com/Legit-Labs/legitify/releases; each archive contains:
    • The legitify binary for the desired platform
    • Built-in policies provided by Legit Security
    2. Alternatively, build and run from source with the following steps:
    git clone git@github.com:Legit-Labs/legitify.git
    go run main.go analyze ...

    Provenance

    To enhance the software supply chain security of legitify's users, as of v0.1.6, every legitify release contains a SLSA Level 3 Provenance document.
    The provenance document refers to all artifacts in the release, as well as the generated docker image.
    You can use SLSA framework's official verifier to verify the provenance.
    Example of usage for the darwin_arm64 architecture for the v0.1.6 release:

    VERSION=0.1.6
    ARCH=darwin_arm64
    ./slsa-verifier verify-artifact --source-branch main --builder-id 'https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@refs/tags/v1.2.2' --source-uri "git+https://github.com/Legit-Labs/legitify" --provenance-path multiple.intoto.jsonl ./legitify_${VERSION}_${ARCH}.tar.gz

    Requirements

    1. To get the most out of legitify, you need to be an owner of at least one GitHub organization. Otherwise, you can still use the tool if you're an admin of at least one repository inside an organization, in which case you'll be able to see only repository-related policies results.
    2. legitify requires a GitHub personal access token (PAT) to analyze your resources successfully, which can be either provided as an argument (-t) or as an environment variable ($GITHUB_ENV). The PAT needs the following scopes for full analysis:
    admin:org, read:enterprise, admin:org_hook, read:org, repo, read:repo_hook

    See Creating a Personal Access Token for more information.
    Fine-grained personal access tokens are currently not supported because they do not support GitHub's GraphQL API (https://github.blog/2022-10-18-introducing-fine-grained-personal-access-tokens-for-github/).

    Usage

    LEGITIFY_TOKEN=<your_token> legitify analyze

    By default, legitify will check the policies against all your resources (organizations, repositories, members, actions).

    You can control which resources will be analyzed with command-line flags namespace and org:

    • --namespace (-n): will analyze policies that relate to the specified resources
    • --org: will limit the analysis to the specified organizations
    LEGITIFY_TOKEN=<your_token> legitify analyze --org org1,org2 --namespace organization,member

    The above command will test organization and member policies against org1 and org2.

    GitHub Enterprise Support

    You can run legitify against a GitHub Enterprise instance if you set the endpoint URL in the environment variable SERVER_URL:

    export SERVER_URL="https://github.example.com/"
    LEGITIFY_TOKEN=<your_token> legitify analyze --org org1,org2 --namespace organization,member

    GitLab Cloud/Server Support

    To run legitify against GitLab Cloud, set the scm flag to gitlab (--scm gitlab); to run against GitLab Server, you also need to provide SERVER_URL:

    export SERVER_URL="https://gitlab.example.com/"
    LEGITIFY_TOKEN=<your_token> legitify analyze --namespace organization --scm gitlab

    Namespaces

    Namespaces in legitify are resources that are collected and run against the policies. Currently, the following namespaces are supported:

    1. organization - organization level policies (e.g., "Two-Factor Authentication Is Not Enforced for the Organization")
    2. actions - organization GitHub Actions policies (e.g., "GitHub Actions Runs Are Not Limited To Verified Actions")
    3. member - organization members policies (e.g., "Stale Admin Found")
    4. repository - repository level policies (e.g., "Code Review By At Least Two Reviewers Is Not Enforced")
    5. runner_group - runner group policies (e.g., "runner can be used by public repositories")

    By default, legitify will analyze all namespaces. You can limit the analysis to selected ones with the --namespace flag, followed by a comma-separated list of namespaces.

    Output Options

    By default, legitify will output the results in a human-readable format. This includes the list of policy violations listed by severity, as well as a summary table that is sorted by namespace.

    Output Formats

    Using the --output-format (-f) flag, legitify supports outputting the results in the following formats:

    1. human-readable - Human-readable text (default).
    2. json - Standard JSON.

    Output Schemes

    Using the --output-scheme flag, legitify supports outputting the results in different grouping schemes. Note: --output-format=json must be specified to output non-default schemes.

    1. flattened - No grouping; A flat listing of the policies, each with its violations (default).
    2. group-by-namespace - Group the policies by their namespace.
    3. group-by-resource - Group the policies by their resource e.g. specific organization/repository.
    4. group-by-severity - Group the policies by their severity.

    Output Destinations

    • --output-file - full path of the output file (default: no output file, prints to stdout).
    • --error-file - full path of the error logs (default: ./error.log).

    Coloring

    When outputting in a human-readable format, legitify supports the conventional --color[=when] flag, which has the following options:

    • auto - colored output if stdout is a terminal, uncolored otherwise (default).
    • always - colored output regardless of the output destination.
    • none - uncolored output regardless of the output destination.

    Misc

    • Use the --failed-only flag to filter out passed/skipped checks from the result.

    Scorecard Support

    scorecard is an open-source project by the OSSF:

    Scorecards is an automated tool that assesses a number of important heuristics ("checks") associated with software security and assigns each check a score of 0-10. You can use these scores to understand specific areas to improve in order to strengthen the security posture of your project. You can also assess the risks that dependencies introduce, and make informed decisions about accepting these risks, evaluating alternative solutions, or working with the maintainers to make improvements.

    legitify supports running scorecard for all of the organization's repositories, enforcing score policies and showing the results using the --scorecard flag:

    • no - do not run scorecard (default).
    • yes - run scorecard and employ a policy that alerts on each repo score below 7.0.
    • verbose - run scorecard, employ a policy that alerts on each repo score below 7.0, and embed its output to legitify's output.

    legitify runs the following scorecard checks:

    Check                    Public Repository   Private Repository
    Security-Policy          V
    CII-Best-Practices       V
    Fuzzing                  V
    License                  V
    Signed-Releases          V
    Branch-Protection        V                   V
    Code-Review              V                   V
    Contributors             V                   V
    Dangerous-Workflow       V                   V
    Dependency-Update-Tool   V                   V
    Maintained               V                   V
    Pinned-Dependencies      V                   V
    SAST                     V                   V
    Token-Permissions        V                   V
    Vulnerabilities          V                   V
    Webhooks                 V                   V

    Policies

    legitify comes with a set of policies in the policies/github directory. These policies are documented here.

    In addition, you can use the --policies-path (-p) flag to specify a custom directory for OPA policies.

    Contribution

    Thank you for considering contributing to Legitify! We encourage and appreciate any kind of contribution. Here are some resources to help you get started:



    R4Ven - Track IP And GPS Location

    Track a user's smartphone/PC IP and GPS location.

    The tool hosts a fake website which uses an iframe to display a legit website and, if the target allows it, fetches the GPS location (latitude and longitude) of the target along with the IP address and device information.

    This tool is a Proof of Concept and is for Educational Purposes Only.

    Using this tool, you can find out what information a malicious website can gather about you and your devices and why you shouldn't click on random links or grant permissions like Location to them.


    On link click

    + It will automatically fetch the IP address and device information.
    ! If location permission is allowed, it will fetch the exact location of the target.

    Limitation

    - It will not work on laptops or phones that have broken GPS,
    # browsers that block JavaScript,
    # or if the user is mocking the GPS location.

    IP location vs GPS location

    - Geographic location based on IP address is NOT accurate;
    # it does not provide the location of the target.
    # Instead, it provides the approximate location of the ISP (Internet Service Provider).

    + GPS fetches an almost exact location because it uses longitude and latitude coordinates.
    @@ Once location permission is granted @@
    # accurate location information is received, to within 20 to 30 meters of the user's location
    # (it's almost the exact location).

    Installation

    git clone https://github.com/spyboy-productions/r4ven.git
    cd r4ven
    pip3 install -r requirements.txt
    python3 r4ven.py

    Enter your Discord webhook URL (set up a channel in your Discord server with webhook integration).

    https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks

    If you don't have a Discord account and server, make one; it's free.

    https://discord.com/

    Track info data will be sent to your Discord webhook channel.
    • Why a Discord webhook? Conveniently, you will receive a notification when someone clicks on the link.
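
    Posting to a Discord webhook is a single HTTP request. A minimal sketch of the kind of message r4ven sends (the payload fields and values here are illustrative, not r4ven's exact format):

    import requests

    WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"  # your webhook

    def report(ip, lat=None, lon=None):
        # Discord webhooks accept a simple JSON body with a "content" field.
        content = f"New visitor: ip={ip}"
        if lat is not None and lon is not None:
            content += f" gps=({lat}, {lon})"
        requests.post(WEBHOOK_URL, json={"content": content},
                      timeout=10).raise_for_status()

    report("203.0.113.7", lat=51.5074, lon=-0.1278)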

    To change the website template

    • Open the file index.html and, on line 12, replace the src in the iframe. (Note: not every website supports iframes.)


    To port forward, install ngrok or use SSH

    • For ngrok port forwarding, type: ngrok http 8000
    • For SSH port forwarding, type: ssh -R 80:localhost:8000 ssh.localhost.run

    Snapshots



    โŒ