DockerSpy searches for images on Docker Hub and extracts sensitive information such as authentication secrets, private keys, and more.
Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization technology. Containers allow developers to package an application and its dependencies into a single, portable unit that can run consistently across various computing environments. Docker simplifies the development and deployment process by ensuring that applications run the same way regardless of where they are deployed.
Docker Hub is a cloud-based repository where developers can store, share, and distribute container images. It serves as the largest library of container images, providing access to both official images created by Docker and community-contributed images. Docker Hub enables developers to easily find, download, and deploy pre-built images, facilitating rapid application development and deployment.
Open Source Intelligence (OSINT) on Docker Hub involves using publicly available information to gather insights and data from container images and repositories hosted on Docker Hub. This is particularly important for identifying exposed secrets for several reasons:
Security Audits: By analyzing Docker images, organizations can uncover exposed secrets such as API keys, authentication tokens, and private keys that might have been inadvertently included. This helps in mitigating potential security risks.
Incident Prevention: Proactively searching for exposed secrets in Docker images can prevent security breaches before they happen, protecting sensitive information and maintaining the integrity of applications.
Compliance: Ensuring that container images do not expose secrets is crucial for meeting regulatory and organizational security standards. OSINT helps verify that no sensitive information is unintentionally disclosed.
Vulnerability Assessment: Identifying exposed secrets as part of regular security assessments allows organizations to address these vulnerabilities promptly, reducing the risk of exploitation by malicious actors.
Enhanced Security Posture: Continuously monitoring Docker Hub for exposed secrets strengthens an organization's overall security posture, making it more resilient against potential threats.
Utilizing OSINT on Docker Hub to find exposed secrets enables organizations to enhance their security measures, prevent data breaches, and ensure the confidentiality of sensitive information within their containerized applications.
DockerSpy obtains information from Docker Hub and uses regular expressions to inspect the content for sensitive information, such as secrets.
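The detection idea is easy to sketch in Python. A minimal example follows; the patterns shown are illustrative stand-ins, not DockerSpy's actual rule set (which lives in its configurable regular-expressions file):

import re

# Illustrative secret patterns -- DockerSpy ships its own configurable rules.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
    "RSA private key header": re.compile(r"-----BEGIN RSA PRIVATE KEY-----"),
}

def scan(text):
    # Return (label, match) pairs for every pattern hit in the given text.
    return [(label, m.group(0)) for label, rx in PATTERNS.items() for m in rx.finditer(text)]

print(scan('config = {"api_key": "abcd1234efgh5678ijkl"}'))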
To use DockerSpy, follow these steps:
git clone https://github.com/UndeadSec/DockerSpy.git && cd DockerSpy && make
dockerspy
To customize DockerSpy configurations, edit the following files:
- Regular Expressions
- Ignored File Extensions
DockerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.
Contributions to DockerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.
DockerSpy is developed and maintained by Alisson Moretto (UndeadSec)
I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.
Consider following me:
Special thanks to @akaclandestine
Most accomplished cybercriminals go out of their way to separate their real names from their hacker handles. But among certain old-school Russian hackers it is not uncommon to find major players who have done little to prevent people from figuring out who they are in real life. A case study in this phenomenon is "x999xx," the nickname chosen by a venerated Russian hacker who specializes in providing the initial network access to various ransomware groups.
x999xx is a well-known "access broker" who frequently sells access to hacked corporate networks (usually in the form of remote access credentials) as well as compromised databases containing large amounts of personal and financial data.
In an analysis published in February 2019, cyber intelligence firm Flashpoint called x999xx one of the most senior and prolific members of the top-tier Russian-language cybercrime forum Exploit, where x999xx could be seen frequently advertising the sale of stolen databases and network credentials.
In August 2023, x999xx sold access to a company that develops software for the real estate industry. In July 2023, x999xx advertised the sale of Social Security numbers, names, and birthdays for the citizenry of an entire U.S. state (unnamed in the auction).
A month earlier, x999xx posted a sales thread for 80 databases taken from Australia's largest retail company. "You may use this data to demand a ransom or do something different with it," x999xx wrote on Exploit. "Unfortunately, the flaw was patched fast. [+] no one has used the data yet [+] the data hasn't been used to send spam [+] the data is waiting for its time."
In October 2022, x999xx sold administrative access to a U.S. healthcare provider.
The oldest account by the name x999xx appeared in 2009 on the Russian language cybercrime forum Verified, under the email address maxnm@ozersk.com. Ozersk is a city in the Chelyabinsk region of west-central Russia.
According to the breach tracking service Constella Intelligence, the address maxnm@ozersk.com was used more than a decade ago to create an account at Vkontakte (the Russian answer to Facebook) under the name Maxim Kirtsov from Ozersk. Mr. Kirtsov's profile ("maxnm") says his birthday is September 5, 1991.
Personal photos Maxnm shared on Vkontakte in 2016. The caption has been machine translated from Russian.
The user x999xx registered on the Russian language cybercrime community Zloy in 2014 using the email address maxnmalias-1@yahoo.com. Constella says this email address was used in 2022 at the Russian shipping service cdek.ru by a Maksim Georgievich Kirtsov from Ozersk.
Additional searches on these contact details reveal that prior to 2009, x999xx favored the handle Maxnm on Russian cybercrime forums. Cyber intelligence company Intel 471 finds the user Maxnm registered on Zloy in 2006 from an Internet address in Chelyabinsk, using the email address kirtsov@telecom.ozersk.ru.
That same email address was used to create Maxnm accounts on several other crime forums, including Spamdot and Exploit in 2005 (also from Chelyabinsk), and Damagelab in 2006.
A search in Constella for the Russian version of Kirtsov's full name — Кирцов Максим Георгиевич — brings up multiple accounts registered to maksya@icloud.com.
A review of the digital footprint for maksya@icloud.com at osint.industries reveals this address was used a decade ago to register a still-active account at imageshack.com under the name x999xx. That account features numerous screenshots of financial statements from various banks, chat logs with other hackers, and even hacked websites.
x999xx's Imageshack account includes screenshots of bank account balances from dozens of financial institutions, as well as chat logs with other hackers and pictures of homegrown weed.
Some of the photos in that Imageshack account also appear on Kirtsov's Vkontakte page, including images of vehicles he owns, as well as pictures of potted marijuana plants. Kirtsov's Vkontakte profile says that in 2012 he was a faculty member of the Ozersk Technological Institute National Research Nuclear University.
The Vkontakte page lists Kirtsov's occupation as a website called ozersk[.]today, which on the surface appears to be a blog about life in Ozersk. However, in 2019 the security firm Recorded Future published a blog post which found this domain was being used to host a malicious Cobalt Strike server.
Cobalt Strike is a commercial network penetration testing and reconnaissance tool that is sold only to vetted partners. But stolen or ill-gotten Cobalt Strike licenses are frequently abused by cybercriminal gangs to help lay the groundwork for the installation of ransomware on a victim network.
In August 2023, x999xx posted a message on Exploit saying he was interested in buying a licensed version of Cobalt Strike. A month earlier, x999xx filed a complaint on Exploit against another forum member named Cobaltforce, an apparent onetime partner whose sudden and prolonged disappearance from the community left x999xx and others in the lurch. Cobaltforce recruited people experienced in using Cobalt Strike for ransomware operations, and offered to monetize access to hacked networks for a share of the profits.
DomainTools.com finds ozersk[.]today was registered to the email address dashin2008@yahoo.com, which also was used to register roughly two dozen other domains, including x999xx[.]biz. Virtually all of those domains were registered to Maxim Kirtsov from Ozersk. Below is a mind map used to track the identities mentioned in this story.
x999xx is a prolific member of the Russian webmaster forum "Gofuckbiz," with more than 2,000 posts over nearly a decade, according to Intel 471. In one post from 2016, x999xx asked whether anyone knew where he could buy a heat lamp that simulates sunlight, explaining that one of his pet rabbits had recently perished for lack of adequate light and heat. Mr. Kirtsov's Vkontakte page includes several pictures of caged rabbits from 2015 and earlier.
Reached via email, Mr. Kirtsov acknowledged that he is x999xx. Kirtsov said he and his team are also regular readers of KrebsOnSecurity.
"We're glad to hear and read you," Kirtsov replied.
Asked whether he was concerned about the legal and moral implications of his work, Kirtsov downplayed his role in ransomware intrusions, saying he was more focused on harvesting data.
"I consider myself as committed to ethical practices as you are," Kirtsov wrote. "I have also embarked on research and am currently mentoring students. You may have noticed my activities on a forum, which I assume you know of through information gathered from public sources, possibly using the new tool you reviewed."
"Regarding my posts about selling access, I must honestly admit, upon reviewing my own actions, I recall such mentions but believe they were never actualized," he continued. "Many use the forum for self-serving purposes, which explains why listings of targets for sale have dwindled; they simply ceased being viable."
Kirtsov asserted that he is not interested in harming healthcare institutions, just in stealing their data.
"As for health-related matters, I was once acquainted with affluent webmasters who would pay up to $50 for every 1000 health-themed emails," Kirtsov said. "Therefore, I had no interest in the more sensitive data from medical institutions like X-rays, insurance numbers, or even names; I focused solely on emails. I am proficient in SQL, hence my ease with handling data like IDs and emails. And i never doing spam or something like this."
On the Russian crime forums, x999xx said he never targets anything or anyone in Russia, and that he has little to fear from domestic law enforcement agencies provided he remains focused on foreign adversaries.
x999xx's lackadaisical approach to personal security mirrors that of Wazawaka, another top Russian access broker who sold access to countless organizations and even operated his own ransomware affiliate programs.
"Don't shit where you live, travel local, and don't go abroad," Wazawaka said of his own personal mantra. "Mother Russia will help you. Love your country, and you will always get away with everything."
In January 2022, KrebsOnSecurity followed clues left behind by Wazawaka to identify him as 32-year-old Mikhail Matveev from Khakassia, Russia. In May 2023, the U.S. Department of Justice indicted Matveev as a key figure in several ransomware groups that collectively extorted hundreds of millions of dollars from victim organizations. The U.S. State Department is offering a $10 million reward for information leading to the capture and/or prosecution of Matveev.
Perhaps in recognition that many top ransomware criminals are largely untouchable so long as they remain in Russia, western law enforcement agencies have begun focusing more on getting inside the heads of those individuals. These so-called "psyops" are aimed at infiltrating ransomware-as-a-service operations, disrupting major cybercrime services, and decreasing trust within cybercriminal communities.
When authorities in the U.S. and U.K. announced in February 2024 that they'd infiltrated and seized the infrastructure used by the infamous LockBit ransomware gang, they borrowed the existing design of LockBit's victim shaming website to link instead to press releases about the takedown, and included a countdown timer that was eventually replaced with the personal details of LockBit's alleged leader.
In May 2024, law enforcement agencies in the United States and Europe announced Operation Endgame, a coordinated action against some of the most popular cybercrime platforms for delivering ransomware and data-stealing malware. The Operation Endgame website also included a countdown timer, which served to tease the release of several animated videos that mimic the same sort of flashy, short advertisements that established cybercriminals often produce to promote their services online.
Pip-Intel is a powerful tool designed for OSINT (Open Source Intelligence) and cyber intelligence gathering activities. It consolidates various open-source tools into a single user-friendly interface, simplifying the data collection and analysis processes for researchers and cybersecurity professionals.
Pip-Intel utilizes Python-written pip packages to gather information from various data points. This tool is equipped with the capability to collect detailed information through email addresses, phone numbers, IP addresses, and social media accounts. It offers a wide range of functionalities including email-based OSINT operations, phone number-based inquiries, geolocating IP addresses, social media and user analyses, and even dark web searches.
Free-to-use IOC feed for various tools/malware. It started out for just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool and there is an all.txt.
The feed should update daily. Actively working on making the backend more reliable.
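The collection step can be approximated with the official shodan Python library. The query below is a hypothetical example of the kind of fingerprint such a feed might use, not a query taken from this project:

import os
import shodan

# One illustrative fingerprint; the real feed maintains a curated query list per tool.
QUERY = 'product:"Cobalt Strike Beacon"'

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])
ips = sorted({match["ip_str"] for match in api.search(QUERY)["matches"]})

# Mirror the repository layout: one file per tool, plus an aggregate all.txt.
with open("cobaltstrike.txt", "w") as f:
    f.write("\n".join(ips))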
Many of the Shodan queries have been sourced from other CTI researchers:
Huge shoutout to them!
Thanks to BertJanCyber for creating the KQL query for ingesting this feed
And finally, thanks to Y_nexro for creating C2Live in order to visualize the data
If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY
echo SHODAN_API_KEY=API_KEY >> ~/.bashrc
python3 -m pip install -r requirements.txt
python3 tracker.py
I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set any hard guidelines around what can be submitted; just know that fidelity is paramount (a high true-to-false-positive ratio is the focus).
Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.
Currently enumerates the following:
Amazon Web Services:
- Open / Protected S3 Buckets
- awsapps (WorkMail, WorkDocs, Connect, etc.)
Microsoft Azure:
- Storage Accounts
- Open Blob Storage Containers
- Hosted Databases
- Virtual Machines
- Web Apps
Google Cloud Platform:
- Open / Protected GCP Buckets
- Open / Protected Firebase Realtime Databases
- Google App Engine sites
- Cloud Functions (enumerates project/regions with existing functions, then brute forces actual function names)
- Open Firebase Apps
See it in action in Codingo's video demo here.
Several non-standard libraries are required to support threaded HTTP requests and DNS lookups. You'll need to install the requirements as follows:
pip3 install -r ./requirements.txt
The only required argument is at least one keyword. You can use the built-in fuzzing strings, but you will get better results if you supply your own with -m and/or -b. You can provide multiple keywords by specifying the -k argument multiple times.
Keywords are mutated automatically using strings from enum_tools/fuzz.txt or a file you provide with the -m flag. Services that require a second level of brute-forcing (Azure Containers and GCP Functions) will also use fuzz.txt by default or a file you provide with the -b flag.
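In other words, the mutation step boils down to combining every keyword with every fuzzing string and probing the resulting names. Here is a minimal sketch of that idea for S3; the join separators and the status-code interpretation are assumptions of this sketch, not a description of cloud_enum's internals:

import requests

def candidates(keyword, mutations):
    # Combine a keyword with mutation strings the way storage names are typically fuzzed.
    for m in mutations:
        yield from (f"{keyword}-{m}", f"{m}-{keyword}", f"{keyword}{m}")

for name in candidates("somecompany", ["dev", "backup", "logs"]):
    r = requests.head(f"https://{name}.s3.amazonaws.com", timeout=5)
    # 404 means no such bucket; anything else (200, 403, 301...) suggests it exists.
    if r.status_code != 404:
        print(name, r.status_code)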
Let's say you were researching "somecompany" whose website is "somecompany.io" that makes a product called "blockchaindoohickey". You could run the tool like this:
./cloud_enum.py -k somecompany -k somecompany.io -k blockchaindoohickey
HTTP scraping and DNS lookups use 5 threads each by default. You can try increasing this, but eventually the cloud providers will rate limit you. Here is an example to increase to 10.
./cloud_enum.py -k keyword -t 10
IMPORTANT: Some resources (Azure Containers, GCP Functions) are discovered per-region. To save time scanning, there is a "REGIONS" variable defined in cloudenum/azure_regions.py and cloudenum/gcp_regions.py that is set by default to use only 1 region. You may want to look at these files and edit them to be relevant to your own work.
Complete Usage Details
usage: cloud_enum.py [-h] -k KEYWORD [-m MUTATIONS] [-b BRUTE]
Multi-cloud enumeration utility. All hail OSINT!
optional arguments:
-h, --help show this help message and exit
-k KEYWORD, --keyword KEYWORD
Keyword. Can use argument multiple times.
-kf KEYFILE, --keyfile KEYFILE
Input file with a single keyword per line.
-m MUTATIONS, --mutations MUTATIONS
Mutations. Default: enum_tools/fuzz.txt
-b BRUTE, --brute BRUTE
List to brute-force Azure container names. Default: enum_tools/fuzz.txt
-t THREADS, --threads THREADS
Threads for HTTP brute-force. Default = 5
-ns NAMESERVER, --nameserver NAMESERVER
DNS server to use in brute-force.
-l LOGFILE, --logfile LOGFILE
Will APPEND found items to specified file.
-f FORMAT, --format FORMAT
Format for log file (text,json,csv - defaults to text)
--disable-aws Disable Amazon checks.
--disable-azure Disable Azure checks.
--disable-gcp Disable Google checks.
-qs, --quickscan Disable all mutations and second-level scans
So far, I have borrowed from:
- Some of the permutations from GCPBucketBrute
During the reconnaissance phase or when doing OSINT, we often use Google dorking and Shodan, and thus the idea of Dorkish.
Dorkish is a Chrome extension tool that facilitates custom dork creation for Google and Shodan using the builder and it offers prebuilt dorks for efficient reconnaissance and OSINT engagement.
1- Clone the repository
git clone https://github.com/yousseflahouifi/dorkish.git
2- Go to chrome://extensions/ and enable the Developer mode in the top right corner.
3- Click on the Load unpacked extension button and select the dorkish folder.
Note: For Firefox users, you can find the extension here: https://addons.mozilla.org/en-US/firefox/addon/dorkish/
Once you have found or built the dork you need, simply click it and click search. This will direct you to the desired search engine, Shodan or Google, with the specific dork you've entered. Then, you can explore and enjoy the results that match your query.
I have built some dorks and I have used some public resources to gather the dorks; here are a few:
- https://github.com/lothos612/shodan
- https://github.com/TakSec/google-dorks-bug-bounty
DarkGPT is an artificial intelligence assistant based on GPT-4-200K designed to perform queries on leaked databases. This guide will help you set up and run the project on your local environment.
Before starting, make sure you have Python installed on your system. This project has been tested with Python 3.8 and higher versions.
First, you need to clone the GitHub repository to your local machine. You can do this by executing the following command in your terminal:
git clone https://github.com/luijait/DarkGPT.git
cd DarkGPT
You will need to set up some environment variables for the script to work correctly. Copy the .env.example file to a new file named .env:
DEHASHED_API_KEY="your_dehashed_api_key_here"
This project requires certain Python packages to run. Install them by running the following command:
pip install -r requirements.txt
Then run the project:
python3 main.py
SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.
Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.
SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.
Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:
Developer Oversights: Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.
Security Best Practices: OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.
Preventing Data Leaks: By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.
Risk Mitigation: Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.
Compliance and Privacy: Many industries have stringent compliance requirements regarding the protection of sensitive data. OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.
Educational Opportunities: Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.
By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.
SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.
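The flow is simple enough to sketch against SwaggerHub's public registry search API. Both the endpoint usage and the single pattern below are assumptions for illustration, not SwaggerSpy's exact behavior:

import re
import requests

# Assumed registry search endpoint; one illustrative pattern.
resp = requests.get("https://api.swaggerhub.com/apis", params={"query": "searchterm"}, timeout=10)
aws_key = re.compile(r"AKIA[0-9A-Z]{16}")

for api in resp.json().get("apis", []):
    listing = str(api)  # scan the raw listing; a real tool would fetch each full spec
    for hit in aws_key.findall(listing):
        print(hit, "in", api.get("name"))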
To use SwaggerSpy, follow these steps:
git clone https://github.com/UndeadSec/SwaggerSpy.git
cd SwaggerSpy
pip install -r requirements.txt
python swaggerspy.py searchterm
SwaggerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.
Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.
SwaggerSpy is developed and maintained by Alisson Moretto (UndeadSec)
I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.
SwaggerSpy is licensed under the MIT License. See the LICENSE file for details.
Special thanks to @Liodeus for providing project inspiration through swaggerHole.
Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
Uscrapper extracts the following details from the provided website:
Uscrapper 2.0:
git clone https://github.com/z0m31en7/Uscrapper.git
cd Uscrapper/install/
chmod +x ./install.sh && ./install.sh #For Unix/Linux systems
To run Uscrapper, use the following command-line syntax:
python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]
Arguments:
Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.
The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.
To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.
Pantheon is a GUI application that allows users to display information regarding network cameras in various countries as well as an integrated live-feed for non-protected cameras.
Pantheon allows users to execute an API crawler. There was originally functionality without the use of any APIs (like Insecam), but Google TOS kept getting in the way of the original scraping mechanism.
git clone https://github.com/josh0xA/Pantheon.git
cd Pantheon
pip3 install -r requirements.txt
python3 pantheon.py
chmod +x distros/ubuntu_install.sh
./distros/ubuntu_install.sh
chmod +x distros/debian-kali_install.sh
./distros/debian-kali_install.sh
(Enter) on a selected IP:Port to establish a Pantheon webview of the camera. (Use this at your own risk)
(Left-click) on a selected IP:Port to view the geolocation of the camera.
(Right-click) on a selected IP:Port to view the HTTP data of the camera (Ctrl+Left-click for Mac).
Adjust the map as you please to see the markers.
The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Pantheon simply provides information that can be indexed by any modern search engine. Do not try to establish unauthorized access to live feeds that are password protected - that is illegal. Furthermore, if you do choose to use Pantheon to view a live-feed, do so at your own risk. Pantheon was developed for educational purposes only. For further information, please visit: https://joshschiavone.com/panth_info/panth_ethical_notice.html
MIT License
Copyright (c) Josh Schiavone
CloakQuest3r is a powerful Python tool meticulously crafted to uncover the true IP address of websites safeguarded by Cloudflare, a widely adopted web security and performance enhancement service. Its core mission is to accurately discern the actual IP address of web servers that are concealed behind Cloudflare's protective shield. Subdomain scanning is employed as a key technique in this pursuit. This tool is an invaluable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures.
Key Features:
Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets.
Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains.
Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables scanning of a substantial list of subdomains without significantly extending the execution time.
Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing.
With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers.
- Still in the development phase; sometimes it can't detect the real IP.
- CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement.
1. False Negatives: CloakQuest3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures.
2. Dynamic Environments: Websites' infrastructure and configurations can change over time. The tool may not capture these changes, potentially leading to outdated information.
3. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare.
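The core idea behind the subdomain technique is straightforward to sketch: resolve candidate hostnames and flag any whose A record falls outside Cloudflare's published ranges. The tiny wordlist and the partial range list below are assumptions of this sketch, not CloakQuest3r's own data:

import ipaddress
import socket

# Partial list of Cloudflare's published IPv4 ranges (see cloudflare.com/ips).
CLOUDFLARE = [ipaddress.ip_network(n) for n in (
    "104.16.0.0/13", "172.64.0.0/13", "173.245.48.0/20", "108.162.192.0/18",
    "162.158.0.0/15", "141.101.64.0/18", "188.114.96.0/20", "190.93.240.0/20",
)]

def behind_cloudflare(ip):
    return any(ipaddress.ip_address(ip) in net for net in CLOUDFLARE)

for sub in ("www", "mail", "dev", "staging"):  # tiny illustrative wordlist
    host = f"{sub}.example.com"
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        continue
    if not behind_cloudflare(ip):
        print(f"[+] possible origin: {host} -> {ip}")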
How to Use:
Run CloakQuest3r with a single command-line argument: the target domain you want to analyze.
git clone https://github.com/spyboy-productions/CloakQuest3r.git
cd CloakQuest3r
pip3 install -r requirements.txt
python cloakquest3r.py example.com
The tool will check if the website is using Cloudflare. If not, it will inform you that subdomain scanning is unnecessary.
If Cloudflare is detected, CloakQuest3r will scan for subdomains and identify their real IP addresses.
You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan.
Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing.
CloakQuest3r simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets.
Run it online on replit.com : https://replit.com/@spyb0y/CloakQuest3r
Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic", and had enough flexibility to capture false positives that still provided offensive value.
Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:
python3 -m pip install porch-pirate
The Porch Pirate client can be used to nearly fully conduct reviews on public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords to be used that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.
Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.
--globals
--collections
--requests
--urls
--dump
--raw
--curl
porch-pirate -s "coca-cola.com"
By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8
When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump
Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.
porch-pirate -s "shopify" --globals
Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.
porch-pirate -s "coca-cola.com" --dump
A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls
Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.
porch-pirate -s "coca-cola.com" --urls
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw
porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME
Porch Pirate can build curl requests when provided with a request ID for easier testing.
porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl
porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080
from porchpirate import porchpirate

p = porchpirate()
print(p.search('coca-cola.com'))
p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
import json

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)
p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
Other library usage examples can be located in the examples directory, which contains the following examples:
dump_workspace.py
format_search_results.py
format_workspace_collections.py
format_workspace_globals.py
get_collection.py
get_collections.py
get_profile.py
get_request.py
get_statistics.py
get_team.py
get_user.py
get_workspace.py
recursive_globals_from_search.py
request_to_curl.py
search.py
search_by_page.py
workspace_collections.py
OSINT framework focused on gathering information from free tools or resources. The intention is to help people find free OSINT resources. Some of the sites included might require registration or offer more data for $$$, but you should be able to get at least a portion of the available information for no cost.
I originally created this framework with an information security point of view. Since then, the response from other fields and disciplines has been incredible. I would love to be able to include any other OSINT resources, especially from fields outside of infosec. Please let me know about anything that might be missing!
Please visit the framework at the link below and good hunting!
(T) - Indicates a link to a tool that must be installed and run locally
(D) - Google Dork, for more information: Google Hacking
(R) - Requires registration
(M) - Indicates a URL that contains the search term and the URL itself must be edited manually
Follow me on Twitter: @jnordine - https://twitter.com/jnordine
Watch or star the project on Github: https://github.com/lockfale/osint-framework
Feedback or new tool suggestions are extremely welcome! Please feel free to submit a pull request or open an issue on GitHub, or reach out on Twitter.
For new resources, please ensure that the site is available for public and free use.
Thank you!
Happy Hunting!
Poastal is an email OSINT tool that provides valuable information on any email address. With Poastal, you can easily input an email address and it will quickly answer several questions, providing you with crucial information.
Make sure you have the requirements installed.
pip install -r requirements.txt
Navigate to the backend folder and run poastal.py to start the Flask app, which listens on port 8080.
python poastal.py
Open index.html in the root directory to use the UI.
Enter an email address and see the results.
Test with example@gmail.com.
There's a new GitHub module. If you open up github.py you'll see a section that asks you to replace it with your own API key.
I hope you find Poastal to be a valuable tool for your OSINT investigations. If you have any feedback or suggestions on how we can improve Poastal, please let me know. I'm always looking for ways to improve this tool to better serve the OSINT community.
Efficiently finding registered accounts from emails.
Holehe checks if an email is attached to an account on sites like Twitter, Instagram, Imgur and more than 120 others.
pip3 install holehe
git clone https://github.com/megadose/holehe.git
cd holehe/
python3 setup.py install
Holehe can be run from the CLI and rapidly embedded within existing Python applications.
holehe test@gmail.com
import trio
import httpx

from holehe.modules.social_media.snapchat import snapchat

async def main():
    email = "test@gmail.com"
    out = []
    client = httpx.AsyncClient()
    await snapchat(email, client, out)
    print(out)
    await client.aclose()

trio.run(main)
For each module, data is returned in a standard dictionary with the following JSON-equivalent format:
{
"name": "example",
"rateLimit": false,
"exists": true,
"emailrecovery": "ex****e@gmail.com",
"phoneNumber": "0*******78",
"others": null
}
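Because every module returns that same shape, post-processing is trivial. Continuing the snapchat example above, where out is the list the module populated:

# Keep only the sites where the account exists, plus any recovery-email hints.
found = [r["name"] for r in out if r["exists"]]
hints = {r["name"]: r["emailrecovery"] for r in out if r.get("emailrecovery")}
print(found, hints)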
Rate limit? Change your IP.
For BTC Donations : 1FHDM49QfZX6pJmhjLE5tB2K6CaTLMZpXZ
GNU General Public License v3.0
Built for educational purposes only.
Name | Domain | Method | Frequent Rate Limit |
---|---|---|---|
aboutme | about.me | register | ✘ |
adobe | adobe.com | password recovery | ✘ |
amazon | amazon.com | login | ✘ |
amocrm | amocrm.com | register | ✘ |
anydo | any.do | login | ✘ |
archive | archive.org | register | ✘ |
armurerieauxerre | armurerie-auxerre.com | register | ✘ |
atlassian | atlassian.com | register | ✘ |
axonaut | axonaut.com | register | ✘ |
babeshows | babeshows.co.uk | register | ✘ |
badeggsonline | badeggsonline.com | register | ✘ |
biosmods | bios-mods.com | register | ✘ |
biotechnologyforums | biotechnologyforums.com | register | ✘ |
bitmoji | bitmoji.com | login | ✘ |
blablacar | blablacar.com | register | ✘ |
blackworldforum | blackworldforum.com | register | ✘ |
blip | blip.fm | register | ✘ |
blitzortung | forum.blitzortung.org | register | ✘ |
bluegrassrivals | bluegrassrivals.com | register | ✘ |
bodybuilding | bodybuilding.com | register | ✘ |
buymeacoffee | buymeacoffee.com | register | ✘ |
cambridgemt | discussion.cambridge-mt.com | register | ✘ |
caringbridge | caringbridge.org | register | ✘ |
chinaphonearena | chinaphonearena.com | register | ✘ |
clashfarmer | clashfarmer.com | register | ✘ |
codecademy | codecademy.com | register | ✘ |
codeigniter | forum.codeigniter.com | register | ✘ |
codepen | codepen.io | register | ✘ |
coroflot | coroflot.com | register | ✘ |
cpaelites | cpaelites.com | register | ✘ |
cpahero | cpahero.com | register | ✘ |
cracked_to | cracked.to | register | ✘ |
crevado | crevado.com | register | ✘ |
deliveroo | deliveroo.com | register | ✘ |
demonforums | demonforums.net | register | ✘ |
devrant | devrant.com | register | ✘ |
diigo | diigo.com | register | ✘ |
discord | discord.com | register | ✘ |
docker | docker.com | register | ✘ |
dominosfr | dominos.fr | register | ✘ |
ebay | ebay.com | login | ✘ |
ello | ello.co | register | ✘ |
envato | envato.com | register | ✘ |
eventbrite | eventbrite.com | login | ✘ |
evernote | evernote.com | login | ✘ |
fanpop | fanpop.com | register | ✘ |
firefox | firefox.com | register | ✘ |
flickr | flickr.com | login | ✘ |
freelancer | freelancer.com | register | ✘ |
freiberg | drachenhort.user.stunet.tu-freiberg.de | register | ✘ |
garmin | garmin.com | register | ✘ |
github | github.com | register | ✘ |
google | google.com | register | ✘ |
gravatar | gravatar.com | other | ✘ |
hubspot | hubspot.com | login | ✘ |
imgur | imgur.com | register | ✘ |
insightly | insightly.com | login | ✘ |
instagram | instagram.com | register | ✘ |
issuu | issuu.com | register | ✘ |
koditv | forum.kodi.tv | register | ✘ |
komoot | komoot.com | register | ✘ |
laposte | laposte.fr | register | ✘ |
lastfm | last.fm | register | ✘ |
lastpass | lastpass.com | register | ✘ |
mail_ru | mail.ru | password recovery | ✘ |
mybb | community.mybb.com | register | ✘ |
myspace | myspace.com | register | ✘ |
nattyornot | nattyornotforum.nattyornot.com | register | ✘ |
naturabuy | naturabuy.fr | register | ✘ |
ndemiccreations | forum.ndemiccreations.com | register | ✘ |
nextpvr | forums.nextpvr.com | register | ✘ |
nike | nike.com | register | ✘ |
nimble | nimble.com | register | ✘ |
nocrm | nocrm.io | register | ✘ |
nutshell | nutshell.com | register | ✘ |
odnoklassniki | ok.ru | password recovery | ✘ |
office365 | office365.com | other | ✘ |
onlinesequencer | onlinesequencer.net | register | ✘ |
parler | parler.com | login | ✘ |
patreon | patreon.com | login | ✘ |
pinterest | pinterest.com | register | ✘ |
pipedrive | pipedrive.com | register | ✘ |
plurk | plurk.com | register | ✘ |
pornhub | pornhub.com | register | ✘ |
protonmail | protonmail.ch | other | ✘ |
quora | quora.com | register | ✘ |
rambler | rambler.ru | register | ✘ |
redtube | redtube.com | register | ✘ |
replit | replit.com | register | ✘ |
rocketreach | rocketreach.co | register | ✘ |
samsung | samsung.com | register | ✘ |
seoclerks | seoclerks.com | register | ✘ |
sevencups | 7cups.com | register | ✘ |
smule | smule.com | register | ✘ |
snapchat | snapchat.com | login | ✘ |
soundcloud | soundcloud.com | register | ✘ |
sporcle | sporcle.com | register | ✘ |
spotify | spotify.com | register | ✘ |
strava | strava.com | register | ✘ |
taringa | taringa.net | register | ✘ |
teamleader | teamleader.com | register | ✘ |
teamtreehouse | teamtreehouse.com | register | ✘ |
tellonym | tellonym.me | register | ✘ |
thecardboard | thecardboard.org | register | ✘ |
therianguide | forums.therian-guide.com | register | ✘ |
thevapingforum | thevapingforum.com | register | ✘ |
tumblr | tumblr.com | register | ✘ |
tunefind | tunefind.com | register | ✘ |
twitter | twitter.com | register | ✘ |
venmo | venmo.com | register | ✘ |
vivino | vivino.com | register | ✘ |
voxmedia | voxmedia.com | register | ✘ |
vrbo | vrbo.com | register | ✘ |
vsco | vsco.co | register | ✘ |
wattpad | wattpad.com | register | ✘ |
wordpress | wordpress.com | login | ✘ |
xing | xing.com | register | ✘ |
xnxx | xnxx.com | register | ✘ |
xvideos | xvideos.com | register | ✘ |
yahoo | yahoo.com | login | ✘ |
zoho | zoho.com | login | ✘ |
xsubfind3r is a command-line interface (CLI) utility to find a domain's known subdomains from curated passive online sources.
Fetches domains from curated passive sources to maximize results.
Supports stdin and stdout for easy integration into workflows (see the sketch after this list).
Cross-Platform (Windows, Linux & macOS).
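For instance, stdin/stdout support makes it easy to drive the tool from a script. A minimal sketch, assuming the xsubfind3r binary is installed on your PATH and accepts target domains on stdin:

import subprocess

domains = "example.com\nexample.org\n"
# Feed domains on stdin and collect discovered subdomains from stdout.
proc = subprocess.run(["xsubfind3r"], input=domains, capture_output=True, text=True)
print(proc.stdout)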
Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:
...with wget:
wget https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
...or, with curl:
curl -OL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
...then, extract the binary:
tar xf xsubfind3r-<version>-linux-amd64.tar.gz
TIP: The above steps, download and extract, can be combined into a single step with this one-liner:
curl -sL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz | tar -xzv
NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xsubfind3r executable.
...move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:
sudo mv xsubfind3r /usr/local/bin/
NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.
Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.
go install ...
go install -v github.com/hueristiq/xsubfind3r/cmd/xsubfind3r@latest
go build ... the development version
Clone the repository:
git clone https://github.com/hueristiq/xsubfind3r.git
Build the utility
cd xsubfind3r/cmd/xsubfind3r && \
go build .
Move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:
sudo mv xsubfind3r /usr/local/bin/
NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.
NOTE: While the development version is a good way to take a peek at xsubfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.
xsubfind3r will work right after installation. However, BeVigil, Chaos, Fullhunt, Github, Intelligence X and Shodan require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xsubfind3r/config.yaml file, which is created upon first run and uses the YAML format. Multiple API keys can be specified for each of these sources, from which one will be used.
Example config.yaml:
version: 0.3.0
sources:
- alienvault
- anubis
- bevigil
- chaos
- commoncrawl
- crtsh
- fullhunt
- github
- hackertarget
- intelx
- shodan
- urlscan
- wayback
keys:
bevigil:
- awA5nvpKU3N8ygkZ
chaos:
- d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39asdsd54bbc1aabb208c9acfb
fullhunt:
- 0d9652ce-516c-4315-b589-9b241ee6dc24
github:
- d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
- asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
intelx:
- 2.intelx.io:00000000-0000-0000-0000-000000000000
shodan:
- AAAAClP1bJJSRMEYJazgwhJKrggRwKA
urlscan:
- d4c85d34-e425-446e-d4ab-f5a3412acbe8
To display the help message for xsubfind3r, use the -h flag:
xsubfind3r -h
help message:
_ __ _ _ _____
__ _____ _ _| |__ / _(_)_ __ __| |___ / _ __
\ \/ / __| | | | '_ \| |_| | '_ \ / _` | |_ \| '__|
> <\__ \ |_| | |_) | _| | | | | (_| |___) | |
/_/\_\___/\__,_|_.__/|_| |_|_| |_|\__,_|____/|_| v0.3.0
USAGE:
xsubfind3r [OPTIONS]
INPUT:
-d, --domain string[] target domains
-l, --list string target domains' list file path
SOURCES:
--sources bool list supported sources
-u, --sources-to-use string[] comma(,) separeted sources to use
-e, --sources-to-exclude string[] comma(,) separeted sources to exclude
OPTIMIZATION:
-t, --threads int number of threads (default: 50)
OUTPUT:
--no-color bool disable colored output
-o, --output string output subdomains' file path
-O, --output-directory string output subdomains' directory path
-v, --verbosity string debug, info, warning, error, fatal or silent (default: info)
CONFIGURATION:
-c, --configuration string configuration file path (default: ~/.hueristiq/xsubfind3r/config.yaml)
Issues and Pull Requests are welcome! Check out the contribution guidelines.
This utility is distributed under the MIT license.
xurlfind3r is a command-line interface (CLI) utility to find a domain's known URLs from curated passive online sources.
Supports parsing URLs from wayback robots.txt snapshots.
Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:
...with wget:
wget https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
...or, with curl:
curl -OL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
...then, extract the binary:
tar xf xurlfind3r-<version>-linux-amd64.tar.gz
TIP: The above steps, download and extract, can be combined into a single step with this one-liner:
curl -sL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz | tar -xzv
NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xurlfind3r executable.
...move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:
sudo mv xurlfind3r /usr/local/bin/
NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.
Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.
go install ...
go install -v github.com/hueristiq/xurlfind3r/cmd/xurlfind3r@latest
go build ... the development version
Clone the repository:
git clone https://github.com/hueristiq/xurlfind3r.git
Build the utility
cd xurlfind3r/cmd/xurlfind3r && \
go build .
Move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:
sudo mv xurlfind3r /usr/local/bin/
NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.
NOTE: While the development version is a good way to take a peek at xurlfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.
xurlfind3r will work right after installation. However, BeVigil, Github and Intelligence X require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xurlfind3r/config.yaml file, which is created upon first run and uses the YAML format. Multiple API keys can be specified for each of these sources, from which one will be used.
Example config.yaml:
version: 0.2.0
sources:
- bevigil
- commoncrawl
- github
- intelx
- otx
- urlscan
- wayback
keys:
bevigil:
- awA5nvpKU3N8ygkZ
github:
- d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
- asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
intelx:
- 2.intelx.io:00000000-0000-0000-0000-000000000000
urlscan:
- d4c85d34-e425-446e-d4ab-f5a3412acbe8
To display the help message for xurlfind3r, use the -h flag:
xurlfind3r -h
help message:
_ __ _ _ _____
__ ___ _ _ __| |/ _(_)_ __ __| |___ / _ __
\ \/ / | | | '__| | |_| | '_ \ / _` | |_ \| '__|
> <| |_| | | | | _| | | | | (_| |___) | |
/_/\_\\__,_|_| |_|_| |_|_| |_|\__,_|____/|_| v0.2.0
USAGE:
xurlfind3r [OPTIONS]
TARGET:
-d, --domain string (sub)domain to match URLs
SCOPE:
--include-subdomains bool match subdomain's URLs
SOURCES:
-s, --sources bool list sources
-u, --use-sources string sources to use (default: bevigil,commoncrawl,github,intelx,otx,urlscan,wayback)
--skip-wayback-robots bool with wayback, skip parsing robots.txt snapshots
--skip-wayback-source bool with wayback , skip parsing source code snapshots
FILTER & MATCH:
-f, --filter string regex to filter URLs
-m, --match string regex to match URLs
OUTPUT:
--no-color bool no color mode
-o, --output string output URLs file path
-v, --verbosity string debug, info, warning, error, fatal or silent (default: info)
CONFIGURATION:
-c, --configuration string configuration file path (default: ~/.hueristiq/xurlfind3r/config.yaml)
xurlfind3r -d hackerone.com --include-subdomains
# filter images
xurlfind3r -d hackerone.com --include-subdomains -f '^https?://[^/]*?/.*\.(jpg|jpeg|png|gif|bmp)(\?[^\s]*)?$'
# match js URLs
xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$'
Issues and Pull Requests are welcome! Check out the contribution guidelines.
This utility is distributed under the MIT license.
Python 3 script to dump company employees from LinkedIn API
LinkedInDumper is a Python 3 script that dumps employee data from the LinkedIn social networking platform.
The results contain firstname, lastname, position (title), location and a user's profile link. Only 2 API calls are required to retrieve all employees if the company does not have more than 10 employees. Otherwise, we have to paginate through the API results. With the --email-format CLI flag one can define a Python string format to auto-generate email addresses based on the retrieved first and last name.
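The --email-format value is a plain Python str.format template, with {0} bound to the first name and {1} to the last name, so the patterns from the help text below can be reproduced directly:

first, last = "john", "doe"
for fmt in ("{0}.{1}@example.com", "{0[0]}.{1}@example.com", "{0[0]}{1[0]}@example.com"):
    print(fmt.format(first, last))  # john.doe@..., j.doe@..., jd@...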
LinkedInDumper talks with the unofficial LinkedIn Voyager API, which requires authentication. Therefore, you must have a valid LinkedIn user account. To keep it simple, LinkedInDumper just expects a cookie value provided by you. Doing it this way, even 2FA protected accounts are supported. Furthermore, you are tasked to provide a LinkedIn company URL to dump employees from.
- Retrieve your li_at session cookie value, e.g. via developer tools
- Provide the cookie either persistently in the script's li_at variable or temporarily during runtime via the CLI flag --cookie
usage: linkedindumper.py [-h] --url <linkedin-url> [--cookie <cookie>] [--quiet] [--include-private-profiles] [--email-format EMAIL_FORMAT]
options:
-h, --help show this help message and exit
--url <linkedin-url> A LinkedIn company url - https://www.linkedin.com/company/<company>
--cookie <cookie> LinkedIn 'li_at' session cookie
--quiet Show employee results only
--include-private-profiles
Show private accounts too
--email-format Python string format for emails; for example:
[1] john.doe@example.com > '{0}.{1}@example.com'
[2] j.doe@example.com > '{0[0]}.{1}@example.com'
[3] jdoe@example.com > '{0[0]}{1}@example.com'
[4] doe@example.com > '{1}@example.com'
[5] john@example.com > '{0}@example.com'
[6] jd@example.com > '{0[0]}{1[0]}@example.com'
docker run --rm l4rm4nd/linkedindumper:latest --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'
# install dependencies
pip install -r requirements.txt
python3 linkedindumper.py --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'
The script will return employee data as semi-colon separated values (like CSV):
[ASCII art banner: LinkedInDumper, by LRVT]
[i] Company Name: apple
[i] Company X-ID: 162479
[i] LN Employees: 1000 employees found
[i] Dumping Date: 17/10/2022 13:55:06
[i] Email Format: {0}.{1}@apple.de
Firstname;Lastname;Email;Position;Gender;Location;Profile
Katrin;Honauer;katrin.honauer@apple.com;Software Engineer at Apple;N/A;Heidelberg;https://www.linkedin.com/in/katrin-honauer
Raymond;Chen;raymond.chen@apple.com;Recruiting at Apple;N/A;Austin, Texas Metropolitan Area;https://www.linkedin.com/in/raytherecruiter
[i] Successfully crawled 2 unique apple employee(s). Hurray ^_-
LinkedIn will allow only the first 1,000 search results to be returned when harvesting contact information. You may also need a LinkedIn premium account when you have reached the maximum allowed queries for visiting profiles with your freemium LinkedIn account.
Furthermore, not all employee profiles are public. The results vary depending on your used LinkedIn account and whether you are befriended with some employees of the company to crawl or not. Therefore, it is sometimes not possible to retrieve the firstname, lastname and profile url of some employee accounts. The script will not display such profiles, as they contain default values such as "LinkedIn" as firstname and "Member" in the lastname. If you want to include such private profiles, please use the CLI flag --include-private-profiles. Although some accounts may be private, we can obtain the position (title) as well as the location of such accounts. Only firstname, lastname and profile URL are hidden for private LinkedIn accounts.
Finally, LinkedIn users are free to name their profile. An account name can therefore consist of various things such as salutations, abbreviations, emojis, middle names etc. I tried my best to remove some nonsense. However, this is not a complete solution to the general problem. Note that we are not using the official LinkedIn API. This script gathers information from the "unofficial" Voyager API.
An advanced cross-platform and multi-feature GUI web spider/crawler for cyber security professionals. SpiderSuite can be used for attack surface mapping and analysis. For more information visit SpiderSuite's website.
Spider Suite is designed for easy installation and usage even for first timers.
First, download the package of your choice.
Then install the downloaded SpiderSuite package.
See First time crawling with SpiderSuite article for tutorial on how to get started.
For complete documentation of Spider Suite see wiki.
Can you translate?
Visit SpiderSuite's translation project to make translations to your native language.
Not a developer?
You can help by reporting bugs, requesting new features, improving the documentation, sponsoring the project & writing articles.
For More information see contribution guide.
Contributors
This product includes software developed by the following open source projects:
A multi-purpose toolkit for gathering and managing OSINT-Data with a neat web-interface.
Seekr is a multi-purpose toolkit for gathering and managing OSINT-data with a sleek web interface. The backend is written in Go and offers a wide range of features for data collection, organization, and analysis. Whether you're a researcher, investigator, or just someone looking to gather information, seekr makes it easy to find and manage the data you need. Give it a try and see how it can streamline your OSINT workflow!
Check the wiki for setup guide, etc.
Seekr combines note taking and OSINT in one application. Seekr can be used alongside your current tools. Seekr is designed with OSINT in mind and optimized for real world use cases.
Download the latest exe here
Download the latest stable binary here
To install seekr on linux simply run:
git clone https://github.com/seekr-osint/seekr
cd seekr
go run main.go
Now open the web interface in your browser of choice.
Seekr is built with NixOS in mind and therefore supports nix flakes. To run seekr on NixOS run the following commands:
nix shell github:seekr-osint/seekr
seekr
journey
title How to Integrate seekr into your current workflow.
section Initial Research
Create a person in seekr: 100: seekr
Simple web research: 100: Known tools
Account scan: 100: seekr
section Deeper account investigation
Investigate the accounts: 100: seekr, Known tools
Keep notes: 100: seekr
section Deeper Web research
Deep web research: 100: Known tools
Keep notes: 100: seekr
section Finishing the report
Export the person with seekr: 100: seekr
Done.: 100
We would love to hear from you. Tell us about your opinions on seekr. Where do we need to improve? You can do this by just opening up an issue, or maybe even telling others in your blog or somewhere else about your experience.
This tool is intended for legitimate and lawful use only. It is provided for educational and research purposes, and should not be used for any illegal or malicious activities, including doxxing. Doxxing is the practice of researching and broadcasting private or identifying information about an individual, without their consent and can be illegal. The creators and contributors of this tool will not be held responsible for any misuse or damage caused by this tool. By using this tool, you agree to use it only for lawful purposes and to comply with all applicable laws and regulations. It is the responsibility of the user to ensure compliance with all relevant laws and regulations in the jurisdiction in which they operate. Misuse of this tool may result in criminal and/or civil prosecution.