
PEGASUS-NEO - A Comprehensive Penetration Testing Framework Designed For Security Professionals And Ethical Hackers. It Combines Multiple Security Tools And Custom Modules For Reconnaissance, Exploitation, Wireless Attacks, Web Hacking, And More

By: Unknown — April 24th 2025 at 12:30



PEGASUS-NEO Penetration Testing Framework


🛡️ Description

PEGASUS-NEO is a comprehensive penetration testing framework designed for security professionals and ethical hackers. It combines multiple security tools and custom modules for reconnaissance, exploitation, wireless attacks, web hacking, and more.

⚠️ Legal Disclaimer

This tool is provided for educational and ethical testing purposes only. Usage of PEGASUS-NEO for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state, and federal laws.

Developers assume no liability and are not responsible for any misuse or damage caused by this program.

🔒 Copyright Notice

PEGASUS-NEO - Advanced Penetration Testing Framework
Copyright (C) 2024 Letda Kes dr. Sobri. All rights reserved.

This software is proprietary and confidential. Unauthorized copying, transfer, or
reproduction of this software via any medium is strictly prohibited.

Written by Letda Kes dr. Sobri <muhammadsobrimaulana31@gmail.com>, January 2024

🌟 Features

Password: Sobri

  • Reconnaissance & OSINT
    • Network scanning
    • Email harvesting
    • Domain enumeration
    • Social media tracking
  • Exploitation & Pentesting
    • Automated exploitation
    • Password attacks
    • SQL injection
    • Custom payload generation
  • Wireless Attacks
    • WiFi cracking
    • Evil twin attacks
    • WPS exploitation
  • Web Attacks
    • Directory scanning
    • XSS detection
    • SQL injection
    • CMS scanning
  • Social Engineering
    • Phishing templates
    • Email spoofing
    • Credential harvesting
  • Tracking & Analysis
    • IP geolocation
    • Phone number tracking
    • Email analysis
    • Social media hunting

🔧 Installation

# Clone the repository
git clone https://github.com/sobri3195/pegasus-neo.git

# Change directory
cd pegasus-neo

# Install dependencies
sudo python3 -m pip install -r requirements.txt

# Run the tool
sudo python3 pegasus_neo.py

📋 Requirements

  • Python 3.8+
  • Linux Operating System (Kali/Ubuntu recommended)
  • Root privileges
  • Internet connection

🚀 Usage

  1. Start the tool:
sudo python3 pegasus_neo.py
  2. Enter the authentication password
  3. Select a category from the main menu
  4. Choose a specific tool or module
  5. Follow the on-screen instructions

🔐 Security Features

  • Source code protection
  • Integrity checking
  • Anti-tampering mechanisms
  • Encrypted storage
  • Authentication system

🛠️ Supported Tools

Reconnaissance & OSINT

  • Nmap
  • Wireshark
  • Maltego
  • Shodan
  • theHarvester
  • Recon-ng
  • SpiderFoot
  • FOCA
  • Metagoofil

Exploitation & Pentesting

  • Metasploit
  • SQLmap
  • Commix
  • BeEF
  • SET
  • Hydra
  • John the Ripper
  • Hashcat

Wireless Hacking

  • Aircrack-ng
  • Kismet
  • WiFite
  • Fern Wifi Cracker
  • Reaver
  • Wifiphisher
  • Cowpatty
  • Fluxion

Web Hacking

  • Burp Suite
  • OWASP ZAP
  • Nikto
  • XSStrike
  • Wapiti
  • Sublist3r
  • DirBuster
  • WPScan

📝 Version History

  • v1.0.0 (2024-01) - Initial release
  • v1.1.0 (2024-02) - Added tracking modules
  • v1.2.0 (2024-03) - Added tool installer

👥 Contributing

This is a proprietary project and contributions are not accepted at this time.

🤝 Support

For support, please email muhammadsobrimaulana31@gmail.com or visit https://lynk.id/muhsobrimaulana

⚖️ License

This project is protected under a proprietary license. See the LICENSE file for details.

Made with ❤️ by Letda Kes dr. Sobri




Bytesrevealer - Online Reverse Engineering Viewer

By: Unknown — April 21st 2025 at 12:30


Bytes Revealer is a powerful reverse engineering and binary analysis tool designed for security researchers, forensic analysts, and developers. With features like hex view, visual representation, string extraction, entropy calculation, and file signature detection, it helps users uncover hidden data inside files. Whether you are analyzing malware, debugging binaries, or investigating unknown file formats, Bytes Revealer makes it easy to explore, search, and extract valuable information from any binary file.

Bytes Revealer does NOT store any files or data. All analysis is performed in your browser.

Current limitation: files smaller than 50MB support all analysis features; larger files, up to 1.5GB, support only Visual View and Hex View analysis.


Features

File Analysis

  • Chunked file processing for memory efficiency
  • Real-time progress tracking
  • File signature detection
  • Hash calculations (MD5, SHA-1, SHA-256)
  • Entropy and byte-frequency analysis (sketched below)
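
To make the hash and entropy features above concrete, here is a minimal Python sketch of what such a pass computes: chunked hash digests plus Shannon entropy over byte frequencies. This illustrates the math only, not Bytes Revealer's actual JavaScript implementation.

import hashlib
import math
from collections import Counter

def hashes_and_entropy(path: str, chunk_size: int = 1 << 20):
    """Compute MD5/SHA-1/SHA-256 digests and Shannon entropy in one chunked pass."""
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    counts: Counter = Counter()
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):  # chunked read keeps memory flat
            for d in digests.values():
                d.update(chunk)
            counts.update(chunk)
            total += len(chunk)
    # Shannon entropy in bits per byte: 0.0 (constant data) up to 8.0 (uniform/random)
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values()) if total else 0.0
    return {name: d.hexdigest() for name, d in digests.items()}, entropy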

Multiple Views

File View

  • Basic file information and metadata
  • File signatures detection
  • Hash values
  • Entropy calculation
  • Statistical analysis

Visual View

  • Binary data visualization
  • ASCII or Bytes searching
  • Data distribution view
  • Highlighted pattern matching

Hex View

  • Traditional hex editor interface
  • Byte-level inspection
  • Highlighted pattern matching
  • ASCII representation
  • ASCII or Bytes searching

String Analysis

  • ASCII and UTF-8 string extraction (see the sketch after this list)
  • String length analysis
  • String type categorization
  • Advanced filtering and sorting
  • String pattern recognition
  • Export capabilities
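
As a rough illustration of the extraction step, this Python sketch pulls printable ASCII runs out of a binary, similar in spirit to the classic strings utility. It is not the tool's own code, and the four-character minimum is an assumption.

import re

ASCII_RUN = re.compile(rb"[\x20-\x7e]{4,}")  # printable ASCII, 4+ chars (assumed minimum)

def extract_strings(data: bytes):
    """Yield (offset, text) pairs for printable ASCII runs in a byte buffer."""
    for match in ASCII_RUN.finditer(data):
        yield match.start(), match.group().decode("ascii")

# Usage: list(extract_strings(open("sample.bin", "rb").read()))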

Search Capabilities

  • Hex pattern search
  • ASCII/UTF-8 string search
  • Regular expression support
  • Highlighted search results

Technical Details

Built With

  • Vue.js 3
  • Tailwind CSS
  • Web Workers for performance
  • Modern JavaScript APIs

Performance Features

  • Chunked file processing
  • Web Worker implementation
  • Memory optimization
  • Cancelable operations
  • Progress tracking

Getting Started

Prerequisites

# Node.js 14+ is required
node -v

Docker Usage

docker-compose build --no-cache

docker-compose up -d

Now open your browser: http://localhost:8080/

To stop the docker container

docker-compose down

Installation

# Clone the repository
git clone https://github.com/vulnex/bytesrevealer

# Navigate to project directory
cd bytesrevealer

# Install dependencies
npm install

# Start development server
npm run dev

Building for Production

# Build the application
npm run build

# Preview production build
npm run preview

Usage

  1. File Upload
     • Click "Choose File" or drag and drop a file
     • Progress bar shows upload and analysis status
  2. Analysis Views
     • Switch between views using the tab interface
     • Each view provides different analysis perspectives
     • Real-time updates as you navigate
  3. Search Functions
     • Use the search bar for pattern matching
     • Toggle between hex and string search modes
     • Results are highlighted in the current view
  4. String Analysis
     • View extracted strings with type and length
     • Filter strings by type or content
     • Sort by various criteria
     • Export results in multiple formats

Performance Considerations

  • Large files are processed in chunks
  • Web Workers handle intensive operations
  • Memory usage is optimized
  • Operations can be canceled if needed

Browser Compatibility

  • Chrome 80+
  • Firefox 75+
  • Safari 13.1+
  • Edge 80+

Contributing

  1. Fork the project
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE.md file for details.

Security Considerations

  • All strings are properly escaped
  • Input validation is implemented
  • Memory limits are enforced
  • File size restrictions are in place

Future Enhancements

  • Additional file format support
  • More visualization options
  • Pattern recognition improvements
  • Advanced string analysis features
  • Export/import capabilities
  • Collaboration features



PANO - Advanced OSINT Investigation Platform Combining Graph Visualization, Timeline Analysis, And AI Assistance To Uncover Hidden Connections In Data

By: Unknown — April 17th 2025 at 19:48


PANO is a powerful OSINT investigation platform that combines graph visualization, timeline analysis, and AI-powered tools to help you uncover hidden connections and patterns in your data.

Getting Started

  1. Clone the repository:
git clone https://github.com/ALW1EZ/PANO.git
cd PANO

  2. Run the application:
     • Linux: ./start_pano.sh
     • Windows: start_pano.bat

The startup script will automatically:
  • Check for updates
  • Set up the Python environment
  • Install dependencies
  • Launch PANO

To use the Email Lookup transform, you need to log in with GHunt first. After starting PANO via the starter scripts:

  1. Activate the venv manually:
     • Linux: source venv/bin/activate
     • Windows: call venv\Scripts\activate
  2. See how to log in here

💡 Quick Start Guide

  1. Create Investigation: Start a new investigation or load an existing one
  2. Add Entities: Drag entities from the sidebar onto the graph
  3. Discover Connections: Use transforms to automatically find relationships
  4. Analyze: Use timeline and map views to understand patterns
  5. Save: Export your investigation for later use

🔍 Features

🕸️ Core Functionality

  • Interactive Graph Visualization
    • Drag-and-drop entity creation
    • Multiple layout algorithms (Circular, Hierarchical, Radial, Force-Directed)
    • Dynamic relationship mapping
    • Visual node and edge styling
  • Timeline Analysis
    • Chronological event visualization
    • Interactive timeline navigation
    • Event filtering and grouping
    • Temporal relationship analysis
  • Map Integration
    • Geographic data visualization
    • Location-based analysis
    • Interactive mapping features
    • Coordinate plotting and tracking

🎯 Entity Management

  • Supported Entity Types
    • 📧 Email addresses
    • 👤 Usernames
    • 🌐 Websites
    • 🖼️ Images
    • 📍 Locations
    • ⏰ Events
    • 📝 Text content
    • 🔧 Custom entity types

🔄 Transform System

  • Email Analysis
    • Google account investigation
    • Calendar event extraction
    • Location history analysis
    • Connected services discovery
  • Username Analysis
    • Cross-platform username search
    • Social media profile discovery
    • Platform correlation
    • Web presence analysis
  • Image Analysis
    • Reverse image search
    • Visual content analysis
    • Metadata extraction
    • Related image discovery

🤖 AI Integration

  • PANAI
    • Natural language investigation assistant
    • Automated entity extraction and relationship mapping
    • Pattern recognition and anomaly detection
    • Multi-language support
    • Context-aware suggestions
    • Timeline and graph analysis

🧩 Core Components

📦 Entities

Entities are the fundamental building blocks of PANO. They represent distinct pieces of information that can be connected and analyzed:

  • Built-in Types
    • 📧 Email: Email addresses with service detection
    • 👤 Username: Social media and platform usernames
    • 🌐 Website: Web pages with metadata
    • 🖼️ Image: Images with EXIF and analysis
    • 📍 Location: Geographic coordinates and addresses
    • ⏰ Event: Time-based occurrences
    • 📝 Text: Generic text content
  • Properties System
    • Type-safe property validation
    • Automatic property getters
    • Dynamic property updates
    • Custom property types
    • Metadata support

⚡ Transforms

Transforms are automated operations that process entities to discover new information and relationships:

  • Operation Types
    • 🔍 Discovery: Find new entities from existing ones
    • 🔗 Correlation: Connect related entities
    • 📊 Analysis: Extract insights from entity data
    • 🌐 OSINT: Gather open-source intelligence
    • 🔄 Enrichment: Add data to existing entities
  • Features
    • Async operation support
    • Progress tracking
    • Error handling
    • Rate limiting
    • Result validation

🛠️ Helpers

Helpers are specialized tools with dedicated UIs for specific investigation tasks:

  • Available Helpers
    • 🔍 Cross-Examination: Analyze statements and testimonies
    • 👤 Portrait Creator: Generate facial composites
    • 📸 Media Analyzer: Advanced image processing and analysis
    • 🔍 Base Searcher: Search near places of interest
    • 🔄 Translator: Translate text between languages
  • Helper Features
    • Custom Qt interfaces
    • Real-time updates
    • Graph integration
    • Data visualization
    • Export capabilities

👥 Contributing

We welcome contributions! To contribute to PANO:

  1. Fork the repository at https://github.com/ALW1EZ/PANO/
  2. Make your changes in your fork
  3. Test your changes thoroughly
  4. Create a Pull Request to our main branch
  5. In your PR description, include:
     • What the changes do
     • Why you made these changes
     • Any testing you've done
     • Screenshots if applicable

Note: We use a single main branch for development. All pull requests should be made directly to main.

📖 Development Guide

### System Requirements

  • Operating System: Windows or Linux
  • Python 3.11+
  • PySide6 for GUI
  • Internet connection for online features

### Custom Entities

Entities are the core data structures in PANO. Each entity represents a piece of information with specific properties and behaviors. To create a custom entity:

  1. Create a new file in the `entities` folder (e.g., `entities/phone_number.py`)
  2. Implement your entity class:
from dataclasses import dataclass
from typing import ClassVar, Dict, Any
from .base import Entity

@dataclass
class PhoneNumber(Entity):
    name: ClassVar[str] = "Phone Number"
    description: ClassVar[str] = "A phone number entity with country code and validation"

    def init_properties(self):
        """Initialize phone number properties"""
        self.setup_properties({
            "number": str,
            "country_code": str,
            "carrier": str,
            "type": str,  # mobile, landline, etc.
            "verified": bool
        })

    def update_label(self):
        """Update the display label"""
        self.label = self.format_label(["country_code", "number"])
### Custom Transforms

Transforms are operations that process entities and generate new insights or relationships. To create a custom transform:

  1. Create a new file in the `transforms` folder (e.g., `transforms/phone_lookup.py`)
  2. Implement your transform class:
from dataclasses import dataclass
from typing import ClassVar, List
from .base import Transform
from entities.base import Entity
from entities.phone_number import PhoneNumber
from entities.location import Location
from ui.managers.status_manager import StatusManager

@dataclass
class PhoneLookup(Transform):
    name: ClassVar[str] = "Phone Number Lookup"
    description: ClassVar[str] = "Lookup phone number details and location"
    input_types: ClassVar[List[str]] = ["PhoneNumber"]
    output_types: ClassVar[List[str]] = ["Location"]

    async def run(self, entity: PhoneNumber, graph) -> List[Entity]:
        if not isinstance(entity, PhoneNumber):
            return []

        status = StatusManager.get()
        operation_id = status.start_loading("Phone Lookup")

        try:
            # Your phone number lookup logic here
            # Example: query an API for phone number details
            location = Location(properties={
                "country": "Example Country",
                "region": "Example Region",
                "carrier": "Example Carrier",
                "source": "PhoneLookup transform"
            })

            return [location]

        except Exception as e:
            status.set_text(f"Error during phone lookup: {str(e)}")
            return []

        finally:
            status.stop_loading(operation_id)
### Custom Helpers

Helpers are specialized tools that provide additional investigation capabilities through a dedicated UI interface. To create a custom helper:

  1. Create a new file in the `helpers` folder (e.g., `helpers/data_analyzer.py`)
  2. Implement your helper class:
from PySide6.QtWidgets import (
    QWidget, QVBoxLayout, QHBoxLayout, QPushButton,
    QTextEdit, QLabel, QComboBox
)
from .base import BaseHelper
from qasync import asyncSlot

class DummyHelper(BaseHelper):
    """A dummy helper for testing"""

    name = "Dummy Helper"
    description = "A dummy helper for testing"

    def setup_ui(self):
        """Initialize the helper's user interface"""
        # Create input text area
        self.input_label = QLabel("Input:")
        self.input_text = QTextEdit()
        self.input_text.setPlaceholderText("Enter text to process...")
        self.input_text.setMinimumHeight(100)

        # Create operation selector
        operation_layout = QHBoxLayout()
        self.operation_label = QLabel("Operation:")
        self.operation_combo = QComboBox()
        self.operation_combo.addItems(["Uppercase", "Lowercase", "Title Case"])
        operation_layout.addWidget(self.operation_label)
        operation_layout.addWidget(self.operation_combo)

        # Create process button
        self.process_btn = QPushButton("Process")
        self.process_btn.clicked.connect(self.process_text)

        # Create output text area
        self.output_label = QLabel("Output:")
        self.output_text = QTextEdit()
        self.output_text.setReadOnly(True)
        self.output_text.setMinimumHeight(100)

        # Add widgets to main layout
        self.main_layout.addWidget(self.input_label)
        self.main_layout.addWidget(self.input_text)
        self.main_layout.addLayout(operation_layout)
        self.main_layout.addWidget(self.process_btn)
        self.main_layout.addWidget(self.output_label)
        self.main_layout.addWidget(self.output_text)

        # Set dialog size
        self.resize(400, 500)

    @asyncSlot()
    async def process_text(self):
        """Process the input text based on selected operation"""
        text = self.input_text.toPlainText()
        operation = self.operation_combo.currentText()

        if operation == "Uppercase":
            result = text.upper()
        elif operation == "Lowercase":
            result = text.lower()
        else:  # Title Case
            result = text.title()

        self.output_text.setPlainText(result)

📄 License

This project is licensed under the Creative Commons Attribution-NonCommercial (CC BY-NC) License.

You are free to:
  • ✅ Share: Copy and redistribute the material
  • ✅ Adapt: Remix, transform, and build upon the material

Under these terms:
  • ℹ️ Attribution: You must give appropriate credit
  • 🚫 NonCommercial: No commercial use
  • 🔓 No additional restrictions

🙏 Acknowledgments

Special thanks to all library authors and contributors who made this project possible.

👨‍💻 Author

Created by ALW1EZ with AI ❤️




Telegram-Scraper - A Powerful Python Script That Allows You To Scrape Messages And Media From Telegram Channels Using The Telethon Library

By: Unknown — April 11th 2025 at 12:30


A powerful Python script that allows you to scrape messages and media from Telegram channels using the Telethon library. Features include real-time continuous scraping, media downloading, and data export capabilities.


Features 🚀

  • Scrape messages from multiple Telegram channels
  • Download media files (photos, documents)
  • Real-time continuous scraping
  • Export data to JSON and CSV formats
  • SQLite database storage
  • Resume capability (saves progress)
  • Media reprocessing for failed downloads
  • Progress tracking
  • Interactive menu interface

Prerequisites 📋

Before running the script, you'll need:

  • Python 3.7 or higher
  • Telegram account
  • API credentials from Telegram

Required Python packages

pip install -r requirements.txt

Contents of requirements.txt:

telethon
aiohttp
asyncio

Getting Telegram API Credentials 🔑

  1. Visit https://my.telegram.org/auth
  2. Log in with your phone number
  3. Click on "API development tools"
  4. Fill in the form:
     • App title: Your app name
     • Short name: Your app short name
     • Platform: Can be left as "Desktop"
     • Description: Brief description of your app
  5. Click "Create application"
  6. You'll receive:
     • api_id: A number
     • api_hash: A string of letters and numbers

Keep these credentials safe; you'll need them to run the script!

Setup and Running 🔧

  1. Clone the repository:
git clone https://github.com/unnohwn/telegram-scraper.git
cd telegram-scraper
  2. Install requirements:
pip install -r requirements.txt
  3. Run the script:
python telegram-scraper.py
  4. On first run, you'll be prompted to enter:
     • Your API ID
     • Your API Hash
     • Your phone number (with country code); if offered a choice between phone number and bot, choose the phone number option
     • Verification code (sent to your Telegram)

Initial Scraping Behavior 🕒

When scraping a channel for the first time, please note:

  • The script will attempt to retrieve the entire channel history, starting from the oldest messages
  • Initial scraping can take several minutes or even hours, depending on:
    • The total number of messages in the channel
    • Whether media downloading is enabled
    • The size and number of media files
    • Your internet connection speed
    • Telegram's rate limiting
  • The script uses pagination and maintains state, so if interrupted, it can resume from where it left off (see the sketch after this list)
  • Progress percentage is displayed in real-time to track the scraping status
  • Messages are stored in the database as they are scraped, so you can start analyzing available data even before the scraping is complete
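
To make the pagination-and-resume behavior concrete, here is a minimal, hedged sketch of how a Telethon-based scraper can walk a channel's history oldest-first and pick up from the last saved message ID. The names last_saved_id and save_message are placeholders for illustration, not the script's actual code.

import asyncio
from telethon import TelegramClient

API_ID = 12345          # placeholder: your api_id
API_HASH = "your_hash"  # placeholder: your api_hash

def save_message(message):
    # Placeholder: the real script persists each message to SQLite
    print(message.id, (message.text or "")[:60])

async def scrape(channel: str, last_saved_id: int = 0):
    """Walk channel history oldest-first, resuming after last_saved_id."""
    async with TelegramClient("scraper_session", API_ID, API_HASH) as client:
        # reverse=True yields oldest messages first; offset_id skips what we already have
        async for message in client.iter_messages(channel, reverse=True, offset_id=last_saved_id):
            save_message(message)

asyncio.run(scrape("channelname"))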

Usage 📝

The script provides an interactive menu with the following options:

  • [A] Add new channel
    • Enter the channel ID or channel name
  • [R] Remove channel
    • Remove a channel from the scraping list
  • [S] Scrape all channels
    • One-time scraping of all configured channels
  • [M] Toggle media scraping
    • Enable/disable downloading of media files
  • [C] Continuous scraping
    • Real-time monitoring of channels for new messages
  • [E] Export data
    • Export to JSON and CSV formats
  • [V] View saved channels
    • List all saved channels
  • [L] List account channels
    • List all channels with IDs for the account
  • [Q] Quit

Channel IDs 📢

You can use either:
  • Channel username (e.g., channelname)
  • Channel ID (e.g., -1001234567890)

Data Storage 💾

Database Structure

Data is stored in SQLite databases, one per channel:
  • Location: ./channelname/channelname.db
  • Table: messages
    • id: Primary key
    • message_id: Telegram message ID
    • date: Message timestamp
    • sender_id: Sender's Telegram ID
    • first_name: Sender's first name
    • last_name: Sender's last name
    • username: Sender's username
    • message: Message text
    • media_type: Type of media (if any)
    • media_path: Local path to downloaded media
    • reply_to: ID of replied message (if any)
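
Expressed as a schema, the column list above might look like the following sqlite3 sketch; the exact types and constraints are assumptions, since only the column names are documented.

import sqlite3

# Assumed DDL reconstructed from the documented column list;
# the script's actual types and constraints may differ.
SCHEMA = """
CREATE TABLE IF NOT EXISTS messages (
    id INTEGER PRIMARY KEY,
    message_id INTEGER,
    date TEXT,
    sender_id INTEGER,
    first_name TEXT,
    last_name TEXT,
    username TEXT,
    message TEXT,
    media_type TEXT,
    media_path TEXT,
    reply_to INTEGER
)
"""

conn = sqlite3.connect("channelname.db")  # the script uses ./channelname/channelname.db per channel
conn.execute(SCHEMA)
conn.commit()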

Media Storage 📁

Media files are stored in:
  • Location: ./channelname/media/
  • Files are named using the message ID or original filename

Exported Data 📊

Data can be exported in two formats:
  1. CSV: ./channelname/channelname.csv
     • Human-readable spreadsheet format
     • Easy to import into Excel/Google Sheets
  2. JSON: ./channelname/channelname.json
     • Structured data format
     • Ideal for programmatic processing

Features in Detail 🔍

Continuous Scraping

The continuous scraping feature ([C] option) allows you to:
  • Monitor channels in real-time
  • Automatically download new messages
  • Download media as it's posted
  • Run indefinitely until interrupted (Ctrl+C)
  • Maintain state between runs

Media Handling

The script can download:
  • Photos
  • Documents
  • Other media types supported by Telegram
It also automatically retries failed downloads and skips existing files to avoid duplicates.

Error Handling 🛠️

The script includes:
  • An automatic retry mechanism for failed media downloads
  • State preservation in case of interruption
  • Flood control compliance
  • Error logging for failed operations

Limitations ⚠️

  • Respects Telegram's rate limits
  • Can only access public channels or channels you're a member of
  • Media download size limits apply as per Telegram's restrictions

Contributing 🤝

Contributions are welcome! Please feel free to submit a Pull Request.

License 📄

This project is licensed under the MIT License - see the LICENSE file for details.

Disclaimer ⚖️

This tool is for educational purposes only. Make sure to:
  • Respect Telegram's Terms of Service
  • Obtain necessary permissions before scraping
  • Use responsibly and ethically
  • Comply with data protection regulations




Lobo Guará - Cyber Threat Intelligence Platform

By: Unknown — April 9th 2025 at 12:30


Lobo Guará is a platform aimed at cybersecurity professionals, with various features focused on Cyber Threat Intelligence (CTI). It offers tools that make it easier to identify threats, monitor data leaks, analyze suspicious domains and URLs, and much more.


Features

1. SSL Certificate Search

Allows identifying domains and subdomains that may pose a threat to organizations. SSL certificates issued by trusted authorities are indexed in real-time, and users can search using keywords of 4 or more characters.

Note: The current database contains certificates issued from September 5, 2024.

2. SSL Certificate Discovery

Allows the insertion of keywords for monitoring. When a certificate is issued and the common name contains the keyword (minimum of 5 characters), it will be displayed to the user.

3. Tracking Link

Generates a link to capture device information from attackers. Useful when the security professional can contact the attacker in some way.

4. Domain Scan

Performs a scan on a domain, displaying whois information and subdomains associated with that domain.

5. Web Path Scan

Allows performing a scan on a URL to identify URIs (web paths) related to that URL.

6. URL Scan

Performs a scan on a URL, generating a screenshot and a mirror of the page. The result can be made public to assist in taking down malicious websites.

7. URL Monitoring

Monitors a URL with no active application until it returns an HTTP 200 code. At that moment, it automatically initiates a URL scan, providing evidence for actions against malicious sites.
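
The monitoring logic described here is essentially a poll-until-200 loop. Here is a minimal Python sketch of the idea; it is not Lobo Guará's own code, and the URL and interval are placeholders.

import time
import requests

def monitor_until_live(url: str, interval_s: int = 60) -> None:
    """Poll a dormant URL; trigger a scan once it starts answering with HTTP 200."""
    while True:
        try:
            if requests.get(url, timeout=10).status_code == 200:
                print(f"{url} is live; triggering URL scan for evidence capture")
                break  # here the platform would kick off its URL scan
        except requests.RequestException:
            pass  # still down or unreachable; keep waiting
        time.sleep(interval_s)

monitor_until_live("http://suspicious.example/path")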

8. Data Leak

  • Data Leak Alerts: Monitors and presents almost real-time data leaks posted in hacker forums and websites.
  • URL+User+Password: Allows searching by URL, username, or password, helping identify leaked data from clients or employees.

9. Threat Intelligence Feeds

Centralizes intelligence news from various channels, keeping users updated on the latest threats.

Installation

The application installation has been validated on the Ubuntu 24.04 Server and Red Hat 9.4 distributions; the guides are linked below:

Lobo Guará Implementation on Ubuntu 24.04

Lobo Guará Implementation on Red Hat 9.4

There are also a Dockerfile and a docker-compose version of Lobo Guará. Just clone the repo and run:

docker compose up

Then, go to your web browser at localhost:7405.

Dependencies

Before proceeding with the installation, ensure the following dependencies are installed:

  • PostgreSQL
  • Python 3.12
  • ChromeDriver and Google Chrome (version 129.0.6668.89)
  • FFUF (version 2.0.0)
  • Subfinder (version 2.6.6)

Installation Instructions

  1. Clone the repository:
git clone https://github.com/olivsec/loboguara.git
  2. Enter the project directory:
cd loboguara/
  3. Edit the configuration file:
nano server/app/config.py

Fill in the required parameters in the config.py file:

class Config:
    SECRET_KEY = 'YOUR_SECRET_KEY_HERE'
    SQLALCHEMY_DATABASE_URI = 'postgresql://guarauser:YOUR_PASSWORD_HERE@localhost/guaradb?sslmode=disable'
    SQLALCHEMY_TRACK_MODIFICATIONS = False

    MAIL_SERVER = 'smtp.example.com'
    MAIL_PORT = 587
    MAIL_USE_TLS = True
    MAIL_USERNAME = 'no-reply@example.com'
    MAIL_PASSWORD = 'YOUR_SMTP_PASSWORD_HERE'
    MAIL_DEFAULT_SENDER = 'no-reply@example.com'

    ALLOWED_DOMAINS = ['yourdomain1.my.id', 'yourdomain2.com', 'yourdomain3.net']

    API_ACCESS_TOKEN = 'YOUR_LOBOGUARA_API_TOKEN_HERE'
    API_URL = 'https://loboguara.olivsec.com.br/api'

    CHROME_DRIVER_PATH = '/opt/loboguara/bin/chromedriver'
    GOOGLE_CHROME_PATH = '/opt/loboguara/bin/google-chrome'
    FFUF_PATH = '/opt/loboguara/bin/ffuf'
    SUBFINDER_PATH = '/opt/loboguara/bin/subfinder'

    LOG_LEVEL = 'ERROR'
    LOG_FILE = '/opt/loboguara/logs/loboguara.log'
  4. Make the installation script executable and run it:
sudo chmod +x ./install.sh
sudo ./install.sh
  5. Start the service after installation:
sudo -u loboguara /opt/loboguara/start.sh

Access the URL below to register the Lobo Guará Super Admin:

http://your_address:7405/admin

Online Platform

Access the Lobo Guará platform online: https://loboguara.olivsec.com.br/




Lazywarden - Automatic Bitwarden Backup

By: Unknown — April 5th 2025 at 11:30


Secure, Automated, and Multi-Cloud Bitwarden Backup and Import System

Lazywarden is a Python automation tool designed to back up and restore data from your vault, including Bitwarden attachments. It allows you to upload backups to multiple cloud storage services and receive notifications across multiple platforms. It also offers AES-encrypted backups and uses key derivation with Argon2, ensuring maximum security for your data.


Features

  • 🔒 Maximum Security: Data protection with AES-256 encryption and Argon2 key derivation (sketched after this list).
  • 🔄 Automated Backups and Imports: Keep your Bitwarden vault up to date and secure.
  • ✅ Integrity Verification: SHA-256 hash to ensure data integrity on every backup.
  • ☁️ Multi-Cloud Support: Store backups in services such as Dropbox, Google Drive, pCloud, MEGA, NextCloud, Seafile, Storj, Cloudflare R2, Backblaze B2, Filebase (IPFS), and via SMTP.
  • 🖥️ Local Storage: Save backups to a local path for greater control.
  • 🔔 Real-Time Alerts: Instant notifications on Discord, Telegram, Ntfy, and Slack.
  • 🗓️ Schedule Management: Integration with CalDAV, Todoist, and Vikunja to manage your schedule.
  • 🐳 Easy Deployment: Quick setup with Docker Compose.
  • 🤖 Full Automation and Custom Scheduling: Automatic backups with flexible scheduling options (daily, weekly, monthly, yearly). Integration with CalDAV, Todoist, and Vikunja for complete tracking and email notifications.
  • 🔑 Bitwarden Export to KeePass: Export Bitwarden items to a KeePass database (kdbx), including TOTP-seeded logins, URIs, custom fields, cards, identities, attachments, and secure notes.
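
As a rough illustration of the Argon2-plus-AES pattern named above, here is a hedged Python sketch using the argon2-cffi and cryptography packages. The cost parameters and inputs are placeholder assumptions, not Lazywarden's actual settings.

import os
from argon2.low_level import hash_secret_raw, Type
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_backup(plaintext: bytes, passphrase: bytes) -> bytes:
    """Derive an AES-256 key from a passphrase with Argon2id, then seal with AES-GCM."""
    salt = os.urandom(16)
    key = hash_secret_raw(
        secret=passphrase, salt=salt,
        time_cost=3, memory_cost=64 * 1024, parallelism=4,  # assumed cost parameters
        hash_len=32, type=Type.ID,
    )
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return salt + nonce + ciphertext  # store salt and nonce alongside the data

sealed = encrypt_backup(b'{"vault": "..."}', b"correct horse battery staple")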

Platform Compatibility




C2-Tracker - Live Feed Of C2 Servers, Tools, And Botnets

By: Zion3R — April 24th 2024 at 02:23


Free-to-use IOC feed for various tools/malware. It started out with just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool, and there is an all.txt.

The feed should update daily. I am actively working on making the backend more reliable.


Honorable Mentions

Many of the Shodan queries have been sourced from other CTI researchers:

Huge shoutout to them!

Thanks to BertJanCyber for creating the KQL query for ingesting this feed

And finally, thanks to Y_nexro for creating C2Live in order to visualize the data

What do I track?

Running Locally

If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY.

echo SHODAN_API_KEY=API_KEY >> ~/.bashrc
bash
python3 -m pip install -r requirements.txt
python3 tracker.py
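
For a sense of what the collection step looks like, here is a minimal sketch using the official shodan Python library; the query string is an illustrative assumption, not one of the project's curated searches.

import os
import shodan

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

def collect_ips(query: str) -> set[str]:
    """Gather unique IPs matching a Shodan query, tolerating API hiccups."""
    ips: set[str] = set()
    try:
        for banner in api.search_cursor(query):  # paginates transparently
            ips.add(banner["ip_str"])
    except shodan.APIError as exc:
        print(f"Shodan API error: {exc}")
    return ips

# Illustrative query only; the real feed uses curated, higher-fidelity searches
print(len(collect_ips('product:"Cobalt Strike Beacon"')))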

Contributing

I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set any hard guidelines around what can be submitted; just know that fidelity is paramount (a high true-to-false-positive ratio is the focus).





CureIAM - Clean Accounts Over Permissions In GCP Infra At Scale

By: Zion3R — November 21st 2023 at 11:30

Automated clean-up of over-permissioned IAM accounts on GCP infra

CureIAM is an easy-to-use, reliable, and performant engine for Least Privilege Principle enforcement on GCP cloud infra. It enables DevOps and security teams to quickly clean up accounts in GCP infra that have been granted more permissions than they require. CureIAM fetches recommendations and insights from the GCP IAM recommender, scores them, and enforces those recommendations automatically on a daily basis. It takes care of scheduling and all other aspects of running these enforcement jobs at scale. It is built on top of the GCP IAM recommender APIs and the Cloudmarker framework.


Key features

Discover what makes CureIAM scalable and production grade.

  • Config driven: The entire workflow of CureIAM is config driven. Skip to the Config section to learn more about it.
  • Scalable: It is designed to scale thanks to its plugin-driven, multiprocess, and multi-threaded approach.
  • Handles scheduling: Scheduling is embedded in CureIAM itself; configure the time, and CureIAM will run daily at that time.
  • Plugin driven: The CureIAM codebase is completely plugin oriented, which means one can plug and play the existing plugins or create new ones to add more functionality.
  • Tracks actionable insights: Every action that CureIAM takes is recorded for audit purposes, either in a file store or in an Elasticsearch store. If you want, you can build other store plugins to push the records to other stores for tracking purposes.
  • Scoring and enforcement: Every recommendation fetched by CureIAM is scored against various parameters, producing scores such as safe_to_apply_score, risk_score, and over_privilege_score. Each score serves a different purpose; safe_to_apply_score, for instance, identifies whether a recommendation can be applied automatically, based on the threshold set in the CureIAM.yaml config file.

Usage

Since CureIAM is built with Python, you can run it locally with these commands. Before running, make sure you have a configuration file ready in one of /etc/CureIAM.yaml, ~/.CureIAM.yaml, ~/CureIAM.yaml, or CureIAM.yaml, and that a service account JSON file is present in the current directory, preferably named cureiamSA.json. This SA private key can be named anything, but for the Docker image build it is preferred to use this name. Make sure to reference this file in the config for GCP cloud.

# Install necessary dependencies
$ pip install -r requirements.txt

# Run CureIAM now
$ python -m CureIAM -n

# Run CureIAM process as schedular
$ python -m CureIAM

# Check CureIAM help
$ python -m CureIAM --help

CureIAM can also be run inside a Docker environment; this is completely optional and can be used for CI/CD with K8s cluster deployment.

# Build docker image from dockerfile
$ docker build -t cureiam .

# Run the image, as schedular
$ docker run -d cureiam

# Run the image now
$ docker run -f cureiam -m cureiam -n

Config

The CureIAM.yaml configuration file is the heart of the CureIAM engine. Everything the engine does is based on the pipeline configured in this file. Let's break it down into sections to make the config look simpler.

  1. Let's configure the first section: logging and scheduler configuration.
logger:
  version: 1

  disable_existing_loggers: false

  formatters:
    verysimple:
      format: >-
        [%(process)s]
        %(name)s:%(lineno)d - %(message)s
      datefmt: "%Y-%m-%d %H:%M:%S"

  handlers:
    rich_console:
      class: rich.logging.RichHandler
      formatter: verysimple

    file:
      class: logging.handlers.TimedRotatingFileHandler
      formatter: simple
      filename: /tmp/CureIAM.log
      when: midnight
      encoding: utf8
      backupCount: 5

  loggers:
    adal-python:
      level: INFO

  root:
    level: INFO
    handlers:
      - rich_console
      - file

schedule: "16:00"

This subsection of the config uses the Rich logging module and schedules CureIAM to run daily at 16:00.

  2. Next, configure the different modules that we MIGHT use in the pipeline. This falls under the plugins section in CureIAM.yaml. You can think of this section as the declaration of the different plugins.
plugins:
  gcpCloud:
    plugin: CureIAM.plugins.gcp.gcpcloud.GCPCloudIAMRecommendations
    params:
      key_file_path: cureiamSA.json

  filestore:
    plugin: CureIAM.plugins.files.filestore.FileStore

  gcpIamProcessor:
    plugin: CureIAM.plugins.gcp.gcpcloudiam.GCPIAMRecommendationProcessor
    params:
      mode_scan: true
      mode_enforce: true
      enforcer:
        key_file_path: cureiamSA.json
        allowlist_projects:
          - alpha
        blocklist_projects:
          - beta
        blocklist_accounts:
          - foo@bar.com
        allowlist_account_types:
          - user
          - group
          - serviceAccount
        blocklist_account_types:
          - None
        min_safe_to_apply_score_user: 0
        min_safe_to_apply_score_group: 0
        min_safe_to_apply_score_SA: 50

  esstore:
    plugin: CureIAM.plugins.elastic.esstore.EsStore
    params:
      # Change http to https if your Elastic is using https
      scheme: http
      host: es-host.com
      port: 9200
      index: cureiam-stg
      username: security
      password: securepassword

Each of these plugin declarations has to be of this form:

plugins:
  <plugin-name>:
    plugin: <class-name-as-python-path>
    params:
      param1: val1
      param2: val2

For example, the plugin CureIAM.plugins.elastic.esstore.EsStore refers to the EsStore class in the esstore module. All the params defined in the YAML have to match the declaration in the __init__() function of that plugin class.
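
To illustrate that mapping, here is a hedged Python sketch of how such a store plugin's constructor might mirror the YAML params block; the parameter names follow the esstore example above, but the body is an assumption, not CureIAM's actual code.

class EsStore:
    """Sketch of a store plugin whose __init__ mirrors the YAML params block."""

    # Each keyword argument corresponds 1:1 to a key under params: in CureIAM.yaml
    def __init__(self, scheme='http', host='localhost', port=9200,
                 index='cureiam-stg', username=None, password=None):
        self._url = f'{scheme}://{host}:{port}'
        self._index = index
        self._auth = (username, password)

    def write(self, record):
        # Placeholder: a real plugin would index `record` into Elasticsearch here
        print(f'would index into {self._url}/{self._index}: {record}')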

  3. Once the plugins are defined, the next step is to define the pipeline for auditing. It goes like this:
audits:
  IAMAudit:
    clouds:
      - gcpCloud
    processors:
      - gcpIamProcessor
    stores:
      - filestore
      - esstore

Multiple audits can be created this way. The one created here is named IAMAudit, with four plugins in use: gcpCloud, gcpIamProcessor, filestore, and esstore. Note these are the same plugin names defined in Step 2. Again, this is like defining the pipeline, not actually running it. It will be considered for running with the definition in the next step.

  4. Tell CureIAM to run the audits defined in the previous step.
run:
  - IAMAudit

And this completes the entire configuration for CureIAM; you can find the full sample here. This config-driven pipeline concept is inherited from the Cloudmarker framework.

Dashboard

The JSON indexed into Elasticsearch via the Elasticsearch store plugin can be used to generate a dashboard in Kibana.

Contribute

[Please do!] We are looking for any kind of contribution to improve CureIAM's core functionality and documentation. When in doubt, make a PR!

Credits

Gojek Product Security Team


=============

NEW UPDATES May 2023 0.2.0

Refactoring

  • Breaking down the large code into multiple small functions
  • Moving all plugins into the plugins folder: Esstore, files, Cloud, and GCP
  • Fixing zero-division issues
  • Migrating to the new major version of Elastic
  • Changing configuration in the CureIAM.yaml file
  • Tested on Python 3.9.x

Library Updates

Pinning library versions to avoid any backward-compatibility issues.

  • Elastic==8.7.0 # previously 7.17.9
  • elasticsearch==8.7.0
  • google-api-python-client==2.86.0
  • PyYAML==6.0
  • schedule==1.2.0
  • rich==13.3.5

Docker Files

  • Adding Docker Compose for local Elastic and Kibana in elastic
  • Adding .env-ex; change .env-ex to .env before running Docker
    Running docker compose: docker-compose -f docker_compose_es.yaml up

Features

  • Adding the capability to run a scan without applying the recommendations. By default, if mode_scan is false, mode_enforce won't run.
      mode_scan: true
      mode_enforce: false
  • Temporarily turning off the email function.



HardHatC2 - A C# Command And Control Framework

By: Zion3R — June 28th 2023 at 02:12


A cross-platform, collaborative, Command & Control framework written in C#, designed for red teaming and ease of use.

HardHat is a multiplayer C# .NET-based command and control framework designed to aid in red team engagements and penetration testing. HardHat aims to improve quality-of-life factors during engagements by providing an easy-to-use but still robust C2 framework.
It contains three primary components: an ASP.NET teamserver, a Blazor .NET client, and C#-based implants.


Release Tracking

Alpha Release - 3/29/23 NOTE: HardHat is in Alpha release; it will have bugs, missing features, and unexpected things will happen. Thank you for trying it, and please report back any issues or missing features so they can be addressed.

Community

Discord: Join the community to talk about HardHat C2, programming, red teaming, and general cyber security things. The Discord community is also a great way to request help, submit new features, stay up to date on the latest additions, and submit bugs.

Features

Teamserver & Client

  • Per-operator accounts with account tiers to allow customized access control and features, including view-only guest modes, team-lead opsec approval (WIP), and admin accounts for general operation management.
  • Managers (Listeners)
  • Dynamic Payload Generation (Exe, Dll, shellcode, PowerShell command)
  • Creation & editing of C2 profiles on the fly in the client
  • Customization of payload generation
    • sleep time/jitter
    • kill date
    • working hours
    • type (Exe, Dll, Shellcode, ps command)
    • Included commands (WIP)
    • option to run confuser
  • File upload & Downloads
  • Graph View
  • File Browser GUI
  • Event Log
  • JSON logging for events & tasks
  • Loot tracking (Creds, downloads)
  • IOC tracing
  • Pivot proxies (SOCKS 4a, Port forwards)
  • Cred store
  • Autocomplete command history
  • Detailed help command
  • Interactive bash terminal command if the client is on Linux, or PowerShell on Windows; this allows automatic parsing and logging of terminal commands like proxychains
  • Persistent database storage of teamserver items (User accounts, Managers, Engineers, Events, tasks, creds, downloads, uploads, etc. )
  • Recon Entity Tracking (track info about users/devices, random metadata as needed)
  • Shared files for some commands (see teamserver page for details)
  • tab-based interact window for command issuing
  • table-based output option for some commands like ls, ps, etc.
  • Auto parsing of output from seatbelt to create "recon entities" and fill entries to reference back to later easily
  • Dark and light themes

Engineers

  • C# .NET Framework implant for Windows devices, currently only CLR/.NET 4 support
  • At the moment only one implant, but looking to add others
  • It can be generated as EXE, DLL, shellcode, or PowerShell stager
  • Rc4 encryption of payload memory & heap when sleeping (Exe / DLL only)
  • AES encryption of all network communication
  • ConfuserEx integration for obfuscation
  • HTTP, HTTPS, TCP, SMB communication
    • TCP & SMB can work P2P in a bind or reverse setups
  • Unique per implant key generated at compile time
  • Multiple callback URIs depending on the C2 profile
  • P/Invoke & D/Invoke integration for windows API calls
  • SOCKS 4a support
  • Reverse Port Forward & Port Forwards
  • All commands run as async cancellable jobs
    • Option to run commands sync if desired
  • Inline assembly execution & inline shellcode execution
  • DLL Injection
  • Execute assembly & Mimikatz integration
  • Mimikatz is not built into the implant but is pushed when specific commands are issued
  • Various localhost & network enumeration tools
  • Token manipulation commands
    • Steal Token Mask(WIP)
  • Lateral Movement Commands
  • Jump (psexec, wmi, wmi-ps, winrm, dcom)
  • Remote Execution (WIP)
  • AMSI & ETW Patching
  • Unmanaged Powershell
  • Script Store (can load multiple scripts at once if needed)
  • Spawn & Inject
    • Spawn-to is configurable
  • run, shell & execute

Documentation

Documentation can be found at docs.

Getting Started

Prerequisites

  • Installation of the .net 7 SDK from Microsoft
  • Once installed, the teamserver and client are started with dotnet run

Teamserver

To configure the teamserver's starting address (where clients will connect), edit HardHatC2\TeamServer\Properties\LaunchSettings.json, changing the "applicationUrl": "https://127.0.0.1:5000" to the desired location and port. Start the teamserver with dotnet run from its top-level folder ../HardHatC2/TeamServer/

HardHat Client

  1. When starting the client, include the target teamserver location on the command line, e.g. dotnet run https://127.0.0.1:5000
  2. Open a web browser and navigate to https://localhost:7096/; if this works, you should see the login page
  3. Log in with the HardHat_Admin user (the password is printed on first TeamServer startup)
  4. Navigate to the settings page and create a new user. If successful, a message should appear; you may then log in with that account to access the full client

Contributions & Bug Reports

Code contributions are welcome; feel free to submit feature requests, pull requests, or send me your ideas on Discord.



โŒ