We were testing a black-box service for a client with an interesting software platform. They'd provided an SDK with minimal documentation: just enough to show basic usage, but none of the underlying service definitions. The SDK binary was obfuscated, and the gRPC endpoints it connected to had reflection disabled.
After spending too much time piecing together service names from SDK string dumps and network traces, we built grpc-scan to automate what we were doing manually: exploiting how gRPC implementations handle invalid requests to enumerate services without any prior knowledge.
Unlike REST APIs where you can throw curl at an endpoint and see what sticks, gRPC operates over HTTP/2 using binary Protocol Buffers. Every request needs the fully qualified service name, the exact method name, and a correctly serialized protobuf message.
Miss any of these and you get nothing useful. There's no OPTIONS request to probe, documentation is typically limited, and there's no guessing that /api/v1/users might exist. You either have the proto files or you're blind.
Most teams rely on server reflection, a gRPC feature that lets clients query available services. But reflection is usually disabled in production. It's an information disclosure risk, yet developers rarely provide alternative documentation.
But gRPC implementations return varying error messages that inadvertently leak service existence through different error codes:
```
# Calling a non-existent service
unknown service FakeService

# Calling a real service, wrong method
unknown method FakeMethod for service UserService

# Calling a real service and method
missing authentication token
```
These distinct responses let us map the attack surface. The tool automates this process, testing thousands of potential service/method combinations based on various naming patterns we've observed.
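To make the probe concrete, here's a minimal sketch in Python with grpcio. The target address, service names, and method names are placeholders, and this is an illustration of the technique rather than grpc-scan's actual implementation:

```python
import grpc

TARGET = "target.example.com:50051"  # placeholder endpoint

# Identity (de)serializers let us send a raw payload with no proto definitions at all.
identity = lambda data: data

def probe(channel, service, method):
    """Call /<service>/<method> with an empty body and return the gRPC status code."""
    call = channel.unary_unary(
        f"/{service}/{method}",
        request_serializer=identity,
        response_deserializer=identity,
    )
    try:
        call(b"", timeout=5)
        return grpc.StatusCode.OK
    except grpc.RpcError as err:
        # e.g. UNIMPLEMENTED (unknown service/method), UNAUTHENTICATED or
        # PERMISSION_DENIED (it exists but wants credentials), INVALID_ARGUMENT, ...
        return err.code()

with grpc.insecure_channel(TARGET) as channel:  # use secure_channel + TLS creds in practice
    for service in ("FakeService", "UserService"):
        print(service, probe(channel, service, "GetUser").name)
```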
The enumeration engine does a few things:
1. Even when reflection is "disabled," servers often still respond to reflection requests with errors that confirm the protocol exists. We use this for fingerprinting.
2. For a base word like "User", we generate likely service names:
User
UserService
Users
UserAPI
user.User
api.v1.User
com.company.User
Each pattern is tested with common method names: Get, List, Create, Update, Delete, Search, Find, and so on (see the candidate-generation sketch after this list).
3. Different gRPC implementations return subtly different error codes: UNIMPLEMENTED vs NOT_FOUND for missing services, INVALID_ARGUMENT vs INTERNAL for malformed requests.
4. gRPC's HTTP/2 foundation means we can multiplex hundreds of requests over a single TCP connection. The tool maintains a pool of persistent connections, improving scan speed.
What do we commonly see in pentests using RPC?
Service Sprawl from Migrations
SDK analysis often reveals parallel service implementations, for example:

UserService - The original monolith endpoint
AccountManagementService - New microservice, full auth
UserDataService - Read-only split-off, inconsistent auth
UserProfileService - Another team's implementation

These typically emerge from partial migrations where different teams own different pieces. The older services often bypass newer security controls.
Method Proliferation and Auth Drift
Real services accumulate method variants over time, for example:

GetUser - Original, added auth in v2
GetUserDetails - Different team, no auth check
FetchUserByID - Deprecated but still active
GetUserWithPreferences - Calls GetUser internally, skips auth

So newer methods that compose older ones sometimes bypass security checks the original methods later acquired.
Package Namespace Archaeology
Service discovery reveals organizational history:

com.startup.api.Users - Original service
platform.users.v1.UserAPI - Post-merge standardization attempt
internal.batch.UserBulkService - "Internal only" but on same endpoint

Each namespace generation typically has different security assumptions. Internal services exposed on the same port as public APIs are surprisingly common: developers assume network isolation that doesn't exist.
Enumeration only gets you so far: knowing UserService/CreateUser exists still leaves you to craft a valid User message, which requires the proto definition, guesswork, or reverse engineering of the SDK's serialization (a hand-rolled sketch of the guessing route follows below).

grpc-scan is available at https://github.com/Adversis/grpc-scan. Pull requests welcome.
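For the guessing route mentioned above, the protobuf wire format is simple enough to hand-roll small payloads. A minimal sketch, assuming (purely as a guess) that the User message has a string field at field number 1:

```python
def varint(n: int) -> bytes:
    """Encode a non-negative integer as a protobuf varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def string_field(field_number: int, value: str) -> bytes:
    """Encode a length-delimited (wire type 2) string field."""
    data = value.encode()
    key = varint((field_number << 3) | 2)  # tag = field number + wire type
    return key + varint(len(data)) + data

# Guessed payload: field 1 as an email-looking string. Send it as the raw
# request body (e.g. with the identity serializer from the earlier probe sketch).
payload = string_field(1, "test@example.com")
print(payload.hex())
```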
Compiled Node.js addons (.node files) are binary files that allow Node.js applications to interface with native code written in languages like C, C++, or Objective-C as native addon modules.

Unlike JavaScript files, which are mostly readable (assuming they're not obfuscated and minified), .node files are compiled binaries that can contain machine code and run with the same privileges as the Node.js process that loads them, without the constraints of the JavaScript sandbox. These extensions can directly call system APIs and perform operations that pure JavaScript code cannot, like making system calls.
These addons can use Objective-C++ to leverage native macOS APIs directly from Node.js. This allows arbitrary code execution outside the normal sandboxing that would constrain a typical Electron application.
When an Electron application uses a module that contains a compiled .node file, it automatically loads and executes the binary code within it. Many Electron apps use the ASAR (Atom Shell Archive) file format to package the application's source code. ASAR integrity checking is a security feature that verifies file integrity and prevents tampering with files within the ASAR archive. It is disabled by default.
When ASAR integrity is enabled, your Electron app will verify the header hash of the ASAR archive at runtime. If no hash is present or if there is a mismatch in the hashes, the app will forcefully terminate.
This prevents files within the ASAR archive from being modified. Note that the integrity check appears to be a string you can regenerate after modifying files, then find and replace in the executable itself.
But many applications load native modules from outside the verified archive, under app.asar.unpacked, since compiled .node files cannot be executed directly from within an ASAR archive.

And so even with the proper security features enabled, a local attacker can modify or replace .node files within the unpacked directory - not so different from DLL hijacking on Windows.
We wrote two tools: one to find Electron applications that aren't hardened against this, and one to simply compile Node.js addons.
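As a rough illustration of what the first tool looks for (not the tool itself), here's a Python sketch that walks /Applications on macOS for Electron apps shipping .node files under app.asar.unpacked:

```python
import glob
import os

APPS_DIR = "/Applications"  # macOS; a real scanner would cover more locations

# Find apps that unpack native modules next to their ASAR archive, where a
# local attacker could modify or replace them.
pattern = os.path.join(APPS_DIR, "*.app", "Contents", "Resources", "app.asar.unpacked")
for unpacked in glob.glob(pattern):
    native_modules = []
    for root, _dirs, files in os.walk(unpacked):
        native_modules += [os.path.join(root, f) for f in files if f.endswith(".node")]
    if native_modules:
        app = unpacked.split("/Contents/")[0]
        writable = [p for p in native_modules if os.access(p, os.W_OK)]
        print(f"{app}: {len(native_modules)} unpacked .node files ({len(writable)} writable)")
```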
An interactive application that visualizes and demonstrates Google's CaMeL (Capabilities for Machine Learning) security approach for defending against prompt injections in LLM agents.
Link to original paper: https://arxiv.org/pdf/2503.18813
All credit to the original researchers:

```
@misc{debenedetti2025defeatingpromptinjections,
  title={Defeating Prompt Injections by Design},
  author={Edoardo Debenedetti and Ilia Shumailov and Tianqi Fan and Jamie Hayes and Nicholas Carlini and Daniel Fabian and Christoph Kern and Chongyang Shi and Andreas Terzis and Florian Tramèr},
  year={2025},
  eprint={2503.18813},
  archivePrefix={arXiv},
  primaryClass={cs.CR},
  url={https://arxiv.org/abs/2503.18813},
}
```