FreshRSS

Yesterday — October 12th 2025 — Your RSS feeds

Blind Enumeration of gRPC Services

We were testing a black-box service for a client with an interesting software platform. They'd provided an SDK with minimal documentation—just enough to show basic usage, but none of the underlying service definitions. The SDK binary was obfuscated, and the gRPC endpoints it connected to had reflection disabled.

After spending too much time piecing together service names from SDK string dumps and network traces, we built grpc-scan to automate what we were doing manually: exploiting how gRPC implementations handle invalid requests to enumerate services without any prior knowledge.

Unlike REST APIs where you can throw curl at an endpoint and see what sticks, gRPC operates over HTTP/2 using binary Protocol Buffers. Every request needs:

  • The exact service name (case-sensitive)
  • The exact method name (also case-sensitive)
  • Properly serialized protobuf messages

Miss any of these and you get nothing useful. There's no OPTIONS request, documentation is typically sparse, and there's no guessing that /api/v1/users might exist. You either have the proto files or you're blind.

Most teams rely on server reflection—a gRPC feature that lets clients query available services. But reflection is usually disabled in production. It’s an information disclosure risk, yet developers rarely provide alternative documentation.

But gRPC implementations return varying error messages that inadvertently leak service existence through different error codes:

  • Calling a non-existent service: `unknown service FakeService`
  • Calling a real service with a wrong method: `unknown method FakeMethod for service UserService`
  • Calling a real service and method: `missing authentication token`

These distinct responses let us map the attack surface. The tool automates this process, testing thousands of potential service/method combinations based on various naming patterns we've observed.
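
To make that concrete, here is a minimal probe sketch in TypeScript using @grpc/grpc-js. This is illustrative only, not grpc-scan's actual code; the target address and the UserService/GetUser names are placeholders.

```typescript
// Minimal unary probe: send an empty body to a guessed /Service/Method path and
// record the resulting status. Illustrative sketch; target and names are placeholders.
import * as grpc from '@grpc/grpc-js';

const identity = (b: Buffer) => b; // raw bytes in/out; no valid protobuf needed to trigger routing errors

function probe(client: grpc.Client, service: string, method: string): Promise<string> {
  return new Promise((resolve) => {
    client.makeUnaryRequest(
      `/${service}/${method}`, // gRPC request path: /package.Service/Method
      identity,
      identity,
      Buffer.alloc(0),
      (err) => resolve(err ? `${grpc.status[err.code]}: ${err.details}` : 'OK'),
    );
  });
}

async function main() {
  const client = new grpc.Client('target.example.com:443', grpc.credentials.createSsl());
  // Typical outcomes: fake service -> UNIMPLEMENTED ("unknown service"),
  // real service + fake method -> UNIMPLEMENTED ("unknown method ... for service ..."),
  // real service + real method -> UNAUTHENTICATED / INVALID_ARGUMENT / INTERNAL, etc.
  console.log(await probe(client, 'FakeService', 'FakeMethod'));
  console.log(await probe(client, 'UserService', 'FakeMethod'));
  console.log(await probe(client, 'UserService', 'GetUser'));
  client.close();
}

main();
```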

The enumeration engine does a few things:

1. Even when reflection is "disabled," servers often still respond to reflection requests with errors that confirm the protocol exists. We use this for fingerprinting.

2. For a base word like "User", we generate likely service names:

  • User
  • UserService
  • Users
  • UserAPI
  • user.User
  • api.v1.User
  • com.company.User

Each pattern is tested with common method names: Get, List, Create, Update, Delete, Search, Find, etc. (see the sketch after this list).

3. Different gRPC implementations return subtly different error codes:

  • UNIMPLEMENTED vs NOT_FOUND for missing services
  • INVALID_ARGUMENT vs INTERNAL for malformed requests
  • Timing differences between auth checks and method validation

4. gRPC's HTTP/2 foundation means we can multiplex hundreds of requests over a single TCP connection. The tool maintains a pool of persistent connections, improving scan speed.
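
Putting items 2 and 4 together, the generate-and-probe loop might look like the following. This is again an illustrative sketch, not grpc-scan's code; the pattern set, method wordlist, and target are assumptions.

```typescript
// Generate candidate /Service/Method paths from a base word, then probe them all
// concurrently over a single client/channel (HTTP/2 multiplexes the calls).
import * as grpc from '@grpc/grpc-js';

const METHODS = ['Get', 'List', 'Create', 'Update', 'Delete', 'Search', 'Find'];

function candidates(base: string): string[] {
  const services = [
    base, `${base}Service`, `${base}s`, `${base}API`,
    `${base.toLowerCase()}.${base}`, `api.v1.${base}`, `com.company.${base}`,
  ];
  return services.flatMap((svc) => METHODS.map((m) => `/${svc}/${m}`));
}

const identity = (b: Buffer) => b;

function probe(client: grpc.Client, path: string): Promise<[string, string]> {
  return new Promise((resolve) => {
    client.makeUnaryRequest(path, identity, identity, Buffer.alloc(0), (err) =>
      resolve([path, err ? `${grpc.status[err.code]} ${err.details}` : 'OK']),
    );
  });
}

async function scan(target: string, base: string): Promise<void> {
  const client = new grpc.Client(target, grpc.credentials.createInsecure());
  const results = await Promise.all(candidates(base).map((p) => probe(client, p)));
  // Differences in code or error details ("unknown service" vs "unknown method ...
  // for service ..." vs auth/validation errors) separate real services from fake ones.
  for (const [path, outcome] of results) console.log(path, outcome);
  client.close();
}

scan('target.example.com:50051', 'User');
```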

What do we commonly see in pentests of RPC services?

Service Sprawl from Migrations

SDK analysis often reveals parallel service implementations, for example:

  • UserService - The original monolith endpoint
  • AccountManagementService - New microservice, full auth
  • UserDataService - Read-only split-off, inconsistent auth
  • UserProfileService - Another team's implementation

These typically emerge from partial migrations where different teams own different pieces. The older services often bypass newer security controls.

Method Proliferation and Auth Drift

Real services accumulate method variants over time, for example:

  • GetUser - Original, added auth in v2
  • GetUserDetails - Different team, no auth check
  • FetchUserByID - Deprecated but still active
  • GetUserWithPreferences - Calls GetUser internally, skips auth

Newer methods that compose older ones sometimes bypass security checks that the original methods later acquired.

Package Namespace Archaeology

Service discovery reveals organizational history:

  • com.startup.api.Users - Original service
  • platform.users.v1.UserAPI - Post-merge standardization attempt
  • internal.batch.UserBulkService - "Internal only" but on same endpoint

Each namespace generation typically has different security assumptions. Internal services exposed on the same port as public APIs are surprisingly common—developers assume network isolation that doesn't exist.

Limitations

  • Services expecting specific protobuf structures still require manual work. We can detect that UserService/CreateUser exists, but crafting a valid User message requires the proto definition, guesswork, or reverse engineering of the SDK's serialization.
  • The current version focuses on unary calls. Bidirectional streaming (common in real-time features) needs different handling.

Available at https://github.com/Adversis/grpc-scan. Pull requests welcome.

submitted by /u/ok_bye_now_
[link] [comments]
Before yesterday — Your RSS feeds

Living off Node.js Addons

Native Modules

Compiled Node.js addons (.node files) are binary files that allow Node.js applications to interface with native code written in languages like C, C++, or Objective-C, loaded as native addon modules.

Unlike JavaScript files, which are mostly readable (assuming they're not obfuscated and minified), .node files are compiled binaries that can contain machine code and run with the same privileges as the Node.js process that loads them, without the constraints of the JavaScript sandbox. These addons can directly call system APIs and perform operations that pure JavaScript cannot, like making system calls.

These addons can use Objective-C++ to leverage native macOS APIs directly from Node.js. This allows arbitrary code execution outside the normal sandboxing that would constrain a typical Electron application.
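
A single require is all it takes to bring that native code into the process: the moment a .node file is loaded, its initializer runs with the full privileges of the Node or Electron process. The path and exported function in this sketch are hypothetical.

```typescript
// Loading a compiled addon executes its native initializer immediately, outside any
// JavaScript-level sandboxing. Path and export name are hypothetical.
const addon = require('./build/Release/native_helper.node');

// Whatever the addon exports is native code callable like any other JS function,
// e.g. a thin wrapper around a macOS system API.
console.log(addon.runNativeTask('whoami'));
```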

ASAR Integrity

When an Electron application uses a module that contains a compiled .node file, it automatically loads and executes the binary code within it. Many Electron apps use the ASAR (Atom Shell Archive) file format to package the application's source code. ASAR integrity checking is a security feature that checks the file integrity and prevents tampering with files within the ASAR archive. It is disabled by default.

When ASAR integrity is enabled, your Electron app will verify the header hash of the ASAR archive at runtime. If no hash is present or if there is a mismatch in the hashes, the app will forcefully terminate.

This prevents files from being modified within the ASAR archive. Note that it appears the integrity check is a string that you can regenerate after modifying files, then find and replace in the executable file as well. See more here.
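
As a rough sketch of what "regenerating" that string involves, the following assumes the standard ASAR pickle layout (header JSON length as a little-endian uint32 at byte offset 12, JSON starting at offset 16) and SHA-256 over the header JSON; verify both assumptions against the asar/Electron source before relying on this.

```typescript
// Rough sketch: recompute an ASAR archive's integrity hash (SHA-256 of the header
// JSON), assuming the standard ASAR pickle layout described above.
import { createHash } from 'node:crypto';
import { readFileSync } from 'node:fs';

function asarHeaderHash(asarPath: string): string {
  const buf = readFileSync(asarPath);
  // ASAR files begin with a Chromium "pickle": the header JSON string length
  // is a little-endian uint32 at offset 12, and the JSON itself starts at 16.
  const jsonLength = buf.readUInt32LE(12);
  const headerJson = buf.subarray(16, 16 + jsonLength).toString('utf8');
  return createHash('sha256').update(headerJson).digest('hex');
}

// On macOS the expected value lives in the app's Info.plist (ElectronAsarIntegrity);
// on Windows it is embedded in the executable's resources.
console.log(asarHeaderHash('/Applications/Some.app/Contents/Resources/app.asar'));
```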

But many applications run code from outside the verified archive, under app.asar.unpacked, since compiled .node files (the native modules) cannot be executed directly from within an ASAR archive.

And so even with the proper security features enabled, a local attacker can modify or replace .node files within the unpacked directory - not so different from DLL hijacking on Windows.
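
The check can be approximated locally along these lines. This is a sketch, not the released scanner; the /Applications layout and the ElectronAsarIntegrity Info.plist heuristic are assumptions.

```typescript
// Sketch: find Electron apps whose app.asar.unpacked ships compiled .node files,
// which a local attacker could swap out even when ASAR integrity is enabled.
import { existsSync, readdirSync, readFileSync } from 'node:fs';
import { join } from 'node:path';

function findUnpackedAddons(appDir: string): string[] {
  const unpacked = join(appDir, 'Contents', 'Resources', 'app.asar.unpacked');
  if (!existsSync(unpacked)) return [];
  const hits: string[] = [];
  const walk = (dir: string) => {
    for (const entry of readdirSync(dir, { withFileTypes: true })) {
      const p = join(dir, entry.name);
      if (entry.isDirectory()) walk(p);
      else if (entry.name.endsWith('.node')) hits.push(p);
    }
  };
  walk(unpacked);
  return hits;
}

for (const app of readdirSync('/Applications').filter((n) => n.endsWith('.app'))) {
  const appDir = join('/Applications', app);
  const addons = findUnpackedAddons(appDir);
  if (addons.length === 0) continue;
  // Heuristic (assumption): ASAR integrity on macOS is advertised via the
  // ElectronAsarIntegrity key in Info.plist.
  const plist = join(appDir, 'Contents', 'Info.plist');
  const hasIntegrity = existsSync(plist) && readFileSync(plist, 'utf8').includes('ElectronAsarIntegrity');
  console.log(`${app}: integrity=${hasIntegrity}, unpacked addons:`, addons);
}
```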

We wrote two tools - one to find Electron applications that aren’t hardened against this, and one to simply compile Node.js addons.

  1. Electron ASAR Scanner - A tool that assesses whether Electron applications implement ASAR integrity protection and identifies useful .node files
  2. NodeLoader - A simple native Node.js addon compiler capable of launching macOS applications and shell commands
submitted by /u/ok_bye_now_
[link] [comments]

Supply Chain Attack Vector Analysis: 250% Surge Prompts CISA Emergency Response

Interesting data point from CISA's latest emergency directive - supply chain attacks have increased 250% from 2021-2024 (62→219 incidents).

Technical breakdown:

  • Primary attack vector: Third-party vendor compromise (45% of incidents)
  • Average dwell time in supply chain attacks: 287 days vs 207 days for direct attacks
  • Detection gap remains significant
  • Cost differential: $5.12M (supply chain) vs $4.45M (direct attacks)

CISA's directive focuses on:

  • Zero-trust architecture implementation
  • SBOM (Software Bill of Materials) requirements
  • Continuous vendor risk assessment

Massachusetts highlighted as high-risk due to tech sector density and critical infrastructure.

Would be interested in hearing from those implementing SBOM strategies - what tools/frameworks are working?

submitted by /u/Hot_Lengthiness1173
[link] [comments]

CISA Emergency Directive: AI-Powered Phishing Campaign Analysis - 300% Surge, $2.3B Q3 Losses

CISA's Automated Indicator Sharing (AIS) program is showing concerning metrics on AI-driven phishing campaigns:

Technical Overview:

  • 300% YoY increase in AI-generated phishing attempts
  • Attack sophistication score: 3.2 → 8.7 (out of 10)
  • 85% targeting US infrastructure
  • ML algorithms analyzing target orgs' communication patterns, employee behavior, and business relationships
  • Real-time generation of unique, personalized vectors

Threat Intelligence: FBI Cyber Division reports campaigns using advanced NLP to create contextually relevant emails that mimic legitimate business comms. Over 200 US organizations compromised in 30 days.

Attack Chain Evolution: Traditional phishing relied on generic templates + basic social engineering. Current wave utilizes ML to generate thousands of unique, personalized emails in real-time, making signature-based detection largely ineffective.

NIST predicts 90% of successful breaches in 2025 will originate from AI-powered campaigns.

Detailed analysis with case studies and mitigation frameworks: https://cyberupdates365.com/ai-phishing-attacks-surge-300-percent-us-cisa-emergency-alert/

Interested in technical discussion on effective countermeasures beyond traditional email filtering.

submitted by /u/Street-Time-8159
[link] [comments]

From CPU Spikes to Defense

We just published a case study about an Australian law firm that noticed two employees accessing a bunch of sensitive files. The behavior was flagged using UEBA, which triggered alerts based on deviations from normal access patterns. The firm dug in and found signs of lateral movement and privilege escalation attempts.

They were able to lock things down before any encryption or data exfiltration happened. No payload, no breach.

It’s a solid example of how behavioral analytics and least privilege enforcement can actually work in practice.

Curious what’s working for others in their hybrid environments?

submitted by /u/Varonis-Dan
[link] [comments]

Active Directory domain (join)own accounts revisited 2025

Domain join accounts are frequently exposed during build processes, and even when following Microsoft’s current guidance they inherit over-privileged ACLs (ownership, read-all, account restrictions) that enable LAPS disclosure, RBCD and other high-impact abuses.

Hardening requires layering controls such as disallowing low-privileged users from creating machine accounts and ensuring that Domain Admins own joined computer objects. In addition, add deny ACEs for LAPS (ms-Mcs-AdmPwd) and RBCD (msDS-AllowedToActOnBehalfOfOtherIdentity) while scoping create/delete rights to specific OUs.

Even with those mitigations, reset-password rights can be weaponised via replication lag plus AD CS to recover the pre-reset machine secret.

Dig into this post to see the lab walkthroughs, detection pointers and scripts that back these claims.

submitted by /u/ivxrehc
[link] [comments]

Look mom HR application, look mom no job - phishing using Zoom docs to harvest Gmail creds

Hey all, I found a phishing campaign that uses Zoom's document share flow as the initial trust vector. It forces victims through a fake "bot protection" gate, then shows a Gmail-like login. When someone types credentials, they are pushed out to the attacker over a WebSocket and the backend validates them.

submitted by /u/unknownhad
[link] [comments]

Upcoming Technical Security Talks & Workshops at BsidesNoVA – Oct 10–11 (Arlington VA)

BsidesNoVA (Oct 10–11 at GMU Mason Square, Arlington VA) is a community-run, volunteer-organized security conference.
Sharing here because several of this year’s talks and workshops are deeply technical and may be of interest to practitioners and researchers in the DMV area:

🔹 Detection / Blue-Team / DFIR

  • ATT&CK-driven detection engineering with Sigma & KQL
  • Network-forensics in hybrid environments
  • Memory-forensics at scale on Linux/macOS
  • Threat-intel-driven hunts & breach-simulation lab

🔹 Adversary / Research / OSINT

  • Breaking AI-based phishing detection
  • OSINT pivoting techniques for actor tracking
  • Live breach scenarios in Breach Village

🔹 Other Highlights

  • Capture-the-Flag (real-world IR/OSINT/crypto challenges – $1,000 prize + Black Badge)
  • Hallway-con & villages for DFIR, AI, and CTI collaboration
  • Program is peer-driven; no vendor pitches or sales content

The agenda & CFP archive: https://bsidesnova.org
📍 Oct 10–11 | GMU Mason Square – Arlington VA

Posting with mod awareness; goal is to highlight technical sessions for anyone nearby who wants to learn or collaborate in person.

submitted by /u/JackfruitDirect6803
[link] [comments]

My experience with LLM Code Review vs Deterministic SAST Security Tools

TLDR: LLMs generally perform better than existing SAST tools when you need to answer a subjective question that requires context (i.e., there are lots of ways to define one thing), but they are only as good (or worse) when you're looking for an objective, deterministic output.

AI is all the hype commercially, but at the same time it carries a pretty negative sentiment among practitioners (at least in my experience). It's true there are lots of reasons NOT to use AI, but I wrote a blog post that tries to summarize what AI is actually good at when it comes to reviewing code.

submitted by /u/prestonprice
[link] [comments]

Nuclei Templates for Detecting AMI MegaRAC BMC Vulnerabilities

AMI BMC vulns are on the CISA Known Exploited Vulnerabilities catalog now. I think this is the first BMC vuln to hit the KEV. Here are some Nuclei templates to detect this vuln in your BMCs.

submitted by /u/TechDeepDive
[link] [comments]

r/netsec monthly discussion & tool thread

Questions regarding netsec and discussion related directly to netsec are welcome here, as is sharing tool links.

Rules & Guidelines

  • Always maintain civil discourse. Be awesome to one another - moderator intervention will occur if necessary.
  • Avoid NSFW content unless absolutely necessary. If used, mark it as being NSFW. If left unmarked, the comment will be removed entirely.
  • If linking to classified content, mark it as such. If left unmarked, the comment will be removed entirely.
  • Avoid use of memes. If you have something to say, say it with real words.
  • All discussions and questions should directly relate to netsec.
  • No tech support is to be requested or provided on r/netsec.

As always, the content & discussion guidelines should also be observed on r/netsec.

Feedback

Feedback and suggestions are welcome, but don't post it here. Please send it to the moderator inbox.

submitted by /u/albinowax
[link] [comments]

IPv4/IPv6 Packet Fragmentation: Implementation Details - PacketSmith

In version 3.0 of PacketSmith, which we shipped on Monday, we've added an IPv4/IPv6 fragmenter. Today, we're releasing an article describing some of the implementation details behind it.
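
For readers who want the gist before the article: the core IPv4 rule is that the Fragment Offset field counts 8-byte units, every fragment except the last must carry a payload that is a multiple of 8 bytes, and MF is set on all but the last fragment. Here is a standalone sketch of that rule, not PacketSmith's implementation (IPv6 fragmentation via the Fragment extension header uses the same offset arithmetic).

```typescript
// Core IPv4 fragmentation rule as a standalone sketch (not PacketSmith's code).
interface Fragment {
  offsetUnits: number;    // value for the 13-bit Fragment Offset field (8-byte units)
  moreFragments: boolean; // MF flag
  payload: Buffer;
}

function fragmentIPv4(payload: Buffer, mtu: number, ipHeaderLen = 20): Fragment[] {
  const maxData = Math.floor((mtu - ipHeaderLen) / 8) * 8; // per-fragment payload, 8-byte aligned
  if (maxData <= 0) throw new Error('MTU too small to fragment');
  const frags: Fragment[] = [];
  for (let off = 0; off < payload.length; off += maxData) {
    const chunk = payload.subarray(off, Math.min(off + maxData, payload.length));
    frags.push({
      offsetUnits: off / 8,
      moreFragments: off + chunk.length < payload.length,
      payload: chunk,
    });
  }
  return frags;
}

// Example: a 4000-byte payload over an MTU of 1500 yields fragments at offsets 0, 185, 370
// (byte offsets 0, 1480, 2960), with MF set on the first two only.
console.log(fragmentIPv4(Buffer.alloc(4000), 1500).map((f) => [f.offsetUnits, f.moreFragments]));
```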

submitted by /u/MFMokbel
[link] [comments]

Github - Phishcan/phishcan-data: Canadian threat feeds updated every 12 hours.

I built PhishCan because I was frustrated by the lack of free, accessible resources on cyber threats in Canada.

  • Canada-first focus: phishing data that actually reflects local targets (banks, utilities, government, investor alerts).
  • Open & transparent: everything is free, with a public API and raw data on GitHub.
  • Continuous pipeline: domains, actors, and infrastructure are monitored in near real time and enriched with context every 12 hours

Data on Github: https://github.com/Phishcan/phishcan-data

submitted by /u/Additional_Swan_9280
[link] [comments]

Why “contained” doesn’t mean “safe” in modern SOCs

I’ve been seeing more and more cases where the SOC reports success: process killed, host isolated, dashboard green. Yet weeks later the same organisation is staring at ransom notes or data leaks.

The problem: we treat every alert like a dodgy PDF. Malware was contained. The threat actor was not.

SOCs measure noise (MTTD, MTTR, auto-contain). Adversaries measure impact (persistence, privilege, exfiltration). That’s why even fully “security-compliant” companies lose millions every day. Look at what's happening in the UK.

Curious how others here are approaching this:

  • Do you have workflows that pivot from containment to investigation by default?
  • How do you balance speed vs depth when you suspect a human adversary is involved?
  • Are you baking forensic collection into SOC alerts, or leaving it for the big crises?

Full piece linked for context.

submitted by /u/SuccessfulMountain64
[link] [comments]

New macOS threat abuses ads and social media to spread malware

Moonlock Lab researchers have spotted a new macOS malware campaign that leverages malvertising + fake social media profiles to distribute malicious apps. Once installed, the malware exfiltrates sensitive data and can be updated remotely with new modules. This trend shows that macOS is no longer “low priority” for attackers – they’re actively adapting Windows-style tactics for Apple’s ecosystem.

submitted by /u/Individual-Gas5276
[link] [comments]

Tea continued - Unauthenticated access to 150+ Firebase databases, storage buckets and secrets

These aren't just random mobile apps with a few hundred or thousand downloads. Most of them had 100K+, 1M+, 5M+, 10M+, 50M+, or even 100M+ downloads (the Tea app has only 500K+ downloads).

I’m also releasing OpenFirebase, an automated Firebase security scanner that checks for unauthorized read and/or write access on Firestore, Realtime Database, Storage Buckets, and Remote Config. It performs checks from both unauthenticated and authenticated perspectives, and it can bypass weak Google API key restrictions.
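
As an example of the simplest of those checks, an unauthenticated Realtime Database read can be probed with a single HTTP request. This is a sketch, not OpenFirebase itself; the project name is a placeholder, and some projects live on regional firebasedatabase.app domains instead.

```typescript
// Minimal unauthenticated Realtime Database read check (Node 18+ for global fetch).
// A project with open rules returns JSON for "/.json"; locked-down projects return
// {"error": "Permission denied"}. The project name is a placeholder.
async function checkRtdbRead(project: string): Promise<void> {
  const url = `https://${project}.firebaseio.com/.json?shallow=true`;
  const res = await fetch(url);
  const body = await res.text();
  if (res.ok) {
    console.log(`[!] ${project}: unauthenticated read allowed ->`, body.slice(0, 200));
  } else {
    console.log(`[-] ${project}: ${res.status} ${body.slice(0, 100)}`);
  }
}

checkRtdbRead('example-project');
```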

submitted by /u/Woowowow91
[link] [comments]

The God Mode Vulnerability That Should Kill “Trust Microsoft” Forever

This takes "Single Sign-On" to a whole new level...
It's not about blaming Microsoft or any one vendor, but questioning why our systems are architected so a single platform can wield this kind of control in the first place.
We've written up a perspective on the broader implications and proposed how identity could be built differently. Full disclosure: we're the researchers/devs behind this piece, sharing our own analysis (hopefully this isn't taken as promotion for our non-commercial solution).

submitted by /u/tidefoundation
[link] [comments]