FreshRSS

☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Test Suite

By: /u/RealAspect2373 — August 11th 2025 at 14:29

Hey community, wondering if anyone is available to check my tests and give a peer review. The repo is attached:

https://zenodo.org/records/16794243

https://github.com/mandcony/quantoniumos/tree/main/.github

Cryptanalysis & Randomness Tests

Overall Pass Rate: 82.67% (62 / 75 tests passed)

Avalanche Tests (Bit-flip sensitivity):

Encryption: Mean = 48.99% (σ = 1.27) (Target σ ≤ 2)

Hashing: Mean = 50.09% (σ = 3.10) ⚠︎ (Needs tightening; target σ ≤ 2)
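
For readers who want to reproduce the methodology, an avalanche measurement takes only a few lines. This sketch uses SHA-256 as a stand-in for the project's primitives; the trial count and message length are arbitrary choices:

```python
import hashlib
import os
import statistics

def avalanche_stats(n_trials=200, msg_len=32):
    """Flip one random input bit per trial; measure % of output bits changed."""
    ratios = []
    for _ in range(n_trials):
        msg = bytearray(os.urandom(msg_len))
        h1 = hashlib.sha256(bytes(msg)).digest()
        # flip a single random bit of the input
        i = int.from_bytes(os.urandom(2), "big") % (msg_len * 8)
        msg[i // 8] ^= 1 << (i % 8)
        h2 = hashlib.sha256(bytes(msg)).digest()
        diff = sum(bin(a ^ b).count("1") for a, b in zip(h1, h2))
        ratios.append(100.0 * diff / (len(h1) * 8))
    return statistics.mean(ratios), statistics.stdev(ratios)

mean, sigma = avalanche_stats()
print(f"mean={mean:.2f}% sigma={sigma:.2f}")
```

For an ideal 256-bit hash the per-trial flip count is Binomial(256, 0.5), so a per-trial σ near 3.1 percentage points is expected; σ on the hash side should be judged against that baseline.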

NIST SP 800-22 Statistical Tests (15 core tests):

Passed: the majority of the advanced tests, including runs, serial, and random excursions

Failed: Frequency and Block Frequency tests (bias above tolerance)

Note: such failures are common in unconventional bit-generation schemes and are fixable with bias correction or entropy whitening
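
The failing frequency (monobit) test is also the simplest of the fifteen to reproduce. A minimal sketch following the SP 800-22 definition (a sequence passes when p ≥ 0.01):

```python
import math

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test; pass when p >= 0.01."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)        # map 0/1 to -1/+1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))
```

A perfectly balanced sequence yields p = 1.0, while a strongly biased one drives p toward 0 — which is exactly the bias this battery flagged.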

Dieharder Battery: Passed all applicable tests for bitstream randomness

TestU01 (SmallCrush & Crush): Passed all applicable randomness subtests

Deterministic Known-Answer Tests (KATs)

Encryption and hashing KATs are published in public_test_vectors/ for reproducibility and peer verification
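
KATs can be verified mechanically. A minimal checker sketch — the vector format here is hypothetical (the real vectors live in public_test_vectors/), and SHA-256 with the classic "abc" FIPS 180 vector stands in for the project's hash:

```python
import hashlib

# Hypothetical vector format; SHA-256("abc") is the standard FIPS 180 KAT.
VECTORS = [
    {"msg": "616263",
     "digest": "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"},
]

def check_kats(vectors):
    """Recompute each digest and compare against the published answer."""
    for v in vectors:
        got = hashlib.sha256(bytes.fromhex(v["msg"])).hexdigest()
        assert got == v["digest"], f"KAT mismatch for msg={v['msg']}"
    return True
```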

Summary

QuantoniumOS passes all modern randomness stress tests except two frequency-based NIST tests, with avalanche performance already within target for encryption. Hash σ is slightly above target and should be tightened. Dieharder, TestU01, and cross-domain RFT verification confirm no catastrophic statistical or architectural weaknesses.

submitted by /u/RealAspect2373
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

AI-Powered Code Security Reviews for DevSecOps with Claude

By: /u/mostafahussein — August 11th 2025 at 07:03

Anthropic has released Claude Code Security Review, a new feature that brings AI-powered security checks into development workflows. When integrated with GitHub Actions, it can automatically review pull requests for vulnerabilities, including but not limited to:

- Access control issues (IDOR)

- Risky dependencies

In my latest article, I cover how to set it up and what it looks like in practice.

submitted by /u/mostafahussein
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Vulnerability Management Program - How to implement SLA and its processes

By: /u/pathetiq — August 9th 2025 at 15:28

Defining good SLAs is a tough challenge, but it’s at the heart of any solid vulnerability management program. This article helps internal security teams set clear SLAs, define the right metrics, and adjust their ticketing system to build a successful vulnerability management program.

submitted by /u/pathetiq
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Unclaimed Google Play Store package

By: /u/Accomplished-Dig4025 — August 8th 2025 at 16:41

I came across a broken link hijacking case involving a Google Play Store package. The app link returns a 404, and the package name is currently unclaimed, which means it can potentially be taken over. It's a valid security issue and could be eligible for a bug bounty, though I'm not 100% sure.

The company asked for a working proof of concept, meaning the package has to actually be claimed and uploaded to the Play Store. I haven’t created a developer account myself yet, since I haven’t needed one except for this case and it requires a $25 fee.

If you already have a developer account, would you be willing to contribute by uploading a simple placeholder app using that package name, just to prove the takeover? If the report gets rewarded, I’ll share 10% of the bounty with you. Usually, these types of reports are rewarded with $50 or $100, so I hope you understand I can’t offer more than 10%.

Let me know if you’re open to it.

Thanks!

submitted by /u/Accomplished-Dig4025
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

The Mental Material Revolution: Why Engineers Need to Become Cognitive Architects

By: /u/gabibeyo — August 8th 2025 at 13:55

Why Engineers with Low EQ Might Not Succeed in the AI Era

Here’s a prediction that might ruffle some feathers: The engineers who struggle most in the AI revolution won’t be those who can’t adapt to new frameworks or learn new languages. It’ll be those who can’t master the art of contextualization.

I’m talking about engineers with lower emotional intelligence — brilliant problem-solvers who know exactly what to do and how to do it, but struggle with the subtleties of knowledge transfer. They can debug complex systems and architect elegant solutions, but ask them to explain their reasoning, prioritize information, or communicate nuanced requirements? That’s where things get messy.

In the pre-AI world, this was manageable. Code was the primary interface. Documentation was optional. Communication happened in pull requests and stack overflow posts. But AI has fundamentally changed the game.

Welcome to Context Engineering: The Art of Mental Material

Context engineering is the practice of providing AI systems with the precise “mental material” they need to achieve goals effectively. It’s not just prompt writing or RAG implementation — it’s cognitive architecture. When you hire a new team member, you don’t just hand them a task and walk away. You provide context. You explain the company culture, the project history, the constraints, the edge cases, and the unspoken rules. You share your mental model of the problem space. Context engineering is doing exactly this, but for AI systems.

This shift reveals something interesting: Engineers with lower emotional intelligence often excel at technical execution but struggle with the nuanced aspects of knowledge transfer — deciding what information to share versus omit, expressing complex ideas clearly, and distinguishing between ephemeral and durable knowledge. These communication and prioritization skills, once considered “soft,” are now core technical competencies in context engineering. But let’s move beyond the EQ discussion — the real transformation is much bigger.

Mental material encompasses far more than simple data or documentation. It includes declarative knowledge (facts, data, documentation), procedural knowledge (how to approach problems, methodologies), conditional knowledge (when to apply different strategies), meta-knowledge (understanding about the knowledge itself), contextual constraints (what’s relevant vs. irrelevant for specific tasks), long-term memory (stable patterns, preferences, and principles that rarely change), and short-term memory (session-specific context, recent decisions, and ephemeral state that helps maintain coherence within a particular interaction).

Your New Job Description: AI Mental Engineer

Traditional engineering was about building systems. AI engineering is about designing cognitive architectures. You’re not just writing code — you’re crafting how artificial minds understand and approach problems. This means your daily work now includes memory architecture (deciding what information gets stored where, how it’s organized, and when it gets retrieved — not database design, but epistemological engineering), context strategy (determining what mental material an AI needs for different types of tasks), knowledge curation (maintaining the quality and relevance of information over time, as mental material degrades and becomes outdated), cognitive workflow design (orchestrating how AI systems access, process, and apply contextual information), and metacognitive monitoring (analyzing whether the context strategies are working and adapting them based on outcomes).

The engineers who thrive will be those who can bridge technical precision with cognitive empathy — understanding not just how systems work, but how to help artificial minds understand and reason about problems. This transformation isn’t just about new tools or frameworks. It’s about fundamentally reconceptualizing what engineering means in an AI-first world.

The Context Orchestration Challenge

We’ve built sophisticated AI systems that can reason, write, and solve complex problems, yet we’re still manually feeding them context like we’re spoon-feeding a child. Every AI application faces the same fundamental challenge: How do you help an artificial mind understand what it needs to know?

Currently, we solve this through memory storage systems that dump everything into databases, prompt templates that hope to capture the right context, RAG systems that retrieve documents but don’t understand relevance, and manual curation that doesn’t scale. But nothing that truly understands the intentionality behind a request and can autonomously determine what mental material is needed. We’re essentially doing cognitive architecture manually, request by request, application by application.

We Need a Mental Material Orchestrator

This brings us to a fascinating philosophical question: What would truly intelligent context orchestration look like? Imagine a system that operates as a cognitive intermediary — analyzing not just what someone is asking, but understanding the deeper intentionality behind the request.

Consider this example: “Help me optimize this database query — it’s running slow.” Most systems provide generic query optimization tips, but intelligent context orchestration would perform cognitive analysis to understand that this performance issue has dramatically different underlying intents based on context.

If it’s a junior developer, they need procedural knowledge (how to analyze execution plans) plus declarative knowledge (indexing fundamentals) plus short-term memory (what they tried already this session). If it’s a senior developer under deadline pressure, they need conditional knowledge (when to denormalize vs. optimize) plus long-term memory (this person prefers pragmatic solutions) plus contextual constraints (production system limitations). If it’s an architect reviewing code, they need meta-knowledge (why this pattern emerged) plus procedural knowledge (systematic performance analysis) plus declarative knowledge (system-wide implications).

Context-dependent realities might reveal the “slow query” isn’t actually a query problem — maybe it’s running in a resource-constrained Docker container, or it’s an internal tool used infrequently where 5 milliseconds doesn’t matter. Perhaps the current query is intentionally slower because the optimized version would sacrifice readability (violating team guidelines), and the system should suggest either a local override for performance-critical cases or acceptance of the minor delay.

The problem with even perfect prompts is clear: You could craft the world’s best prompt about database optimization, but without understanding who is asking, why they’re asking, and what they’ve already tried, you’re essentially giving a lecture to someone who might need a quick fix, a learning experience, or a strategic decision framework. And even if you could anticipate every scenario, you’d quickly hit token limits trying to include all possible contexts in a single prompt. The context strategy must determine not just what information to provide, but what type of mental scaffolding the person needs to successfully integrate that information — and dynamically assemble only the relevant context for that specific interaction.
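
The role-dependent, budget-bounded assembly described above can be sketched as a toy selector. Every name and knowledge snippet below is hypothetical, purely to illustrate the idea of picking only the knowledge types a given requester needs:

```python
# Hypothetical knowledge registry keyed by the knowledge types from the text.
KNOWLEDGE = {
    "procedural": "How to read an execution plan (EXPLAIN ANALYZE walkthrough).",
    "declarative": "B-tree vs. hash index fundamentals.",
    "conditional": "When to denormalize instead of optimizing further.",
    "long_term": "This user prefers pragmatic, low-risk fixes.",
    "short_term": "Already tried adding an index on created_at this session.",
    "constraints": "Production DB: no schema migrations during business hours.",
    "meta": "Why this query pattern emerged in the codebase.",
}

# Which knowledge types each requester profile needs (from the example above).
ROLE_NEEDS = {
    "junior": ["procedural", "declarative", "short_term"],
    "senior_under_deadline": ["conditional", "long_term", "constraints"],
    "architect": ["meta", "procedural", "declarative"],
}

def assemble_context(role, budget_chars=500):
    """Assemble only the relevant context for this role, within a size budget."""
    parts, used = [], 0
    for kind in ROLE_NEEDS[role]:
        chunk = KNOWLEDGE[kind]
        if used + len(chunk) > budget_chars:
            break                      # token-limit stand-in: stop at the budget
        parts.append(f"[{kind}] {chunk}")
        used += len(chunk)
    return "\n".join(parts)
```

A real orchestrator would infer the role and intent instead of taking them as parameters — that inference is precisely the hard part the article argues for.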

The Deeper Implications

This transformation raises profound questions about the nature of intelligence and communication. What does it mean to “understand” a request? When we ask an AI to help with a coding problem, are we asking for code, explanation, learning, validation, or something else entirely? Human communication is layered with implied context and unspoken assumptions. How do we formalize intuition? Experienced engineers often “just know” what information is relevant for a given situation. How do we encode that intuitive understanding into systems? What is the relationship between knowledge and context? The same piece of information can be useful or distracting depending on the cognitive frame it’s presented within.

These aren’t just technical challenges — they’re epistemological ones. We’re essentially trying to formalize how minds share understanding with other minds.

From Code Monkey to Cognitive Architect

This transformation requires fundamentally reconceptualizing what engineering means in an AI-first world, but it’s crucial to understand that we’re not throwing decades of engineering wisdom out the window. All the foundational engineering knowledge you’ve accumulated — design patterns, data structures and algorithms, system architecture, software engineering principles (SOLID, DRY, KISS), database design, distributed systems concepts, performance optimization, testing methodologies, security practices, code organization and modularity, error handling and resilience patterns, scalability principles, and debugging methodologies — remains incredibly valuable.

This knowledge serves a dual purpose in the AI era. First, it enables you to create better mental material by providing AI systems with proven patterns, established principles, and battle-tested approaches rather than ad-hoc solutions. When you teach an AI about system design, you’re drawing on decades of collective engineering wisdom about what works and what doesn’t. Second, this deep technical knowledge allows you to act as an intelligent co-pilot, providing real-time feedback and corrections as AI systems work through problems. You can catch when an AI suggests an anti-pattern, guide it toward more robust solutions, or help it understand why certain trade-offs matter in specific contexts.

Importantly, these real-time corrections and refinements should themselves become part of the mental material. When you guide an AI away from a poor architectural choice or toward a better algorithm, that interaction should be captured and integrated into the system’s knowledge base, making it progressively more precise and aligned with good engineering practices over time.

Traditional engineering focused on deterministic systems, optimized for performance and reliability, measured success by uptime and speed, and treated communication as secondary to functionality. AI engineering designs probabilistic, context-dependent systems, optimizes for effectiveness and adaptability, measures success by goal achievement and learning, and makes communication a core technical competency — but it builds on all the foundational principles that make software systems robust and maintainable.

If you’re an engineer reading this, here’s how to prepare for the mental material revolution: Develop context awareness by thinking about the knowledge transfer patterns in your current work. How do you onboard new team members? How do you document complex decisions? These skills directly translate to context engineering. Practice explanatory engineering by forcing yourself to articulate not just what you’re building, but why, how, and when. Write documentation as if you’re teaching someone who’s brilliant but has no context about your domain. Study cognitive architecture to understand how humans process information, make decisions, and apply knowledge — this will help you design better AI context strategies. Build context systems by experimenting with prompt engineering, RAG systems, and memory management. Embrace the meta-layer and get comfortable with systems that manage other systems, as context orchestration is inherently meta-engineering.

The Future is Cognitive

We’re entering an era where the most valuable engineers won’t be those who can write the most elegant algorithms, but those who can design the most effective cognitive architectures. The ability to understand, communicate, and orchestrate mental material will become as fundamental as understanding data structures and algorithms.

The question isn’t whether this transformation will happen — it’s already underway. The question is whether you’ll be building the mental scaffolding that powers the next generation of AI systems, or whether you’ll be left behind trying to manually manage context in an increasingly automated world. Your emotional intelligence isn’t just a nice-to-have soft skill anymore. It’s becoming your most valuable engineering asset.

The mental material revolution is here. Are you ready to become a cognitive architect?

What’s your experience with context engineering? Are you already seeing this shift in your organization? Share your thoughts and let’s discuss how we can build better mental material orchestration systems together.

submitted by /u/gabibeyo
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

The Silent Security Crisis: How AI Coding Assistants Are Creating Perfect Attack Blueprints

By: /u/gabibeyo — August 8th 2025 at 13:51

What I Found When I Monitored Claude CLI for One Day

While building an MCP server last week, I got curious about what Claude CLI stores locally on my machine.

A simple 24-hour monitoring experiment revealed a significant security blind spot that most developers aren't aware of.

What I found in my AI conversation logs:

• API keys for multiple services (OpenAI, GitHub, AWS)

• Database connection strings with credentials

• Detailed tech stack and architecture discussions

• Team processes and organizational context

• Personal debugging patterns and approaches

All stored locally in plain text, searchable, and organized by timestamp.

The adoption vs. security gap:

Adoption reality: 500K+ developers now use AI coding assistants daily

Security awareness: Most teams haven't considered what's being stored locally

The disconnect: We're moving fast on AI integration but haven't updated our security practices to match

Why this matters:

Traditional security assumes attackers need time and expertise to map your systems. AI conversation logs change that equation - they contain pre-analyzed intelligence about your infrastructure, complete with context and explanations.

It's like having detailed reconnaissance already done, just sitting in text files.

"But if someone has my laptop, I'm compromised anyway, right?"

This is the pushback I keep hearing, and it misses the key difference:

Traditional laptop access = attackers hunt through scattered files for days or weeks.

AI conversation logs = a complete, contextualized intelligence report you personally wrote.

Instead of reverse-engineering your setup, they get: "I'm connecting to our MongoDB cluster at mongodb://admin:password@prod-server - can you help debug this?"

The reconnaissance work is already done. They just read your explanations.
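
A defensive counterpart is to scan your own local logs for exactly these patterns before anyone else does. A minimal sketch — the patterns are illustrative, not exhaustive, and log locations vary by tool and version:

```python
import re
from pathlib import Path

# Illustrative secret patterns; extend for your own services.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "mongodb_uri": re.compile(r"mongodb(?:\+srv)?://[^\s\"']+:[^\s\"']+@[^\s\"']+"),
}

def scan_text(text):
    """Return (pattern_name, match) pairs found in a conversation log."""
    hits = []
    for name, pat in PATTERNS.items():
        hits.extend((name, m) for m in pat.findall(text))
    return hits

def scan_logs(log_dir):
    """Walk a log directory and report files containing secret-shaped strings."""
    findings = {}
    for path in Path(log_dir).rglob("*.json*"):
        hits = scan_text(path.read_text(errors="ignore"))
        if hits:
            findings[str(path)] = hits
    return findings
```

Pointing `scan_logs` at wherever your assistant stores conversations gives a quick inventory of what an attacker with file access would inherit.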

The interesting part:

Claude initially refused to help me build a monitoring script, thinking I was trying to attack a system. Yet the same AI would likely help an attacker who asked politely about "monitoring their own files for research."


I've written up the full technical discovery process, including the monitoring methodology and security implications.

Read the complete analysis: https://medium.com/@gabi.beyo/the-silent-security-crisis-how-ai-coding-assistants-are-creating-perfect-attack-blueprints-71fd375d51a3

How is your team handling AI conversation data? Are local storage practices part of your security discussions?

#DevSecurity #AI #EngineeringLeadership #CyberSecurity

submitted by /u/gabibeyo
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

OdooMap - A Pentesting Tool for Odoo Applications

By: /u/Fluid-Profit-164 — August 5th 2025 at 15:47

Could you review my new security testing tool? https://github.com/MohamedKarrab/odoomap

Features:

• Detect Odoo version & exposed metadata

• Enumerate databases and accessible models

• Authenticate & verify CRUD permissions per model

• Extract data from chosen models (e.g. res.users, res.partner)

• Brute-force login credentials (default, custom user/pass, wordlists)

• Brute-force internal model names when listing fails
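
Version detection in particular can be sketched against Odoo's documented public XML-RPC interface, which answers `version()` unauthenticated on a stock install (the endpoint may be disabled on hardened deployments; the target URL below is a placeholder):

```python
import xmlrpc.client

def odoo_version(url):
    """Query Odoo's public XML-RPC 'common' endpoint for server version info."""
    common = xmlrpc.client.ServerProxy(f"{url}/xmlrpc/2/common")
    return common.version()  # dict including 'server_version'

def major_version(version_info):
    """Extract the numeric major version from a version() response dict."""
    return int(str(version_info.get("server_version", "0")).split(".")[0])

# info = odoo_version("https://target.example")  # only with authorization
```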

submitted by /u/Fluid-Profit-164
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Be patient and keep it simple.

By: /u/anasbetis94 — August 2nd 2025 at 15:31

Hello all,

I just published a new write-up about bugs I found recently, titled 'Be patient and keep it simple, the bug is there'. I hope you like it.

submitted by /u/anasbetis94
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Forced to give your password? Here is the solution.

By: /u/marcusfrex — August 2nd 2025 at 11:02

Let's imagine a scenario where you're coerced, whether through threats, torture, or even legal pressure, into revealing the password to your secure vault.

In countries like the US, UK, and Australia, refusing to provide passwords to law enforcement can, in certain cases, result in months in prison.

I built a solution called Veilith (veilith.com) that addresses this critical vulnerability with deniable encryption. It supports multiple passwords, each unlocking distinct blocks of encrypted data that are indistinguishable from random noise, even to experts. It also offers a number of other features to protect your intellectual property.

In high-stakes situations, simply provide a decoy password and plausibly deny the existence of anything more.
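
For intuition about how a multiple-password container can work, here is a toy stdlib-only sketch of the general idea (not Veilith's actual design, and not production crypto): each password unlocks one fixed-size slot, and unused slots are pure random noise, so without the right key no slot is distinguishable from a decoy.

```python
import hashlib
import hmac
import os

SLOT = 256  # every slot is the same size, real or decoy

def _kdf(password, salt):
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)

def _stream(key, n):
    """Toy keystream: chained SHA-256 blocks (illustrative only)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def make_container(secrets, n_slots=4):
    """secrets: {password: message}; unused slots stay pure random noise."""
    salt = os.urandom(16)
    slots = [bytearray(os.urandom(SLOT)) for _ in range(n_slots)]
    for i, (pw, msg) in enumerate(secrets.items()):
        key = _kdf(pw, salt)
        body = len(msg).to_bytes(2, "big") + msg
        body += os.urandom(SLOT - 32 - len(body))       # pad body to slot size
        ct = bytes(a ^ b for a, b in zip(body, _stream(key, len(body))))
        slots[i][:] = hmac.new(key, ct, hashlib.sha256).digest() + ct
    return salt + b"".join(slots)

def open_container(container, password):
    salt = container[:16]
    key = _kdf(password, salt)
    for i in range(16, len(container), SLOT):
        tag, ct = container[i:i + 32], container[i + 32:i + SLOT]
        if hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
            body = bytes(a ^ b for a, b in zip(ct, _stream(key, len(ct))))
            n = int.from_bytes(body[:2], "big")
            return body[2:2 + n]
    return None  # wrong password: nothing to distinguish from decoys
```

Handing over the decoy password reveals only the decoy slot; the remaining slots carry no authenticator an examiner can check without the other passwords.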

Dive deeper by reading the whitepaper, exploring the open-source code, or asking me any questions you may have.

submitted by /u/marcusfrex
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

What the Top 20 OSS Vulnerabilities Reveal About the Real Challenges in Security Governance

By: /u/repoog — August 2nd 2025 at 04:13

In the past few years, I’ve worked closely with enterprise security teams to improve their open source governance processes. One recurring theme I keep seeing is this: most organizations know they have issues with OSS component vulnerabilities—but they’re stuck when it comes to actually governing them.

To better understand this, we analyzed the top 20 most vulnerable open source components commonly found in enterprise Java stacks (e.g., jackson-databind, shiro, mysql-connector-java) and realized something important:

Vulnerabilities aren’t just about CVE counts—they’re indicators of systemic governance blind spots.

Here’s the full article with breakdowns:
[From the Top 20 Open Source Component Vulnerabilities: Rethinking the Challenges of Open Source Security Governance](#)

submitted by /u/repoog
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

The free, online, practical 'Introduction to Security' class from the Czech Technical University is now open.

By: /u/sebagarcia — August 1st 2025 at 17:12

The 2025 free online class is open, with intense hands-on practical cyber range-based exercises and AI topics. Attack, defend, learn, and get better!

submitted by /u/sebagarcia
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

r/netsec monthly discussion & tool thread

By: /u/albinowax — August 1st 2025 at 13:29

Questions regarding netsec and discussion related directly to netsec are welcome here, as is sharing tool links.

Rules & Guidelines

  • Always maintain civil discourse. Be awesome to one another - moderator intervention will occur if necessary.
  • Avoid NSFW content unless absolutely necessary. If used, mark it as being NSFW. If left unmarked, the comment will be removed entirely.
  • If linking to classified content, mark it as such. If left unmarked, the comment will be removed entirely.
  • Avoid use of memes. If you have something to say, say it with real words.
  • All discussions and questions should directly relate to netsec.
  • No tech support is to be requested or provided on r/netsec.

As always, the content & discussion guidelines should also be observed on r/netsec.

Feedback

Feedback and suggestions are welcome, but don't post it here. Please send it to the moderator inbox.

submitted by /u/albinowax
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Created a Penetration Testing Guide to Help the Community, Feedback Welcome!

By: /u/Bitter_Increase3590 — July 27th 2025 at 04:19

Hi everyone,

I just created my first penetration testing guide on GitBook! Here’s the link: My Penetration Test Guide

I started this project because I wanted to learn more and give something useful back to the community. It’s mostly beginner-friendly but hopefully helpful for pros too.

The guide is a work in progress, and I plan to add new topics, visuals, and real-world examples over time.

Feel free to check it out, and if you have any feedback or ideas, I’d love to hear from you!

submitted by /u/Bitter_Increase3590
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

How to find Black Hat and DEF CON papers

By: /u/Green_Sky_99 — July 26th 2025 at 09:10

I know we have the presentation materials, but are we able to find the papers for these? For example, from 2024.

submitted by /u/Green_Sky_99
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Admin Emails & Passwords Exposed via HTTP Method Change

By: /u/General_Speaker9653 — July 26th 2025 at 01:32

Just published a new write-up where I walk through how a small HTTP method misconfiguration led to admin credentials being exposed.

It's a simple but impactful example of why misconfigurations matter.
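
The general technique is easy to reproduce against your own endpoints: replay the same request with each HTTP verb and diff the status codes. A minimal stdlib sketch (test only systems you're authorized to assess):

```python
import urllib.error
import urllib.request

METHODS = ["GET", "POST", "PUT", "DELETE", "PATCH", "HEAD", "OPTIONS"]

def probe_methods(url):
    """Send the same request with each verb and record the status code.
    A 403 on GET that turns into a 200 on another verb deserves a closer look."""
    results = {}
    for m in METHODS:
        req = urllib.request.Request(url, method=m)
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                results[m] = resp.status
        except urllib.error.HTTPError as e:
            results[m] = e.code          # non-2xx still tells us something
        except Exception as e:
            results[m] = repr(e)         # connection errors, timeouts, etc.
    return results
```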

📖 Read it here: https://is4curity.medium.com/admin-emails-passwords-exposed-via-http-method-change-da23186f37d3

Let me know what you think — and feel free to share similar cases!

#bugbounty #infosec #pentest #writeup #websecurity

submitted by /u/General_Speaker9653
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

How to craft a raw TCP socket without Winsock?

By: /u/ReynardSec — July 23rd 2025 at 11:35

Mateusz Lewczak explains how the AFD.sys driver works under the hood on Windows 11. In Part 1 [1], he demonstrates how to use WinDbg and the NtCreateFile call to manually craft a raw TCP socket, bypassing the Winsock layer entirely.

Part 2 of the series [2] dives into the bind and connect operations implemented via AFD.sys IOCTLs. Mateusz shows how to intercept and analyze IRP packets, then reconstruct the buffer needed to perform the three‑way TCP handshake by hand in kernel mode.

[1] https://leftarcode.com/posts/afd-reverse-engineering-part1/ [2] https://leftarcode.com/posts/afd-reverse-engineering-part2/

submitted by /u/ReynardSec
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

How we Rooted Copilot

By: /u/vaizor — July 25th 2025 at 11:33

#️⃣ How we Rooted Copilot #️⃣

After a long week of SharePointing, the Eye Security Research Team thought it was time for a small light-hearted distraction for you to enjoy this Friday afternoon.

So we rooted Copilot.

It might have tried to dissuade us from doing so, but we gave it enough ice cream to keep it satisfied and then fed it our exploit.

Read the full story on our research blog - https://research.eye.security/how-we-rooted-copilot/

submitted by /u/vaizor
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

🧠 Countdown to BSides Basingstoke – Talk + CTF Incoming!

By: /u/DifferenceNorth1427 — July 23rd 2025 at 21:22

35 hours 26 minutes until doors open at BSides Basingstoke
💻 35 hours 24 minutes until I launch the CtrlAltCTF: ./go.py
🎤 35 hours 26 minutes until I speak on:
"Breaking In and Paying Back – My Cyber Security Journey"

🕹️ CTF Link: https://bsidesctf.ctrlaltcyber.co.uk
🗓️ Talk Schedule: https://www.bsidesbasingstoke.com/schedule-25

If you’re attending, come say hi! If you're into CTFs, check out the challenges and let me know what you think

submitted by /u/DifferenceNorth1427
[link] [comments]