FreshRSS

☐ ☆ ✇ /r/netsec - Information Security News & Discussion

AI-Powered Code Security Reviews for DevSecOps with Claude

By: /u/mostafahussein — August 11th 2025 at 07:03

Anthropic has released Claude Code Security Review, a new feature that brings AI-powered security checks into development workflows. When integrated with GitHub Actions, it can automatically review pull requests for vulnerabilities, including but not limited to:

- Access control issues (IDOR)

- Risky dependencies

In my latest article, I cover how to set it up and what it looks like in practice.
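
For a rough sense of what such a check does under the hood, here is a minimal sketch (not the official GitHub Action) that sends a git diff to Claude via the Anthropic Python SDK and prints any findings; the model id and prompt wording are illustrative assumptions.

    # Minimal sketch of an AI-assisted diff review -- NOT the official
    # claude-code-security-review action; model id and prompt are assumptions.
    import subprocess
    import anthropic  # pip install anthropic; requires ANTHROPIC_API_KEY

    def review_diff(base: str = "origin/main") -> str:
        # Collect the changes a pull request would introduce.
        diff = subprocess.run(
            ["git", "diff", base, "--unified=3"],
            capture_output=True, text=True, check=True,
        ).stdout
        client = anthropic.Anthropic()
        message = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder model id
            max_tokens=1500,
            messages=[{
                "role": "user",
                "content": "Review this diff for security issues such as IDOR, "
                           "injection, and risky dependencies. Report each finding "
                           "with file, line, severity, and a suggested fix.\n\n" + diff,
            }],
        )
        return message.content[0].text

    if __name__ == "__main__":
        print(review_diff())

In the hosted GitHub Actions integration described in the post, this kind of review runs automatically on each pull request.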

submitted by /u/mostafahussein
[link] [comments]
☐ ☆ ✇ WIRED

How to Protect Yourself From Portable Point-of-Sale Scams

By: Diego Barbera — August 10th 2025 at 10:00
POS scams are difficult but not impossible to pull off. Here's how they work—and how you can protect yourself.
☐ ☆ ✇ WIRED

A Special Diamond Is the Key to a Fully Open Source Quantum Sensor

By: Lily Hay Newman — August 9th 2025 at 18:40
Quantum sensors can be used in medical technologies, navigation systems, and more, but they’re too expensive for most people. That's where the Uncut Gem open source project comes in.
☐ ☆ ✇ WeLiveSecurity

Black Hat USA 2025: Is a high cyber insurance premium about your risk, or your insurer’s?

— August 8th 2025 at 14:25
A sky-high premium may not always reflect your company’s security posture
☐ ☆ ✇ WeLiveSecurity

Android adware: What is it, and how do I get it off my device?

— August 8th 2025 at 09:00
Is your phone suddenly flooded with aggressive ads, slowing down performance or leading to unusual app behavior? Here’s what to do.
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Vulnerability Management Program - How to implement SLA and its processes

By: /u/pathetiq — August 9th 2025 at 15:28

Defining good SLAs is a tough challenge, but it’s at the heart of any solid vulnerability management program. This article helps internal security teams set clear SLAs, define the right metrics, and adjust their ticketing system to build a successful vulnerability management program.
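
As a concrete illustration of the kind of SLA the article discusses, the sketch below maps severity to a remediation window and flags overdue findings; the specific day counts are assumed examples, not recommendations from the article.

    # Illustrative only: severity-to-SLA mapping and breach check for a
    # vulnerability ticket. The day counts below are assumed examples.
    from datetime import date, timedelta

    SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

    def sla_status(severity: str, opened: date, today: date | None = None) -> dict:
        today = today or date.today()
        due = opened + timedelta(days=SLA_DAYS[severity.lower()])
        return {
            "due_date": due,
            "days_remaining": (due - today).days,
            "breached": today > due,
        }

    # Example: a high-severity finding opened on July 1st.
    print(sla_status("high", date(2025, 7, 1), today=date(2025, 8, 9)))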

submitted by /u/pathetiq
[link] [comments]
☐ ☆ ✇ WIRED

The US Court Records System Has Been Hacked

By: Dell Cameron, Andrew Couts — August 9th 2025 at 10:30
Plus: Instagram sparks a privacy backlash over its new map feature, hackers steal data from Google's customer support system, and the true scope of the Columbia University hack comes into focus.
☐ ☆ ✇ WIRED

Ex-NSA Chief Paul Nakasone Has a Warning for the Tech World

By: Lily Hay Newman — August 8th 2025 at 23:21
At the Defcon security conference in Las Vegas on Friday, Nakasone tried to thread the needle in a politically fraught moment while hinting at major changes for the tech community around the corner.
☐ ☆ ✇ McAfee Blogs

Instagram’s New Tracking Feature: What You Need to Know to Stay Safe 

By: Jasdev Dhaliwal — August 8th 2025 at 22:40

Meta has unleashed a groundbreaking feature that transforms Instagram from a photo-sharing platform into a real-time location broadcaster. While the company promises enhanced connectivity, cybersecurity experts are sounding alarm bells about potential dangers lurking beneath this seemingly innocent update. 

Understanding the Digital Surveillance Landscape

Instagram’s freshly minted “Map” functionality represents a seismic shift in social media architecture. Unlike traditional posting where you deliberately choose what to share, this feature operates as an always-on location transmitter that continuously broadcasts your whereabouts to selected contacts whenever you launch the application. 

The mechanism mirrors Snapchat’s infamous Snap Map, but with Instagram’s massive user base—over 2 billion active accounts—the implications for personal security amplify exponentially. This feature enables users to share their real-time location with friends and view theirs on a live map, but it also raises serious privacy concerns from targeted advertising to potential stalking and misuse in abusive relationships. 

McAfee’s Chief Technology Officer Steve Grobman provides crucial context: “Features like location sharing aren’t inherently bad, but they come with tradeoffs. It’s about making informed choices. When people don’t fully understand what’s being shared or who can see it, that’s when it becomes a risk.” 

The Hidden Dangers Every Consumer Should Recognize 

Stalking and Harassment Vulnerabilities 

Digital predators can exploit location data to track victims with unprecedented precision. Relationship and parenting experts warn location sharing can turn into a stressful or even dangerous form of control, with research showing that 19 percent of 18 to 24-year-olds think it’s reasonable to expect to track an intimate partner’s location. 

Steve Grobman emphasizes the real-world implications: “There’s also a real-world safety concern. If someone knows where you are in real time, that could lead to stalking, harassment, or even assault. Location data can be powerful, and in the wrong hands, dangerous.” 

Professional and Personal Boundary Erosion

Your boss, colleagues, or acquaintances might gain unwanted insights into your personal activities. Imagine explaining why you visited a competitor’s office or why you called in sick while appearing at a shopping center. 

The Social Network Vulnerability

The danger often comes from within your own network. Grobman warns: “It only takes one person with bad intentions for location sharing to become a serious problem. You may think your network is made up of friends, but in many cases, people accept requests from strangers or someone impersonating a contact without really thinking about the consequences.” 

Data Mining and Commercial Exploitation

While Instagram claims it doesn’t use location data from this feature for ad targeting, the platform’s history with user data suggests caution. Your movement patterns create valuable behavioral profiles for marketers. 

The Mosaic Effect: Building Detailed Profiles

Cybercriminals employ sophisticated data aggregation techniques. According to Grobman: “Criminals can use what’s known as the mosaic effect, combining small bits of data like your location, routines, and social posts to build a detailed profile. They can use that information to run scams against a consumer or their connections, guess security questions, or even commit identity theft.” 

Immediate Action Steps: Protecting Your Digital Territory

Step 1: Verify Your Current Status 

For iPhone Users: 

  • Launch Instagram and navigate to your Direct Messages (DM) inbox 
  • Look for the “Map” icon at the top of your message list 
  • If present, tap to access the feature 
  • Check if your location is currently being broadcast 

For Android Users: 

  • Open Instagram and go to your DM section
  • Locate the map symbol above your conversation threads
  • Select the map to examine your sharing status 

Step 2: Disable Location Broadcasting Within Instagram

Method 1: Through the Map Interface 

  • Access the Map feature in your DMs
  • Tap the Settings gear icon in the upper-right corner 
  • Select “Who can see your location” 
  • Choose “No One” to completely disable sharing 
  • Confirm your selection 

Method 2: Through Profile Settings 

  • Navigate to your Instagram profile 
  • Tap the three horizontal lines (hamburger menu) 
  • Select Settings and Activity 
  • Choose “Privacy and Security” 
  • Find “Story, Live and Location” section 
  • Tap “Location Sharing” 
  • Set preferences to “No One” 

Step 3: Implement Device-Level Protection

iPhone Security Configuration: 

  • Open Settings on your device 
  • Scroll to Privacy & Security 
  • Select Location Services 
  • Find Instagram in the app list 
  • Choose “Never” or “Ask Next Time” 

Android Security Setup: 

  • Access Settings on your phone 
  • Navigate to Apps or Application Manager 
  • Locate Instagram 
  • Select Permissions 
  • Find Location and switch to “Don’t Allow” 

Step 4: Verify Complete Deactivation

After implementing these changes: 

  • Restart the Instagram application 
  • Check the Map feature again 
  • Ensure your location doesn’t appear 
  • Ask trusted contacts to confirm you’re invisible on their maps 

Advanced Privacy Fortification Strategies

Audit Your Digital Footprint 

Review all social media platforms for similar location-sharing features. Snapchat, Facebook, and TikTok offer comparable functionalities that require individual deactivation. 

Implement Location Spoofing Awareness 

Some users consider VPN services or location-spoofing applications, but these methods can violate platform terms of service and create additional security vulnerabilities. 

Regular Security Hygiene 

Establish monthly reviews of your privacy settings across all social platforms. Companies frequently update features and reset user preferences without explicit notification. 

Grobman emphasizes the challenge consumers face: “Most social platforms offer privacy settings that offer fine-grained control, but the reality is many people don’t know those settings exist or don’t take the time to use them. That can lead to oversharing, especially when it comes to things like your location.” 

Family Protection Protocols 

If you’re a parent with supervision set up for your teen, you can control their location sharing experience on the map, get notified when they enable it, and see who they’re sharing with. Implement these controls immediately for underage family members. 

Understanding the Technical Mechanics 

Data Collection Frequency 

Your location updates whenever you open the app or return to it while running in the background. This means Instagram potentially logs your position multiple times daily, creating detailed movement profiles. 

Data Retention Policies 

Instagram claims to hold location data for a maximum of three days, but this timeframe applies only to active sharing, not the underlying location logs the platform maintains for other purposes. 

Visibility Scope 

Even with location sharing disabled, you can still see others’ shared locations on the map if they’ve enabled the feature. This asymmetric visibility creates potential social pressure to reciprocate sharing. 

Red Flags and Warning Signs 

Monitor these indicators that suggest your privacy may be compromised: 

  • Unexpected visitors appearing at locations you’ve visited 
  • Colleagues or acquaintances referencing your whereabouts without your disclosure
  • Targeted advertisements for businesses near places you’ve recently visited
  • Friends asking about activities they shouldn’t know about 

The Broader Cybersecurity Context

This Instagram update represents a concerning trend toward ambient surveillance in social media. Companies increasingly normalize continuous data collection by framing it as connectivity enhancement. As consumers, we must recognize that convenience often comes at the cost of privacy. 

The feature’s opt-in design provides some protection, but user reports suggest the system may automatically activate for users with older app versions who previously granted location permissions. This highlights the importance of proactive privacy management rather than reactive protection. 

Your Privacy Action Plan

Immediate (Next 10 Minutes): 

  • Disable Instagram location sharing using the steps above
  • Check device-level location permissions for Instagram 

This Week: 

  • Audit other social media platforms for similar features
  • Review and update privacy settings across all digital accounts
  • Inform family members about these privacy risks 

Monthly Ongoing: 

  • Monitor Instagram for new privacy-affecting features 
  • Review location permissions for all mobile applications 
  • Stay informed about emerging digital privacy threats 

Expert-Recommended Protection Strategy:

Grobman advises a comprehensive approach: “The best thing you can do is stay aware and take control. Review your app permissions, think carefully before you share, and use tools that help protect your privacy. McAfee+ includes identity monitoring, scam detection. McAfee’s VPN keeps your IP address private, but if a consumer allows an application to identify its location via GPS or other location services, VPNs will not protect location in that scenario. Staying safe online is always a combination of the best technology along with good digital street smarts.” 

Remember: Your location data tells the story of your life—where you work, live, worship, shop, and spend leisure time. Protecting this information isn’t paranoia; it’s fundamental digital hygiene in our hyper-connected world. 

The choice to share your location should always remain yours, made with full awareness of the implications. By implementing these protective measures, you’re taking control of your digital footprint and safeguarding your personal security in an increasingly surveilled digital landscape. 

 

The post Instagram’s New Tracking Feature: What You Need to Know to Stay Safe  appeared first on McAfee Blog.

☐ ☆ ✇ WIRED

Hackers Went Looking for a Backdoor in High-Security Safes—and Now Can Open Them in Seconds

By: Andy Greenberg — August 8th 2025 at 20:20
Security researchers found two techniques to crack at least eight brands of electronic safes—used to secure everything from guns to narcotics—that are sold with Securam Prologic locks.
☐ ☆ ✇ WIRED

A Misconfiguration That Haunts Corporate Streaming Platforms Could Expose Sensitive Data

By: Lily Hay Newman — August 8th 2025 at 17:00
A security researcher discovered that flawed API configurations are plaguing corporate livestreaming platforms, potentially exposing internal company meetings—and he's releasing a tool to find them.
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Unclaimed Google Play Store package

By: /u/Accomplished-Dig4025 — August 8th 2025 at 16:41

I came across a broken link hijacking case involving a Google Play Store package. The app link returns a 404, and the package name is currently unclaimed, which means it can potentially be taken over. It’s a valid security issue and could be eligible for a bug bounty, though I'm not 100% sure.

The company asked for a working proof of concept, meaning the package has to actually be claimed and uploaded to the Play Store. I haven’t created a developer account myself yet, since I haven’t needed one except for this case and it requires a $25 fee.

If you already have a developer account, would you be willing to contribute by uploading a simple placeholder app using that package name, just to prove the takeover? If the report gets rewarded, I’ll share 10% of the bounty with you. Usually, these types of reports are rewarded with $50 or $100, so I hope you understand I can’t offer more than 10%.

Let me know if you’re open to it.

Thanks!

submitted by /u/Accomplished-Dig4025
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

The Mental Material Revolution: Why Engineers Need to Become Cognitive Architects

By: /u/gabibeyo — August 8th 2025 at 13:55

Why Engineers with Low EQ Might Not Succeed in the AI Era

Here’s a prediction that might ruffle some feathers: The engineers who struggle most in the AI revolution won’t be those who can’t adapt to new frameworks or learn new languages. It’ll be those who can’t master the art of contextualization.

I’m talking about engineers with lower emotional intelligence — brilliant problem-solvers who know exactly what to do and how to do it, but struggle with the subtleties of knowledge transfer. They can debug complex systems and architect elegant solutions, but ask them to explain their reasoning, prioritize information, or communicate nuanced requirements? That’s where things get messy.

In the pre-AI world, this was manageable. Code was the primary interface. Documentation was optional. Communication happened in pull requests and stack overflow posts. But AI has fundamentally changed the game.

Welcome to Context Engineering: The Art of Mental Material

Context engineering is the practice of providing AI systems with the precise “mental material” they need to achieve goals effectively. It’s not just prompt writing or RAG implementation — it’s cognitive architecture. When you hire a new team member, you don’t just hand them a task and walk away. You provide context. You explain the company culture, the project history, the constraints, the edge cases, and the unspoken rules. You share your mental model of the problem space. Context engineering is doing exactly this, but for AI systems.

This shift reveals something interesting: Engineers with lower emotional intelligence often excel at technical execution but struggle with the nuanced aspects of knowledge transfer — deciding what information to share versus omit, expressing complex ideas clearly, and distinguishing between ephemeral and durable knowledge. These communication and prioritization skills, once considered “soft,” are now core technical competencies in context engineering. But let’s move beyond the EQ discussion — the real transformation is much bigger.

Mental material encompasses far more than simple data or documentation. It includes declarative knowledge (facts, data, documentation), procedural knowledge (how to approach problems, methodologies), conditional knowledge (when to apply different strategies), meta-knowledge (understanding about the knowledge itself), contextual constraints (what’s relevant vs. irrelevant for specific tasks), long-term memory (stable patterns, preferences, and principles that rarely change), and short-term memory (session-specific context, recent decisions, and ephemeral state that helps maintain coherence within a particular interaction).
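
One way to make this taxonomy concrete is to model it as plain data structures; the sketch below is an illustrative Python rendering of the categories listed above, not a reference to any existing framework.

    # Illustrative data model for "mental material" -- names mirror the
    # categories above; this is a sketch, not an existing library.
    from dataclasses import dataclass, field
    from enum import Enum

    class KnowledgeKind(Enum):
        DECLARATIVE = "facts, data, documentation"
        PROCEDURAL = "how to approach problems, methodologies"
        CONDITIONAL = "when to apply different strategies"
        META = "knowledge about the knowledge itself"
        CONSTRAINT = "what is relevant vs. irrelevant for a task"

    @dataclass
    class MentalMaterial:
        kind: KnowledgeKind
        content: str
        durable: bool = True        # long-term memory vs. session-scoped state
        tags: list[str] = field(default_factory=list)

    @dataclass
    class ContextBundle:
        """Everything assembled for one AI interaction."""
        long_term: list[MentalMaterial] = field(default_factory=list)
        short_term: list[MentalMaterial] = field(default_factory=list)

        def add(self, item: MentalMaterial) -> None:
            (self.long_term if item.durable else self.short_term).append(item)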

Your New Job Description: AI Mental Engineer

Traditional engineering was about building systems. AI engineering is about designing cognitive architectures. You’re not just writing code — you’re crafting how artificial minds understand and approach problems. This means your daily work now includes memory architecture (deciding what information gets stored where, how it’s organized, and when it gets retrieved — not database design, but epistemological engineering), context strategy (determining what mental material an AI needs for different types of tasks), knowledge curation (maintaining the quality and relevance of information over time, as mental material degrades and becomes outdated), cognitive workflow design (orchestrating how AI systems access, process, and apply contextual information), and metacognitive monitoring (analyzing whether the context strategies are working and adapting them based on outcomes).

The engineers who thrive will be those who can bridge technical precision with cognitive empathy — understanding not just how systems work, but how to help artificial minds understand and reason about problems. This transformation isn’t just about new tools or frameworks. It’s about fundamentally reconceptualizing what engineering means in an AI-first world.

The Context Orchestration Challenge

We’ve built sophisticated AI systems that can reason, write, and solve complex problems, yet we’re still manually feeding them context like we’re spoon-feeding a child. Every AI application faces the same fundamental challenge: How do you help an artificial mind understand what it needs to know?

Currently, we solve this through memory storage systems that dump everything into databases, prompt templates that hope to capture the right context, RAG systems that retrieve documents but don’t understand relevance, and manual curation that doesn’t scale. None of these truly understands the intentionality behind a request or can autonomously determine what mental material is needed. We’re essentially doing cognitive architecture manually, request by request, application by application.

We Need a Mental Material Orchestrator

This brings us to a fascinating philosophical question: What would truly intelligent context orchestration look like? Imagine a system that operates as a cognitive intermediary — analyzing not just what someone is asking, but understanding the deeper intentionality behind the request.

Consider this example: “Help me optimize this database query — it’s running slow.” Most systems provide generic query optimization tips, but intelligent context orchestration would perform cognitive analysis to understand that this performance issue has dramatically different underlying intents based on context.

If it’s a junior developer, they need procedural knowledge (how to analyze execution plans) plus declarative knowledge (indexing fundamentals) plus short-term memory (what they tried already this session). If it’s a senior developer under deadline pressure, they need conditional knowledge (when to denormalize vs. optimize) plus long-term memory (this person prefers pragmatic solutions) plus contextual constraints (production system limitations). If it’s an architect reviewing code, they need meta-knowledge (why this pattern emerged) plus procedural knowledge (systematic performance analysis) plus declarative knowledge (system-wide implications).

Context-dependent realities might reveal the “slow query” isn’t actually a query problem — maybe it’s running in a resource-constrained Docker container, or it’s an internal tool used infrequently where 5 milliseconds doesn’t matter. Perhaps the current query is intentionally slower because the optimized version would sacrifice readability (violating team guidelines), and the system should suggest either a local override for performance-critical cases or acceptance of the minor delay.

The problem with even perfect prompts is clear: You could craft the world’s best prompt about database optimization, but without understanding who is asking, why they’re asking, and what they’ve already tried, you’re essentially giving a lecture to someone who might need a quick fix, a learning experience, or a strategic decision framework. And even if you could anticipate every scenario, you’d quickly hit token limits trying to include all possible contexts in a single prompt. The context strategy must determine not just what information to provide, but what type of mental scaffolding the person needs to successfully integrate that information — and dynamically assemble only the relevant context for that specific interaction.
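
A toy version of that dynamic assembly might look like the following: given who is asking and a rough token budget, pick which kinds of knowledge to load first. The role names, priorities, and budget heuristic are assumptions for illustration only.

    # Toy context orchestrator for the "slow query" example above: choose which
    # knowledge types to assemble based on who is asking, within a token budget.
    # Roles, priorities, and the budget heuristic are illustrative assumptions.

    PRIORITIES = {
        "junior_dev": ["procedural", "declarative", "short_term_memory"],
        "senior_dev_under_deadline": ["conditional", "long_term_memory", "constraints"],
        "architect": ["meta", "procedural", "declarative"],
    }

    def assemble_context(role: str, store: dict[str, list[str]],
                         token_budget: int = 800) -> list[str]:
        """Greedily pack the highest-priority snippets for this role."""
        chosen: list[str] = []
        used = 0
        for kind in PRIORITIES.get(role, ["declarative"]):
            for snippet in store.get(kind, []):
                cost = len(snippet.split())  # crude token estimate
                if used + cost > token_budget:
                    return chosen
                chosen.append(snippet)
                used += cost
        return chosen

    store = {
        "procedural": ["How to read an EXPLAIN plan step by step ..."],
        "declarative": ["B-tree index basics ..."],
        "conditional": ["When to denormalize instead of optimizing ..."],
        "long_term_memory": ["This engineer prefers pragmatic fixes ..."],
        "constraints": ["Production DB: no schema changes during business hours ..."],
        "short_term_memory": ["Already tried adding an index on user_id this session ..."],
    }
    print(assemble_context("senior_dev_under_deadline", store))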

The Deeper Implications

This transformation raises profound questions about the nature of intelligence and communication. What does it mean to “understand” a request? When we ask an AI to help with a coding problem, are we asking for code, explanation, learning, validation, or something else entirely? Human communication is layered with implied context and unspoken assumptions. How do we formalize intuition? Experienced engineers often “just know” what information is relevant for a given situation. How do we encode that intuitive understanding into systems? What is the relationship between knowledge and context? The same piece of information can be useful or distracting depending on the cognitive frame it’s presented within.

These aren’t just technical challenges — they’re epistemological ones. We’re essentially trying to formalize how minds share understanding with other minds.

From Code Monkey to Cognitive Architect

This transformation requires fundamentally reconceptualizing what engineering means in an AI-first world, but it’s crucial to understand that we’re not throwing decades of engineering wisdom out the window. All the foundational engineering knowledge you’ve accumulated — design patterns, data structures and algorithms, system architecture, software engineering principles (SOLID, DRY, KISS), database design, distributed systems concepts, performance optimization, testing methodologies, security practices, code organization and modularity, error handling and resilience patterns, scalability principles, and debugging methodologies — remains incredibly valuable.

This knowledge serves a dual purpose in the AI era. First, it enables you to create better mental material by providing AI systems with proven patterns, established principles, and battle-tested approaches rather than ad-hoc solutions. When you teach an AI about system design, you’re drawing on decades of collective engineering wisdom about what works and what doesn’t. Second, this deep technical knowledge allows you to act as an intelligent co-pilot, providing real-time feedback and corrections as AI systems work through problems. You can catch when an AI suggests an anti-pattern, guide it toward more robust solutions, or help it understand why certain trade-offs matter in specific contexts.

Importantly, these real-time corrections and refinements should themselves become part of the mental material. When you guide an AI away from a poor architectural choice or toward a better algorithm, that interaction should be captured and integrated into the system’s knowledge base, making it progressively more precise and aligned with good engineering practices over time.

Traditional engineering focused on deterministic systems, optimized for performance and reliability, measured success by uptime and speed, and treated communication as secondary to functionality. AI engineering designs probabilistic, context-dependent systems, optimizes for effectiveness and adaptability, measures success by goal achievement and learning, and makes communication a core technical competency — but it builds on all the foundational principles that make software systems robust and maintainable.

If you’re an engineer reading this, here’s how to prepare for the mental material revolution: Develop context awareness by thinking about the knowledge transfer patterns in your current work. How do you onboard new team members? How do you document complex decisions? These skills directly translate to context engineering. Practice explanatory engineering by forcing yourself to articulate not just what you’re building, but why, how, and when. Write documentation as if you’re teaching someone who’s brilliant but has no context about your domain. Study cognitive architecture to understand how humans process information, make decisions, and apply knowledge — this will help you design better AI context strategies. Build context systems by experimenting with prompt engineering, RAG systems, and memory management. Embrace the meta-layer and get comfortable with systems that manage other systems, as context orchestration is inherently meta-engineering.
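
As a starting point for the "build context systems" advice, here is a minimal keyword-scored retrieval loop that also folds review-time corrections back into the store, echoing the earlier point about capturing corrections as mental material; it is a sketch under those assumptions, not a production RAG pipeline.

    # Minimal sketch: keyword-scored retrieval plus capturing corrections back
    # into the store. Not a production RAG pipeline.
    def score(query: str, doc: str) -> int:
        q = set(query.lower().split())
        return sum(1 for w in doc.lower().split() if w in q)

    def retrieve(query: str, store: list[str], k: int = 3) -> list[str]:
        return sorted(store, key=lambda d: score(query, d), reverse=True)[:k]

    def capture_correction(store: list[str], context: str, correction: str) -> None:
        # Persist a human correction so future retrievals can surface it.
        store.append(f"CORRECTION ({context}): {correction}")

    notes = [
        "Prefer parameterized queries over string concatenation.",
        "Our services log to stdout; use structured JSON logs.",
    ]
    capture_correction(notes, "payments service review",
                       "Do not retry non-idempotent POSTs on timeout.")
    print(retrieve("how should the payments service handle timeout retries", notes))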

The Future is Cognitive

We’re entering an era where the most valuable engineers won’t be those who can write the most elegant algorithms, but those who can design the most effective cognitive architectures. The ability to understand, communicate, and orchestrate mental material will become as fundamental as understanding data structures and algorithms.

The question isn’t whether this transformation will happen — it’s already underway. The question is whether you’ll be building the mental scaffolding that powers the next generation of AI systems, or whether you’ll be left behind trying to manually manage context in an increasingly automated world. Your emotional intelligence isn’t just a nice-to-have soft skill anymore. It’s becoming your most valuable engineering asset.

The mental material revolution is here. Are you ready to become a cognitive architect?

What’s your experience with context engineering? Are you already seeing this shift in your organization? Share your thoughts and let’s discuss how we can build better mental material orchestration systems together.

submitted by /u/gabibeyo
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

The Silent Security Crisis: How AI Coding Assistants Are Creating Perfect Attack Blueprints

By: /u/gabibeyo — August 8th 2025 at 13:51

What I Found When I Monitored Claude CLI for One Day

While building an MCP server last week, I got curious about what Claude CLI stores locally on my machine.

A simple 24-hour monitoring experiment revealed a significant security blind spot that most developers aren't aware of.

What I found in my AI conversation logs:

• API keys for multiple services (OpenAI, GitHub, AWS)

• Database connection strings with credentials

• Detailed tech stack and architecture discussions

• Team processes and organizational context

• Personal debugging patterns and approaches

All stored locally in plain text, searchable, and organized by timestamp.
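
To see what is sitting in your own logs, a few lines of Python are enough; the sketch below greps a local conversation directory for common secret patterns. The directory path is an assumption (Claude CLI's layout may differ on your machine) and the regexes are illustrative, not exhaustive.

    # Sketch: scan local AI-assistant conversation logs for likely secrets.
    # The path below is an assumption -- adjust to wherever your CLI stores logs.
    import re
    from pathlib import Path

    LOG_DIR = Path.home() / ".claude" / "projects"   # assumed location
    PATTERNS = {
        "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "GitHub token": re.compile(r"gh[pousr]_[A-Za-z0-9]{36,}"),
        "OpenAI-style key": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
        "Connection string": re.compile(r"\w+://[^\s:]+:[^\s@]+@[^\s/]+"),
    }

    def scan(directory: Path) -> None:
        for path in directory.rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for label, pattern in PATTERNS.items():
                for match in pattern.finditer(text):
                    print(f"{path}: possible {label}: {match.group()[:12]}...")

    if __name__ == "__main__":
        scan(LOG_DIR)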

The adoption vs. security gap:

Adoption reality: 500K+ developers now use AI coding assistants daily

Security awareness: Most teams haven't considered what's being stored locally

The disconnect: We're moving fast on AI integration but haven't updated our security practices to match

Why this matters:

Traditional security assumes attackers need time and expertise to map your systems. AI conversation logs change that equation - they contain pre-analyzed intelligence about your infrastructure, complete with context and explanations.

It's like having detailed reconnaissance already done, just sitting in text files.

"But if someone has my laptop, I'm compromised anyway, right?"

This is the pushback I keep hearing, and it misses the key difference:

Traditional laptop access = attackers hunt through scattered files for days or weeks.

AI conversation logs = a complete, contextualized intelligence report you personally wrote.

Instead of reverse-engineering your setup, they get: "I'm connecting to our MongoDB cluster at mongodb://admin:password@prod-server - can you help debug this?"

The reconnaissance work is already done. They just read your explanations.

The interesting part:

Claude initially refused to help me build a monitoring script, thinking I was trying to attack a system. Yet the same AI would likely help an attacker who asked politely about "monitoring their own files for research."


I've written up the full technical discovery process, including the monitoring methodology and security implications.

Read the complete analysis: https://medium.com/@gabi.beyo/the-silent-security-crisis-how-ai-coding-assistants-are-creating-perfect-attack-blueprints-71fd375d51a3

How is your team handling AI conversation data? Are local storage practices part of your security discussions?

#DevSecurity #AI #EngineeringLeadership #CyberSecurity

submitted by /u/gabibeyo
[link] [comments]
☐ ☆ ✇ WIRED

It Looks Like a School Bathroom Smoke Detector. A Teen Hacker Showed It Could Be an Audio Bug

By: Andy Greenberg, Joseph Cox — August 8th 2025 at 13:00
A pair of hackers found that a vape detector often found in high school bathrooms contained microphones—and security weaknesses that could allow someone to turn it into a secret listening device.
☐ ☆ ✇ WeLiveSecurity

Black Hat USA 2025: Policy compliance and the myth of the silver bullet

— August 7th 2025 at 16:03
Who’s to blame when the AI tool managing a company’s compliance status gets it wrong?
☐ ☆ ✇ WeLiveSecurity

Black Hat USA 2025: Does successful cybersecurity today increase cyber-risk tomorrow?

— August 7th 2025 at 14:23
Success in cybersecurity is when nothing happens, plus other standout themes from two of the event’s keynotes
☐ ☆ ✇ WIRED

Leak Reveals the Workaday Lives of North Korean IT Scammers

By: Matt Burgess — August 7th 2025 at 23:15
Spreadsheets, Slack messages, and files linked to an alleged group of North Korean IT workers expose their meticulous job-planning and targeting—and the constant surveillance they're under.
☐ ☆ ✇ WIRED

Mysterious Crime Spree Targeted National Guard Equipment Stashes

By: Dell Cameron — August 7th 2025 at 18:21
A string of US armory break-ins, kept quiet by authorities for months, points to a growing security crisis—and signs of an inside job.
☐ ☆ ✇ WIRED

Encryption Made for Police and Military Radios May Be Easily Cracked

By: Kim Zetter — August 7th 2025 at 18:09
Researchers found that an encryption algorithm likely used by law enforcement and special forces can have weaknesses that could allow an attacker to listen in.
☐ ☆ ✇ Security – Cisco Blog

Improving Cloud-VPN Resiliency to DoS Attacks With IKE Throttling

By: Jerome Tollet — August 7th 2025 at 12:00
Explore a network-layer throttling mechanism to improve the resiliency of Cloud VPN IKE servers, which are typically subject to IKE flood attacks.
☐ ☆ ✇ WIRED

A Single Poisoned Document Could Leak ‘Secret’ Data Via ChatGPT

By: Matt Burgess — August 6th 2025 at 23:30
Security researchers found a weakness in OpenAI’s Connectors, which let you hook up ChatGPT to other services, that allowed them to extract data from a Google Drive without any user interaction.
☐ ☆ ✇ WIRED

Hackers Hijacked Google’s Gemini AI With a Poisoned Calendar Invite to Take Over a Smart Home

By: Matt Burgess — August 6th 2025 at 13:00
For likely the first time ever, security researchers have shown how AI can be hacked to create real world havoc, allowing them to turn off lights, open smart shutters, and more.
☐ ☆ ✇ WIRED

What to Know About Traveling to China for Business

By: Mitch Moxley — August 6th 2025 at 13:00
Recent developments and an escalating trade war have made travel to cities like Beijing challenging but by no means impossible.
☐ ☆ ✇ Security – Cisco Blog

Foundation-sec-8B-Instruct: An Out-of-the-Box Security Copilot

By: Yaron Singer — August 6th 2025 at 12:00
Foundation-sec-8B-Instruct layers instruction fine-tuning on top of our domain-focused base model, giving you a chat-native copilot that understands security.
☐ ☆ ✇ WIRED

Nuclear Experts Say Mixing AI and Nuclear Weapons Is Inevitable

By: Matthew Gault — August 6th 2025 at 10:30
Human judgement remains central to the launch of nuclear weapons. But experts say it’s a matter of when, not if, artificial intelligence will get baked into the world’s most dangerous systems.
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

OdooMap - A Pentesting Tool for Odoo Applications

By: /u/Fluid-Profit-164 — August 5th 2025 at 15:47

Can you review my new security testing tool? https://github.com/MohamedKarrab/odoomap (a sketch of the kind of XML-RPC probing it automates follows the feature list below)

Features:

• Detect Odoo version & exposed metadata

• Enumerate databases and accessible models

• Authenticate & verify CRUD permissions per model

• Extract data from chosen models (e.g. res.users, res.partner)

• Brute-force login credentials (default, custom user/pass, wordlists)

• Brute-force internal model names when listing fails
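
For context on what these features exercise, here is a minimal, hedged sketch of the standard Odoo XML-RPC calls a tool like this builds on (version probe, database listing, authentication, and a model read); it uses only documented public endpoints and is not OdooMap's actual code. The target URL and credentials are placeholder assumptions.

    # Sketch of the public Odoo XML-RPC surface a pentest tool probes.
    # Not OdooMap's implementation; only run against systems you own or
    # are authorized to test.
    import xmlrpc.client

    URL = "http://localhost:8069"        # assumed target for illustration
    DB, USER, PASSWORD = "demo", "admin", "admin"

    common = xmlrpc.client.ServerProxy(f"{URL}/xmlrpc/2/common")
    print("Server version:", common.version())          # version & metadata

    db_srv = xmlrpc.client.ServerProxy(f"{URL}/xmlrpc/2/db")
    try:
        print("Databases:", db_srv.list())              # often disabled in prod
    except xmlrpc.client.Fault as exc:
        print("Database listing blocked:", exc.faultString)

    uid = common.authenticate(DB, USER, PASSWORD, {})
    if uid:
        models = xmlrpc.client.ServerProxy(f"{URL}/xmlrpc/2/object")
        # Check read access and pull a few records from res.users.
        users = models.execute_kw(DB, uid, PASSWORD, "res.users", "search_read",
                                  [[]], {"fields": ["login"], "limit": 5})
        print("Sample users:", users)
    else:
        print("Authentication failed")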

submitted by /u/Fluid-Profit-164
[link] [comments]
☐ ☆ ✇ Security – Cisco Blog

Cisco’s Foundation AI Advances AI Supply Chain Security With Hugging Face

By: Hyrum Anderson — August 5th 2025 at 12:00
Cisco's Foundation AI is partnering with Hugging Face, bringing together the world's leading AI model hub with Cisco's security expertise.
☐ ☆ ✇ WIRED

The US Military Is Raking in Millions From On-Base Slot Machines

By: Molly Longman — August 4th 2025 at 10:30
The Defense Department operates slot machines on US military bases overseas, raising millions of dollars to fund recreation for troops—and creating risks for soldiers prone to gambling addiction.
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Be patient and keep it simple.

By: /u/anasbetis94 — August 2nd 2025 at 15:31

Hello all,

I just published a new write-up about bugs I found recently, titled 'Be patient and keep it simple, the bug is there'. I hope you like it.

submitted by /u/anasbetis94
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Forced to give your password? Here is the solution.

By: /u/marcusfrex — August 2nd 2025 at 11:02

Let's imagine a scenario where you're coerced, whether through threats, torture, or even legal pressure, to reveal the password to your secure vault.

In countries like the US, UK, and Australia, refusing to provide passwords to law enforcement can result in months in prison in certain cases.

I built a solution called Veilith (veilith.com) that addresses this critical vulnerability with deniable encryption. It supports multiple passwords, each unlocking distinct blocks of encrypted data that are indistinguishable from random noise even to experts, and it has a number of other features to protect your intellectual property.

In high-stakes situations, simply provide a decoy password and plausibly deny the existence of anything more.
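
To illustrate the general multi-password idea (a toy sketch of the concept only, not Veilith's design or code), the snippet below derives a separate key from each password and stores same-sized encrypted blocks alongside purely random filler, so a given password only ever reveals its own block; it relies on the third-party cryptography package.

    # Toy illustration of multi-password ("decoy") encryption -- NOT Veilith's
    # design or implementation. Requires: pip install cryptography
    import os, hashlib, random
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    BLOCK = 4096  # every slot is the same size, so real and filler slots look alike

    def _key(password: str, salt: bytes) -> bytes:
        return hashlib.scrypt(password.encode(), salt=salt,
                              n=2**14, r=8, p=1, maxmem=2**26, dklen=32)

    def seal(payloads: dict[str, bytes], filler_slots: int = 2):
        salt, slots = os.urandom(16), []
        for password, data in payloads.items():
            padded = data.ljust(BLOCK - 28, b"\0")   # room for 12-byte nonce + 16-byte tag
            nonce = os.urandom(12)
            slots.append(nonce + AESGCM(_key(password, salt)).encrypt(nonce, padded, None))
        slots += [os.urandom(BLOCK) for _ in range(filler_slots)]   # pure-noise filler
        random.shuffle(slots)
        return salt, slots

    def open_slot(password: str, salt: bytes, slots: list[bytes]):
        key = AESGCM(_key(password, salt))
        for slot in slots:
            try:
                return key.decrypt(slot[:12], slot[12:], None).rstrip(b"\0")
            except Exception:
                continue
        return None   # wrong password: every slot looks like random noise

    salt, container = seal({"decoy-pass": b"Grocery list", "real-pass": b"Actual secrets"})
    print(open_slot("decoy-pass", salt, container))   # b'Grocery list'
    print(open_slot("real-pass", salt, container))    # b'Actual secrets'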

Dive deeper by reading the whitepaper, exploring the open-source code, or asking me any questions you may have.

submitted by /u/marcusfrex
[link] [comments]
☐ ☆ ✇ WIRED

Google Will Use AI to Guess People’s Ages Based on Search History

By: Dell Cameron — August 2nd 2025 at 10:30
Plus: A former top US cyber official loses her new job due to political backlash, Congress is rushing through a bill to censor lawmakers’ personal information online, and more.
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

What the Top 20 OSS Vulnerabilities Reveal About the Real Challenges in Security Governance

By: /u/repoog — August 2nd 2025 at 04:13

In the past few years, I’ve worked closely with enterprise security teams to improve their open source governance processes. One recurring theme I keep seeing is this: most organizations know they have issues with OSS component vulnerabilities—but they’re stuck when it comes to actually governing them.

To better understand this, we analyzed the top 20 most vulnerable open source components commonly found in enterprise Java stacks (e.g., jackson-databind, shiro, mysql-connector-java) and realized something important:

Vulnerabilities aren’t just about CVE counts—they’re indicators of systemic governance blind spots.

Here’s the full article with breakdowns:
From the Top 20 Open Source Component Vulnerabilities: Rethinking the Challenges of Open Source Security Governance

submitted by /u/repoog
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

The free, online, practical 'Introduction to Security' class from the Czech Technical University is now open.

By: /u/sebagarcia — August 1st 2025 at 17:12

The 2025 free online class is open, with intense hands-on practical cyber range-based exercises and AI topics. Attack, defend, learn, and get better!

submitted by /u/sebagarcia
[link] [comments]
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

r/netsec monthly discussion & tool thread

By: /u/albinowax — August 1st 2025 at 13:29

Questions regarding netsec and discussion related directly to netsec are welcome here, as is sharing tool links.

Rules & Guidelines

  • Always maintain civil discourse. Be awesome to one another - moderator intervention will occur if necessary.
  • Avoid NSFW content unless absolutely necessary. If used, mark it as being NSFW. If left unmarked, the comment will be removed entirely.
  • If linking to classified content, mark it as such. If left unmarked, the comment will be removed entirely.
  • Avoid use of memes. If you have something to say, say it with real words.
  • All discussions and questions should directly relate to netsec.
  • No tech support is to be requested or provided on r/netsec.

As always, the content & discussion guidelines should also be observed on r/netsec.

Feedback

Feedback and suggestions are welcome, but don't post it here. Please send it to the moderator inbox.

submitted by /u/albinowax
[link] [comments]
☐ ☆ ✇ WIRED

The Kremlin’s Most Devious Hacking Group Is Using Russian ISPs to Plant Spyware

By: Andy Greenberg — July 31st 2025 at 16:00
The FSB cyberespionage group known as Turla seems to have used its control of Russia’s network infrastructure to meddle with web traffic and trick diplomats into infecting their computers.
☐ ☆ ✇ Security – Cisco Blog

Cisco delivers enhanced email protection to the Middle East

By: Marvin Nodora — July 30th 2025 at 12:00
Cisco's new data center in the UAE delivers in-region reliability and increased protection to organizations in the Middle East.
☐ ☆ ✇ WeLiveSecurity

The hidden risks of browser extensions – and how to stay safe

— July 29th 2025 at 09:00
Not all browser add-ons are handy helpers – some may contain far more than you have bargained for
☐ ☆ ✇ WIRED

Age Verification Laws Send VPN Use Soaring—and Threaten the Open Internet

By: Lily Hay Newman, Matt Burgess — July 29th 2025 at 10:30
A law requiring UK internet users to verify their age to access adult content has led to a huge surge in VPN downloads—and has experts worried about the future of free expression online.