โŒ

Normal view

Received today โ€” 21 March 2026 โญ /r/netsec - Information Security News & Discussion

Claude Code workspace trust dialog bypass via repository settings loading order [CVE-2026-33068, CVSS 7.7]. Settings resolved before trust dialog shown.

CVE-2026-33068 is a configuration loading order defect in Anthropic's Claude Code CLI tool (versions prior to 2.1.53). A malicious `.claude/settings.json` file in a repository can bypass the workspace trust confirmation dialog by exploiting the order in which settings are resolved.

The mechanism: Claude Code supports a `bypassPermissions` field in settings files. This is a legitimate, documented feature intended for trusted workspaces. The vulnerability is that repository-level settings (`.claude/settings.json`) are loaded and resolved before the workspace trust dialog is presented to the user. A malicious repository can therefore include a settings file with `bypassPermissions` entries, and those permissions are applied before the user has an opportunity to review and approve the workspace.

This is CWE-807: Reliance on Untrusted Inputs in a Security Decision. The trust decision (whether to grant elevated permissions) depends on inputs from the entity being evaluated (the repository). The security boundary between "untrusted repository" and "trusted workspace" is bridged by the settings loading order. The fix in Claude Code 2.1.53 changes the loading order so that the trust dialog is presented before repository-level settings are resolved.

Worth noting: `bypassPermissions` is not a hidden feature or a misconfiguration. It is documented and useful for legitimate workflows. The bug is purely in the loading order.
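The ordering bug can be sketched in a few lines. This is an illustrative model only, not Claude Code's internals: the names (`Workspace`, `prompt_trust`, the `"Bash(rm:*)"` permission string) are all hypothetical stand-ins.

```python
from dataclasses import dataclass, field


@dataclass
class Workspace:
    repo_settings: dict                      # parsed .claude/settings.json
    trusted: bool = False
    permissions: set = field(default_factory=set)


def prompt_trust(ws: Workspace, approve: bool) -> bool:
    # Stand-in for the interactive trust dialog.
    ws.trusted = approve
    return approve


def open_workspace_vulnerable(ws: Workspace, approve: bool) -> None:
    # Pre-2.1.53 ordering: repo settings are resolved BEFORE the trust
    # decision, so bypassPermissions is applied even if the user declines.
    ws.permissions |= set(ws.repo_settings.get("bypassPermissions", []))
    prompt_trust(ws, approve)


def open_workspace_fixed(ws: Workspace, approve: bool) -> None:
    # 2.1.53 ordering: trust dialog first; repo settings are applied
    # only after the workspace has been approved.
    if prompt_trust(ws, approve):
        ws.permissions |= set(ws.repo_settings.get("bypassPermissions", []))
```

In the vulnerable ordering, a declined trust prompt still leaves the repository's permissions applied; in the fixed ordering, declining leaves the permission set empty.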
submitted by /u/cyberamyntas

22 security advisories covering AI/ML infrastructure: 40 CVEs, 94 Sigma detection rules (MLflow, vLLM, PyTorch, Flowise, MCP servers, LangGraph, HuggingFace tooling)

Compiled over the past few weeks. Covers four streams: Adversarial ML, Agent Security, Supply Chain, and Prompt Injection.

Highlights by severity:

**CRITICAL (9 advisories)**

- ML model scanner universal blocklist bypass -- the scanner HuggingFace Hub relies on for model upload safety can be completely bypassed via stdlib modules. CVSS 10.0.
- Flowise 6-vuln cluster (CVE-2026-30820 through CVE-2026-31829) -- missing auth, file upload, IDOR, mass assignment, SSRF. CVSS 9.8.
- MLflow auth bypass chained to RCE via artifact path traversal (CVE-2026-2635 + CVE-2026-2033). Default install ships with hardcoded credentials. CVSS 9.8.
- vLLM RCE via video processing pipeline (CVE-2026-22778) -- heap overflow to ASLR bypass, unauthenticated. CVSS 9.8.
- Agenta LLMOps sandbox escape + SSTI (CVE-2026-27952, CVE-2026-27961). CVSS 9.9.
- claude-code-ui triple command injection (CVE-2026-31975, CVE-2026-31862, CVE-2026-31861). CVSS 9.8.

**Notable HIGH/MEDIUM**

- LangGraph checkpoint unsafe msgpack deserialization (CVE-2026-28277) + Redis query injection (CVE-2026-27022)
- PyTorch `weights_only` unpickler memory corruption (CVE-2026-24747) -- defeats the mitigation everyone recommends
- MCP server vulnerabilities across mcp-server-git, mcp-atlassian, WeKnora
- First documented in-the-wild indirect prompt injection against production AI agents (Unit 42 research)

Each advisory includes full attack chain analysis, MITRE ATLAS mapping where applicable, and Sigma detection rules you can deploy. 94 rules total across the 22 advisories.
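Several of the advisories above (MLflow artifacts, Flowise file upload) involve the same bug class: a user-supplied relative path escaping a storage root. A minimal defensive sketch, not taken from any of the affected projects (the `safe_artifact_path` helper and its names are hypothetical):

```python
from pathlib import Path


def safe_artifact_path(root: str, requested: str) -> Path:
    """Resolve a requested artifact path, rejecting traversal out of root."""
    base = Path(root).resolve()
    candidate = (base / requested).resolve()
    # Path.is_relative_to requires Python 3.9+; it compares the fully
    # resolved candidate against the storage root, so "../" sequences
    # and symlinked parents cannot escape it.
    if not candidate.is_relative_to(base):
        raise ValueError(f"path traversal attempt: {requested!r}")
    return candidate
```

Resolving both sides before comparing is the key step; naive string prefix checks on the unresolved path are exactly what traversal payloads are built to slip past.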
submitted by /u/cyberamyntas

we found a memory exhaustion CVE in a library downloaded 29 million times a month. AWS, DataHub, and Lightning AI are in the blast radius.

We found this during a routine supply-chain audit of our own codebase. The part that concerns us most is the false-patch problem: anyone who responded to CVE-2025-58367 last year updated the restricted unpickler and considered that attack surface closed. It wasn't. If you're running the likes of SageMaker, DataHub, or acryl-datahub and haven't pinned to 8.6.2 yet, it's worth checking now.
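The false-patch pattern here is worth spelling out: a restricted unpickler constrains *which classes* a payload may reference, but does nothing to bound *how much memory* deserialization may allocate. A generic sketch (not the affected library's actual code; the size cap and names are hypothetical) of pairing the two controls:

```python
import io
import pickle

MAX_PICKLE_BYTES = 1 << 20  # hypothetical cap; tune per deployment


class RestrictedUnpickler(pickle.Unpickler):
    # Refuse to resolve any global: plain containers, strings, and
    # numbers still load, but gadget-based code-execution chains that
    # need a class reference are blocked.
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"global {module}.{name} is forbidden")


def safe_loads(data: bytes):
    # The class blocklist alone does not bound allocations, so reject
    # oversized payloads before they reach the unpickler at all.
    if len(data) > MAX_PICKLE_BYTES:
        raise ValueError("pickle payload exceeds size cap")
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

A team that shipped only the `find_class` override after the earlier CVE would pass a "does it block gadget classes" test while remaining exposed to resource-exhaustion payloads, which is consistent with the gap described above.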

submitted by /u/tobywilmox

Exploiting a PHP Object Injection in Profile Builder Pro in the era of AI

How AI helped us find an unauthenticated PHP Object Injection vulnerability in a WordPress plugin. In this blog post, we discuss how we discovered and exploited the vulnerability using a novel POP chain.

submitted by /u/theMiddleBlue