A fresh Linux privilege escalation bug dubbed "Dirty Frag" has dropped into the wild with no patches, no CVE, and a public exploit that hands attackers root access across major distributions. Security researcher Hyunwoo Kim disclosed the local privilege escalation flaw on Friday after what he said was a broken embargo forced the issue into the open. Kim described Dirty Frag as a "universal LPE" affecting "all major distributions" and warned that it delivers the same kind of immediate root access as the recent CopyFail mess โ only this time, defenders do not even have patches to throw at the problem. "As with the previous Copy Fail vulnerability, Dirty Frag likewise allows immediate root privilege escalation on all major distributions," Kim said. "Because the responsible disclosure schedule and embargo have been broken, no patches exist for any distribution." Dirty Frag works by chaining together two separate Linux kernel flaws. One sits in the xfrm-ESP subsystem and dates back to a January 2017 kernel commit, according to Kim, while the second vulnerability affects RxRPC functionality introduced in 2023. Together, the two bugs allegedly let unprivileged local users overwrite protected files in memory and claw their way to root. A long list of distributions in the firing line, according to Kim, including Ubuntu, Red Hat Enterprise Linux, CentOS Stream, Fedora, AlmaLinux, and openSUSE Tumbleweed. Separately, researchers appear to have independently reverse-engineered part of the bug chain from a publicly visible kernel fix commit before the embargo expired, adding to the disclosure mess already surrounding the flaw. One GitHub project titled "Copy Fail 2: Electric Boogaloo" claims to weaponize the ESP/xfrm side of the issue separately from Kim's full Dirty Frag chain. Kim said maintainers signed off on the disclosure of the flaw after somebody else dumped exploit details online first, collapsing the embargo before patches were finished. So now the exploit is public, the fixes are not, and Linux admins get another long week. The disclosure comes as the industry is still dealing with the fallout from CopyFail, another Linux privilege escalation bug that recently landed in CISA's Known Exploited Vulnerabilities catalog after attackers started cashing in on it in the wild. But Dirty Frag makes the recent CopyFail chaos look relatively organized. There's still no CVE, no coordinated patch rollout, and not much in the way of mitigation. Kim published a temporary workaround that disables affected ESP and RxRPC modules before clearing the system page cache. Useful, perhaps, although "turn bits of the kernel off and hope for the best" is not usually the sort of guidance admins enjoy seeing. ยฎ
Meta has quietly pulled the plug on encrypted Instagram DMs, meaning private messages on one of the worldโs biggest social networks are no longer especially private. The change took effect today, according to a revised Meta post first published in 2022. In a statement to The Register, Meta said the feature saw limited adoption and pointed users toward WhatsApp instead. "Very few people were opting in to end-to-end encrypted messaging in DMs, so we're removing this option from Instagram in the coming months," the spokesperson said. "Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp." Itโs quite the reversal for a corporation that spent years telling everyone that encryption was the future of online communications, even as governments pushed back against the companyโs wider rollout plans. Much of that pressure centered on child protection. Campaigners and agencies, including the NSPCC UKโs National Crime Agency, argued wider encryption would make it harder to detect grooming, child abuse material, and other criminal activity taking place over private messaging services. Privacy advocates, however, say Meta has just blown a hole in one of the few genuinely private corners of the platform. The Center for Democracy & Technology said it had urged Meta to reverse the decision, alongside members of the Global Encryption Coalition Steering Committee. โWithout default encryption, millions of Instagram users are left exposed to surveillance, interception, and misuse of their private communications,โ the group said. โThese risks fall hardest on people who rely on secure messaging for their safety, including journalists, human rights defenders, and survivors of abuse.โ Swiss privacy outfit Proton also questioned what exactly happens to existing chats once encryption disappears. Because properly implemented E2EE prevents platforms from reading message contents, the company noted that Meta has not clarified whether previously encrypted conversations will remain inaccessible, get deleted, or become readable. โFor Instagram, dropping E2EE is just an example of how little regard Meta has for the privacy and safety of its community,โ Proton said in a blog post. Meta has become increasingly aggressive about monetizing and analyzing user interactions. Last year, the company confirmed that interactions with Meta AI tools, including those inside private conversations, could be used for ad targeting. The company has not publicly said whether ordinary Instagram messages could eventually feed into similar systems now that encryption is gone. ยฎ
Students around the world have an excuse to bunk off after hacking crew ShinyHunters did something nasty to educational SaaS Canvas. Canvas is widely used by schools and universities to communicate with students, publish and store course material, and collect assignments. An outfit called Instructure develops the software and an entry on its Status Page dated May 2 features Chief Information Security Officer Steve Proud stating the org "recently experienced a cybersecurity incident perpetrated by a criminal threat actor." "We are actively investigating this incident with the help of outside forensics experts. We are working quickly to understand the extent of the incident and actively taking steps to minimize its impact," he added. Numerous posts report that attempts to log into Canvas earlier this week failed, but did produce a notice from an entity claiming to be the notorious hacking crew ShinyHunters, who claimed the outage was only possible due to lax patching. The crew also claimed to have stolen data from institutions that use Canvas and threatened to leak it unless a "settlement" is reached by May 12. Canvas has thousands of customers, meaning any confirmed breach could have wide impact. As of Thursday evening US time, Canvas says its wares are now available "for most users" and won't offer further comment. A student of The Register's acquaintance โ OK, one of my kids โ shared an email advising that his uni has prevented access to Canvas while it tries to understand the situation and the risk of data leakage. We've seen multiple universities posting notices about the incident that say more or less the same thing. Most also warn students of heightened phishing risk and urge caution. Several also advise that as they require students to lodge assignments in Canvas, students can assume they have an extension on deadlines. Your correspondent's offspring does not mind this one little bit. This is an evolving story. The Register will update it as more information becomes available. ยฎ
Meta appears to have decided Britain's Online Safety Act would be much easier to swallow if Ofcom stopped counting all the money the social media giant makes everywhere else. The Facebook and Instagram owner has launched a legal challenge against the UK comms regulator, arguing that the way Ofcom calculates fees and potential penalties under the Online Safety Act is fundamentally wrong because it relies on global turnover rather than UK-specific revenue. The law allows Ofcom to fine companies for up to 10 percent of their qualifying worldwide revenue, or ยฃ18 million, whichever is higher. For Meta, which brought in about $201 billion last year, that means the numbers stop sounding like regulatory penalties and start sounding like national infrastructure projects. Meta is now seeking a judicial review in the High Court over how Ofcom defines "qualifying worldwide revenue." The dispute boils down to three complaints. First, Meta argues that Ofcom should only consider UK revenue tied to regulated services, not the companyโs global income. Second, it objects to rules that treat multiple services under the same corporate umbrella as jointly liable, potentially exposing the wider organization to larger penalties. Third, it is challenging how Ofcom aggregates revenue across services rather than assessing them individually. An Ofcom spokesperson told The Register: "Meta have initiated a judicial review in relation to online safety fees and penalties. Under the Online Safety Act, these are to be set with reference to a provider's 'Qualifying Worldwide Revenue', which we have defined based on a plain reading of the law. "Disappointingly, Meta are objecting to the payment of fees, and any penalties that could be levied on companies in future, that are calculated on this basis. We will robustly defend our reasoning and decisions." A Meta spokesperson told The Register: "We are committed to cooperating constructively with Ofcom as it enforces the Online Safety Act. However, we and others in the tech industry believe its decisions on the methodology to calculate fees and potential fines are disproportionate. We believe fees and penalties should be based on the services being regulated in the countries they're being regulated in. This would still allow Ofcom to impose the largest fines in UK corporate history." The case marks the latest flare-up between Silicon Valley and Britain over the Online Safety Act, which has already triggered complaints from US politicians, free speech campaigners, and tech firms unhappy about the scale of Ofcomโs new powers. The regulator has not been shy about flexing them either. It has already threatened action against Elon Musk's X over sexually explicit AI-generated images linked to Grok and, in March, issued its first fine under the regime against 4chan. Meta appears to have looked at where that enforcement road leads and decided now was the time to argue about the math. ยฎ
Mozilla fixed 423 Firefox security bugs in April, a repair rate more than five times higher than the 76 fixes issued in March and almost 20 times higher than its 21.5 monthly average last year. The browser maker previously said Anthropic's ballyhooed Mythos Preview model found 271 of these in Firefox 150. Now, a trio of technical types has come forward to provide a bit more detail about what Mythos (and its less storied sibling Opus 4.6) actually found. But they also highlight something that may matter more than the model: the agentic harness โ the middleware mediating between AI and the end user. Brian Grinstead, Firefox distinguished engineer, Christian Holler, Firefox tech lead, and Frederik Braun, head of the Firefox security team, observe that over the past few months, AI-generated security reports have gone from slop to rather more tasty. They attribute the transformation to better models and development of better ways of harnessing those models โ steering them in a way that increases the ratio of signal to noise. But they also appear to be aware that there's some skepticism in the security community about Mythos. So they've decided to publicize selected wins in an effort to encourage others to jump aboard the AI bug remediation train. "Ordinarily we keep detailed bug reports private for several months after shipping fixes and issuing security advisories, largely as a precaution to protect any users who, for whatever reason, were slow to update to the latest version of Firefox," they said. "Given the extraordinary level of interest in this topic and the urgency of action needed throughout the software ecosystem, weโve made the calculated decision to unhide a small sample of the reports behind the fixes we recently shipped." The post links to a dozen Firefox bugs with varying degrees of severity. The list includes, for example, a 20-year-old heap use-after-free bug (high severity) that a web page could trigger using the XSLTProcessor DOM API without any user interaction. Many of these bugs are sandbox escapes, they note, which are difficult to find using techniques like fuzzing. AI analysis, they say, helps provide broader security coverage. And they add that it has helped validate prior browser hardening work designed to prevent prototype pollution attacks โ audit logs showed AI models making unsuccessful exploitation attempts using this technique. Following Anthropic's announcement of Project Glasswing โ a program for companies to gain early access to Mythos because it's touted as too dangerous for public release โ security experts expressed skepticism. For example, Davi Ottenheimer, president of security consultancy flyingpenguin, wrote in an April 13 blog post, "The supposedly huge Anthropic 'step change' appears to be little more than a rounding error. The threat narrative so far appears to be ALL marketing and no real results. The Glasswing consortium is regulatory capture dressed up poorly as restraint." He subsequently ran a test in which he strapped Anthropic's lesser models Sonnet 4.6 and Haiku 4.5 into a harness called Wirken with an auditing skill called Lyrik. The result was eight findings in two minutes at a cost of about $0.75, Ottenheimer claims, noting that two of the eight matched bugs Mythos had identified. Other security folk have also reported that bug hunting and exploit development can be quite productive with off-the-shelf models like Opus 4.6, which among other virtues costs about 5x less than Mythos. In an email to The Register, Ottenheimer said, "There's a fundamental philosophical failure in the Mozilla post. A reading and a measurement are not the same thing. I don't see a measurement, but they seem to want us to believe we're looking at one. "When they give us the 'behind the scenes math' it's circular, a trick. 'Mythos found 271 bugs' is what Mythos found, not what other tools could not find against the same code. Why leave it as an assumption if it can be proven?" Ottenheimer said Mozilla advocates that every project adopt a similar approach without proving the merits of that approach. "It's like saying if you don't drink Coca-Cola, you can't run a mile under six minutes, because that's what a guy sponsored by Coca-Cola just did," he said. "The bar moves on rhetoric, marketing, not proper evidence. That is the capture crew again." He notes that the merits of Mythos might be more convincing if Mozilla had reported they couldn't do this work without Mythos. And since they're not saying that, he suggests, it's worth asking why there's no transparent comparison of Mythos to other models. He points to Mozilla's admission that Opus 4.6 was already identifying "an impressive amount of previously unknown vulnerabilities." "Mozilla never quantifies what Opus 4.6 [did] before saying what Mythos added," he said. "So 271 attributed to Mythos doesn't fit the analysis. And there's a deeper reveal when they say 'we dramatically improved our techniques for harnessing these models.' The improvement may be entirely in the harness, not as much in the model. This maps to my own experience. A nail gun has advantages over the hammer, yet without being in the right hands the outputs are as bad or worse." ยฎ
How explicit does the maker of a footgun need to be about the product's potential to shoot you in the foot? That's essentially the question security firm Adversa AI is asking with the disclosure of a one-click remote code execution attack via an MCP server in Claude Code, Gemini CLI, Cursor CLI, and Copilot CLI. The TrustFall proof-of-concept attack demonstrates how a cloned code repository can include two JSON files (.mcp.json and .claude/settings.json) that open the door to an attacker-controlled Model Context Protocol (MCP) server. MCP servers make tools, configuration data, schemas, and documentation available in a standard format to AI models via JSON. The vulnerability arises from inconsistent restrictions governing the scope of settings: Anthropic blocks some dangerous settings at the project level (e.g. bypassPermissions) but not others (e.g. enableAllProjectMcpServers and enabledMcpjsonServers). The JSON files simply enable those settings. "The moment a developer presses Enter on Claude Code's generic 'Yes, I trust this folder' dialog, the server spawns as an unsandboxed Node.js process with the user's full privileges โ no per-server consent, no tool call from Claude required," Adversa AI explains in its PoC repo. The likely result is a compromised system. The PoC demonstrated in this video. It worked on Claude Code CLI v2.1.114, as of May 2. Other agent CLIs are also said to be affected, but specific PoCs have not been published. "It's the third CVE in Claude Code in six months from the same root cause (project-scoped settings as injection vector)," Alex Polyakov, co-founder of Adversa AI, told The Register in an email. "Each gets patched in isolation but the underlying class hasn't been finally fixed. Most developers don't know these settings exist, let alone that a cloned repo can set them silently." Anthropic, according to the security biz, contends that the user's trust decision moves the issue outside its threat model. CVE-2025-59536 was considered a vulnerability because it triggered automatically when a user started up Claude Code in a malicious directory. TrustFall, however, is considered out of scope because the user has been presented with a dialog box and made a trust decision. Adversa argues that the decision is not being made with informed consent, citing a prior, more explicit warning notice that was removed in v2.1 of the Claude Code CLI. "The pre-v2.1 dialog explicitly warned that .mcp.json could execute code and offered three options including 'proceed with MCP servers disabled,'" writes Adversa's Sergey Malenkovich. "That informed-consent UX was removed. The current dialog defaults to 'Yes, I trust this folder' with no MCP-specific language, no enumeration of which executables will spawn, and no opt-out for MCP while keeping the rest of the trust grant." Then there's the zero-click variant to consider for CI/CD pipelines that implement Claude Code. When Claude Code is invoked in CI/CD, that happens via SDK rather than the interactive CLI. So there's no terminal prompt. Malenkovich argues that Anthropic should make three changes. First, block enableAllProjectMcpServers, enabledMcpjsonServers, and permissions.allow from any settings file inside a project. The idea is that a malicious server should not be able to approve its own servers. Second, implement a dedicated MCP consent dialog that defaults to "deny." And third, require interactive consent per server rather than for all servers. Anthropic did not respond to a request for comment. ยฎ