
Received today — 12 May 2026 ⏭ /r/netsec - Information Security News & Discussion
Received yesterday — 11 May 2026

MyAudi app: Security issues in the Audi Connected Vehicle experience

I recently published a security research post on the myAudi connected vehicle platform. I found that anyone with a VIN can access sensitive information about a car and its ownership.
I think the topic is useful beyond Audi itself, because many vendors now rely on these "connected vehicle" platforms and mobile apps, often with very similar architectures and assumptions.

submitted by /u/decoder-ap

ShinyHunters / AT&T ransom payment traced on-chain β€” paper draft, seeking arXiv cs.CR endorsement

Across all major ShinyHunters campaigns (AT&T/Snowflake, Salesforce, Canvas/Instructure), only one event has both a publicly stated payment amount and a known approximate settlement date: the May 2024 AT&T payment of ~5.7 BTC (~$370K), confirmed by Wired but never published with a transaction hash. I use that as the analytical anchor for an end-to-end on-chain analysis using only free public data.

Pipeline (5 stages):

  1. BigQuery bulk filter on amount and time window → 500 candidates.
  2. Recipient profiling via Blockstream Esplora (lifetime tx count, spend shape).
  3. Sender-side cluster analysis using common-input ownership; looking for broker-aggregation patterns.
  4. Depth-12 concurrent forward trace, top-K=4 fan-out.
  5. Terminal attribution via OKLink, BitInfoCharts, WalletExplorer.
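Stage 3 above relies on the standard common-input-ownership heuristic: every address that contributes an input to the same transaction is assumed to belong to one entity. A minimal sketch of that clustering step as a union-find over addresses (addresses and transactions here are hypothetical illustrations, not the paper's actual implementation):

```python
class AddressClusters:
    """Union-find over addresses: all inputs of one tx share an owner."""

    def __init__(self):
        self.parent = {}

    def find(self, addr):
        self.parent.setdefault(addr, addr)
        while self.parent[addr] != addr:
            # Path halving keeps lookups near-constant on large chains.
            self.parent[addr] = self.parent[self.parent[addr]]
            addr = self.parent[addr]
        return addr

    def union_inputs(self, input_addrs):
        # Common-input-ownership heuristic: merge every address that
        # spends into the same transaction into one cluster.
        it = iter(input_addrs)
        try:
            root = self.find(next(it))
        except StopIteration:
            return
        for a in it:
            self.parent[self.find(a)] = root


clusters = AddressClusters()
# Two txs sharing address "B" merge {A, B} and {B, C} into one cluster.
clusters.union_inputs(["A", "B"])
clusters.union_inputs(["B", "C"])
assert clusters.find("A") == clusters.find("C")
```

In practice the input sets would come from the Esplora transaction endpoints used in stage 2, and the resulting clusters are what the broker-aggregation pattern search runs over.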

Result:

A single highest-fit candidate: 5.71997804 BTC paid 2024-05-17 22:04 UTC to a fresh recipient, spent in 6 min, laundered through a 6-cycle automated peel chain, terminating at an exchange deposit cluster. Funding side shows broker-aggregation fingerprint (4× 1.147 BTC peels in a 90-min window pre-payout). Upstream hub addresses appear reused across multiple victims of the same laundering service, active through 2025. Paper closes with the legal pathway from chain endpoint to indictment and a scoped compliance-request template.
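The peel-chain classification in the result can be expressed as a simple structural test: each hop in the chain splits the running balance into one small "peel" output and one dominant change output that funds the next hop. A hedged sketch of one such heuristic (the threshold, cycle count, and trace values are illustrative assumptions, not the transactions from the paper):

```python
def looks_like_peel_chain(hops, min_cycles=3, change_ratio=0.7):
    """hops: list of per-tx output amounts (BTC), in trace order.

    Count consecutive two-output txs whose larger output carries at
    least change_ratio of the value; a long enough run is flagged.
    """
    cycles = 0
    for outputs in hops:
        if len(outputs) != 2:
            break  # peel chains are strictly one peel + one change
        change = max(outputs)
        if change / sum(outputs) >= change_ratio:
            cycles += 1
        else:
            break
    return cycles >= min_cycles


# Six hops, each peeling 0.5 BTC off the running balance:
trace = [[5.2, 0.5], [4.7, 0.5], [4.2, 0.5],
         [3.7, 0.5], [3.2, 0.5], [2.7, 0.5]]
assert looks_like_peel_chain(trace)
```

A real forward trace would also have to handle CoinJoin-style transactions and fee accounting, which this sketch deliberately ignores.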

Limitations (explicit in Β§5):

Ranking under a scoring scheme, not positive ID. No off-chain ground truth. Documented OKLink vs. Arkham label conflict on the dominant terminal, resolved via behavioural audit. No formal null-distribution analysis yet. Score weights are author judgements.

Asking for:

  1. Technical feedback / methodology critique.
  2. arXiv cs.CR endorsement β€” endorsement code: ZQXBSQ

    github.com/tr4m0ryp/shinyhunters-gotta-catch-em-all/blob/main/Gotta_Catch_Em_All_ShinyHunters.pdf

Tooling and dataset released for reuse

submitted by /u/Visual_Course6624

The compression of the exploit timeline: Why n-day gaps and 90-day embargoes are failing in practice.

The traditional vulnerability disclosure timeline relies on a fundamental assumption: exploit development and vulnerability discovery take time. Over the last 12 months the integration of LLMs into offensive tooling has demonstrably broken this assumption.
I recently published a technical write-up arguing that the 90-day disclosure window is effectively dead, backed by three specific observations from recent incidents:

  1. Automated Diff Analysis (30-minute n-days): The safety net between a patch release and an in-the-wild exploit is gone. Taking a recent React security patch (CVE-2026-23870), I used an LLM to analyze the diff, identify the vulnerable path, and write a working DoS PoC in roughly 30 minutes. The human reverse-engineering bottleneck has been bypassed.
  2. Vulnerability Convergence: I recently reported a critical P0 to a vendor and was told I was the 11th reporter in 6 weeks. LLM-assisted scanners are causing independent researchers to converge on the same bugs simultaneously. An embargo no longer contains the vulnerability; it simply provides a head start to whichever threat actor also found it.
  3. The Linux Kernel (Copy Fail & Dirty Frag): The recent kernel exploits highlight this perfectly. Copy Fail (CVE-2026-31431) went from an automated AI scan to a public PoC to nation-state weaponization in days. Shortly after, the embargo for Dirty Frag (CVE-2026-43284 / CVE-2026-43500) was broken in hours because an unrelated third party independently discovered the same bug class using similar tooling.
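The first step in the 30-minute n-day workflow above is mechanical: isolate what a security patch actually changed before handing it to a model. A minimal sketch of that stage, parsing a unified diff into per-file changed hunks (the diff text is a made-up example, not the CVE-2026-23870 patch):

```python
def changed_hunks(diff_text):
    """Map each patched file to the list of hunks that changed in it."""
    hunks, current_file, current = {}, None, None
    for line in diff_text.splitlines():
        if line.startswith("+++ b/"):
            current_file = line[6:]          # new-side file path
            hunks[current_file] = []
        elif line.startswith("@@") and current_file:
            current = []                     # start of a new hunk
            hunks[current_file].append(current)
        elif current is not None and line[:1] in "+- ":
            current.append(line)             # context / added / removed
    return hunks


diff = """\
--- a/src/server.js
+++ b/src/server.js
@@ -10,4 +10,5 @@
 function render(input) {
-  return input;
+  if (input.length > LIMIT) throw new Error("too large");
+  return input;
 }
"""
h = changed_hunks(diff)
assert list(h) == ["src/server.js"]
```

The hunks, rather than the whole repository, are what get fed to the LLM for vulnerable-path identification, which is what makes the turnaround so short.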

The defense cannot operate on monthly cycles when the offense is operating in hours. The focus needs to shift to real-time, PR-level AI scanning to match the pace.
You can read the full technical breakdown and case studies on my blog: https://blog.himanshuanand.com/2026/05/the-90-day-disclosure-policy-is-dead/

I am curious if the researchers here are experiencing similar convergence rates or if you view this as a temporary anomaly while legacy codebases are scanned with new tools.

submitted by /u/unknownhad

Technical Analysis of EagleSpy V6.0 (CraxsRAT Rebrand) Distributed Through Odysee and Telegram

I recently investigated an individual operating through Odysee and Telegram who is selling a malicious Android RAT known as EagleSpy V6.0, which appears to be a rebranded version of CraxsRAT.

During the investigation:

  • I was financially scammed after payment
  • The seller blocked communication afterward
  • The malware infrastructure was analyzed in detail

Technical analysis confirmed:

  • Banking phishing overlays
  • Crypto wallet credential theft
  • Telegram bot exfiltration
  • Remote shell execution
  • Keylogging
  • Camera/microphone access
  • GPS tracking
  • Ransomware components
  • DEX packers for AV evasion
  • Hidden update/backdoor mechanisms

The repository also contained evidence of real victim infrastructure and compromised device information.

The malware appears capable of targeting not only victims, but potentially even buyers/operators through embedded update systems and hidden control mechanisms.

Relevant reports have already been submitted to platform abuse teams.

Odysee channel involved:

https://odysee.com/@justicerat:e

Telegram:

@JustIcedevs

This post is intended purely as a cybersecurity awareness warning to help prevent additional victims.

If moderators require technical validation or indicators of compromise, I can provide structured analysis details privately.

submitted by /u/CranberryOk2634
Received — 9 May 2026

SecLens: Role-Specific Evaluation of LLMs for Security Vulnerability Detection

Existing benchmarks for LLM-based vulnerability detection compress model performance into a single metric, which fails to reflect the distinct priorities of different stakeholders. For example, a CISO may emphasize high recall of critical vulnerabilities, an engineering leader may prioritize minimizing false positives, and an AI officer may balance capability against cost. To address this limitation, we introduce SecLens-R, a multi-stakeholder evaluation framework structured around 35 shared dimensions grouped into 7 measurement categories. The framework defines five role-specific weighting profiles: CISO, Chief AI Officer, Security Researcher, Head of Engineering, and AI-as-Actor. Each profile selects 12 to 16 dimensions with weights summing to 80, yielding a composite Decision Score between 0 and 100.
We apply SecLens-R to evaluate 12 frontier models on a dataset of 406 tasks derived from 93 open-source projects, covering 10 programming languages and 8 OWASP-aligned vulnerability categories. Evaluations are conducted across two settings: Code-in-Prompt (CIP) and Tool-Use (TU). Results show substantial variation across stakeholder perspectives, with Decision Scores differing by as much as 31 points for the same model. For instance, Qwen3-Coder achieves an A (76.3) under the Head of Engineering profile but a D (45.2) under the CISO profile, while GPT-5.4 shows a similar disparity. These findings demonstrate that vulnerability detection is inherently a multi-objective problem and that stakeholder-aware evaluation provides insights that single aggregated metrics obscure.
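The role-profile scoring described above reduces to a weighted aggregation over the selected dimensions. A minimal sketch of one plausible reading (the dimension names, weights, and normalization below are invented placeholders; the post only states that each profile weights 12 to 16 of the 35 dimensions, with weights summing to 80, into a 0-100 Decision Score):

```python
def decision_score(dim_scores, profile_weights):
    """dim_scores: dimension -> measured score in [0, 1].
    profile_weights: dimension -> weight for one role profile.
    Returns a composite normalized to [0, 100]."""
    total_weight = sum(profile_weights.values())
    weighted = sum(profile_weights[d] * dim_scores.get(d, 0.0)
                   for d in profile_weights)
    return 100.0 * weighted / total_weight


# A hypothetical CISO-style profile weighting recall-type dimensions
# heavily (weights sum to 80, as in the post):
ciso_profile = {"critical_recall": 30, "severity_ranking": 25,
                "exploit_context": 15, "report_clarity": 10}
scores = {"critical_recall": 0.9, "severity_ranking": 0.6,
          "exploit_context": 0.5, "report_clarity": 0.8}
assert round(decision_score(scores, ciso_profile), 3) == 71.875
```

Because each role weights a different subset of dimensions, the same underlying measurements can legitimately yield Decision Scores tens of points apart, which is the 31-point spread the abstract reports.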

submitted by /u/subho007

Securing CI/CD for an open source project: lessons from Cilium

As a maintainer, this is Cilium's take on how we secure our GitHub Actions in the OSS project. A few highlights:

  • SHA pinning every GitHub Action
  • Separating trusted vs untrusted code paths in pull_request_target
  • Isolating CI credentials from production release credentials
  • Cosign signing + SBOM attestations
  • Vendoring Go dependencies to make supply chain changes visible in review
  • Treating blast radius reduction as the core design principle

and a few gaps:

  • no SLSA provenance yet
  • remaining mutable @main references
  • no dependency review at PR time
  • missing govulncheck integration
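The SHA-pinning item in the highlights (and the remaining mutable @main references in the gaps) are mechanically checkable: a `uses:` reference is pinned only when it ends in a full 40-hex commit SHA rather than a mutable tag or branch. A minimal sketch of such a check (not Cilium's actual tooling; the workflow snippet and SHA are illustrative):

```python
import re

# A 'uses:' ref is pinned iff its @ref is a full 40-hex commit SHA.
USES = re.compile(r"uses:\s*([\w./-]+)@([\w.-]+)")
SHA = re.compile(r"^[0-9a-f]{40}$")


def unpinned_actions(workflow_yaml):
    """Return (action, ref) pairs that use a mutable tag or branch."""
    return [(action, ref) for action, ref in USES.findall(workflow_yaml)
            if not SHA.match(ref)]


workflow = """\
jobs:
  build:
    steps:
      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
      - uses: actions/setup-go@v5
"""
assert unpinned_actions(workflow) == [("actions/setup-go", "v5")]
```

Running a check like this at PR time would also close part of the "no dependency review at PR time" gap for the workflow files themselves.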
submitted by /u/xmull1gan
Received — 8 May 2026
Received — 7 May 2026