Something I’ve been looking into: a lot of public-records / people-search sites run
Given the overlap between adtech and data brokerage, this feels like an under-discussed surface area. Consider sites like truepeoplesearch.com
PayPal has notified about 100 customers that their personal information was exposed online during a code change gone awry, and in a few of these cases, people saw unauthorized transactions on their accounts.…
Someone compromised the npm package of open source AI coding assistant Cline CLI earlier this week in an odd supply chain attack that quietly installed OpenClaw on developers' machines without their knowledge. …
Most phishing websites are little more than static copies of login pages for popular online destinations, and they are often quickly taken down by anti-abuse activists and security firms. But a stealthy new phishing-as-a-service offering lets customers sidestep both of these pitfalls: It uses cleverly disguised links to load the target brand’s real website, and then acts as a relay between the target and the legitimate site — forwarding the victim’s username, password and multi-factor authentication (MFA) code to the legitimate site and returning its responses.
There are countless phishing kits that would-be scammers can use to get started, but successfully wielding them requires some modicum of skill in configuring servers, domain names, certificates, proxy services, and other repetitive tech drudgery. Enter Starkiller, a new phishing service that dynamically loads a live copy of the real login page and records everything the user types, proxying the data from the legitimate site back to the victim.
According to an analysis of Starkiller by the security firm Abnormal AI, the service lets customers select a brand to impersonate (e.g., Apple, Facebook, Google, Microsoft et al.) and generates a deceptive URL that visually mimics the legitimate domain while routing traffic through the attacker’s infrastructure.
For example, a phishing link targeting Microsoft customers appears as “login.microsoft.com@[malicious/shortened URL here].” The “@” sign trick is an oldie but goodie: browsers treat everything before the “@” in a URL as username data, and the real landing page is whatever host comes after the “@” sign. Here’s what it looks like in the target’s browser:
Image: Abnormal AI. The actual malicious landing page is blurred out in this picture, but we can see it ends in .ru. The service also offers the ability to insert links from different URL-shortening services.
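To see why the trick works, here is a minimal Python sketch; “evil.example” is a made-up stand-in for the blurred malicious domain in the screenshot. A standards-conformant URL parser, like a browser’s, treats everything before the “@” as userinfo:

```python
from urllib.parse import urlsplit

# "evil.example" is a hypothetical stand-in for the blurred .ru landing page.
link = "https://login.microsoft.com@evil.example/signin"

parts = urlsplit(link)
print(parts.username)  # login.microsoft.com -> parsed as userinfo, not a host
print(parts.hostname)  # evil.example        -> where the browser actually navigates
```

The leading brand name never influences where the request goes; it exists purely to fool the human reading the link.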
Once Starkiller customers select the URL to be phished, the service spins up a Docker container running a headless Chrome browser instance that loads the real login page, Abnormal found.
“The container then acts as a man-in-the-middle reverse proxy, forwarding the end user’s inputs to the legitimate site and returning the site’s responses,” Abnormal researchers Callie Baron and Piotr Wojtyla wrote in a blog post on Thursday. “Every keystroke, form submission, and session token passes through attacker-controlled infrastructure and is logged along the way.”
Starkiller in effect offers cybercriminals real-time session monitoring, allowing them to watch the target’s screen live as the victim interacts with the phishing page, the researchers said.
“The platform also includes keylogger capture for every keystroke, cookie and session token theft for direct account takeover, geo-tracking of targets, and automated Telegram alerts when new credentials come in,” they wrote. “Campaign analytics round out the operator experience with visit counts, conversion rates, and performance graphs—the same kind of metrics dashboard a legitimate SaaS [software-as-a-service] platform would offer.”
Abnormal said the service also deftly intercepts and relays the victim’s MFA credentials, since the recipient who clicks the link is actually authenticating with the real site through a proxy, and any authentication tokens submitted are then forwarded to the legitimate service in real time.
“The attacker captures the resulting session cookies and tokens, giving them authenticated access to the account,” the researchers wrote. “When attackers relay the entire authentication flow in real time, MFA protections can be effectively neutralized despite functioning exactly as designed.”
The “URL Masker” component of the Starkiller phishing service offers options for configuring the malicious link. Image: Abnormal.
Starkiller is just one of several cybercrime services offered by a threat group calling itself Jinkusu, which maintains an active user forum where customers can discuss techniques, request features and troubleshoot deployments. One à-la-carte feature harvests email addresses and contact information from compromised sessions; the group advises that this data can be used to build target lists for follow-on phishing campaigns.
This service strikes me as a remarkable evolution in phishing, and its apparent success is likely to be copied by other enterprising cybercriminals (assuming the service performs as well as it claims). After all, phishing users this way avoids the upfront costs and constant hassles associated with juggling multiple phishing domains, and it throws a wrench in traditional phishing detection methods like domain blocklisting and static page analysis.
It also massively lowers the barrier to entry for novice cybercriminals, Abnormal researchers observed.
“Starkiller represents a significant escalation in phishing infrastructure, reflecting a broader trend toward commoditized, enterprise-style cybercrime tooling,” their report concludes. “Combined with URL masking, session hijacking, and MFA bypass, it gives low-skill cybercriminals access to attack capabilities that were previously out of reach.”
Hello r/netsec 👋
When an AI agent gets prompt-injected, it controls its own logs. If the injected instructions say “do this quietly,” it does it quietly. The orchestrator sees normal completions. Your observability tooling sees what the agent reports.
You need an observation point the agent cannot influence. That means going below the application layer.
Any real action in the world eventually becomes a syscall. Exfiltrating data requires connect(). Reading /etc/shadow requires open(). Spawning a shell requires execve(). The kernel does not negotiate with the agent about whether to record them.
eBPF is the right primitive here: you attach to tracepoints inside the kernel, so the observed process never blocks and never detects the observer. Combined with cgroup-based filtering, you can isolate exactly one container on a busy host with negligible overhead.
A compromised agent has a recognizable syscall signature: net_connect to an unexpected IP, file_open on credential files, process_exec spawning bash or curl with injected arguments. You can alert on deviations from a behavioral baseline in real time, before the exfiltration completes, regardless of what the agent reports.
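For a feel of the mechanism, here is a minimal sketch using the BCC Python bindings (run as root) — illustrative only, not Azazel's actual implementation. The probe fires on a kernel tracepoint, so the traced process cannot suppress or rewrite the record:

```python
from bcc import BPF

# Log every execve() seen by the kernel. Cgroup/container filtering and the
# net_connect / file_open probes are omitted for brevity.
prog = r"""
TRACEPOINT_PROBE(syscalls, sys_enter_execve) {
    char fname[128];
    bpf_probe_read_user_str(&fname, sizeof(fname), args->filename);
    bpf_trace_printk("execve: %s\n", fname);
    return 0;
}
"""

b = BPF(text=prog)
print("Tracing execve() host-wide... Ctrl-C to stop")
b.trace_print()  # streams records from the kernel trace pipe
```

Nothing in the agent's process, prompt, or logging path sits between the syscall and this record.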
I built Azazel to validate this: https://github.com/beelzebub-labs/azazel
Prompt-level defenses matter, but a deployed agent needs a layer that does not depend on the model’s cooperation. The syscall layer has always been that layer for traditional software.
Las Vegas hotel and casino giant Wynn Resorts appears to be the latest victim of data-grabbing and extortion gang ShinyHunters.…
I analyzed 9,211 weather API requests from 42 Samsung devices over five days and found that the pre-installed Samsung Weather app generates a persistent, unique device fingerprint from saved locations - one that survives IP changes, VPN usage, and network roaming.
The Samsung Weather app polls api.weather.com on a recurring schedule for each saved location. Every request includes a placeid parameter - a 64-character hex string (consistent with SHA-256) that maps to a specific location. The combination of a user's placeid values creates a fingerprint that is effectively unique per device.
- 143 distinct placeid values observed across 42 devices
- 96.4% fingerprint uniqueness: 27 of 28 distinct fingerprints were unique to a single user; the only collision was two users who each tracked the same single location
- Every user with 2+ saved locations had a globally unique fingerprint
- Persistence: fingerprints survived across 8+ distinct IP addresses per user, including residential, university, and mobile carrier networks
- Hardcoded API keys: the app authenticates with static keys baked into the APK, not bound to any device or session. Anyone can query the API and resolve any placeid to a physical location (city, coordinates, country) using these keys
- Redundant coordinate transmission: many requests send raw GPS coordinates alongside the placeid that already encodes the same location, giving the API provider real-time geolocation data beyond what's needed for forecasts
Requests use HTTPS, so passive observers can't read placeid values. But The Weather Company (IBM) receives every request server-side, where the placeid array functions as a natural join key across a user's entire request history.
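A hypothetical Python illustration of the mechanism described above — the real input Samsung/TWC hash into a placeid is unknown, so a city string is a stand-in:

```python
import hashlib

def placeid(location: str) -> str:
    # Stand-in derivation; only the output shape (64 hex chars, consistent
    # with SHA-256) matches what was observed on the wire.
    return hashlib.sha256(location.encode()).hexdigest()

saved_locations = ["Austin, TX", "Seoul, KR", "Berlin, DE"]  # made-up example
fingerprint = tuple(sorted(placeid(loc) for loc in saved_locations))

# The same placeid set rides inside every polling request, so two requests
# arriving from different IPs or VPN exits still join on this fingerprint.
print(fingerprint)
```

Because the fingerprint is derived from saved locations rather than the network path, rotating IPs does nothing to break the linkage.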
This is far from the first time weather apps have faced scrutiny over location data practices:
- 2019: LA City Attorney sued IBM/The Weather Company, alleging the Weather Channel app secretly collected continuous geolocation data and sold it to third parties for targeted advertising and hedge fund analysis. Settled August 2020.
- 2020-2023: Class action alleged TWC tracked users' locations "minute by minute" and sold the data. Settled April 2023.
- 2024: New VPPA lawsuit alleges weather.com shared PII (names, emails, precise location, video viewing data) with ad partners mParticle and AppNexus/Xandr without consent. $2,500 statutory damages per violation.
- 2017: Security researcher Will Strafach found AccuWeather transmitted GPS coordinates and Wi-Fi BSSID data to analytics firm Reveal Mobile even when users denied location permission.
- 2018: A NYT investigation found WeatherBug shared location data with 40+ companies. A broader analysis of 20 popular weather apps found 85% gathered data for advertising and 70% harvested location data for ad targeting.
The placeid mechanism is a distinct vector: even if a user denies location permissions or uses a VPN, the saved location hashes in routine weather API calls function as a stable device fingerprint that existing consent mechanisms don't address.
Samsung ships 50-60 million phones per year in the US alone. The weather app is pre-installed and active by default. Our most active user generated 2,000+ requests over five days without any manual interaction.
Morning folks, will do my best to keep this concise.
Work for an MSSP; large client with many different public websites, running different OSes, with different groups responsible for them.
This enquiry is about what I will call malicious URLs: invalid paths, LFI attempts, commands embedded in the URL; that kind of thing.
The question is, how do "we" (the MSSP responding to the SIEM alerts) tell whether the webserver that received the malicious URL sent back inappropriate data, i.e., did the malicious URL "work"?
The client will not allow us to suppress these alerts and the volume can get very high; filtering has not been particularly effective as the malicious URL strings keep changing along with the source. I have never seen one of these malicious URLs even appear to work in their environment.
This has to happen to all big organizations, wondering how you all deal with it?
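For reference, the rough direction we've sketched so far is to join alerted request paths back against the origin server's access logs and flag 2xx responses with non-empty bodies. A hedged Python sketch, assuming Apache/nginx combined log format; the alerted path is a made-up example:

```python
import re

# Matches the request line, status, and response size in combined log format.
LOG_RE = re.compile(
    r'"(?:GET|POST|HEAD) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3}) (?P<size>\d+|-)'
)

alerted_paths = {"/index.php?page=../../../../etc/passwd"}  # from the SIEM alert

with open("access.log") as f:
    for line in f:
        m = LOG_RE.search(line)
        if not m or m.group("path") not in alerted_paths:
            continue
        # 2xx with a non-empty body: the server may have returned data.
        if m.group("status").startswith("2") and m.group("size") not in ("-", "0"):
            print("candidate for review:", line.strip())
```

This only narrows candidates for manual review; a 200 with a body doesn't prove the LFI actually worked.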
Thank you
The industry has a massive gap in self-assessment. Recent data shows organizations self-assess their readiness at 94%, yet realistic drills put actual readiness closer to 22%.
The problem is that we are siloed.
We run a TTX (tabletop exercise) to satisfy a checklist, then we run a few detection tests to tune an EDR. If you aren't mapping your technical telemetry directly back to your leadership’s decision-making process, you are just guessing.
When you pair them, you get a loop that produces sharper playbooks and cleaner telemetry. Our team at Lares broke down a practical framework for combining these two disciplines into a single narrative of proof.
Read the full post: https://www.lares.com/blog/ttx-and-ttp-replay-combo/
How is your team currently validating that your TTX assumptions match your actual detection capabilities? We're available to discuss / answer your questions in the comments.
Ukrainian national Oleksandr Didenko will spend the next five years behind bars in the US for his involvement in helping North Korean IT workers secure fraudulent employment.…
Rest easy, Par. The wire remembers.
Building a startup entirely on European infrastructure sounds like a nice sovereignty flex right up until you actually try it and realize the real price gets paid in time, tinkering, and slowly unlearning a decade of GitHub muscle memory.…
Uncle Sam's cyber defenders have given federal agencies just three days to patch a maximum-severity Dell bug that's been under active exploitation since at least mid-2024.…
Two former Google engineers and a third alleged accomplice are facing federal charges after prosecutors accused them of swiping sensitive chip and security technology secrets and then trying to cover their tracks when the scheme began to unravel.…
You can now create CrowdStrike workflows within Claude Code or your favourite SKILLS.md-compatible editor.
```
$ claude
/plugin marketplace add https://github.com/eth0izzle/security-skills.git
/plugin install fusion-workflows@security-skills
/plan
"create a scheduled workflow that searches for logins of AD admins that are outside of our IP space (84.23.145.X)"
```
I created this to simplify workflow creation outside the Fusion UI, which I found quite limiting, so this Skill teaches Claude how to write workflows directly in YAML. Set up API access and it will talk to the CrowdStrike API to fetch the enabled integrations and actions in your tenant, using the correct CIDs, input/output schemas, etc., and it can test and import workflows directly. You can basically automate entire playbooks in one shot.
Read more here:
https://darkport.co.uk/blog/building-crowdstrike-workflows-with-claude-code-skills/
All open-source: https://github.com/eth0izzle/security-skills
Would love to hear any feedback! *(or other ideas for Security Skills)*
The UK's data protection watchdog has scored a small win in a lengthy legal battle against a British retail group that lost millions of data records during a 2017 breach.…
The CEO of code review platform provider Snyk has announced he will stand down so the company can find someone better equipped to steer it into the age of AI.…
AI agents are becoming more common and more capable, without consensus or standards on how they should behave, say academic researchers.…
Researchers at Proofpoint late last month uncovered what they describe as a "weird twist" on the growing trend of criminals abusing remote monitoring and management software (RMM) as their preferred attack tools.…