Approximately 500 NIST staffers, including at least three lab directors, are expected to lose their jobs at the standards agency as part of the ongoing DOGE purge, sources tell WIRED.
China-based DeepSeek has exploded in popularity, drawing greater scrutiny. Case in point: Security researchers found more than 1 million records, including user data and API keys, in an open database.
Amid ongoing fears over TikTok, Chinese generative AI platform DeepSeek says it’s sending heaps of US user data straight to its home country, potentially setting the stage for greater scrutiny.
The fate of TikTok now rests in the hands of the US Supreme Court. If a law banning the social video app this month is upheld, it won’t disappear from your phone—but it will get messy fast.
A network of Facebook pages has been advertising “fuel filters” that are actually meant to be used as silencers, which are heavily regulated by US law. Even US military officials are concerned.
When programmer Micah Lee was kicked off X for a post that offended Elon Musk, he didn't look back. His new tool for saving and deleting your X posts can give you that same sweet release.
Moldova is facing a tide of disinformation unprecedented in complexity and aggression, the head of a new center meant to combat it tells WIRED. And platforms like Facebook, TikTok, Telegram, and YouTube could do more.
Global Intelligence claims its Cybercheck technology can help cops find key evidence to nail a case. But a WIRED investigation reveals the smoking gun often appears far less solid.
Bots that “remove clothes” from images have run rampant on the messaging app, allowing people to create nonconsensual deepfake images even as lawmakers and tech companies try to crack down.
xAI’s generative AI tool, Grok AI, is unhinged compared to its competitors. It’s also scooping up a ton of data that people post on X. Here’s how to keep your posts out of Grok—and why you should.
With 20,000 internet providers across the country, the technical challenges of blocking X in Brazil mean some connections are slipping through the cracks.
French authorities detained Durov to question him as part of a probe into a wide range of alleged violations—including money laundering and CSAM—but it remains unclear if he will face charges.
Durov has reportedly been detained in France over Telegram’s alleged failure to adequately moderate illegal content on the messaging app. His arrest sparked backlash and left some associates asking, what now?
Social Security numbers, physical addresses, and more—all available online. After months of confusion, leaked information from a background-check firm underscores the long-term risks of data breaches.
The US military has abandoned its half-century dream of a suit of powered armor in favor of a “hyper enabled operator,” a tactical AI assistant for special operations forces.
AWS hosted a server linked to the Bezos family- and Nvidia-backed search startup that appears to have been used to scrape the sites of major outlets, prompting an inquiry into potential rules violations.
Blockchain analysis firm Elliptic, MIT, and IBM have released a new AI model—and the 200-million-transaction dataset it's trained on—that aims to spot the “shape” of bitcoin money laundering.
The world's most-visited deepfake website and another large competing site are stopping people in the UK from accessing them, days after the UK government announced a crackdown.
Some companies let you opt out of allowing your content to be used for generative AI. Here’s how to take back (at least a little) control from ChatGPT, Google’s Gemini, and more.
Anonymous, candid reviews made Glassdoor a powerful place to research potential employers. A policy shift requiring users to privately verify their real names is raising privacy concerns.
A global network of violent predators is hiding in plain sight, targeting children on major platforms, grooming them, and extorting them to commit horrific acts of abuse.
A coalition of 41 state attorneys general says Meta is failing to assist Facebook and Instagram users whose accounts have been hacked—and they want the company to take “immediate action.”
Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.
In a test at one station, Transport for London used a computer vision system to try to detect crime and weapons, people falling on the tracks, and fare dodgers, documents obtained by WIRED show.
Members of Congress say the DOJ is funding the use of AI tools that further discriminatory policing practices. They're demanding higher standards for federal grants.
From Sam Altman and Elon Musk to ransomware gangs and state-backed hackers, these are the individuals and groups that spent this year disrupting the world as we know it.
Mark Zuckerberg personally promised that the privacy feature would launch by default on Messenger and Instagram chat. WIRED goes behind the scenes of the company’s colossal effort to get it right.
A WIRED analysis of more than 100 restricted channels shows these communities remain active, and content shared within them often spreads to channels accessible to the public.
Poverty, fentanyl, and lack of public funding mean morgues are overloaded with unidentified bodies. TikTok and Facebook pages are filling the gap—with AI proving a powerful and controversial new tool.
The third GOP debate is sponsored by the Republican Jewish Coalition and will be livestreamed on a platform favored by one of America’s most notorious white nationalists.
A complaint filed with the EU’s independent data regulator accuses YouTube of failing to get explicit user permission for its ad blocker detection system, potentially violating the ePrivacy Directive.
CulturePulse's AI model promises to create a realistic virtual simulation of every Israeli and Palestinian citizen. But don't roll your eyes: It's already been put to the test in other conflict zones.
The slow-motion implosion of Elon Musk’s X has given rise to a slew of competitors, where privacy invasions that ran rampant over the past decade still largely persist.
TikTokkers are using a little-known livestreaming feature to falsely represent Israelis and Palestinians—and the company is taking a cut of costly in-app gifts viewers give to participants.
Inauthentic accounts on X flocked to its owner’s post about Ukrainian president Volodymyr Zelensky, hailing “Comrade Musk” and boosting pro-Russia propaganda.
Hamas has threatened to broadcast videos of hostage executions. With the war between Israel and Hamas poised to enter a new phase, are social platforms ready?
A flood of false information, partisan narratives, and weaponized “fact-checking” has obscured efforts to find out who’s responsible for an explosion at a hospital in Gaza.
X is promoting Community Notes to solve its disinformation problems, but some former employees and people who currently contribute notes say it’s not fit for that purpose.
A video posted by Donald Trump Jr. showing Hamas militants attacking Israelis was falsely flagged in a Community Note as being years old, thus making X's disinformation problem worse, not better.
X’s Trust and Safety team says it’s working to remove false information related to the Israel-Hamas war. Meanwhile, Elon Musk is sharing conspiracies and chatting with QAnon promoters.
People who have turned to X for breaking news about the Israel-Hamas conflict are being hit with old videos, fake photos, and video game footage at a level researchers have never seen.
Corporations are using software to monitor employees on a large scale. Some experts fear the data these tools collect could be used to automate people out of their jobs.
Senators are meeting with Silicon Valley's elite to learn how to deal with AI. But can Congress tackle the rapidly emerging tech before working on itself?
State and local governments in the US are scrambling to harness tools like ChatGPT to unburden their bureaucracies, rushing to write their own rules—and avoid generative AI's many pitfalls.
The social media giant filed a lawsuit against a nonprofit that researches hate speech online. It’s the latest effort to cut off the data needed to expose online platforms’ failings.
Since 2018, a dedicated team within Microsoft has attacked machine learning systems to make them safer. But with the public release of new generative AI tools, the field is already evolving.
The US Congress is trying to tame the rapid rise of artificial intelligence. But senators’ failure to tackle privacy reform is making the task a nightmare.
Meta’s Twitter alternative promises that it will work with decentralized platforms, giving you greater control of your data. You can hold the company to that—if you don't sign up.