Anonymous, candid reviews made Glassdoor a powerful place to research potential employers. A policy shift requiring users to privately verify their real names is raising privacy concerns.
A global network of violent predators is hiding in plain sight, targeting children on major platforms, grooming them, and extorting them to commit horrific acts of abuse.
A coalition of 41 state attorneys general says Meta is failing to assist Facebook and Instagram users whose accounts have been hacked—and they want the company to take “immediate action.”
Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.
In a test at one station, Transport for London used a computer vision system to try to detect crime and weapons, people falling on the tracks, and fare dodgers, documents obtained by WIRED show.
Members of Congress say the DOJ is funding the use of AI tools that further discriminatory policing practices. They're demanding higher standards for federal grants.
From Sam Altman and Elon Musk to ransomware gangs and state-backed hackers, these are the individuals and groups that spent this year disrupting the world as we know it.
Mark Zuckerberg personally promised that the privacy feature would launch by default on Messenger and Instagram chat. WIRED goes behind the scenes of the company’s colossal effort to get it right.
A WIRED analysis of more than 100 restricted channels shows these communities remain active, and content shared within them often spreads to channels accessible to the public.
Poverty, fentanyl, and lack of public funding mean morgues are overloaded with unidentified bodies. TikTok and Facebook pages are filling the gap—with AI proving a powerful and controversial new tool.
The third GOP debate is sponsored by the Republican Jewish Coalition and will be livestreamed on a platform favored by one of America’s most notorious white nationalists.
A complaint filed with the EU’s independent data regulator accuses YouTube of failing to get explicit user permission for its ad blocker detection system, potentially violating the ePrivacy Directive.
CulturePulse's AI model promises to create a realistic virtual simulation of every Israeli and Palestinian citizen. But don't roll your eyes: It's already been put to the test in other conflict zones.
The slow-motion implosion of Elon Musk’s X has given rise to a slew of competitors, where privacy invasions that ran rampant over the past decade still largely persist.
TikTokers are using a little-known livestreaming feature to falsely represent Israelis and Palestinians—and the company is taking a cut of costly in-app gifts viewers give to participants.
Inauthentic accounts on X flocked to its owner’s post about Ukrainian president Volodymyr Zelensky, hailing “Comrade Musk” and boosting pro-Russia propaganda.
Hamas has threatened to broadcast videos of hostage executions. With the war between Israel and Hamas poised to enter a new phase, are social platforms ready?
A flood of false information, partisan narratives, and weaponized “fact-checking” has obscured efforts to find out who’s responsible for an explosion at a hospital in Gaza.
X is promoting Community Notes to solve its disinformation problems, but some former employees and people who currently contribute notes say it’s not fit for that purpose.
A video posted by Donald Trump Jr. showing Hamas militants attacking Israelis was falsely flagged in a Community Note as being years old, thus making X's disinformation problem worse, not better.
X’s Trust and Safety team says it’s working to remove false information related to the Israel-Hamas war. Meanwhile, Elon Musk is sharing conspiracies and chatting with QAnon promoters.
People who have turned to X for breaking news about the Israel-Hamas conflict are being hit with old videos, fake photos, and video game footage at a level researchers have never seen.
Corporations are using software to monitor employees on a large scale. Some experts fear the data these tools collect could be used to automate people out of their jobs.
Senators are meeting with Silicon Valley's elite to learn how to deal with AI. But can Congress tackle the rapidly emerging tech before working on itself?
State and local governments in the US are scrambling to harness tools like ChatGPT to unburden their bureaucracies, rushing to write their own rules—and avoid generative AI's many pitfalls.
The social media giant filed a lawsuit against a nonprofit that researches hate speech online. It’s the latest effort to cut off the data needed to expose online platforms’ failings.
Since 2018, a dedicated team within Microsoft has attacked machine learning systems to make them safer. But with the public release of new generative AI tools, the field is already evolving.
The US Congress is trying to tame the rapid rise of artificial intelligence. But senators’ failure to tackle privacy reform is making the task a nightmare.
Meta’s Twitter alternative promises that it will work with decentralized platforms, giving you greater control of your data. You can hold the company to that—if you don't sign up.
The record-breaking GDPR penalty for data transfers to the US could upend Meta's business and spur regulators to finalize a new data-sharing agreement.
The state is poised to be the first in the US to block downloads of the popular app, which could ignite a precarious chain reaction for digital rights.
To beat back fake accounts, the professional social network is rolling out new tools to prove you work where you say you do and are who you say you are.