Approximately 500 NIST staffers, including at least three lab directors, are expected to lose their jobs at the standards agency as part of the ongoing DOGE purge, sources tell WIRED.
China-based DeepSeek has exploded in popularity, drawing greater scrutiny. Case in point: Security researchers found more than 1 million records, including user data and API keys, in an open database.
Amid ongoing fears over TikTok, Chinese generative AI platform DeepSeek says it’s sending heaps of US user data straight to its home country, potentially setting the stage for greater scrutiny.
Global Intelligence claims its Cybercheck technology can help cops find key evidence to nail a case. But a WIRED investigation reveals the smoking gun often appears far less solid.
Bots that “remove clothes” from images have run rampant on the messaging app, allowing people to create nonconsensual deepfake images even as lawmakers and tech companies try to crack down.
xAI’s generative AI tool, Grok AI, is unhinged compared to its competitors. It’s also scooping up a ton of data that people post on X. Here’s how to keep your posts out of Grok—and why you should.
The US military has abandoned its half-century dream of a suit of powered armor in favor of a “hyper enabled operator,” a tactical AI assistant for special operations forces.
AWS hosted a server linked to the Bezos family- and Nvidia-backed search startup that appears to have been used to scrape the sites of major outlets, prompting an inquiry into potential rules violations.
Blockchain analysis firm Elliptic, MIT, and IBM have released a new AI model—and the 200-million-transaction dataset it's trained on—that aims to spot the “shape” of bitcoin money laundering.
The world's most-visited deepfake website and another large competing site are stopping people in the UK from accessing them, days after the UK government announced a crackdown.
Some companies let you opt out of allowing your content to be used for generative AI. Here’s how to take back (at least a little) control from ChatGPT, Google’s Gemini, and more.
Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.
In a test at one station, Transport for London used a computer vision system to try to detect crime and weapons, people falling on the tracks, and fare dodgers, documents obtained by WIRED show.
Members of Congress say the DOJ is funding the use of AI tools that further discriminatory policing practices. They're demanding higher standards for federal grants.
From Sam Altman and Elon Musk to ransomware gangs and state-backed hackers, these are the individuals and groups that spent this year disrupting the world as we know it.
Poverty, fentanyl, and lack of public funding mean morgues are overloaded with unidentified bodies. TikTok and Facebook pages are filling the gap—with AI proving a powerful and controversial new tool.
CulturePulse's AI model promises to create a realistic virtual simulation of every Israeli and Palestinian citizen. But don't roll your eyes: It's already been put to the test in other conflict zones.
Corporations are using software to monitor employees on a large scale. Some experts fear the data these tools collect could be used to automate people out of their jobs.
Senators are meeting with Silicon Valley's elite to learn how to deal with AI. But can Congress tackle the rapidly emerging tech before getting its own house in order?
State and local governments in the US are scrambling to harness tools like ChatGPT to unburden their bureaucracies, rushing to write their own rules—and avoid generative AI's many pitfalls.
Since 2018, a dedicated team within Microsoft has attacked machine learning systems to make them safer. But with the public release of new generative AI tools, the field is already evolving.
The US Congress is trying to tame the rapid rise of artificial intelligence. But senators’ failure to tackle privacy reform is making the task a nightmare.