Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules.
"Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News.
More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show.
These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware.
"The number of infected devices decreased slightly in mid- and late
Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber attack operations.
The findings come from a report published by Microsoft in collaboration with OpenAI, both of which said they disrupted efforts made by five state-affiliated actors that used its
Italy's data protection authority (DPA) has notified ChatGPT-maker OpenAI of allegedly violating privacy laws in the region.
"The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation]," the Garante per la protezione dei dati personali (aka the Garante) said in a statement on Monday.
It also said it
ChatGPT: Productivity tool, great for writing poems, and… a security risk?! In this article, we show how threat actors can exploit ChatGPT, but also how defenders can use it for leveling up their game.
ChatGPT is the fastest-growing consumer application to date. The extremely popular generative AI chatbot can generate human-like, coherent, and contextually relevant responses.
Google has announced that it's expanding its Vulnerability Rewards Program (VRP) to compensate researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort to bolster AI safety and security.
"Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or
Over 101,100 compromised OpenAI ChatGPT account credentials have found their way onto illicit dark web marketplaces between June 2022 and May 2023, with India alone accounting for 12,632 stolen credentials.
The credentials were discovered within information stealer logs made available for sale on the cybercrime underground, Group-IB said in a report shared with The Hacker News.
"The number of
Malicious Google Search ads for generative AI services like OpenAI ChatGPT and Midjourney are being used to direct users to sketchy websites as part of a BATLOADER campaign designed to deliver RedLine Stealer malware.
"Both AI services are extremely popular but lack first-party standalone apps (i.e., users interface with ChatGPT via their web interface while Midjourney uses Discord)," eSentire