Cybersecurity researchers have found that third-party plugins available for OpenAI ChatGPT could act as a new attack surface for threat actors looking to gain unauthorized access to sensitive data.
According to new research published by Salt Labs, security flaws found directly in ChatGPT and within its plugin ecosystem could allow attackers to install malicious plugins without users' consent.
More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show.
These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware.
"The number of infected devices decreased slightly in mid- and late
Italy's data protection authority (DPA) has notified ChatGPT-maker OpenAI of supposedly violating privacy laws in the region.
"The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation]," the Garante per la protezione dei dati personali (aka the Garante) said in a statement on Monday.
It also said it
Generative AI will enable anyone to launch sophisticated phishing attacks that only next-generation MFA devices can stop.
The least surprising headline from 2023 is that ransomware again set new records for the number of incidents and the damage inflicted. We saw new headlines every week, which included a who's-who of big-name organizations. If MGM, Johnson Controls, Clorox, Hanes Brands, Caesars
The Vietnamese threat actors behind the Ducktail stealer malware have been linked to a new campaign that ran between March and early October 2023, targeting marketing professionals in India with an aim to hijack Facebook business accounts.
"An important feature that sets it apart is that, unlike previous campaigns, which relied on .NET applications, this one used Delphi as the programming
Google has announced that it's expanding its Vulnerability Rewards Program (VRP) to compensate researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort to bolster AI safety and security.
"Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or
ChatGPT has transformed the way businesses generate textual content, which can potentially result in a quantum leap in productivity. However, generative AI innovation also introduces a new dimension of data exposure risk, when employees inadvertently type or paste sensitive business data into ChatGPT or similar applications. DLP solutions, the go-to answer for similar challenges, are
"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords 'MyHotKeyHandler,' 'Keylogger,' and 'macOS'" — this is a message from ChatGPT, followed by a piece of malicious code and a brief remark not to use it for illegal purposes. Initially published by Moonlock Lab, the screenshots of ChatGPT writing code for keylogger malware are yet
Malicious ads served inside Microsoft Bing's artificial intelligence (AI) chatbot are being used to distribute malware when searching for popular tools.
The findings come from Malwarebytes, which revealed that unsuspecting users can be tricked into visiting booby-trapped sites and installing malware directly from Bing Chat conversations.
Introduced by Microsoft in February 2023, Bing Chat is an
ChatGPT and similar large language models (LLMs) have added further complexity to the ever-growing online threat landscape. Cybercriminals no longer need advanced coding skills to execute fraud and other damaging attacks against online businesses and customers, thanks to bots-as-a-service, residential proxies, CAPTCHA farms, and other easily accessible tools.
Now, the latest technology damaging