FreshRSS

The Hacker News

Researchers Uncover 'LLMjacking' Scheme Targeting Cloud-Hosted AI Models

By: Newsroom β€” May 10th 2024 at 07:41
Cybersecurity researchers have discovered a novel attack that employs stolen cloud credentials to target cloud-hosted large language model (LLM) services with the goal of selling access to other threat actors. The attack technique has been codenamed LLMjacking by the Sysdig Threat Research Team. "Once initial access was obtained, they exfiltrated cloud credentials and gained

U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

By: Newsroom β€” April 30th 2024 at 10:36
The U.S. government has unveiled new security guidelines aimed at bolstering critical infrastructure against artificial intelligence (AI)-related threats. "These guidelines are informed by the whole-of-government effort to assess AI risks across all sixteen critical infrastructure sectors, and address threats both to and from, and involving AI systems," the Department of Homeland Security (DHS)

Microsoft Warns: North Korean Hackers Turn to AI-Fueled Cyber Espionage

By: Newsroom β€” April 22nd 2024 at 07:12
Microsoft has revealed that North Korea-linked state-sponsored cyber actors have begun to use artificial intelligence (AI) to make their operations more effective and efficient. "They are learning to use tools powered by AI large language models (LLM) to make their operations more efficient and effective," the tech giant said in its latest report on East Asia hacking groups. The

From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

By: Newsroom β€” March 19th 2024 at 13:55
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News.

Third-Party ChatGPT Plugins Could Lead to Account Takeovers

By: Newsroom β€” March 15th 2024 at 11:34
Cybersecurity researchers have found that third-party plugins available for OpenAI ChatGPT could act as a new attack surface for threat actors looking to gain unauthorized access to sensitive data. According to new research published by Salt Labs, security flaws found directly in ChatGPT and within the ecosystem could allow attackers to install malicious plugins without users' consent

Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

By: Newsroom β€” March 13th 2024 at 10:14
Google's Gemini large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks. The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API. The first vulnerability involves

Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets

By: Newsroom β€” March 5th 2024 at 10:38
More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware. β€œThe number of infected devices decreased slightly in mid- and late

Three Tips to Protect Your Secrets from AI Accidents

By: The Hacker News β€” February 26th 2024 at 10:29
Last year, the Open Worldwide Application Security Project (OWASP) published multiple versions of the "OWASP Top 10 For Large Language Models," reaching a 1.0 document in August and a 1.1 document in October. These documents not only demonstrate the rapidly evolving nature of Large Language Models, but the evolving ways in which they can be attacked and defended. We're going to talk in this

Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

By: Newsroom β€” February 23rd 2024 at 11:31
Microsoft has released an open access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team

Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyber Attacks

By: Newsroom β€” February 14th 2024 at 14:39
Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber attack operations. The findings come from a report published by Microsoft in collaboration with OpenAI, both of which said they disrupted efforts made by five state-affiliated actors that used its

How to Prevent ChatGPT From Stealing Your Content & Traffic

By: The Hacker News β€” August 30th 2023 at 11:48
ChatGPT and similar large language models (LLMs) have added further complexity to the ever-growing online threat landscape. Cybercriminals no longer need advanced coding skills to execute fraud and other damaging attacks against online businesses and customers, thanks to bots-as-a-service, residential proxies, CAPTCHA farms, and other easily accessible tools. Now, the latest technology damaging