FreshRSS

WIRED

DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot

By: Matt Burgess, Lily Hay Newman — January 31st 2025 at 18:30
Security researchers tested 50 well-known jailbreaks against DeepSeek’s popular new AI chatbot. It didn’t stop a single one.
The Hacker News

Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

By: Newsroom — March 13th 2024 at 10:14
Google's Gemini large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks. The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API. The first vulnerability involves