Google's Gemini large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks.
The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API.
The first vulnerability involves
"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords "MyHotKeyHandler," "Keylogger," and "macOS," this is a message from ChatGPT followed by a piece of malicious code and a brief remark not to use it for illegal purposes. Initially published byΒ Moonlock Lab, the screenshots of ChatGPT writing code for a keylogger malware is yet