Hey folks,
I just launched Mithra, a security scanner built specifically for REST APIs that integrate large language models like GPT, Claude, open-source LLMs , anyone!
LLM-backed endpoints introduce a new set of risksβprompt injection, context leakage, over-permissive outputs, even logic abuse through natural language. Traditional API scanners don't catch these.
Mithra scans for both OWASP API Top 10 and LLM-specific threats, directly with 3 clicks (no agents, no container dependencies). Itβs designed for devs shipping LLM-powered features like search, summarization, chatbots, or completions.
What it does:
β Detects prompt injection, do anything now, Insecure output handling, sensitive information disclosure etc..
β Flags data/context leakage and logic gaps
Would love feedback from folks building or securing LLM interfaces. Happy to answer questions!
π mithrasec.com