FreshRSS

πŸ”’
❌ Secure Planet Training Courses Updated For 2019 - Click Here
There are new available articles, click to refresh the page.
Yesterday β€” February 15th 2026Your RSS feeds

I built a free, open-source platform to learn GenAI security, learning content + hands-on labs against real LLMs (beta, looking for feedback)

Hey everyone,

I've been working on PromptTrace, a free platform for learning GenAI security from the ground up. It combines structured learning content with hands-on attack labs against real models. It's currently in beta and I'd love your thoughts and feedback before the full launch.

Learning content (no signup needed):

  • How LLMs actually work, tokenization, attention, generation
  • System prompts, what they are, how they're assembled, why they're vulnerable
  • RAG explained, retrieval pipelines, document injection, trust boundaries
  • Tools & function calling, how LLMs invoke external functions and why that's an attack surface
  • Interactive diagrams and visual explanations throughout

Hands-on labs (free account required):

  • 7 interactive labs across 4 modules: bare LLM attacks, RAG poisoning, tool/function calling exploitation, and defense bypass
  • 10-level Gauntlet, a progressive challenge that chains techniques together, getting harder each level
  • Full context trace on every request, you see exactly what the model sees, including system prompt, injected documents, and available tools

The idea is: first you learn how these systems work, then you break them. Understanding the architecture makes the attacks make sense.

We're still in beta so things might be rough around the edges, but that's exactly why I'm posting here. Would really appreciate your honest feedback: what's missing, what content topics should we cover, what attack scenarios would you want to see?

submitted by /u/MasterpieceMuch872
[link] [comments]
❌