An interactive application that visualizes and demonstrates Googleβs CaMeL (Capabilities for Machine Learning) security approach for defending against prompt injections in LLM agents.
Link to original paper: https://arxiv.org/pdf/2503.18813
All credit to the original researchers
title={Defeating Prompt Injections by Design}, author={Edoardo Debenedetti and Ilia Shumailov and Tianqi Fan and Jamie Hayes and Nicholas Carlini and Daniel Fabian and Christoph Kern and Chongyang Shi and Andreas Terzis and Florian Tramèr}, year={2025}, eprint={2503.18813}, archivePrefix={arXiv}, primaryClass={cs.CR}, url={https://arxiv.org/abs/2503.18813}, }