
1 April 2026 · /r/netsec - Information Security News & Discussion

Authority Encoding Risk (AER)

Most AI discussions focus on correctness.

Accuracy. Alignment. Output quality.

But there’s a more fundamental problem underneath all of that:

Who, or what, is actually allowed to execute a decision?

---

I just published a paper introducing:

Authority Encoding Risk (AER)

A measurable variable for something most systems don’t track at all:

Authority ambiguity at the moment of execution.

---

Today’s systems can tell you:

• if something is likely correct

• if it follows policy

• if it appears safe

But they cannot reliably answer:

Is this decision admissible under real-world authority constraints?
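To make the distinction concrete, here is a minimal sketch of what an execution-time admissibility gate could look like: one that refuses to act not because the action looks unsafe, but because authority over it is absent or ambiguous. All names here (`Decision`, `AuthorityRegistry`, `is_admissible`) are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    action: str
    principal: str  # who/what is attempting to execute


class AuthorityRegistry:
    """Maps each action to the set of principals authorized to execute it."""

    def __init__(self, grants: dict[str, set[str]]):
        self._grants = grants

    def holders(self, action: str) -> set[str]:
        return self._grants.get(action, set())


def is_admissible(decision: Decision, registry: AuthorityRegistry) -> bool:
    holders = registry.holders(decision.action)
    # Admissible only when authority is both present and unambiguous:
    # the acting principal must hold it, and exactly one holder exists.
    return decision.principal in holders and len(holders) == 1


registry = AuthorityRegistry({
    "approve_loan": {"underwriter"},          # single, clear holder
    "waive_fee": {"agent", "ai_assistant"},   # two holders -> ambiguous
})

print(is_admissible(Decision("approve_loan", "underwriter"), registry))  # True
print(is_admissible(Decision("waive_fee", "ai_assistant"), registry))    # False
```

Note that the second check fails even though the principal appears in the grant set: the point of the sketch is that shared, unresolved authority is itself treated as inadmissible.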

---

That gap shows up in:

• automation systems

• AI-assisted decisions

• institutional workflows

• underwriting and loss modeling

And right now, it’s largely invisible.

---

The paper breaks down:

• how authority ambiguity propagates into risk

• why existing frameworks fail to capture it

• how it can be measured before loss occurs
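As a rough illustration of "measured before loss occurs": one candidate shape for such a variable is the Shannon entropy over the plausible authority holders for a decision. This is my own sketch of what a measurable ambiguity signal could look like, not the paper's actual AER formulation, which may differ substantially.

```python
import math

def authority_ambiguity(weights: dict[str, float]) -> float:
    """Shannon entropy (in bits) over plausible authority holders.

    `weights` assigns each candidate holder a relative plausibility.
    0.0 means one unambiguous holder; higher values mean more ambiguity.
    """
    total = sum(weights.values())
    probs = [w / total for w in weights.values() if w > 0]
    return sum(-p * math.log2(p) for p in probs)


print(authority_ambiguity({"underwriter": 1.0}))                 # 0.0
print(authority_ambiguity({"agent": 0.5, "ai_assistant": 0.5}))  # 1.0
```

A metric like this could in principle be logged per decision and thresholded before execution, which is what distinguishes it from post-hoc loss analysis.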

---

If you’re working anywhere near AI, risk, infrastructure, or decision systems, this is a layer worth paying attention to.

---

There’s a category of risk most AI systems don’t even know exists.

This paper represents an initial formulation.

Ongoing work is focused on tightening definitions, expanding evidence, and strengthening the model.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6229278

submitted by /u/Dramatic-Ebb-7165