ClearFrame – an open-source AI agent protocol with auditability and goal monitoring

I’ve been playing with the current crop of AI agent runtimes and noticed the same pattern over and over:

  • One process both reads untrusted content and executes tools
  • API keys live in plaintext dotfiles
  • There’s no audit log of what the agent actually did
  • There’s no concept of the agent’s goal, so drift is invisible
  • When something goes wrong, there is nothing to replay or verify

So I built ClearFrame, an open-source protocol and runtime that tries to fix those structural issues rather than paper over them with prompts.

What ClearFrame does differently

  • Reader/Actor isolation: Untrusted content ingestion (web, files, APIs) runs in a separate sandbox from tool execution. The process that can run shell, write_file, etc. never sees raw web content directly.
  • GoalManifest + alignment scoring: Every session starts with a GoalManifest that declares the goal, allowed tools, domains, and limits. Each proposed tool call is scored for alignment and can be auto-approved, queued for human review, or blocked.
  • Reasoning Transparency Layer (RTL): The agent’s chain of thought is captured as structured JSON (with hashes for tamper evidence), so you can replay and inspect how it reached a decision.
  • HMAC-chained audit log: Every event (session start/end, goal scores, tool approvals, context hashes) is written to an append-only log with a hash chain, so you can verify the log hasn’t been edited after the fact.
  • AgentOps control plane: A small FastAPI app that shows live sessions, alignment scores, reasoning traces, and queued tool calls. You can approve or block calls in real time and verify audit integrity.
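To make the hash-chain idea concrete: each log entry's MAC covers both the event and the previous entry's MAC, so editing any earlier entry invalidates everything after it. Here is a minimal sketch of that pattern in Python; the field names and the "genesis" seed are my own illustration, not ClearFrame's actual log format:

```python
import hashlib
import hmac
import json

def append_event(log, key, event):
    # Chain each entry to the previous entry's MAC so edits are detectable.
    prev = log[-1]["mac"] if log else "genesis"
    payload = json.dumps(event, sort_keys=True)
    mac = hmac.new(key, (prev + payload).encode(), hashlib.sha256).hexdigest()
    log.append({"event": event, "mac": mac})

def verify_chain(log, key):
    # Recompute every MAC from the start; any edit breaks the chain from
    # that point onward.
    prev = "genesis"
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hmac.new(key, (prev + payload).encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev = entry["mac"]
    return True

key = b"demo-secret"
log = []
append_event(log, key, {"type": "session_start", "goal": "summarize docs"})
append_event(log, key, {"type": "tool_call", "tool": "read_file", "approved": True})
assert verify_chain(log, key)

log[0]["event"]["goal"] = "exfiltrate"  # tamper with an old entry
assert not verify_chain(log, key)       # ...and the chain no longer verifies
```

Anyone holding the key can verify the log end to end; an attacker who can edit the file but not forge MACs can't rewrite history undetected.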

Who this is for

  • People wiring agents into production systems and worried about prompt injection, credential leakage, or goal drift
  • Teams who need to show regulators / security what their agents are actually doing
  • Anyone who wants something more inspectable than “call tools from inside the model and hope for the best”

Status

  • Written in Python 3.11+
  • Packaged as a library with a CLI (clearframe init, clearframe audit-tail, etc.)
  • GitHub Pages site is live with docs and examples

Links

I’d love feedback from people building or operating agents in the real world:

  • Does this address the actual failure modes you’re seeing?
  • What would you want to plug ClearFrame into first (LangChain, LlamaIndex, AutoGen, something else)?
  • What’s missing for you to trust an agent runtime in production?
submitted by /u/TheDaVinci1618

Reverse engineered SilentSDK - RAT and C2 infrastructure found on beamers, sold on Amazon/AliExpress/eBay

Hi everyone,

I recently bought one of those popular, cheap Android projectors and noticed some suspicious network activity. Being curious, I decided to set up a lab, intercept the traffic, and dig into the firmware.

I ended up uncovering a factory-installed malware ecosystem including a disguised dropper (StoreOS) and a persistent RAT (SilentSDK) that communicates with a C2 server in China (api.pixelpioneerss.com).

Key findings of my analysis:

  • Obfuscation: The malware uses a byte-reversal trick to disguise its APK payloads.
  • RAT Capabilities: Decrypted strings reveal remote command execution, chmod 777 on secondary payloads, and deep device fingerprinting.
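For readers unfamiliar with the trick: a byte-reversed payload is just the APK stored back to front, so a naive scan never sees the ZIP magic bytes at the start of the file. A minimal repair sketch of that idea (this is my own illustration under that assumption, not the author's actual script; names are made up):

```python
# Hypothetical sketch: undo a fully byte-reversed APK payload.
APK_MAGIC = b"PK\x03\x04"  # APKs are ZIP archives, so they start with PK\x03\x04

def repair(blob: bytes) -> bytes:
    # Reverse the byte order and sanity-check that we now have a ZIP header.
    fixed = blob[::-1]
    if not fixed.startswith(APK_MAGIC):
        raise ValueError("reversed blob is not a valid ZIP/APK")
    return fixed

# Simulate a disguised payload: a (fake) APK stored back to front.
disguised = (APK_MAGIC + b"\x00fake-zip-data")[::-1]
assert repair(disguised).startswith(APK_MAGIC)
```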

This is my first independent technical report and deep dive into malware research. I’ve documented the full kill chain, decrypted the obfuscated strings, and written scripts to repair the malformed payloads for analysis.

Full Report: https://github.com/Kavan00/Android-Projector-C2-Malware

I'd love to get your opinion on the report.

Looking forward to your feedback!

submitted by /u/DerErbsenzaehler