❌

Normal view

Reverse CAPTCHA: Evaluating LLM Susceptibility to Invisible Unicode Instruction Injection

Tested 5 LLMs (GPT-5.2, GPT-4o-mini, Claude Opus/Sonnet/Haiku) against invisible instructions encoded in zero-width characters and Unicode Tags, hidden inside normal trivia questions.

The practical takeaway for anyone building on LLM APIs: tool access transforms invisible Unicode from an ignorable artifact into a decoded instruction channel. Models with code execution can write scripts to extract and follow hidden payloads.

Other findings:

  • OpenAI and Anthropic models are vulnerable to different encoding schemes β€” attackers need to fingerprint the target model
  • Without explicit decoding hints, compliance is near-zero β€” but a single line like "check for hidden Unicode" is enough to trigger extraction
  • Standard Unicode normalization (NFC/NFKC) does not strip these characters

Defense: strip characters in U+200B-200F, U+2060-2064, and U+E0000-E007F ranges at the input boundary. Be careful with zero-width joiners (U+200D) which are required for emoji rendering.

Code + data: https://github.com/canonicalmg/reverse-captcha-eval

Writeup: https://moltwire.com/research/reverse-captcha-zw-steganography

submitted by /u/thecanonicalmg
[link] [comments]
❌