Home Tech Recent Article Shows 26 Routers in Clear Suspicious Behavior Involving Injection or Credential Theft 

Recent Article Shows 26 Routers in Clear Suspicious Behavior Involving Injection or Credential Theft 

Recent Article Shows 26 Routers in Clear Suspicious Behavior Involving Injection or Credential Theft 

A recently published research paper titled “Your Agent Is Mine: Measuring Malicious Intermediary Attacks on the LLM Supply Chain” from researchers at UC Santa Barbara, UC San Diego, and collaborators including blockchain security firm Fuzzland is generating buzz on AI and Crypto forums.

What are LLM routers

LLM routers also called API routers or proxies are intermediary services that sit between your application and AI agent and the actual model providers like OpenAI, Anthropic, Grok/xAI, etc. They often: Aggregate multiple providers for cost optimization, fallback, or load balancing. Handle routing, formatting, or additional features. Are popular in AI agent setups e.g., coding agents like Claude Code, autonomous agents handling tools and APIs.

Many are cheap or free third-party options sold on marketplaces like Taobao, Xianyu, or Shopify, or shared in developer communities. What did the researchers find? They tested 428 routers (28 paid + 400 free): 9 routers (1 paid, 8 free) actively injected malicious code into tool-call responses. This means they rewrote the JSON output from the LLM before it reached the agent’s execution layer—potentially making the agent run harmful commands.

Register for Tekedia Mini-MBA edition 20 (June 8 – Sept 5, 2026).

Register for Tekedia AI in Business Masterclass.

Join Tekedia Capital Syndicate and co-invest in great global startups.

Register for Tekedia AI Lab.

17 routers accessed or exfiltrated researcher-controlled AWS/cloud credentials (sent as decoys). At least one router successfully drained Ether from a researcher-controlled decoy wallet; small amount in testing, but they reference a real-world client loss of ~$500k via a compromised router. Two used adaptive evasion techniques to avoid detection.

In total, 26 routers showed clear malicious or highly suspicious behavior involving injection or credential theft. The attacks exploit the fact that many routers terminate TLS; so they see plaintext prompts, API keys, private keys, tool calls, etc. and have full ability to modify responses. There’s often no cryptographic verification that the tool call came from the actual LLM.

Real-world impact is already happening, especially for crypto and smart contract developers using AI agents that auto-approve tool executions or handle wallets and keys. One quoted researcher noted: “26 LLM routers are secretly injecting malicious tool calls and stealing creds. One drained our client $500k wallet.” They also demonstrated “poisoning” the ecosystem to redirect traffic.

AI agents increasingly act autonomously; calling tools, executing code, managing crypto. A compromised router breaks the trust chain between the model and execution. Detection is hard because the injection looks like a legitimate tool call. Auto-approve features (common for convenience) make it worse—91% of tested real Codex-like sessions ran fully auto-approved in the study. The paper formalizes attack classes like: Payload injection — Rewriting tool calls.

Secret exfiltration — Stealing keys and credentials silently. Avoid untrusted third-party routers when possible, especially cheap/free ones or those from unknown marketplaces. Stick to official provider APIs or well-audited open-source proxies e.g., LiteLLM with strict controls, but even those aren’t immune if misconfigured.

Never send sensitive data like private keys, seed phrases, high-privilege API keys through routers in plaintext. Use cryptographic verification where available e.g., signed responses from the model provider or run inference in trusted execution environments (TEEs). Implement client-side safeguards: sandbox tool execution, network allowlisting, secret scanning and leak detection, and manual review for high-stakes actions.

For crypto/AI agent devs: Treat routers as part of the supply chain attack surface—audit them or eliminate the middleman. It’s a solid, systematic study with a clear threat model and mitigations. This highlights a growing risk in the AI supply chain as agents become more powerful and autonomous. If you’re building or using LLM agents, it’s worth reviewing your routing setup immediately.

No posts to display

Post Comment

Please enter your comment!
Please enter your name here