AI security incident: CVE-2026-26321 (NVD)
OpenClaw is a personal AI assistant. Prior to version 2026.2.14, the Feishu extension's sendMediaFeishu tool treated attacker-controlled mediaUrl values as local filesystem paths and read them directly. An attacker who can influence tool calls (directly or via prompt injection) may be able to exfiltrate local files by supplying a path such as /etc/passwd as mediaUrl. Upgrade to OpenClaw 2026.2.14 or newer. The fix removes direct local file reads from this code path and routes media loading through hardened helpers that enforce local-root restrictions.
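The "local-root restriction" described in the fix can be sketched as follows. This is a hypothetical illustration, not OpenClaw's actual code: the helper name (resolveMediaPath) and the root directory are assumptions.

```typescript
import * as path from "path";

// Assumed allow-listed root for local media files (illustrative path).
const MEDIA_ROOT = "/var/lib/openclaw/media";

// Sketch of a hardened helper: resolve the requested value against the
// media root and reject anything that escapes it.
function resolveMediaPath(mediaUrl: string): string {
  // Remote URLs should go through a separate download pipeline,
  // never be treated as local paths.
  if (/^https?:\/\//i.test(mediaUrl)) {
    throw new Error("remote URLs must use the download pipeline");
  }
  const resolved = path.resolve(MEDIA_ROOT, mediaUrl);
  // path.resolve collapses ".." segments and lets an absolute mediaUrl
  // override the root, so a prefix check after resolution is sufficient.
  if (resolved !== MEDIA_ROOT && !resolved.startsWith(MEDIA_ROOT + path.sep)) {
    throw new Error(`path escapes media root: ${mediaUrl}`);
  }
  return resolved;
}
```

The vulnerable pattern, by contrast, passed mediaUrl straight to a file read, so both absolute paths (/etc/passwd) and traversal sequences (../../etc/passwd) reached the filesystem.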
Why This Is AI-Related
This advisory is treated as AI-specific because the vulnerability is reachable via prompt injection, which places it inside an AI assistant workflow and its connected tools rather than a generic software bulletin.
- prompt injection
Affected Workflow
Deployments running the OpenClaw Feishu extension should be reviewed first, along with the LLM prompts, agent workflows, retrieval layers, and connected tools that can trigger its tool calls.
Likely Attack Path
Untrusted prompts or tool instructions override intended guardrails and cause the assistant to invoke sendMediaFeishu with a local filesystem path as mediaUrl, exfiltrating that file's contents through the Feishu media-send path.
Impact
The weakness lets untrusted prompts or tool instructions bypass intended guardrails and trigger unsafe downstream actions or data access; in this case, local file contents can be exfiltrated via the Feishu media-send path. Severity: HIGH. Classification confidence: 64%. Source channel: NVD.
Detection And Triage Signals
- Unexpected tool invocation chains after user prompts
- Prompt logs that include instruction override patterns or policy bypass text
- Retrieval or plugin calls that expose sensitive internal context
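The signals above can be checked mechanically against prompt and tool-call logs. A minimal sketch, assuming plain-text log lines; the patterns and function name are illustrative and should be tuned to your own logging format:

```typescript
// Example instruction-override and suspicious-path patterns (not exhaustive).
const suspiciousPatterns: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/i,
  /disregard (the )?system prompt/i,
  // Local absolute paths appearing as mediaUrl in logged tool calls.
  /"mediaUrl"\s*:\s*"\/(etc|proc|root|home)\//i,
];

// Return the log lines that match any pattern, for manual triage.
function flagSuspiciousLines(logLines: string[]): string[] {
  return logLines.filter((line) =>
    suspiciousPatterns.some((p) => p.test(line))
  );
}
```

Matches are triage leads, not verdicts; each flagged line still needs human review in context.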
Recommended Response
- Review prompt templates, tool-invocation rules, and system instructions for the affected workflow.
- Restrict sensitive tools, retrieval scopes, and outbound actions until guardrails are validated.
- Search logs for prompt override attempts, unusual tool chains, and sensitive data exposure after user input.
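One way to implement the second step (restricting sensitive tools until guardrails are validated) is a temporary deny-list gate in front of tool dispatch. This is a sketch under assumed names; OpenClaw's real tool-dispatch API may differ:

```typescript
// Tools temporarily disabled pending guardrail review (illustrative).
const blockedTools = new Set<string>(["sendMediaFeishu"]);

// Call before dispatching any tool invocation; throws for blocked tools.
function authorizeToolCall(toolName: string): void {
  if (blockedTools.has(toolName)) {
    throw new Error(
      `tool temporarily disabled pending guardrail review: ${toolName}`
    );
  }
}
```

The deny-list can be lifted tool by tool once the upgrade to 2026.2.14 is rolled out and logs have been reviewed.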
Compliance And Business Impact
Prompt-layer weaknesses can expose regulated data, create unsafe actions, and weaken audit evidence around AI control boundaries.