ASR AI Security Radar

Back to incidents

AI security incident: CVE-2026-26268 (NVD)

Incident date: February 13, 2026 | Published: February 14, 2026 | Source: NVD | Classification confidence: 75%

This incident is part of the public archive and includes explicit AI-related signals from the cited source material. Review methodology.

Cursor is a code editor built for programming with AI. Sandbox escape via writing .git configuration was possible in versions prior to 2.5. A malicious agent (ie prompt injection) could write to improperly protected .git settings, including git hooks, which may cause out-of-sandbox RCE next time they are triggered. No user interaction was required as Git executes these commands automatically. Fixed in version 2.5.

Why This Is AI-Related

This page is treated as AI-specific because the source material references prompt injection, which places the issue inside an AI workflow, model, assistant, or supporting dependency rather than a generic software bulletin.

  • prompt injection

Affected Workflow

LLM prompts, agent workflows, retrieval layers, and connected tools should be reviewed first.

Likely Attack Path

Untrusted prompts or tool instructions can override intended guardrails, then trigger data access or unsafe downstream actions.

Impact

The weakness can let untrusted prompts or tool instructions bypass intended guardrails and trigger unsafe downstream actions or data access. Severity HIGH. Classification confidence 75%. Source channel NVD.

Detection And Triage Signals

  • Unexpected tool invocation chains after user prompts
  • Prompt logs that include instruction override patterns or policy bypass text
  • Retrieval or plugin calls that expose sensitive internal context

Recommended Response

  • Review prompt templates, tool-invocation rules, and system instructions for the affected workflow.
  • Restrict sensitive tools, retrieval scopes, and outbound actions until guardrails are validated.
  • Search logs for prompt override attempts, unusual tool chains, and sensitive data exposure after user input.

Compliance And Business Impact

Prompt-layer weaknesses can expose regulated data, create unsafe actions, and weaken audit evidence around AI control boundaries.

Sources

Want alerts like this in real time?

Get notified with incident context, likely impact, and response guidance.

Get Notified

More incidents