ASR AI Security Radar

AI security incident: RediSearch Query Injection in @langchain/langgraph-checkpoint-redis (GHSA-5mx2-w598-3...

Incident date: February 18, 2026 | Published: February 25, 2026 | Source: GitHub Security Advisory | Classification confidence: 80%

This incident is part of the public archive and includes explicit AI-related signals from the cited source material. Review methodology.

Summary

A query injection vulnerability exists in the @langchain/langgraph-checkpoint-redis package's filter handling. The RedisSaver and ShallowRedisSaver classes construct RediSearch queries by directly interpolating user-provided filter keys and values without proper escaping. RediSearch has special syntax characters that can modify query behavior, and when user-controlled data contains these characters, the query logic can be manipulated to bypass intended access controls.

Attack Surface

The core vulnerability is in the list() methods of both RedisSaver and ShallowRedisSaver: these methods fail to escape RediSearch special characters in filter keys and values when constructing queries.
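To make the injection concrete, here is an illustrative TypeScript sketch of the vulnerability class. The function names and the exact special-character set are assumptions for illustration, not the package's actual code; the pattern (interpolating filter values into a RediSearch TAG clause, and escaping syntax characters as the fix) follows the advisory's description.

```typescript
// Characters RediSearch treats as query syntax (illustrative set).
const SPECIAL = /[,.<>{}\[\]"':;!@#$%^&*()\-+=~|\/\\ ]/g;

// Defensive escape: prefix each syntax character with a backslash.
function escapeRediSearch(value: string): string {
  return value.replace(SPECIAL, (ch) => "\\" + ch);
}

// Vulnerable pattern: direct string interpolation into a TAG clause.
function buildFilterUnsafe(key: string, value: string): string {
  return `@${key}:{${value}}`;
}

// Escaped construction neutralizes the syntax characters.
function buildFilterSafe(key: string, value: string): string {
  return `@${escapeRediSearch(key)}:{${escapeRediSearch(value)}}`;
}

// An attacker-controlled value can close the TAG clause and widen the match:
// buildFilterUnsafe("user_id", "alice} | @user_id:{*")
//   → "@user_id:{alice} | @user_id:{*}"  (an OR clause matching every user_id)
```

With escaping applied, the same payload is treated as literal text inside the TAG clause rather than as query syntax, so the filter stays scoped to the intended value.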

Why This Is AI-Related

This page is treated as AI-specific because the source material references langchain, which places the issue inside an AI workflow, model, assistant, or supporting dependency rather than in a generic software bulletin.

  • langchain

Affected Workflow

Review the AI product, dependency, and integration points mentioned in the source advisory before broadening remediation.

Likely Attack Path

The advisory describes an attack path that can affect AI applications, assistants, models, or connected automation workflows wherever the vulnerable component is deployed. In this case, any code path that forwards user-controlled filter keys or values to the list() methods exposes the injection surface.

Impact

The advisory has meaningful security implications for an AI-related product, dependency, or workflow and should be triaged against deployed usage. Manipulated filter queries can bypass intended access controls during checkpoint listing, exposing saved state outside the caller's intended filter scope. Severity: HIGH. Classification confidence: 80%. Source channel: GHSA.

Detection And Triage Signals

  • New security events tied to the affected component or advisory identifier
  • Changes in AI workflow behavior, access logs, or plugin execution after the advisory window
  • Evidence that the vulnerable version is active in environments that process sensitive data
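The last signal above can be checked mechanically. A minimal sketch, assuming npm's lockfileVersion 2/3 layout where installed packages are keyed by their node_modules path; the fixed version to compare the results against must come from the advisory itself:

```typescript
const AFFECTED = "@langchain/langgraph-checkpoint-redis";

// Shape of the relevant part of a parsed package-lock.json (v2/v3).
interface Lockfile {
  packages?: Record<string, { version?: string }>;
}

// Collect every installed version of the affected package, including
// nested copies under transitive dependencies.
function findInstalled(lock: Lockfile, name: string): string[] {
  const hits: string[] = [];
  for (const [path, meta] of Object.entries(lock.packages ?? {})) {
    if (path.endsWith(`node_modules/${name}`) && meta.version) {
      hits.push(meta.version);
    }
  }
  return hits;
}
```

Running this over each deployed service's lockfile gives a quick inventory of which environments carry the vulnerable dependency at all.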

Recommended Response

  • Confirm whether affected products, models, or integrations are used in your environment.
  • Apply vendor fixes or mitigations and restrict risky permissions until verified.
  • Monitor logs for related indicators and document containment for audit evidence.
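As an interim mitigation while fixes are being verified, filter input can be rejected before it reaches the checkpointer. This is a hedged sketch, not the package's API: the function name and character set are assumptions, and the set should be kept in sync with RediSearch's documented syntax characters.

```typescript
// Syntax characters that could alter RediSearch query semantics (illustrative).
const REDISEARCH_SYNTAX = /[,.<>{}\[\]"':;!@#$%^&*()\-+=~|\/\\ ]/;

// Reject any filter entry whose key or value contains a syntax character,
// so user-controlled data cannot rewrite the query before the patched
// release is in place.
function assertSafeFilter(filter: Record<string, string>): void {
  for (const [key, value] of Object.entries(filter)) {
    if (REDISEARCH_SYNTAX.test(key) || REDISEARCH_SYNTAX.test(value)) {
      throw new Error(`filter entry "${key}" contains RediSearch syntax characters`);
    }
  }
}
```

Rejection is deliberately stricter than escaping: it may refuse some legitimate values, but it fails closed until the vendor fix is deployed and verified.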

Compliance And Business Impact

Even when exploit details are still emerging, delayed triage can widen operational and compliance exposure around AI systems.

Sources
