ASR AI Security Radar


AI security incident: New API has an SQL LIKE Wildcard Injection DoS via Token Search (GHSA-w6x6-9fp7-fqm4)

Incident date: February 23, 2026 | Published: February 25, 2026 | Source: GitHub Security Advisory | Classification confidence: 78%

This incident is part of the public archive. AI-specific signals are limited in the current source material, so source citations should be reviewed closely during triage.

Summary

A SQL LIKE wildcard injection vulnerability in the /api/token/search endpoint allows authenticated users to cause denial of service through resource exhaustion by crafting malicious search patterns.

Details

The token search endpoint accepts user-supplied keyword and token parameters that are concatenated directly into SQL LIKE clauses without escaping the wildcard characters % and _. This allows attackers to inject patterns that trigger expensive database queries.

Vulnerable Code

File: model/token.go:70

```go
err = DB.Where("user_id = ?", userId).
	Where("name LIKE ?", "%"+keyword+"%").        // No wildcard escaping
	Where(commonKeyCol+" LIKE ?", "%"+token+"%"). // No wildcard escaping
	Find(&tokens).Error
```
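A common remediation for this class of bug, sketched below as an assumption rather than the project's actual patch, is to escape the LIKE metacharacters in user input before building the pattern, so the input matches literally. (This sketch assumes the backend treats backslash as the LIKE escape character, as MySQL does by default; other databases may need an explicit ESCAPE clause.)

```go
package main

import (
	"fmt"
	"strings"
)

// escapeLike escapes SQL LIKE metacharacters so user input is matched
// literally instead of being interpreted as a wildcard pattern.
// The backslash is escaped first, then % and _.
func escapeLike(s string) string {
	r := strings.NewReplacer(`\`, `\\`, `%`, `\%`, `_`, `\_`)
	return r.Replace(s)
}

func main() {
	keyword := "%_%" // attacker-controlled search term
	// The escaped pattern matches the literal string "%_%" only.
	fmt.Println("%" + escapeLike(keyword) + "%")
}
```

With a helper like this, the vulnerable query would pass `"%"+escapeLike(keyword)+"%"` instead of the raw concatenation, preserving substring search while neutralizing injected wildcards.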

Why This Is AI-Related

This advisory is part of the public incident archive, but the current source material uses limited explicit AI terminology. Use the cited advisory to validate AI relevance, scope, and exposure during triage.

Affected Workflow

The vulnerable component is the /api/token/search endpoint. Review any deployment that exposes it, along with inference endpoints, parsing layers, queues, and file processing jobs that share the same database and support AI features.

Likely Attack Path

An authenticated attacker submits search terms packed with unescaped % and _ wildcards, forcing the database to evaluate expensive LIKE scans. Repeated crafted requests drive resource exhaustion and can degrade or crash the service.

Impact

The advisory describes an availability or resource-exhaustion path that can disrupt AI-serving components and supporting automation. Severity: HIGH. Classification confidence: 78%. Source channel: GHSA.

Detection And Triage Signals

  • Latency spikes or worker restarts on AI-serving endpoints
  • Memory or CPU saturation after malformed requests or artifacts
  • Queue backlogs, timeouts, or repeated crash loops in model services

Recommended Response

  • Identify inference endpoints, parsing jobs, or queues that rely on the affected component.
  • Apply vendor mitigations and add rate, size, or input controls to reduce exhaustion risk during triage.
  • Monitor latency, restart frequency, queue backlog, and saturation indicators for active disruption.

Compliance And Business Impact

Availability failures can interrupt customer-facing AI features and force emergency rollback or capacity isolation.

Sources

