ASR AI Security Radar


AI security incident: ImageMagick: Policy bypass through path traversal allows reading restricted content d...

Incident date: February 24, 2026 | Published: February 25, 2026 | Source: GitHub Security Advisory | Classification confidence: 45%

This incident is part of the public archive. AI-specific signals are limited in the current source material, so source citations should be reviewed closely during triage.

ImageMagick’s path security policy is enforced on the raw filename string before the filesystem resolves it. As a result, a policy rule such as /etc/* can be bypassed by path traversal: the OS resolves the traversal and opens the sensitive file, while the policy matcher only sees the unnormalized path and therefore allows the read. This enables local file disclosure (LFI) even when policy-secure.xml is applied. Per the advisory, reads from restricted files are now blocked; to ensure writes are also blocked, an additional rule (not reproduced in this summary) should be added to the policy, and that rule will also be included in the project's more secure default policies.
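The mismatch can be shown with a short sketch. This is illustrative Python, not ImageMagick's actual C implementation; the deny pattern and base directory are assumptions. Matching the policy glob against the raw string misses a traversal that the filesystem will resolve into the restricted path, while canonicalizing first closes the gap.

```python
import fnmatch
import os.path

POLICY_DENY = "/etc/*"  # hypothetical path-policy pattern for this scenario

def allowed_naive(path: str) -> bool:
    # Vulnerable behavior: match the policy against the raw filename string.
    return not fnmatch.fnmatch(path, POLICY_DENY)

def allowed_fixed(path: str, base: str = "/var/www") -> bool:
    # Fixed behavior: canonicalize the path first, then match the policy.
    resolved = os.path.normpath(os.path.join(base, path))
    return not fnmatch.fnmatch(resolved, POLICY_DENY)

traversal = "../../../etc/passwd"
print(allowed_naive(traversal))   # True: the raw string never matches "/etc/*"
print(allowed_fixed(traversal))   # False: resolves to "/etc/passwd" and is denied
```

The same flaw pattern appears in any policy engine that filters on user-supplied strings before path resolution; the fix is always to canonicalize, then match.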

Why This Is AI-Related

This advisory is part of the public incident archive, but the current source material uses limited explicit AI terminology, so the cited sources should be reviewed carefully when judging AI relevance and exposure.

  • Explicit AI-specific signals are limited in the current source material, so use the cited advisory to validate scope during triage.

Affected Workflow

Review connectors, retrieval plugins, webhook targets, file access paths, and outbound network policies around AI services.

Likely Attack Path

The flaw creates an unauthorized path to fetch, read, or exfiltrate sensitive data from connected systems or local files.

Impact

The flaw can expose internal data, local files, or connected systems through AI workflow connectors and supporting services. Severity: HIGH | Classification confidence: 45% | Source channel: GHSA.

Detection And Triage Signals

  • Unexpected outbound requests from AI application components
  • Access to internal metadata endpoints, local files, or restricted datasets
  • Downloads or responses that contain internal documents, secrets, or embeddings
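One quick check consistent with these signals is to flag logged filenames whose normalized form escapes the expected directory. This is a hypothetical triage helper; the upload root and sample paths are placeholders, not values from the advisory.

```python
import os.path

def escapes_root(candidate: str, root: str = "/srv/uploads") -> bool:
    # Flag request paths whose normalized form leaves the expected
    # directory, e.g. "../../etc/passwd" resolving to "/etc/passwd".
    resolved = os.path.normpath(os.path.join(root, candidate))
    return not (resolved == root or resolved.startswith(root + os.sep))

paths = ["cat.png", "../../etc/passwd", "a/../b.png"]
suspicious = [p for p in paths if escapes_root(p)]
print(suspicious)  # ['../../etc/passwd']
```

Note that "a/../b.png" is not flagged: it contains a traversal sequence but still resolves inside the root, so normalizing before checking avoids false positives that a naive substring search for ".." would produce.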

Recommended Response

  • Confirm whether the affected component can reach internal metadata, local files, or connected data stores.
  • Restrict outbound requests and sensitive data access paths until a patch or mitigation is in place.
  • Inspect logs for unusual downloads, webhook calls, retrieval requests, or responses containing internal content.
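As an interim mitigation, ImageMagick's policy.xml supports path-domain rules. The snippet below shows the documented rule shape with one commonly recommended entry; the exact write-blocking pattern referenced in the advisory is not reproduced in this summary, so treat this as an example of the mechanism, not the official fix.

```xml
<policymap>
  <!-- Deny indirect filename reads via the @/path syntax (documented hardening rule). -->
  <policy domain="path" rights="none" pattern="@*"/>
</policymap>
```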

Compliance And Business Impact

Data exposure creates direct confidentiality risk and can trigger incident notification, contractual, and regulatory obligations.

