ASR AI Security Radar


AI security incident: Fabric.js Affected by Stored XSS via SVG Export (GHSA-hfvx-25r5-qc3w)

Incident date: February 18, 2026 | Published: February 25, 2026 | Source: GitHub Security Advisory | Classification confidence: 45%

This incident is part of the public archive. AI-specific signals are limited in the current source material, so source citations should be reviewed closely during triage.

fabric.js applies escapeXml() to text content during SVG export (src/shapes/Text/TextSVGExportMixin.ts:186) but fails to apply it to other user-controlled string values that are interpolated into SVG attribute markup. When attacker-controlled JSON is loaded via loadFromJSON() and later exported via toSVG(), the unescaped values break out of XML attributes and inject arbitrary SVG elements, including event handlers.

Deserialization Path (no sanitization)

loadFromJSON() (src/canvas/StaticCanvas.ts:1229) calls enlivenObjects(), which calls _fromObject() (src/shapes/Object/Object.ts:1902). _fromObject passes all deserialized properties to the shape constructor via new this(enlivedObjectOptions).
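The attribute-breakout mechanism can be sketched in miniature. This is not fabric's actual implementation: the template, the fontFamily attribute, and both exporter functions below are illustrative stand-ins for the pattern the advisory describes, where text content is escaped but an attribute value is interpolated verbatim.

```javascript
// escapeXml mirrors the kind of escaping fabric applies to text content.
function escapeXml(s) {
  return s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Vulnerable pattern: text content is escaped, but the attribute value
// (here, a hypothetical fontFamily) is interpolated without escaping.
function toSVGVulnerable(opts) {
  return `<text font-family="${opts.fontFamily}">${escapeXml(opts.text)}</text>`;
}

// Fixed pattern: escape every user-controlled string, attributes included.
function toSVGFixed(opts) {
  return `<text font-family="${escapeXml(opts.fontFamily)}">${escapeXml(opts.text)}</text>`;
}

const payload = {
  text: 'hello',
  // Closes the attribute and injects an element with an event handler.
  fontFamily: '"><image href=x onerror=alert(1)><text x="',
};
console.log(toSVGVulnerable(payload)); // injected <image onerror=...> appears as markup
console.log(toSVGFixed(payload));      // quotes and angle brackets are encoded; markup stays inert
```

The fixed variant shows why escaping must cover every interpolation site, not only element text: a single unescaped attribute is enough for a full breakout.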

Why This Is AI-Related

This advisory is included in the public incident archive, but the source material contains few explicit AI-specific signals, so the cited sources should be reviewed carefully when judging AI relevance and exposure.

  • Explicit AI-specific signals are limited in the current source material; validate scope against the cited advisory during triage.

Affected Workflow

Model registries, artifact scanners, notebook workflows, and CI/CD steps that handle serialized objects from the affected package need immediate review.

Likely Attack Path

The malicious payload is embedded in model artifacts or serialized objects (here, attacker-controlled canvas JSON), then executes or evades scanning when those artifacts are loaded and inspected.
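An attacker-controlled document along these lines could carry the breakout. The property names below are assumptions for illustration, not taken from the advisory; the key point is that the payload rides in an ordinary-looking string field that the vulnerable exporter later interpolates into an SVG attribute unescaped.

```javascript
// Illustrative shape of attacker-controlled serialized canvas JSON
// (property names are assumptions, not quoted from the advisory).
const maliciousJson = JSON.stringify({
  version: '6.0.0',
  objects: [
    {
      type: 'Text',
      text: 'harmless looking label',
      // Closes the surrounding attribute, then injects an element whose
      // event handler fires when the exported SVG is rendered.
      fontFamily: '"></text><image href=x onerror="alert(document.domain)"><text font-family="',
    },
  ],
});

// In vulnerable versions, loading this via loadFromJSON() and exporting
// via toSVG() would emit the injected <image onerror=...> element.
const parsed = JSON.parse(maliciousJson);
console.log(parsed.objects[0].fontFamily);
```

Because the string survives JSON round-tripping untouched, scanners that only validate JSON well-formedness will not flag it.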

Impact

The advisory affects serialized assets that can bypass inspection or execute script during load and validation steps. Severity: HIGH. Classification confidence: 45%. Source channel: GHSA.

Detection And Triage Signals

  • New or unsigned model artifacts entering the registry
  • Scanner output gaps for pickle or custom model formats
  • Unexpected code paths during model loading or validation jobs
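For the signals above, one concrete triage check is to walk untrusted serialized documents and flag string values containing XML metacharacters or inline event-handler syntax before they reach a deserializer. This is a sketch of such a check, not an official scanner; the heuristic regex is an assumption and will produce false positives on legitimate markup-like strings.

```javascript
// Flags string values that could break out of XML attributes or element
// content: quotes, angle brackets, or on*= event-handler patterns.
const SUSPICIOUS = /[<>"']|on\w+\s*=/i;

function findSuspiciousStrings(value, path = '$') {
  const hits = [];
  if (typeof value === 'string') {
    if (SUSPICIOUS.test(value)) hits.push({ path, value });
  } else if (value && typeof value === 'object') {
    // Covers both arrays (numeric keys) and plain objects.
    for (const [key, child] of Object.entries(value)) {
      hits.push(...findSuspiciousStrings(child, `${path}.${key}`));
    }
  }
  return hits;
}

const doc = { objects: [{ text: 'ok', fontFamily: '" onload="alert(1)' }] };
const hits = findSuspiciousStrings(doc);
console.log(hits); // one hit, at $.objects.0.fontFamily
```

Quarantining flagged documents for manual review is usually preferable to auto-rejecting them, given the false-positive rate of a character-class heuristic.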

Recommended Response

  • Inventory model artifacts, serialized objects, and scanners that touch the affected package or workflow.
  • Block untrusted model files and revalidate registry, CI, or notebook loading paths before restoring normal operation.
  • Review artifact provenance, scanner output, and recent model-ingestion activity for suspicious changes.
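Pending an upgrade to a patched release, one stopgap for the loading paths above is to XML-escape every string value in untrusted JSON before it reaches the deserializer. The sketch below assumes that workflow; it is not an official fabric.js API, and escaping at this layer can corrupt legitimate strings or double-escape values the library escapes itself, so upgrading remains the real fix.

```javascript
// Standard XML entity escaping for the five metacharacters.
function escapeXml(s) {
  return s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Deep-copies a parsed JSON value, escaping every string on the way.
function sanitizeDeep(value) {
  if (typeof value === 'string') return escapeXml(value);
  if (Array.isArray(value)) return value.map(sanitizeDeep);
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, sanitizeDeep(v)])
    );
  }
  return value; // numbers, booleans, null pass through unchanged
}

const untrusted = { objects: [{ fontFamily: '" onload="alert(1)' }] };
const safe = sanitizeDeep(untrusted);
console.log(safe.objects[0].fontFamily); // → &quot; onload=&quot;alert(1)
```

Applied just before loadFromJSON(), this neutralizes attribute-breakout payloads while leaving structure, numbers, and booleans intact.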

Compliance And Business Impact

Model artifact compromise undermines trust in the training and deployment chain and can create stealthy persistence in ML workflows.

