ASR AI Security Radar


AI security incident: Picklescan (scan_pytorch) Bypass via dynamic eval MAGIC_NUMBER (GHSA-97f8-7cmv-76j2)

Incident date: February 18, 2026 | Published: February 25, 2026 | Source: GitHub Security Advisory | Classification confidence: 45%

This incident is part of the public archive and includes explicit AI-related signals from the cited source material. Review methodology.

Summary

This is a scanning bypass of the scan_pytorch function in picklescan. As shown in the implementation of [get_magic_number()](https://github.com/mmaitre314/picklescan/blob/2a8383cfeb4158567f9770d86597300c9e508d0f/src/picklescan/torch.py#L76C5-L84), picklescan uses pickletools.genops(data) to extract the magic number, taking the argument of the first opcode whose name contains INT or LONG. PyTorch's implementation, by contrast, simply calls [pickle_module.load()](https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/serialization.py#L1797) to read the magic number. Because of this implementation difference, an attacker can embed the magic number in a PyTorch file via a dynamic eval using the \_\_reduce\_\_ trick, so that pickle_module.load() still yields the expected value while the pickletools opcode walk does not, allowing the scan to be bypassed.
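The divergence between the two extraction methods can be illustrated with a minimal, benign sketch. The class name, helper function, and the use of 0x1950a86a20f9469cfc6c as PyTorch's magic number are assumptions for illustration, not picklescan's actual code: a pickle whose \_\_reduce\_\_ produces the magic number through eval contains no INT or LONG opcode, so an opcode walk in the style of get_magic_number() finds nothing, while unpickling still returns the expected value.

```python
import pickle
import pickletools

# Hypothetical illustration of the __reduce__ trick: unpickling this
# object calls eval(), which computes the magic number at load time
# instead of storing it as an INT/LONG opcode in the stream.
class MagicViaEval:
    def __reduce__(self):
        # 0x1950a86a20f9469cfc6c is assumed here to be PyTorch's magic
        # number; it is hidden inside a string that eval() parses at load.
        return (eval, ("0x1950a86a20f9469cfc6c",))

data = pickle.dumps(MagicViaEval())

def opcode_walk_magic_number(raw: bytes):
    """Scanner-style extraction: first INT/LONG opcode argument, if any."""
    for opcode, arg, _pos in pickletools.genops(raw):
        if "INT" in opcode.name or "LONG" in opcode.name:
            return int(arg)
    return None  # no INT/LONG opcode anywhere in the stream

# Loader-style extraction: actually unpickle, which runs eval().
loaded = pickle.loads(data)

print(opcode_walk_magic_number(data))  # None: the opcode walk sees nothing
print(hex(loaded))                     # 0x1950a86a20f9469cfc6c
```

The two paths disagree: the static opcode walk finds no candidate magic number at all, while the dynamic load path produces exactly the value a loader would accept.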

Why This Is AI-Related

This page is treated as AI-specific because the source material references copilot and picklescan, which place the issue inside an AI workflow, model, assistant, or supporting dependency rather than a generic software bulletin.

  • copilot
  • picklescan

Affected Workflow

Model registries, artifact scanners, notebook workflows, and CI/CD steps that handle model files need immediate review.

Likely Attack Path

The malicious payload is embedded in model artifacts or serialized objects, then executes or bypasses scanning during load and inspection.

Impact

The issue can create a path to command execution inside an AI-facing product, plugin, copilot, or supporting service runtime. Severity: HIGH. Classification confidence: 45%. Source channel: GHSA.

Detection And Triage Signals

  • New or unsigned model artifacts entering the registry
  • Scanner output gaps for pickle or custom model formats
  • Unexpected code paths during model loading or validation jobs

Recommended Response

  • Identify every environment that runs the affected AI plugin, assistant, CLI, or supporting package.
  • Patch or isolate the vulnerable component and remove risky execution permissions while validation is in progress.
  • Review process execution, outbound connections, and file-write logs for signs of post-exploitation activity.

Compliance And Business Impact

Model artifact compromise undermines trust in the training and deployment chain and can create stealthy persistence in ML workflows.
