ASR AI Security Radar


AI security incident: Fickling: OBJ opcode call invisibility bypasses all safety checks (GHSA-mxhj-88fx-4pcv)

Incident date: February 24, 2026 | Published: February 25, 2026 | Source: GitHub Security Advisory | Classification confidence: 64%

This incident is part of the public archive and includes explicit AI-related signals from the cited source material. Review methodology.

Assessment

The fix changes fickling's interpreter so it behaves closer to CPython when handling the OBJ, NEWOBJ, and NEWOBJ_EX opcodes (https://github.com/trailofbits/fickling/commit/ff423dade2bb1f72b2b48586c022fac40cbd9a4a).

Original report summary: All five of fickling's safety interfaces -- is_likely_safe(), check_safety(), the CLI --check-safety flag, always_check_safety(), and the check_safety() context manager -- report LIKELY_SAFE and raise no exceptions for pickle files that use the OBJ opcode to call dangerous stdlib functions (signal handlers, network servers, network connections, file operations). The OBJ opcode's implementation in fickling pushes function calls directly onto the interpreter stack without persisting them to the AST via new_variable().
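To make the mechanism concrete, here is a minimal, hand-built protocol-0 pickle that uses the OBJ opcode to invoke an arbitrary callable at load time. This is an illustration of the opcode's semantics, not fickling's code; a harmless callable (builtins.len) stands in for the dangerous stdlib functions named in the report.

```python
import pickle
import pickletools

# Hand-crafted protocol-0 pickle stream:
#   '('            MARK
#   'cbuiltins\nlen\n'  GLOBAL  -> pushes the callable builtins.len
#   "S'abcd'\n"    STRING  -> pushes the argument 'abcd'
#   'o'            OBJ     -> pops back to the mark and calls len('abcd')
#   '.'            STOP
payload = b"(cbuiltins\nlen\nS'abcd'\no."

pickletools.dis(payload)        # disassembly shows the MARK/GLOBAL/STRING/OBJ sequence
result = pickle.loads(payload)  # OBJ invokes len('abcd') during load
print(result)                   # -> 4
```

A malicious file would substitute a dangerous callable (e.g. a file, network, or signal function) for len; per the advisory, the resulting call never reached fickling's AST, so all five safety interfaces reported the file as LIKELY_SAFE.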

Why This Is AI-Related

This page is treated as AI-specific because the source material references fickling, which places the issue inside an AI workflow, model, assistant, or supporting dependency rather than a generic software bulletin.

  • fickling

Affected Workflow

Model registries, artifact scanners, notebook workflows, and CI/CD steps that handle model files need immediate review.

Likely Attack Path

The malicious payload is embedded in model artifacts or serialized objects, then executes or bypasses scanning during load and inspection.

Impact

The vulnerability affects model artifacts and serialized AI assets, which can bypass inspection or execute code during load and validation steps. Severity: HIGH. Classification confidence: 64%. Source channel: GHSA.

Detection And Triage Signals

  • New or unsigned model artifacts entering the registry
  • Scanner output gaps for pickle or custom model formats
  • Unexpected code paths during model loading or validation jobs
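Until a patched scanner is deployed, one stopgap triage heuristic is to flag any pickle stream that contains the OBJ-family opcodes named in the advisory. The sketch below uses only the stdlib pickletools module and is an assumption-laden heuristic (OBJ opcodes also appear in some legitimate old-protocol pickles), not a replacement for a fixed safety checker.

```python
import pickletools

# Opcodes named in the advisory as the affected call paths.
SUSPECT_OPCODES = {"OBJ", "NEWOBJ", "NEWOBJ_EX"}

def flag_obj_opcodes(data: bytes) -> list[str]:
    """Return the names of OBJ-family opcodes found in a pickle stream.

    A coarse triage signal: a hit means the file constructs objects via
    the opcode family the advisory covers and deserves manual review.
    """
    found = []
    for opcode, _arg, _pos in pickletools.genops(data):
        if opcode.name in SUSPECT_OPCODES:
            found.append(opcode.name)
    return found

# A pickle that calls a callable via OBJ is flagged; a plain list is not.
print(flag_obj_opcodes(b"(cbuiltins\nlen\nS'abcd'\no."))  # -> ['OBJ']
print(flag_obj_opcodes(b"(lp0\nI1\na."))                  # -> []
```

Because this inspects the byte stream without executing it, it is safe to run against untrusted artifacts in a registry or CI gate.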

Recommended Response

  • Inventory model artifacts, serialized objects, and scanners that touch the affected package or workflow.
  • Block untrusted model files and revalidate registry, CI, or notebook loading paths before restoring normal operation.
  • Review artifact provenance, scanner output, and recent model-ingestion activity for suspicious changes.

Compliance And Business Impact

Model artifact compromise undermines trust in the training and deployment chain and can create stealthy persistence in ML workflows.

