ASR AI Security Radar

AI security incident: yt-dlp: Arbitrary Command Injection when using the `--netrc-cmd` option (GHSA-g3gw-q2...

Incident date: February 23, 2026 | Published: February 25, 2026 | Source: GitHub Security Advisory | Classification confidence: 80%

This incident is part of the public archive and includes explicit AI-related signals from the cited source material. Review methodology.

Summary

When yt-dlp's --netrc-cmd command-line option (or the netrc_cmd Python API parameter) is used, an attacker could achieve arbitrary command injection on the user's system with a maliciously crafted URL.

Impact

The yt-dlp maintainers assume the impact of this vulnerability to be high for anyone who uses --netrc-cmd in their command/configuration or netrc_cmd in their Python scripts. Even though the maliciously crafted URL itself will look very suspicious to many users, it would be trivial for a maliciously crafted webpage with an inconspicuous URL to covertly exploit this vulnerability via HTTP redirect. Users without --netrc-cmd in their arguments or netrc_cmd in their scripts are unaffected.
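The advisory excerpt above does not spell out the injection mechanism, so the following is only a hedged sketch of the general bug class it describes: a credential-helper command assembled as a shell string with an attacker-influenced hostname spliced in. The function names, the {host} placeholder, and the template convention are illustrative assumptions, not yt-dlp's actual API.

```python
import subprocess
from urllib.parse import urlparse

# Illustrative only -- not yt-dlp's code. Shows why splicing an
# attacker-influenced hostname into a shell string is dangerous, and a
# safer argument-vector alternative.

def get_netrc_unsafe(netrc_cmd_template, url):
    """UNSAFE: the hostname is interpolated into a string run through a shell,
    so a host containing $(...), backticks, or ; can inject extra commands."""
    host = urlparse(url).hostname or ""
    cmd = netrc_cmd_template.replace("{host}", host)
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def get_netrc_safer(netrc_cmd_argv, url):
    """Safer: the helper is an argument vector (no shell) and the hostname is a
    discrete argument, so shell metacharacters in it are inert data."""
    host = urlparse(url).hostname or ""
    return subprocess.run([*netrc_cmd_argv, host],
                          capture_output=True, text=True).stdout
```

The safer variant treats a hostname containing $(...), backticks, or semicolons as plain data rather than shell syntax, which is the standard mitigation for this class of injection.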

Why This Is AI-Related

This page is treated as AI-specific because the source material references copilot, which places the issue inside an AI workflow, model, assistant, or supporting dependency rather than a generic software bulletin.

  • copilot

Affected Workflow

Review AI plugins, copilots, model-serving helpers, CLI tools, and automation runtimes that execute system commands.

Likely Attack Path

An attacker can turn the vulnerable AI-adjacent component into a path for command execution on the host or service runtime.
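The summary notes that the crafted URL can be hidden behind an HTTP redirect, so the link a user actually sees may look harmless. As a hedged, standalone triage idea (not a yt-dlp feature), one option is to resolve redirects first and flag final hostnames containing shell metacharacters before the URL reaches any tool configured with a credential command; the metacharacter set below is an assumption to tune.

```python
import re
import urllib.request
from urllib.parse import urlparse

# Illustrative pre-flight check, not part of yt-dlp: follow redirects with a
# HEAD request and flag final hostnames that contain shell metacharacters.

SUSPICIOUS = re.compile(r"[`$;|&<>(){}\s]")

def final_host(url, timeout=10.0):
    """Return the host of the URL after redirects are resolved."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return urlparse(resp.geturl()).netloc or ""

def looks_suspicious(url):
    """True if the post-redirect host contains shell metacharacters."""
    return bool(SUSPICIOUS.search(final_host(url)))
```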

Impact

The issue can create a path to command execution inside an AI-facing product, plugin, copilot, or supporting service runtime. Severity: HIGH. Classification confidence: 80%. Source channel: GHSA.

Detection And Triage Signals

  • New shell or process activity from AI-facing services (a minimal triage sketch follows this list)
  • Unexpected outbound connections or file writes after prompt or API activity
  • Privilege changes, container escapes, or suspicious job execution logs
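As a concrete starting point for the first signal, here is a hedged host-level sweep using the third-party psutil package. The process-name lists are assumptions to tune per environment, and note that a legitimate --netrc-cmd invocation will itself spawn a helper process, so matches need manual review.

```python
import psutil  # third-party; `pip install psutil`

# Hedged triage sketch: flag shell children spawned by processes whose names
# suggest download or AI-facing tooling. WATCHED and SHELLS are illustrative
# assumptions, not a vetted detection rule.

WATCHED = {"yt-dlp", "python", "python3"}
SHELLS = {"sh", "bash", "zsh", "dash", "cmd.exe", "powershell.exe"}

def suspicious_children():
    hits = []
    for proc in psutil.process_iter(["pid", "name"]):
        if proc.info["name"] not in WATCHED:
            continue
        for child in proc.children(recursive=True):
            try:
                if child.name() in SHELLS:
                    hits.append((proc.info["pid"], proc.info["name"],
                                 child.pid, child.name()))
            except psutil.NoSuchProcess:
                continue
    return hits

if __name__ == "__main__":
    for parent_pid, parent, child_pid, child in suspicious_children():
        print(f"{parent} (pid {parent_pid}) spawned {child} (pid {child_pid})")
```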

Recommended Response

  • Identify every environment that runs the affected AI plugin, assistant, CLI, or supporting package (see the inventory sketch after this list).
  • Patch or isolate the vulnerable component and remove risky execution permissions while validation is in progress.
  • Review process execution, outbound connections, and file-write logs for signs of post-exploitation activity.
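To support the first response step, a hedged inventory sketch follows. The search roots are illustrative guesses; replace them with wherever your shell configs, yt-dlp configuration files, cron jobs, and Python scripts actually live.

```python
import os
import re
from pathlib import Path

# Hedged inventory sketch: walk a set of directories and report files that
# mention --netrc-cmd or netrc_cmd, so affected hosts, configs, and scripts
# can be patched or isolated first. ROOTS is illustrative, not exhaustive.

PATTERN = re.compile(r"--netrc-cmd|netrc_cmd")
ROOTS = [Path.home() / ".config", Path("/etc")]

def find_usages(roots=ROOTS):
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda e: None):
            for name in filenames:
                path = Path(dirpath) / name
                try:
                    text = path.read_text(errors="ignore")
                except OSError:
                    continue
                if PATTERN.search(text):
                    yield path

if __name__ == "__main__":
    for hit in find_usages():
        print(hit)
```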

Compliance And Business Impact

Code execution paths create immediate risk of host compromise, credential theft, and downstream lateral movement.

Sources
