Methodology & Editorial Policy
Last updated: March 7, 2026
AI Security Radar publishes practical AI security incident pages and keeps the full published archive visible. This page explains how incidents are selected, how public pages are published, and how correction decisions are made.
What We Monitor
- Trusted public advisories such as GitHub Security Advisories, NVD, CISA KEV, CERT, EUVD, and selected vendor feeds.
- AI-specific signals including copilots, assistants, LLM workflows, model artifacts, prompt injection, and AI-relevant dependencies.
- Source material that helps operators validate exposure quickly instead of relying on rewritten summaries alone.
What Counts As AI-Relevant
Not every software advisory is labeled an AI incident. Public pages stay focused on issues that affect AI applications, assistants, models, agent workflows, or supporting components commonly used in AI delivery.
How Published Pages Are Indexed
- Published incident pages remain indexable and stay in the sitemap.
- AI relevance, summary depth, and source coverage are reviewed on an ongoing basis to prioritize editorial fixes and corrections.
- If a page needs improvement, it is updated, merged, or corrected rather than silently hidden from the archive.
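Because published pages stay in the sitemap, each incident page corresponds to a standard sitemap entry. For illustration only, an entry following the sitemaps.org protocol might look like this (the path and date are hypothetical, not actual site URLs):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- Hypothetical incident page URL; corrected pages keep their entry
         rather than being silently dropped from the sitemap. -->
    <loc>https://aisecurityradar.com/incidents/example-incident</loc>
    <lastmod>2026-03-07</lastmod>
  </url>
</urlset>
```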
How Public Incident Pages Are Written
Source material is normalized, summarized, and structured into sections covering likely impact, affected workflows, detection signals, and first-response actions. Where deterministic fallbacks fill gaps in the source material, they are kept practical and avoid inflated claims.
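The normalize-summarize-structure flow described above can be sketched in a few lines. This is a minimal illustration under assumed names; `SECTION_ORDER`, `normalize`, and `build_sections` are hypothetical, not the site's real code, and the fallback text stands in for whatever deterministic wording is actually used:

```python
# Fixed section layout for a published incident page (illustrative names).
SECTION_ORDER = ["impact", "affected_workflows", "detection_signals", "first_response"]

def normalize(raw: str) -> str:
    # Collapse whitespace; a real pipeline would also strip markup and
    # unify advisory fields from the different source feeds.
    return " ".join(raw.split())

def build_sections(extracted: dict) -> dict:
    # Deterministic fallback: a missing section gets a practical pointer
    # back to the sources rather than an invented claim.
    fallback = "Not established by current sources; see linked advisories."
    return {name: extracted.get(name, fallback) for name in SECTION_ORDER}

page = {
    "summary": normalize("  Example:  prompt injection in an assistant plugin  "),
    **build_sections({"impact": "Prompt injection may expose connected data stores."}),
}
```

The key design point this sketches is that the section layout is fixed, so a page never silently omits a section; gaps are visible and labeled instead of papered over.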
Corrections And Removals
If a page is later found to have weak AI relevance, duplicate coverage, or insufficient detail, it may be corrected, merged with newer coverage, or removed if the underlying evidence changes materially. Correction requests can be sent to security@aisecurityradar.com.