Microsoft Security Blog
2026-03-12
Microsoft describes how hidden instructions in content can influence AI tool behavior and uses a scenario to illustrate prompt injection. The post emphasizes the need for human oversight and a structured response playbook.
2026-02-09
The provided excerpt is only a short teaser and does not include the article’s technical details, findings, or mitigations.
Azure Updates
2025-11-18
Microsoft is previewing agent-level guardrails in the Foundry Control Plane (formerly content filters) that let you apply and customize safety controls per agent to help mitigate prompt-injection and other risks.