Microsoft Security
Giorgio Severi

Giorgio Severi posts

Giorgio is a Senior AI Safety Researcher at Microsoft, working on the AI Red Team to assess the security and safety of large, multimodal, and agentic AI systems. His work spans multiple areas of adversarial machine learning, with a particular focus on risks related to poisoning and long-term memory.


  • Detecting backdoored language models at scale

    We’re releasing new research on detecting backdoors in open-weight language models and highlighting a practical scanner designed to detect backdoored models at scale and improve overall trust in AI systems.