Reducing privacy leaks in AI: Two approaches to contextual integrity
New research explores two ways to give AI agents stronger privacy safeguards grounded in contextual integrity. One adds lightweight, inference-time checks; the other builds contextual awareness directly into models through reasoning and reinforcement learning (RL).
CHERI-Lite for Memory Safety Exploit Mitigation
Designing safe digital systems for the humanitarian sector
Speaker: Carmela Troncoso. Host: Betül Durak. In this talk, we give an overview of our collaboration with the International Committee of the Red Cross, in which we help them digitize their aid distribution process without increasing risks…
BlueCodeAgent: A blue-teaming agent enabled by automated red teaming for CodeGen AI
BlueCodeAgent is an end-to-end blue-teaming framework that boosts code security by using automated red-teaming processes, data, and safety rules to guide LLMs’ defensive decisions. Its dynamic testing reduces false positives in vulnerability detection.
Knowledge-Coin Fair Exchange
Fair exchange has been studied in computer science for decades. The problem is to enable two participants to exchange digital information in a way that is fair, even when one party may be malicious. The…
RedCodeAgent: Automatic red-teaming agent against diverse code agents
Code agents help streamline software development workflows, but may also introduce critical security risks. Learn how RedCodeAgent automates and improves “red-teaming” attack simulations to help uncover real-world threats that other methods overlook.
Applied Scientist – Azure CXP Data & Applied Sciences
In the new era of AI, this role on the Azure CXP Data & Applied Sciences team will give you the opportunity to work on cutting-edge GenAI and machine learning (ML) solutions that drive specific, measurable,…