Research talk: Differentially private fine-tuning of large language models
We have come a long way in terms of protecting privacy when training ML models, particularly with large language models. We recently demonstrated that using differentially private stochastic gradient descent (DP-SGD) to fine-tune very large…
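The teaser names DP-SGD but does not spell out the mechanism. As background, the core of DP-SGD is to clip each example's gradient to a fixed norm and add calibrated Gaussian noise to the averaged update; the sketch below is a minimal NumPy illustration of that step (function and parameter names are illustrative, not from the talk).

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One DP-SGD step: clip each per-example gradient to clip_norm,
    average, add Gaussian noise scaled by noise_multiplier, then descend."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping threshold.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise standard deviation is proportional to the clipping norm,
    # divided by the batch size (noise is added to the mean gradient).
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=avg.shape)
    return params - lr * (avg + noise)
```

With `noise_multiplier=0` this reduces to plain SGD on clipped gradients, which makes the clipping behavior easy to check in isolation.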
Lightning talks: Responsible AI: The challenge of big models
The scale of AI models is continuously increasing. These big models bring new challenges to responsible AI areas such as privacy protection, model interpretability, ethical natural language generation, and evaluation of application maturity. In this…
Research talks: An investigation into user expectations for differential privacy
Differential privacy is widely regarded as a gold standard for privacy-preserving computation over users’ data. A key challenge with differential privacy is that its mathematical sophistication can be difficult to communicate to users, leaving them…
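For readers unfamiliar with the formal guarantee being communicated, the standard definition (not part of this talk's abstract) is: a randomized mechanism $M$ is $\varepsilon$-differentially private if for all datasets $D, D'$ differing in one individual's record and all measurable output sets $S$,

$$
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S].
$$

Intuitively, no single person's data can change the distribution of outputs by more than a factor of $e^{\varepsilon}$, so an observer learns roughly the same thing whether or not that person participated.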
Microsoft Research 2022 Global PhD Fellowship Recipients
Meet some of the 2022 Microsoft Research PhD Fellowship recipients from around the world. The Microsoft Research PhD Fellowship is a global program that identifies and empowers the next generation of exceptional computing research talent. Microsoft…
Working Towards a Plural Public via Common Knowledge and Designated Verifier Proofs
Often-cited virtues of blockchains are immutability, transparency, decentralization, openness, and trustlessness. Many also think of their most natural applications as financial. Yet a critical affordance of such systems is often missed: the way they…
SIKE Channels – zero value side-channel attacks on SIKE
We present new side-channel attacks on SIKE. Previous work showed that SIKE is vulnerable to differential power analysis and pointed to coordinate randomization as an effective countermeasure. We show that coordinate randomization alone is…