Differential privacy is a notion of confidentiality that permits useful computations on sensitive data while protecting the privacy of individuals. Proving differential privacy is a difficult and error-prone task that calls for principled approaches and tool support. Approaches based on linear types and static analysis have recently emerged; however, an increasing number of programs achieve privacy using techniques that fall outside their scope. Examples include programs that aim for weaker, approximate differential privacy guarantees, and programs that achieve differential privacy without using any standard mechanism. Providing support for reasoning about the privacy of such programs has been an open problem.

We report on CertiPriv, a machine-checked framework for reasoning about differential privacy built on top of the Coq proof assistant. The central component of CertiPriv is a quantitative extension of probabilistic relational Hoare logic that enables one to derive differential privacy guarantees for programs from first principles. We demonstrate the applicability of CertiPriv on a number of examples whose formal analysis is beyond the reach of previous techniques. In particular, we provide the first machine-checked proofs of correctness of the Laplacian, Gaussian, and Exponential mechanisms, and of the privacy of randomized and streaming algorithms from the literature.
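To give a flavor of the kind of mechanism these proofs certify, the following is a minimal sketch of the Laplace mechanism in plain Python (an illustration only, not CertiPriv's Coq development): a numeric query answer is released with ε-differential privacy by adding Laplace noise whose scale is the query's sensitivity divided by ε.

```python
import random

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Release a numeric query answer with epsilon-differential privacy
    by perturbing it with Laplace noise of scale sensitivity / epsilon."""
    scale = sensitivity / epsilon
    # A Laplace(0, b) sample is the difference of two Exp(1/b) samples.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_answer + noise

# Example: a counting query has sensitivity 1 (adding or removing one
# individual changes the count by at most 1), so its private release is
# laplace_mechanism(count, 1.0, epsilon).
```

Smaller values of ε give stronger privacy at the cost of noisier answers; the paper's machine-checked proof establishes that this noise calibration indeed yields the claimed ε-differential-privacy guarantee.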