Abstract
We propose a natural relaxation of differential privacy based on the Renyi divergence. Closely related notions have appeared in several recent papers that analyzed composition of differentially private mechanisms. We argue that the useful analytical tool can be used as a privacy definition, compactly and accurately representing guarantees on the tails of the privacy loss. We demonstrate that the new definition shares many important properties with the standard definition of differential privacy, while additionally allowing tighter analysis of composite heterogeneous mechanisms.
Summary
This paper introduces Renyi differential privacy (RDP) as a principled relaxation of standard differential privacy that tracks privacy loss through the lens of Renyi divergence. The core definition (Definition 4) states that a randomized mechanism f satisfies (alpha, epsilon)-RDP if for any adjacent inputs D, D', the Renyi divergence of order alpha satisfies D_alpha(f(D) || f(D')) <= epsilon. The Renyi divergence of order alpha > 1 between distributions P and Q is defined as D_alpha(P || Q) = 1/(alpha - 1) * log E_{x ~ Q} [(P(x)/Q(x))^alpha]. The key insight is that while standard (epsilon, delta)-DP captures a single point on a privacy curve, RDP captures the entire curve parameterized by alpha, from which any desired (epsilon, delta) guarantee can be extracted.
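This definition is easy to sanity-check numerically for discrete distributions. The helper below is my own sketch (the function name is illustrative, not code from the paper):

```python
import math

def renyi_divergence(p, q, alpha):
    """D_alpha(P || Q) = 1/(alpha - 1) * log E_{x~Q}[(P(x)/Q(x))^alpha]
    for discrete distributions given as probability vectors, alpha > 1."""
    s = sum(qi * (pi / qi) ** alpha for pi, qi in zip(p, q))
    return math.log(s) / (alpha - 1)

p, q = [0.6, 0.4], [0.4, 0.6]
d2 = renyi_divergence(p, q, 2.0)  # order-2 divergence between P and Q
```

As expected, the divergence is zero between identical distributions and nondecreasing in alpha, which is what makes the alpha-parameterized curve meaningful.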
The paper establishes that RDP inherits the essential properties of standard DP: closure under post-processing, adaptive sequential composition (Proposition 1), group privacy (Proposition 2), and robustness to auxiliary information. Crucially, the composition theorem for RDP is especially clean: if f satisfies (alpha, epsilon_1)-RDP and g satisfies (alpha, epsilon_2)-RDP, then their composition satisfies (alpha, epsilon_1 + epsilon_2)-RDP — the epsilon parameters simply add for any fixed alpha. This eliminates the combinatorial explosion of parameters that plagues advanced composition in the (epsilon, delta) framework.
The conversion from RDP to standard (epsilon, delta)-DP (Proposition 3) provides the bridge that makes RDP practically useful: an (alpha, epsilon)-RDP mechanism also satisfies (epsilon + log(1/delta)/(alpha-1), delta)-differential privacy for any 0 < delta < 1. This conversion lemma is used as the final step in Koloskova et al.’s certified unlearning proofs, where the privacy analysis is conducted in the RDP framework for tightness and then converted to (epsilon, delta)-guarantees for interpretability. The Gaussian mechanism analysis is particularly elegant under RDP: a Gaussian mechanism with sensitivity 1 and noise sigma satisfies (alpha, alpha/(2*sigma^2))-RDP for all alpha >= 1 (Corollary 3), yielding a straight-line budget curve.
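Putting Proposition 3 and the Gaussian curve together gives the standard accounting recipe: sweep alpha and keep the tightest resulting (epsilon, delta) guarantee. A minimal sketch (function names and the grid are my own choices):

```python
import math

def rdp_to_dp(rdp_eps, alpha, delta):
    """Proposition 3: (alpha, eps)-RDP implies
    (eps + log(1/delta)/(alpha - 1), delta)-DP."""
    return rdp_eps + math.log(1.0 / delta) / (alpha - 1)

def gaussian_rdp(alpha, sigma):
    """Sensitivity-1 Gaussian mechanism: (alpha, alpha/(2 sigma^2))-RDP."""
    return alpha / (2.0 * sigma ** 2)

sigma, delta = 5.0, 1e-5
# Sweep alpha over a grid and keep the smallest converted epsilon.
best_eps = min(rdp_to_dp(gaussian_rdp(a, sigma), a, delta)
               for a in [1 + 0.01 * k for k in range(1, 10000)])
print(best_eps)
```

The optimum balances the linear RDP cost against the log(1/delta)/(alpha - 1) conversion penalty; any single fixed alpha (say alpha = 2) can be far looser.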
Key Contributions
- Definition of Renyi differential privacy as (alpha, epsilon)-RDP using the Renyi divergence
- Clean composition theorem: epsilons add for fixed alpha under adaptive sequential composition
- Conversion lemma from (alpha, epsilon)-RDP to (epsilon + log(1/delta)/(alpha-1), delta)-DP
- Closed-form RDP analysis of the Gaussian mechanism: (alpha, alpha/(2*sigma^2))-RDP
- Closed-form RDP analysis of the Laplace mechanism and randomized response
- Advanced composition theorem proved entirely within the RDP framework (Proposition 4, Corollary 1)
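For the randomized-response contribution, the RDP curve is just the Renyi divergence between the two-point distributions (p, 1-p) and (1-p, p) written out; a quick check of that closed form against the generic definition (my own sketch, not code from the paper):

```python
import math

def renyi_divergence(p, q, alpha):
    # Generic discrete D_alpha(P || Q), alpha > 1.
    s = sum(qi * (pi / qi) ** alpha for pi, qi in zip(p, q))
    return math.log(s) / (alpha - 1)

def rr_rdp(p, alpha):
    # Closed form: two-point divergence of (p, 1-p) vs. (1-p, p).
    s = p ** alpha * (1 - p) ** (1 - alpha) + (1 - p) ** alpha * p ** (1 - alpha)
    return math.log(s) / (alpha - 1)

p, alpha = 0.75, 4.0
```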
Methodology
The paper develops RDP by systematic analogy with standard DP, proving that each property (post-processing, composition, group privacy) has a direct RDP counterpart. The composition proof (Proposition 1) uses the multiplicative structure of Renyi divergence: exp[(alpha-1) * D_alpha(h(D)||h(D'))] factors into integrals over the joint output space that can be bounded by the product of the individual Renyi exponents. The RDP-to-DP conversion (Proposition 3) uses the probability-preservation bound Pr[f(D) in S] <= (e^epsilon * Pr[f(D') in S])^{(alpha-1)/alpha}, combined with a case analysis on whether e^epsilon * Pr[f(D') in S] exceeds delta^{alpha/(alpha-1)}.
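The probability-preservation bound in that proof can be checked exhaustively on a small discrete example by enumerating every event S, with epsilon set to the exact order-alpha divergence (illustrative code, my own sketch):

```python
import math
from itertools import combinations

def renyi_divergence(p, q, alpha):
    # Discrete D_alpha(P || Q), alpha > 1.
    s = sum(qi * (pi / qi) ** alpha for pi, qi in zip(p, q))
    return math.log(s) / (alpha - 1)

p = [0.5, 0.3, 0.2]
q = [0.3, 0.4, 0.3]
alpha = 3.0
eps = renyi_divergence(p, q, alpha)

def bound_holds():
    # Check Pr[S under P] <= (e^eps * Pr[S under Q])^((alpha-1)/alpha)
    # for every nonempty event S over the support.
    idx = range(len(p))
    for r in range(1, len(p) + 1):
        for S in combinations(idx, r):
            ps = sum(p[i] for i in S)
            qs = sum(q[i] for i in S)
            if ps > (math.exp(eps) * qs) ** ((alpha - 1) / alpha) + 1e-12:
                return False
    return True
```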
Key Findings
- Gaussian mechanism with sensitivity 1: (alpha, alpha/(2*sigma^2))-RDP — a linear budget curve in alpha
- Composition of n Gaussian mechanisms with parameter sigma has the RDP curve of a single Gaussian with parameter sigma/sqrt(n)
- RDP composition gives tighter bounds than standard advanced composition for both the Gaussian and Laplace mechanisms across a wide range of parameters
- The RDP framework naturally handles heterogeneous compositions by summing the epsilon values at each alpha
- For the “high privacy” regime (log(1/delta) >= epsilon^2 * n), RDP-based analysis matches or improves on optimal (epsilon, delta)-composition theorems
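The first two findings above can be sanity-checked numerically: integrating the order-alpha moment for a pair of unit-sensitivity Gaussians recovers alpha/(2*sigma^2), and n-fold additivity matches a single Gaussian with parameter sigma/sqrt(n). A quadrature sketch (my own code; the grid covers the integrand's mass for the parameters used here):

```python
import math

def gaussian_renyi(mu, sigma, alpha, lo=-12.0, hi=13.0, n=20000):
    """Trapezoid-rule estimate of D_alpha(N(mu, sigma^2) || N(0, sigma^2))
    via E_{x~Q}[(P(x)/Q(x))^alpha]; closed form is alpha * mu^2 / (2 sigma^2)."""
    h = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        x = lo + k * h
        q = math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        ratio = math.exp((2 * mu * x - mu * mu) / (2 * sigma ** 2))  # p(x)/q(x)
        w = 0.5 if k in (0, n) else 1.0
        total += w * q * ratio ** alpha * h
    return math.log(total) / (alpha - 1)
```

For the composition claim, n * alpha/(2*sigma^2) = alpha/(2*(sigma/sqrt(n))^2) is an algebraic identity, so the n-fold composition's RDP curve is exactly that of one Gaussian with the smaller parameter.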
Important References
- The Algorithmic Foundations of Differential Privacy
- Deep Learning with Differential Privacy
- The Composition Theorem for Differential Privacy
Atomic Notes
- Renyi differential privacy
- Renyi divergence
- RDP to epsilon-delta DP conversion
- RDP composition theorem