Langevin unlearning is a machine unlearning framework that exploits the connection between noisy gradient descent and Langevin dynamics to provide certified unlearning guarantees. The core insight is that projected noisy gradient descent (PNGD) naturally produces parameter distributions that converge to a unique stationary distribution determined by the dataset. When data is removed, the unlearning process simply continues PNGD on the updated dataset, and the parameter distribution converges to the new stationary distribution.
The framework unifies differential privacy (DP) training and privacy-certified unlearning into a single process. During learning, PNGD with noise standard deviation σ provides DP guarantees. During unlearning, the same PNGD process provides certified unlearning guarantees measured via Rényi divergence. Under strong convexity, the privacy loss decays exponentially in the number of unlearning epochs, at a rate tracked through the log-Sobolev inequality (LSI) constant of the iterate distribution.
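To make the PNGD primitive concrete, here is a minimal single-step sketch. The function name `pngd_step` and the `sqrt(2η)·σ` noise scaling are illustrative assumptions following a standard Langevin discretization; the paper's exact constants may differ. Projection is onto an L2 ball, one common choice of bounded set C.

```python
import numpy as np

def pngd_step(w, grad_fn, eta, sigma, radius, rng):
    """One projected noisy gradient descent (PNGD) step (sketch).

    Update rule, assuming a standard Langevin discretization:
        w <- Proj_C( w - eta * grad_fn(w) + sqrt(2 * eta) * sigma * xi )
    where xi ~ N(0, I) and Proj_C projects onto the L2 ball of the
    given radius. The noise scaling here is an assumption; the
    paper's constants may differ.
    """
    noise = np.sqrt(2.0 * eta) * sigma * rng.standard_normal(w.shape)
    w = w - eta * grad_fn(w) + noise
    norm = np.linalg.norm(w)
    if norm > radius:  # project back onto the ball C
        w = w * (radius / norm)
    return w
```

Iterating this step on a fixed dataset drives the parameter distribution toward the stationary distribution ν_D; swapping in the gradient of the updated dataset is all the unlearning phase changes.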
## Key Details
- Learning: Run PNGD on dataset D for T epochs → converges to stationary distribution ν_D
- Unlearning: Continue PNGD from current parameters on updated dataset D' for K epochs → converges toward ν_{D'}
- Privacy guarantee: (α,ε)-Rényi unlearning, where ε decays by a factor exp(-2σ²η/(αC_LSI)) per unlearning epoch under strong convexity
- Non-convex extension: Leverages the geometry of the bounded projection set C_R to obtain an iteration-independent upper bound on the LSI constant
- Sequential unlearning: A triangle inequality in the ∞-Wasserstein distance (W∞) yields linear accumulation of privacy loss across successive unlearning requests, versus exponential growth for purely Rényi-divergence-based analyses
- Extends naturally to both full-batch (PNGD) and mini-batch (PNSGD) settings
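The learn-then-unlearn workflow above can be sketched end to end on a strongly convex toy problem. Everything here is illustrative: the loss is mean estimation, F_D(w) = (1/2n) Σ ||w − x_i||², the `sqrt(2η)·σ` noise scaling is an assumed standard Langevin discretization, and the hyperparameter values are arbitrary. The key point is that unlearning is just more of the same PNGD loop, started from the current parameters but run on the updated dataset.

```python
import numpy as np

def pngd(w, data, eta, sigma, radius, epochs, rng):
    """Run PNGD epochs on a toy strongly convex loss (sketch).

    Loss: F_D(w) = (1/2n) * sum_i ||w - x_i||^2, so the full-batch
    gradient is simply w - mean(D). Noise scaling sqrt(2*eta)*sigma
    is an assumed Langevin discretization; the paper's constants
    may differ. Projection is onto the L2 ball of the given radius.
    """
    for _ in range(epochs):
        grad = w - data.mean(axis=0)
        w = w - eta * grad + np.sqrt(2.0 * eta) * sigma * rng.standard_normal(w.shape)
        norm = np.linalg.norm(w)
        if norm > radius:
            w = w * (radius / norm)
    return w

rng = np.random.default_rng(0)
D = rng.normal(1.0, 0.5, size=(100, 2))  # original dataset

# Learning: T epochs of PNGD on D -> distribution approaches nu_D.
w = pngd(np.zeros(2), D, eta=0.1, sigma=0.01, radius=5.0, epochs=200, rng=rng)

# Unlearning: drop one record, then continue PNGD from the current
# parameters on D' for K epochs -> distribution drifts toward nu_D'.
D_prime = D[1:]
w = pngd(w, D_prime, eta=0.1, sigma=0.01, radius=5.0, epochs=50, rng=rng)
```

The same structure carries over to the mini-batch (PNSGD) setting by replacing the full-batch gradient with a stochastic estimate on sampled batches.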