A technique used in ZHANGZ2022 that discretizes model parameters to a finite set of representable values after computing the unlearning update. By restricting the output of the unlearning mechanism to a discrete set, the privacy/certification analysis becomes tighter, since the mechanism's output space is bounded and enumerable.
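A minimal sketch of the discretization step, assuming a simple uniform grid (the function name `quantize` and the grid spacing `step` are illustrative, not taken from ZHANGZ2022):

```python
import numpy as np

def quantize(params, step):
    """Snap each parameter to the nearest multiple of `step`,
    restricting the mechanism's output to a discrete, enumerable grid."""
    return np.round(params / step) * step

theta = np.array([0.137, -0.052, 0.981])
quantize(theta, 0.1)  # each value snapped to the nearest multiple of 0.1
```

Because every possible output now lies on a known finite grid (within the feasible parameter range), the certification analysis can reason over an enumerable set of outcomes rather than a continuum.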
In the context of certified machine unlearning, quantization serves a specific role in the certification guarantee: the sensitivity of the unlearning mechanism (how much its output distribution can change when one data point is added or removed) is reduced when the output is constrained to quantized values. As a result, the (epsilon, delta) certification budget can be met with less additive noise, preserving more model utility than a continuous-output mechanism at the same certification level.
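To see why lower sensitivity translates into less noise, consider the standard Gaussian-mechanism calibration from differential privacy (shown here as a general illustration; ZHANGZ2022 may use a different noise analysis). The required noise scale grows linearly with sensitivity, so any reduction in sensitivity directly reduces the noise needed at a fixed (epsilon, delta):

```python
import math

def gaussian_sigma(sensitivity, epsilon, delta):
    """Classic Gaussian-mechanism noise scale:
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon.
    Noise is proportional to sensitivity at fixed (epsilon, delta)."""
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

# Halving the sensitivity halves the required noise scale:
full = gaussian_sigma(1.0, 1.0, 1e-5)
half = gaussian_sigma(0.5, 1.0, 1e-5)
```

The specific sensitivity values here are placeholders; the point is the linear relationship, which is what makes sensitivity-reduction techniques like quantization pay off in model utility.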
Quantization is typically applied as a post-processing step after the randomized gradient-smoothing update. The combination of smoothing (which reduces gradient sensitivity) and quantization (which reduces output sensitivity) provides a two-pronged approach to certified unlearning that avoids the inverse-Hessian computation required by Newton-update-based methods such as those in ZHANGB2024.
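The pipeline described above can be sketched end to end as follows. This is a schematic under stated assumptions, not the papers' actual algorithm: the update is shown as a single gradient step, and the names (`unlearning_update`, `lr`, `sigma`, `step`) and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def unlearning_update(theta, grad_removed, lr, sigma, step):
    """Two-pronged certified-unlearning sketch:
    1. gradient-style unlearning update (schematic, no Hessian inverse),
    2. Gaussian noise for randomized smoothing,
    3. quantization as post-processing to a discrete output space."""
    theta = theta - lr * grad_removed                   # remove the point's influence
    theta = theta + rng.normal(0.0, sigma, theta.shape) # smoothing noise
    return np.round(theta / step) * step                # discretize the output

theta = np.array([0.42, -0.17, 0.93])
grad_removed = np.array([0.05, -0.02, 0.01])
theta_new = unlearning_update(theta, grad_removed, lr=0.1, sigma=0.01, step=0.05)
```

Note that quantization is pure post-processing of an already-noised output, so it cannot weaken the privacy guarantee of the noisy update; the certification argument then exploits the discrete output space on top of that.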