Sequential certified unlearning addresses the practical scenario in which multiple users submit data-removal requests at different times, so each unlearning step must build on the previously unlearned model rather than on the original trained model. This matters because real-world deployments of machine unlearning must handle successive deletion requests without retraining from scratch after each one.

In ZHANGB2024, sequential certified unlearning is formalized in Algorithm 2. For the k-th unlearning request with unlearned set D_{u_k}, the retrained model is estimated as:

w_tilde_k = w_tilde_{k-1} - (1/H) * H_tilde_{s,lambda,k-1}^{-1} * grad_L(w_tilde_{k-1}, D_{r_k})

where each step starts from the previously unlearned model w_tilde_{k-1} and uses the LiSSA-approximated inverse Hessian H_tilde_{s,lambda,k-1}^{-1} computed over the current retained set D_{r_k}. Crucially, Proposition 4.2 in ZHANGB2024 proves that the sequential approximation error obeys the same upper bound as the single-step case, for two reasons: (1) the weight-norm constraint bounds ||w* - w_tilde*||_2^2 independently of which set is unlearned, and (2) the gradient norm decreases at each step as the parameters approach a local optimum.
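A minimal sketch of one such sequential pass, under stated assumptions: `lissa_ihvp`, `grad_fn`, and `hvp_fn` are hypothetical names (not from ZHANGB2024), the inverse-Hessian-vector product is approximated with a truncated Neumann series in the LiSSA style, and the paper's (1/H) scaling and damping lambda are folded into the `scale` and `damping` constants.

```python
import numpy as np

def lissa_ihvp(hvp, v, damping=0.01, scale=10.0, iters=100):
    """Approximate (H + damping*I)^{-1} v with the truncated Neumann
    recursion used by LiSSA: x_t = v + (I - (H + damping*I)/scale) x_{t-1},
    which converges to scale * (H + damping*I)^{-1} v; we divide by scale."""
    x = v.copy()
    for _ in range(iters):
        x = v + x - (hvp(x) + damping * x) / scale
    return x / scale

def sequential_unlearn(w, grad_fn, hvp_fn, retained_sets):
    """Apply one Newton-style unlearning update per request, always
    starting from the previously unlearned parameters w_tilde_{k-1}."""
    for D_r in retained_sets:
        g = grad_fn(w, D_r)  # grad_L(w_tilde_{k-1}, D_{r_k})
        # Inverse-Hessian-vector product over the current retained set.
        w = w - lissa_ihvp(lambda v: hvp_fn(w, D_r, v), g)
    return w
```

On a quadratic loss this reduces to a damped Newton step per request, which is the regime in which the single-step error bound is derived.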

In CHIEN2024, sequential unlearning is handled through the W-infinity distance framework. Because W-infinity is a true metric (unlike Renyi divergence), its triangle inequality allows the initial W-infinity distance bound Z_B^{(s+1)} for the (s+1)-th request to be bounded as min(c^{K_s * n/b} * Z_B^{(s)} + Z_B, 2R), where c is the contraction rate. This yields bounds that grow at most linearly with the number of unlearning requests. Langevin-dynamics-based approaches, by contrast, must use the weak triangle inequality of Renyi divergence, which doubles the divergence order alpha at each composition and leads to exponential growth.
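The contrast between the two recursions can be illustrated directly. This is a sketch with illustrative parameter values, not the paper's experiments; `winf_bound_sequence` iterates the min(...) bound from CHIEN2024 verbatim, while `renyi_orders` shows the order-doubling forced by the weak triangle inequality.

```python
def winf_bound_sequence(Z_B, c, K, n, b, R, num_requests):
    """Iterate Z^{(s+1)} = min(c**(K*n/b) * Z^{(s)} + Z_B, 2*R).
    With contraction rate c < 1 the factor c**(K*n/b) is small, so the
    bound accumulates roughly additively and is capped at 2*R."""
    Z = Z_B
    out = [Z]
    for _ in range(num_requests - 1):
        Z = min(c ** (K * n / b) * Z + Z_B, 2 * R)
        out.append(Z)
    return out

def renyi_orders(alpha0, num_requests):
    """The weak triangle inequality for Renyi divergence doubles the
    order alpha at every sequential composition: alpha_s = alpha0 * 2**s."""
    return [alpha0 * 2 ** s for s in range(num_requests)]
```

With c = 0.9, K = 5, n = 100, b = 10, the per-request contraction factor is about 0.005, so the W-infinity bound stays close to Z_B after many requests, while the Rényi order reaches 1024 after ten compositions.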

The practical implication is that certified unlearning can be applied incrementally in production systems without the certification budget degrading catastrophically as deletion requests accumulate.
