DP-SGD Without Clipping: The Lipschitz Neural Network Way - Argumentation, Décision, Raisonnement, Incertitude et Apprentissage
Conference paper - Year: 2024


Abstract

State-of-the-art approaches for training Differentially Private (DP) Deep Neural Networks (DNN) have difficulty estimating tight bounds on the sensitivity of the network's layers, and instead rely on per-sample gradient clipping. This clipping process not only biases the direction of the gradients but is also costly in both memory and computation. To provide sensitivity bounds and bypass the drawbacks of clipping, we propose to rely on Lipschitz-constrained networks. Our theoretical analysis reveals an unexplored link between the Lipschitz constant of a network with respect to its input and the one with respect to its parameters. By bounding the Lipschitz constant of each layer with respect to its parameters, we prove that these networks can be trained with privacy guarantees. Our analysis not only allows the computation of the aforementioned sensitivities at scale, but also provides guidance on how to maximize the gradient-to-noise ratio for fixed privacy guarantees. The code is released as a Python package, available at https://github.com/Algue-Rythme/lip-dp
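To make the core idea concrete, here is a minimal, hypothetical sketch (not the lip-dp API; the function name and arguments are illustrative) of a DP-SGD update in which the Gaussian noise is calibrated to an assumed analytical bound on the per-sample gradient norm, such as one derived from the layers' Lipschitz constants, rather than to a clipping threshold:

    import numpy as np

    def dp_sgd_step(params, per_sample_grads, sensitivity, noise_multiplier, lr, rng):
        """One noisy gradient step under the Gaussian mechanism.

        sensitivity: assumed analytical bound on the l2 norm of every per-sample
        gradient (in the paper's setting it follows from the layers' Lipschitz
        constants with respect to their parameters); no clipping is applied.
        """
        batch_size = per_sample_grads.shape[0]
        grad = per_sample_grads.mean(axis=0)                  # average gradient over the batch
        sigma = noise_multiplier * sensitivity / batch_size   # noise scale on the averaged gradient
        noise = rng.normal(0.0, sigma, size=grad.shape)       # calibrated Gaussian noise
        return params - lr * (grad + noise)

    # Toy usage with placeholder per-sample gradients.
    rng = np.random.default_rng(0)
    params = np.zeros(10)
    per_sample_grads = rng.normal(size=(32, 10))
    params = dp_sgd_step(params, per_sample_grads, sensitivity=2.0,
                         noise_multiplier=1.1, lr=0.1, rng=rng)

Because the sensitivity bound is known in advance, the update needs only the averaged gradient, avoiding the per-sample clipping pass and its memory overhead.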
Main file: 2305.16202v2.pdf (7.03 MB)
Origin: files produced by the author(s)

Dates and versions

hal-04610647, version 1 (13-06-2024)

Identifiers

Cite

Louis Béthune, Thomas Massena, Thibaut Boissin, Yannick Prudent, Corentin Friedrich, et al. DP-SGD Without Clipping: The Lipschitz Neural Network Way. ICLR 2024 - 12th International Conference on Learning Representations, 2024, Vienna, Austria. ⟨hal-04610647⟩
650 Views
302 Downloads

