Journal article in Optimization Methods and Software, 2024

Delay-tolerant distributed Bregman proximal algorithms

Abstract

Many problems in machine learning can be written as the minimization of a sum of individual loss functions over the training examples. These functions are usually differentiable but, in some cases, their gradients are not Lipschitz continuous, which compromises the use of (proximal) gradient algorithms. Fortunately, changing the geometry and using Bregman divergences can alleviate this issue in several applications, such as Poisson linear inverse problems. However, the Bregman operation makes the aggregation of several points and gradients more involved, hindering the distribution of computations for such problems. In this paper, we propose an asynchronous variant of the Bregman proximal-gradient method, able to adapt to any centralized computing system. In particular, we prove that the algorithm copes with arbitrarily long delays and we illustrate its behavior on distributed Poisson inverse problems.
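For readers unfamiliar with the Bregman geometry mentioned in the abstract, the following is a minimal, synchronous NumPy sketch of a single Bregman proximal-gradient step for a Poisson linear inverse problem, using the Burg entropy as reference function. It is only an illustration of the underlying step, not the authors' asynchronous distributed algorithm; the data A and b, the step size gamma, and the starting point are illustrative assumptions.

import numpy as np

# Synthetic Poisson linear inverse problem: recover x >= 0 from counts b ~ Poisson(A x).
rng = np.random.default_rng(0)
m, n = 50, 20
A = rng.uniform(0.1, 1.0, size=(m, n))      # nonnegative measurement matrix (illustrative)
x_true = rng.uniform(0.5, 2.0, size=n)
b = rng.poisson(A @ x_true).astype(float)   # observed counts

def grad_f(x):
    # Gradient of the Poisson data fit f(x) = sum_i (<a_i, x> - b_i log <a_i, x>),
    # whose gradient is not Lipschitz continuous near the boundary x -> 0.
    Ax = A @ x
    return A.T @ (1.0 - b / Ax)

def bregman_step(x, gamma):
    # One Bregman proximal-gradient step with the Burg entropy h(x) = -sum_j log x_j.
    # Minimizing <grad_f(x), u> + (1/gamma) * D_h(u, x) over u gives the closed form
    #   1 / x_new = 1 / x + gamma * grad_f(x),
    # which stays positive as long as gamma is below the relative-smoothness bound.
    return 1.0 / (1.0 / x + gamma * grad_f(x))

# f is smooth relative to the Burg entropy with constant ||b||_1 (Bauschke-Bolte-Teboulle),
# so any gamma < 1 / ||b||_1 keeps the iterates strictly positive.
gamma = 0.9 / b.sum()
x = np.ones(n)
for _ in range(500):
    x = bregman_step(x, gamma)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

In the distributed setting studied in the paper, the rows of A and entries of b would be split across workers that report possibly delayed gradients to a coordinator; the sketch above deliberately omits that asynchrony.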
Main file: main.pdf (584.55 KB)
Origin: files produced by the author(s)

Dates and versions

hal-04515223, version 1 (25-04-2024)

Identifiers

Cite

Sélim Chraibi, Franck Iutzeler, Jérôme Malick, Alexander Rogozin. Delay-tolerant distributed Bregman proximal algorithms. Optimization Methods and Software, 2024, pp. 1-17. ⟨10.1080/10556788.2023.2278089⟩. ⟨hal-04515223⟩