Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs - École des Ponts ParisTech
Conference paper, Year: 2022

Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs

Abstract

The training of neural networks by gradient descent methods is a cornerstone of the deep learning revolution. Yet, despite some recent progress, a complete theory explaining its success is still missing. This article presents, for orthogonal input vectors, a precise description of the gradient flow dynamics of training one-hidden-layer ReLU neural networks for the mean squared error at small initialisation. In this setting, despite non-convexity, we show that the gradient flow converges to zero loss and characterise its implicit bias towards minimum variation norm. Furthermore, two interesting phenomena are highlighted: a quantitative description of the initial alignment phenomenon and a proof that the process follows a specific saddle-to-saddle dynamics.
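The setting described in the abstract is easy to simulate. Below is a minimal, self-contained sketch (not the authors' code): it discretises the gradient flow by plain gradient descent with a small step size, on a one-hidden-layer ReLU network trained with the square loss on orthogonal inputs (the standard basis) at small initialisation. All hyper-parameters (d, m, init_scale, lr, n_steps) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the setting in the abstract (not the authors' code):
# gradient descent with a small step size as a discretisation of gradient
# flow, on a one-hidden-layer ReLU network trained with the square loss
# on orthogonal inputs at small initialisation. All hyper-parameters
# (d, m, init_scale, lr, n_steps) are illustrative choices.

rng = np.random.default_rng(0)

d, m = 4, 20                # input dimension, hidden width
X = np.eye(d)               # orthogonal inputs: the standard basis of R^d
y = rng.standard_normal(d)  # arbitrary labels

init_scale = 1e-3           # the small-initialisation regime of the paper
W = init_scale * rng.standard_normal((m, d))  # hidden-layer weights
a = init_scale * rng.standard_normal(m)       # output-layer weights

lr, n_steps = 5e-2, 50_000

for step in range(n_steps):
    pre = X @ W.T               # pre-activations, shape (d, m)
    h = np.maximum(pre, 0.0)    # ReLU activations
    resid = h @ a - y           # residuals of the network outputs

    # Gradients of L = (1/2d) * sum_i resid_i^2 (ReLU subgradient 1{pre > 0}).
    grad_a = h.T @ resid / d
    grad_W = ((np.outer(resid, a) * (pre > 0)).T @ X) / d

    a -= lr * grad_a
    W -= lr * grad_W

    if step % 5_000 == 0:
        print(f"step {step:6d}  loss {0.5 * np.mean(resid**2):.3e}")
```

With a small enough init_scale, the loss typically stays on a long plateau near its initial value (the saddle at the origin) before dropping sharply, which is the kind of saddle-to-saddle behaviour the paper proves in the gradient-flow limit.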
Main file: 2206.00939.pdf (1.12 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04105187, version 1 (24-05-2023)

Identifiers

Cite

Etienne Boursier, Loucas Pillaud-Vivien, Nicolas Flammarion. Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs. NeurIPS 2022 - 36th International Conference on Neural Information Processing Systems, Nov 2022, New Orleans, United States. ⟨hal-04105187⟩