Morphologie mathématique (CMM)
Conference paper, 2024

Training morphological neural networks with gradient descent: some theoretical insights

Abstract

Morphological neural networks, or layers, can be a powerful tool for advancing mathematical morphology, both on theoretical aspects such as the representation of complete lattice operators, and in the development of image processing pipelines. However, these architectures turn out to be difficult to train once they comprise more than a few morphological layers, at least within popular machine learning frameworks that rely on gradient-descent-based optimization algorithms. In this paper we investigate the potential and limitations of differentiation-based approaches and back-propagation applied to morphological networks, in light of the non-smooth optimization concept of the Bouligand derivative. We provide insights and first theoretical guidelines, in particular regarding initialization and learning rates.
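For readers unfamiliar with the objects the abstract refers to, below is a minimal sketch, assuming PyTorch, of a max-plus "dilation" layer, the basic building block of a morphological network. The class name, the zero initialization and the dense connectivity are illustrative assumptions, not the paper's architecture. Because the max operation is only piecewise differentiable, automatic differentiation propagates a one-sided derivative through it (the kind of object the Bouligand derivative formalizes), which hints at why deep stacks of such layers are hard to train.

```python
import torch
import torch.nn as nn

class MaxPlusDilation(nn.Module):
    """Illustrative max-plus (dilation) layer: y_j = max_i (x_i + w_{j,i}).

    A generic sketch of a morphological layer, not the author's
    implementation; the zero initialization is an assumption.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.zeros(out_features, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> (batch, out_features).
        # max is only piecewise smooth: autograd routes the whole
        # gradient to the argmax entry (a one-sided derivative),
        # so most weights receive no update at a given step.
        return (x.unsqueeze(1) + self.weight).max(dim=2).values

# Minimal usage: a two-layer morphological stack on random data.
if __name__ == "__main__":
    net = nn.Sequential(MaxPlusDilation(8, 16), MaxPlusDilation(16, 4))
    x = torch.randn(32, 8)
    y = net(x)
    y.sum().backward()  # gradients flow only through argmax paths
    print(y.shape)      # torch.Size([32, 4])
```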
Main file: training.pdf (622.48 KB)
Origin: Files produced by the author(s)
License: CC BY-ND (Attribution - No Derivatives)

Dates and versions

hal-04438062, version 1 (05-02-2024)

License

Attribution - No Derivatives (CC BY-ND)

Identifiers

  • HAL Id: hal-04438062, version 1

Cite

Samy Blusseau. Training morphological neural networks with gradient descent: some theoretical insights. Third IAPR International Conference on Discrete Geometry and Mathematical Morphology (DGMM 2024), Andrea Frosini, Elena Barcucci, Elisa Pergola, Michela Ascolese, Niccoló Di Marco, Simone Rinaldi, Sara Brunetti, Giulia Palma, Veronica Gierrini, Leonardo Bindi (Eds.), Apr 2024, Firenze, Italy. ⟨hal-04438062⟩