Conference paper, 2014

Inference by Learning: Speeding-up Graphical Model Optimization via a Coarse-to-Fine Cascade of Pruning Classifiers

Abstract

We propose a general and versatile framework that significantly speeds up graphical model optimization while maintaining excellent solution accuracy. The proposed approach, referred to as Inference by Learning, or IbyL for short, relies on a multi-scale pruning scheme that progressively reduces the solution space by means of a coarse-to-fine cascade of learnt classifiers. We experiment thoroughly with classic computer-vision MRF problems, where our novel framework consistently yields a significant speed-up (with respect to the most efficient inference methods) and obtains a more accurate solution than directly optimizing the MRF. We make our code available on-line [4].
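The abstract summarizes the approach only at a high level. As a rough illustration of the general idea (a cascade that alternates optimization with label pruning), the Python sketch below uses a unary-only placeholder solver and a simple cost-ranking rule in place of the learned pruning classifiers; every name and detail here is a hypothetical assumption, not the authors' implementation, which is available in their released code.

```python
# Illustrative sketch only: hypothetical names, a unary-only placeholder solver,
# and a cost-ranking rule standing in for the learned pruning classifiers.
import numpy as np

def coarse_to_fine_pruning(unary, num_stages=3, keep_ratio=0.5):
    """Progressively shrink the label space of a simple labeling problem.

    unary: (num_nodes, num_labels) array of unary costs.
    At each stage, a placeholder "solver" picks the cheapest active label per
    node, then a pruning rule keeps only the most promising labels for the
    next (finer) stage. A real MRF solver would also use pairwise terms.
    """
    num_nodes, num_labels = unary.shape
    active = np.ones((num_nodes, num_labels), dtype=bool)  # all labels active

    for stage in range(num_stages):
        restricted = np.where(active, unary, np.inf)   # mask pruned labels
        labeling = restricted.argmin(axis=1)           # placeholder inference

        # Pruning: keep the keep_ratio cheapest active labels per node,
        # always retaining the label chosen at this stage.
        keep = max(1, int(keep_ratio * active.sum(axis=1).min()))
        order = np.argsort(restricted, axis=1)
        kept = np.zeros_like(active)
        np.put_along_axis(kept, order[:, :keep], True, axis=1)
        kept[np.arange(num_nodes), labeling] = True
        active &= kept

    return labeling

# Toy usage: 5 nodes, 8 candidate labels, random unary costs.
rng = np.random.default_rng(0)
print(coarse_to_fine_pruning(rng.random((5, 8))))
```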
Main file
NIPS14.pdf (3.2 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01081590, version 1 (12-11-2014)

Identifiers

  • HAL Id: hal-01081590, version 1

Cite

Bruno Conejo, Nikos Komodakis, Sebastien Leprince, Jean-Philippe Avouac. Inference by Learning: Speeding-up Graphical Model Optimization via a Coarse-to-Fine Cascade of Pruning Classifiers. NIPS, Dec 2014, Montreal, Canada. ⟨hal-01081590⟩
143 views
167 downloads
