Conference paper, 2014

Inference by Learning: Speeding-up Graphical Model Optimization via a Coarse-to-Fine Cascade of Pruning Classifiers


Abstract

We propose a general and versatile framework that significantly speeds up graphical model optimization while maintaining excellent solution accuracy. The proposed approach, referred to as Inference by Learning, or IbyL for short, relies on a multi-scale pruning scheme that progressively reduces the solution space by means of a coarse-to-fine cascade of learnt classifiers. We experiment thoroughly with classic computer-vision MRF problems, where our novel framework consistently yields a significant speed-up (with respect to the most efficient inference methods) and obtains a more accurate solution than directly optimizing the MRF. We make our code available on-line [4].
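As a rough illustration of the coarse-to-fine pruning idea described in the abstract (not the authors' implementation), the sketch below works on a toy chain MRF: it solves the problem on a subsampled label grid, then at each finer scale keeps only labels close to the coarser solution before running exact dynamic programming on the surviving labels. The distance-to-coarse-solution rule is a hypothetical stand-in for the paper's learnt pruning classifiers, and all function names are assumptions.

```python
import numpy as np

def chain_mrf_dp(unary, pair_weight, allowed):
    """Exact MAP on a chain MRF by dynamic programming (Viterbi),
    restricted to the per-node allowed label sets."""
    n, L = unary.shape
    INF = 1e18
    cost = np.full((n, L), INF)
    back = np.zeros((n, L), dtype=int)
    cost[0, allowed[0]] = unary[0, allowed[0]]
    for i in range(1, n):
        for l in allowed[i]:
            prev = allowed[i - 1]
            # truncated-linear-free pairwise term: |l_prev - l| weighted
            trans = cost[i - 1, prev] + pair_weight * np.abs(prev - l)
            j = int(np.argmin(trans))
            cost[i, l] = trans[j] + unary[i, l]
            back[i, l] = prev[j]
    labels = np.zeros(n, dtype=int)
    labels[-1] = allowed[-1][int(np.argmin(cost[-1, allowed[-1]]))]
    for i in range(n - 1, 0, -1):
        labels[i - 1] = back[i, labels[i]]
    return labels, float(cost[-1, labels[-1]])

def prune_cascade(unary, pair_weight, radius=2, n_scales=2):
    """Coarse-to-fine pruning: solve on a subsampled label grid, then
    at each finer scale keep only labels within `radius` grid steps of
    the coarser solution (a threshold stand-in for learnt classifiers)."""
    n, L = unary.shape
    allowed = [np.arange(0, L, 2 ** n_scales) for _ in range(n)]
    for s in range(n_scales, 0, -1):
        labels, _ = chain_mrf_dp(unary, pair_weight, allowed)
        step = 2 ** (s - 1)
        allowed = [
            np.array(sorted({l for l in range(0, L, step)
                             if abs(l - labels[i]) <= radius * step}))
            for i in range(n)
        ]
    return chain_mrf_dp(unary, pair_weight, allowed)
```

Because each scale discards most labels, the final exact solve runs over a much smaller solution space; the restricted optimum can never beat the full optimum, but in practice (as the paper reports at far larger scale) it stays close to it.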
Main file: NIPS14.pdf (3.2 MB)
Origin: files produced by the author(s)

Dates and versions

hal-01081590, version 1 (12-11-2014)

Identifiers

  • HAL Id: hal-01081590, version 1

Cite

Bruno Conejo, Nikos Komodakis, Sebastien Leprince, Jean-Philippe Avouac. Inference by Learning: Speeding-up Graphical Model Optimization via a Coarse-to-Fine Cascade of Pruning Classifiers. NIPS, Dec 2014, Montreal, Canada. ⟨hal-01081590⟩
