Inference by Learning: Speeding-up Graphical Model Optimization via a Coarse-to-Fine Cascade of Pruning Classifiers

Abstract: We propose a general and versatile framework that significantly speeds up graphical model optimization while maintaining excellent solution accuracy. The proposed approach, referred to as Inference by Learning or IbyL for short, relies on a multi-scale pruning scheme that progressively reduces the solution space through a coarse-to-fine cascade of learned classifiers. We experiment thoroughly with classic computer vision MRF problems, where our framework consistently yields a significant speed-up (with respect to the most efficient inference methods) while obtaining a more accurate solution than directly optimizing the MRF. We make our code available online [4].
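The abstract describes the approach only at a high level. As a rough illustration, the toy Python sketch below applies the same coarse-to-fine idea to a 1-D chain MRF with Potts pairwise terms: at each scale a coarsened problem is solved and unpromising labels are pruned before moving to the next, finer scale. The block-averaging coarsening, the hand-set pruning margin, and all names (solve_chain, coarse_to_fine, keep_margin) are assumptions made for illustration only; the actual method uses learned pruning classifiers, and the authors' released code [4] is the reference implementation.

# Toy sketch only: a coarse-to-fine label-pruning cascade on a 1-D chain MRF
# with Potts pairwise terms. The coarsening and the margin-based pruning rule
# are illustrative stand-ins for the paper's learned pruning classifiers.
import numpy as np


def solve_chain(unary, active, lam):
    """Exact Viterbi labeling of a chain MRF, restricted to 'active' labels."""
    n, L = unary.shape
    INF = 1e18
    u = np.where(active, unary, INF)          # forbid pruned labels
    dp = u[0].copy()
    back = np.zeros((n, L), dtype=int)
    for i in range(1, n):
        # trans[p, q] = best cost of being at label p on node i-1, then taking q.
        trans = dp[:, None] + lam * (1.0 - np.eye(L))
        back[i] = np.argmin(trans, axis=0)
        dp = trans[back[i], np.arange(L)] + u[i]
    labels = np.empty(n, dtype=int)
    labels[-1] = int(np.argmin(dp))
    for i in range(n - 1, 0, -1):             # backtrack
        labels[i - 1] = back[i, labels[i]]
    return labels


def coarse_to_fine(unary, lam, scales=(4, 2, 1), keep_margin=0.2):
    """Solve progressively finer versions of the problem, pruning the label
    space after each scale (a hand-set margin replaces the learned classifiers)."""
    n, L = unary.shape
    active = np.ones((n, L), dtype=bool)
    for s in scales:
        m = n // s
        # Coarsen: average unaries over blocks of s nodes; a label is active
        # at a coarse node if it is active anywhere inside its block.
        cu = unary[: m * s].reshape(m, s, L).mean(axis=1)
        ca = active[: m * s].reshape(m, s, L).any(axis=1)
        lab = solve_chain(cu, ca, lam)
        # Prune labels whose coarse unary cost is far above the winner's.
        win = cu[np.arange(m), lab][:, None]
        keep = (cu <= win + keep_margin) & ca
        keep[np.arange(m), lab] = True        # always keep the coarse winner
        active[: m * s] = np.repeat(keep, s, axis=0)
    return solve_chain(unary, active, lam)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    unary = rng.random((64, 16))              # 64 nodes, 16 candidate labels
    print(coarse_to_fine(unary, lam=0.5))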
Document type:
Conference paper
NIPS, Dec 2014, Montreal, Canada

Cited literature: 30 references

https://hal-enpc.archives-ouvertes.fr/hal-01081590
Contributor: Pascal Monasse <>
Submitted on: Wednesday, November 12, 2014 - 15:32:14
Last modified on: Wednesday, April 11, 2018 - 12:12:03
Document(s) archived on: Friday, February 13, 2015 - 10:20:21

File

NIPS14.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01081590, version 1

Citation

Bruno Conejo, Nikos Komodakis, Sebastien Leprince, Jean-Philippe Avouac. Inference by Learning: Speeding-up Graphical Model Optimization via a Coarse-to-Fine Cascade of Pruning Classifiers. NIPS, Dec 2014, Montreal, Canada. 〈hal-01081590〉

Metrics

Record views: 218
File downloads: 142