Conference paper, Year: 2010

Towards Optimal Naive Bayes Nearest Neighbor

Abstract

Naive Bayes Nearest Neighbor (NBNN) is a feature-based image classifier that achieves an impressive degree of accuracy by exploiting 'Image-to-Class' distances and by avoiding quantization of local image descriptors. It is based on the hypothesis that each local descriptor is drawn from a class-dependent probability measure. The density of the latter is estimated by a non-parametric kernel estimator, which is further simplified under the assumption that the normalization factor is class-independent. While it leads to a significant simplification, the assumption underlying the original NBNN is too restrictive and considerably degrades its generalization ability. The goal of this paper is to address this issue. As we relax this assumption, we are faced with a parameter selection problem, which we solve by hinge-loss minimization. We also show that our modified formulation naturally generalizes to optimal combinations of feature types. Experiments conducted on several datasets show that the gain over the original NBNN can reach 20 percentage points. We also take advantage of the linearity of optimal NBNN to perform classification by detection through efficient sub-window search, with yet another performance gain. As a result, our classifier outperforms, in terms of misclassification error, methods based on support vector machines and bags of quantized features on some datasets.
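For concreteness, the baseline decision rule that the paper builds on assigns an image with local descriptors d_1, ..., d_n to the class c minimizing the Image-to-Class distance sum_i ||d_i - NN_c(d_i)||^2, where NN_c(d_i) is the nearest neighbor of d_i among the training descriptors of class c. The sketch below is an illustrative Python implementation of this baseline rule, using NumPy and a SciPy k-d tree; the function names and data layout are assumptions for illustration, not the authors' code, and the per-class correction terms learned by hinge-loss minimization in the paper are not included.

    import numpy as np
    from scipy.spatial import cKDTree

    def build_class_indexes(train_descriptors_per_class):
        # One k-d tree per class, built over all local descriptors
        # pooled from that class's training images.
        return {c: cKDTree(np.vstack(descs))
                for c, descs in train_descriptors_per_class.items()}

    def nbnn_classify(image_descriptors, class_indexes):
        # Baseline NBNN: pick the class minimizing the sum of squared
        # nearest-neighbor distances of the image's local descriptors.
        best_class, best_total = None, np.inf
        for c, tree in class_indexes.items():
            dists, _ = tree.query(np.asarray(image_descriptors), k=1)
            total = float(np.sum(dists ** 2))
            if total < best_total:
                best_class, best_total = c, total
        return best_class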

Dates and versions

hal-00654399, version 1 (21-12-2011)

Identifiers

Cite

Régis Behmo, Paul Marcombes, Arnak S. Dalalyan, Veronique Prinet. Towards Optimal Naive Bayes Nearest Neighbor. ECCV 2010 - 11th European Conference on Computer Vision, Sep 2010, Heraklion, Crete, Greece. pp.171-184, ⟨10.1007/978-3-642-15561-1_13⟩. ⟨hal-00654399⟩