Learning to Guide Local Feature Matches - École des Ponts ParisTech
Conference Papers, 2020

Learning to Guide Local Feature Matches

Abstract

We tackle the problem of finding accurate and robust keypoint correspondences between images. We propose a learning-based approach that guides local feature matches using a learned approximate image matching. Our approach can boost SIFT results to a level comparable to state-of-the-art deep descriptors, such as SuperPoint, ContextDesc, or D2-Net, and can further improve the performance of those descriptors. We introduce and study different levels of supervision for learning coarse correspondences. In particular, we show that weak supervision from epipolar geometry leads to higher performance than the stronger but more biased point-level supervision, and is a clear improvement over weak image-level supervision. We demonstrate the benefits of our approach in a variety of conditions by evaluating our guided keypoint correspondences for localization of internet images on the YFCC100M dataset and of indoor images on the SUN3D dataset, for robust localization on the Aachen Day-Night benchmark, and for 3D reconstruction in challenging conditions using the LTLL historical image data.
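The abstract does not detail how epipolar geometry provides weak supervision; a common way to derive such labels is to mark a candidate match as a positive when it is consistent with the fundamental matrix of the image pair. The sketch below illustrates this idea with the symmetric epipolar distance; the function names and the pixel threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Symmetric epipolar distance of candidate matches.

    F  : (3, 3) fundamental matrix, with x2^T F x1 = 0 for true matches.
    x1 : (N, 2) pixel coordinates in image 1.
    x2 : (N, 2) pixel coordinates in image 2.
    Returns the sum of point-to-epipolar-line distances in both images.
    """
    n = x1.shape[0]
    h1 = np.hstack([x1, np.ones((n, 1))])  # homogeneous coordinates
    h2 = np.hstack([x2, np.ones((n, 1))])
    l2 = h1 @ F.T                          # epipolar lines in image 2 (F x1)
    l1 = h2 @ F                            # epipolar lines in image 1 (F^T x2)
    num = np.abs(np.sum(h2 * l2, axis=1))  # |x2^T F x1|
    d2 = num / np.hypot(l2[:, 0], l2[:, 1])  # distance of x2 to its line
    d1 = num / np.hypot(l1[:, 0], l1[:, 1])  # distance of x1 to its line
    return d1 + d2

def weak_epipolar_labels(F, x1, x2, thresh=3.0):
    """Weakly label matches: positive if consistent with the epipolar
    geometry up to `thresh` pixels (threshold chosen for illustration)."""
    return epipolar_distance(F, x1, x2) < thresh
```

Such labels require only a relative pose (or an estimated fundamental matrix) per image pair, which is much cheaper to obtain than pixel-accurate point correspondences.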
Main file: camera_ready.pdf (2.57 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02969789, version 1 (16-10-2020)

Identifiers

  • HAL Id: hal-02969789, version 1

Cite

François Darmon, Mathieu Aubry, Pascal Monasse. Learning to Guide Local Feature Matches. 3DV, Nov 2020, Fukuoka (online), Japan. ⟨hal-02969789⟩