Journal article: International Journal of Computer Vision, 2018

Virtual Training for a Real Application: Accurate Object-Robot Relative Localization without Calibration

Vianney Loing
Renaud Marlet
Mathieu Aubry

Abstract

Localizing an object accurately with respect to a robot is a key step for autonomous robotic manipulation. In this work, we propose to tackle this task knowing only 3D models of the robot and object in the particular case where the scene is viewed from uncalibrated cameras — a situation which would be typical in an uncontrolled environment, e.g., on a construction site. We demonstrate that this localization can be performed very accurately, with millimetric errors, without using a single real image for training, a strong advantage since acquiring representative training data is a long and expensive process. Our approach relies on a classification Convolutional Neural Network (CNN) trained using hundreds of thousands of synthetically rendered scenes with randomized parameters. To evaluate our approach quantitatively and make it comparable to alternative approaches, we build a new rich dataset of real robot images with accurately localized blocks.
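The abstract frames relative localization as a classification problem solved by a CNN trained purely on synthetic renderings. As a rough, hedged illustration of that general idea only, the sketch below casts the object-robot offset as discretized bins predicted by a small classification network trained on rendered images whose ground-truth offsets come from the renderer. The architecture, bin counts, and names (RelLocNet, NUM_BINS_X, NUM_BINS_Y) are hypothetical and not taken from the paper.

# Minimal, illustrative sketch only -- not the authors' implementation.
# Assumption: the object-robot offset is discretized into bins and a CNN
# classifies the bin for each coordinate, supervised by synthetic renders.
import torch
import torch.nn as nn

NUM_BINS_X = 100  # hypothetical discretization of the x offset
NUM_BINS_Y = 100  # hypothetical discretization of the y offset

class RelLocNet(nn.Module):
    """CNN that classifies the discretized object-robot offset from an image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # One classification head per coordinate.
        self.head_x = nn.Linear(128, NUM_BINS_X)
        self.head_y = nn.Linear(128, NUM_BINS_Y)

    def forward(self, img):
        f = self.features(img).flatten(1)
        return self.head_x(f), self.head_y(f)

# One training step on a batch of synthetically rendered images; ground-truth
# bins are known exactly from the renderer, so no real images are needed.
model = RelLocNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)           # stand-in for rendered scenes
labels_x = torch.randint(0, NUM_BINS_X, (8,))  # discretized ground-truth offsets
labels_y = torch.randint(0, NUM_BINS_Y, (8,))

logits_x, logits_y = model(images)
loss = criterion(logits_x, labels_x) + criterion(logits_y, labels_y)
optimizer.zero_grad()
loss.backward()
optimizer.step()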
Main file: IJCV-2018-Loing-et-al.pdf (2.3 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01815826 , version 1 (14-06-2018)
hal-01815826 , version 2 (04-07-2018)

Identifiers

Cite

Vianney Loing, Renaud Marlet, Mathieu Aubry. Virtual Training for a Real Application: Accurate Object-Robot Relative Localization without Calibration. International Journal of Computer Vision, 2018, 126, pp.1045-1060. ⟨10.1007/s11263-018-1102-6⟩. ⟨hal-01815826v2⟩
401 views
249 downloads
