Journal article: International Journal of Computer Vision, 2018

Virtual Training for a Real Application: Accurate Object-Robot Relative Localization without Calibration

Vianney Loing
Renaud Marlet
Mathieu Aubry

Abstract

Localizing an object accurately with respect to a robot is a key step for autonomous robotic manipulation. In this work, we propose to tackle this task knowing only 3D models of the robot and object in the particular case where the scene is viewed from uncalibrated cameras — a situation which would be typical in an uncontrolled environment, e.g., on a construction site. We demonstrate that this localization can be performed very accurately, with millimetric errors, without using a single real image for training, a strong advantage since acquiring representative training data is a long and expensive process. Our approach relies on a classification Convolutional Neural Network (CNN) trained using hundreds of thousands of synthetically rendered scenes with randomized parameters. To evaluate our approach quantitatively and make it comparable to alternative approaches, we build a new rich dataset of real robot images with accurately localized blocks.
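The abstract describes the approach as a classification CNN trained purely on synthetic renders with randomized parameters, predicting the block's position relative to the robot. The sketch below is a minimal illustration of that idea under assumed choices (backbone, image size, and number of position bins are not taken from the paper): instead of regressing coordinates, the network classifies the object position into discrete bins along each axis.

```python
# Illustrative sketch (not the authors' implementation) of a classification
# CNN for object-robot relative localization: the network predicts discretized
# position bins from a single RGB view. Architecture and bin counts are assumptions.
import torch
import torch.nn as nn


class RelativePoseClassifier(nn.Module):
    def __init__(self, num_bins_x: int = 64, num_bins_y: int = 64):
        super().__init__()
        # Small convolutional backbone (assumed; any standard CNN trained
        # only on synthetic renders would play the same role).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # One classification head per horizontal coordinate of the block
        # relative to the robot, each over a discrete set of position bins.
        self.head_x = nn.Linear(128, num_bins_x)
        self.head_y = nn.Linear(128, num_bins_y)

    def forward(self, image: torch.Tensor):
        feat = self.features(image).flatten(1)
        return self.head_x(feat), self.head_y(feat)


if __name__ == "__main__":
    model = RelativePoseClassifier()
    renders = torch.rand(8, 3, 224, 224)          # batch of synthetic renders
    bins_x = torch.randint(0, 64, (8,))           # ground-truth bins from the renderer
    bins_y = torch.randint(0, 64, (8,))
    logits_x, logits_y = model(renders)
    loss = nn.functional.cross_entropy(logits_x, bins_x) + \
           nn.functional.cross_entropy(logits_y, bins_y)
    print(loss.item())
```

In such a setup, both the training images and the ground-truth bins come entirely from the synthetic renderer, which is consistent with the abstract's claim that no real image or camera calibration is needed for training.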
Main file: IJCV-2018-Loing-et-al.pdf (2.3 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01815826, version 1 (14-06-2018)
hal-01815826, version 2 (04-07-2018)

Identifiers

  • HAL Id : hal-01815826
  • DOI : 10.1007/s11263-018-1102-6

Cite

Vianney Loing, Renaud Marlet, Mathieu Aubry. Virtual Training for a Real Application: Accurate Object-Robot Relative Localization without Calibration. International Journal of Computer Vision, In press, ⟨10.1007/s11263-018-1102-6⟩. ⟨hal-01815826v2⟩