Conference paper

Understanding deep features with computer-generated imagery

Mathieu Aubry (1, 2, 3, 4), Bryan Russell (5)
Abstract: We introduce an approach for analyzing the variation of features generated by convolutional neural networks (CNNs) with respect to scene factors that occur in natural images. Such factors may include object style, 3D viewpoint, color, and scene lighting configuration. Our approach analyzes CNN feature responses corresponding to different scene factors by controlling for them via rendering using a large database of 3D CAD models. The rendered images are presented to a trained CNN and responses for different layers are studied with respect to the input scene factors. We perform a decomposition of the responses based on knowledge of the input scene factors and analyze the resulting components. In particular, we quantify their relative importance in the CNN responses and visualize them using principal component analysis. We show qualitative and quantitative results of our study on three CNNs trained on large image datasets: AlexNet [18], Places [40], and Oxford VGG [8]. We observe important differences across the networks and CNN layers for different scene factors and object categories. Finally, we demonstrate that our analysis based on computer-generated imagery translates to the network representation of natural images.
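The factor-based decomposition the abstract describes can be illustrated with a toy two-factor example. This is a minimal sketch, not the paper's implementation: the array sizes, the two factors (style × viewpoint), and all variable names are assumptions for illustration. A balanced grid of feature responses is split ANOVA-style into a mean, per-factor components, and a residual; the energy of each component gives its relative importance, and PCA (via SVD) visualizes a component's principal directions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CNN feature responses on a rendered grid of images,
# indexed by two scene factors: S styles x V viewpoints, D-dim features.
S, V, D = 10, 24, 256
F = rng.normal(size=(S, V, D))

# ANOVA-style decomposition: F = mean + style comp. + viewpoint comp. + residual
mean = F.mean(axis=(0, 1), keepdims=True)        # (1, 1, D) grand mean
style = F.mean(axis=1, keepdims=True) - mean     # (S, 1, D) style component
view = F.mean(axis=0, keepdims=True) - mean      # (1, V, D) viewpoint component
resid = F - mean - style - view                  # (S, V, D) interaction/residual

# Relative importance: each component's share of the total centered energy.
# On a balanced grid the components are orthogonal, so the shares sum to 1.
total = ((F - mean) ** 2).sum()
e_style = V * (style ** 2).sum()   # broadcast over V viewpoints
e_view = S * (view ** 2).sum()     # broadcast over S styles
e_resid = (resid ** 2).sum()
shares = {"style": e_style / total,
          "viewpoint": e_view / total,
          "residual": e_resid / total}
print(shares)

# Visualize the style component with PCA: its rows are already mean-centered,
# so the right singular vectors are the principal directions in feature space.
_, _, Vt = np.linalg.svd(style.reshape(S, D), full_matrices=False)
top2 = Vt[:2]                      # top-2 principal directions, shape (2, D)
```

With real CNN activations in place of the random `F`, comparing the component shares across layers is what lets one say, as the abstract does, that different layers and networks weight scene factors differently.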
Document type:
Conference paper
Contributor: Mathieu Aubry
Submitted on: Saturday, December 12, 2015 - 14:11:11
Last modified on: Tuesday, December 8, 2020 - 10:05:58
Long-term archiving on: Sunday, March 13, 2016 - 10:12:22


Files produced by the author(s)


  • HAL Id: hal-01240849, version 1


Mathieu Aubry, Bryan Russell. Understanding deep features with computer-generated imagery. ICCV, Dec 2015, Santiago, Chile. ⟨hal-01240849⟩


