
Improving Latent Representations of ConvNets for Visual Understanding

Abstract: For a decade now, deep convolutional neural networks have demonstrated their ability to produce excellent results in computer vision. To do so, these models transform the input image into a series of latent representations. In this thesis, we work on improving the "quality" of the latent representations of ConvNets for different tasks. First, we regularize those representations to increase their robustness to intra-class variations and thus improve their classification performance. To do so, we develop a loss based on information-theoretic metrics that decreases the entropy of the representations conditionally on the class. Then, we propose to structure the information in two complementary latent spaces, resolving a conflict between the invariance of the representations and the reconstruction task. This structure relaxes the constraint imposed by classical architectures and yields better results in the semi-supervised setting. Finally, we address the problem of disentangling, i.e., explicitly separating and representing independent factors of variation of the dataset. We pursue our work on structuring latent spaces and use adversarial costs to enforce an effective separation of the information. This improves the quality of the representations and enables semantic image editing.
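To give a rough feel for the first contribution — penalizing the entropy of latent activations conditionally on the class — here is a minimal NumPy sketch. All names here are hypothetical illustrations, not the thesis's actual loss: each latent unit is crudely modeled as a Bernoulli variable via a sigmoid, and its Shannon entropy is averaged within each class.

```python
import numpy as np

def conditional_entropy_penalty(latents, labels, eps=1e-8):
    """Toy class-conditional entropy penalty (illustrative only).

    Squashes each latent unit to (0, 1) with a sigmoid, treats it as a
    Bernoulli variable, and averages its Shannon entropy within each class.
    A low value means the representations are nearly deterministic given
    the class, i.e. robust to intra-class variations.
    """
    p = 1.0 / (1.0 + np.exp(-latents))              # per-unit "activation probability"
    classes = np.unique(labels)
    penalty = 0.0
    for c in classes:
        pc = p[labels == c].mean(axis=0)            # mean activation per unit, within class c
        h = -(pc * np.log(pc + eps) + (1.0 - pc) * np.log(1.0 - pc + eps))
        penalty += h.mean()
    return penalty / len(classes)
```

In a real training loop this term would be added, with some weight, to the usual cross-entropy objective; saturated class-consistent activations drive the penalty toward zero, while activations hovering near zero keep it close to ln 2 per unit.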

Cited literature: 191 references
Contributor: Thomas Robert
Submitted on: Wednesday, October 9, 2019 - 4:00:26 PM
Last modification on: Friday, January 8, 2021 - 5:36:04 PM


Files produced by the author(s)


  • HAL Id: tel-02309812, version 1


Thomas Robert. Improving Latent Representations of ConvNets for Visual Understanding. Computer Vision and Pattern Recognition [cs.CV]. Sorbonne Université, 2019. English. ⟨tel-02309812v1⟩


