Appearance Modelling for 4D Representations

Vagia Tsiminaki
MORPHEO - Capture and Analysis of Shapes in Motion
Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann, INPG - Institut National Polytechnique de Grenoble
Abstract: Capturing spatio-temporal models (4D modelling) from real-world imagery has received growing interest in recent years, driven by the increasing demands of real-world applications and the tremendous amount of easily accessible image data. The general objective is to produce realistic representations of the world from captured video sequences. Although geometric modelling has already reached a high level of maturity, the appearance aspect has not been fully explored. This thesis addresses the problem of appearance modelling for realistic spatio-temporal representations. We propose a view-independent, high-resolution appearance representation that successfully encodes the high visual variability of objects under various movements.

First, we introduce a common appearance space to express all the available visual information from the captured images. In this space we define the representation of the global appearance of the subject. We then introduce a linear image formation model to simulate the capturing process and to express the multi-camera observations as different realizations of the common appearance. Observing that the principle behind super-resolution also governs our multi-view scenario, we extend the image generative model to accommodate it and use Bayesian inference to solve for the super-resolved common appearance.

Second, we propose a temporally coherent appearance representation. We extend the image formation model to generate images of the subject captured over a small time interval. Our starting point is the observation that the appearance of the subject does not change dramatically within a predefined small time interval, so the visual information from each view and each frame corresponds to the same appearance representation. We use Bayesian inference to exploit the redundant as well as the hidden non-redundant visual information across time, in order to obtain an appearance representation with fine details.

Third, we leverage the interdependency of geometry and photometry and estimate appearance and geometry jointly. We show that joint estimation enhances the geometry globally, which in turn leads to a significant appearance improvement.

Finally, to further encode the dynamic appearance variability of objects that undergo several movements, we cast appearance modelling as a dimensionality reduction problem. We propose a view-independent representation which builds on PCA and decomposes the underlying appearance variability into Eigen textures and Eigen warps. The proposed framework is shown to accurately reproduce appearances with compact representations and to resolve appearance interpolation and completion tasks.
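To make the super-resolution idea in the abstract concrete, here is a minimal sketch of a multi-view image formation and MAP estimation objective. The notation (T for the common appearance map, W_c, B, D for warping, blur, and decimation) is illustrative shorthand, not the thesis's exact formulation.

```latex
% Minimal sketch of a multi-view super-resolution formation model (illustrative notation).
% Each observed image I_c is modelled as a degraded rendering of a common high-resolution appearance T:
\[
  I_c = D \, B \, W_c \, T + n_c, \qquad n_c \sim \mathcal{N}(0, \sigma^2 \mathbf{I}),
\]
% where W_c warps T into view c through the geometry, B applies the camera point-spread function,
% D decimates to sensor resolution, and n_c is observation noise.
% Bayesian (MAP) inference of the common appearance then amounts to minimising
\[
  \hat{T} = \arg\min_{T} \; \sum_{c} \bigl\| I_c - D \, B \, W_c \, T \bigr\|^{2} + \lambda \, R(T),
\]
% with R(T) a prior on the appearance map and \lambda its weight.
```

Extending the sum over frames in a small time window, rather than only over cameras, corresponds to the temporally coherent variant described in the second contribution.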
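The final contribution casts appearance modelling as dimensionality reduction via PCA (Eigen textures and Eigen warps). Below is a minimal sketch of the texture half of that idea, assuming texture maps have already been extracted, aligned, and vectorised into the rows of a matrix X; the function names and shapes are illustrative, and the Eigen-warp part (PCA over warp fields) is omitted.

```python
# Minimal sketch: PCA over a stack of vectorised texture maps ("Eigen textures").
# Assumption: X has shape (n_frames, n_pixels), one aligned texture map per row.
import numpy as np

def eigen_textures(X, k):
    """Return the mean texture, the first k principal components, and per-frame coefficients."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Thin SVD: rows of Vt are principal directions in texture space.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]            # (k, n_pixels)  "Eigen textures"
    coeffs = Xc @ components.T     # (n_frames, k)  compact per-frame code
    return mean, components, coeffs

def reconstruct(mean, components, coeffs):
    """Rebuild approximate textures from the compact PCA representation."""
    return mean + coeffs @ components

# Usage with synthetic data: 200 texture maps of 64x64 RGB pixels, flattened to rows.
X = np.random.rand(200, 64 * 64 * 3)
mean, comps, codes = eigen_textures(X, k=10)
approx = reconstruct(mean, comps, codes)   # low-rank approximation of X
```

Appearance interpolation and completion, as mentioned in the abstract, then operate on the compact coefficient vectors rather than on full-resolution texture maps.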
Document type: Thesis

https://tel.archives-ouvertes.fr/tel-01680802
Contributor: Abes Star
Submitted on: Thursday, January 11, 2018 - 11:10:13 AM
Last modification on: Friday, August 17, 2018 - 1:06:18 AM
Document(s) archived on: Wednesday, May 23, 2018 - 5:42:53 PM

File

TSIMINAKI_2016_archivage.pdf
Version validated by the jury (STAR)

Identifiers

  • HAL Id: tel-01680802, version 2

Citation

Vagia Tsiminaki. Appearance Modelling for 4D Representations. Image Processing [eess.IV]. Université Grenoble Alpes, 2016. English. ⟨NNT : 2016GREAM083⟩. ⟨tel-01680802v2⟩
