Learning shape spaces of dressed 3D human models in motion

Abstract: 3D virtual representations of dressed humans appear in movies, video games and, more recently, VR content. To generate these representations, one usually performs 3D acquisition or synthesizes sequences with physics-based simulation or other computer graphics techniques such as rigging and skinning. These traditional methods generally require tedious manual intervention and, owing to the complexity of clothing motion, produce new content slowly or at low quality. To address this problem, we propose in this work a data-driven learning approach that can take both captures and simulated sequences as training data, and output unseen 3D shapes of dressed humans with different body shapes, body motions, clothing fits and clothing materials.

Due to their lack of temporal coherence and semantic information, raw captures can hardly be used directly for analysis and learning. We therefore first propose an automatic method to extract the human body under clothing from unstructured 3D sequences. It exploits a statistical human body model, optimizing the model parameters so that the body surface always stays within, while remaining as close as possible to, the observed clothed surface throughout the sequence. We show that our method achieves results similar to or better than other state-of-the-art methods, and requires no manual intervention.

After extracting the human body under clothing, we propose a method to register the clothing surface with the help of isometric patches. Anatomical points on the human body model are first projected onto the clothing surface in each frame of the sequence; these projected points give the starting correspondences between clothing surfaces across the sequence. We then isometrically grow patches around these points to propagate the correspondences over the clothing surface. These dense correspondences subsequently guide a non-rigid registration that deforms a template mesh, giving the raw captures temporal coherence.

Based on the processed captures and simulated data, we finally propose a comprehensive statistical analysis of the clothing layer with a simple two-component model. It combines a PCA subspace reduction of the layer information with a generic parameter-regression model using neural networks, designed to regress from any semantic parameter whose variation is observed in a training set to the layer parameterization space. We show that our model not only reproduces previous re-targeting work, but also generalizes the data-synthesis capabilities to other semantic parameters such as body motion, clothing fit and physical material parameters, paving the way for many kinds of data-driven creation and augmentation applications.
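To make the body-extraction step concrete, here is a minimal sketch in Python of the kind of asymmetric objective described above: body vertices are attracted to the observed clothed scan but penalized heavily whenever they protrude outside it. The function, the weights and the use of a SciPy k-d tree are illustrative assumptions, not the thesis implementation.

```python
# A minimal sketch (assumed, not the thesis code) of a body-under-clothing
# energy: body vertices are pulled toward the clothed scan, with a strong
# asymmetric penalty for falling outside the clothing surface.
import numpy as np
from scipy.spatial import cKDTree

def body_under_clothing_energy(body_verts, scan_verts, scan_normals,
                               w_out=10.0, w_in=1.0):
    """Asymmetric point-to-surface energy.

    body_verts   : (N, 3) vertices of the posed body model (e.g. produced by
                   a statistical model; the model itself is assumed here).
    scan_verts   : (M, 3) points sampled on the clothed scan.
    scan_normals : (M, 3) outward unit normals of the scan.
    """
    tree = cKDTree(scan_verts)
    dist, idx = tree.query(body_verts)                  # nearest scan point
    # Signed offset along the scan normal: positive = outside the clothing.
    signed = np.einsum('ij,ij->i',
                       body_verts - scan_verts[idx], scan_normals[idx])
    outside = np.maximum(signed, 0.0)
    # Heavy penalty for protruding outside, mild attraction toward the cloth.
    return w_out * np.sum(outside ** 2) + w_in * np.sum(dist ** 2)
```

In practice such an energy would be minimized over the shape and pose parameters of the statistical body model with a gradient-based optimizer, with the shape parameters shared across all frames of the sequence.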
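The patch-growing idea can likewise be sketched as a geodesic disk grown by Dijkstra's algorithm around a seed point (e.g. a projected anatomical landmark). Because clothing deforms nearly isometrically, vertices at equal geodesic radius in two frames are candidate correspondences. All names below are illustrative, assuming a triangle mesh given as vertices plus an edge list.

```python
# A minimal sketch of isometric patch growing: Dijkstra over the mesh edge
# graph returns geodesic distances from a seed, truncated at a given radius.
import heapq
import numpy as np

def grow_isometric_patch(verts, edges, seed, radius):
    """Return {vertex: geodesic distance} for all vertices within `radius`
    of `seed`, using Dijkstra over the mesh edge graph.

    verts : (N, 3) vertex positions
    edges : iterable of (i, j) index pairs (mesh edges)
    """
    # Build adjacency with Euclidean edge lengths as weights.
    adj = {}
    for i, j in edges:
        w = float(np.linalg.norm(verts[i] - verts[j]))
        adj.setdefault(i, []).append((j, w))
        adj.setdefault(j, []).append((i, w))

    dist = {seed: 0.0}
    heap = [(0.0, seed)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, np.inf):      # stale queue entry
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd <= radius and nd < dist.get(v, np.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Matching vertices at equal geodesic radius across frames (disambiguated, for instance, by their angular order around the seed) yields the dense correspondences that drive the subsequent non-rigid template registration.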
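Finally, the two-component model maps directly to a short sketch: PCA compresses per-vertex clothing-layer offsets, and a small neural network regresses from semantic parameters (body shape, motion, clothing fit, material) to PCA coefficients. The dimensions, the random stand-in data and the choice of scikit-learn are assumptions for illustration only.

```python
# A minimal sketch of the two-component model: PCA subspace reduction of the
# clothing layer plus a neural-network regressor from semantic parameters to
# subspace coefficients. Stand-in random data replaces real training layers.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# Training data: K examples, each a flattened clothing layer (N verts * 3)
# paired with a vector of P semantic parameters (illustrative sizes).
K, N, P = 500, 4000, 10
layers = np.random.randn(K, N * 3)       # stand-in for real clothing layers
semantics = np.random.randn(K, P)        # stand-in for semantic parameters

# Component 1: linear subspace of the clothing layer.
pca = PCA(n_components=50)
coeffs = pca.fit_transform(layers)

# Component 2: regression from semantics to subspace coefficients.
reg = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=2000)
reg.fit(semantics, coeffs)

# Synthesis: predict coefficients for unseen semantics, decode with PCA.
new_semantics = np.random.randn(1, P)
new_layer = pca.inverse_transform(reg.predict(new_semantics)).reshape(N, 3)
```

Synthesizing a clothing layer for an unseen combination of semantic parameters then amounts to one regressor evaluation followed by one PCA decoding, which is what enables the retargeting and augmentation applications mentioned above.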

https://tel.archives-ouvertes.fr/tel-02091727
Contributor: Abes Star
Submitted on: Wednesday, July 17, 2019 - 3:48:08 PM
Last modification on: Sunday, July 21, 2019 - 1:17:40 AM

File

YANG_2019_a.pdf
Version validated by the jury (STAR)

Identifiers

  • HAL Id: tel-02091727, version 2

Citation

Jinlong Yang. Learning shape spaces of dressed 3D human models in motion. Modeling and Simulation. Université Grenoble Alpes, 2019. English. ⟨NNT : 2019GREAM008⟩. ⟨tel-02091727v2⟩
