
GAN-based face image synthesis and its application to face recognition

Abstract : Recently, with the development of deep Convolutional Neural Networks (CNNs) and large-scale datasets, face recognition (FR) has made remarkable progress. However, recognizing faces in large poses and under heavy occlusion remains a major challenge due to unbalanced training data. Thanks to Generative Adversarial Networks (GANs), synthesizing photorealistic multi-view faces and unveiling heavily occluded face images has become feasible, which significantly facilitates FR and has a wide range of applications in entertainment and the arts. This thesis provides an in-depth study of GAN-based face image synthesis and its application to FR. Current facial image synthesis methods follow two main research lines, i.e., 2D-based and 3D-reconstruction-based approaches; our work covers both.

For 2D-based face pose editing, current methods focus primarily on preserving identity but are less able to preserve the image style, i.e., color, brightness, saturation, etc. This thesis proposes a novel two-stage approach to this style-inconsistency problem, casting face pose manipulation as pixel sampling followed by face inpainting. Because pixels are sampled directly from the input image, the editing result faithfully preserves both the identity and the image style.

For traditional 3D face reconstruction, the linear, low-dimensional nature of the 3D Morphable Model (3DMM) prevents the reconstructed textures from capturing high-frequency details, resulting in blurred textures that are far from satisfactory. Some recent 3D face reconstruction methods have leveraged adversarial training to improve texture quality, but they rely either on scarce, non-public 3D face data or on complex and costly optimization procedures. This thesis proposes a high-fidelity texture generation method that predicts the global texture of the 3D face from a single input face image. Training is supervised by a pseudo ground truth blended from the 3DMM texture and the input face texture, and multiple partial UV-map discriminators handle the artifacts in this pseudo ground truth.

For face de-occlusion, we propose a Segmentation-Reconstruction-Guided face de-occlusion GAN consisting of three parts: a 3DMM parameter regression module N_, a face segmentation module NS, and an image generation module NG. With the texture prior provided by N_ and the occluded regions indicated by NS, NG can faithfully recover the missing textures. The proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
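As an illustration of the two-stage pose-editing idea summarized above, the sketch below shows one way pixel sampling and face inpainting could be composed. The names `flow`, `validity`, and `inpainter`, and their shapes, are assumptions for illustration only, not the thesis implementation.

```python
# Minimal sketch of a two-stage pose edit: sample, then inpaint (assumed interfaces).
import torch
import torch.nn.functional as F

def pose_edit(face: torch.Tensor,
              flow: torch.Tensor,
              validity: torch.Tensor,
              inpainter: torch.nn.Module) -> torch.Tensor:
    """
    face     : (B, 3, H, W) input face image.
    flow     : (B, H, W, 2) sampling grid in [-1, 1] mapping each target pixel to a source location.
    validity : (B, 1, H, W) mask, 1 where a valid source pixel exists for the target pose.
    """
    # Stage 1: pixel sampling -- colors come directly from the input image,
    # so the image style (color, brightness, saturation) is preserved by construction.
    sampled = F.grid_sample(face, flow, mode='bilinear', align_corners=True)
    # Stage 2: face inpainting fills the regions that have no valid source pixel.
    return inpainter(torch.cat([sampled * validity, validity], dim=1))
```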
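For the texture-generation method, the abstract mentions a pseudo ground truth blended from the 3DMM texture and the input face texture. A minimal sketch of such a blend, assuming all tensors already share the same UV resolution and a soft visibility mask is available, could look like this:

```python
# Sketch of pseudo ground-truth blending in UV space (assumed tensor layout).
import torch

def blend_pseudo_gt(tex_3dmm: torch.Tensor,
                    tex_unwrapped: torch.Tensor,
                    visibility: torch.Tensor) -> torch.Tensor:
    """
    tex_3dmm      : (3, H, W) coarse UV texture reconstructed by the 3DMM fit.
    tex_unwrapped : (3, H, W) input-image texture sampled into the same UV space.
    visibility    : (1, H, W) soft mask in [0, 1]; 1 where the input face is visible.
    """
    # Keep the high-frequency detail of the input where it is visible,
    # fall back to the smooth 3DMM texture elsewhere.
    return visibility * tex_unwrapped + (1.0 - visibility) * tex_3dmm
```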
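The de-occlusion pipeline combines the three modules N_, NS, and NG. The sketch below wires hypothetical stand-ins for these modules together in PyTorch; the module interfaces and the way their outputs are concatenated are assumptions for illustration, not the thesis architecture.

```python
# Sketch of the three-module de-occlusion pipeline (stand-in modules, assumed I/O shapes).
import torch
import torch.nn as nn

class DeOcclusionPipeline(nn.Module):
    """Hypothetical stand-ins for the regression (N_), segmentation (NS),
    and generation (NG) modules described in the abstract."""

    def __init__(self, regressor: nn.Module, segmenter: nn.Module, generator: nn.Module):
        super().__init__()
        self.regressor = regressor   # predicts 3DMM parameters and renders a texture prior
        self.segmenter = segmenter   # predicts a per-pixel occlusion mask
        self.generator = generator   # recovers the missing textures

    def forward(self, face: torch.Tensor) -> torch.Tensor:
        # face: (B, 3, H, W) occluded input image
        texture_prior = self.regressor(face)    # (B, 3, H, W) rendered 3DMM texture (assumed)
        occlusion_mask = self.segmenter(face)   # (B, 1, H, W), 1 = occluded (assumed)
        visible = face * (1.0 - occlusion_mask)
        # The generator sees the visible pixels, the occlusion mask, and the texture prior.
        return self.generator(torch.cat([visible, occlusion_mask, texture_prior], dim=1))
```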
Document type :
Theses

https://tel.archives-ouvertes.fr/tel-03739829
Contributor : ABES STAR
Submitted on : Thursday, July 28, 2022 - 11:55:12 AM
Last modification on : Tuesday, August 2, 2022 - 4:12:20 AM

File

TH_T2829_xyin.pdf
Version validated by the jury (STAR)

Identifiers

  • HAL Id : tel-03739829, version 1

Citation

Xiangnan Yin. GAN-based face image synthesis and its application to face recognition. Other. Université de Lyon, 2022. English. ⟨NNT : 2022LYSEC021⟩. ⟨tel-03739829⟩
