Learning robust speech representation with an articulatory-regularized variational autoencoder
Conference paper, 2021


Abstract

It is increasingly considered that human speech perception and production both rely on articulatory representations. In this paper, we investigate whether this type of representation can improve the performance of a deep generative model (here, a variational autoencoder) trained to encode and decode acoustic speech features. First, we develop an articulatory model able to associate articulatory parameters describing the jaw, tongue, lips, and velum configurations with vocal tract shapes and spectral features. Then, we incorporate these articulatory parameters into a variational autoencoder applied to spectral features, using a regularization technique that constrains part of the latent space to represent articulatory trajectories. We show that this articulatory constraint improves model training, decreasing both the time to convergence and the reconstruction loss at convergence, and yields better performance in a speech-denoising task.
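To make the regularization scheme concrete, below is a minimal PyTorch sketch of the idea (not the authors' implementation; the architecture, dimensions, and names such as art_dim and lambda_art are illustrative assumptions). The first art_dim latent dimensions of the VAE are pushed, via an extra mean-squared-error term in the loss, to track the measured articulatory trajectories, while the remaining dimensions stay unconstrained.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ArticulatoryRegularizedVAE(nn.Module):
    """Toy VAE whose first `art_dim` latent dimensions are encouraged
    to match articulatory parameters (jaw, tongue, lips, velum)."""
    def __init__(self, spec_dim=40, latent_dim=16, art_dim=8, hidden=128):
        super().__init__()
        self.art_dim = art_dim
        self.encoder = nn.Sequential(nn.Linear(spec_dim, hidden), nn.ReLU())
        self.fc_mu = nn.Linear(hidden, latent_dim)
        self.fc_logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, spec_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

def loss_fn(model, x, art, beta=1.0, lambda_art=1.0):
    # x: batch of spectral feature frames, shape [B, spec_dim]
    # art: matching articulatory parameters, shape [B, art_dim]
    x_hat, mu, logvar = model(x)
    recon = F.mse_loss(x_hat, x)                                   # reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
    # Articulatory regularization: constrain part of the latent space
    # to represent the articulatory trajectory.
    art_reg = F.mse_loss(mu[:, :model.art_dim], art)
    return recon + beta * kl + lambda_art * art_reg

Setting lambda_art = 0 recovers a plain VAE, which gives a natural baseline for measuring the effect of the articulatory constraint on convergence and denoising performance.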
Main file: georges-interspeech2021-final.pdf (529.94 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03373252, version 1 (13-10-2021)

Cite

Marc-Antoine Georges, Laurent Girin, Jean-Luc Schwartz, Thomas Hueber. Learning robust speech representation with an articulatory-regularized variational autoencoder. Interspeech 2021 - 22nd Annual Conference of the International Speech Communication Association, Aug 2021, Brno, Czech Republic. pp.3345-3349, ⟨10.21437/Interspeech.2021-1604⟩. ⟨hal-03373252⟩