Deep Learning Applied to Emotion Recognition in Speech (Apprentissage profond appliqué à la reconnaissance des émotions dans la voix)

Abstract: This thesis deals with the application of artificial intelligence to the automatic classification of audio sequences according to the emotional state of the customer during a commercial phone call. The goal is to improve on existing data preprocessing and machine learning models, and to propose a model that performs as well as possible on IEMOCAP, the reference audio dataset. We draw on previous work on deep neural networks for automatic speech recognition and extend it to the speech emotion recognition task. We are therefore interested in end-to-end neural architectures that perform the classification task, including the autonomous extraction of acoustic features from the audio signal. Traditionally, the audio signal is preprocessed into paralinguistic features, as part of an expert approach. We choose a naive approach to data preprocessing that does not rely on specialized paralinguistic knowledge, and compare it with the expert approach. In the naive approach, the raw audio signal is transformed into a time-frequency spectrogram using a short-term Fourier transform.

Applying a neural network to a prediction task raises several issues. On the one hand, the best possible hyperparameters must be identified. On the other hand, biases present in the database should be minimized (non-discrimination), for example by adding data and by taking the characteristics of the chosen dataset into account. We study these aspects in order to develop an end-to-end neural architecture that combines convolutional layers, specialized in modeling visual information, with recurrent layers, specialized in modeling temporal information. We propose a deep supervised learning model that is competitive with the current state of the art when trained on the IEMOCAP dataset, justifying its use for the rest of the experiments. This classification model consists of a four-layer convolutional neural network and a bidirectional long short-term memory recurrent neural network (BLSTM). Our model is evaluated on two English audio databases proposed by the scientific community: IEMOCAP and MSP-IMPROV. A first contribution is to show that, with a deep neural network, we obtain high performance on IEMOCAP and promising results on MSP-IMPROV.

Another contribution of this thesis is a comparative study of the output values of the layers of the convolutional module and of the recurrent module, according to the data preprocessing method used: spectrograms (naive approach) or paralinguistic features (expert approach). We analyze the data by emotion class using the Euclidean distance, a deterministic proximity measure, and try to understand the characteristics of the emotional information that the network extracts autonomously. The aim is to contribute to research on understanding the deep neural networks used in speech emotion recognition, and to bring more transparency and explainability to systems whose decision-making mechanisms remain poorly understood.
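
To make the naive preprocessing concrete, here is a minimal Python sketch of the transformation the abstract describes: a raw waveform becomes a log-magnitude time-frequency spectrogram via a short-term Fourier transform. The sample rate, 25 ms window, and 10 ms hop are common defaults assumed for illustration, not the settings used in the thesis.

import numpy as np
from scipy.signal import stft

def waveform_to_spectrogram(signal, sample_rate=16000, win_ms=25, hop_ms=10):
    # Window and hop lengths in samples (assumed values, not the thesis's).
    nperseg = int(sample_rate * win_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    _, _, Zxx = stft(signal, fs=sample_rate, nperseg=nperseg,
                     noverlap=nperseg - hop)
    # Log-compressed magnitude: the time-frequency "image" fed to the network.
    return np.log1p(np.abs(Zxx))  # shape: (freq_bins, frames)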
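
The architecture itself is only outlined in the abstract: four convolutional layers followed by a bidirectional LSTM and a classifier. The PyTorch sketch below is one plausible reading of that outline; the channel counts, kernel sizes, pooling scheme, hidden size, and the four emotion classes are placeholder assumptions, not values taken from the thesis.

import torch
import torch.nn as nn

class CnnBlstmClassifier(nn.Module):
    def __init__(self, n_freq_bins=201, n_classes=4, hidden=128):
        super().__init__()
        layers, in_ch = [], 1
        for out_ch in (16, 32, 64, 64):  # four convolutional layers (assumed widths)
            layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool2d(kernel_size=(2, 1))]  # halve the frequency axis
            in_ch = out_ch
        self.conv = nn.Sequential(*layers)
        feat_dim = 64 * (n_freq_bins // 2 ** 4)  # channels x remaining freq bins
        self.blstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                             bidirectional=True)
        self.classify = nn.Linear(2 * hidden, n_classes)

    def forward(self, spec):                    # spec: (batch, freq_bins, frames)
        x = self.conv(spec.unsqueeze(1))        # (batch, ch, freq', frames)
        x = x.flatten(1, 2).transpose(1, 2)     # (batch, frames, ch * freq')
        x, _ = self.blstm(x)                    # (batch, frames, 2 * hidden)
        return self.classify(x.mean(dim=1))     # pool over time, emit class logits

For example, CnnBlstmClassifier()(torch.randn(8, 201, 300)) returns one score per emotion class for each of eight 300-frame spectrograms.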
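
Finally, the layer-output analysis relies on the Euclidean distance as a deterministic proximity measure. The sketch below shows one way such a comparison could be set up, assuming one activation vector per utterance from a chosen layer together with its emotion label; the thesis's exact protocol may differ.

import numpy as np
from scipy.spatial.distance import pdist, squareform

def class_distance_summary(activations, labels):
    # activations: (n_utterances, n_features) layer outputs;
    # labels: (n_utterances,) integer emotion classes. Both are assumed inputs.
    labels = np.asarray(labels)
    dists = squareform(pdist(activations, metric="euclidean"))
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    within = dists[same & off_diag].mean()   # same-emotion pairs
    between = dists[~same].mean()            # different-emotion pairs
    return within, between

A within-class mean clearly smaller than the between-class mean would indicate that the chosen layer groups utterances by emotion.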

Cited literature: 167 references

https://tel.archives-ouvertes.fr/tel-02479126
Contributor: Abes Star
Submitted on: Friday, February 14, 2020 - 12:43:17 PM
Last modification on: Wednesday, September 16, 2020 - 5:34:11 PM
Long-term archiving on: Friday, May 15, 2020 - 3:32:25 PM

File

86189_ETIENNE_2019_archivage.p...
Version validated by the jury (STAR)

Identifiers

  • HAL Id: tel-02479126, version 1

Citation

Caroline Etienne. Apprentissage profond appliqué à la reconnaissance des émotions dans la voix. Intelligence artificielle [cs.AI]. Université Paris-Saclay, 2019. Français. ⟨NNT : 2019SACLS517⟩. ⟨tel-02479126⟩

Metrics

Record views: 561
File downloads: 845