Egocentric Audio-Visual Scene Analysis : a machine learning and signal processing approach

Abstract : Over the past two decades, industry has developed several commercial products with audio-visual sensing capabilities. Most of them consist of a video camera with an embedded microphone (mobile phones, tablets, etc.). Others, such as the Kinect, include depth sensors and/or small microphone arrays, and some mobile phones are equipped with a stereo camera pair. At the same time, many research-oriented systems have become available (e.g., humanoid robots such as NAO). Since all these systems are small in volume, their sensors are close to each other; they therefore cannot capture the global scene, only one point of view of the ongoing social interplay. We refer to this as "Egocentric Audio-Visual Scene Analysis". This thesis contributes to this field in several ways. Firstly, it provides a publicly available data set targeting applications such as action/gesture recognition, speaker localization, tracking and diarisation, sound source localization, and dialogue modelling. This work has since been used both inside and outside the thesis. We also investigated the problem of audio-visual (AV) event detection. We showed how the trust placed in one of the modalities (the visual one, to be precise) can be modeled and used to bias the method, leading to a visually-supervised EM algorithm (ViSEM). We then adapted the approach to audio-visual speaker detection, yielding an online method running on the humanoid robot NAO. In parallel with the work on audio-visual speaker detection, we developed a new approach for audio-visual command recognition. We explored different features and classifiers and confirmed that using audio-visual data improves performance over audio-only and video-only classifiers. Later, we sought the best method using tiny training sets (5-10 samples per class). This matters because real systems need to adapt and learn new commands from the user.
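The idea of biasing EM with trust in the visual modality can be illustrated with a toy sketch: a two-component 1-D Gaussian mixture whose E-step responsibilities are re-weighted by a per-sample visual confidence. This is only an illustrative assumption of the general mechanism, not the thesis's actual ViSEM algorithm; the function name and the simple multiplicative weighting are hypothetical.

```python
import numpy as np

def visually_weighted_em(x, v, n_iter=50):
    """Toy 2-component 1-D Gaussian EM whose E-step responsibilities are
    biased by a visual confidence v[i] in (0, 1) for component 1.
    Illustrative sketch only -- not the thesis's ViSEM algorithm."""
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: weighted Gaussian likelihoods for each component
        lik = np.stack([
            pi[k] / np.sqrt(2 * np.pi * var[k])
            * np.exp(-0.5 * (x - mu[k]) ** 2 / var[k])
            for k in range(2)
        ], axis=1)
        # Visual bias: up-weight component 1 where visual evidence is strong
        lik[:, 1] *= v
        lik[:, 0] *= (1.0 - v)
        r = lik / lik.sum(axis=1, keepdims=True)  # responsibilities
        # M-step: standard weighted mean/variance/prior updates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return mu, var, pi
```

With two well-separated clusters and visual confidence high on one of them, the visually-biased responsibilities steer component 1 toward the visually-supported cluster.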
Such systems need to be operational after only a few examples to be usable by the general public. Finally, we contributed to the field of sound source localization, in the particular case of non-coplanar microphone arrays. This is interesting because the array geometry can then be arbitrary, which opens the door to dynamic microphone arrays that adapt their geometry to particular tasks, and because the design of commercial systems may be subject to constraints for which circular or linear arrays are not suited.
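Why arbitrary geometry suffices can be seen in the classical far-field model: under a plane-wave assumption, the time difference of arrival (TDOA) between microphone i and a reference microphone is a linear function of the source direction, so the direction can be recovered by least squares for any geometry with three linearly independent baselines (i.e., a non-coplanar array). The sketch below shows this standard formulation; it is a minimal illustration, not the thesis's actual method, and the function name is hypothetical.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def doa_from_tdoas(mics, taus):
    """Far-field direction of arrival from TDOAs for an arbitrary
    (ideally non-coplanar) microphone array.

    mics: (M, 3) microphone positions in metres.
    taus: (M-1,) delays of mics 1..M-1 relative to mic 0, in seconds.
    Returns a unit vector pointing from the array towards the source.
    Illustrative sketch, not the thesis's method."""
    A = mics[1:] - mics[0]           # (M-1, 3) baseline vectors
    b = -C * np.asarray(taus)        # plane-wave model: A @ s = -c * tau
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    return s / np.linalg.norm(s)
```

With a coplanar array, the baseline matrix `A` has rank 2 and the component of the direction normal to the array plane is unobservable; a non-coplanar array makes the least-squares problem fully determined, whatever its shape.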
Submitted on : Tuesday, March 31, 2015 - 5:17:05 PM


Version validated by the jury (STAR)


  • HAL Id : tel-00880117, version 2


Xavier Alameda-Pineda. Egocentric Audio-Visual Scene Analysis : a machine learning and signal processing approach. General Mathematics [math.GM]. Université de Grenoble, 2013. English. ⟨NNT : 2013GRENM024⟩. ⟨tel-00880117v2⟩


