
Reconnaissance automatique des gestes de la langue française parlée complétée

Abstract: Cued Speech facilitates communication for hearing-impaired people by complementing lip-reading. Its principle is to add manual gestures near the face in order to disambiguate lip motion, which by itself is insufficient for a complete understanding of the message. The goal of the Telephony for the Hearing Impaired project is to build a terminal that allows communication based on French Cued Speech. Among the many functionalities this requires, it is mandatory to automatically recognize French Cued Speech manual gestures. The subject of this work is the segmentation, analysis and recognition of Cued Speech gestures. It draws on image and video processing techniques as well as data fusion, classification and gesture recognition techniques. To achieve this goal, we have developed several original algorithms: (1) a bio-inspired filter which quantifies the amount of motion in a video by integrating retinal processing; (2) a new combination technique for multi-classification via SVMs or unary classifiers, based on belief theories, from which a transform from belief functions to probabilities is derived; (3) a partial decision method based on a generalisation of the Pignistic Transform, which allows some uncertainty to remain when processing ambiguous gestures.
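The thesis builds on the classical Pignistic Transform from belief-function theory, which converts a mass function over subsets of hypotheses into a probability distribution by sharing each focal element's mass equally among its members. A minimal sketch of that classical transform (not the thesis's generalisation; the function and data names here are illustrative):

```python
def pignistic(m, frame):
    """Classical Pignistic Transform.

    m: mass function, a dict mapping frozensets of hypotheses to masses.
    frame: the set of all singleton hypotheses.
    Returns a probability distribution over the singletons.
    """
    # Discount any mass assigned to the empty set (open-world assumption).
    norm = 1.0 - m.get(frozenset(), 0.0)
    betp = {x: 0.0 for x in frame}
    for focal, mass in m.items():
        if not focal:
            continue
        # Each element of a focal set receives an equal share of its mass.
        share = mass / (len(focal) * norm)
        for x in focal:
            betp[x] += share
    return betp

# Example: 0.6 committed to gesture "a", 0.4 undecided between "a" and "b".
m = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
print(pignistic(m, {"a", "b"}))  # a: 0.6 + 0.2 = 0.8, b: 0.2
```

The ambiguity the abstract mentions shows up in the mass placed on non-singleton sets such as {a, b}; the thesis's partial-decision method generalises this transform so that such ambiguous gestures need not be forced into a single class.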
Contributor: Thomas Burger
Submitted on: Wednesday, January 9, 2008 - 5:09:59 PM
Last modification on: Thursday, November 19, 2020 - 12:59:34 PM
Long-term archiving on: Tuesday, April 13, 2010 - 4:51:11 PM


  • HAL Id : tel-00203360, version 1



Thomas Burger. Reconnaissance automatique des gestes de la langue française parlée complétée. Interface homme-machine [cs.HC]. Institut National Polytechnique de Grenoble - INPG, 2007. Français. ⟨tel-00203360⟩


