B. Abboud and F. Davoine, Appearance factorization for facial expression analysis, Proceedings of the British Machine Vision Conference 2004, 2004.
DOI : 10.5244/C.18.53

URL : https://hal.archives-ouvertes.fr/hal-00143751

B. Abboud, F. Davoine, and M. Dang, Facial expression recognition and synthesis based on an appearance model, Signal Processing: Image Communication, vol.19, issue.8, pp.723-740, 2004.
DOI : 10.1016/j.image.2004.05.009

URL : https://hal.archives-ouvertes.fr/hal-00143335

A. Amini, T. Weymouth, and R. Jain, Using dynamic programming for solving variational problems in vision, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.12, issue.9, pp.855-867, 1990.
DOI : 10.1109/34.57681

K. Anderson and P. W. McOwan, A real-time automated system for the recognition of human facial expressions, IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics), vol.36, issue.1, pp.96-105, 2006.
DOI : 10.1109/TSMCB.2005.854502

R. Banse and K. R. Scherer, Acoustic profiles in vocal emotion expression., Journal of Personality and Social Psychology, vol.70, issue.3, pp.614-636, 1996.
DOI : 10.1037/0022-3514.70.3.614

J. N. Bassili, Facial motion in the perception of faces and of emotional expression, Journal of Experimental Psychology: Human Perception and Performance, vol.4, pp.373-379, 1978.

J. N. Bassili, Emotion recognition: The role of facial movement and the relative importance of upper and lower areas of the face, Journal of Personality and Social Psychology, vol.37, issue.11, pp.2049-2058, 1979.
DOI : 10.1037/0022-3514.37.11.2049

W. Beaudot, The neural information processing in the vertebrate retina: a melting pot of ideas for artificial vision, 1994.

J. A. Bilmes, A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models, 1998.

M. J. Black and P. Anandan, A framework for the robust estimation of optical flow, 1993 (4th) International Conference on Computer Vision, pp.231-236, 1993.
DOI : 10.1109/ICCV.1993.378214

M. J. Black and Y. Yacoob, Recognizing facial expressions in image sequences using local parameterized models of image motion, International Journal of Computer Vision, vol.25, issue.1, pp.23-48, 1997.
DOI : 10.1023/A:1007977618277

P. Boersma and D. Weenink, Praat speech processing software, Institute of Phonetic Sciences of the University of Amsterdam, 2001.

A. Bottino, Real time head and facial features tracking from uncalibrated monocular views, Proc. 5th Asian Conference on Computer Vision ACCV, pp.23-25, 2002.

J. D. Boucher and P. Ekman, Facial Areas and Emotional Information, Journal of Communication, vol.30, issue.2, pp.21-29, 1975.
DOI : 10.1111/j.1460-2466.1975.tb00577.x

D. H. Brainard, The Psychophysics Toolbox, Spatial Vision, vol.10, pp.433-436, 1997.

C. J. C. Burges, A tutorial on support vector machines for pattern recognition, Data Mining and Knowledge Discovery, vol.2, issue.2, pp.121-167, 1998.

G. T. Buswell, How people look at pictures, 1920.

C. Busso, Z. Deng, S. Yildirim, M. Bulut, C. M. Lee et al., Analysis of emotion recognition using facial expressions, speech and multimodal information, Proceedings of the 6th international conference on Multimodal interfaces , ICMI '04, 2004.
DOI : 10.1145/1027933.1027968

J. Canny, A computational approach to edge detection, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.8, issue.6, pp.679-698, 1986.

L. S. Chen, T. S. Huang, T. Miyasato, and R. Nakatsu, Multimodal human emotion/expression recognition, Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition, pp.396-401, 1998.
DOI : 10.1109/AFGR.1998.670976

L. S. Chen and T. S. Huang, Emotional expressions in audiovisual human computer interaction, 2000 IEEE International Conference on Multimedia and Expo. ICME2000. Proceedings. Latest Advances in the Fast Changing World of Multimedia (Cat. No.00TH8532), pp.423-426, 2000.
DOI : 10.1109/ICME.2000.869630

P. Chou, Optimal partitioning for classification and regression trees, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.13, issue.4, pp.340-354, 1991.
DOI : 10.1109/34.88569

J. F. Cohn, A. J. Zlochower, J. J. Lien, and T. Kanade, Feature-point tracking by optical flow discriminates subtle differences in facial expression, Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition, pp.396-401, 1998.
DOI : 10.1109/AFGR.1998.670981

I. Cohen, F. G. Cozman, N. Sebe, M. C. Cirelo, and T. S. Huang, Learning bayesian network classifiers for facial expression recognition using both labeled and unlabeled data, IEEE conference on Computer Vision and Pattern Recognition (CVPR), pp.16-22, 2003.

I. Cohen, N. Sebe, L. Chen, A. Garg, and T. S. Huang, Facial expression recognition from video sequences, Proceedings. IEEE International Conference on Multimedia and Expo, pp.160-187, 2003.
DOI : 10.1109/ICME.2002.1035527

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.8659

T. F. Cootes, G. J. Edwards, and C. J. Taylor, Active appearance models, Lecture Notes in Computer Science, pp.484-491, 1998.

M. Dailey, G. W. Cottrell, and J. Reilly, California Facial Expressions (CAFE), unpublished digital images, 2001.

C. Darwin, The expression of the emotions in man and animals, 1872.

L. C. De Silva, T. Miyasato, and R. Nakatsu, Facial emotion recognition using multimodal information, Proc. IEEE International Conference on Information, Communications and Signal Processing, 1997.

L. C. De Silva and P. C. Ng, Bimodal emotion recognition, Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580), pp.332-335, 2000.
DOI : 10.1109/AFGR.2000.840655

D. Deliyski, Acoustic model and evaluation of pathological voice production, Proc. 3rd European Conference on Speech Communication and Technology (EUROSPEECH), pp.1969-1972, 1993.

A. Dempster, A Generalization of Bayesian Inference, Journal of the Royal Statistical Society, Series B, vol.30, pp.205-247, 1968.
DOI : 10.1007/978-3-540-44792-4_4

X. Deng, C. H. Chang, and E. Brandle, A new method for eye extraction from facial image, Proc. 2nd IEEE international workshop on electronic design, test and applications (DELTA), pp.29-34, 2004.

P. R. Devijver and J. Kittler, Pattern Recognition: A Statistical Approach, 1982.

D. F. Dinges, M. Mallis, G. Maislin, and J. W. Powell, Evaluation of techniques for ocular measurement as an index of fatigue and the basis for alertness management, U.S. Dept. of Transportation, National Highway Traffic Safety Administration, Report No. DOT HS 808 762, 1998.

G. Donato, M. S. Bartlett, J. C. Hager, P. Ekman, and T. J. Sejnowski, Classifying facial actions, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.21, issue.10, pp.974-989, 1999.
DOI : 10.1109/34.799905

F. Dornaika and F. Davoine, Online appearance-based face and facial feature tracking, Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004., pp.814-817, 2004.
DOI : 10.1109/ICPR.2004.1334653

URL : https://hal.archives-ouvertes.fr/hal-00143752

F. Dornaika, Head and facial animation tracking using appearance-adaptive models and particle filters, Workshop Real-Time Vision for Human-Computer Interaction RTV4HCI in conjunction with CVPR, 2004.
URL : https://hal.archives-ouvertes.fr/hal-00143756

S. Dubuisson, F. Davoine, and M. Masson, A solution for facial expression representation and recognition, Signal Processing: Image Communication, pp.657-673, 2002.
DOI : 10.1016/S0923-5965(02)00076-0

URL : https://hal.archives-ouvertes.fr/hal-00143701

R. O. Duda, P. E. Hart, and D. G. Stork, Pattern classification, 2001.

Y. Ebisawa and S. Satoh, Effectiveness of pupil area detection technique using two light sources and image difference method, Proceedings of the 15th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp.1268-1269, 1993.
DOI : 10.1109/IEMBS.1993.979129

P. Ekman, W. V. Friesen, and P. Ellsworth, Emotion in the human face, 1972.

P. Ekman and W. V. Friesen, Facial Action Coding System, Consulting Psychologists Press, 1978.

P. Ekman, W. V. Friesen, and P. Ellsworth, Research foundations, in P. Ekman (Ed.), Emotion in the Human Face, 1982.

P. Ekman, Facial expression, in The Handbook of Cognition and Emotion, 1999.

I. S. Engberg and A. V. Hansen, Documentation of the Danish Emotional Speech Database (DES), 1996.

I. A. Essa and A. P. Pentland, Coding, analysis, interpretation, and recognition of facial expressions, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.19, issue.7, pp.757-763, 1997.
DOI : 10.1109/34.598232

N. Eveno, A. Caplier, and P. Y. Coulon, New color transformation for lips segmentation, 2001 IEEE Fourth Workshop on Multimedia Signal Processing (Cat. No.01TH8564), 2001.
DOI : 10.1109/MMSP.2001.962702

N. Eveno, A. Caplier, and P. Y. Coulon, A parametric model for realistic lip segmentation, 7th International Conference on Control, Automation, Robotics and Vision, 2002. ICARCV 2002., 2002.
DOI : 10.1109/ICARCV.2002.1234982

N. Eveno, Segmentation des lèvres par un modèle déformable analytique, 2003.

N. Eveno, A. Caplier, and P. Y. Coulon, Jumping snakes and parametric model for lip segmentation, Proceedings 2003 International Conference on Image Processing (Cat. No.03CH37429), 2003.
DOI : 10.1109/ICIP.2003.1246818

N. Eveno, A. Caplier, and P. Y. Coulon, Accurate and Quasi-Automatic Lip Tracking, IEEE Transactions on Circuits and Systems for Video Technology, vol.14, issue.5, pp.706-715, 2004.
DOI : 10.1109/TCSVT.2004.826754

B. Fasel and J. Luettin, Automatic facial expression analysis: A survey, Pattern Recognition, vol.36, issue.1, pp.259-275, 2003.

I. Fasel, B. Fortenberry, and J. Movellan, A generative framework for real time object detection and classification, Computer Vision and Image Understanding, vol.98, issue.1, pp.182-210, 2005.
DOI : 10.1016/j.cviu.2004.07.014

Y. Gao and M. K. Leung, Human face recognition using line edge maps, Proc. IEEE 2nd Workshop Automatic Identification Advanced Technology, pp.173-176, 1999.

Y. Gao, M. K. H. Leung, S. C. Hui, and M. W. Tananda, Facial expression recognition from line-based caricatures, IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 2003.

K. Gouta and M. Miyamoto, Emotion recognition. Facial components associated with various emotions., The Japanese journal of psychology, vol.71, issue.3, pp.211-218, 2000.
DOI : 10.4992/jjpsy.71.211

D. J. Hand, Discrimination and classification, 1981.

D. W. Hansen and A. E. Pece, Eye tracking in the wild, Computer Vision and Image Understanding, vol.98, issue.1, pp.155-181, 2005.
DOI : 10.1016/j.cviu.2004.07.013

E. Hjelmås and B. K. Low, Face Detection: A Survey, Computer Vision and Image Understanding, vol.83, pp.236-274, 2001.
DOI : 10.1006/cviu.2001.0921

J. Hérault, A model of colour processing in the retina of vertebrates: From photoreceptors to colour opposition and colour constancy phenomena, Neurocomputing, vol.12, issue.2-3, pp.113-129, 1996.
DOI : 10.1016/0925-2312(95)00114-X

C. L. Huang and Y. M. Huang, Facial expression recognition using model-based feature extraction and action parameters classification, Journal of Visual Communication and Image Representation, pp.278-290, 1997.

T. E. Hutchinson, K. P. White, W. N. Martin, K. C. Reichert, and L. A. Frey, Human-computer interaction using eye-gaze input, IEEE Transactions on Systems, Man and Cybernetics, vol.19, pp.1527-1533, 1989.

INTERFACE Project, Multimodal analysis/synthesis system for human interaction to virtual and augmented environments, EC IST-1999-No 10036, coordinator F. Lavagetto, 2000.

Q. Ji and X. Yang, Real Time Visual Cues Extraction for Monitoring Driver Vigilance, Lecture Notes in Computer Science, 2001.
DOI : 10.1007/3-540-48222-9_8

Q. Ji and Z. Zhu, Eye and gaze tracking for interactive graphic display, Proceedings of the 2nd international symposium on Smart graphics , SMARTGRAPH '02, pp.79-85, 2002.
DOI : 10.1145/569005.569017

Q. Ji, Z. Zhu, and P. Lan, Real-Time Nonintrusive Monitoring and Prediction of Driver Fatigue, IEEE Transactions on Vehicular Technology, vol.53, issue.4, pp.1052-1068, 2004.
DOI : 10.1109/TVT.2004.830974

P. N. Juslin and P. Laukka, Impact of intended emotion intensity on cue utilization and decoding accuracy in vocal expression of emotion., Emotion, vol.1, issue.4, pp.381-412, 2001.
DOI : 10.1037/1528-3542.1.4.381

P. N. Juslin and P. Laukka, Communication of emotions in vocal expression and music performance: Different channels, same code?, Psychological Bulletin, vol.129, issue.5, pp.770-814, 2003.
DOI : 10.1037/0033-2909.129.5.770

R. Kallel, M. Cottrell, and V. Vigneron, Bootstrap for neural model selection, Neurocomputing, vol.48, issue.1-4, pp.175-183, 2002.
DOI : 10.1016/S0925-2312(01)00650-6

URL : https://hal.archives-ouvertes.fr/hal-00258886

M. Kampmann and L. Zhang, Estimation of eye, eyebrow and nose features in videophone sequences, International Workshop on Very Low Bitrate Video Coding (VLBV 98), pp.101-104, 1998.

M. Kass, A. Witkin, and D. Terzopoulos, Snakes: Active contour models, International Journal of Computer Vision, vol.1, issue.4, pp.321-331, 1988.
DOI : 10.1007/BF00133570

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.124.5318

H. Kashima, H. Hongo, K. Kato, and K. Yamamoto, A robust iris detection method of facial and eye movement, Proc. Vision Interface Annual Conference, pp.7-9, 2001.

R. Kjeldsen and J. Kender, Finding skin in color images, Face and Gesture Recognition, pp.312-317, 1996.

J. G. Ko, K. N. Kim, and R. S. Ramakrishna, Facial feature tracking for eye-head controlled human computer interface, IEEE TENCON, 1999.

I. Kononenko, Estimating attributes: Analysis and extensions of RELIEF, Proc. European conference on Machine Learning, pp.171-182, 1994.
DOI : 10.1007/3-540-57868-4_57

C. M. Lee, S. Yildirim, M. Bulut, A. Kazemzadeh, C. Busso et al., Emotion recognition based on phoneme classes, Proc. International Conference on Spoken Language Processing, Jeju Island (Korea), 2004.

J. R. Leigh and D. S. Zee, The neurology of eye movements, FA Davis Company, 1983.

J. J. Lien, T. Kanade, J. F. Cohn, and C. C. Li, Subtly different facial expression recognition and expression intensity estimation, Proc. Computer Vision and Pattern Recognition (CVPR), pp.853-859, 1998.

Y. Linde, A. Buzo, and R. M. Gray, An Algorithm for Vector Quantizer Design, IEEE Transactions on Communications, vol.28, issue.1, pp.84-95, 1980.
DOI : 10.1109/TCOM.1980.1094577

B. D. Lucas and T. Kanade, An iterative image registration technique with an application to stereo vision, Proc. of International Joint Conference on Artificial Intelligence, pp.674-680, 1981.

M. J. Lyons and S. Akamatsu, Coding facial expressions with Gabor wavelets, Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition, pp.200-205, 1998.
DOI : 10.1109/AFGR.1998.670949

D. Maio and D. Maltoni, Real-time face location on gray-scale static images, Pattern Recognition, vol.33, issue.9, pp.1525-1539, 2000.
DOI : 10.1016/S0031-3203(99)00130-2

M. Malciu and F. Preteux, Mpeg-4 compliant tracking of facial features in video sequences, Proc. of International Conference on Augmented, Virtual Environments and 3D Imaging, pp.108-111, 2001.
URL : https://hal.archives-ouvertes.fr/hal-00271825

D. W. Massaro, Illusions and issues in bimodal speech perception, Proc. Auditory Visual Speech Perception, pp.21-26, 1998.

Y. Matsumoto, T. Ogasawara, and A. Zelinsky, Behavior recognition based on head pose and gaze direction measurement, Proceedings. 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000) (Cat. No.00CH37113), pp.2127-2132, 2000.
DOI : 10.1109/IROS.2000.895285

Y. Medan, E. Yair, and D. Chazan, Super resolution pitch determination of speech signals, IEEE Transactions on Signal Processing, vol.39, issue.1, pp.40-48, 1991.
DOI : 10.1109/78.80763

A. Mehrabian, Communication without words, Psychology Today, vol.2, issue.4, pp.53-56, 1968.

C. Morimoto, D. Koons, M. Flickner, and S. Zhai, Keeping an eye for HCI, XII Brazilian Symposium on Computer Graphics and Image Processing (Cat. No.PR00481), pp.171-176, 1999.
DOI : 10.1109/SIBGRA.1999.805722

A. Nogueiras, A. Moreno, A. Bonafonte, and J. B. Marino, Speech emotion recognition using hidden Markov models, Proc. European Conference on Speech Communication and Technology, Scandinavia, 2001.

B. Noureddin, P. D. Lawrence, and C. F. Man, A non-contact device for tracking gaze in a human computer interface, Computer Vision and Image Understanding, vol.98, issue.1, pp.52-82, 2005.
DOI : 10.1016/j.cviu.2004.07.005

N. Oliver, A. Pentland, and F. Bérard, LAFTER: a real-time face and lips tracker with facial expression recognition, Pattern Recognition, vol.33, issue.8, pp.1369-1382, 2000.
DOI : 10.1016/S0031-3203(99)00113-2

N. Otsu, A Threshold Selection Method from Gray-Level Histograms, IEEE Transactions on Systems, Man, and Cybernetics, vol.9, issue.1, pp.62-66, 1979.
DOI : 10.1109/TSMC.1979.4310076

M. Pantic and L. J. Rothkrantz, Expert system for automatic analysis of facial expressions, Image and Vision Computing, vol.18, issue.11, pp.881-905, 2000.
DOI : 10.1016/S0262-8856(00)00034-2

M. Pantic and L. J. Rothkrantz, Automatic analysis of facial expressions: The state of the art, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.22, issue.12, pp.1424-1445, 2000.

M. Pantic and L. J. Rothkrantz, Toward an affect-sensitive multimodal human-computer interaction, Proceedings of the IEEE, pp.1370-1390, 2003.
DOI : 10.1109/jproc.2003.817122

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.3.2728

M. Pantic and I. Patras, Detecting facial actions and their temporal segmentation in nearly frontal-view face image sequences, Proc. IEEE International Conference on Systems, Man and Cybernetics, 2005.

M. Pantic, M. F. Valstar, R. Rademaker, and L. Maat, Web-Based Database for Facial Expression Analysis, 2005 IEEE International Conference on Multimedia and Expo, pp.317-321, 2005.
DOI : 10.1109/ICME.2005.1521424

M. Pantic and I. Patras, Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences, IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics), vol.36, issue.2, 2006.
DOI : 10.1109/TSMCB.2005.859075

M. Pardas, Automatic face analysis for model calibration, Proc. International Workshop on Synthetic Natural Hybrid Coding and Three Dimensional Imaging (IWSNHC3DI), pp.12-15, 1999.

M. Pardas, Extraction and tracking of the eyelids, 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.00CH37100), pp.2357-2360, 2000.
DOI : 10.1109/ICASSP.2000.859314

M. Pardas and E. Sayrol, Motion estimation based tracking of active contours, Pattern Recognition Letters, vol.22, pp.1447-1456, 2001.

M. Pardas and A. Bonafonte, Facial animation parameters extraction and expression detection using hmm, Signal Processing: Image Communication, pp.675-688, 2002.

D. G. Pelli, The VideoToolbox software for visual psychophysics: Transforming numbers into movies, Spatial Vision, vol.10, pp.437-442, 1997.

A. Pentland, Perceptual user interfaces: perceptual intelligence, Communications of the ACM, vol.43, issue.3, pp.35-44, 2000.
DOI : 10.1145/330534.330536

V. A. Petrushin, Emotion recognition in speech signal: experimental study, development , and application, Proc. 6th International Conference on Spoken Language Processing, 2000.

F. Prêteux, On a distance function approach for grey-level mathematical morphology, Mathematical Morphology in Image Processing, M. Dekker, 1992.

Q. Ji and Z. Zhu, Non-intrusive eye and gaze tracking for natural human computer interaction, MMI interaktiv Journal, vol.6, 2003.

T. F. Quatieri, Discrete time speech signal processing: Principles and practice, 2001.

L. R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proc. IEEE, pp.257-286, 1989.

P. Radeva and J. Serrat, Rubber snake: Implementation on signed distance potential, Proc. Swiss Vision, pp.187-194, 1993.

P. Radeva and E. Marti, Facial features segmentation by model-based snakes, Proc. International Conference on Computer Analysis and Image Processing, pp.515-520, 1995.

M. Rosenblum, Y. Yacoob, and L. S. Davis, Human expression recognition from motion using a radial basis function network architecture, IEEE Transactions on Neural Networks, vol.7, issue.5, pp.1121-1137, 1996.
DOI : 10.1109/72.536309

S. Roweis and L. Saul, Nonlinear dimensionality reduction by locally linear embedding, Science, vol.290, pp.2323-2326, 2000.

H. Sato, Y. Mitsukura, M. Fukumi, and N. Akamatsu, Emotional speech classification with prosodic parameters by using neural networks, Proc. Australian and New Zealand Intelligent Information Systems Conference, pp.395-398, 2001.

K. R. Scherer, Vocal communication of emotion: A review of research paradigms, Speech Communication, vol.40, issue.1-2, pp.227-256, 2003.
DOI : 10.1016/S0167-6393(02)00084-5

N. Sebe, I. Cohen, and T. S. Huang, Multimodal emotion recognition, 2004.
DOI : 10.1142/9789812775320_0021

G. Shafer, A Mathematical Theory of Evidence, 1976.

J. Shi and C. Tomasi, Good features to track, Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp.593-600, 1994.

L. E. Sibert, J. N. Templeman, and R. J. Jacob, Evaluation and analysis of eye gaze interaction, NRL Report NRL/FR/5513--01-9990, Naval Research Laboratory, 2001.

P. Smets, The combination of evidence in the transferable belief model, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.12, issue.5, pp.447-458, 1990.
DOI : 10.1109/34.55104

P. Smets and R. Kruse, The transferable belief model, Artificial Intelligence, vol.66, issue.2, pp.191-234, 1994.
DOI : 10.1016/0004-3702(94)90026-4

URL : https://hal.archives-ouvertes.fr/hal-01185821

P. Smets, The transferable belief model for quantified belief representation, in Handbook of Defeasible Reasoning and Uncertainty Management Systems, Kluwer, vol.1, pp.267-301, 1998.

P. Smets, Data fusion in the transferable belief model, Proceedings of the Third International Conference on Information Fusion, pp.21-33, 2000.
DOI : 10.1109/IFIC.2000.862713

P. Smith, M. Shah, and N. da Vitoria Lobo, Determining driver visual attention with one camera, IEEE Transactions on Intelligent Transportation Systems, vol.4, issue.4, 2003.
DOI : 10.1109/TITS.2003.821342

M. L. Smith, G. W. Cottrell, F. Gosselin, and P. G. Schyns, Transmitting and Decoding Facial Expressions, Psychological Science, vol.16, issue.3, pp.184-189, 2005.
DOI : 10.1016/j.cub.2003.09.038

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.517.617

A. Sourd, Master 2 report, 2003.

K. Talmi and J. Liu, Eye and gaze tracking for visually controlled interactive stereoscopic displays, Signal Processing: Image Communication, vol.14, issue.10, pp.799-810, 1999.
DOI : 10.1016/S0923-5965(98)00044-7

K. Tan, D. Kriegman, and N. Ahuja, Appearance-based eye gaze estimation, Proc. Workshop on Applications of Computer Vision, pp.191-195, 2002.

H. Tao and T. S. Huang, Connected vibrations: A modal analysis approach to non-rigid motion tracking, Proc. IEEE Computer Vision and Pattern Recognition, pp.735-740, 1998.

H. Tao and T. S. Huang, Explanation-based facial motion tracking using a piecewise Bezier volume deformation model, Proc. International Conference on Computer Vision and Pattern Recognition, pp.611-617, 1999.

M. Tekalp, Face and 2D mesh animation in MPEG-4, Tutorial Issue on the MPEG-4 Standard, Image Communication Journal, 1999.

Y. Tian, T. Kanade, and J. Cohn, Dual state parametric eye tracking, Proc. 4th IEEE International Conference on Automatic Face and Gesture Recognition, pp.110-115, 2000.

Y. Tian, T. Kanade, and J. F. Cohn, Recognizing action units for facial expression analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.23, issue.2, pp.97-115, 2001.
DOI : 10.1109/34.908962

A. B. Torralba and J. Hérault, An efficient neuromorphic analog network for motion estimation, IEEE Trans. on Circuits and Systems-I: Special Issue on Bio-Inspired Processors and CNNs for Vision, vol.46, 1999.
DOI : 10.1109/81.747199

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.788

N. Tsapatsoulis, K. Karpouzis, G. Stamou, F. Piat, and S. Kollias, A fuzzy system for emotion classification based on the mpeg-4 facial definition parameter set, Proc. 10th European Signal Processing Conference, 2000.

S. Tsekeridou and I. Pitas, Facial feature extraction in frontal views using biometric analogies, Proc. 9th European Signal Processing Conference, pp.315-318, 1998.

L. Valet, G. Mauris, and P. Bolon, A statistical overview of recent literature in information fusion, Proc. International Conference on Information Fusion, pp.3-22, 2000.

D. Ververidis and C. Kotropoulos, Automatic speech classification to five emotional states based on gender information, Proc. 12th European Signal Processing Conference, pp.341-344, 2004.

V. Vezhnevets and A. Degtiareva, Robust and accurate eye contour extraction, Proc. Graphicon, pp.81-84, 2003.

J. G. Wang, E. Sung, and R. Venkateswarlu, Estimating the eye gaze from one eye, Computer Vision and Image Understanding, vol.98, issue.1, pp.83-103, 2005.
DOI : 10.1016/j.cviu.2004.07.008

Y. Yacoob and L. S. Davis, Recognizing human facial expressions from long image sequences using optical flow, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.18, issue.6, pp.636-642, 1996.
DOI : 10.1109/34.506414

R. R. Yager, M. Fedrizzi, and J. Kacprzyk, Advances in the Dempster-Shafer Theory of Evidence, 1994.

M. H. Yang and N. Ahuja, Gaussian mixture model for human skin color and its application in image and video databases, Proc. of the SPIE: Conf. on Storage and Retrieval for Image and Video Databases, pp.458-466, 1999.

D. H. Yoo and M. J. Chang, A novel non-intrusive eye gaze estimation using cross-ratio under large head motion, Computer Vision and Image Understanding, vol.98, issue.1, pp.25-51, 2005.
DOI : 10.1016/j.cviu.2004.07.011

Y. Yoshitomi, S. Kim, T. Kawano, and T. Kitazoe, Effect of sensor fusion for recognition of emotional states using voice, face image and thermal image of face, Proceedings 9th IEEE International Workshop on Robot and Human Interactive Communication. IEEE RO-MAN 2000 (Cat. No.00TH8499), pp.178-183, 2000.
DOI : 10.1109/ROMAN.2000.892491

A. Yuille, P. Hallinan, and D. Cohen, Feature extraction from faces using deformable templates, International Journal of Computer Vision, vol.26, issue.6, pp.99-111, 1992.
DOI : 10.1007/BF00127169

Z. Zeng, J. Tu, B. Pianfetti, M. Liu, T. Zhang et al., Audio-Visual Affect Recognition through Multi-Stream Fused HMM for HCI, 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), 2005.
DOI : 10.1109/CVPR.2005.77

L. Zhang, Estimation of eye and mouth corner point positions in a knowledge-based coding system, Digital Compression Technologies and Systems for Video Communications, pp.21-28, 1996.
DOI : 10.1117/12.251289

L. Zhang, Tracking a face for knowledge based coding of videophone sequences, Signal Processing: Image Communication, vol.10, pp.93-114, 1997.

Z. Zhang, M. Lyons, M. Schuster, and S. Akamatsu, Comparison between geometry-based and Gabor wavelets-based facial expression recognition using multi-layer perceptron, Proc. Automatic Face and Gesture Recognition, pp.454-459, 1998.

Y. Zhang and Q. Ji, Active and dynamic information fusion for facial expression understanding from image sequences, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.27, issue.5, pp.699-714, 2005.
DOI : 10.1109/TPAMI.2005.93

Z. Zhu and Q. Ji, Robust real-time eye detection and tracking under variable lighting conditions and various face orientations, Computer Vision and Image Understanding, vol.98, issue.1, pp.124-154, 2005.
DOI : 10.1016/j.cviu.2004.07.012

I. Buciu, Z. Hammal, A. Caplier, N. Nikolaidis, and I. Pitas, Enhancing facial expression classification by information fusion, Proc. 14th European Signal Processing Conference (EUSIPCO), 2006.

Z. Hammal, N. Eveno, A. Caplier, and P. Y. Coulon, Extraction réaliste des traits caractéristiques du visage à l'aide de modèles paramétriques adaptés, Proc. 19ème colloque sur le traitement du signal et des images, 2003.

Z. Hammal and A. Caplier, Eyes and eyebrows parametric models for automatic segmentation, 6th IEEE Southwest Symposium on Image Analysis and Interpretation, 2004., 2004.
DOI : 10.1109/IAI.2004.1300961

Z. Hammal and A. Caplier, Analyse dynamique des transformations des traits du visage lors de la production d'une émotion, Proc. Atelier sur l'analyse du geste (RFIA), 2004.

Z. Hammal, A. Caplier, and M. Rombaut, Classification d'expressions faciales par la théorie de l'évidence, Rencontres Francophones sur la Logique Floue et ses Applications (LFA), 2004.

Z. Hammal, B. Bozkurt, L. Couvreur, D. Unay, A. Caplier et al., Classification bimodale d'expressions vocales, 20ème colloque sur le traitement du signal et des images (GRETSI), 2005.

Z. Hammal, B. Bozkurt, L. Couvreur, D. Unay, A. Caplier et al., Passive versus active: Vocal classification system, Proc. 13th European Signal Processing Conference Turkey, 2005.

Z. Hammal, A. Caplier, and M. Rombaut, Belief Theory Applied to Facial Expressions Classification, Proc. 3rd International Conference on Advances in Pattern Recognition (ICAPR), 2005.
DOI : 10.1007/11552499_21

Z. Hammal, A. Caplier, and M. Rombaut, A fusion process based on belief theory for classification of facial basic emotions, 2005 7th International Conference on Information Fusion, 2005.
DOI : 10.1109/ICIF.2005.1591924

Z. Hammal, L. Couvreur, A. Caplier, and M. Rombaut, Facial Expression Recognition Based on the Belief Theory: Comparison with Different Classifiers, Proc. 13th International Conference on Image Analysis and Processing. (ICIAP), Italy, 2005.
DOI : 10.1007/11553595_91

URL : https://hal.archives-ouvertes.fr/hal-00371572

Z. Hammal, N. Eveno, A. Caplier, and P. Y. Coulon, Extraction des traits caractéristiques du visage à l'aide de modèles paramétriques adaptés, pp.59-71, 2005.

Z. Hammal, C. Massot, G. Bedoya, and A. Caplier, Eyes Segmentation Applied to Gaze Direction and Vigilance Estimation, Proc. 3rd International Conference on Advances in Pattern Recognition (ICAPR), 2005.
DOI : 10.1007/11552499_27

Z. Hammal, Dynamic facial expression understanding based on temporal modeling of transferable belief model, Proc. International conference on computer vision theory and application (VISAPP), 2006.

Z. Hammal, A. Caplier, and M. Rombaut, Fusion for classification of facial expressions by the belief theory, Journal of advances in information fusion (JAIF), 2006.

Z. Hammal, L. Couvreur, A. Caplier, and M. Rombaut, Facial expressions classification: A new approach based on transferable belief model, International Journal of Approximate Reasoning, 2006.

Z. Hammal, N. Eveno, A. Caplier, and P. Y. Coulon, Parametric models for facial features segmentation, Signal Processing, vol.86, issue.2, pp.399-413, 2006.
DOI : 10.1016/j.sigpro.2005.06.006

URL : https://hal.archives-ouvertes.fr/hal-00121793