S. Abrilian, L. Devillers, S. Buisine, and J. C. Martin, EmoTV1: Annotation of real-life emotions for the specification of multimodal affective interfaces, 11th Int. Conf. on Human-Computer Interaction (HCII'05), 2005.

H. Ahn and R. W. Picard, Affective-Cognitive Learning and Decision Making: A Motivational Reward Framework for Affective Agents, Proceedings of the 1st International Conference on Affective Computing and Intelligent Interaction (ACII'05), pp.863-866, 2005.
DOI : 10.1007/11573548_111

N. Ambady and R. Rosenthal, Thin slices of expressive behavior as predictors of interpersonal consequences: A meta-analysis., Psychological Bulletin, vol.111, issue.2, pp.256-274, 1992.
DOI : 10.1037/0033-2909.111.2.256

F. Aznar, M. Sempere, M. Pujol, and R. Rizo, Bayesian Emotions: Developing an Interface for Robot/Human Communication, Lecture Notes in Computer Science : Advances in Artificial Intelligence, vol.3673, pp.507-517, 2005.
DOI : 10.1007/11558590_51

T. Balomenos, A. Raouzaiou, S. Ioannou, A. Drosopoulos, K. Karpouzis et al., Emotion Analysis in Man-Machine Interaction Systems, Lecture Notes in Computer Science : Machine Learning for Multimodal Interaction, vol.3361, pp.318-328, 2004.
DOI : 10.1007/978-3-540-30568-2_27

L. Bass, A metamodel for the runtime architecture of an interactive system : the UIMS tool developers workshop, SIGCHI Bull, vol.24, issue.1, pp.32-37, 1992.

L. Bass, C. Buhman, S. Comella-Dorda, F. Long, and J. Robert, Market Assessment of Component-Based Software Engineering, 2000.

F. Birren, Color psychology and color therapy : a factual study of the influence of color on human life, 2006.

R. A. Bolt, "Put-That-There": Voice and gesture at the graphics interface, SIGGRAPH '80: Proceedings of the 7th annual conference on Computer graphics and interactive techniques, pp. 262-270, 1980.

R. T. Boone and J. G. Cunningham, Children's decoding of emotion in expressive body movement: The development of cue attunement., Developmental Psychology, vol.34, issue.5, pp.1007-1016, 1998.
DOI : 10.1037/0012-1649.34.5.1007

R. T. Boone and J. G. Cunningham, Children's expression of emotional meaning in music through expressive body movement, Journal of Nonverbal Behavior, vol.25, issue.1, pp.21-41, 2001.
DOI : 10.1023/A:1006733123708

S. Bottecchia, J. M. Cieutat, C. Merlo, and J. P. , A new AR interaction paradigm for collaborative teleassistance system: the POA, International Journal on Interactive Design and Manufacturing (IJIDeM), vol.11, issue.1, pp.35-40, 2009.
DOI : 10.1007/s12008-008-0051-7

URL : https://hal.archives-ouvertes.fr/hal-00431673

J. Bouchet, Ingénierie de l'interaction multimodale en entrée. Approche à composants ICARE, 2006.

J. Bouchet, L. Nigay, and D. Balzagette, ICARE : approche à composants pour l'interaction multimodale. Actes des Premières Journées Francophones Mobilité et Ubiquité, pp. 36-43, 2004.

W. Burleson, R. W. Picard, K. Perlin, and J. Lippincott, A platform for affective agent research, Workshop on Empathetic Agents, International Conference on Autonomous Agents and Multiagent Systems, 2004.

C. Busso, Z. Deng, S. Yildirim, M. Bulut, C. M. Lee et al., Analysis of emotion recognition using facial expressions, speech and multimodal information, Proceedings of the 6th international conference on Multimodal interfaces , ICMI '04, pp.205-211, 2004.
DOI : 10.1145/1027933.1027968

A. Camurri, R. Chiarvetto, A. Coglio, M. D. Stefano, C. Liconte et al., Toward Kansei Information Processing in music/dance interactive multimodal environments, Proceedings of the Italian Association for Musical Informatics (AIMI) International Workshop -Kansei : The Technology of Emotion, pp.74-78, 1997.

A. Camurri, P. Coletta, A. Massari, B. Mazzarino, M. Peri et al., Toward real-time multimodal processing : EyesWeb 4.0, Proceedings of Artificial Intelligence and the Simulation of Behaviour (AISB) convention : Motion, Emotion and Cognition, 2004.

A. Camurri, B. Mazzarino, R. Trocca, and G. Volpe, Real-time analysis of expressive cues in human movement, Proc. CAST01, GMD, St. Augustin-Bonn, 2001.

A. Camurri, B. Mazzarino, and G. Volpe, Analysis of Expressive Gesture: The EyesWeb Expressive Gesture Processing Library. Lecture Notes in Computer Science, pp. 460-467, 2004.

A. Camurri, M. Ricchetti, and R. Trocca, EyesWeb: toward gesture and affect recognition in dance/music interactive systems, Proceedings of the 1999 IEEE International Conference on Multimedia Computing and Systems (ICMCS'99), p. 9643, 1999.

A. Camurri and R. Trocca, Analysis of expressivity in movement and dance, Proceedings of CIM-2000, 2000.

A. Cardon, J. C. Campagne, and M. Camus, A self-adapting system generating intentional behavior and emotions. Lecture notes in computer science, p.33, 2006.

G. Castellano, Movement expressivity analysis in affective computers : from recognition to expression of emotion, 2008.

G. Castellano, L. Kessous, and G. Caridakis, Multimodal emotion recognition from expressive faces, body gestures and speech, Proc. of the Doctoral Consortium of 2nd International Conference on Affective Computing and Intelligent Interaction (ACII), 2007.

G. Castellano, M. Mortillaro, A. Camurri, G. Volpe, and K. Scherer, Automated Analysis of Body Movement in Emotionally Expressive Piano Performances, Music Perception: An Interdisciplinary Journal, vol.26, issue.2, pp.103-119, 2008.
DOI : 10.1525/mp.2008.26.2.103

G. Chanel, J. Kronegg, D. Grandjean, and T. Pun, Emotion Assessment: Arousal Evaluation Using EEG's and Peripheral Physiological Signals, Lecture Notes in Computer Science, vol. 4105, p. 530, 2006.
DOI : 10.1007/11848035_70

D. Chi, M. Costa, L. Zhao, and N. Badler, The EMOTE model for effort and shape, Proceedings of the 27th annual conference on Computer graphics and interactive techniques , SIGGRAPH '00, pp.173-182, 2000.
DOI : 10.1145/344779.352172

A. Clay, M. Courgeon, N. Couture, E. Delord, C. Clavel et al., Expressive virtual modalities for augmenting the perception of affective movements, Proceedings of the International Workshop on Affective-Aware Virtual Agents and Social Robots, AFFINE '09, pp.1-7, 2009.
DOI : 10.1145/1655260.1655268

URL : https://hal.archives-ouvertes.fr/hal-00445908

A. Clay, N. Couture, and G. Domenger, Reconnaissance d'émotions : application à la danse, 2008.
URL : https://hal.archives-ouvertes.fr/hal-00442105

A. Clay, N. Couture, and G. Domenger, Capture d'émotions et reconnaissance par la gestuelle. Conférence dansée interactive, 2009.
URL : https://hal.archives-ouvertes.fr/hal-00408176

A. Clay, N. Couture, G. Domenger, A. Delord, and . Reumaux, Improvisation dansée augmentée en appartement, Festival des Ethiopiques, 2009.

A. Clay, N. Couture, and L. Nigay, Engineering affective computing: A unifying software architecture, 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, pp.1-6, 2009.
DOI : 10.1109/ACII.2009.5349541

URL : https://hal.archives-ouvertes.fr/hal-00408182

A. Clay, N. Couture, and L. Nigay, Towards Emotion Recognition in Interactive Systems : Application to a Ballet Dance Show, Proceedings of the ASME/AFM 2009 World Conference on Innovative Virtual Reality (WinVR2009) World Conference on Innovative Virtual Reality, pp.19-24, 2009.
URL : https://hal.archives-ouvertes.fr/hal-00953307

A. Clay, E. Delord, N. Couture, and G. Domenger, Augmenting a Ballet Dance Show Using the Dancer's Emotion: Conducting Joint Research in Dance and Computer Science, Arts and Technology, Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, pp. 148-156, 2010.

M. Coulson, Attributing Emotion to Static Body Postures: Recognition Accuracy, Confusions, and Viewpoint Dependence, Journal of Nonverbal Behavior, vol.28, issue.2, pp.117-139, 2004.
DOI : 10.1023/B:JONB.0000023655.25550.be

J. Coutaz, PAC, an Object Oriented Model for Dialog Design, Proceedings of the 2nd IFIP International Conference on Human-Computer Interaction (INTERACT'87), pp. 431-436, 1987.
DOI : 10.1016/B978-0-444-70304-0.50074-1

J. Coutaz and L. Nigay, Architecture logicielle conceptuelle des systèmes interactifs. Chapitre 7 de Analyse et Conception de l'Interaction Homme-Machine dans les systèmes d'information, Hermès, pp. 207-246, 2001.

J. Coutaz, L. Nigay, D. Salber, A. Blandford, J. May et al., Four Easy Pieces for Assessing the Usability of Multimodal Interaction: The CARE Properties, Proceedings of INTERACT'95, 1995.
DOI : 10.1007/978-1-5041-2896-4_19

N. Couture and S. Minel, Tactimod dirige et oriente un piéton, Proceedings of Ubimob'06, pp.9-16, 2006.

N. Couture, G. Rivière, and P. Reuter, Tangible Interaction in Mixed Reality Systems, chapter 5 of The Engineering of Mixed Reality Systems, 2009.

R. Cowie, E. Douglas-Cowie, N. Tsapatsoulis, G. Votsis, S. Kollias et al., Emotion recognition in human-computer interaction, IEEE Signal Processing Magazine, vol.18, issue.1, pp.32-80, 2001.
DOI : 10.1109/79.911197

A. R. Damasio, Descartes' Error, 1995.

C. Darwin, P. Ekman, and P. Prodger, The expression of the emotions in man and animals, 2002.

M. De Meijer, The contribution of general features of body movement to the attribution of emotions, Journal of Nonverbal Behavior, vol.6, issue.6, pp.247-268, 1989.
DOI : 10.1007/BF00990296

R. Descartes and G. Rodis-Lewis, Les passions de l'âme, Vrin, 1988.

E. Dubois, Chirurgie Augmentée, un cas de Réalité Augmentée : conception et réalisation centrées sur l'utilisateur, Thèse de doctorat Informatique, Laboratoire de Communication Langagière et Interaction Personne-Système (IMAG), 2001.

P. Ekman, Basic emotions. Handbook of cognition and emotion, pp.45-60, 1999.

P. Ekman, W. Friesen, and J. Hager, Facial Action Coding System : the manual . A Human Face, HTML demonstration version, 2002.

P. Ekman and W. V. Friesen, The repertoire of nonverbal behavior: Categories, origins, usage, and coding, Semiotica, vol.1, pp.49-98, 1969.

P. Ekman and W. V. Friesen, Facial Action Coding System (FACS): A technique for the measurement of facial action, Consulting Psychologists Press, 1978.

P. Ekman and W. V. Friesen, A new pan-cultural facial expression of emotion, Motivation and Emotion, vol.164, issue.3875, pp.159-168, 1986.
DOI : 10.1007/BF00992253

P. Ekman and W. V. Friesen, Hand Movements, Journal of Communication, vol.10, issue.4, p.273, 2007.
DOI : 10.1037/h0023514

P. Ekman, W. V. Friesen, M. O. Sullivan, A. Chan, I. Diacoyanni-tarlatzis et al., Universals and cultural differences in the judgments of facial expressions of emotion., Journal of Personality and Social Psychology, vol.53, issue.4, pp.712-717, 1987.
DOI : 10.1037/0022-3514.53.4.712

R. El Kaliouby and P. Robinson, Generalization of a Vision-Based Computational Model of Mind-Reading, Proceedings of the First International Conference on Affective Computing and Intelligent Interaction (ACII), pp.582-589, 2005.
DOI : 10.1007/11573548_75

M. S. El-Nasr, J. Yen, and T. R. Ioerger, FLAME: Fuzzy Logic Adaptive Model of Emotions, Autonomous Agents and Multi-Agent Systems, vol.3, issue.3, pp.219-257, 2000.
DOI : 10.1023/A:1010030809960

F. Février, E. Jamet, G. Rouxel, V. Dardier, and G. Breton, Induction d'émotions pour la motion capture dans une situation de vidéo-conversation, Proceedings du Workshop sur les Agents Conversationnels Animés (WACA), 2006.

B. R. Gaines, Modeling and forecasting the information sciences, Information Sciences, vol.57-58, 1991.
DOI : 10.1016/0020-0255(91)90066-4

P. Gebhard, ALMA: A Layered Model of Affect, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, AAMAS '05, pp.29-36, 2005.
DOI : 10.1145/1082473.1082478

K. Ghanem and A. Caplier, Estimation of Anger, Sadness and Fear Expression Intensity based on the Belief Theory, Proceedings ACIT, 2008.
URL : https://hal.archives-ouvertes.fr/hal-00371563

J. Gratch and S. Marsella, Tears and fears: Modeling emotions and emotional behaviors in synthetic agents, Proceedings of the fifth international conference on Autonomous agents, AGENTS '01, pp.278-285, 2001.
DOI : 10.1145/375735.376309

W. E. Grimson, G. J. Ettinger, S. J. White, T. Lozano-Perez, W. M. Wells et al., An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization, IEEE Transactions on Medical Imaging, vol.15, issue.2, pp.129-140, 1996.
DOI : 10.1109/42.491415

H. Gunes, M. Piccardi, and T. Jan, Face and body gesture recognition for a vision-based multimodal analyzer, VIP '05: Proceedings of the Pan-Sydney area workshop on Visual information processing, pp.19-28, 2004.

A. Guye-Vuillème and D. Thalmann, A high-level architecture for believable social agents, Virtual Reality, vol.18, issue.3, pp.95-106, 2000.
DOI : 10.1007/BF01424340

Z. Hammal, L. Couvreur, A. Caplier, and M. Rombaut, Facial expression classification: An approach based on the fusion of facial deformations using the transferable belief model, International Journal of Approximate Reasoning, vol.46, issue.3, pp.542-567, 2007.
DOI : 10.1016/j.ijar.2007.02.003

URL : https://hal.archives-ouvertes.fr/hal-00158427

J. Healey and R. Picard, Digital processing of affective signals, Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP '98 (Cat. No.98CH36181), 1998.
DOI : 10.1109/ICASSP.1998.679699

J. Healey, J. Seger, and R. Picard, Quantifying driver stress : developing a system for collecting and processing bio-metric signals in natural situations, Biomedical sciences instrumentation, vol.35, pp.193-198, 1999.

J. Hodgson, Mastering movement : the life and work of Rudolf Laban, Theatre Arts Books, 2001.

S. Ioannou, L. Kessous, G. Caridakis, K. Karpouzis, V. Aharonson et al., Adaptive On-Line Neural Network Retraining for Real Life Multimodal Emotion Recognition, Lecture Notes in Computer Science, vol.4131, p.81, 2006.
DOI : 10.1007/11840817_9

H. Ishii, S. Ren, and P. Frei, Pinwheels: Visualizing information flow in an architectural space, CHI '01 extended abstracts on Human factors in computing systems, CHI '01, pp.111-112, 2001.
DOI : 10.1145/634067.634135

C. E. Izard, The face of emotion, 1971.

A. Jaimes and N. Sebe, Multimodal human-computer interaction: A survey, Computer Vision and Image Understanding, vol.108, issue.1-2, pp.116-134, 2007.
DOI : 10.1016/j.cviu.2006.10.019

X. Jin and Z. Wang, An Emotion Space Model for Recognition of Emotions in Spoken Chinese, Proceedings of the First International Conference on Affective Computing and Intelligent Interaction (ACII), p.397, 2005.
DOI : 10.1007/11573548_51

P. Kaiser, Hand-drawn spaces, ACM SIGGRAPH 98 Electronic art and animation catalog on , SIGGRAPH '98, 1998.
DOI : 10.1145/281388.281884

A. Kapoor, R. W. Picard, and Y. Ivanov, Probabilistic combination of multiple modalities to detect interest, Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004., pp.969-972, 2004.
DOI : 10.1109/ICPR.2004.1334690

K. Karpouzis, A. Raouzaiou, and S. Kollias, 'Moving' avatars: Emotion synthesis in virtual worlds, Human-Computer Interaction: Theory and Practice, pp.503-507, 2003.

A. Kendon, Gesture : Visible action as utterance, 2004.
DOI : 10.1017/CBO9780511807572

A. Kendon, How gestures can become like words, Cross-cultural perspectives in nonverbal communication, pp.131-141, 1988.

E. Keogh and C. A. Ratanamahatana, Exact indexing of dynamic time warping, Knowledge and Information Systems, vol.26, issue.3, pp.358-386, 2005.
DOI : 10.1109/TASSP.1978.1163149

K. H. Kim, S. W. Bang, and S. R. Kim, Emotion recognition system using short-term monitoring of physiological signals. Medical and biological engineering and computing, pp.419-427, 2006.

D. Kirsch, The Sentic Mouse : Developing a tool for measuring emotional valence, 1997.

J. Klein, Y. Moon, and R. Picard, This computer responds to user frustration: Theory, design, and results, Interacting with Computers, vol.14, issue.2, pp.119-140, 2002.
DOI : 10.1016/S0953-5438(01)00053-4

B. Kort, R. Reilly, and R. Picard, An affective model of interplay between emotions and learning: reengineering educational pedagogy-building a learning companion, Proceedings IEEE International Conference on Advanced Learning Technologies, pp.43-46, 2001.
DOI : 10.1109/ICALT.2001.943850

G. E. Krasner and S. T. Pope, A cookbook for using the model-view controller user interface paradigm in Smalltalk-80, J. Object Oriented Program, vol.1, issue.3, pp.26-49, 1988.

D. Lalanne, L. Nigay, P. Palanque, P. Robinson, J. Vanderdonckt et al., Fusion engines for multimodal input, Proceedings of the 2009 international conference on Multimodal interfaces, ICMI-MLMI '09, pp.153-160, 2009.
DOI : 10.1145/1647314.1647343

URL : https://hal.archives-ouvertes.fr/hal-00953686

M. Lamolle, M. Mancini, C. Pelachaud, S. Abrilian, J. C. Martin et al., Contextual Factors and Adaptative Multimodal Human-Computer Interaction: Multi-level Specification of Emotion and Expressivity in Embodied Conversational Agents, Lecture Notes in Computer Science, vol.3554, pp.225-239, 2005.
DOI : 10.1007/11508373_17

C. Lisetti and G. Bastard, MAUI: A multimodal affective user interface, Proceedings of the tenth ACM international conference on Multimedia, MULTIMEDIA '02, 2002.
DOI : 10.1145/641007.641038

C. L. Lisetti, Le paradigme MAUI pour des agents multimodaux d'interface homme-machine socialement intelligents, Revue d'Intelligence Artificielle, numéro spécial sur les Interactions Émotionnelles, vol.20, issue.4-5, pp.583-606, 2006.
DOI : 10.3166/ria.20.583-606

O. Losson, Modélisation du geste communicatif et réalisation d'un signeur virtuel de phrases en langue des signes française, 2000.

K. Lyons, M. Gandy, and T. Starner, Guided by voices : An audio augmented reality system, International Conference on Auditory Display. Citeseer, 2000.

M. Mallem and D. Roussel, Réalité augmentée : principes, technologies et applications. Site Web : Techniques de l'ingénieur (http://www.techniques-ingenieur), 2008.

B. Mansoux, L. Nigay, and J. Troccaz, Output multimodal interaction: The case of augmented surgery, Human Computer Interaction, People and Computers XX, The 20th BCS HCI Group conference, Proceedings of HCI 2006, pp.177-192, 2006.

J. C. Martin, TYCOON : Theoretical framework and software tools for multimodal interfaces. Intelligence and Multimodality in Multimedia interfaces, 1998.

A. Mehrabian, Outline of a general emotion-based theory of temperament. Explorations in temperament: International perspectives on theory and measurement, pp.75-86, 1991.

A. Mehrabian, Framework for a comprehensive description and measurement of emotional states, Genetic, Social, and General Psychology Monographs, vol.121, issue.3, p.339, 1995.

A. Mehrabian, Communication without words, Communication Theory, p.193, 2007.

A. Mehrabian and J. Russell, The basic emotional impact of environments. Perceptual and motor skills, pp.283-301, 1974.

P. Milgram and F. Kishino, A Taxonomy of Mixed Reality Visual Displays, IEICE Transactions on Information Systems, issue.12, pp.1321-1329, 1994.

J. Montepare, E. Koff, D. Zaitchik, and M. Albert, The use of body movements and gestures as cues to emotions in younger and older adults, Journal of Nonverbal Behavior, vol.23, issue.2, pp.133-152, 1999.
DOI : 10.1023/A:1021435526134

S. Mota and R. W. Picard, Automated Posture Analysis for Detecting Learner's Interest Level, 2003 Conference on Computer Vision and Pattern Recognition Workshop, p.49, 2003.
DOI : 10.1109/CVPRW.2003.10047

L. Nigay, Conception et modélisation logicielles des systèmes interactifs : application aux interfaces multimodales, Thèse de doctorat Informatique, Laboratoire de Génie Informatique (IMAG), 315 p., 1994.

L. Nigay and J. Coutaz, A design space for multimodal systems, Proceedings of the SIGCHI conference on Human factors in computing systems , CHI '93, pp.172-178, 1993.
DOI : 10.1145/169059.169143

L. Nigay and J. Coutaz, A generic platform for addressing the multimodal challenge, Proceedings of the SIGCHI conference on Human factors in computing systems, CHI '95, pp.98-105, 1995.
DOI : 10.1145/223904.223917

L. Nigay and J. Coutaz, Espaces conceptuels pour l'interaction multimédia et multimodale . TSI, spécial Multimédia et Collecticiel, pp.1195-1225, 1996.

C. S. O'Bryne, L. Cañamero, and J. Murray, The importance of the body in affect-modulated action selection: A case study comparing proximal versus distal perception in a prey-predator scenario, Proceedings of the 2009 IEEE International Conference on Affective Computing and Intelligent Interaction, 2009.

M. Paleari and C. L. Lisetti, Toward multimodal fusion of affective cues, Proceedings of the 1st ACM international workshop on Human-centered multimedia , HCM '06, pp.99-108, 2006.
DOI : 10.1145/1178745.1178762

M. Pantic and I. Patras, Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences, IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics), vol.36, issue.2, pp.433-449, 2006.
DOI : 10.1109/TSMCB.2005.859075

M. Pantic, N. Sebe, J. F. Cohn, and T. Huang, Affective multimodal human-computer interaction, Proceedings of the 13th annual ACM international conference on Multimedia , MULTIMEDIA '05, pp.669-676, 2005.
DOI : 10.1145/1101149.1101299

S. Pasquariello and C. Pelachaud, Greta: A Simple Facial Animation Engine, 6th Online World Conference on Soft Computing in Industrial Applications, Session on Soft Computing for Intelligent 3D Agents, 2001.
DOI : 10.1007/978-1-4471-0123-9_43

C. Peter, E. Ebert, and H. Beikirch, A Wearable Multi-sensor System for Mobile Acquisition of Emotion-Related Physiological Data, Proceedings of the 1st International Conference on Affective Computing and Intelligent Interaction, pp.691-698
DOI : 10.1007/11573548_89

G. E. Pfaff, User Interface Management Systems, 1985.
DOI : 10.1007/978-3-642-70041-5

R. W. Picard, Affective computing, 1997.
DOI : 10.1037/e526112012-054

R. W. Picard, Human-computer coupling, Proceedings of the IEEE, pp.1803-1807, 1998.
DOI : 10.1109/5.704286

R. W. Picard, Building HAL : Computers that sense, recognize, and respond to human emotion, Proceedings of the 2001 SPIE Conference on Human Vision and Electronic Imaging, pp.518-523, 2001.
DOI : 10.1117/12.429523

R. W. Picard and J. Healey, Affective wearables. Personal Technologies, pp.231-240, 1997.
DOI : 10.1109/iswc.1997.629924

R. W. Picard, S. Papert, W. Bender, B. Blumberg, C. Breazeal et al., Affective Learning - A Manifesto, BT Technology Journal, vol.22, issue.4, pp.253-269, 2004.
DOI : 10.1023/B:BTTJ.0000047603.37042.33

R. W. Picard, Toward agents that recognize emotion, pp.3-13, 2000.
DOI : 10.1147/sj.393.0705

R. Plutchik and H. R. Conte, Circumplex models of personality and emotions, 1997.
DOI : 10.1037/10261-000

F. E. Pollick, H. M. Paterson, A. Bruderlin, and A. J. Sanford, Perceiving affect from arm movement, Cognition, vol.82, issue.2, pp.51-61, 2001.
DOI : 10.1016/S0010-0277(01)00147-0

C. Reynolds and R. Picard, Affective sensors, privacy, and ethical contracts, Extended abstracts of the 2004 conference on Human factors and computing systems , CHI '04, pp.1103-1106, 2004.
DOI : 10.1145/985921.985999

C. Reynolds and R. W. Picard, Designing for affective interactions, Proceedings of the 9th International Conference on Human-Computer Interaction, 2001.

G. Rivière, N. Couture, and F. Jurado, Tangible user interfaces for geosciences, Society of Exploration Geophysicists, Technical Program Expanded Abstracts, 2009.

J. A. Russell, A circumplex model of affect., Journal of Personality and Social Psychology, vol.39, issue.6, pp.1161-1178, 1980.
DOI : 10.1037/h0077714

URL : https://hal.archives-ouvertes.fr/hal-01086372

J. A. Russell and A. Mehrabian, Evidence for a three-factor theory of emotions, Journal of Research in Personality, vol.11, issue.3, pp.273-294, 1977.
DOI : 10.1016/0092-6566(77)90037-X

J. Scheirer, R. Fernandez, and R. W. Picard, Expression glasses, CHI '99 extended abstracts on Human factors in computing systems , CHI '99, pp.262-263, 1999.
DOI : 10.1145/632716.632878

J. Scheirer and R. Picard, Affective objects, 2000.

K. R. Scherer, The nature and function of emotion, Social Science Information, vol.21, issue.4-5, 1984.
DOI : 10.1177/053901882021004001

K. R. Scherer, Adding the affective dimension : A new look in speech analysis and synthesis, Fourth International Conference on Spoken Language Processing, 1996.

K. R. Scherer, Emotions as episodes of subsystem synchronization driven by nonlinear appraisal processes. Emotion, development, and self-organization : Dynamic systems approaches to emotional development, pp.70-99, 2000.

K. R. Scherer, Psychological models of emotion. The neuropsychology of emotion, pp.137-162, 2000.

K. R. Scherer, Feelings integrate the central representation of appraisal-driven response organization in emotion. In Feelings and emotions : The Amsterdam symposium, pp.136-157, 2004.

K. R. Scherer, Which Emotions Can be Induced by Music? What Are the Underlying Mechanisms? And How Can We Measure Them?, Journal of New Music Research, vol.33, issue.3, pp.239-251, 2004.
DOI : 10.1080/0929821042000317822

K. R. Scherer, R. Banse, H. G. Wallbott, and T. Goldbeck, Vocal cues in emotion encoding and decoding, Motivation and Emotion, vol.52, issue.2, pp.123-148, 1991.
DOI : 10.1007/BF00995674

K. R. Scherer, HUMAINE deliverable D3c : Preliminary plans for exemplars : Theory, 2004.

K. R. Scherer and H. G. Wallbott, Analysis of nonverbal behavior. Handbook of discourse analysis, pp.199-230, 1985.

S. Schkolne, M. Pruett, and P. Schröder, Surface drawing, Proceedings of the SIGCHI conference on Human factors in computing systems , CHI '01, pp.261-268, 2001.
DOI : 10.1145/365024.365114

N. Sebe, I. Cohen, and T. S. Huang, Multimodal emotion recognition, Handbook of Pattern Recognition and Computer Vision, 2005.

M. Serrano and L. Nigay, Temporal aspects of CARE-based multimodal fusion, Proceedings of the 2009 international conference on Multimodal interfaces, ICMI-MLMI '09, pp.177-184, 2009.
DOI : 10.1145/1647314.1647346

URL : https://hal.archives-ouvertes.fr/hal-00953294

P. Valdez and A. Mehrabian, Effects of color on emotions., Journal of Experimental Psychology: General, vol.123, issue.4, pp.394-409, 1994.
DOI : 10.1037/0096-3445.123.4.394

F. Vernier, La multimodalité en sortie et son application à la visualisation de grandes quantités d'information, 2001.

G. Volpe, Computational models of expressive gesture in multimedia systems, 2003.

E. Vyzas and R. W. Picard, Offline and Online Recognition of Emotion Expression from Physiological Data, Emotion-Based Agent Architectures Workshop Notes, International Conference on Autonomous Agents, pp.135-142, 1999.

E. Vyzas and R. W. Picard, Affective pattern classification. Emotional and Intelligent : The Tangled Knot of Cognition, pp.176-182, 1998.

J. H. Walker, L. Sproull, and R. Subramani, Using a human face in an interface, CHI '94 : Proceedings of the SIGCHI conference on Human factors in computing systems, pp.85-91, 1994.

H. G. Wallbott, Bodily expression of emotion, European Journal of Social Psychology, vol.34, issue.6, pp.879-896, 1998.
DOI : 10.1002/(SICI)1099-0992(1998110)28:6<879::AID-EJSP901>3.0.CO;2-W

H. G. Wallbott and K. R. Scherer, Cues and channels in emotion recognition., Journal of Personality and Social Psychology, vol.51, issue.4, pp.690-699, 1986.
DOI : 10.1037/0022-3514.51.4.690

N. Wang and S. Marsella, Introducing EVG: An Emotion Evoking Game, Lecture Notes in Computer Science, vol.4133, p.282, 2006.
DOI : 10.1007/11821830_23

G. Welch and E. Foxlin, Motion tracking: no silver bullet, but a respectable arsenal, IEEE Computer Graphics and Applications, vol.22, issue.6, pp.24-38, 2002.
DOI : 10.1109/MCG.2002.1046626

J. Wong and S. Cho, Facial emotion recognition by adaptive processing of tree structures, Proceedings of the 2006 ACM symposium on Applied computing , SAC '06, pp.23-30, 2006.
DOI : 10.1145/1141277.1141282

Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang, A survey of affect recognition methods, Proceedings of the ninth international conference on Multimodal interfaces , ICMI '07, pp.39-58, 2009.
DOI : 10.1145/1322192.1322216

Z. Zeng, J. Tu, M. Liu, T. Zhang, N. Rizzolo et al., Bimodal HCI-related affect recognition, Proceedings of the 6th international conference on Multimodal interfaces , ICMI '04, pp.137-143, 2004.
DOI : 10.1145/1027933.1027958

Z. Zeng, Y. Hu, M. Liu, Y. Fu, and T. S. Huang, Training combination strategy of multi-stream fused hidden Markov model for audio-visual affect recognition, Proceedings of the 14th annual ACM international conference on Multimedia , MULTIMEDIA '06, pp.65-68, 2006.
DOI : 10.1145/1180639.1180661