Publications

M. K. Rajagopal, P. Horain, and C. Pelachaud, Animating a conversational agent with user expressivity, Proc. 11th Int. Conf. on Intelligent Virtual Agents, pp.464-465, 2011.

M. K. Rajagopal, P. Horain, and C. Pelachaud, Virtually Cloning Real Human with Motion Style, Proceedings 3rd Int. Conf. on Intelligent Human Computer Interaction, 2011.

D. A. Gómez Jáuregui, P. Horain, M. K. Rajagopal, and S. S. Karri, Real-time particle filtering with heuristics for 3D motion capture by monocular vision, 2010 IEEE International Workshop on Multimedia Signal Processing, 2010.
DOI : 10.1109/MMSP.2010.5662008

URL : https://hal.archives-ouvertes.fr/hal-01314817

D. Glowinski, M. Mancas, P. Brunet, F. Cavallero, C. Machy et al., Toward a model of computational attention based on expressive behavior: applications to cultural heritage scenarios, Proc. 5th Int. Summer Workshop on Multimodal Interfaces (eNTERFACE'09), pp.71-78, 2010.

M. Mancas, D. Glowinski, P. Brunet, F. Cavallero, C. Machy et al., Hypersocial museum: addressing the social interaction challenge with museum scenarios and attention-based approaches, QPSR of the numediart research program, pp.91-96, 2009.

References

A. Agarwal and B. Triggs, Recovering 3D human pose from monocular images, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.28, issue.1, pp.44-58, 2006.
DOI : 10.1109/TPAMI.2006.21

URL : https://hal.archives-ouvertes.fr/inria-00548619

G. W. Allport and P. E. Vernon, Studies in expressive movements, 1933.
DOI : 10.1037/11566-000

O. Arikan and D. Forsyth, Interactive motion generation from examples, Proc. ACM SIGGRAPH 2002, pp.483-490, 2002.

N. I. Badler, M. S. Palmer, and R. Bindiganavale, Animation control for realtime virtual humans, Communications of the ACM, vol.42, issue.8, 1999.

T. Bänziger, H. Pirker, and K. R. Scherer, GEMEP – GEneva Multimodal Emotion Portrayals: A corpus for the study of multimodal emotional expressions, 5th International Conference on Language Resources and Evaluation, 2006.

E. Bevacqua, M. Mancini, R. Niewiadomski, and C. Pelachaud, An expressive ECA showing complex emotions, Proceedings of the AISB Annual Convention, pp.208-216, 2007.

R. T. Boone and J. G. Cunningham, Children's decoding of emotion in expressive body movement: The development of cue attunement., Developmental Psychology, vol.34, issue.5, pp.1007-1016, 1998.
DOI : 10.1037/0012-1649.34.5.1007

M. Brand and A. Hertzmann, Style machines, Proceedings of the 27th annual conference on Computer graphics and interactive techniques , SIGGRAPH '00, pp.183-192, 2000.
DOI : 10.1145/344779.344865

T. W. Calvert and A. E. Chapman, Analysis and synthesis of human movement, Handbook of Pattern Recognition and Image Processing, pp.432-474, 1994.

A. Camurri, P. Coletta, A. Massari, B. Mazzarino, M. Peri et al., Toward real-time multimodal processing: EyesWeb 4, Proceedings of AISB 2004 Convention: Motion, Emotion and Cognition, 2004.

A. Camurri, I. Lagerlöf, and G. Volpe, Recognizing emotion from dance movement: comparison of spectator recognition and automated techniques, International Journal of Human-Computer Studies, vol.59, issue.1-2, pp.213-225, 2003.
DOI : 10.1016/S1071-5819(03)00050-8

G. Caridakis, A. Raouzaiou, E. Bevacqua, M. Mancini, K. Karpouzis et al., Virtual agent multimodal mimicry of humans, Language Resources and Evaluation, Special issue on Multimodal Corpora for Modelling Human Multimodal Behavior, pp.367-388, 2008.
URL : https://hal.archives-ouvertes.fr/halshs-00642892

J. Cassell, J. Sullivan, S. Prevost, and E. F. Churchill, Embodied Conversational Agents, 2000.

G. Castellano and M. Mancini, Analysis of Emotional Gestures for the Generation of Expressive Copying Behaviour in an Embodied Agent, Gesture-Based Human-Computer Interaction and Simulation, pp.193-198, 2009.

R. Chellappa, A. K. Roy-chowdhury, and A. Kale, Human Identification using Gait and Face, 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp.1-2, 2007.
DOI : 10.1109/CVPR.2007.383523

D. Chi, M. Costa, L. Zhao, and N. Badler, The EMOTE model for effort and shape, Proceedings of the 27th annual conference on Computer graphics and interactive techniques , SIGGRAPH '00, pp.173-182, 2000.
DOI : 10.1145/344779.352172

J. J. Craig, Forward Kinematics, in Introduction to Robotics: Mechanics and Control, 1986.

D. Efron, Gesture and environment, 1941.

M. Ekinci, A New Approach for Human Identification Using Gait Recognition, ICCSA, vol.3, pp.1216-1225, 2006.
DOI : 10.1007/11751595_128

P. Ekman, Emotion in the human face, 1982.

A. Elgammal and C. S. Lee, Separating style and content on a nonlinear manifold, Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004., pp.478-485, 2004.
DOI : 10.1109/CVPR.2004.1315070

A. Eliëns, Z. Huang, and C. Visser, A platform for Embodied Conversational Agents based on Distributed Logic Programming, AAMAS Workshop on "Embodied conversational agents – Let's specify and evaluate them!", 2002.

P. M. Fitts, The information capacity of the human motor system in controlling the amplitude of movement, Journal of Experimental Psychology, vol.47, issue.6, pp.381-391, 1954.

M. Furniss, Motion Capture: An Overview. Retrieved from http, 2000.

P. E. Gallaher, Individual differences in nonverbal behavior: Dimensions of style., Journal of Personality and Social Psychology, vol.63, issue.1, pp.133-145, 1992.
DOI : 10.1037/0022-3514.63.1.133

C. R. Gallistel, The Organization of Action : A New Synthesis, 1980.

D. A. Gómez Jáuregui, P. Horain, M. K. Rajagopal, and S. S. Karri, Real-time particle filtering with heuristics for 3D motion capture by monocular vision, 2010 IEEE International Workshop on Multimedia Signal Processing, pp.139-144, 2010.
DOI : 10.1109/MMSP.2010.5662008

R. C. Gonzalez and R. E. Woods, Digital Image Processing, 1992.

K. Grochow, S. L. Martin, A. Hertzmann, and Z. Popović, Style-based inverse kinematics, Proceedings of ACM SIGGRAPH 2004, pp.522-531, 2004.
DOI : 10.1145/1015706.1015755

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.58.2119

B. Hartmann, M. Mancini, and C. Pelachaud, Formational Parameters and Adaptive Prototype Instantiation for MPEG-4 Compliant Gesture Synthesis. Computer Animation, 2002.

B. Hartmann, M. Mancini, and C. Pelachaud, Implementing Expressive Gesture Synthesis for Embodied Conversational Agents, Gesture Workshop, LNAI, 2005.

R. R. Hassin, J. S. Uleman, and J. A. Bargh, The New Unconscious, 2005.

E. Hsu, K. Pulli, and J. Popović, Style translation for human motion, ACM Transactions on Graphics, vol.24, issue.3, pp.1082-1089, 2005.
DOI : 10.1145/1073204.1073315

InterSense Inc., InterSense IS-900: Sensing every move. Retrieved from InterSense Inc., 2010.

G. Johansson, Visual perception of biological motion and a model for its analysis, Perception & Psychophysics, vol.4, issue.2, pp.201-211, 1973.
DOI : 10.3758/BF03212378

B. Julesz, Textons, the elements of texture perception, and their interactions, Nature, vol.290, issue.5802, pp.91-97, 1981.
DOI : 10.1038/290091a0

A. Kendon, Gesture: Visible Action as Utterance, 2004.
DOI : 10.1017/CBO9780511807572

Microsoft, Microsoft Kinect.

S. Kita, I. van Gijn, and H. van der Hulst, Movement Phase in Signs and Co-Speech Gestures, and Their Transcriptions by Human Coders, International Gesture Workshop on Gesture and Sign Language in Human-Computer Interaction, pp.23-35, 1998.

E. Klima and U. Bellugi, The signs of language, 1979.

S. Kopp, T. Sowa, and I. Wachsmuth, Imitation Games with an Artificial Agent: From Mimicking to Understanding Shape-Related Iconic Gestures, Gesture Workshop, pp.436-447, 2003.
DOI : 10.1007/978-3-540-24598-8_40

L. Kovar, M. Gleicher, and F. Pighin, Motion Graphs, Proc. ACM SIGGRAPH 2002, pp.473-482, 2002.

R. V. Laban, The Mastery of Movement, 1960.

J. Lee, J. Chai, P. S. Reitsma, J. K. Hodgins, and N. S. Pollard, Interactive control of avatars animated with human motion data, Proc. ACM SIGGRAPH 2002, pp.491-500, 2002.

L. Lee and E. Grimson, Gait analysis for recognition and classification, Proceedings of Fifth IEEE International Conference on Automatic Face Gesture Recognition, pp.734-742, 2002.
DOI : 10.1109/AFGR.2002.1004148

Y. Li, T. Wang, and H. Shum, Motion texture, Proceedings of the 29th annual conference on Computer graphics and interactive techniques , SIGGRAPH '02, pp.465-472, 2002.
DOI : 10.1145/566570.566604

Linden Research, Inc., Second Life. Retrieved from http://secondlife

M. Mancini, R. Bresin, and C. Pelachaud, A virtual-agent head driven by musical performance, IEEE Transactions on Audio, Speech, and Language Processing, vol.15, pp.1833-1841, 2007.

D. McNeill, Hand and Mind: What Gestures Reveal about Thought, 1992.

A. Mehrabian and M. Wiener, Decoding of inconsistent communications., Journal of Personality and Social Psychology, vol.6, issue.1, pp.109-114, 1967.
DOI : 10.1037/h0024532

M. de Meijer, The contribution of general features of body movement to the attribution of emotions, Journal of Nonverbal Behavior, vol.13, issue.4, pp.247-268, 1989.
DOI : 10.1007/BF00990296

Mesa Imaging AG, SwissRanger SR4000 – miniature 3D time-of-flight range camera, 2008.

T. B. Moeslund, A. Hilton, and V. Krüger, A survey of advances in vision-based human motion capture and analysis, Computer Vision and Image Understanding, vol.104, issue.2-3, pp.90-126, 2006.
DOI : 10.1016/j.cviu.2006.08.002

R. Niewiadomski, S. Hyniewska, and C. Pelachaud, Modeling Emotional Expressions as Sequences of Behaviors, International Conference on Intelligent Virtual Agents (IVA), LNCS 5773, pp.316-322, 2009.
DOI : 10.1007/978-3-642-04380-2_34

H. Noot and Z. Ruttkay, The GESTYLE Language. International Journal of Human-Computer Studies -Special issue: Subtle expressivity for characters and robots, 2005.

OptiTrack, OptiTrack optical motion capture system. Retrieved from http, 2010.

C. Pelachaud, Modelling multimodal expression of emotion in a virtual agent, Philosophical Transactions of the Royal Society B: Biological Sciences, vol.364, issue.1535, pp.3539-3548, 2009.

URL : https://hal.archives-ouvertes.fr/hal-00457448

C. Pelachaud, Multimodal expressive embodied conversational agents, ACM Multimedia, Singapore, pp.683-689, 2005.
DOI : 10.1145/1101149.1101301

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.453.716

C. Pelachaud and I. Poggi, Subtleties of facial expressions in embodied agents, The Journal of Visualization and Computer Animation, vol.2, issue.5, pp.301-312, 2002.
DOI : 10.1002/vis.299

I. Poggi, Mind, hands, face, and body: A sketch of a goal and belief view of multimodal communication, 2007.
DOI : 10.1515/9783110261318.627


I. Poggi, Towards the lexicon and Alphabet of Gesture, Gaze, and Touch. Virtual Symposium http, 2001.

I. Poggi and C. Pelachaud, Emotional Meaning and Expression in Animated Faces, in Affective Interactions: Towards a New Generation of Computer Interfaces, 2000.

I. Poggi and C. Pelachaud, Persuasion and the expressivity of gestures in humans and machines, 2008.
DOI : 10.1093/acprof:oso/9780199231751.003.0017

R. Polana and R. Nelson, Low level recognition of human motion, IEEE Workshop on Motion of Non-Rigid and Articulated Objects, Austin, pp.77-82, 1994.

F. E. Pollick, The Features People Use to Recognize Human Movement Style, Gesture-Based Communication in Human-Computer Interaction, pp.10-19, 2003.
DOI : 10.1007/978-3-540-24598-8_2

R. Poppe, Vision-based human motion analysis: An overview, Computer Vision and Image Understanding, vol.108, issue.1-2, pp.4-18, 2007.
DOI : 10.1016/j.cviu.2006.10.016

S. Prillwitz, R. Leven, H. Zienert, T. Hanke, and J. Henning, HamNoSys Version 2.0, Hamburg Notation System for Sign Languages: An Introductory Guide, 1989.

F. K. Quek, Eyes in the interface, Image and Vision Computing, vol.13, issue.6, pp.511-525, 1995.
DOI : 10.1016/0262-8856(95)94384-C

F. Quek, D. Mcneill, R. Bryll, C. Kirbas, and H. Arslan, Gesture, speech, and gaze cues for discourse segmentation, Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No.PR00662), pp.247-254, 2000.
DOI : 10.1109/CVPR.2000.854800

E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, Color transfer between images, IEEE Computer Graphics and Applications, vol.21, issue.4, pp.34-41, 2001.
DOI : 10.1109/38.946629

SoftKinetic, SoftKinetic Announces First Affordable Time-of-Flight Depth-Sensing Camera for Commercial Use, 2011.

W. Stokoe, Sign Language Structure, Annual Review of Anthropology, vol.9, issue.1, 1960.
DOI : 10.1146/annurev.an.09.100180.002053

J. B. Tenenbaum and W. T. Freeman, Separating Style and Content with Bilinear Models, Neural Computation, vol.12, issue.6, pp.1247-1283, 2000.

D. Thalmann, Challenges for the Research in Virtual Humans, Workshop on Achieving Human-Like Behaviour in Interactive Animated Agents, AGENTS, 2000.

N. Thalmann and D. Thalmann, Complex models for animating synthetic actors, IEEE Computer Graphics and Applications, vol.11, issue.5, pp.32-44, 1991.
DOI : 10.1109/38.90566

N. Thalmann and L. C. Jain, New Advances in Virtual Humans: Artificial Intelligence Environment, 2008.

S. Thrun and L. Pratt, Learning To Learn, 1998.
DOI : 10.1007/978-1-4615-5529-2

R. Urtasun, D. J. Fleet, and P. Fua, 3D People Tracking with Gaussian Process Dynamical Models, 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Volume 1 (CVPR'06), pp.238-245, 2006.
DOI : 10.1109/CVPR.2006.15

R. Urtasun, P. Glardon, R. Boulic, D. Thalmann, and P. Fua, Style-Based Motion Synthesis, Computer Graphics Forum, vol.23, issue.4, pp.799-812, 2004.

H. G. Wallbott, Bodily expression of emotion, European Journal of Social Psychology, vol.34, issue.6, pp.879-896, 1998.
DOI : 10.1002/(SICI)1099-0992(1998110)28:6<879::AID-EJSP901>3.0.CO;2-W

H. G. Wallbott, Hand movement quality: A neglected aspect of nonverbal behavior in clinical judgment and person perception, Journal of Clinical Psychology, vol.41, issue.3, pp.345-359, 1985.
DOI : 10.1002/1097-4679(198505)41:3<345::AID-JCLP2270410307>3.0.CO;2-9

H. G. Wallbott and K. R. Scherer, Cues and channels in emotion recognition., Journal of Personality and Social Psychology, vol.51, issue.4, pp.690-699, 1986.
DOI : 10.1037/0022-3514.51.4.690

J. M. Wang, D. J. Fleet, and A. Hertzmann, Gaussian Process Dynamical Models for Human Motion, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.30, issue.2, pp.283-298, 2008.
DOI : 10.1109/TPAMI.2007.1167

X. Wu, L. Ma, C. Zheng, Y. Chen, and K. Huang, On-Line Motion Style Transfer, International Conference on Entertainment Computing. 4161, pp.268-279, 2006.
DOI : 10.1007/11872320_32

L. Zhao, Synthesis and Acquisition of Laban Movement Analysis Qualitative Parameters for Communicative Gestures, 2001.

J. Zhu and P. Thagard, Emotion and action, Philosophical Psychology, vol.15, issue.1, pp.19-36, 2002.