J. Aggarwal and Q. Cai, Human motion analysis: a review, Proceedings IEEE Nonrigid and Articulated Motion Workshop, pp.90-102, 1997.

G. Ananthakrishnan, D. Neiberg, and O. Engwall, In search of Nonuniqueness in the Acoustic-to-Articulatory Mapping, Proceedings of Interspeech, pp.2799-2802, 2009.

F. Anderson and W. F. Bischof, Learning and performance with gesture guides, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, pp.1109-1118, 2013.
DOI : 10.1145/2470654.2466143

M. Anderson, Embodied Cognition: A field guide, Artificial Intelligence, vol.149, issue.1, pp.91-130, 2003.
DOI : 10.1016/S0004-3702(03)00054-7

C. Appert and S. Zhai, Using strokes as command shortcuts, Proceedings of the 27th international conference on Human factors in computing systems, CHI 09, pp.2289-2298, 2009.
DOI : 10.1145/1518701.1519052

URL : https://hal.archives-ouvertes.fr/inria-00538372

D. Arfib, J. M. Couturier, L. Kessous, and V. Verfaille, Strategies of mapping between gesture data and synthesis model parameters using perceptual spaces, Organised Sound, vol.7, issue.02, pp.127-144, 2002.
DOI : 10.1017/S1355771802002054

URL : https://hal.archives-ouvertes.fr/hal-00166439

B. Argall and A. Billard, Learning from Demonstration and Correction via Multiple Modalities for a Humanoid Robot, Proceedings of the International Conference SKILLS, pp.1-4, 2011.
DOI : 10.1051/bioconf/20110100003

B. D. Argall, S. Chernova, M. Veloso, and B. Browning, A survey of robot learning from demonstration, Robotics and Autonomous Systems, vol.57, issue.5, pp.469-483, 2009.
DOI : 10.1016/j.robot.2008.10.024

P. Arias, Description et synthèse sonore dans le cadre de l'apprentissage mouvement-son par démonstration, 2014.

T. Artières, S. Marukatat, and P. Gallinari, Online Handwritten Shape Recognition Using Segmental Hidden Markov Models, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.29, issue.2, pp.205-217, 2007.
DOI : 10.1109/TPAMI.2007.38

M. Astrinaki, N. d'Alessandro, B. Picart, T. Drugman, and T. Dutoit, Reactive and continuous control of HMM-based speech synthesis, 2012 IEEE Spoken Language Technology Workshop (SLT), pp.252-257, 2012.
DOI : 10.1109/SLT.2012.6424231

M. Astrinaki, N. d'Alessandro, and T. Dutoit, MAGE: A Platform for Tangible Speech Synthesis, Proceedings of the International Conference on New Interfaces for Musical Expression, 2012.

J. Aucouturier and F. Pachet, Jamming with Plunderphonics: Interactive concatenative synthesis of music, Journal of New Music Research, vol.35, issue.1, pp.35-50, 2006.

Y. Baram and A. Miller, Auditory feedback control for improvement of gait in patients with Multiple Sclerosis, Journal of the Neurological Sciences, vol.254, issue.1-2, pp.90-94, 2007.
DOI : 10.1016/j.jns.2007.01.003

I. Bartenieff, Body movement: Coping with the environment, Routledge, 1980.

O. Bau and W. E. Mackay, OctoPocus, Proceedings of the 21st annual ACM symposium on User interface software and technology, UIST '08, pp.37-46, 2008.
DOI : 10.1145/1449715.1449724

G. Beller, The Synekine Project, Proceedings of the 2014 International Workshop on Movement and Computing, MOCO '14, pp.66-69, 2014.
DOI : 10.1145/2617995.2618007

R. Bencina, The metasurface: applying natural neighbour interpolation to two-to-many mapping, Proceedings of International Conference on New Interfaces for Musical Expression, NIME'05, pp.101-104, 2005.

R. Bencina, D. Wilde, and S. Langley, Gesture-sound experiments: Process and mappings, Proceedings of the International Conference on New Interfaces for Musical Expression, NIME'08, pp.197-202, 2008.

Y. Bengio, Input-output HMMs for sequence processing, IEEE Transactions on Neural Networks, vol.7, issue.5, pp.1231-1249, 1996.
DOI : 10.1109/72.536317

P. A. Best, F. Levy, J. P. Fried, and F. Leventhal, Dance and Other Expressive Art Therapies: When Words Are Not Enough, Dance Research: The Journal of the Society for Dance Research, 1998.

F. Bettens and T. Todoroff, Real-time DTW-based gesture recognition external object for Max/MSP and PureData, Proceedings of the SMC 2009 Conference, pp.30-35, 2009.

F. Bevilacqua, R. Muller, and N. Schnell, MnM: a Max/MSP mapping toolbox, Proceedings of International Conference on New Interfaces for Musical Expression, NIME'05, p.10, 2005.
URL : https://hal.archives-ouvertes.fr/hal-01161330

F. Bevilacqua, B. Zamborlin, A. Sypniewski, N. Schnell, F. Guédy et al., Continuous Realtime Gesture Following and Recognition, Gesture in Embodied Communication and Human-Computer Interaction, pp.73-84, 2010.
DOI : 10.1007/978-3-642-12553-9_7

URL : https://hal.archives-ouvertes.fr/hal-01106955

F. Bevilacqua, N. Schnell, N. Rasamimanana, B. Zamborlin, and F. Guédy, Online Gesture Analysis and Control of Audio Processing, Musical Robots and Interactive Multimodal Systems, pp.127-142, 2011.
DOI : 10.1007/978-3-642-22291-7_8

URL : https://hal.archives-ouvertes.fr/hal-01106956

A. Billard, S. Calinon, R. Dillmann, and S. Schaal, Robot Programming by Demonstration, Springer Handbook of Robotics, 2008.
DOI : 10.1007/978-3-540-30301-5_60

J. Bilmes, A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models, Tech. Rep, 1998.

J. Bloit and X. Rodet, Short-time Viterbi for online HMM decoding: Evaluation on a real-time phone recognition task, 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, 2008.
DOI : 10.1109/ICASSP.2008.4518061

URL : https://hal.archives-ouvertes.fr/hal-01161222

J. Bloit, N. Rasamimanana, and F. Bevilacqua, Modeling and segmentation of audio descriptor profiles with segmental models, Pattern Recognition Letters, vol.31, issue.12, pp.1507-1513, 2010.
DOI : 10.1016/j.patrec.2009.11.003

URL : https://hal.archives-ouvertes.fr/hal-01106964

I. Bowler, A. Purvis, P. Manning, and N. Bailey, On mapping n articulation onto m synthesiser-control parameters, Proceedings of the International Computer Music Conference, pp.181-184, 1990.

M. Brand, Voice puppetry, Proceedings of the 26th annual conference on Computer graphics and interactive techniques , SIGGRAPH '99, pp.21-28, 1999.
DOI : 10.1145/311535.311537

M. Brand and A. Hertzmann, Style machines, Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '00, pp.183-192, 2000.

B. Bruegge, C. Teschner, and P. Lachenmaier, Pinocchio, Proceedings of the international conference on Advances in computer entertainment technology , ACE '07, pp.294-295, 2007.
DOI : 10.1145/1255047.1255132

C. Busso, Z. Deng, U. Neumann, and S. Narayanan, Natural head motion synthesis driven by acoustic prosodic features, Computer Animation and Virtual Worlds, vol.16, issue.3-4, pp.283-290, 2005.
DOI : 10.1002/cav.80

W. Buxton, Sketching User Experiences: Getting the Design Right and the Right Design, 2010.

C. Cadoz, Instrumental gesture and musical composition, Proceedings of the International computer music conference, 1988.
URL : https://hal.archives-ouvertes.fr/hal-00491738

C. Cadoz, A. Luciani, and J. Florens, CORDIS-ANIMA: A Modeling and Simulation System for Sound and Image Synthesis: The General Formalism, Computer Music Journal, vol.17, issue.1, pp.19-29, 1993.
DOI : 10.2307/3680567

URL : https://hal.archives-ouvertes.fr/hal-01234319

S. Calinon, Continuous extraction of task constraints in a robot programming by demonstration framework, PhD Dissertation, École Polytechnique Fédérale de Lausanne, 2007.

S. Calinon, A. Pistillo, and D. G. Caldwell, Encoding the time and space constraints of a task in explicit-duration Hidden Markov Model, 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.3413-3418, 2011.
DOI : 10.1109/IROS.2011.6094418

S. Calinon, F. Guenter, and A. Billard, On Learning, Representing, and Generalizing a Task in a Humanoid Robot, IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics), vol.37, issue.2, pp.286-298, 2007.
DOI : 10.1109/TSMCB.2006.886952

S. Calinon, F. D'Halluin, E. Sauser, D. Caldwell, and A. Billard, Learning and Reproduction of Gestures by Imitation, IEEE Robotics & Automation Magazine, vol.17, issue.2, pp.44-54, 2010.
DOI : 10.1109/MRA.2010.936947

S. Calinon, Z. Li, T. Alizadeh, N. G. Tsagarakis, and D. G. Caldwell, Statistical dynamical systems for skills acquisition in humanoids, 2012 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2012), pp.323-329, 2012.
DOI : 10.1109/HUMANOIDS.2012.6651539

S. Calinon, D. Bruno, and D. G. Caldwell, A task-parameterized probabilistic model with minimal intervention control, 2014 IEEE International Conference on Robotics and Automation (ICRA), pp.3339-3344, 2014.
DOI : 10.1109/ICRA.2014.6907339

A. Camurri, S. Hashimoto, and M. Ricchetti, EyesWeb: Toward Gesture and Affect Recognition in Interactive Dance and Music Systems, Computer Music Journal, vol.24, issue.1, pp.57-69, 2000.

A. Camurri, Movement and Gesture in Intelligent interactive Music Systems, Trends in Gestural Control of Music, pp.95-110, 2000.

A. Camurri, M. Ricchetti, and R. Trocca, EyesWeb-toward gesture and affect recognition in dance/music interactive systems, Proceedings IEEE International Conference on Multimedia Computing and Systems, pp.643-648, 2000.

A. Camurri, B. Mazzarino, and M. Ricchetti, Multimodal analysis of expressive gesture in music and dance performances, Gesture-Based Communication in Human-Computer Interaction, pp.20-39, 2004.

B. Caramiaux, Studies on the Relationship between Gesture and Sound in Musical Performance, PhD Dissertation, STMS IRCAM-CNRS-UPMC, Université Pierre et Marie Curie, Paris, 2012.

B. Caramiaux, F. Bevilacqua, T. Bianco, N. Schnell, O. Houix et al., The Role of Sound Source Perception in Gestural Sound Description, ACM Transactions on Applied Perception, vol.11, issue.1, pp.1-19, 2014.
DOI : 10.1145/2536811

URL : https://hal.archives-ouvertes.fr/hal-01106930

B. Caramiaux, Motion Modeling for Expressive Interaction -A Design Proposal using Bayesian Adaptive Systems, Proceedings of the International Workshop on Movement and Computing, pp.76-81, 2014.

B. Caramiaux and A. Tanaka, Machine Learning of Musical Gestures, Proceedings of the International Conference on New Interfaces for Musical Expression, 2013.

B. Caramiaux, F. Bevilacqua, and N. Schnell, Analysing Gesture and Sound Similarities with a HMM-based Divergence Measure, Proceedings of the Sound and Music Computing Conference, SMC, 2010.
URL : https://hal.archives-ouvertes.fr/hal-01161273

B. Caramiaux, M. M. Wanderley, and F. Bevilacqua, Segmenting and Parsing Instrumentalists' Gestures, Journal of New Music Research, vol.41, issue.1, pp.1-27, 2012.
DOI : 10.1080/09298215.2011.643314

URL : https://hal.archives-ouvertes.fr/hal-01161436/file/index.pdf

B. Caramiaux, J. Françoise, N. Schnell, and F. Bevilacqua, Mapping Through Listening, Computer Music Journal, vol.38, issue.3, pp.34-48, 2014.

URL : https://hal.archives-ouvertes.fr/hal-01106965

B. Caramiaux, N. Montecchio, A. Tanaka, and F. Bevilacqua, Adaptive Gesture Recognition with Variation Estimation for Interactive Systems, ACM Transactions on Interactive Intelligent Systems, vol.4, issue.4, 2014.
DOI : 10.1145/2643204

URL : https://hal.archives-ouvertes.fr/hal-01266046

M. Cartwright and B. Pardo, SynthAssist, Proceedings of the ACM International Conference on Multimedia, MM '14, pp.363-366, 2014.
DOI : 10.1145/2647868.2654880

N. Castagné and C. Cadoz, GENESIS: a Friendly Musician-Oriented Environment for Mass-Interaction Physical Modeling, Proceedings of the International Computer Music Conference, pp.330-337, 2002.

N. Castagné, C. Cadoz, J. Florens, and A. Luciani, Haptics in computer music: a paradigm shift, Proceedings of EuroHaptics, 2004.
URL : https://hal.archives-ouvertes.fr/hal-00484383

R. Caussé, J. Bensoam, and N. Ellis, Modalys, a physical modeling synthesizer: More than twenty years of researches, developments, and musical uses, The Journal of the Acoustical Society of America, vol.130, issue.4, 2011.
DOI : 10.1121/1.3654475

URL : https://hal.archives-ouvertes.fr/hal-01106782

T. Chen, Audiovisual speech processing, IEEE Signal Processing Magazine, vol.18, issue.1, pp.9-21, 2001.
DOI : 10.1109/79.911195

K. Choi, Y. Luo, and J. Hwang, Hidden Markov model inversion for audio-to-visual conversion in an MPEG-4 facial animation system, The Journal of VLSI Signal Processing, vol.29, issue.1/2, pp.51-61, 2001.
DOI : 10.1023/A:1011171430700

A. Cont, T. Coduys, and C. Henry, Real-time Gesture Mapping in Pd Environment using Neural Networks, Proceedings of International Conference on New Interfaces for Musical Expression, pp.39-42, 2004.

K. Dautenhahn and C. L. Nehaniv, The correspondence problem, Imitation in Animals and Artifacts, MIT Press, 2002.

M. Demoucron, On the control of virtual violins -Physical modelling and control of bowed string instruments, UPMC (Paris) and KTH, 2008.
URL : https://hal.archives-ouvertes.fr/tel-00349920

L. Deng and X. Li, Machine Learning Paradigms for Speech Recognition: An Overview, IEEE Transactions on Audio, Speech, and Language Processing, vol.21, issue.5, pp.1060-1089, 2013.
DOI : 10.1109/TASL.2013.2244083

P. Depalle, G. Garcia, and X. Rodet, Tracking of partials for additive sound synthesis using hidden Markov models, IEEE International Conference on Acoustics Speech and Signal Processing, pp.4-7, 1993.
DOI : 10.1109/ICASSP.1993.319096

J. Dines, J. Yamagishi, and S. King, Measuring the Gap Between HMM-Based ASR and TTS, IEEE Journal of Selected Topics in Signal Processing, vol.4, issue.6, pp.1046-1058, 2010.
DOI : 10.1109/JSTSP.2010.2079315

Y. Ding, Data-Driven Expressive Animation Model of Speech and Laughter for an Embodied Conversational Agent, PhD Dissertation, 2014.
URL : https://hal.archives-ouvertes.fr/tel-01354335

Y. Ding, M. Radenen, T. Artières, and C. Pelachaud, Speech-driven eyebrow motion synthesis with contextual Markovian models, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp.3756-3760, 2013.
DOI : 10.1109/ICASSP.2013.6638360

URL : https://hal.archives-ouvertes.fr/hal-01215185

P. Dourish, Where the Action Is: The Foundations of Embodied Interaction, MIT Press, 2004.

G. Dubus and R. Bresin, A Systematic Review of Mapping Strategies for the Sonification of Physical Quantities, PLoS ONE, vol.8, issue.12, e82491, 2013.
DOI : 10.1371/journal.pone.0082491.s002

T. Duong, D. Phung, H. Bui, and S. Venkatesh, Efficient duration and hierarchical modeling for human activity recognition, Artificial Intelligence, vol.173, issue.7-8, pp.830-856, 2009.
DOI : 10.1016/j.artint.2008.12.005

A. Effenberg, U. Fehse, and A. Weber, Movement Sonification: Audiovisual benefits on motor learning, BIO Web of Conferences, 2011.
DOI : 10.1051/bioconf/20110100022

S. Eickeler, A. Kosmala, and G. Rigoll, Hidden Markov model based continuous online gesture recognition, Proceedings of the Fourteenth International Conference on Pattern Recognition, pp.1206-1208, 1998.
DOI : 10.1109/ICPR.1998.711914

I. Ekman and M. Rinott, Using vocal sketching for designing sonic interactions, Proceedings of the 8th ACM Conference on Designing Interactive Systems, DIS '10, pp.123-131, 2010.
DOI : 10.1145/1858171.1858195

P. Esling and C. Agon, Multiobjective Time Series Matching for Audio Classification and Retrieval, IEEE Transactions on Audio, Speech, and Language Processing, vol.21, issue.10, pp.2057-2072, 2013.
DOI : 10.1109/TASL.2013.2265086

J. A. Fails and D. R. Olsen, Interactive machine learning, Proceedings of the 8th international conference on Intelligent user interfaces, IUI '03, pp.39-45, 2003.
DOI : 10.1145/604045.604056

S. Fasciani and L. Wyse, A Voice Interface for Sound Generators: adaptive and automatic mapping of gestures to sound, Proceedings of the International Conference on New Interfaces for Musical Expression, 2012.

S. Fdili Alaoui, B. Caramiaux, M. Serrano, and F. Bevilacqua, Movement qualities as interaction modality, Proceedings of the Designing Interactive Systems Conference, DIS '12, 2012.
DOI : 10.1145/2317956.2318071

URL : https://hal.archives-ouvertes.fr/hal-01161433

S. Fdili Alaoui, F. Bevilacqua, B. B. Pascual, and C. Jacquemin, Dance interaction with physical model visuals based on movement qualities, International Journal of Arts and Technology, vol.6, issue.4, pp.357-387, 2013.
DOI : 10.1504/IJART.2013.058284

S. Fels and G. Hinton, Glove-TalkII, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '95, 1995.
DOI : 10.1145/223904.223966

S. Ferguson, E. Schubert, and C. Stevens, Dynamic dance warping, Proceedings of the 2014 International Workshop on Movement and Computing, MOCO '14, pp.94-99, 2014.
DOI : 10.1145/2617995.2618012

R. Fiebrink, G. Wang, and P. Cook, Don't forget the laptop, Proceedings of the 7th international conference on New interfaces for musical expression, NIME '07, 2007.
DOI : 10.1145/1279740.1279771

R. Fiebrink, P. R. Cook, and D. Trueman, Play-along mapping of musical controllers, Proceedings of the International Computer Music Conference, 2009.

R. A. Fiebrink, Real-time Human Interaction with Supervised Learning Algorithms for Music Composition and Performance, PhD Dissertation, Princeton University, 2011.

S. Fine, Y. Singer, and N. Tishby, The hierarchical hidden Markov model: Analysis and applications, Machine Learning, vol.32, issue.1, pp.41-62, 1998.
DOI : 10.1023/A:1007469218079

J. Fogarty, D. Tan, A. Kapoor, and S. Winder, CueFlik, Proceeding of the twenty-sixth annual CHI conference on Human factors in computing systems , CHI '08, p.29, 2008.
DOI : 10.1145/1357054.1357061

J. Françoise, Realtime Segmentation and Recognition of Gestures using Hierarchical Markov Models, Master's Thesis, Université Pierre et Marie Curie, Ircam, 2011.

J. Françoise, B. Caramiaux, and F. Bevilacqua, A Hierarchical Approach for the Design of Gesture-to-Sound Mappings, Proceedings of the 9th Sound and Music Computing Conference, pp.233-240, 2012.

J. Françoise, N. Schnell, and F. Bevilacqua, Gesture-based control of physical modeling sound synthesis, Proceedings of the 21st ACM international conference on Multimedia, MM '13, pp.447-448, 2013.
DOI : 10.1145/2502081.2502262

J. Françoise, S. Fdili-alaoui, T. Schiphorst, and F. Bevilacqua, Vocalizing dance movement for interactive sonification of laban effort factors, Proceedings of the 2014 conference on Designing interactive systems, DIS '14, pp.1079-1082, 2014.
DOI : 10.1145/2598510.2598582

J. Françoise, N. Schnell, and F. Bevilacqua, MaD: Mapping by Demonstration for Continuous Sonification, ACM SIGGRAPH 2014 Emerging Technologies, SIGGRAPH '14, pp.1-161, 2014.

J. Françoise, N. Schnell, R. Borghesi, and F. Bevilacqua, Probabilistic Models for Designing Motion and Sound Relationships, Proceedings of the 2014 International Conference on New Interfaces for Musical Expression, pp.287-292

K. Franinović and S. Serafin, Sonic Interaction Design, MIT Press, 2013.

S. Fu, R. Gutierrez-Osuna, A. Esposito, P. Kakumanu, and O. Garcia, Audio/visual mapping with cross-modal hidden Markov models, IEEE Transactions on Multimedia, vol.7, issue.2, pp.243-252, 2005.

M. Gales and S. Young, The Application of Hidden Markov Models in Speech Recognition, Foundations and Trends in Signal Processing, vol.1, issue.3, pp.195-304, 2007.
DOI : 10.1561/2000000004

W. W. Gaver, Auditory Icons: Using Sound in Computer Interfaces, Human-Computer Interaction, vol.2, issue.2, pp.167-177, 1986.

W. W. Gaver, How Do We Hear in the World? Explorations in Ecological Acoustics, Ecological Psychology, vol.5, issue.4, pp.285-313, 1993.

S. Gelineck and N. Böttcher, An educational tool for fast and easy mapping of input devices to musical parameters, Audio Mostly, conference on interaction with sound, 2012.

Z. Ghahramani and M. I. Jordan, Supervised learning from incomplete data via an EM approach, Advances in Neural Information Processing Systems, p.26, 1994.

N. Gillian and R. Knapp, A Machine Learning Toolbox For Musician Computer Interaction, Proceedings of International Conference on New Interfaces for Musical Expression, p.220, 2011.

N. Gillian and J. A. Paradiso, Digito: A Fine-Grain Gesturally Controlled Virtual Musical Instrument, Proceedings of International Conference on New Interfaces for Musical Expression, 2012.

N. Gillian, B. Knapp, and S. O'Modhrain, Recognition Of Multivariate Temporal Musical Gestures Using N-Dimensional Dynamic Time Warping, Proceedings of International Conference on New Interfaces for Musical Expression, pp.337-342, 2011.

N. E. Gillian, Gesture Recognition for Musician Computer Interaction, PhD Dissertation, Faculty of Arts, Humanities and Social Sciences, Queen's University Belfast, 2011.

R. I. Godøy, Motor-Mimetic Music Cognition, Leonardo, vol.36, issue.4, pp.317-319, 2003.

R. I. Godøy, E. Haga, and A. Jensenius, Playing Air Instruments: Mimicry of Sound-producing Gestures by Novices and Experts, Gesture in Human-Computer Interaction and Simulation, pp.256-267, 2006.

R. I. Godøy, A. R. Jensenius, and K. Nymoen, Chunking in Music by Coarticulation, Acta Acustica united with Acustica, vol.96, issue.4, pp.690-700, 2010.
DOI : 10.3813/AAA.918323

R. Goudard, H. Genevois, E. Ghomi, and B. Doval, Dynamic Intermediate Models for audiographic synthesis, Proceedings of the Sound and Music Computing Conference, 2011.
URL : https://hal.archives-ouvertes.fr/hal-00601855

C. Goudeseune, Interpolated mappings for musical instruments, Organised Sound, vol.7, issue.2, pp.85-96, 2002.

T. Grossman, P. Dragicevic, and R. Balakrishnan, Strategies for accelerating on-line learning of hotkeys, Proceedings of the SIGCHI conference on Human factors in computing systems , CHI '07, p.1591, 2007.
DOI : 10.1145/1240624.1240865

G. Guerra-filho and Y. Aloimonos, A Language for Human Action, Computer, vol.40, issue.5, pp.42-51, 2007.
DOI : 10.1109/MC.2007.154

B. Hartmann, L. Abdulla, M. Mittal, and S. R. Klemmer, Authoring sensor-based interactions by demonstration with direct manipulation and pattern recognition, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07, p.145, 2007.

C. Henry, Physical modeling for Pure Data (PMPD) and real time interaction with an audio synthesis, Proceedings of the 2004 Sound and Music Computing Conference, 2004.

T. Hermann, J. Neuhoff, and A. Hunt, The Sonification Handbook, Logos Verlag, 2011.

G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed et al., Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups, IEEE Signal Processing Magazine, vol.29, issue.6, pp.82-97, 2012.
DOI : 10.1109/MSP.2012.2205597

G. Hofer, Speech-driven animation using multi-modal hidden Markov models, 2009.

D. M. Howard and S. Rimell, Real-Time Gesture-Controlled Physical Modelling Music Synthesis with Tactile Feedback, EURASIP Journal on Advances in Signal Processing, vol.2004, issue.7, pp.1001-1006, 2004.
DOI : 10.1155/S1110865704311182

T. Hueber and P. Badin, Statistical Mapping between Articulatory and Acoustic Data, Application to Silent Speech Interface and Visual Articulatory Feedback, Proceedings of the 1st International Workshop on Performative Speech and Singing Synthesis (p3s), 2011.
URL : https://hal.archives-ouvertes.fr/hal-00640395

A. Hunt and R. Kirk, Mapping Strategies for Musical Performance, Trends in Gestural Control of Music, pp.231-258, 2000.

A. Hunt and M. M. Wanderley, Mapping performer parameters to synthesis engines, Organised Sound, vol.7, issue.02, pp.97-108, 2002.
DOI : 10.1017/S1355771802002030

A. Hunt, M. M. Wanderley, and R. Kirk, Towards a Model for Instrumental Mapping in Expert Musical Interaction, Proceedings of the 2000 International Computer Music Conference, pp.209-212, 2000.
URL : https://hal.archives-ouvertes.fr/hal-01105532

A. J. Ijspeert, J. Nakanishi, H. Hoffmann, P. Pastor, and S. Schaal, Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors, Neural Computation, vol.25, issue.2, pp.328-373, 2013.

J. Janer, Singing-driven interfaces for sound synthesizers, PhD Dissertation, 2009.

A. Johnston, Interfaces for musical expression based on simulated physical models, 2009.

A. Johnston, B. Marks, and L. Candy, Sound controlled musical instruments based on physical models, Proceedings of the International Computer Music Conference, 2007.

A. Johnston, L. Candy, and E. Edmonds, Designing and evaluating virtual musical instruments: facilitating conversational user interaction, Design Studies, vol.29, issue.6, pp.556-571, 2008.
DOI : 10.1016/j.destud.2008.07.005

S. Jorda, M. Kaltenbrunner, G. Geiger, and R. Bencina, The reactable*, Proceedings of the international computer music conference (ICMC 2005), pp.579-582, 2005.

S. Jordà, Digital Lutherie: Crafting musical computers for new musics performance and improvisation, PhD Dissertation, 2005.

B. Juang and L. Rabiner, The segmental K-means algorithm for estimating parameters of hidden Markov models, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol.38, issue.9, pp.1639-1641, 1990.
DOI : 10.1109/29.60082

H. Kang, Recognition-based gesture spotting in video games, Pattern Recognition Letters, vol.25, issue.15, pp.1701-1714, 2004.
DOI : 10.1016/j.patrec.2004.06.016

H. Kawahara, I. Masuda-Katsuse, and A. de Cheveigné, Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: Possible role of a repetitive structure in sounds, Speech Communication, vol.27, issue.3-4, pp.187-207, 1999.
DOI : 10.1016/S0167-6393(98)00085-5

URL : https://hal.archives-ouvertes.fr/hal-01105608

G. Kellum and A. Crevoisier, A Flexible Mapping Editor for Multi-touch Musical Instruments, Proceedings of the International Conference on New Interfaces for Musical Expression, pp.242-245, 2009.

A. Kendon, Current issues in the study of gesture, The Biological Foundations of Gestures: Motor and Semiotic Aspects, pp.23-47, 1986.

E. Keogh and C. A. Ratanamahatana, Exact indexing of dynamic time warping, Knowledge and Information Systems, vol.7, issue.3, pp.358-386, 2005.

C. Kiefer, Musical Instrument Mapping Design with Echo State Networks, Proceedings of the International Conference on New Interfaces for Musical Expression, NIME'14, pp.293-298, 2014.

D. Kim, J. Song, and D. Kim, Simultaneous gesture segmentation and recognition based on forward spotting accumulative HMMs, Pattern Recognition, vol.40, issue.11, pp.3012-3026, 2007.
DOI : 10.1016/j.patcog.2007.02.010

D. Kirsh, Embodied cognition and the magical future of interaction design, ACM Transactions on Computer-Human Interaction, vol.20, issue.1, 2013.
DOI : 10.1145/2442106.2442109

D. Kirsh, D. Muntanyola, and R. Jao, Choreographic methods for creating novel, high quality dance, 5th International workshop on Design and Semantics of Form and Movement, 2009.

W. B. Knox, Learning from Human-Generated Reward, PhD Dissertation, The University of Texas at Austin, 2012.

E. Kohler, C. Keysers, M. A. Umiltà, L. Fogassi, V. Gallese et al., Hearing Sounds, Understanding Actions: Action Representation in Mirror Neurons, Science, vol.297, issue.5582, pp.846-848, 2002.
DOI : 10.1126/science.1070311

P. Kolesnik, Recognition, analysis and performance with expressive conducting gestures, Proceedings of the International Computer Music Conference, 2004.

P. Kolesnik and M. M. Wanderley, Implementation of the Discrete Hidden Markov Model in Max, FLAIRS Conference, pp.68-73, 2005.

D. Koller and N. Friedman, Probabilistic graphical models: principles and techniques, 2009.

T. Kulesza, Personalizing Machine Learning Systems with Explanatory Debugging, PhD Dissertation, 2014.

G. Kurtenbach and W. Buxton, User learning and performance with marking menus, Proceedings of the SIGCHI conference on Human factors in computing systems, pp.258-264, 1994.

C. P. Laguna and R. Fiebrink, Improving data-driven design and exploration of digital musical instruments, Proceedings of the extended abstracts of the 32nd annual ACM conference on Human factors in computing systems, CHI EA '14, pp.2575-2580, 2014.
DOI : 10.1145/2559206.2581327

G. Lakoff and M. Johnson, Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought, Basic Books, 1999.

E. W. Large, On synchronizing movements to music, Human Movement Science, vol.19, issue.4, pp.527-566, 2000.
DOI : 10.1016/S0167-9457(00)00026-9

J. J. LaViola, 3D Gestural Interaction: The State of the Field, ISRN Artificial Intelligence, vol.2013, 2013.

E. Lee and T. Nakra, You're the conductor: a realistic interactive conducting system for children, Proceedings of International Conference on New Interfaces for Musical Expression, pp.68-73, 2004.

H. Lee and J. H. Kim, An HMM-Based Threshold Model Approach for Gesture Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.21, issue.10, pp.961-973, 1999.

M. Lee, A. Freed, and D. Wessel, Neural networks for simultaneous classification and parameter estimation in musical instrument control, Adaptive and Learning Systems, pp.244-255, 1992.
DOI : 10.1117/12.139949

C. Leggetter and P. Woodland, Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models, Computer Speech & Language, vol.9, issue.2, pp.171-185, 1995.
DOI : 10.1006/csla.1995.0010

G. Lemaitre and D. Rocchesso, On the effectiveness of vocal imitations and verbal descriptions of sounds, The Journal of the Acoustical Society of America, vol.135, issue.2, pp.862-73, 2014.
DOI : 10.1121/1.4861245

URL : https://hal.archives-ouvertes.fr/hal-01107168

G. Lemaitre, O. Houix, N. Misdariis, and P. Susini, Listener expertise and sound identification influence the categorization of environmental sounds., Journal of Experimental Psychology: Applied, vol.16, issue.1, pp.16-32, 2010.
DOI : 10.1037/a0018762

URL : https://hal.archives-ouvertes.fr/hal-01106578

M. Leman, Embodied Music Cognition and Mediation Technology, MIT Press, 2008.

Y. Li and H. Shum, Learning dynamic audio-visual mapping with input-output Hidden Markov models, IEEE Transactions on Multimedia, vol.8, issue.3, pp.542-549, 2006.

L. O'Sullivan, D. Furlong, and F. Boland, Introducing CrossMapper: Another Tool for Mapping Musical Control Parameters, Proceedings of the International Conference on New Interfaces for Musical Expression, 2012.

J. D. Loehr and C. Palmer, Cognitive and biomechanical influences in pianists' finger tapping, Experimental Brain Research, vol.28, issue.4, pp.518-546, 2007.
DOI : 10.1007/s00221-006-0760-8

M. Logan, Dance in the schools: A personal account, Theory Into Practice, pp.200-302, 1984.

H. Lü and Y. Li, Gesture coder: a tool for programming multitouch gestures by demonstration, Proceedings of the ACM annual conference on Human Factors in Computing Systems, CHI '12, p.2875, 2012.

J. Malloch, S. Sinclair, and M. M. Wanderley, A Network-Based Framework for Collaborative Development and Performance of Digital Musical Instruments, Computer Music Modeling and Retrieval. Sense of Sounds, pp.401-425, 2008.
DOI : 10.1007/978-3-540-85035-9_28

D. S. Maranan, S. Fdili-alaoui, T. Schiphorst, P. Pasquier, P. Subyen et al., Designing for movement: evaluating computational models using LMA effort qualities, Proceedings of the 32nd annual ACM conference on Human factors in computing systems, CHI '14, pp.991-1000, 2014.

T. Marrin and R. Picard, The 'Conductor's Jacket': A Device for Recording Expressive Musical Gestures, Proceedings of the International Computer Music Conference, pp.215-219, 1998.

M. V. Mathews, The Radio Baton and Conductor Program, or: Pitch, the Most Important and Least Expressive Part of Music, Computer Music Journal, vol.15, issue.4, pp.37-46, 1991.
DOI : 10.2307/3681070

G. Mclachlan and D. Peel, Finite mixture models, 2004.
DOI : 10.1002/0471721182

H. M. Mentis and C. Johansson, Seeing movement qualities, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, pp.3375-3384, 2013.
DOI : 10.1145/2470654.2466462

D. J. Merrill and J. A. Paradiso, Personalization, expressivity, and learnability of an implicit mapping strategy for physical interfaces, CHI '05 Extended Abstracts on Human Factors in Computing Systems, p.2152, 2005.

G. A. Miller, The magical number seven, plus or minus two: Some limits on our capacity for processing information., Psychological Review, vol.101, issue.2, pp.343-352, 1956.
DOI : 10.1037/0033-295X.101.2.343

E. Miranda and M. Wanderley, New digital musical instruments: control and interaction beyond the keyboard, 2006.

S. Mitra and T. Acharya, Gesture recognition: A survey, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), pp.311-324, 2007.

P. Modler, Neural Networks for Mapping Hand Gestures to Sound Synthesis parameters, Trends in Gestural Control of Music, pp.301-314, 2000.

P. Modler, T. Myatt, and M. Saup, An Experimental Set of Hand Gestures for Expressive Control of Musical Parameters in Realtime, Proceedings of the International Conference on New Interfaces for Musical Expression, NIME'03. McGill University, pp.146-150, 2003.

A. Momeni and C. Henry, Dynamic Independent Mapping Layers for Concurrent Control of Audio and Video Synthesis, Computer Music Journal, vol.9, issue.2, pp.49-66, 2006.
DOI : 10.2307/3680283

K. P. Murphy and M. A. Paskin, Linear Time Inference in Hierarchical HMMs, Advances in Neural Information Processing Systems 14, Proceedings of the 2001 Conference, pp.833-840, 2001.

K. P. Murphy, Hidden semi-Markov models, 2002.

K. P. Murphy, Machine Learning: A Probabilistic Perspective, p.57, 2012.

M. A. Nacenta, Y. Kamber, Y. Qiang, and P. O. Kristensson, Memorability of pre-designed and user-defined gesture sets, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, 2013.
DOI : 10.1145/2470654.2466142

M. Nancel, J. Wagner, E. Pietriga, O. Chapuis, and W. Mackay, Mid-air pan-and-zoom on wall-sized displays, Proceedings of the 2011 annual conference on Human factors in computing systems, CHI '11, p.177, 2011.
DOI : 10.1145/1978942.1978969

URL : https://hal.archives-ouvertes.fr/hal-00559171

U. Oh and L. Findlater, The challenges and potential of end-user gesture customization, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, p.1129, 2013.
DOI : 10.1145/2470654.2466145

U. Oh, S. K. Kane, and L. Findlater, Follow that sound, Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS '13, pp.1-8, 2013.
DOI : 10.1145/2513383.2513455

J. K. O'Regan and A. Noë, A sensorimotor account of vision and visual consciousness, Behavioral and Brain Sciences, vol.24, issue.05, pp.939-973, 2001.
DOI : 10.1017/S0140525X01000115

D. Ormoneit and V. Tresp, Improved Gaussian Mixture Density Estimates Using Bayesian Penalty Terms and Network Averaging, Advances in Neural Information Processing Systems, pp.542-548, 1996.

M. Ostendorf, V. Digalakis, and O. Kimball, From HMM's to segment models: a unified view of stochastic modeling for speech recognition, IEEE Transactions on Speech and Audio Processing, pp.360-378, 1996.
DOI : 10.1109/89.536930

S. Park and J. Aggarwal, A hierarchical Bayesian network for event recognition of human actions and interactions, Multimedia Systems, vol.22, issue.2, pp.164-179, 2004.
DOI : 10.1007/s00530-004-0148-1

K. Patel, N. Bancroft, S. M. Drucker, J. Fogarty, A. J. Ko et al., Gestalt, Proceedings of the 23nd annual ACM symposium on User interface software and technology, UIST '10, pp.37-46, 2010.
DOI : 10.1145/1866029.1866038

V. I. Pavlovic, R. Sharma, and T. S. Huang, Visual interpretation of hand gestures for human-computer interaction: a review, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.19, issue.7, pp.677-695, 1997.
DOI : 10.1109/34.598226

D. Pirrò and G. Eckel, Physical modelling enabling enaction: an example, Proceedings of the International Conference on New Interfaces for Musical Expression, pp.461-464, 2011.

L. R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proceedings of the IEEE, vol.77, issue.2, pp.257-286, 1989.
DOI : 10.1109/5.18626

A. Rahimi, An Erratum for "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", Tech. Rep., 2000.

R. Rao, T. Chen, and R. Mersereau, Audio-to-visual conversion for multimedia communication, IEEE Transactions on Industrial Electronics, vol.45, issue.1, pp.15-22, 1998.
DOI : 10.1109/41.661300

N. Rasamimanana and J. Bloit, Reconnaissance de timbres en temps-réel et applications [Real-time timbre recognition and applications], Private Communication, 2011.

N. Rasamimanana, F. Bevilacqua, N. Schnell, E. Fléty, and B. Zamborlin, Modular Musical Objects Towards Embodied Control of Digital Music, Proceedings of the fifth international conference on Tangible, embedded, and embodied interaction, TEI '11, pp.9-12, 2011.

T. Ravet, J. Tilmanne, and N. d'Alessandro, Hidden Markov Model Based Real-Time Motion Recognition and Following, Proceedings of the 2014 International Workshop on Movement and Computing, MOCO '14, pp.82-87, 2014.
DOI : 10.1145/2617995.2618010

K. Richmond, A Trajectory Mixture Density Network for the Acoustic- Articulatory Inversion Mapping, Proceedings of Interspeech, pp.577-580, 2006.

J. V. Robertson, T. Hoellinger, P. V. Lindberg, D. Bensmail, S. Hanneton et al., Effect of auditory feedback differs according to side of hemiparesis: a comparative pilot study, Journal of NeuroEngineering and Rehabilitation, vol.6, issue.1, p.45, 2009.
DOI : 10.1186/1743-0003-6-45

URL : https://hal.archives-ouvertes.fr/hal-00526959

D. Rocchesso, G. Lemaitre, P. Susini, S. Ternström, and P. Boussard, Sketching sound with voice and gesture, interactions, vol.22, issue.1, pp.38-41, 2015.
DOI : 10.1145/2685501

URL : https://hal.archives-ouvertes.fr/hal-01260448

X. Rodet and P. Depalle, Spectral envelopes and inverse FFT synthesis, 1992.

J. Rovan, M. M. Wanderley, S. Dubnov, and P. Depalle, Instrumental Gestural Mapping Strategies as Expressivity Determinants in Computer Music Performance, Proceedings of the AIMI International Workshop, pp.68-73, 1997.
URL : https://hal.archives-ouvertes.fr/hal-01105514

M. J. Russell, A segmental HMM for speech pattern modelling, IEEE International Conference on Acoustics Speech and Signal Processing, pp.499-502, 1993.
DOI : 10.1109/ICASSP.1993.319351

T. Sanger, Bayesian Filtering of Myoelectric Signals, Journal of Neurophysiology, vol.97, issue.2, pp.1839-1845, 2007.
DOI : 10.1152/jn.00936.2006

M. Savary, D. Schwarz, D. Pellerin, F. Massin, C. Jacquemin et al., Dirty tangible interfaces, CHI '13 Extended Abstracts on Human Factors in Computing Systems, CHI EA '13, p.2991, 2013.
DOI : 10.1145/2468356.2479592

URL : https://hal.archives-ouvertes.fr/hal-01161444

H. Sawada and S. Hashimoto, Gesture recognition using an acceleration sensor and its application to musical performance control, Electronics and Communications in Japan (Part III: Fundamental Electronic Science), vol.80, issue.5, pp.1520-6440, 1997.
DOI : 10.1002/(SICI)1520-6440(199705)80:5<9::AID-ECJC2>3.0.CO;2-J

S. Schaal, Is imitation learning the route to humanoid robots?, Trends in Cognitive Sciences, vol.3, issue.6, pp.233-242, 1999.
DOI : 10.1016/S1364-6613(99)01327-3

S. Schaal, S. Kotosaka, and D. Sternad, Nonlinear dynamical systems as movement primitives, IEEE International Conference on Humanoid Robotics, pp.1-11, 2000.

S. Schaal, A. Ijspeert, and A. Billard, Computational approaches to motor learning by imitation, Philosophical Transactions of the Royal Society B: Biological Sciences, vol.358, issue.1431, pp.537-584, 2003.
DOI : 10.1098/rstb.2002.1258

T. Schiphorst, soft(n), Proceedings of the 27th international conference extended abstracts on Human factors in computing systems, CHI EA '09, pp.2427-2438, 2009.
DOI : 10.1145/1520340.1520345

N. Schnell, A. Röbel, D. Schwarz, G. Peeters, and R. Borghesi, MuBu & Friends: Assembling Tools for Content-Based Real-Time Interactive Audio Processing in Max/MSP, Proceedings of the International Computer Music Conference, p.80, 2009.

D. Schwarz, Data-driven concatenative sound synthesis, PhD Dissertation, 2004.
URL : https://hal.archives-ouvertes.fr/hal-01161127

D. Schwarz and B. Caramiaux, Interactive Sound Texture Synthesis Through Semi-Automatic User Annotations, Sound, Music, and Motion, pp.372-392, 2014.
DOI : 10.1007/978-3-319-12976-1_23

URL : https://hal.archives-ouvertes.fr/hal-01161076

D. Schwarz, G. Beller, and B. Verbrugghe, Real-time corpus-based concatenative synthesis with CataRT, Proc. of the Int. Conf. on Digital Audio Effects (DAFx), pp.1-7, 2006.
URL : https://hal.archives-ouvertes.fr/hal-01161358

S. Sentürk, S. Lee, A. Sastry, A. Daruwalla, and G. Weinberg, Crossole: A Gestural Interface for Composition, Improvisation and Performance using Kinect, Proceedings of the International Conference on New Interfaces for Musical Expression, 2012.

S. Serafin, M. Burtner, C. Nichols, and S. O'Modhrain, Expressive controllers for bowed string physical models, Proceedings of the COST G-6 Conference on Digital Audio Effects, pp.6-9, 2001.

X. Serra, A System for Sound Analysis/Transformation/Synthesis based on a Deterministic plus Stochastic Decomposition, PhD Dissertation, p.230, 1989.

B. Settles, Active learning literature survey, University of Wisconsin, 2009.

R. Sigrist, G. Rauter, R. Riener, and P. Wolf, Augmented visual, auditory, haptic, and multimodal feedback in motor learning: A review, Psychonomic Bulletin & Review, vol.11, issue.3, pp.21-53, 2013.
DOI : 10.3758/s13423-012-0333-8

R. Sramek, The on-line Viterbi algorithm, 2007.

D. Stowell, Making music through real-time voice timbre analysis: machine learning and timbral control, 2010.

H. G. Sung, Gaussian Mixture Regression and Classification, PhD Dissertation, p.104, 2004.

K. Tahiroglu and T. Ahmaniemi, Vocal sketching, International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction on, ICMI-MLMI '10, pp.1-4, 2010.
DOI : 10.1145/1891903.1891956

J. Talbot, B. Lee, A. Kapoor, and D. S. Tan, EnsembleMatrix, Proceedings of the 27th international conference on Human factors in computing systems, CHI 09, pp.1283-1292, 2009.
DOI : 10.1145/1518701.1518895

P. Taylor, Text-to-Speech Synthesis, 2009.
DOI : 10.1017/CBO9780511816338

J. Tilmanne, Data-driven Stylistic Humanlike Walk Synthesis, 2013.

J. Tilmanne, A. Moinet, and T. Dutoit, Stylistic gait synthesis based on hidden Markov models, EURASIP Journal on Advances in Signal Processing, vol.2012, issue.1, p.72, 2012.

T. Toda and K. Tokuda, A Speech Parameter Generation Algorithm Considering Global Variance for HMM-Based Speech Synthesis, IEICE Transactions on Information and Systems, vol.90, issue.5, pp.816-824, 2007.
DOI : 10.1093/ietisy/e90-d.5.816

T. Toda, A. W. Black, and K. Tokuda, Acoustic-to-articulatory inversion mapping with Gaussian mixture model, INTERSPEECH, p.26, 2004.

K. Tokuda, T. Yoshimura, T. Masuko, T. Kobayashi, and T. Kitamura, Speech parameter generation algorithms for HMM-based speech synthesis, 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.00CH37100), pp.1315-1318, 2000.
DOI : 10.1109/ICASSP.2000.861820

K. Tokuda, Y. Nankaku, T. Toda, H. Zen, J. Yamagishi et al., Speech Synthesis Based on Hidden Markov Models, Proceedings of the IEEE, pp.1234-1252, 2013.
DOI : 10.1109/JPROC.2013.2251852

S. Trail, M. Dean, T. Tavares, and G. Odowichuk, Non-invasive sensing and gesture control for pitched percussion hyper-instruments using the Kinect, Proceedings of the International Conference on New Interfaces for Musical Expression, 2012.

P. A. Tremblay and D. Schwarz, Surfing the Waves: Live Audio Mosaicing of an Electric Bass Performance as a Corpus Browsing Interface, Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), pp.15-18, 2010.
URL : https://hal.archives-ouvertes.fr/hal-01161254

D. Van Nort, M. M. Wanderley, and P. Depalle, On the choice of mappings based on geometric properties, Proceedings of International Conference on New Interfaces for Musical Expression, pp.87-91, 2004.

D. Van Nort and M. M. Wanderley, The LoM Mapping Toolbox for Max/MSP/Jitter, Proc. of the 2006 International Computer Music Conference (ICMC), pp.397-400, 2006.

D. Van Nort, M. M. Wanderley, and P. Depalle, Mapping Control Structures for Sound Synthesis: Functional and Topological Perspectives, Computer Music Journal, vol.38, issue.3, 2014.
DOI : 10.1162/014892602320582945

F. Vass-Rhee, Dancing music: The intermodality of The Forsythe Company, in William Forsythe and the Practice of Choreography, S. Spier, Ed., pp.73-89, 2010.

V. Verfaille, M. M. Wanderley, and P. Depalle, Mapping strategies for gestural and adaptive control of digital audio effects, Journal of New Music Research, vol.92, issue.1, pp.71-93, 2006.
DOI : 10.1109/TSA.2005.858531

M. M. Wanderley, Gestural control of music, International Workshop on Human Supervision and Control in Engineering and Music, p.232, 2001.

D. Wessel, Instruments That Learn, Refined Controllers, and Source Model Loudspeakers, Computer Music Journal, vol.15, issue.4, pp.82-86, 1991.
DOI : 10.2307/3681079

G. Widmer, S. Dixon, W. Goebl, E. Pampalk, and A. Tobudic, In Search of the Horowitz Factor, AI Magazine, pp.111-130, 2003.

J. O. Wobbrock, M. R. Morris, and A. D. Wilson, User-defined gestures for surface computing, Proceedings of the 27th international conference on Human factors in computing systems, CHI 09, pp.1083-1092, 2009.
DOI : 10.1145/1518701.1518866

Y. Wu and R. Wang, Minimum Generation Error Training for HMM-Based Speech Synthesis, IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, pp.89-92, 2006.

J. Yamagishi, T. Kobayashi, Y. Nakano, K. Ogata, and J. Isogai, Analysis of Speaker Adaptation Algorithms for HMM-Based Speech Synthesis and a Constrained SMAPLR Adaptation Algorithm, IEEE Transactions on Audio, Speech, and Language Processing, vol.17, issue.1, pp.66-83, 2009.
DOI : 10.1109/TASL.2008.2006647

E. Yamamoto, S. Nakamura, and K. Shikano, Speech-to-lip movement synthesis based on the EM algorithm using audio-visual HMMs, Multimedia Signal Processing, pp.2-5, 1998.

H. Yang, A. Park, and S. Lee, Gesture Spotting and Recognition for Human–Robot Interaction, IEEE Transactions on Robotics, vol.23, issue.2, pp.256-270, 2007.
DOI : 10.1109/TRO.2006.889491

S. Z. Yu, Hidden semi-Markov models, Artificial Intelligence, vol.174, issue.2, pp.215-243, 2010.
DOI : 10.1016/j.artint.2009.11.011

B. Zamborlin, Studies on customisation-driven digital music instruments, PhD Dissertation, 2015.

B. Zamborlin, F. Bevilacqua, M. Gillies, and M. D. Inverno, Fluid gesture interaction design, ACM Transactions on Interactive Intelligent Systems, vol.3, issue.4, p.22, 2014.
DOI : 10.1145/2543921

H. Zen, K. Tokuda, and T. Kitamura, Reformulating the HMM as a trajectory model by imposing explicit relationships between static and dynamic feature vector sequences, Computer Speech & Language, vol.21, issue.1, pp.153-173, 2007.
DOI : 10.1016/j.csl.2006.01.002

H. Zen, Y. Nankaku, and K. Tokuda, Continuous Stochastic Feature Mapping Based on Trajectory HMMs, IEEE Transactions on Audio, Speech, and Language Processing, vol.19, issue.2, pp.417-430, 2011.
DOI : 10.1109/TASL.2010.2049685

H. Zen, A. Senior, and M. Schuster, Statistical parametric speech synthesis using deep neural networks, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp.7962-7966, 2013.

L. Zhang and S. Renals, Acoustic-Articulatory Modeling With the Trajectory HMM, IEEE Signal Processing Letters, vol.15, issue.26, pp.245-248, 2008.
DOI : 10.1109/LSP.2008.917004

A. Zils and F. Pachet, Musical mosaicing, Digital Audio Effects (DAFx), 2001.