Kosara notably showed that blur is a preattentive visual parameter and can be used as a feature to guide attention toward a sharp target among blurred distractors. After defining audio and audiovisual blur, a series of three experiments was set up to evaluate: a) the influence of audio and audiovisual blur on a search task, in particular the possible attentional capture induced by these parameters; and b) the respective role of each of the two modalities and of each of the two blurs in a multimodal search task. We describe the overall methodology used in this series of experiments, the corpus of stimuli created for the study, and the three experiments, respectively visual, auditory and audiovisual. This study was presented in English at the International Multisensory Research Forum (Oxford, June 2012) under the title "Redundancy Gains in Audiovisual Search" and was published in French at the Ergo'IHM 2012 conference (Biarritz, October 2012): "Guidage attentionnel à base de flou audiovisuel pour la conception d'interfaces multimodales". In "Cueing Multimedia Search with Audio-Visual Blur", starting from the observation that certain parameters increase the perceptual salience of an auditorily or visually perceived object, we propose extending visual blur to the audio and audiovisual domains. These two publications mainly address the question of the redundancy of the distortion: is it necessary to blur each of the two modalities coherently, or, as was the case for an audiovisual magnifying lens, is applying the effect to only one of the two modalities enough to improve audiovisual perception in multimodal search tasks? The question of the contribution of multimodality has, for its part, given rise to a submission to the journal ACM Transactions on Applied Perception.
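
To make the visual side of this guidance concrete, here is a minimal sketch, not the thesis's implementation, of a semantic-depth-of-field style rendering: every thumbnail except the target is Gaussian-blurred so that the sharp item pops out preattentively. The function name and the blur radius are illustrative assumptions.

    from PIL import ImageFilter

    def semantic_depth_of_field(thumbnails, target_index, radius=4):
        """Blur every thumbnail except the target (hypothetical helper).

        thumbnails   -- list of PIL.Image objects shown in the search display
        target_index -- index of the item that should stay sharp
        radius       -- Gaussian blur radius applied to the distractors
        """
        rendered = []
        for i, img in enumerate(thumbnails):
            if i == target_index:
                rendered.append(img)  # target stays sharp and can pop out
            else:
                rendered.append(img.filter(ImageFilter.GaussianBlur(radius)))
        return rendered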

Guidance analogies. There are several kinds of visual blur, notably static blur and motion blur. Rarely used as such in the audio domain, the term blur most often denotes a lack of precision that reduces speech intelligibility or hinders the identification of sounds. It can also be used for artistic purposes, in musical composition in particular (1967). A low-pass filtering sketch of this audio analogue is given below.
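
By analogy, one plausible way to realise an "audio blur" (an assumption about the general idea, not the exact definition used in the thesis) is to low-pass filter the signal so that it loses spectral definition, much as defocus removes visual detail. The cutoff frequency and filter order below are illustrative values.

    from scipy.signal import butter, lfilter

    def audio_blur(signal, sample_rate, cutoff_hz=1500.0, order=4):
        """Attenuate high frequencies so the sound loses definition.

        signal      -- 1-D array of audio samples
        sample_rate -- sampling rate in Hz
        cutoff_hz   -- low-pass cutoff; lower values give a 'blurrier' sound
        """
        nyquist = 0.5 * sample_rate
        b, a = butter(order, cutoff_hz / nyquist, btype="low")
        return lfilter(b, a, signal)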

Numerous studies have, moreover, looked into this question, whether at the level of input devices (keyboard, mouse, 3D mouse, gloves, visual tracking, touch surfaces, etc.) or of navigation and interaction methods (text entry, click, double-click, drag-and-drop, pointing difficulties, bimanual interaction, natural communication) (Gutwin, 2002).

Finally, one more research field could be taken into account to optimize data presentation techniques, namely information retrieval. Earlier work had combined audio presentation strategies (SonicBrowser) with an automatic analysis and indexing system (MARSYAS) in the study …

In the same way, taking into account the criteria on which video analyses are based [Huurnink et al. 2012], and considering that the documents of the multimedia collection to be processed are organized, we could propose a better-suited layout of the audiovisual documents, together with better-adapted distortion-based tools. Including the possibility of query-based search (keywords or similarity) would make it possible to provide and study a complete system in which the different components of the interface, of the …

M. Apperley and R. Spence, A bifocal display technique for data presentation, Eurographics 1982, pp.27-43, 1982.

P. Arnold and F. Hill, Bisensory augmentation: A speechreading advantage when speech is clearly audible and intact, British Journal of Psychology, vol.92, issue.2, pp.339-355, 2001.
DOI : 10.1348/000712601162220

B. Arons, A review of the cocktail party effect, Journal of the American Voice I/O Society, vol.12, pp.35-50, 1992.

J. A. Ballas, Common factors in the identification of an assortment of brief everyday sounds., Journal of Experimental Psychology: Human Perception and Performance, vol.19, issue.2, pp.250-267, 1993.
DOI : 10.1037/0096-1523.19.2.250

J. A. Ballas and R. T. Mullins, Effects of context on the identification of everyday sounds. Human Performance, pp.199-219, 1991.

D. R. Begault, 3D Sound for Virtual Reality and Multimedia, p.46, 1994.

A. J. Berkhout, D. De-vries, and P. Vogel, Acoustic control by wave field synthesis, The Journal of the Acoustical Society of America, vol.93, issue.5, pp.2764-2778, 1993.
DOI : 10.1121/1.405852

P. Bertelson and M. Radeau, Erratum to: Cross-modal bias and perceptual fusion with auditory-visual spatial discordance, Perception & Psychophysics, vol.30, issue.3, pp.578-584, 1981.
DOI : 10.3758/BF03214277

V. Best, E. J. Ozmeral, and B. G. Shinn-Cunningham, Visually-guided Attention Enhances Target Identification in a Complex Auditory Scene, Journal of the Association for Research in Otolaryngology, vol.28, issue.2, pp.294-304, 2007.
DOI : 10.1007/s10162-007-0073-z

R. C. Bilger, J. M. Nuetzel, W. M. Rabinowitz, and C. Rzeczkowski, Standardization of a Test of Speech Perception in Noise, Journal of Speech Language and Hearing Research, vol.27, issue.1, pp.32-48
DOI : 10.1044/jshr.2701.32

J. Blauert, Spatial Hearing. The Psychophysics of Human Sound Localization, 1997.

P. Boersma, Praat, a system for doing phonetics by computer, Glot International, vol.510, issue.9, pp.341-345, 2001.

R. S. Bolia, W. R. Angelo, and R. L. Mckinley, Aurally Aided Visual Search in Three-Dimensional Space, Human Factors: The Journal of the Human Factors and Ergonomics Society, vol.41, issue.4, pp.664-669, 1999.
DOI : 10.1518/001872099779656789

N. Bolognini, F. Frassinetti, A. Serino, and E. Làdavas, "Acoustical vision" of below threshold stimuli: interaction among spatially converging audiovisual inputs, Experimental Brain Research, vol.420, issue.3, pp.273-282, 2005.
DOI : 10.1007/s00221-004-2005-z

T. Bouchara, Le "scenemodeler" : des outils pour la modélisation de contenus multimédias interactifs spatialisés, GMEA-AFIM, éditeur : 13èmes Journées d'Informatique Musicale (JIM'08), 2008.

T. Bouchara, B. L. Giordano, I. Frissen, B. F. Katz, C. Guastavino, et al., Effect of signal-to-noise ratio and visual context on environmental sound identification, 128th convention of the Audio Engineering Society (AES128th), 2010.

T. Bouchara, C. Guastavino, B. Katz, and C. Et-jacquemin, Audio-visual renderings for multimedia navigation, 16th International Conference on Auditory Display (ICAD- 2010), pp.245-252, 2010.

T. Bouchara and B. F. Katz, Redundancy gains in audio-visual search, Seeing and Perceiving, vol.25, issue.0, pp.181-181, 2012.
DOI : 10.1163/187847612X648116

T. Bouchara, B. F. Katz, and C. Jacquemin, Guidage attentionnel à base de flou audiovisuel pour la conception d'interfaces multimodales, Conférence francophone sur l'interaction homme-machine et l'ergonomie (Ergo'IHM 2012), 2012.

S. Boyne, N. Pavlovic, R. Kilgore, and M. Chignell, Auditory and visual facilitation: Cross-modal fusion of information in multi-modal displays, Visualisation and the Common Operational Picture, Meeting Proceedings RTO-MP-IST-043, paper 19, pp.19-20, 2005.

J. Bradley, R. Reich, and S. Norcross, On the combined effects of signal-to-noise ratio and room acoustics on speech intelligibility, The Journal of the Acoustical Society of America, vol.106, issue.4, pp.1820-1828, 1999.
DOI : 10.1121/1.427932

E. Brazil, Investigation of multiple visualisation techniques and dynamic queries in conjunction with direct sonification to support the browsing of audio resources, 2003.

E. Brazil, M. Fernstroem, G. Tzanetakis, and P. Cook, Enhancing sonic browsing using audio information retrieval, International Conference on Auditory Display (ICAD 2002), Advanced Telecommunications Research Institute (ATR), 2002.

E. Brazil and M. Fernström, The Sonification Handbook, Chapitre 13: Auditory Icons, Logos Verlag, pp.325-338, 2011.

E. Brazil and M. Fernström, The Sonification Handbook, Logos Verlag, pp.339-361, 2011.

A. S. Bregman, Auditory scene analysis. The perceptual organisation of sound, pp.64-129, 1990.

S. A. Brewster, Providing a Structured Method for Integrating Non-Speech Audio into Human-Computer Interfaces, Thèse de doctorat, 1994.

A. W. Bronkhorst, The cocktail party phenomenon : A review on speech intelligibility in multiple-talker conditions, Acta Acustica united with Acustica, vol.86, issue.124, pp.117-128, 2000.

D. Brungart and B. Simpson, Improving multitalker speech communication with advanced audio displays. In New Directions for Improving Audio Effectiveness, Meeting RTO-MP-HFM-123, paper 30, pp.30-31, 2005.

D. Burr and D. Alais, Combining visual and auditory information. Visual Perception, Pt 2: Fundamentals Of Awareness: Multi-Sensory Integration And High-Order Perception, pp.243-258, 2006.
DOI : 10.1016/s0079-6123(06)55014-9

G. Calvert, C. Spence, and B. Stein, The Handbook of Multisensory Processes, 2004.

A. Caramazza and J. R. Et-shelton, Domain-Specific Knowledge Systems in the Brain: The Animate-Inanimate Distinction, Journal of Cognitive Neuroscience, vol.54, issue.1, pp.1-34, 1998.
DOI : 10.1016/S0010-9452(73)80020-6

S. Carlile and D. Schonstein, Frequency bandwidth and multi-talker environments, 120th Convention of the Audio Engineering Society, 2006.

R. Carlyon, R. Cusack, J. Foxton, and I. Robertson, Effects of attention and unilateral neglect on auditory stream segregation, Journal of Experimental Psychology: Human Perception and Performance, vol.27, issue.1, pp.115-127, 2001.
DOI : 10.1037/0096-1523.27.1.115

M. Carpendale and C. Montagnese, A framework for unifying presentation space, Proceedings of the 14th annual ACM symposium on User interface software and technology, UIST '01, pp.61-70, 2001.
DOI : 10.1145/502348.502358

S. Carpendale, A theory of elastic presentation space. Cours accessible à l'adresse innovis.cpsc.ucalgary.ca, pp.5-7, 2008.

G. Chareyron, Tatouage d'images : une approche couleur, Thèse de doctorat, 2005.

Y. Chen and C. Spence, Crossmodal facilitation of visual target identification at the level of object representation by the presentation of a concomitant sound, European Conference on Visual Processing, 2009.

Y. Chen and C. Spence, When hearing the bark helps to identify the dog: Semantically-congruent sounds modulate the identification of masked pictures, Cognition, vol.114, issue.3, pp.389-404, 2010.
DOI : 10.1016/j.cognition.2009.10.012

E. C. Cherry, Some Experiments on the Recognition of Speech, with One and with Two Ears, The Journal of the Acoustical Society of America, vol.25, issue.5, pp.975-979, 1953.
DOI : 10.1121/1.1907229

M. Chion, L'Audio-Vision, Nathan, 1990.

O. Christmann, Navigation dans de grands ensembles non structurés de documents visuels, Thèse de doctorat, 2008.

A. Cockburn, A. Karlson, and B. B. Bederson, A review of overview+detail, zooming, and focus+context interfaces, ACM Computing Surveys, vol.41, issue.1, pp.12-13, 2008.
DOI : 10.1145/1456650.1456652

L. Couvreur, F. Bettens, T. Drugman, C. Frisson, M. Jottrand et al., Audio skimming. QPSR of the numediart research program, 2008.

J. F. Culling, K. I. Hodder, and C. Y. Toh, Effects of reverberation on perceptual segregation of competing voices, The Journal of the Acoustical Society of America, vol.114, issue.5, pp.2871-2876, 2003.
DOI : 10.1121/1.1616922

R. Cusack and R. P. Carlyon, Perceptual asymmetries in audition, Journal of Experimental Psychology: Human Perception and Performance, vol.29, issue.3, pp.713-725, 2003.
DOI : 10.1037/0096-1523.29.3.713

R. Cusack, J. Deeks, G. Aikman, and R. Carlyon, Effects of Location, Frequency Region, and Time Course of Selective Attention on Auditory Scene Analysis, Journal of Experimental Psychology: Human Perception and Performance, vol.30, issue.4, pp.643-656, 2004.
DOI : 10.1037/0096-1523.30.4.643

J. Daniel, Représentation de champs acoustiques, application à la transmission et à la reproduction de scènes sonores complexes dans un contexte multimédia, Thèse de doctorat, 2000.

O. De Bruijn and R. Spence, Rapid serial visual presentation, Proceedings of the working conference on Advanced visual interfaces, AVI '00, 2000.
DOI : 10.1145/345513.345309

M. C. Doyle and R. J. Snowden, Identification of visual stimuli is improved by accompanying auditory stimuli: The role of eye movements and sound location, Perception, vol.30, issue.7, pp.795-810, 2001.
DOI : 10.1068/p3126

J. Duncan and G. Humphreys, Visual search and stimulus similarity, Psychological Review, vol.96, issue.3, pp.433-458, 1989.
DOI : 10.1037/0033-295X.96.3.433

R. Eramudugolla, K. I. McAnally, R. L. Martin, D. R. Irvine, and J. B. Mattingley, The role of spatial location in auditory search, Hearing Research, vol.238, issue.1-2, pp.139-146, 2008.
DOI : 10.1016/j.heares.2007.10.004

M. O. Ernst and H. H. Bülthoff, Merging the senses into a robust percept, Trends in Cognitive Sciences, vol.8, issue.4, pp.162-169, 2004.
DOI : 10.1016/j.tics.2004.02.002

M. Fernström and E. Brazil, Sonic browsing: an auditory tool for multimedia asset management, International Conference on Auditory Display, pp.132-135, 2001.

J. R. Frederiksen, Cognitive factors in the recognition of ambiguous auditory and visual stimuli., Journal of Personality and Social Psychology, vol.7, issue.1, Pt.2, pp.1-17, 1967.
DOI : 10.1037/h0024887

R. M. French and D. Mareschal, Could category-specific semantic deficits reflect differences in the distributions of features within a unified semantic memory?, 20th Annual Conference of the Cognitive Science Society, 1998.

G. W. Furnas, The fisheye view: a new look at structured files. Rapport technique, Bell Laboratories, 1981.

G. W. Furnas, Generalized fisheye views, Conference on Human Factors in Computing Systems CHI'86, pp.18-23, 1986.
DOI : 10.1145/22339.22342

G. W. Furnas and B. B. Et-bederson, Space-scale diagrams, Proceedings of the SIGCHI conference on Human factors in computing systems, CHI '95, pp.234-241, 1995.
DOI : 10.1145/223904.223934

W. W. Gaver, The SonicFinder: An Interface That Uses Auditory Icons, Human-Computer Interaction, vol.79, issue.4, pp.67-94, 1989.
DOI : 10.1207/s15327051hci0401_3

W. W. Gaver, What in the World Do We Hear?: An Ecological Approach to Auditory Event Perception, Ecological Psychology, vol.19, issue.1, pp.1-29, 1993.
DOI : 10.1037//0096-1523.10.5.704

Y. Gérard, Mémoire sémantique et sons de l'environnement, pp.97-116, 2004.

M. A. Gerzon, Ambisonics in multichannel broadcasting and video, Journal of the Audio Engineering Society, vol.33, issue.11, pp.859-871, 1985.

T. G. Ghirardelli and A. A. Scharine, Helmet-Mounted Displays: Sensation, Perception and Cognitive Issues, Chapitre 14: Auditory-Visual Interactions, pp.599-618, 2009.

M. Giard and F. Peronnet, Auditory-Visual Integration during Multimodal Object Recognition in Humans: A Behavioral and Electrophysiological Study, Journal of Cognitive Neuroscience, vol.76, issue.5, pp.473-490, 1999.
DOI : 10.1016/0013-4694(75)90073-5

B. Giordano, J. McDonnell, and S. McAdams, Hearing living symbols and non-living icons: category specificities in the cognitive processing of environmental sounds, Brain & Cognition, vol.91, issue.106, p.99, 2010.

B. Gooch, P. J. Sloan, A. Gooch, P. Shirley, and R. Et-riesenfeld, Interactive technical illustration, Proceedings of the 1999 symposium on Interactive 3D graphics , SI3D '99, pp.31-38, 1999.
DOI : 10.1145/300523.300526

M. Grabowecky, A. Sherman, and S. Suzuki, Natural scenes have matched amplitude-modulated sounds that systematically influence visual scanning, International Multisensory Research Forum, 2012.

K. W. Grant and P. Seitz, The use of visible speech cues for improving auditory detection of spoken sentences, The Journal of the Acoustical Society of America, vol.108, issue.3, pp.1197-1208, 2000.
DOI : 10.1121/1.1288668

A. Grubert, J. Krummenacher, and M. Eimer, Redundancy gains in pop-out visual search are determined by top-down task set: Behavioral and electrophysiological evidence, Journal of Vision, vol.11, issue.14, pp.1-10, 2011.
DOI : 10.1167/11.14.10

A. Guillaume, L. Pellieux, V. Chastres, C. Blancard, and C. Drake, How long does it take to identify everyday sounds, Tenth Meeting of the International Conference on Auditory Display, 2004.

C. Gutwin, Improving focus targeting in interactive fisheye views, Proceedings of the SIGCHI conference on Human factors in computing systems Changing our world, changing ourselves, CHI '02, pp.267-274, 2002.
DOI : 10.1145/503376.503424

B. Gygi, Factors in the identification of environmental sounds, Thèse de doctorat, 2001.

B. Gygi, G. R. Kidd, and C. Watson, Similarity and categorization of environmental sounds, Perception & Psychophysics, vol.47, issue.6, pp.839-855, 2007.
DOI : 10.3758/BF03193921

B. Gygi, G. R. Kidd, and C. S. Et-watson, Spectral-temporal factors in the identification of environmental sounds, The Journal of the Acoustical Society of America, vol.115, issue.3, pp.1252-1265, 2004.
DOI : 10.1121/1.1635840

B. Gygi and V. Shafiro, Effect of auditory context on the identification of environmental sounds, 19th International Congress on Acoustics, pp.96-98, 2007.

B. Gygi and V. Shafiro, Environmental sound research as it stands today, Proceedings of Meetings on Acoustics, pp.1-18, 2007.

B. Gygi and V. Shafiro, From Signal to Substance and Back: Insights from Environmental Sound Research to Auditory Display Design, 15th International Conference on Auditory Display, pp.1-12, 2009.
DOI : 10.1007/978-3-642-12439-6_16

B. Gygi and V. Shafiro, The incongruency advantage for environmental sounds presented in natural auditory scenes, Journal of Experimental Psychology: Human Perception and Performance, vol.37, issue.2, pp.551-565, 2011.
DOI : 10.1037/a0020671

L. Harrie, L. T. Sarjakoski, and L. M. Lehto, A variable-scale map for small-display cartography, Symposium on Geospatial Theory, Processing and Applications, 2002.

R. Litovsky and J. Culling, The benefit of binaural hearing in a cocktail party: Effect of location and type of interferer, pp.833-843.

C. G. Healey, K. S. Booth, and J. T. Enns, Visualizing real-time multivariate data using preattentive processing, ACM Transactions on Modeling and Computer Simulation, vol.5, issue.3, pp.190-221, 1995.
DOI : 10.1145/217853.217855

C. G. Healey and J. T. Enns, Attention and Visual Memory in Visualization and Computer Graphics, IEEE Transactions on Visualization and Computer Graphics, vol.18, issue.7, 2011.
DOI : 10.1109/TVCG.2011.127

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.221.3629

S. Heise, M. Hlatky, and J. Et-loviscach, Soundtorch : Quick browsing in large audio collections, 125th convention of the AES, pp.82-89, 2008.

T. Hermann, A. Hunt, and J. G. Neuhoff, The sonification handbook, 2011.

M. P. Hollier, A. N. Rimell, D. S. Hands, and R. M. Voelcker, Multi-modal perception, BT Technology Journal, vol.17, issue.1, pp.35-46, 1999.
DOI : 10.1023/A:1009666623193

P. Howard-Jones and S. Rosen, The perception of speech in fluctuating noise, Acustica, vol.78, pp.258-272, 1993.

B. Huurnink, C. G. Snoek, M. De-rijke, and A. W. Et-smeulders, Content-Based Analysis Improves Audiovisual Archive Retrieval, IEEE Transactions on Multimedia, vol.14, issue.4, pp.1166-1178, 2012.
DOI : 10.1109/TMM.2012.2193561

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.261.158

L. Iordanescu, M. Grabowecky, S. Franconeri, J. Theeuwes, and S. Suzuki, Characteristic sounds make you look at target objects more quickly. Attention, Perception, Psychophysics, vol.72, issue.7, pp.1736-1741, 2010.
DOI : 10.3758/app.72.7.1736

URL : http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3261720

L. Iordanescu, M. Grabowecky, and S. Suzuki, Object-based auditory facilitation of visual search for pictures and words with frequent and rare targets, Acta Psychologica, vol.137, issue.2, pp.252-259, 2011.
DOI : 10.1016/j.actpsy.2010.07.017

C. Jacquemin, Architecture and experiments in networked 3d audio/graphic rendering with virtual choreographer, Sound and Music Computing, 2004.

W. James, The Principles of Psychology, Chapitre, vol.11, pp.403-404, 1890.

B. Katz, E. Rio, and L. Piccinali, LIMSI Spatialisation Engine, p.31235, 2010.

T. A. Keahey and E. L. Robertson, Techniques for non-linear magnification transformations, Proceedings IEEE Symposium on Information Visualization '96, p.38, 1996.
DOI : 10.1109/INFVIS.1996.559214

P. Keller and C. Stevens, Meaning From Environmental Sounds: Types of Signal-Referent Relations and Their Effect on Recognizing Auditory Icons, Journal of Experimental Psychology: Applied, vol.10, issue.1, pp.3-12, 2004.
DOI : 10.1037/1076-898X.10.1.3

R. Kim, M. A. Peters, and L. Shams, 0 + 1 > 1: How adding noninformative sound improves performance on a visual task, Psychological Science, 2011.

N. Kitagawa and C. Spence, Audiotactile multisensory interactions in human information processing, Japanese Psychological Research, vol.118, issue.3, pp.158-173, 2006.
DOI : 10.1111/j.1468-5884.2006.00317.x

K. Knöferle and C. Spence, Product-related sounds speed visual search for products. Seeing and Perceiving, Abstracts of the 13th International Multisensory Research Forum (IMRF), pp.193-193, 2012.

M. Kobayashi and C. Schmandt, Dynamic Soundscape, Proceedings of the SIGCHI conference on Human factors in computing systems, CHI '97, pp.194-201, 1997.
DOI : 10.1145/258549.258702

I. Koch, V. Lawo, J. Fels, and M. Vorländer, Switching in the cocktail party: Exploring intentional control of auditory selective attention., Journal of Experimental Psychology: Human Perception and Performance, vol.37, issue.4, pp.1140-1147, 2011.
DOI : 10.1037/a0022189

R. Kosara, Semantic Depth of Field - Using Blur for Focus+Context Visualization, 2001.

R. Kosara, S. Miksch, and H. Hauser, Focus+context taken literally, IEEE Computer Graphics and Applications, 2002.
DOI : 10.1109/38.974515

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.4819

R. Kosara, S. Miksch, H. Hauser, J. Schrammel, V. Giller et al., Useful properties of semantic depth of field for better f+c visualization, Joint Eurographics - IEEE TCVG Symposium on Visualization (VisSym), pp.205-210, 2002.

J. Krummenacher, H. J. Müller, and D. Et-heller, Visual search for dimensionally redundant pop-out targets: Evidence for parallel-coactive processing of dimensions, Perception & Psychophysics, vol.370, issue.5, pp.907-917, 2001.
DOI : 10.3758/BF03194446

H. Kwak, D. Dagenbach, and H. Et-egeth, Further evidence for a time-independent shift of the focus of attention, Perception & Psychophysics, vol.43, issue.4, pp.473-480, 1991.
DOI : 10.3758/BF03212181

J. E. Kyprianidis, J. Collomosse, T. Wang, and T. Isenberg, State of the "Art": A taxonomy of artistic stylization techniques for images and video, IEEE Transactions on Visualization and Computer Graphics, p.99, 2012.

J. Lamping, R. Rao, and P. Pirolli, A focus+context technique based on hyperbolic geometry for visualizing large hierarchies, Proceedings of the SIGCHI conference on Human factors in computing systems, CHI '95, 1995.
DOI : 10.1145/223904.223956

P. J. Laurienti, R. A. Kraft, J. A. Maldjian, J. H. Burdette, and M. T. Wallace, Semantic congruence is a critical factor in multisensory behavioral performance, Experimental Brain Research, vol.158, issue.4, pp.405-414, 2004.
DOI : 10.1007/s00221-004-1913-2

K. R. Laws, Gender Affects Naming Latencies for Living and Nonliving Things: Implications for Familiarity, Cortex, vol.35, issue.5, pp.729-733, 1999.
DOI : 10.1016/S0010-9452(08)70831-1

URL : http://uhra.herts.ac.uk/bitstream/2299/1719/1/901904.pdf

E. Lecolinet and S. Pook, Interfaces zoomables et « control menus » : techniques focus+contexte pour la navigation interactive dans les bases de données, pp.191-210, 2002.

R. Leech, B. Gygi, J. Aydelott, and F. Dick, Informational factors in identifying environmental sounds in natural auditory scenes, The Journal of the Acoustical Society of America, vol.126, issue.6, pp.3147-3155, 2009.
DOI : 10.1121/1.3238160

Y. K. Leung and M. D. Et-apperley, A review and taxonomy of distortion-oriented presentation techniques, ACM Transactions on Computer-Human Interaction, vol.1, issue.2, pp.126-160, 1994.
DOI : 10.1145/180171.180173

J. Lewald and R. Et-guski, Cross-modal perceptual integration of spatially and temporally disparate auditory and visual stimuli, Cognitive Brain Research, vol.16, issue.3, pp.468-478, 2003.
DOI : 10.1016/S0926-6410(03)00074-0

M. S. Lewicki, Efficient coding of natural sounds, Nature Neuroscience, vol.5, issue.4, pp.356-363, 2002.
DOI : 10.1038/nn831

J. W. Lewis, J. A. Brefczynski, R. E. Phinney, J. J. Janik, and E. A. Et-deyoe, Distinct Cortical Pathways for Processing Tool versus Animal Sounds, Journal of Neuroscience, vol.25, issue.21, pp.5148-5158, 2005.
DOI : 10.1523/JNEUROSCI.0419-05.2005

X. Li, R. Logan, and R. Et-pastore, Perception of acoustic source characteristics: Walking sounds, The Journal of the Acoustical Society of America, vol.90, issue.6, pp.3036-3049, 1991.
DOI : 10.1121/1.401778

H. Lieberman, A multi-scale, multi-layer, translucent virtual space, Proceedings. 1997 IEEE Conference on Information Visualization (Cat. No.97TB100165), pp.124-131, 1997.
DOI : 10.1109/IV.1997.626499

T. Lokki and M. Et-gröhn, Navigation with Auditory Cues in a Virtual Environment, IEEE Multimedia, vol.12, issue.2, pp.80-86, 2005.
DOI : 10.1109/MMUL.2005.33

L. F. Ludwig, M. Cohen, and N. Et-pincever, Extending the notion of a window system to audio, Computer, vol.23, issue.8, pp.66-72, 1990.
DOI : 10.1109/2.56873

S. Lukas, A. M. Philipp, and I. Et-koch, Switching attention between modalities: further evidence for visual dominance, Psychological Research PRPF, vol.29, issue.3, pp.255-267, 2010.
DOI : 10.1007/s00426-009-0246-y

W. J. Ma, X. Zhou, L. A. Ross, J. J. Foxe, and L. C. Et-parra, Lip-Reading Aids Word Recognition Most in Moderate Noise: A Bayesian Explanation Using High-Dimensional Feature Space, PLoS ONE, vol.102, issue.3, pp.4638-95, 2009.
DOI : 10.1371/journal.pone.0004638.s006

M. M. Marcell, D. Borella, M. Greene, E. Kerr, and S. Et-rogers, Confrontation naming of environmental sounds, Journal of Clinical and Experimental Neuropsychology, vol.22, issue.94, pp.830-864, 2000.

D. Massaro and D. Stork, Speech Recognition and Sensory Integration, American Scientist, vol.86, issue.3, pp.236-244, 1998.
DOI : 10.1511/1998.25.861

S. Mcadams, Thinking in sound. The cognitive psychology of human audition. Chapitre 6. Recognition of sound sources and events, pp.146-198, 1993.

D. K. Mcgookin and S. A. Brewster, Dolphin : The design and initial evaluation of multimodal focus and context, International Conference on Auditory Display (ICAD'02), 2002.

D. K. Mcgookin and S. A. Brewster, Understanding concurrent earcons, ACM Transactions on Applied Perception, vol.1, issue.2, pp.120-155, 2004.
DOI : 10.1145/1024083.1024087

D. K. Mcgookin and S. A. Brewster, Advantages and issues with concurrent audio presentation as part of an auditory display, 12th International Conference on Auditory Display (ICAD'06), 2006.

H. McGurk and J. MacDonald, Hearing lips and seeing voices, Nature, vol.65, issue.5588, pp.746-748, 1976.
DOI : 10.1038/264746a0

P. Mermelstein, Distance measures for speech recognition, psychological and instrumental, Pattern Recognition and Artificial Intelligence, vol.116, pp.374-388, 1976.

J. Miller, Divided attention: Evidence for coactivation with redundant signals, Cognitive Psychology, vol.14, issue.2, pp.247-279, 1982.
DOI : 10.1016/0010-0285(82)90010-X

A. Mills, Foundations of modern auditory theory, Chapitre Auditory Localization, vol.2, pp.301-345, 1972.

N. Misdariis, A. Minard, P. Susini, G. Lemaitre, S. Mcadams et al., Environmental sound perception : metadescription and modeling based on independent primary studies, EURASIP J. Audio Speech Music Process, issue.6, pp.1-6, 2010.
URL : https://hal.archives-ouvertes.fr/hal-00560335

T. Moeck, N. Bonneel, N. Tsingos, G. Drettakis, I. Viaud-delmon et al., Progressive perceptual audio rendering of complex scenes, Proceedings of the 2007 symposium on Interactive 3D graphics and games , I3D '07, 2007.
DOI : 10.1145/1230100.1230133

URL : https://hal.archives-ouvertes.fr/inria-00606801

E. D. Mynatt, Designing with auditory icons, Conference companion on Human factors in computing systems , CHI '94, 1994.
DOI : 10.1145/259963.260483

R. Nicol, Binaural Technology, AES monograph, 2010.

M. Noisternig, T. Musil, A. Sontacchi, and R. Et-holdrich, 3D binaural sound reproduction using a virtual ambisonic approach, IEEE International Symposium on Virtual Environments, Human-Computer Interfaces and Measurement Systems, 2003. VECIMS '03. 2003, 2003.
DOI : 10.1109/VECIMS.2003.1227050

V. Occelli, C. Spence, and M. Et-zampini, Audiotactile interactions in temporal perception, Psychonomic Bulletin & Review, vol.73, issue.3, pp.429-454, 2011.
DOI : 10.3758/s13423-011-0070-4

O. Le Meur and T. Baccino, Methods for comparing scanpaths and saliency maps: strengths and weaknesses, Behavior Research Methods, 2012.

E. Ozcan and R. Van-egmond, The effect of visual context on the identification of ambiguous environmental sounds, Acta Psychologica, vol.131, issue.2, pp.110-119, 2009.
DOI : 10.1016/j.actpsy.2009.03.007

G. Parseihian and B. F. Katz, Morphocons: A new sonification concept based on morphological earcons, Journal of the Audio Engineering Society, vol.60, issue.6, pp.409-418, 2012.

G. Parseihian and B. F. Katz, Rapid head-related transfer function adaptation using a virtual auditory environment, The Journal of the Acoustical Society of America, vol.131, issue.4, pp.10-46, 2012.
DOI : 10.1121/1.3687448

H. E. Pashler, The Psychology of attention, 1998.

D. Perrott, T. Sadralodabai, K. Saberi, and T. Et-strybel, Aurally aided visual search in the central visual field : effects of visual load and visual enhancement of the target, Human Factors, vol.33, issue.4, pp.389-400, 1991.

E. Pietriga and C. Et-appert, Sigma lenses, Proceeding of the twenty-sixth annual CHI conference on Human factors in computing systems , CHI '08, 2008.
DOI : 10.1145/1357054.1357264

URL : https://hal.archives-ouvertes.fr/inria-00271301

E. Pietriga, C. Appert, and M. Et-beaudouin-lafon, Pointing and beyond, Proceedings of the SIGCHI conference on Human factors in computing systems , CHI '07, pp.1215-1224, 2007.
DOI : 10.1145/1240624.1240808

URL : https://hal.archives-ouvertes.fr/inria-00158867

C. Plaisant, D. Carr, and B. Shneiderman, Image browsers: Taxonomy, guidelines, and informal specifications, 1995.

W. Prinzmetal, C. Mccool, and S. Park, Attention: Reaction Time and Accuracy Reveal Different Mechanisms., Journal of Experimental Psychology: General, vol.134, issue.1, pp.73-92, 2005.
DOI : 10.1037/0096-3445.134.1.73

V. Pulkki, Virtual sound source positioning using vector base amplitude panning, Journal of the Audio Engineering Society, vol.45, issue.6, pp.456-466, 1997.

M. Rébillat, Vibrations de plaques multi-excitateurs de grandes dimensions pour la création d'environnements virtuels audio-visuels. Approches acoustique, mécanique et perceptive, Thèse de doctorat, pp.42-51, 2011.

T. Riede, H. Herzel, K. Hammerschmidt, L. Brunnberg, and G. Tembrock, The harmonic-to-noise ratio applied to dog barks, Journal of the Acoustical Society of America, vol.110, issue.4, 2001.

C. W. Robinson and V. M. Et-sloutsky, The effect of stimulus familiarity on modality dominance, 26th Annual Meeting of the Cognitive Science Society, 2004.

C. W. Robinson and V. M. Et-sloutsky, Effects of multimodal presentation and stimulus familiarity on auditory and visual processing, Journal of Experimental Child Psychology, vol.107, issue.3, pp.351-358, 2010.
DOI : 10.1016/j.jecp.2010.04.006

R. Rosenbaum and H. Et-schumann, Resource-saving image browsing based on JPEG2000, blurring, and progression, Multimedia on Mobile Devices 2009, pp.17-34, 2009.
DOI : 10.1117/12.805473

L. A. Ross, D. Saint-amour, V. M. Leavitt, D. C. Javitt, and J. J. Et-foxe, Do You See What I Am Saying? Exploring Visual Enhancement of Speech Comprehension in Noisy Environments, Cerebral Cortex, vol.17, issue.5, pp.1147-1153, 2007.
DOI : 10.1093/cercor/bhl024

H. Sakoe and S. Chiba, Dynamic programming algorithm optimization for spoken word recognition, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol.26, issue.1, pp.43-49, 1978.
DOI : 10.1109/TASSP.1978.1163055

M. Sarkar and M. H. Brown, Graphical fisheye views, Communications of the ACM, vol.37, issue.12, pp.73-83, 1994.
DOI : 10.1145/198366.198384

C. Schmandt, Audio hallway, Proceedings of the 11th annual ACM symposium on User interface software and technology , UIST '98, pp.163-170, 1998.
DOI : 10.1145/288392.288597

C. Schmandt and A. Et-mullins, AudioStreamer, Conference companion on Human factors in computing systems , CHI '95, pp.218-219, 1995.
DOI : 10.1145/223355.223533

T. R. Schneider, A. K. Engel, and S. Et-debener, Multisensory Identification of Natural Objects in a Two-Way Crossmodal Priming Paradigm, Experimental Psychology, vol.55, issue.2, pp.121-132, 2008.
DOI : 10.1027/1618-3169.55.2.121

J. Schrammel, V. Giller, M. Tscheligi, R. Kosara, H. Hauser et al., Experimental evaluation of semantic depth of field, a preattentive method for focus+context visualization, Human-Computer Interaction ? INTERACT'03, pp.888-891, 2003.

E. Schröger and A. Et-widmann, Speeded responses to audiovisual signal changes result from bimodal integration, Psychophysiology, vol.35, issue.6, pp.755-759, 1998.
DOI : 10.1111/1469-8986.3560755

R. Sekuler, A. B. Sekuler, and R. Lau, Sound alters visual motion perception, Nature, vol.385, issue.6614, p.385, 1997.
DOI : 10.1038/385308a0

M. Serrano, Interaction Multimodale en Entrée : Conception et Prototypage, Thèse de doctorat, 2010.

V. Shafiro, Development of a Large-Item Environmental Sound Test and the Effects of Short-Term Training with Spectrally-Degraded Stimuli, Ear and Hearing, vol.29, issue.5, pp.775-790, 2008.
DOI : 10.1097/AUD.0b013e31817e08ea

V. Shafiro, Identification of Environmental Sounds With Varying Spectral Resolution, Ear and Hearing, vol.29, issue.3, pp.401-420, 2008.
DOI : 10.1097/AUD.0b013e31816a0cf1

L. Shams, Y. Kamitani, and S. Shimojo, Visual illusion induced by sound, Cognitive Brain Research, vol.14, issue.1, pp.14-49, 2002.
DOI : 10.1016/S0926-6410(02)00069-1

L. Shams and R. Kim, Crossmodal influences on visual perception, Physics of Life Reviews, vol.7, issue.3, pp.269-284, 2010.
DOI : 10.1016/j.plrev.2010.04.006

J. Shen and E. M. Reingold, Visual search asymmetry: The influence of stimulus familiarity and low-level features. Perception and Psychophysics, pp.463-475, 2001.

B. Shinn-cunningham, Speech intelligibility, spatial unmasking, and realism in reverberant spatial auditory displays, International Conference on Auditory Display, 2002.

B. Simpson, N. Iyer, and D. S. Et-brungart, Aurally aided visual search with multiple audio cues, International Conference on Auditory Displays (ICAD'10), pp.51-54, 2010.

R. Soto, M. López, D. D. Diego, and G. Manuel, Absolute threshold of coherence of position perception between auditory and visual sources for dialog, 125th Convention of the AES, 2008.

C. Spence, Audiovisual multisensory integration, Acoustical Science and Technology, vol.28, issue.2, pp.61-70, 2007.
DOI : 10.1250/ast.28.61

C. Spence, Crossmodal Correspondences, i-Perception, vol.2, issue.8, pp.971-995, 2011.
DOI : 10.1068/ic887

C. Spence and J. Driver, Covert spatial orienting in audition: Exogenous and endogenous mechanisms, Journal of Experimental Psychology: Human Perception and Performance, vol.20, issue.3, pp.555-574, 1994.
DOI : 10.1037/0096-1523.20.3.555

C. Spence and J. Driver, On measuring selective attention to an expected sensory modality, Perception & Psychophysics, vol.30, issue.3, pp.389-403, 1997.
DOI : 10.3758/BF03211906

C. Spence and J. Driver, Attracting attention to the illusory location of a sound, NeuroReport, vol.11, issue.9, pp.2057-2061, 2000.
DOI : 10.1097/00001756-200006260-00049

R. Spence, Information Visualization. Chapitre 7. Presentation, pp.111-133, 2001.
URL : https://hal.archives-ouvertes.fr/hal-01414610

R. Spence, Rapid, Serial and Visual: A Presentation Technique with Potential, Information Visualization, vol.41, issue.1, pp.13-19, 2002.
DOI : 10.1037/0096-1523.18.3.849

W. Spieth, J. F. Curtis, and J. C. Webster, Responding to One of Two Simultaneous Messages, The Journal of the Acoustical Society of America, vol.26, issue.3, pp.391-396, 1954.
DOI : 10.1121/1.1907347

B. E. Stein and M. A. Meredith, The merging of the senses, p.47, 1993.

B. Suh, A. Woodruff, R. Rosenholtz, and A. Et-glass, Popout prism, Proceedings of the SIGCHI conference on Human factors in computing systems Changing our world, changing ourselves, CHI '02, 2002.
DOI : 10.1145/503376.503422

C. Suied, N. Bonneel, and I. Et-viaud-delmon, Role of semantic vs spatial congruency in a bimodal go/no-go task, Poster, pp.55-116, 2007.
URL : https://hal.archives-ouvertes.fr/inria-00606807

C. Suied, N. Bonneel, and I. Et-viaud-delmon, Integration of auditory and visual information in the recognition of realistic objects, Experimental Brain Research, vol.58, issue.1, pp.91-102, 2009.
DOI : 10.1007/s00221-008-1672-6

URL : https://hal.archives-ouvertes.fr/inria-00606822

C. Suied, P. Susini, S. McAdams, and R. D. Patterson, Why are natural sounds detected faster than pips?, The Journal of the Acoustical Society of America, vol.127, issue.3, p.127, 2010.
DOI : 10.1121/1.3310196

URL : https://hal.archives-ouvertes.fr/hal-01106520

C. Suied and I. Et-viaud-delmon, Auditory-Visual Object Recognition Time Suggests Specific Processing for Animal Sounds, PLoS ONE, vol.106, issue.4, 2009.
DOI : 10.1371/journal.pone.0005256.t001

URL : https://hal.archives-ouvertes.fr/hal-01107100

W. Sumby and I. Pollack, Visual Contribution to Speech Intelligibility in Noise, The Journal of the Acoustical Society of America, vol.26, issue.2, pp.212-215, 1954.
DOI : 10.1121/1.1907309

J. Theeuwes, Top-down search strategies cannot override attentional capture, Psychonomic Bulletin & Review, vol.16, issue.1, pp.65-70, 2004.
DOI : 10.3758/BF03206462

S. Thorpe, D. Fize, and C. Marlot, Speed of processing in the human visual system, Nature, vol.381, issue.6582, pp.520-522, 1996.
DOI : 10.1038/381520a0

A. Treisman, Features and Objects in Visual Processing, Scientific American, vol.255, issue.5, pp.114-125, 1986.
DOI : 10.1038/scientificamerican1186-114B

A. Treisman, Feature binding, attention and object perception, Philosophical Transactions of the Royal Society B: Biological Sciences, vol.353, issue.1373, pp.1295-1306, 1998.
DOI : 10.1098/rstb.1998.0284

A. Treisman and G. Gelade, A feature-integration theory of attention, Cognitive Psychology, vol.12, issue.1, pp.97-136, 1980.
DOI : 10.1016/0010-0285(80)90005-5

A. Treisman and S. Gormican, Feature analysis in early vision: Evidence from search asymmetries, Psychological Review, vol.95, issue.1, pp.14-48, 1988.
DOI : 10.1037/0033-295X.95.1.15

E. Van der Burg, C. N. Olivers, and A. Bronkhorst, Pip and pop: Nonspatial auditory signals improve spatial visual search, Journal of Experimental Psychology: Human Perception and Performance, vol.34, issue.5, pp.1053-1065, 2008.
DOI : 10.1037/0096-1523.34.5.1053

L. A. Varghese, E. J. Ozmeral, V. Best, and B. G. Shinn-Cunningham, How Visual Cues for when to Listen Aid Selective Auditory Attention, Journal of the Association for Research in Otolaryngology, vol.32, issue.3, p.53, 2012.
DOI : 10.1007/s10162-012-0314-7

A. Vatakis and C. Spence, Evaluating the influence of the "unity assumption" on the temporal perception of realistic audiovisual stimuli, Acta Psychologica, vol.127, issue.1, pp.12-23, 2008.
DOI : 10.1016/j.actpsy.2006.12.002

B. N. Walker, A. Nance, and J. U. Et-lindsay, Spearcons (Speech-Based Earcons) Improve Navigation Performance in Advanced Auditory Menus, Proceedings of the 12th International Conference on Auditory Display, pp.63-68, 2006.
DOI : 10.1177/154193120805201823

Q. Wang, P. Cavanagh, and M. Et-green, Familiarity and pop-out in visual search, Perception & Psychophysics, vol.15, issue.5, pp.495-500, 1994.
DOI : 10.3758/BF03206946

C. Ware, Information visualization : perception for design, pp.30-31, 2000.

C. Ware and M. Lewis, The DragMag image magnifier, Conference companion on Human factors in computing systems , CHI '95, pp.407-408, 1995.
DOI : 10.1145/223355.223749

K. Watanabe, Crossmodal Interaction in Humans, Thèse de doctorat, California Institute of Technology, 2001.

R. Welch and D. Warren, Immediate perceptual response to intersensory discrepancy, Psychological Bulletin, vol.88, issue.3, pp.638-667, 1980.
DOI : 10.1037/0033-2909.88.3.638

J. M. Wolfe, Asymmetries in visual search : An introduction. Perception and Psychophysics, pp.381-389, 2001.

N. Wood and N. Cowan, The cocktail party phenomenon revisited: How frequent are attention shifts to one's name in an irrelevant auditory channel?, Journal of Experimental Psychology: Learning, Memory, and Cognition, vol.21, issue.1, pp.255-260, 1995.
DOI : 10.1037/0278-7393.21.1.255

D. L. Woods, C. Alain, R. Diaz, D. Rhodes, and K. Ogawa, Location and frequency cues in auditory selective attention, Journal of Experimental Psychology: Human Perception and Performance, vol.27, issue.1, pp.65-74, 2001.
DOI : 10.1037/0096-1523.27.1.65

M. Wright, A. Freed, and A. Momeni, OpenSound Control: State of the art, Conference on NIME, pp.153-159, 2003.

K. Yamaashi, M. Tani, and K. Tanikoshi, Fisheye videos, INTERACT '93 and CHI '93 conference companion on Human factors in computing systems, CHI '93, pp.119-120, 1993.
DOI : 10.1145/259964.260141

D. Yamamoto, S. Ozeki, and N. Takahashi, Focus+Glue+Context, Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, GIS '09, pp.101-110, 2009.
DOI : 10.1145/1653771.1653788

E. Yumoto, W. J. Gould, and T. Baer, Harmonics-to-noise ratio as an index of the degree of hoarseness, The Journal of the Acoustical Society of America, vol.71, issue.6, pp.1544-1550, 1982.
DOI : 10.1121/1.387808

S. Yuval-Greenberg and L. Y. Deouell, The dog's meow: asymmetrical interaction in cross-modal object recognition, Experimental Brain Research, vol.27, issue.5, pp.603-614, 2009.
DOI : 10.1007/s00221-008-1664-6

P. Zahorik, D. Brungart, and A. W. Bronkhorst, Auditory distance perception in humans: a summary of past and present research, Acta Acustica united with Acustica, vol.91, issue.3, pp.409-420, 2005.

A. Zanella, M. Rounding, and M. S. Carpendale, On the effects of visual cues in comprehending distortions, pp.34-68, 2000.

J. Zhao, F. Chevalier, E. Pietriga, and R. Balakrishnan, Exploratory Analysis of Time-Series with ChronoLenses, IEEE Transactions on Visualization and Computer Graphics, vol.17, issue.12, pp.2422-2431, 2011.
DOI : 10.1109/TVCG.2011.195

URL : https://hal.archives-ouvertes.fr/inria-00637082