Expressive Sound Synthesis for Animation

Cécile Picard-Limpens 1
1 REVES - Rendering and virtual environments with sound
CRISAM - Inria Sophia Antipolis - Méditerranée
Abstract : The main objective of this thesis is to provide tools for expressive, real-time synthesis of sounds resulting from physical interactions between objects in a 3D virtual environment. Such sounds, e.g., collision sounds or sounds from continuous interaction between surfaces, are difficult to create in a pre-production process since they are highly dynamic and vary drastically depending on the interaction and the objects involved. To achieve this goal, two approaches are proposed: the first is based on simulating the physical phenomena responsible for sound production, the second on processing a database of recordings. From the physically based point of view, the sound source is modeled as the combination of an excitation and a resonator. We first present an original technique to model the interaction force for continuous contacts, such as rolling. Visual textures of objects in the environment are reused as a discontinuity map to create audible position-dependent variations during continuous contacts. We then propose a method for a robust and flexible modal analysis to formulate the resonator. Besides handling a large variety of geometries and providing a multi-resolution representation of the modal parameters, the technique enables us to solve the problems of coherence between physics simulation and sound synthesis that are frequently encountered in animation. Following a more empirical approach, we propose an innovative method that bridges the gap between direct playback of audio recordings and physically based synthesis by retargeting audio grains extracted from recordings according to the output of a physics engine. In an off-line analysis task, we automatically segment audio recordings into atomic grains and represent each original recording as a compact series of audio grains.
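In standard modal synthesis, the resonator half of the excitation–resonator model above is a bank of exponentially damped sinusoids, with the modal amplitudes depending on where the object is struck. The following minimal sketch (the modal parameters are made up for illustration and are not taken from the thesis) shows the idea:

```python
import numpy as np

def modal_impact(freqs, dampings, gains, duration=1.0, sr=44100):
    """Synthesize an impact sound as a bank of exponentially damped sinusoids.

    freqs    : modal frequencies in Hz (from modal analysis of the object)
    dampings : decay rates in 1/s
    gains    : modal amplitudes (depend on the excitation point)
    """
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs, dampings, gains):
        out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out

# Hypothetical modal parameters for a small metallic object.
sound = modal_impact([440.0, 1130.0, 2210.0], [8.0, 12.0, 20.0], [1.0, 0.5, 0.25])
```

In a real-time setting each mode is typically implemented as a two-pole resonant filter excited by the contact-force signal, so that continuous interactions (rolling, scraping) drive the same resonator as impacts.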
During interactive animations, the grains are triggered individually or in sequence according to parameters reported from the physics engine and/or user-defined procedures. Finally, we address fracture events, which commonly appear in virtual environments, especially in video games. Because their complexity makes a purely physically based model prohibitively expensive and an empirical approach impracticable for the large variety of micro-events, this thesis opens the discussion on a hybrid model and the possible strategies for combining a physically based approach with an empirical one. The model aims at appropriately rendering the sound corresponding to the fracture itself and to each specific sounding fragment when a material breaks into pieces.
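The grain-retargeting step can be pictured as a nearest-neighbour lookup: at runtime, a descriptor reported by the physics engine (here, a single impact-energy value; the actual descriptors and grain representation in the thesis may differ) selects the best-matching grain from the analyzed database. A hypothetical sketch:

```python
import bisect

def pick_grain(grains, target_energy):
    """Choose the stored grain whose analyzed energy best matches the
    impact energy reported by the physics engine.

    `grains` must be sorted by increasing "energy"; a binary search
    finds the two neighbours of the target, and the closer one wins.
    Returns the index of the selected grain.
    """
    energies = [g["energy"] for g in grains]
    i = bisect.bisect_left(energies, target_energy)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(grains)]
    return min(candidates, key=lambda j: abs(energies[j] - target_energy))

# Hypothetical grain table produced by the off-line segmentation step.
grains = [{"id": k, "energy": e} for k, e in enumerate([0.1, 0.4, 0.9, 1.6])]
best = pick_grain(grains, target_energy=1.0)  # selects the grain with energy 0.9
```

Sequencing for continuous contacts would then repeatedly query such a lookup as the engine streams new contact parameters, concatenating or cross-fading the selected grains.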
https://tel.archives-ouvertes.fr/tel-00440417
Contributor : Cécile Picard-Limpens
Submitted on : Thursday, December 10, 2009 - 4:18:06 PM
Last modification on : Saturday, January 27, 2018 - 1:31:35 AM
Long-term archiving on : Thursday, June 17, 2010 - 8:19:20 PM

Identifiers

  • HAL Id : tel-00440417, version 1

Citation

Cécile Picard-Limpens. Expressive Sound Synthesis for Animation. Acoustics [physics.class-ph]. Université Nice Sophia Antipolis, 2009. English. ⟨tel-00440417⟩
