Scénarios temporels pour l'interprétation automatique de séquences vidéos

Abstract: This thesis focuses on the recognition of temporal scenarios for Automatic Video Interpretation: the goal of this work is to recognize, in real time, the behaviors of individuals evolving in a scene depicted by video sequences captured by cameras. The recognition process takes the following as input: (1) human behavior (i.e., temporal scenario) models predefined by experts; (2) 3D geometric and semantic information about the observed environment; and (3) a stream of individuals tracked by a vision module.
To address this issue, we have proposed a generic model of temporal scenarios and a description language to represent knowledge of human behaviors. The representation of this knowledge needs to be clear, rich, intuitive and flexible. The proposed model of a temporal scenario M is composed of five components: (1) a set of physical object variables corresponding to the physical objects involved in M; (2) a set of temporal variables corresponding to the sub-scenarios composing M; (3) a set of forbidden variables corresponding to the scenarios that are not allowed to occur during the recognition of M; (4) a set of constraints (symbolic, logical, spatial and temporal constraints, including Allen's interval algebra operators) involving these variables; and (5) a set of decisions corresponding to the tasks, predefined by experts, that need to be executed when M has been recognized.
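As a rough illustration of this five-component structure, here is a minimal Python sketch. The class names, the small set of Allen operators and the "bank attack" example are assumptions made for illustration only; they are not the description language defined in the thesis.

```python
from dataclasses import dataclass
from enum import Enum, auto


class AllenOp(Enum):
    """A small subset of Allen's interval algebra operators."""
    BEFORE = auto()
    MEETS = auto()
    DURING = auto()
    OVERLAPS = auto()
    EQUALS = auto()


@dataclass
class Constraint:
    """A temporal constraint between two sub-scenario (temporal) variables."""
    left: str       # name of a sub-scenario variable
    op: AllenOp
    right: str


@dataclass
class ScenarioModel:
    """Temporal scenario model M with its five components."""
    physical_objects: list[str]    # (1) physical object variables involved in M
    sub_scenarios: list[str]       # (2) temporal variables (sub-scenarios composing M)
    forbidden: list[str]           # (3) scenarios not allowed to occur while recognizing M
    constraints: list[Constraint]  # (4) symbolic/logical/spatial/temporal constraints
    decisions: list[str]           # (5) tasks to execute once M is recognized


# Hypothetical example; all names are invented for illustration.
bank_attack = ScenarioModel(
    physical_objects=["robber", "employee", "counter_zone"],
    sub_scenarios=["robber_enters_zone", "robber_close_to_employee"],
    forbidden=["employee_leaves_scene"],
    constraints=[Constraint("robber_enters_zone", AllenOp.BEFORE, "robber_close_to_employee")],
    decisions=["raise_alarm"],
)
```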
We have also proposed a temporal constraint resolution technique to recognize, in real time, the temporal scenario models predefined by experts. The proposed algorithm is efficient in most cases, both for processing temporal constraints and for combining the several actors defined within a given scenario M. By efficient we mean that, in most cases, the recognition process is linear in the number of sub-scenarios and in the number of physical object variables defined within M.
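The thesis's constraint-resolution algorithm itself is not reproduced here. As a toy illustration of the kind of temporal check involved, the sketch below (reusing the AllenOp and ScenarioModel types from the sketch above) verifies the Allen constraints of a model against already-recognized sub-scenario intervals in a single linear pass; the interval representation (frame numbers) and function names are assumptions.

```python
from typing import NamedTuple


class Interval(NamedTuple):
    """Time interval of a recognized sub-scenario instance (frame numbers assumed)."""
    start: int
    end: int


def satisfies(a: Interval, op: AllenOp, b: Interval) -> bool:
    """Check a single Allen relation between two intervals."""
    if op is AllenOp.BEFORE:
        return a.end < b.start
    if op is AllenOp.MEETS:
        return a.end == b.start
    if op is AllenOp.DURING:
        return b.start < a.start and a.end < b.end
    if op is AllenOp.OVERLAPS:
        return a.start < b.start < a.end < b.end
    if op is AllenOp.EQUALS:
        return a == b
    raise ValueError(f"unsupported operator: {op}")


def temporal_constraints_hold(model: ScenarioModel, recognized: dict[str, Interval]) -> bool:
    """One pass over the model's constraints (hence linear in their number):
    every temporal constraint must hold for the recognized sub-scenario intervals."""
    return all(
        satisfies(recognized[c.left], c.op, recognized[c.right])
        for c in model.constraints
    )


# Usage with the hypothetical bank_attack model defined above:
recognized = {
    "robber_enters_zone": Interval(start=120, end=150),
    "robber_close_to_employee": Interval(start=180, end=220),
}
assert temporal_constraints_hold(bank_attack, recognized)
```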
To validate the proposed algorithm in terms of correctness, robustness and processing time with respect to scenario and scene properties (e.g., number of sub-scenarios, number of persons in the scene), we have tested it on several videos from different applications, in both on-line and off-line modes, as well as on simulated data.
In experiments conducted on metro surveillance and bank monitoring applications, the proposed scenario description language showed its capability to easily represent the temporal scenarios corresponding to the human behaviors of interest in these applications. Moreover, the proposed temporal scenario recognition algorithm showed its capability to recognize, in real time (at least 10 frames/second), complex scenario models (up to 10 physical object variables and 10 sub-scenario variables per scenario) in complex video sequences (up to 240 persons per frame in the scene).
Document type: Thesis

https://tel.archives-ouvertes.fr/tel-00327919
Contributor: Service Ist Inria Sophia Antipolis-Méditerranée / I3s
Submitted on: Thursday, October 9, 2008 - 3:31:43 PM
Last modification on: Saturday, January 27, 2018 - 1:30:52 AM
Long-term archiving on: Monday, June 7, 2010 - 5:24:29 PM

Identifiers

  • HAL Id: tel-00327919, version 1

Citation

Van-Thinh Vu. Scénarios temporels pour l'interprétation automatique de séquences vidéos. Interface homme-machine [cs.HC]. Université Nice Sophia Antipolis, 2004. Français. ⟨tel-00327919⟩
