Spatio-temporal saliency detection in dynamic scenes using color and texture features

Abstract: Visual saliency is an important research topic in the field of computer vision due to its numerous possible applications. It helps to focus on regions of interest instead of processing the whole image or video data. Detecting visual saliency in still images has been widely addressed in the literature with several formulations. However, visual saliency detection in videos has attracted little attention, and it is a more challenging task because of the additional temporal information. Indeed, a video contains strong spatio-temporal correlation between the regions of consecutive frames, and, furthermore, the motion of foreground objects dramatically changes the importance of the objects in a scene. The main objective of this thesis is to develop a spatio-temporal saliency method that works well for complex dynamic scenes.

A spatio-temporal saliency map is usually obtained by the fusion of a static saliency map and a dynamic saliency map. In our work, we model the dynamic textures in a dynamic scene with Local Binary Patterns (LBP-TOP) to compute the dynamic saliency map, and we use color features to compute the static saliency map. Both saliency maps are computed using a bio-inspired mechanism of the Human Visual System (HVS) with a discriminant formulation known as center-surround saliency, and are then fused appropriately.

The proposed models have been extensively evaluated on diverse publicly available datasets containing several videos of dynamic scenes. The evaluation is performed in two parts. First, we evaluate the ability of the method to locate interesting foreground objects in complex scenes. Secondly, we evaluate our model on the task of predicting human observers' fixations. The proposed method is also compared against state-of-the-art methods, and the results show that the proposed approach achieves competitive results.

In this thesis we also evaluate the performance of different fusion techniques, because fusion plays a critical role in the accuracy of the spatio-temporal saliency map. We evaluate different fusion techniques on a large and diverse complex dataset, and the results show that the fusion method must be selected according to the characteristics of a sequence, in terms of color and motion contrasts. Overall, fusion techniques which take the best of each saliency map (static and dynamic) in the final spatio-temporal map achieve the best results.
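The core step summarized above is the fusion of a static (color-based) and a dynamic (LBP-TOP-based) saliency map into a single spatio-temporal map. As a minimal illustration of such a fusion step, the Python sketch below combines two normalized maps with a pixel-wise maximum or a mean; the function names and the NumPy-based formulation are illustrative assumptions, not the implementation described in the thesis.

    import numpy as np

    def normalize(saliency_map):
        # rescale a saliency map to [0, 1]; return zeros if the map is flat
        m = saliency_map.astype(np.float64)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    def fuse_saliency_maps(static_map, dynamic_map, method="max"):
        # fuse a static (color) saliency map and a dynamic (motion/texture)
        # saliency map into one spatio-temporal saliency map
        s, d = normalize(static_map), normalize(dynamic_map)
        if method == "max":
            # keep, at each pixel, the strongest of the two responses
            return np.maximum(s, d)
        if method == "mean":
            # simple average of the two maps
            return 0.5 * (s + d)
        raise ValueError("unknown fusion method: %s" % method)

For example, fuse_saliency_maps(static, dynamic, method="max") keeps the strongest cue at each pixel, which corresponds to the "take the best of each map" behaviour mentioned in the abstract.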

https://tel.archives-ouvertes.fr/tel-01198664
Contributor : Abes Star
Submitted on : Monday, September 14, 2015 - 10:52:06 AM
Last modification on : Wednesday, September 12, 2018 - 1:25:54 AM
Long-term archiving on : Tuesday, December 29, 2015 - 1:28:00 AM

File

these_A_MUDDAMSETTY_Satya_Mahe...
Version validated by the jury (STAR)

Identifiers

  • HAL Id : tel-01198664, version 1

Citation

Satya Mahesh Muddamsetty. Spatio-temporal saliency detection in dynamic scenes using color and texture features. Signal and Image processing. Université de Bourgogne, 2014. English. ⟨NNT : 2014DIJOS067⟩. ⟨tel-01198664⟩
