
Deep learning interpretability with visual analytics : Exploring reasoning and bias exploitation

Théo Jaunet 1, 2, 3
2 SICAL - Situated Interaction, Collaboration, Adaptation and Learning (LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information)
3 imagine - Extraction de Caractéristiques et Identification (LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information)
Abstract : Over the past few years, AI and machine learning have evolved from research areas secluded in laboratories far from the public into technologies deployed at industrial scale, with a considerable impact on our everyday lives. As these technologies are also applied to critical problems such as finance and autonomous driving, in which their decisions can put people at risk, this trend has begun to raise legitimate concerns. Since much of the underlying complexity of the decision-making is learned from massive amounts of data, how these models make decisions remains unknown both to their creators and to the people affected by those decisions. This has led to the emerging field of eXplainable AI (XAI) and the problem of analyzing the behavior of trained models in order to shed light on their reasoning capabilities and the biases to which they are subject. In this thesis, we contributed to this field by designing visual analytics systems for studying and improving the interpretability of deep neural networks. Our goal was to provide experts with tools that help them better interpret the decisions of their models and eventually improve them. We also proposed applications designed to introduce deep learning methods to a non-expert audience. Throughout this thesis, we focused on the underexplored challenge of model interpretation and improvement for robotics applications, where important decisions must be made from high-level and high-dimensional inputs such as images. All the tools designed during this thesis have been published as open-source projects, and where possible, our visualizations have been made available online as prototypes. These tools are highly interactive and can encourage further interpretation of model decisions by the research community.
Submitted on : Monday, October 24, 2022 - 2:59:57 PM
Last modification on : Tuesday, October 25, 2022 - 3:57:50 AM


Version validated by the jury (STAR)


  • HAL Id : tel-03827183, version 1


Théo Jaunet. Deep learning interpretability with visual analytics : Exploring reasoning and bias exploitation. Artificial Intelligence [cs.AI]. Université de Lyon, 2022. English. ⟨NNT : 2022LYSEI041⟩. ⟨tel-03827183⟩


