Theses

Expliquer et justifier les systèmes de décisions algorithmiques

Clément Henin 1, 2 
1 PRIVATICS - Privacy Models, Architectures and Tools for the Information Society
Inria Grenoble - Rhône-Alpes, CITI - CITI Centre of Innovation in Telecommunications and Integration of services, Inria Lyon
Abstract : In a context favorable to the rationalization of decisions through measurable objectives and quantitative methods, the recent development of computing technologies has increased the adoption of algorithmic decision systems. Such systems are already used in many fields and widespread adoption is expected. However, the use of such algorithms does not come without risk. While several solutions have already been proposed, notably by the eXplainable Artificial Intelligence research community, we believe that additional efforts are needed to fully address all the issues. In this thesis, we start by presenting the main research works in eXplainable Artificial Intelligence, in particular black-box explanation methods, i.e. those that work without accessing the algorithm's code. These seemingly diverse methods actually share a common structure that we identify and build upon to establish a taxonomy. Next, we describe our Interactive Black-box EXplanation system called IBEX. Based on user input, IBEX generates an explanation tailored to the individual's profile and needs. The user can interact with the explanation system according to their skills. To generate the explanations, IBEX relies on a framework for black-box explanations that decomposes the explanation process into two distinct components. The approach proposed in IBEX has been tested in a study involving users (agents of a French regulatory authority) with various profiles. Then, we propose an original method for challenging or justifying algorithm-based decisions. While contestation is of major importance in legal texts and justification by external norms is a recurrent concern in the social sciences, there are no tools dedicated to these specific objectives. Our tool Algocate operationalizes these notions for three types of norms (rules, objectives and reference) in an interactive way. This approach is also tested in a study involving real users. Finally, a three-year collaboration was conducted with the French biomedical agency. It focused on the heart transplant allocation algorithm. After a bibliographic analysis and interviews conducted in French hospitals, the main sociological and organizational issues surrounding this algorithm were identified. Then, a set of information supporting the explanation and justification of the system was made available to the physicians of the transplant centers.
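
For readers unfamiliar with the notion, the sketch below illustrates what a black-box explanation method means in practice: the decision system is queried only through its inputs and outputs, never through its code. This is a generic perturbation-based sensitivity probe written for illustration, not IBEX itself; the `predict` function, the feature names and the `local_sensitivity` helper are hypothetical placeholders and do not come from the thesis.

```python
# Minimal sketch of a black-box explanation by input perturbation.
# The decision system is treated as an opaque function `predict`; nothing
# about its internals is assumed. All names are illustrative placeholders.

def predict(features):
    # Hypothetical stand-in for an opaque decision system (e.g. a loan decision).
    return 1 if 0.6 * features["income"] + 0.4 * features["seniority"] > 50 else 0

def local_sensitivity(predict_fn, instance, delta=1.0):
    """For each numeric feature, check whether a small change flips the decision."""
    base = predict_fn(instance)
    influences = {}
    for name, value in instance.items():
        perturbed = dict(instance)
        perturbed[name] = value + delta
        # 0 = no effect on the decision, +1 / -1 = the decision flips
        influences[name] = predict_fn(perturbed) - base
    return influences

if __name__ == "__main__":
    applicant = {"income": 70.0, "seniority": 5.0}
    print(local_sensitivity(predict, applicant))
```

Methods of this family only require the ability to query the system, which is why they are called black-box explanation methods in the taxonomy discussed above.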
Document type :
Theses

https://tel.archives-ouvertes.fr/tel-03551798
Contributor : ABES STAR
Submitted on : Wednesday, May 18, 2022 - 5:32:31 PM
Last modification on : Thursday, August 4, 2022 - 5:18:38 PM

File

these.pdf
Version validated by the jury (STAR)

Identifiers

  • HAL Id : tel-03551798, version 2

Citation

Clément Henin. Expliquer et justifier les systèmes de décisions algorithmiques. Intelligence artificielle [cs.AI]. Université de Lyon, 2021. Français. ⟨NNT : 2021LYSEI058⟩. ⟨tel-03551798v2⟩

Metrics

Record views : 243

File downloads : 91