
Extraction de connaissances interprétables dans des séries temporelles (Extraction of interpretable knowledge from time series)

Abstract: Energiency is a company that sells a platform to allow manufacturers to analyze their energy consumption and production data, represented in the form of time series. This platform integrates machine learning models to meet customer needs. The application of such models to time series encounters two problems: on the one hand, some classical machine learning approaches were designed for tabular data and must be adapted to time series; on the other hand, the results of some approaches are difficult for end users to understand.

In the first part, we adapt a method to search for occurrences of temporal rules in time series from machines and industrial infrastructures. A temporal rule captures successional relationships between behaviours in time series (e.g., a value peak followed by a trough). In industrial series, due to the presence of many external factors, these regular behaviours can be distorted. As a result, two occurrences of the same behaviour produce two sequences of slightly different values. Current methods for searching for the occurrences of a rule use a distance measure to assess the similarity between sub-series. However, these measures are not suitable for assessing the similarity of distorted series such as those found in industrial settings. The first contribution of this thesis is a method for searching for occurrences of temporal rules that captures this variability in industrial time series. To this end, the method integrates an elastic distance measure capable of assessing the similarity between slightly deformed time series.

The second part of the thesis is devoted to the interpretability of time series classification methods, i.e. the ability of a classifier to return explanations for its results. These explanations must be understandable by a human. Classification is the task of associating a time series with a category (e.g., a series of power consumption measurements associated with the condition of the machine).
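The elastic distance idea can be illustrated with dynamic time warping (DTW), a standard example of such a measure. The implementation and the example series below are a minimal illustrative sketch, not the thesis's own method or data.

```python
# Minimal sketch of an elastic distance: classic dynamic time warping (DTW).
# Unlike the Euclidean distance, DTW may align one point of `a` with several
# consecutive points of `b` (and vice versa), so two series showing the same
# behaviour at slightly different speeds still come out as close.

def dtw_distance(a, b):
    """Return the DTW distance between two numeric sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # one-to-one match
    return cost[n][m]

# Two occurrences of the same "peak then trough" behaviour, slightly shifted:
x = [0, 1, 5, 1, -3, 0]
y = [0, 0, 1, 5, 1, -3, 0]
print(dtw_distance(x, y))  # → 0.0: the warping absorbs the time shift
```

A Euclidean distance would not even be defined here (the series have different lengths); DTW handles the misalignment by construction, which is why elastic measures suit the distorted sub-series found in industrial data.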
For an end user inclined to make decisions based on a classifier's results, understanding the rationale behind those results is of great importance; otherwise, it amounts to placing blind confidence in the classifier. The second contribution of this thesis is an interpretable time series classifier that can directly provide explanations for its results. This classifier uses local information in time series to discriminate between them. We present how to extract an explanation for a result.

Finally, the third contribution of this thesis is a method to explain, a posteriori, any result of any classifier, including non-interpretable ones. This method learns an interpretable classifier, called a proxy, on the neighbourhood of the time series whose classification we want to explain. This proxy must mimic the behaviour of the classifier to be explained in this neighbourhood. We carried out a user study to evaluate the interpretability of our method.
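The a-posteriori proxy idea can be sketched as follows: perturb the instance to explain, label the perturbed neighbours with the opaque classifier, and fit a simple interpretable model on that labelled neighbourhood. Everything below (the black box, the perturbation scheme, and the one-feature threshold proxy) is an illustrative stand-in, not the thesis's actual models.

```python
# Hedged sketch of explaining one prediction of an opaque classifier by
# fitting an interpretable "proxy" on perturbed neighbours of the instance,
# labelled by the black box itself (all components here are illustrative).
import random

def black_box(series):
    """Opaque classifier we want to explain (here: a hidden max-value rule)."""
    return 1 if max(series) > 3.0 else 0

def neighbourhood(series, n=200, scale=1.0, rng=None):
    """Generate perturbed copies of `series` by adding noise to each point."""
    rng = rng or random.Random(0)
    return [[v + rng.gauss(0, scale) for v in series] for _ in range(n)]

def fit_proxy(samples, labels):
    """Fit a one-feature threshold rule on the series maximum: a tiny
    interpretable proxy that mimics the black box locally."""
    feats = sorted(zip((max(s) for s in samples), labels))
    best_t, best_acc = feats[0][0], 0.0
    for i in range(len(feats) - 1):
        t = (feats[i][0] + feats[i + 1][0]) / 2
        acc = sum((f > t) == bool(y) for f, y in feats) / len(feats)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

series = [0.0, 1.0, 4.0, 1.0, 0.0]          # instance to explain
samples = neighbourhood(series)
labels = [black_box(s) for s in samples]
threshold, fidelity = fit_proxy(samples, labels)
print(f"proxy rule: class 1 iff max(series) > {threshold:.2f} "
      f"(local fidelity {fidelity:.0%})")
```

The returned rule is the explanation handed to the user, and the fidelity score measures how faithfully the proxy mimics the black box on the neighbourhood, which is the property the thesis requires of a proxy.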

Cited literature [134 references]
Contributor: Mael Guillemé
Submitted on: Wednesday, June 3, 2020 - 4:03:26 PM
Last modification on: Tuesday, October 20, 2020 - 11:03:57 AM
Long-term archiving on: Wednesday, September 23, 2020 - 10:13:17 PM


Files produced by the author(s)


  • HAL Id: tel-02658677, version 1



Maël Guilleme. Extraction de connaissances interprétables dans des séries temporelles. Informatique [cs]. Université de Rennes 1, 2019. Français. ⟨tel-02658677⟩


