
High-dimensional Learning for Extremes

Abstract : This thesis develops a statistical approach to learning the dependence structure of extremes in a high-dimensional setting. The first chapter gathers the main results of multivariate extreme value theory and introduces the tools needed in the following chapters, notably the concept of regularly varying random vectors, along with notions from statistical learning, high-dimensional statistics, and model selection. The second chapter, a joint work with Olivier Wintenberger, introduces the concept of sparse regular variation, defined via the Euclidean projection onto the simplex, which extends the standard notion of regular variation. This approach brings sparsity into the study of multivariate extremes and thereby reduces the dimension. The third chapter presents work in progress with Olivier Wintenberger on statistical inference for sparsely regularly varying random vectors; the goal is a method that identifies the subsets of R^d on which extremes concentrate. Finally, the fourth chapter discusses the article by Engelke and Hitz (2020), in which the authors define a notion of conditional independence for a multivariate Pareto distribution. We extend their approach by studying the minimum of the marginals of a regularly varying random vector.
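The Euclidean projection onto the simplex mentioned in the abstract is a standard convex-optimization primitive. The following is a minimal sketch of one well-known sorting-based algorithm for it (the function name and interface are illustrative, not taken from the thesis):

```python
import numpy as np

def project_simplex(v, z=1.0):
    """Euclidean projection of a vector v onto the simplex
    {x : x_i >= 0, sum_i x_i = z}, via the sorting-based method.
    """
    v = np.asarray(v, dtype=float)
    # Sort entries in decreasing order and form cumulative sums.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    # Largest index rho with u[rho] * (rho + 1) > css[rho] - z.
    rho = np.nonzero(u * np.arange(1, v.size + 1) > (css - z))[0][-1]
    # Threshold theta shifts the coordinates before clipping at zero.
    theta = (css[rho] - z) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# Example: the projection zeroes out small coordinates, which is the
# sparsity mechanism exploited by sparse regular variation.
print(project_simplex([2.0, 0.0]))        # a point already off the simplex
print(project_simplex([0.6, 0.3, 0.05]))  # small entries may be set to 0
```

The key point for the thesis is that, unlike the usual self-normalization v / sum(v), this projection can map coordinates exactly to zero, which is what makes the resulting angular representation sparse.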
Contributor : ABES STAR
Submitted on : Friday, October 8, 2021 - 4:46:30 PM
Last modification on : Friday, August 5, 2022 - 3:00:08 PM


Version validated by the jury (STAR)


  • HAL Id : tel-02977794, version 2


Nicolas Meyer. High-dimensional Learning for Extremes. Statistics [math.ST]. Sorbonne Université, 2020. English. ⟨NNT : 2020SORUS227⟩. ⟨tel-02977794v2⟩


