
Towards adaptive learning and inference - Applications to hyperparameter tuning and astroparticle physics

Abstract : Inference and optimization algorithms usually have hyperparameters that need to be tuned to run efficiently. We consider several approaches to automating the hyperparameter tuning step by learning the structure of the problem at hand online. The first half of this thesis is devoted to hyperparameter tuning in machine learning, where recent results suggest that, with current-generation hardware, the optimal allocation of computing time includes more hyperparameter exploration than has been typical in the literature. After presenting and improving the generic sequential model-based optimization (SMBO) framework, we show that SMBO applies successfully to the challenging task of tuning the numerous hyperparameters of deep belief networks, outperforming expert manual tuning. To close the first part, we propose an algorithm that performs tuning across datasets, further closing the gap between automated tuners and human experts by mimicking the memory that humans keep of past experiments with the same algorithm on different datasets. The second half of this thesis deals with adaptive Markov chain Monte Carlo (MCMC) algorithms: sampling-based algorithms that explore complex probability distributions while tuning their internal parameters on the fly. This second part starts by describing the Pierre Auger observatory (henceforth Auger), a large-scale particle physics experiment dedicated to the observation of atmospheric showers triggered by cosmic rays. These showers are wide cascades of elementary particles raining down on the surface of the Earth, produced when charged nuclei hit our atmosphere at the highest energies ever observed. The analysis of Auger data motivated our study of adaptive MCMC, since the latter can cope with the complex, high-dimensional generative models involved in Auger.
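The SMBO idea mentioned above can be sketched in a few lines: fit a cheap surrogate model to the hyperparameter evaluations seen so far, then choose the next configuration by optimizing the surrogate instead of the expensive objective. This is only an illustrative toy, not the thesis's actual method: the 1-nearest-neighbour surrogate, the quadratic stand-in objective, and all names below are assumptions made for the sketch.

```python
import random

def objective(x):
    # Hypothetical 1-D "validation loss" we want to minimize;
    # in practice this would be an expensive training run.
    return (x - 0.3) ** 2

def smbo(n_iter=20, n_candidates=100, seed=0):
    rng = random.Random(seed)
    history = []  # (hyperparameter, observed loss) pairs
    for _ in range(n_iter):
        if len(history) < 3:
            # Bootstrap the surrogate with a few random draws.
            x = rng.uniform(0.0, 1.0)
        else:
            # Toy surrogate: 1-nearest-neighbour prediction of the loss.
            def surrogate(c):
                return min(history, key=lambda h: abs(h[0] - c))[1]
            # Propose candidates and keep the one the surrogate prefers.
            candidates = [rng.uniform(0.0, 1.0) for _ in range(n_candidates)]
            x = min(candidates, key=surrogate)
        history.append((x, objective(x)))
    return min(history, key=lambda h: h[1])

best_x, best_loss = smbo()
```

Real SMBO implementations replace the nearest-neighbour surrogate with a probabilistic model (e.g. a Gaussian process) and pick candidates by an acquisition criterion such as expected improvement, which balances exploration against exploitation.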
We derive the first part of the Auger generative model and introduce a procedure that performs inference on shower parameters using only this bottom part. Our generative model is inherently permutation invariant, which leads to "label switching". Label switching is a common difficulty in MCMC inference that renders marginal inference useless because of redundant modes in the target distribution. After reviewing existing solutions to the label switching problem, we propose AMOR, the first adaptive MCMC algorithm with online relabeling. We empirically demonstrate the benefits of adaptivity and show that AMOR applies successfully to the problem of inference in our Auger model. Finally, we prove consistency results for a variant of AMOR. Our proof provides a generic framework for the analysis of other relabeling algorithms and unveils interesting links between relabeling algorithms and vector quantization.
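Label switching can be illustrated with a toy two-component example. The sketch below simulates MCMC draws for two mixture means whose labels are randomly swapped, shows that naive marginal means collapse toward zero, and applies a classical identifiability-constraint relabeling. This is a baseline illustration, not the AMOR algorithm; the component locations and sample counts are made-up assumptions.

```python
import random
import statistics

rng = random.Random(42)

# Toy posterior draws for the means of a two-component mixture.
# Because the likelihood is invariant under swapping the labels,
# the chain visits both (a, b) and (b, a) modes; here we simulate
# that by swapping each draw with probability 1/2.
samples = []
for _ in range(10000):
    draw = sorted([rng.gauss(-2.0, 0.1), rng.gauss(2.0, 0.1)])
    if rng.random() < 0.5:
        draw = draw[::-1]  # a label switch
    samples.append(draw)

# Naive marginal means are useless: both collapse toward 0,
# averaging over the two redundant modes.
naive = [statistics.mean(s[i] for s in samples) for i in (0, 1)]

# Simple relabeling: impose the identifiability constraint mu1 < mu2
# on every draw before computing marginals.
relabeled = [sorted(s) for s in samples]
fixed = [statistics.mean(s[i] for s in relabeled) for i in (0, 1)]
```

An ordering constraint works here because the components are well separated; when modes overlap, such hard constraints distort the posterior, which is one motivation for adaptive relabeling schemes like the online relabeling studied in the thesis.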

Cited literature [129 references]
Contributor : Rémi Bardenet
Submitted on : Saturday, January 12, 2013 - 9:22:43 PM
Last modification on : Tuesday, November 17, 2020 - 10:16:02 AM
Long-term archiving on : Saturday, April 13, 2013 - 4:08:14 AM


  • HAL Id : tel-00773295, version 1



Rémi Bardenet. Towards adaptive learning and inference - Applications to hyperparameter tuning and astroparticle physics. Methodology [stat.ME]. Université Paris Sud - Paris XI, 2012. English. ⟨tel-00773295⟩


