
Statistical Approaches in Learning Theory: boosting and ranking

Abstract: Statistical learning theory has grown rapidly over the last ten years. The introduction of efficient classification algorithms that cope with high-dimensional data, such as boosting and Support Vector Machines, has raised new questions that Vapnik-Chervonenkis (VC) theory cannot answer: the Empirical Risk Minimization principle does not account for practical learning algorithms, and the VC dimension is not the appropriate concept to explain the generalization ability of such methods. In the first chapter, we recall the interpretation of boosting algorithms as implementations of convex risk minimization principles and study their properties from this viewpoint. In particular, we show the importance of regularization for obtaining consistent strategies. We also develop a new class of algorithms, called mirror averaging algorithms, and evaluate their performance through simulation experiments. After presenting the fundamental ideas underlying boosting, we study, in the second chapter, more advanced issues such as oracle inequalities. We propose a fine calibration of the penalty function according to the cost function in use and present non-asymptotic results on the performance of penalized boosting estimators, with refinements such as fast rates of convergence under the Mammen-Tsybakov margin conditions. We also describe the approximation properties of boosting with decision stumps. The third chapter explores the ranking problem. In applications such as information retrieval or credit scoring, ranking the instances can matter much more than merely classifying them. We propose a simple formulation of this problem in which ranking amounts to classification over pairs of observations. The difference lies in the nature of the empirical risks, which take the form of U-statistics, and we adapt classification theory to this framework. We also investigate generalizations of the ranking error that incorporate priors on the target ranking, for instance when we want to focus only on the "best" instances.
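As an illustrative aside (not part of the original record), the convex-risk view of boosting mentioned above can be sketched in a few lines: AdaBoost with decision stumps greedily minimizes the empirical exponential risk (1/n) Σᵢ exp(−yᵢ F(xᵢ)). The toy 1-D data and all names below are assumptions for illustration, not the thesis's own code.

```python
# Minimal sketch: AdaBoost with decision stumps as greedy minimization
# of the empirical exponential risk. Toy 1-D data, illustrative only.
import math

X = [0.1, 0.4, 0.35, 0.8, 0.9, 0.6]   # 1-D instances
y = [-1, -1, -1, 1, 1, 1]             # labels in {-1, +1}
n = len(X)

def stump(theta, s):
    """Decision stump: predicts s if x > theta, else -s (s in {-1, +1})."""
    return lambda x: s if x > theta else -s

def boost(X, y, rounds=10):
    w = [1.0 / n] * n                  # example weights, start uniform
    ensemble = []                      # list of (coefficient, stump)
    for _ in range(rounds):
        # pick the stump with the smallest weighted error
        best = None
        for theta in sorted(X):
            for s in (-1, 1):
                h = stump(theta, s)
                err = sum(w[i] for i in range(n) if h(X[i]) != y[i])
                if best is None or err < best[0]:
                    best = (err, h)
        err, h = best
        err = min(max(err, 1e-10), 1 - 1e-10)      # numerical guard
        alpha = 0.5 * math.log((1 - err) / err)    # exact line search on exp loss
        ensemble.append((alpha, h))
        # reweighting = coordinate step on the exponential risk
        w = [w[i] * math.exp(-alpha * y[i] * h(X[i])) for i in range(n)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def F(ensemble, x):
    """Additive score of the boosted ensemble."""
    return sum(a * h(x) for a, h in ensemble)

model = boost(X, y)
preds = [1 if F(model, x) > 0 else -1 for x in X]   # recovers y on this toy set
```

Each round adds one stump with a coefficient chosen by exact line search on the exponential loss, which is the "implementation of a convex risk minimization principle" the abstract refers to; the regularization the thesis emphasizes (e.g. constraining the ℓ₁ norm of the coefficients) is omitted here for brevity.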
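Similarly, the pairwise formulation of ranking can be made concrete (again an illustrative sketch with assumed names, not the thesis's code): the empirical ranking risk of a scoring rule is a U-statistic of order 2, namely the fraction of pairs (i, j) that the scores order in the opposite way to the labels. For binary labels this is one minus the empirical AUC.

```python
# Empirical ranking risk as a U-statistic of order 2: the average of the
# pair kernel 1{(s(X_i) - s(X_j)) * (Y_i - Y_j) < 0} over all pairs i < j.
# Ranking thus reduces to classifying pairs, with Z_ij = sign(Y_i - Y_j)
# as the pair label.
from itertools import combinations

def empirical_ranking_risk(scores, labels):
    """Fraction of misordered pairs; pairs with tied labels count as correct."""
    n = len(scores)
    bad = sum(
        1 for i, j in combinations(range(n), 2)
        if (scores[i] - scores[j]) * (labels[i] - labels[j]) < 0
    )
    return 2.0 * bad / (n * (n - 1))

risk = empirical_ranking_risk([0.9, 0.5, 0.3, 0.4], [1, 0, 1, 0])
# risk == 1/3: 2 of the 6 pairs are misordered
```

Because the risk averages a kernel over dependent pairs rather than i.i.d. terms, the usual empirical-process tools for classification do not apply directly; this is why the abstract speaks of adapting classification theory to the U-statistic framework.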
Document type: Habilitation à diriger des recherches
Contributor: Nicolas Vayatis
Submitted on: Monday, December 18, 2006 - 10:53:43 AM
Last modification on: Wednesday, December 9, 2020 - 3:14:36 PM
Long-term archiving on: Thursday, September 20, 2012 - 4:11:16 PM



  • HAL Id: tel-00120738, version 1


Nicolas Vayatis. Statistical Approaches in Learning Theory: boosting and ranking. Mathematics [math]. Université Pierre et Marie Curie - Paris VI, 2006. ⟨tel-00120738⟩


