Sequential learning and stochastic optimization of convex functions

Abstract : In this thesis we study several machine learning problems that are all linked to the minimization of a noisy function, which will often be convex. Inspired by real-life applications, we focus on sequential learning problems, which consist in treating the data "on the fly", or in an online manner. The first part of this thesis is thus devoted to the study of three different sequential learning problems, all of which face the classical "exploration vs. exploitation" trade-off. Each of these problems consists of a situation in which a decision maker has to take actions in order to maximize a reward or to estimate a parameter under uncertainty, meaning that the rewards or the feedback of the possible actions are unknown and noisy. We show that all of these problems can be studied within the scope of stochastic convex optimization, and we propose and analyze algorithms to solve them. In the second part of this thesis we focus on the analysis of the Stochastic Gradient Descent algorithm, likely one of the most widely used stochastic optimization algorithms in machine learning. We provide an exhaustive analysis in the convex setting and in some non-convex situations by studying the associated continuous-time model, and obtain new optimal convergence results.
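To make the object of study concrete, here is a minimal sketch of Stochastic Gradient Descent on a noisy convex function, in the spirit of the setting the abstract describes. The objective f(x) = ½(x − 3)² and the 1/√t step-size schedule are illustrative choices, not taken from the thesis itself; the algorithm only ever sees the true gradient corrupted by Gaussian noise.

```python
import random

def sgd(grad_oracle, x0, steps, step0):
    """Stochastic Gradient Descent with a step0 / sqrt(t) step-size schedule.

    grad_oracle(x) returns an unbiased but noisy estimate of the gradient at x.
    """
    x = x0
    for t in range(1, steps + 1):
        g = grad_oracle(x)
        x = x - (step0 / t ** 0.5) * g
    return x

random.seed(0)

# Noisy gradient of f(x) = 0.5 * (x - 3)^2: the exact gradient (x - 3)
# plus standard Gaussian noise, so the oracle is unbiased.
noisy_grad = lambda x: (x - 3.0) + random.gauss(0.0, 1.0)

x_hat = sgd(noisy_grad, x0=0.0, steps=20_000, step0=0.5)
```

Despite the noise, the decaying step size lets the iterates concentrate around the true minimizer x* = 3; this averaging-out of gradient noise is exactly the phenomenon whose convergence rates the second part of the thesis quantifies.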
Document type :
Theses

https://tel.archives-ouvertes.fr/tel-03153285
Contributor : Abes Star
Submitted on : Friday, February 26, 2021 - 11:36:22 AM
Last modification on : Friday, April 16, 2021 - 3:31:20 AM

File

94087_FONTAINE_2020_archivage....
Version validated by the jury (STAR)

Identifiers

• HAL Id : tel-03153285, version 1

Citation

Xavier Fontaine. Sequential learning and stochastic optimization of convex functions. General Mathematics [math.GM]. Université Paris-Saclay, 2020. English. ⟨NNT : 2020UPASM024⟩. ⟨tel-03153285⟩
