
Optimisation pour l'apprentissage et apprentissage pour l'optimisation (Optimization for Learning and Learning for Optimization)

Abstract: In many industrial problems, a single evaluation of the objective function is computationally expensive and its gradient may be unavailable. For these reasons, it is useful to build a surrogate model, fast to evaluate and easily differentiable, that approximates the studied problem. By introducing several improvements to the learning process, we show that neural networks can meet these requirements. In particular, where classical methods introduce oscillations when approximating a smooth function, our method gives a satisfactory result. Better still, our method can approximate oscillating functions or mappings by a smooth model. We obtain these results through several regularization techniques: the Tikhonov method, early stopping, control of the model size, and finally the Gauss-Newton (GN) method. This regularization approach even makes it possible to avoid local minima (which pose serious problems for classical methods), by increasing the size of the model to ease the learning process and then decreasing it for regularization. For large-scale problems, the Gauss-Newton method is very demanding in memory. However, by combining the adjoint and direct modes of automatic differentiation, we propose a "zero-memory" implementation that allows us to apply this method. This process, presented here in the framework of neural networks, can be adapted to any inverse problem. In the recent but rich literature on the subject, the functions defined by a classical neural network are optimized by very expensive global techniques. In our case, we exploit the properties of the resulting model (regularity, evaluation speed, and gradient availability at negligible extra cost) to use efficient optimization methods. We illustrate the relevance of the proposed approach on several academic examples, well known for their difficulty, and on examples from the automotive industry and petroleum engineering.
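The "zero-memory" idea mentioned in the abstract rests on never forming the Jacobian explicitly: the Gauss-Newton normal equations can be solved with conjugate gradient using only Jacobian-vector products (direct mode of automatic differentiation) and vector-Jacobian products (adjoint mode). Below is a minimal NumPy sketch of that matrix-free scheme on a toy two-parameter model; the model, data, and function names are illustrative assumptions, not the thesis's code, and the `jvp`/`vjp` routines are written analytically here to stand in for what AD would produce.

```python
import numpy as np

# Toy zero-residual fitting problem: y = a * tanh(b * x).
# (Illustrative example only; the thesis applies this to neural networks.)
x = np.linspace(-2.0, 2.0, 50)
theta_true = np.array([1.5, 0.8])
y = theta_true[0] * np.tanh(theta_true[1] * x)

def residual(theta):
    a, b = theta
    return a * np.tanh(b * x) - y

def jvp(theta, v):
    # Direct (forward) AD mode: returns J(theta) @ v without forming J.
    a, b = theta
    t = np.tanh(b * x)
    return v[0] * t + v[1] * a * (1.0 - t**2) * x

def vjp(theta, w):
    # Adjoint (reverse) AD mode: returns J(theta).T @ w without forming J.
    a, b = theta
    t = np.tanh(b * x)
    return np.array([w @ t, w @ (a * (1.0 - t**2) * x)])

def gauss_newton_step(theta, lam=1e-6, cg_iters=50):
    # Solve (J^T J + lam*I) d = -J^T r by conjugate gradient.
    # J is touched only through jvp/vjp calls: no Jacobian is stored.
    r = residual(theta)
    g = -vjp(theta, r)
    d = np.zeros_like(theta)
    res = g.copy()
    p = res.copy()
    for _ in range(cg_iters):
        if res @ res < 1e-30:          # already converged
            break
        Ap = vjp(theta, jvp(theta, p)) + lam * p
        alpha = (res @ res) / (p @ Ap)
        d = d + alpha * p
        new_res = res - alpha * Ap
        beta = (new_res @ new_res) / (res @ res)
        res = new_res
        p = res + beta * p
    return d

theta = np.array([0.5, 0.5])
for _ in range(20):
    theta = theta + gauss_newton_step(theta)

print(np.round(theta, 3))
```

The same structure scales to large models: the memory cost is that of a few parameter-sized vectors plus one forward/adjoint sweep, rather than the full Jacobian, which is the point of the "zero-memory" implementation.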
Contributor: Milagros van Grieken
Submitted on: Monday, September 12, 2005 - 11:26:13 AM
Last modification on: Friday, January 10, 2020 - 9:08:06 PM
Long-term archiving on: Tuesday, September 7, 2010 - 5:21:13 PM


  • HAL Id: tel-00010106, version 1


Milagros van Grieken. Optimisation pour l'apprentissage et apprentissage pour l'optimisation. Mathématiques [math]. Université Paul Sabatier - Toulouse III, 2004. Français. ⟨tel-00010106⟩


