
Unsupervised Pretraining of State Representations in a Rewardless Environment

Abstract: This thesis seeks to extend the capabilities of state representation learning (SRL) to help scale deep reinforcement learning (DRL) algorithms to continuous control tasks with high-dimensional sensory observations (such as images). SRL improves the performance of DRL by providing it with better inputs than the embeddings learned from scratch by end-to-end strategies. Specifically, this thesis addresses the problem of state estimation through deep unsupervised pretraining of state representations without reward. These representations must satisfy certain properties for the correct application of bootstrapping and of the other decision-making mechanisms common to reinforcement learning, properties we seek to achieve through models pretrained with the two SRL algorithms proposed in this thesis.
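The pipeline the abstract describes — pretraining a state encoder on reward-free observations, then reusing the learned compact states as inputs to a DRL policy — can be sketched minimally. The linear autoencoder below is an illustrative stand-in, not one of the thesis's two SRL algorithms; all names, dimensions, and the random-observation "environment" are assumptions for the sketch.

```python
# Hypothetical sketch: reward-free pretraining of a state representation.
# A linear autoencoder is fit to raw observations (no reward signal is used),
# and its encoder then serves as a fixed feature extractor for a DRL agent.
import numpy as np

rng = np.random.default_rng(0)
obs_dim, state_dim, n_samples = 32, 4, 512

# Stand-in for high-dimensional observations collected without reward
observations = rng.normal(size=(n_samples, obs_dim))

# Encoder W_e and decoder W_d, trained by plain gradient descent
W_e = rng.normal(scale=0.1, size=(obs_dim, state_dim))
W_d = rng.normal(scale=0.1, size=(state_dim, obs_dim))
lr = 1e-2

def reconstruction_loss(W_e, W_d, X):
    return np.mean((X @ W_e @ W_d - X) ** 2)

initial = reconstruction_loss(W_e, W_d, observations)
for _ in range(200):
    Z = observations @ W_e           # candidate state representation
    R = Z @ W_d - observations       # reconstruction residual
    grad_Wd = 2 * Z.T @ R / n_samples
    grad_We = 2 * observations.T @ (R @ W_d.T) / n_samples
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We
final = reconstruction_loss(W_e, W_d, observations)

# The pretrained encoder maps observations to compact states that a
# downstream DRL policy would consume instead of raw pixels/sensors.
states = observations @ W_e
```

In the thesis's setting the encoder would be a deep network and the pretraining objective more structured, but the division of labor is the same: representation learning happens first and without reward, and the RL algorithm only ever sees the low-dimensional states.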
Contributor: Astrid Merckling
Submitted on: Tuesday, February 8, 2022 - 5:23:09 PM
Last modification on: Tuesday, May 31, 2022 - 8:42:13 PM
Long-term archiving on: Monday, May 9, 2022 - 8:20:23 PM


Files produced by the author(s)


  • HAL Id: tel-03562230, version 1


Astrid Merckling. Unsupervised Pretraining of State Representations in a Rewardless Environment. Machine Learning [cs.LG]. ISIR, Université Pierre et Marie Curie UMR CNRS 7222, 2021. English. ⟨tel-03562230⟩


