Micro-Data Reinforcement Learning for Adaptive Robots

Konstantinos Chatzilygeroudis 1
1 LARSEN - Lifelong Autonomy and interaction skills for Robots in a Sensing ENvironment
Inria Nancy - Grand Est, LORIA - AIS - Department of Complex Systems, Artificial Intelligence & Robotics
Abstract: Robots have to face the real world, in which trying something might take seconds, hours, or even days. Unfortunately, current state-of-the-art reinforcement learning algorithms (e.g., deep reinforcement learning) require large amounts of interaction time to find effective policies. In this thesis, we explored approaches that tackle the challenge of learning by trial-and-error in a few minutes on physical robots. We call this challenge "micro-data reinforcement learning".

In our first contribution, we introduced a novel learning algorithm called "Reset-free Trial-and-Error" that allows complex robots to quickly recover from unknown circumstances (e.g., damage or a different terrain) while completing their tasks and taking the environment into account; in particular, a physically damaged hexapod robot recovered most of its locomotion abilities in an environment with obstacles, without any human intervention.

In our second contribution, we introduced a novel model-based reinforcement learning algorithm, called Black-DROPS, that: (1) does not impose any constraint on the reward function or the policy (they are treated as black boxes), (2) is as data-efficient as the state-of-the-art algorithm for data-efficient RL in robotics, and (3) is as fast as (or faster than) analytical approaches when several cores are available. We additionally proposed Multi-DEX, a model-based policy search approach that takes inspiration from novelty-based ideas and effectively solved several sparse-reward scenarios.

In our third contribution, we introduced a new model learning procedure in Black-DROPS (which we call GP-MI) that leverages parameterized black-box priors to scale up to high-dimensional systems; for instance, it found high-performing walking policies for a physically damaged hexapod robot (48D state space and 18D action space) in less than 1 minute of interaction time.
Finally, in the last part of the thesis, we explored a few ideas on how to incorporate safety constraints and robustness, and how to leverage multiple priors, in Bayesian optimization in order to tackle the micro-data reinforcement learning challenge. Throughout this thesis, our goal was to design algorithms that work on physical robots, not only in simulation. Consequently, all the proposed approaches were evaluated on at least one physical robot. Overall, this thesis aimed at providing methods and algorithms that allow physical robots to be more autonomous and to learn in a handful of trials.
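The core loop behind the Black-DROPS family of methods described above can be sketched in a few lines: learn a probabilistic model of the dynamics from a small batch of real interactions, evaluate a parameterized policy by rolling it out through the learned model, and optimize the policy parameters with a gradient-free optimizer. The following is a minimal numpy sketch on a hypothetical 1D toy system; all names and the dynamics are illustrative, and simple random search stands in for the CMA-ES-style black-box optimizer used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground-truth dynamics (unknown to the learner): x' = x + 0.1*u + noise.
def true_step(x, u):
    return x + 0.1 * u + 0.01 * rng.standard_normal()

# --- 1. Collect a small batch of random interaction data (micro-data). ---
X, y = [], []
x = 1.0
for _ in range(40):
    u = rng.uniform(-2, 2)
    x_next = true_step(x, u)
    X.append([x, u]); y.append(x_next - x)   # learn the state *change*
    x = x_next if abs(x_next) < 3 else 1.0
X, y = np.array(X), np.array(y)

# --- 2. Gaussian-process regression on the dynamics (RBF kernel). ---
def rbf(A, B, ell=0.5):
    d = ((A[:, None, :] - B[None, :, :]) / ell) ** 2
    return np.exp(-0.5 * d.sum(-1))

K = rbf(X, X) + 1e-4 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def model_step(x, u):
    k = rbf(np.array([[x, u]]), X)
    return x + float(k @ alpha)              # GP posterior mean of the next state

# --- 3. Policy and reward are black boxes: evaluated only via rollouts. ---
def policy(x, theta):
    return np.clip(-theta * x, -2, 2)

def expected_return(theta, horizon=20):
    x, ret = 1.0, 0.0
    for _ in range(horizon):
        x = model_step(x, policy(x, theta))
        ret += -x ** 2                       # black-box reward: drive x to 0
    return ret

# --- 4. Gradient-free policy search over the parameter theta. ---
thetas = rng.uniform(0.0, 10.0, size=64)
best = max(thetas, key=expected_return)
```

Because the policy and reward are only ever queried, not differentiated, neither needs an analytical form, which is the black-box property item (1) of the abstract refers to.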
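The GP-MI idea (parameterized black-box priors) can likewise be sketched: instead of learning the dynamics from scratch, a parameterized "simulator" serves as the GP's mean function, its parameter is fitted to the few real observations, and the GP only has to model the residual. The code below is an illustrative numpy sketch under these assumptions; the toy simulator, the grid search over the prior's parameter, and all names are hypothetical stand-ins for the likelihood-based procedure in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameterized black-box prior: a crude simulator of the
# dynamics whose parameter p (e.g. a damage coefficient) is unknown.
def prior_model(x, u, p):
    return x + p * u

# Ground truth the robot actually experiences: p = 0.07, plus a bias
# term the simulator cannot express.
def real_step(x, u):
    return x + 0.07 * u + 0.02 * np.sin(x)

# A handful of real interactions (micro-data regime).
X = rng.uniform(-1, 1, size=(15, 2))                 # columns: state x, action u
y = np.array([real_step(x, u) for x, u in X])

# --- Step 1: fit the prior's parameter to the observed transitions
# (grid search stands in for the model-optimization step of GP-MI). ---
candidates = np.linspace(0.0, 0.2, 201)
def sse(p):
    return np.sum((y - prior_model(X[:, 0], X[:, 1], p)) ** 2)
p_hat = min(candidates, key=sse)

# --- Step 2: GP on the residuals, with the tuned prior as mean function. ---
resid = y - prior_model(X[:, 0], X[:, 1], p_hat)
def rbf(A, B, ell=0.4):
    d = ((A[:, None, :] - B[None, :, :]) / ell) ** 2
    return np.exp(-0.5 * d.sum(-1))
K = rbf(X, X) + 1e-6 * np.eye(len(X))
alpha = np.linalg.solve(K, resid)

def predict(x, u):
    q = np.array([[x, u]])
    return prior_model(x, u, p_hat) + float(rbf(q, X) @ alpha)
```

Since the GP only corrects the simulator's residual error, far fewer data points are needed than for learning the full dynamics, which is what lets the approach scale to high-dimensional systems such as the 48D/18D hexapod.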
Cited literature: 292 references

https://tel.archives-ouvertes.fr/tel-01966770
Contributor: Konstantinos Chatzilygeroudis
Submitted on: Saturday, December 29, 2018 - 4:20:31 PM
Last modification on: Wednesday, April 10, 2019 - 3:04:37 PM
Long-term archiving on: Saturday, March 30, 2019 - 1:46:19 PM

File

thesis.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: tel-01966770, version 1

Citation

Konstantinos Chatzilygeroudis. Micro-Data Reinforcement Learning for Adaptive Robots. Robotics [cs.RO]. Université de Lorraine, 2018. English. ⟨NNT : 2018LORR0276⟩. ⟨tel-01966770⟩

Metrics

Record views: 157
File downloads: 1084