Bayesian Decomposition of Multi-Modal Dynamical Systems for Reinforcement Learning

Abstract

In this paper, we present a model-based reinforcement learning system where the transition model is treated in a Bayesian manner. The approach naturally lends itself to exploiting expert knowledge by introducing priors that impose structure on the underlying learning task. The additional information introduced into the system means that we can learn from small amounts of data, recover an interpretable model and, importantly, provide predictions with an associated uncertainty. To show the benefits of the approach, we use a challenging data set in which the dynamics of the underlying system exhibit both operational phase shifts and heteroscedastic noise. Comparing our model to NFQ and BNN+LV, we show how our approach yields human-interpretable insight about the underlying dynamics while also increasing data efficiency.
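
To make the decomposition idea concrete, the following is a minimal, hypothetical sketch and not the model used in the paper: one Gaussian process expert per operational mode and a simple classifier standing in for the Bayesian data-association step, with predictions combined by moment matching. The toy dynamics, the scikit-learn GPs and the logistic-regression assignment model are all illustrative assumptions; in the paper, the assignments are inferred jointly with the experts, whereas the classifier here is only a placeholder to keep the sketch self-contained.

# Hypothetical sketch of a mode-decomposed transition model (not the paper's
# implementation): one GP expert per operational mode, a classifier as a
# stand-in for Bayesian data association, and moment-matched predictions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy 1D system with two operational phases and heteroscedastic noise:
# mode 0 (x < 0) drifts slowly with low noise, mode 1 (x >= 0) decays with high noise.
x = rng.uniform(-2.0, 2.0, size=200)
mode = (x >= 0).astype(int)
noise = np.where(mode == 0, 0.01, 0.2) * rng.standard_normal(x.shape)
x_next = np.where(mode == 0, x + 0.1, 0.5 * x) + noise
X = x.reshape(-1, 1)

# One GP expert per mode; the WhiteKernel lets each expert learn its own noise
# level, which is how heteroscedasticity appears in this simplified picture.
experts = []
for k in range(2):
    idx = mode == k
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X[idx], x_next[idx])
    experts.append(gp)

# The classifier plays the role of the assignment model p(mode | state).
assign = LogisticRegression().fit(X, mode)

def predict(x_query):
    # Mixture prediction: weight each expert by its assignment probability and
    # moment-match the mixture to a single mean and standard deviation.
    Xq = np.atleast_2d(x_query)
    probs = assign.predict_proba(Xq)                                   # (N, K)
    means, stds = zip(*(gp.predict(Xq, return_std=True) for gp in experts))
    means, stds = np.stack(means, axis=1), np.stack(stds, axis=1)      # (N, K)
    mean = (probs * means).sum(axis=1)
    var = (probs * (stds ** 2 + means ** 2)).sum(axis=1) - mean ** 2
    return mean, np.sqrt(var)

mean, std = predict([[-1.0], [1.0]])
print(mean, std)  # low predictive uncertainty in mode 0, high in mode 1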

Publication
Neurocomputing


Markus Kaiser
PhD candidate in Bayesian Machine Learning

My research interests include hierarchical Bayesian modelling, Gaussian processes, and scalable Bayesian inference.