Interpretable Dynamics Models for Data-Efficient Reinforcement Learning


In this paper, we present a Bayesian view on model-based reinforcement learning. We use expert knowledge to impose structure on the transition model and present an efficient learning scheme based on variational inference. We apply this scheme to a heteroskedastic and bimodal benchmark problem, compare our results to Neural Fitted Q-iteration (NFQ), and show how our approach yields human-interpretable insight about the underlying dynamics while also improving data efficiency.
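To give a flavour of the variational machinery involved, the sketch below fits a bimodal set of one-dimensional transition samples with a two-component Bayesian mixture using coordinate-ascent variational inference. This is a minimal illustration, not the paper's model: the synthetic data, the two-component mixture with known unit noise, and the simple symmetry-breaking initialisation are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic bimodal "transition" data: next-state deltas drawn from two modes.
x = np.concatenate([rng.normal(-3.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

K, sigma0_sq = 2, 10.0  # number of mixture components, prior variance on means

# Variational factors: q(mu_k) = N(m_k, s_sq_k), q(c_i) = Cat(phi_i).
m = np.array([-1.0, 1.0])  # simple symmetry-breaking initialisation
s_sq = np.ones(K)

for _ in range(100):
    # Update the assignment probabilities phi (responsibilities).
    log_phi = x[:, None] * m[None, :] - 0.5 * (s_sq + m**2)[None, :]
    log_phi -= log_phi.max(axis=1, keepdims=True)  # numerical stability
    phi = np.exp(log_phi)
    phi /= phi.sum(axis=1, keepdims=True)

    # Update the Gaussian factors over the component means.
    denom = 1.0 / sigma0_sq + phi.sum(axis=0)
    m = (phi * x[:, None]).sum(axis=0) / denom
    s_sq = 1.0 / denom

print(np.sort(m))  # the two recovered modes, near -3 and 3
```

Each coordinate-ascent step optimally updates one variational factor while the others are held fixed, which monotonically increases the evidence lower bound; the posterior over the component means recovers the two modes of the data.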

Markus Kaiser
Research Scientist

Research Associate at the University of Cambridge and Research Scientist at Siemens AG. I am interested in scalable Bayesian machine learning and Gaussian processes.