Inherently interpretable Q-Learning
Master Thesis
Author
Koumentis, Ioannis (Κουμέντης, Ιωάννης)
Date
2022-06
Supervisor
Vouros, George (Βούρος, Γεώργιος)
Keywords
Q-Learning ; Interpretability ; Transparency ; Human-AI collaboration ; Reinforcement learning ; Explainability ; Stochastic Gradient Trees
Abstract
Reinforcement Learning algorithms, especially those that utilize Deep Neural Networks (DNNs), have achieved significant and often impressive results in solving problems across a broad range of applications. Since most implementations and model architectures are based on Neural Networks (NNs), which are non-interpretable by design, there is a growing need for the development of Interpretable Reinforcement Learning methods, aimed at making the algorithm's decisions easier to trace and at increasing trust and cooperation between intelligent agents and human users. A promising direction towards interpretable methods is the use of inherently interpretable models such as Decision Trees.
This thesis investigates interpretability in Reinforcement Learning by introducing the Stochastic Gradient Trees algorithm as the baseline for developing intelligent agents. To that end, we propose model designs and training methods in which agents based on Stochastic Gradient Trees perform Q-Learning and learn effective policies in several virtual environments. Moreover, the interpretable methods and their non-interpretable counterparts are compared under similar settings to comparatively study their efficacy in problem solving. Additionally, experiments have been conducted in a Human-AI collaboration setting, towards creating a transparent method that utilizes visual signals to improve human-agent collaboration in problem solving.
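For readers unfamiliar with the combination, the sketch below illustrates one common way Q-Learning can be paired with incremental tree regressors: one regressor per action approximates Q(s, a), and each observed transition yields a regression target r + γ max_a' Q(s', a'). This is a minimal illustration only, not the thesis's actual design: the SGTRegressor placeholder and the simplified environment interface (reset/step returning state, reward, done) are assumptions standing in for a real Stochastic Gradient Trees implementation and environment.

```python
import random

class SGTRegressor:
    """Placeholder for an incremental tree regressor such as a Stochastic
    Gradient Tree. Here it only tracks a running mean of its targets so the
    sketch is runnable; a real SGT would grow and refine a tree online."""
    def __init__(self):
        self.value, self.count = 0.0, 0

    def predict(self, state):
        return self.value

    def update(self, state, target):
        self.count += 1
        self.value += (target - self.value) / self.count


def q_learning_episode(env, models, gamma=0.99, epsilon=0.1):
    """Run one Q-Learning episode, with Q(s, a) approximated by models[a].
    Assumes env.reset() -> state and env.step(a) -> (state, reward, done)."""
    state, done, total_reward = env.reset(), False, 0.0
    while not done:
        # Epsilon-greedy action selection over the per-action tree estimates.
        if random.random() < epsilon:
            action = random.randrange(len(models))
        else:
            action = max(range(len(models)),
                         key=lambda a: models[a].predict(state))
        next_state, reward, done = env.step(action)
        # Q-Learning target: r + gamma * max_a' Q(s', a').
        target = reward
        if not done:
            target += gamma * max(m.predict(next_state) for m in models)
        models[action].update(state, target)  # incremental tree update
        state, total_reward = next_state, total_reward + reward
    return total_reward
```

Because each action's Q-value is carried by a single tree model, the resulting policy can be inspected by reading the learned splits, which is the interpretability property the thesis builds on.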