Solving long-horizon tasks via imitation and reinforcement learning

Master's Thesis
Author
Lappa, Athanasia
Date
2024-02
Advisor
Vouros, George
Keywords
Imitation learning ; Reinforcement learning ; Behavior cloning ; Trust region policy optimization ; Relay policy learning ; Deep neural network ; Machine learning
Abstract
This thesis explores the use of the Relay Policy Learning (RPL) algorithm proposed by Gupta et al. [1] to model trajectory prediction in an aviation environment. RPL is a two-phase approach combining Hierarchical Imitation Learning (HIL) with Hierarchical Reinforcement Learning (HRL). The purpose of this thesis is to learn a policy through RPL that predicts aircraft trajectories, by learning goal-conditioned hierarchical policies from unstructured and unsegmented demonstrations. The thesis utilizes a dataset of long aircraft trajectories. These are pre-processed to correct imperfections, and low-level and high-level datasets are created from the demonstrations through the relay data relabeling augmentation of the RPL algorithm. The created datasets are then used to learn hierarchical Imitation Learning (IL) policies without explicit goal labelling, using the goal-conditioned Behavior Cloning (BC) method. This provides a policy initialization for subsequent relay reinforcement fine-tuning using a variant of the on-policy Trust Region Policy Optimization (TRPO) algorithm proposed by Schulman et al. [4]. The implemented agent is then tested and evaluated. The thesis concludes with a presentation of results and proposals for further work towards extending the RPL algorithm to work with off-policy RL algorithms.
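
As context for the augmentation step named in the abstract, the following is a minimal Python sketch of relay data relabeling in the spirit of Gupta et al. [1]. The function name, the fixed window sizes, and the enumeration of every goal inside each window (the original work samples window lengths) are illustrative assumptions, not the thesis implementation.

import numpy as np

def relay_data_relabel(states, actions, low_window=30, high_window=300):
    """Simplified relay data relabeling: slide low- and high-level windows
    over one unsegmented demonstration and treat states reached later in
    the window as goals for earlier time steps.

    states  -- array of shape (T, state_dim), e.g. position/altitude/speed
    actions -- array of shape (T, action_dim)
    Returns (low_level, high_level) lists of goal-conditioned tuples.
    """
    T = len(states)
    low_level, high_level = [], []
    for t in range(T - 1):
        # Low-level tuples (state, goal, action): the goal is any state up
        # to low_window steps ahead; the label is the action taken at t.
        for g in range(t + 1, min(t + low_window, T)):
            low_level.append((states[t], states[g], actions[t]))
        # High-level tuples (state, goal, subgoal): the goal is any state
        # up to high_window steps ahead; the subgoal label is the state
        # one low-level window from t (or the goal itself, if closer).
        for g in range(t + 1, min(t + high_window, T)):
            high_level.append((states[t], states[g],
                               states[min(t + low_window, g)]))
    return low_level, high_level

# Hypothetical usage on one pre-processed trajectory: the low-level tuples
# train a goal-conditioned BC policy pi_lo(a | s, g) by regressing the
# concatenated (state, goal) input onto the demonstrated action, while the
# high-level tuples train a subgoal predictor pi_hi(subgoal | s, g).
states = np.random.randn(500, 4)
actions = np.random.randn(500, 2)
low_data, high_data = relay_data_relabel(states, actions)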