Supporting human-AI co-learning through emphasizing recent experiences

Master Thesis
Author
Kalogeropoulos, Dimitris
Date
2025-06
Advisor
Dagioglou, Maria
Abstract
Human-AI collaboration is a critical aspect of interactive systems, particularly in tasks that require fluid and adaptive teamwork. However, achieving seamless collaboration between humans and agents toward a specific goal is both time-consuming and demanding. While imitation learning, often considered a subset of transfer learning, has been used to accelerate agent training via expert demonstrations, such data is typically expensive and difficult to acquire in many domains. This thesis investigates the impact of Emphasized Recent Experience (ERE), a temporal prioritization strategy for the replay buffer, on the collaborative dynamics of a Deep Reinforcement Learning (DRL) agent trained with a discrete Soft Actor-Critic (SAC) algorithm in a human-agent collaboration task. The study involved three groups of five participants each, employing different training strategies: Group 1 used the basic SAC algorithm, Group 2 interacted with a SAC agent augmented with ERE, and Group 3 utilized Transfer Learning (TL) through expert demonstration data. The results indicate that incorporating ERE significantly improves both the performance and consistency of the SAC agent while maintaining the time-efficiency of the TL method. Even in the later stages of the game, ERE provides a promising alternative to TL in terms of the scores achieved by naive users. Additionally, subjective evaluations from the participants reflect a better overall game experience and a stronger sense of collaboration compared to the baseline SAC.
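For reference, ERE replaces uniform replay sampling with a recency-biased window: for the k-th of K gradient updates after an episode, the agent samples uniformly from only the c_k most recent transitions, with c_k shrinking geometrically. Below is a minimal sketch of this sampling rule following Wang and Ross (2019); the function name `ere_sample_indices`, the non-circular buffer layout, and the defaults for `eta` and `c_min` are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def ere_sample_indices(buffer_size, batch_size, k, K, eta=0.996, c_min=5000):
    """Sketch of Emphasized Recent Experience (ERE) sampling.

    For the k-th of K updates, draw a batch uniformly from the
    c_k most recent transitions, where
        c_k = N * eta^(k * 1000 / K),
    clipped to the range [c_min, N]. Later updates in the sequence
    thus emphasize progressively newer data.
    """
    c_k = int(buffer_size * eta ** (k * 1000.0 / K))
    c_k = max(min(c_k, buffer_size), min(c_min, buffer_size))
    # Assumes transitions are stored oldest -> newest without wraparound,
    # so the most recent c_k transitions occupy the last c_k slots.
    return np.random.randint(buffer_size - c_k, buffer_size, size=batch_size)
```

Because c_k decays toward c_min across the K updates, early updates still see old data (guarding against forgetting) while later updates concentrate on the freshest interactions, which is what gives ERE its recency emphasis.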

