Analyzing long-term memory in sequential models through internal state dynamics

Master's Thesis
Author
Romesis, Christoforos
Ρωμέσης, Χριστόφορος
Date
2026-02
Keywords
Long-term memory; Sequential models; Dynamical systems

Abstract
This thesis investigates the mechanisms governing long-term memory retention in sequential neural models, adopting a dynamical systems perspective on recurrent architectures. The study is conducted on fault-driven time-series data generated from a high-fidelity mathematical model of an aircraft engine; the scenarios are designed to include subtle intermediate events that are essential for correct fault discrimination, even though the final observable behavior is identical across fault classes.
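The abstract does not reproduce the engine model itself; the following toy generator is only a minimal sketch of the data property described above. The function make_sequence and all parameter values are illustrative assumptions, not the thesis's simulator: both classes decay to the same final level, and only one carries a brief mid-sequence transient.

import numpy as np

def make_sequence(fault_class, length=200, rng=None):
    # Toy stand-in for the engine simulator (hypothetical, not the thesis's model).
    # Both fault classes decay to the same final level; only class 1 carries a
    # brief intermediate transient that has vanished by the end of the sequence.
    rng = rng if rng is not None else np.random.default_rng()
    t = np.arange(length)
    base = np.exp(-t / 50.0)  # identical final observable behavior
    if fault_class == 1:
        base = base + 0.3 * np.exp(-((t - 60.0) ** 2) / 20.0)  # subtle intermediate event
    return base + 0.02 * rng.standard_normal(length)

rng = np.random.default_rng(0)
x0, x1 = make_sequence(0, rng=rng), make_sequence(1, rng=rng)
print(np.abs(x0[-1] - x1[-1]))  # endpoints differ only by observation noise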
Recurrent models are trained and analyzed systematically beyond output-level accuracy, with emphasis on their internal state dynamics. While classification performance remains consistently high, memory retention exhibits a non-monotonic dependence on sequence length, revealing alternating regimes of successful and degraded historical encoding.
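One way to expose such a dependence, sketched below under the assumption of a PyTorch GRU classifier trained on data like the toy sequences above, is to evaluate accuracy over a grid of sequence lengths; make_batch is a hypothetical batch generator, and dips in the resulting curve would correspond to the degraded regimes.

import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    # Minimal recurrent classifier standing in for the thesis's models.
    def __init__(self, hidden=32, classes=2):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, x):             # x: (batch, time, 1)
        out, _ = self.gru(x)
        return self.head(out[:, -1])  # classify from the final hidden state

@torch.no_grad()
def retention_curve(model, make_batch, lengths):
    # Accuracy as a function of sequence length; non-monotonic dips indicate
    # alternating regimes of successful and degraded historical encoding.
    model.eval()
    curve = {}
    for L in lengths:
        x, y = make_batch(L)  # hypothetical generator: (batch, L, 1), (batch,)
        preds = model(x).argmax(dim=-1)
        curve[L] = (preds == y).float().mean().item()
    return curve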
Dimensionality reduction of internal state trajectories exposes structured geometric organization in the learned state space, including attractor basins and rotational dynamics. The results suggest that long-term memory retention is not adequately explained by classical vanishing or exploding gradient arguments alone. Instead, it emerges as a geometric property of the internal dynamics, shaped by phase-dependent recurrent interactions and the effective training horizon.
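The abstract does not name the reduction method, so PCA is an assumption here; a common way to obtain such a view is to project the hidden-state sequence onto principal components fitted jointly across classes. The sketch reuses GRUClassifier and the toy sequences x0, x1 from above.

import numpy as np
import torch
from sklearn.decomposition import PCA

@torch.no_grad()
def hidden_trajectory(model, x):
    # Full hidden-state sequence for one input of shape (1, T, 1).
    out, _ = model.gru(x)          # (1, T, hidden)
    return out.squeeze(0).numpy()  # (T, hidden)

model = GRUClassifier()  # untrained here; use the trained model in practice
to_input = lambda a: torch.tensor(a, dtype=torch.float32).view(1, -1, 1)
traj0 = hidden_trajectory(model, to_input(x0))
traj1 = hidden_trajectory(model, to_input(x1))

# Fit PCA on both trajectories jointly so the classes share one coordinate frame;
# attractor basins appear as point clusters and rotational dynamics as spirals.
pca = PCA(n_components=2).fit(np.concatenate([traj0, traj1]))
z0, z1 = pca.transform(traj0), pca.transform(traj1)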
The thesis introduces the concept of memory crackpoints, referring to critical dynamical transitions where history-dependent structure collapses, leading to qualitative memory loss without apparent degradation in short-term predictive performance. These findings highlight the importance of internal dynamical analysis for diagnosing and improving memory mechanisms in sequential neural models.
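The abstract gives no formal definition of a crackpoint, so the following operationalization is only an assumption: measure the separation between class-conditioned hidden trajectories over time and report the first step after which it permanently collapses.

import numpy as np

def crackpoint(traj_a, traj_b, tol=0.05):
    # traj_a, traj_b: (T, hidden) class-conditioned hidden trajectories.
    # Returns the first step after which their normalized separation stays
    # below tol, i.e. history-dependent structure has collapsed; None if
    # separation persists to the end of the sequence.
    sep = np.linalg.norm(traj_a - traj_b, axis=1)
    sep = sep / (sep.max() + 1e-12)  # scale-free threshold
    for t in range(len(sep)):
        if (sep[t:] < tol).all():
            return t
    return None

print(crackpoint(traj0, traj1))  # reuses the trajectories from the PCA sketch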

