Explainable artificial intelligence (XAI) for time series analysis and predictive maintenance

Keywords
Explainable artificial intelligence ; XAI ; Time series analysis ; Predictive maintenance

Abstract
This thesis explores the use of Explainable Artificial Intelligence (XAI) in time series analysis and predictive maintenance, aiming to provide in-depth insights and pragmatic solutions for challenges in these areas, with an emphasis on understandable and interpretable results. A significant part of this work is the development of a prototype application for monitoring and predictive maintenance of main engines in commercial ships. The application is designed for the early detection of potential faults, leveraging time series data from various engine sensors. This proactive approach helps prevent costly and time-consuming repairs, ensuring uninterrupted and safe operation of ships. Like other industrial equipment, ships are fitted with sensors that gather data about their overall functioning and the condition of their components. When analyzed with AI techniques, this data yields insights into potential engine faults, which in turn guide decisions such as ordering spare parts or rerouting ships for maintenance.

The thesis introduces a two-tiered approach to predictive maintenance, applying machine and deep learning techniques to sensor data to anticipate the condition of specific parts of a ship's engine. The approach comprises a set of models analyzed and applied specifically to the maritime industry, as well as an ensemble of these models for enhanced predictive accuracy. Its effectiveness is demonstrated on real-world data from a maritime company.

Beyond conventional data mining and preprocessing in ML, the thesis stresses the importance of model trust and interpretability, especially in real-life decision-making scenarios. XAI is central in this regard, advocating clearer explanations of ML model decisions than basic performance metrics provide. The thesis investigates various explanation techniques, both embedded in models and applied after training, to enhance the comprehension and application of results. A Deep Neural Network (DNN) with a teacher-student architecture (a distillation model) is presented, offering interpretability in time series classification tasks. The method transforms time series into 2D plots and uses image-highlighting methods such as LIME and Grad-CAM to make predictions understandable. While this approach improves accuracy, it comes at the cost of longer training time. Finally, the thesis addresses the challenges of imbalanced datasets, underscoring the need for targeted model training to improve accuracy, particularly for minority classes, with methods such as SMOTE and ADASYN employed to mitigate these challenges.
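As a rough illustration of the ensemble component of the two-tiered approach, the following sketch combines several scikit-learn classifiers with soft voting over windowed sensor features. The models, feature dimensions, and 90/10 class ratio are illustrative placeholders, not the thesis's actual configuration.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for windowed engine-sensor features (90/10 class ratio).
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Soft voting averages the predicted probabilities of the base models.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                              random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("held-out accuracy:", ensemble.score(X_test, y_test))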
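The teacher-student (distillation) idea mentioned above can be sketched as follows: a compact student is trained to match both the ground-truth labels and the teacher's temperature-softened outputs. This is a minimal PyTorch sketch with assumed toy architectures and an assumed temperature, not the thesis's exact model.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy teacher (large) and student (small) over 32 windowed sensor features.
teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 2))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 4.0  # assumed softening temperature

x = torch.randn(64, 32)          # toy batch of sensor features
y = torch.randint(0, 2, (64,))   # toy fault labels

with torch.no_grad():
    soft_targets = F.softmax(teacher(x) / T, dim=1)  # teacher's soft output

logits = student(x)
# Blend the hard-label loss with a KL term that matches the teacher's
# temperature-softened distribution (scaled by T^2, as is conventional).
loss = 0.5 * F.cross_entropy(logits, y) + 0.5 * (T * T) * F.kl_div(
    F.log_softmax(logits / T, dim=1), soft_targets, reduction="batchmean")
optimizer.zero_grad()
loss.backward()
optimizer.step()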
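The "time series as 2D plot" explanation pipeline can be sketched like this: render the series as an image, then ask an image explainer such as LIME which regions of the plot drove the prediction. The rendering function and the stand-in classifier below are assumptions for illustration; in the thesis the classifier would be the trained DNN. Requires the lime and scikit-image packages.

import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from lime import lime_image

def series_to_image(series, dpi=32, inches=4):
    """Render a 1D time series as an RGB line plot, i.e. a 2D image."""
    fig, ax = plt.subplots(figsize=(inches, inches), dpi=dpi)
    ax.plot(series, color="black", linewidth=2)
    ax.axis("off")
    fig.canvas.draw()
    img = np.asarray(fig.canvas.buffer_rgba())[..., :3].copy()  # drop alpha
    plt.close(fig)
    return img

def classifier_fn(images):
    """Stand-in for the trained model's predict_proba: scores each image
    by mean darkness. Replace with the real classifier in practice."""
    darkness = 1.0 - images.mean(axis=(1, 2, 3)) / 255.0
    return np.stack([1.0 - darkness, darkness], axis=1)

series = np.sin(np.linspace(0, 8 * np.pi, 500))  # toy sensor signal
image = series_to_image(series)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, classifier_fn,
                                         top_labels=1, num_samples=200)
# explanation.get_image_and_mask(...) then highlights the plot regions
# that drove the prediction, which map back to segments of the series.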
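For the class-imbalance mitigation, a minimal sketch with the imbalanced-learn package shows how SMOTE and ADASYN oversample the minority (fault) class before training; the synthetic data and the 95/5 ratio are placeholders.

from collections import Counter
from imblearn.over_sampling import ADASYN, SMOTE
from sklearn.datasets import make_classification

# Synthetic stand-in: 5% of windows labelled as faults.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                           random_state=0)
print("before:", Counter(y))

# SMOTE interpolates between existing minority samples; ADASYN generates
# more synthetic samples where the minority class is hardest to learn.
X_sm, y_sm = SMOTE(random_state=0).fit_resample(X, y)
X_ad, y_ad = ADASYN(random_state=0).fit_resample(X, y)
print("after SMOTE:", Counter(y_sm))
print("after ADASYN:", Counter(y_ad))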