Interactive visualization of explanations for machine learning models
Master's Thesis
Author
Ploumidi, Chrysa
Date
2024-07
Keywords
Machine learning ; Interpretability ; Interpretable machine learning ; Explainable AI ; Counterfactual explanations ; Visualization techniques ; Hyperparameter tuning
Abstract
The rapid adoption of Machine Learning (ML) models across various sectors necessitates a deeper understanding of their decision-making processes to ensure transparency and build trust. This thesis explores interpretability in ML, focusing on visualization techniques and the novel approach of Counterfactual Explanations. Visualization techniques, such as graphs and interactive diagrams, transform abstract algorithms into comprehensible formats, enabling users to understand the factors driving model predictions. This is especially critical in fields where model-driven decisions directly affect human lives and societal outcomes.
This research introduces a system for generating Counterfactual Explanations for hyperparameters. Utilizing a proxy model and the dice-ml library in Python, the proposed methodology systematically explores Counterfactual scenarios, offering valuable insights into the decision-making mechanisms of ML models. By visualizing differences between factual and Counterfactual hyperparameters, users can better understand the model's behavior and the factors influencing its predictions.
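The abstract does not spell out the implementation, but the workflow maps naturally onto dice-ml's public API. The sketch below is illustrative only: the logged tuning runs, the 0.85 "good run" threshold, and the RandomForest proxy are assumptions; only the use of a proxy model and the dice-ml library is taken from the thesis.

```python
# Minimal sketch of Counterfactual Explanations over hyperparameters.
# The run log, the 0.85 "good run" threshold, and the RandomForest proxy
# are illustrative assumptions, not the thesis's exact pipeline.
import pandas as pd
import dice_ml
from sklearn.ensemble import RandomForestClassifier

# Hypothetical log of past tuning runs: hyperparameters -> achieved accuracy.
runs = pd.DataFrame({
    "learning_rate": [0.01, 0.1, 0.3, 0.05, 0.2],
    "max_depth":     [3, 5, 8, 4, 6],
    "n_estimators":  [100, 200, 50, 150, 300],
    "accuracy":      [0.81, 0.86, 0.78, 0.84, 0.88],
})
runs["good_run"] = (runs["accuracy"] >= 0.85).astype(int)
X = runs.drop(columns=["accuracy", "good_run"])

# Proxy model: predicts whether a configuration yields a "good" run.
proxy = RandomForestClassifier(random_state=0).fit(X, runs["good_run"])

# Wrap the data and proxy for dice-ml, then generate counterfactuals.
d = dice_ml.Data(dataframe=runs.drop(columns=["accuracy"]),
                 continuous_features=list(X.columns),
                 outcome_name="good_run")
m = dice_ml.Model(model=proxy, backend="sklearn")
explainer = dice_ml.Dice(d, m, method="random")

# Ask: how should this factual "bad" configuration change to become good?
factual = X.iloc[[2]]
cfs = explainer.generate_counterfactuals(factual, total_CFs=3,
                                         desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)  # factual vs. counterfactual
```

The side-by-side dataframe that `visualize_as_dataframe` prints is the raw material for the visual comparison of factual and counterfactual hyperparameters described above.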
The thesis demonstrates the application of this method using the "Adult" dataset from the UCI Machine Learning Repository. The accompanying User Interface (UI) allows users to generate Counterfactual Explanations and select which ones to visualize. The results underscore the effectiveness of visual explanations in enhancing interpretability and facilitating informed decision-making. The findings contribute to the broader field of explainable AI, providing practical tools and methods for improving transparency and accountability in ML systems.
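For readers who want to reproduce the setup, dice-ml bundles a preprocessed copy of the Adult dataset; whether the thesis loaded it through this helper or directly from the UCI repository is an assumption.

```python
# Sketch: loading the preprocessed "Adult" dataset bundled with dice-ml
# and wrapping it for counterfactual generation. The feature choices
# mirror dice-ml's own documentation, not necessarily the thesis.
import dice_ml
from dice_ml.utils import helpers

adult = helpers.load_adult_income_dataset()   # UCI Adult, lightly preprocessed
print(adult.head())

# 'income' is the binary outcome; the remaining columns are features.
d = dice_ml.Data(dataframe=adult,
                 continuous_features=["age", "hours_per_week"],
                 outcome_name="income")
```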
Overall, this work aims to bridge the gap between complex Machine Learning algorithms and the need for understandable and actionable insights, making a significant contribution to the development of more interpretable and trustworthy AI technologies.