Εμπλουτίζοντας σχήματα ταξινόμησης ιατρικών εικόνων με ιδιότητες επεξηγησιμότητας
Enhancing medical imaging classification schemes with explainability properties
Doctoral Thesis
Author
Καλλιπολίτης, Αθανάσιος
Kallipolitis, Athanasios
Date
2023-06-06
Keywords
Explainability; Interpretability; Machine learning; Medical imaging; Image classification; Artificial intelligence algorithms; Deep learning; Activation maps
Abstract
At the beginning of a self-improving journey, a machine learning engineer worries about the performance of the machine learning pipeline, which is mostly expressed in terms of bias error. The first step of maturation involves acknowledging underfitting and overfitting as part of a machine learning process that relies on only a small dataset, as a representative part of the a priori knowledge, in order to solve a complex and multifactorial problem. Inevitably, a "mature" machine learning practitioner needs to grasp the importance of generalization, the corresponding variance, and the irreducible error, if they are to evolve into a "grown-up" machine learning expert. We learn to account for our work by measuring the performance of a proposed methodology. It is a strong requirement that evaluation metrics such as accuracy, balanced accuracy, precision, and recall, chosen according to the nature of the machine learning task, remain at high values, but there are equally significant aspects that need to be taken into consideration. Discovering the principal and auxiliary causes upon which a predictive model decides in favor of one class over another can be a powerful source of useful knowledge, and light should therefore be shed on the inner mechanisms of decision-making. Naturally, the journey of self-improvement is a never-ending loop, as the realization of acquired knowledge leads to new unanswered questions.

For this thesis, the author's maturation in basic machine learning notions has evolved into a quest for developing machine learning approaches for image classification that, inherently or post hoc, can provide meaningful connections between the predicted outcome and the visual patterns that most influenced it. Since traditional machine learning approaches remain efficient solutions for tasks with little available data, the proposed methodologies cover both traditional machine learning and deep learning architectures. The scope of the thesis is limited to medical imaging and the application of explainable machine learning approaches to the corresponding health-related problems. Medical images are considered one of the richest sources of information concerning health data and the basis upon which experts make decisions on treatment plans and interventions. Creating explainable automated systems that support these decisions can speed up their integration into everyday clinical workflows, since experts will be able to understand and trust the generated predictions through added transparency and causality.

Towards the integration of explainability properties into machine learning classification schemes, this thesis proposes a novel explainability scheme based on the Bag of Visual Words paradigm for the interpretation of image classification results by means of ensemble explainable classifiers. Since Fisher Vectors push the performance of vocabulary-based approaches to higher values, the proposed methodology evolves to support the architecture of generative models, such as Gaussian Mixture Models. Concerning deep learning techniques, a novel modular explainability approach is proposed that exploits the advantages of two well-established approaches, Gradient-based Class Activation Maps and superpixels. The results show that the combined scheme significantly increases the performance of the original gradient-based approach, and its modularity allows for implementation with different explainability approaches.
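As a concrete illustration of the vocabulary-based direction mentioned above, the sketch below shows how a set of local image descriptors can be encoded as an (improved) Fisher Vector on top of a diagonal-covariance Gaussian Mixture Model fitted with scikit-learn. This is a minimal, generic sketch, not the thesis's actual pipeline: the function name fisher_vector is illustrative, the random arrays merely stand in for real SIFT or CNN descriptors, and the power/L2 normalization follows the commonly used improved-Fisher-Vector recipe.

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def fisher_vector(descriptors, gmm):
    """Encode local descriptors (T x D) as a Fisher Vector using a
    diagonal-covariance GMM (gradients w.r.t. means and variances)."""
    T, _ = descriptors.shape
    gamma = gmm.predict_proba(descriptors)          # T x K soft assignments
    mu = gmm.means_                                 # K x D
    sigma = np.sqrt(gmm.covariances_)               # K x D (diagonal)
    w = gmm.weights_                                # K mixture weights

    diff = (descriptors[:, None, :] - mu[None, :, :]) / sigma[None, :, :]   # T x K x D
    g_mu = (gamma[:, :, None] * diff).sum(axis=0) / (T * np.sqrt(w)[:, None])
    g_sigma = (gamma[:, :, None] * (diff ** 2 - 1)).sum(axis=0) / (T * np.sqrt(2 * w)[:, None])

    fv = np.hstack([g_mu.ravel(), g_sigma.ravel()])
    # Power normalization followed by L2 normalization (improved FV).
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)


# Example: fit the GMM "vocabulary" on pooled training descriptors, then encode one image.
rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(2000, 64)).astype(np.float32)   # stand-in for real local features
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(train_descriptors)
fv = fisher_vector(rng.normal(size=(300, 64)), gmm)
print(fv.shape)   # (2 * 8 * 64,) = (1024,)
```

The resulting fixed-length vector can then be fed to an interpretable or ensemble classifier, in the spirit of the vocabulary-based explainability scheme described in the abstract.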
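For the deep learning contribution, the abstract names a combination of Gradient-based Class Activation Maps and superpixels. The sketch below illustrates one plausible reading of such a combination, not the thesis's actual implementation: a standard Grad-CAM computed with PyTorch hooks, whose heatmap is then averaged inside SLIC superpixels from scikit-image so that the explanation follows perceptually coherent regions. The function names, the per-superpixel averaging rule, and the target-layer choice are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn.functional as F
from skimage.segmentation import slic


def grad_cam(model, target_layer, image_tensor, class_idx=None):
    """Grad-CAM heatmap for one image tensor of shape (1, 3, H, W)."""
    store = {}

    def forward_hook(module, inputs, output):
        store["acts"] = output

    def backward_hook(module, grad_input, grad_output):
        store["grads"] = grad_output[0]

    fwd = target_layer.register_forward_hook(forward_hook)
    bwd = target_layer.register_full_backward_hook(backward_hook)
    try:
        scores = model(image_tensor)
        if class_idx is None:
            class_idx = int(scores.argmax(dim=1))
        model.zero_grad()
        scores[0, class_idx].backward()
    finally:
        fwd.remove()
        bwd.remove()

    acts, grads = store["acts"][0], store["grads"][0]   # (C, h, w)
    weights = grads.mean(dim=(1, 2))                    # channel importance weights
    cam = F.relu((weights[:, None, None] * acts).sum(dim=0))
    cam = F.interpolate(cam[None, None], size=image_tensor.shape[2:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam / (cam.max() + 1e-8)).detach().cpu().numpy()


def superpixel_refined_map(heatmap, rgb_image, n_segments=200):
    """Average the Grad-CAM heatmap inside each SLIC superpixel so the
    explanation aligns with perceptually coherent image regions."""
    segments = slic(rgb_image, n_segments=n_segments, compactness=10, start_label=0)
    refined = np.zeros_like(heatmap)
    for label in np.unique(segments):
        mask = segments == label
        refined[mask] = heatmap[mask].mean()
    return refined, segments


# Example usage (assumptions): for a torchvision ResNet one could pass
# target_layer = model.layer4[-1] and an rgb_image of shape (H, W, 3).
```

Because the superpixel step only post-processes a heatmap, any other attribution method producing a dense map could be swapped in for Grad-CAM, which reflects the modularity emphasized in the abstract.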