Show simple item record

dc.contributor.advisor: Μαγκλογιάννης, Ηλίας
dc.contributor.advisor: Maglogiannis, Ilias
dc.contributor.author: Ζαφειρίου, Αργύριος
dc.contributor.author: Zafeiriou, Argyrios
dc.date.accessioned: 2022-10-25T10:49:27Z
dc.date.available: 2022-10-25T10:49:27Z
dc.date.issued: 2022-06
dc.identifier.uri: https://dione.lib.unipi.gr/xmlui/handle/unipi/14739
dc.identifier.uri: http://dx.doi.org/10.26267/unipi_dione/2161
dc.format.extent: 71 [el]
dc.language.iso: en [el]
dc.publisher: University of Piraeus (Πανεπιστήμιο Πειραιώς) [el]
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 Greece (CC BY-NC-ND 3.0 GR) [*]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/gr/ [*]
dc.title: Ensembling to leverage the interpretability of medical image analysis systems [el]
dc.type: Master Thesis [el]
dc.contributor.department: School of Information and Communication Technologies, Department of Digital Systems [el]
dc.description.abstract (EN): Modern Artificial Intelligence (AI) systems have been achieving human-level and, in some cases, even higher predictive capabilities across numerous and varied tasks. Two primary reasons behind this accomplishment are the rapid technological evolution and the rising volume of available data, both of which have allowed the development of multimillion-parameter models. Inevitably, along with accuracy, complexity has also risen. But no matter how high the accuracy may be, some tasks, including any medical-related task, require explanations of the model's decisions. When dealing with image data, these explanations usually take the form of salient and non-salient areas over the image, highlighting the important and unimportant regions respectively. Whichever importance attribution method is used, though, the saliency of an area represents the model's view of the stimuli that most influenced the outcome, and can only be as accurate as the quality of the features the model has learned. Thus, a plausible assumption is that the better the model's predictions, the more accurate the explanations it produces. In this work, the efficacy of ensembling models as a means of leveraging explanations is examined, under the premise that ensemble models are combinatorially informed. Apart from ensembling, a novel approach is presented for aggregating the importance attribution maps themselves, examining an alternative way of combining the different views that several competent models offer. The purpose of aggregation is to lower computation costs while allowing maps of various origins to be combined. Following a saliency map evaluation scheme, four tests are performed on three datasets: two medical image datasets and one generic dataset. The results indicate that explainability can indeed benefit from the combination of information, whether by ensembling or by aggregating. A discussion follows, providing insight into the mechanics behind these results, as well as guidelines for potential future work. [el]
dc.corporate.name: National Centre for Scientific Research "Demokritos" [el]
dc.contributor.master: Τεχνητή Νοημοσύνη - Artificial Intelligence [el]
dc.subject.keyword: Medical AI [el]
dc.subject.keyword: Computer vision [el]
dc.subject.keyword: Explainability [el]
dc.subject.keyword: Interpretability [el]
dc.subject.keyword: Attribution maps [el]
dc.date.defense: 2022-02-28
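
The abstract contrasts two ways of combining the views of several models: ensembling the models themselves, and aggregating their importance attribution maps. As an illustration only, the following is a minimal sketch of map aggregation by per-map normalization and pixel-wise averaging, assuming NumPy; the function name and the mean-pooling choice are hypothetical, since the record does not detail the thesis's actual aggregation scheme.

    import numpy as np

    def aggregate_attribution_maps(maps):
        """Combine per-model saliency maps into a single explanation.

        `maps` is a list of 2-D arrays, one importance attribution map
        per model, all computed for the same input image. Each map is
        min-max normalized so that models with different output scales
        contribute comparably, then the maps are averaged pixel-wise.
        """
        normalized = []
        for m in maps:
            m = m.astype(float)
            span = m.max() - m.min()
            # Guard against constant maps, which would divide by zero.
            normalized.append((m - m.min()) / span if span > 0 else np.zeros_like(m))
        return np.mean(normalized, axis=0)

    # Hypothetical usage: three models' saliency maps for one 224x224 image.
    maps = [np.random.rand(224, 224) for _ in range(3)]
    combined = aggregate_attribution_maps(maps)
    print(combined.shape)  # (224, 224)

Averaging after min-max normalization is one simple way to combine maps of different origins on a comparable scale; other pooling choices, such as a pixel-wise maximum, would be equally plausible sketches.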



Except where otherwise noted, this item is distributed under the following license: Attribution-NonCommercial-NoDerivs 3.0 Greece.
