A comparative analysis of adversarial techniques on AI models

Master Thesis
Author
Kollarou, Athanasia
Κολλάρου, Αθανασία
Date
2025-02
Supervisor
Xenakis, Christos
Ξενάκης, Χρήστος
Keywords
Adversarial Machine Learning (AML); Evasion attacks; Neural networks; Model robustness; Evaluation of AI model security
Abstract
The increasing adoption of Artificial Intelligence and Machine Learning in critical
sectors underscores the need for robust systems resilient to adversarial threats. While
deep learning architectures have revolutionized tasks such as image recognition, their
robustness against adversarial techniques remains an open issue. This study evaluates the
impact of adversarial techniques,
including Fast Gradient Sign Method, Projected Gradient Descent, DeepFool, and
Carlini & Wagner, on five neural network models: Fully Connected Neural Network,
LeNet, Simple CNN, MobileNetV2, and VGG11. Using the EvAIsion tool developed
for this research, the attacks were implemented and their impact analysed with metrics
such as accuracy, F1 score, and misclassification rate. The results showed varying levels
of vulnerability across the tested models, with simpler models in some cases retaining
slightly better performance than more complex ones. The findings suggest that the
effectiveness of adversarial techniques can vary depending on the model. This
emphasises the importance of selecting the most appropriate attack techniques for the
targeted architecture and of tuning the attack parameters to achieve the best results in
each specific case.
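
For illustration, the following is a minimal sketch of how an evasion attack such as the Fast Gradient Sign Method and a misclassification-rate metric of the kind mentioned above are commonly implemented in PyTorch. The function names, the epsilon value, and the choice of PyTorch are assumptions made for this sketch; they do not describe the actual EvAIsion implementation used in the thesis.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # One-step FGSM (illustrative): perturb the input along the sign of the
    # loss gradient, scaled by epsilon, and keep pixels in the valid [0, 1] range.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def misclassification_rate(model, x_adv, y):
    # Fraction of adversarial inputs that the model no longer classifies correctly.
    with torch.no_grad():
        preds = model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()

Stronger attacks such as Projected Gradient Descent follow the same gradient-sign idea but apply it iteratively with a projection back onto the allowed perturbation region, while DeepFool and Carlini & Wagner solve an optimisation problem to find smaller perturbations.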