Generative adversarial AI

Master Thesis
Author
Kalligeros, Panteleimon
Date
2025
Advisor
Xenakis, Christos
Keywords
Adversarial ; AI ; Artificial Intelligence ; Attacks ; Hyperparameters ; Taxonomy
Abstract
As artificial intelligence (AI) systems become more deeply integrated into critical applications, their susceptibility to adversarial attacks emerges as a significant security concern. This thesis investigates adversarial AI by proposing a comprehensive taxonomy of attack strategies and conducting an empirical assessment of their impact on machine learning (ML) and deep learning (DL) models. The research categorizes adversarial attacks into four primary types: poisoning, evasion, inference, and model extraction, providing a structured view of their methods and objectives. It further examines how model hyperparameters, such as learning rate, regularization strength, and network architecture, influence vulnerability to these attacks. Through a series of controlled experiments, the study shows how attack effectiveness varies across strategies such as FGSM, PGD, DeepFool, and KnockoffNets, revealing the trade-off between optimal model performance and robustness. The findings emphasize that even small adjustments to hyperparameters can substantially alter a model's security posture. This research contributes to the growing field of adversarial AI by combining theoretical analysis with practical experimentation, offering actionable insights for building more resilient AI systems and laying the groundwork for future defense strategies.
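
To make the evasion attacks named above concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM) in PyTorch. The model, inputs, and epsilon value are illustrative placeholders under assumed conditions and do not reflect the thesis's actual experimental configuration.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # FGSM: take a single step of size epsilon in the direction of the sign
    # of the loss gradient with respect to the input.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturbation is bounded by epsilon in the L-infinity norm.
    x_adv = x + epsilon * x.grad.sign()
    # Clamp assumes inputs are normalized to [0, 1]; adjust for other ranges.
    return x_adv.clamp(0.0, 1.0).detach()

In a typical evaluation loop, x_adv = fgsm_attack(model, images, labels, epsilon=0.03) would be fed back through the model and the drop in accuracy relative to the clean inputs recorded; iterative attacks such as PGD repeat this gradient step several times with projection back into the epsilon-ball.
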