Risk management for AI systems

Master Thesis
Author
Psychogyiou, Aikaterini
Date
2025-05
Advisor
Xenakis, Christos
Keywords
AI ; Risk management ; Cybersecurity risk management ; AI/ML ; Optimisation ; Qualitative cybersecurity risk assessment
Abstract
The rapid integration of artificial intelligence (AI) technologies across various sectors has transformed many aspects of everyday processes, offering unprecedented advancements in efficiency and decision-making. However, this evolution has also introduced significant risks, including serious security and compliance threats, as well as ethical dilemmas that jeopardise the integrity of these emerging systems. This dissertation aims to advance the largely undeveloped landscape of risk management for AI systems by developing a comprehensive framework that addresses the unique challenges involved in assessing and improving AI technologies. The research begins with an in-depth literature review that examines existing risk management theories and frameworks, including ENISA guidelines and ISO standards, and identifies critical risk categories specific to AI systems together with their corresponding countermeasures. The study then proposes a new risk management model tailored specifically to AI technologies, bringing together principles from cybersecurity, governance, and established ethical guidelines under a unified structure. Finally, the dissertation presents a case study of an AI-powered fraud detection system within a financial institution, showcasing the practical application of the proposed framework. The case study verifies the model's functionality through the complete execution of the risk management process, identifying potential risks along with the measures that could mitigate or even eliminate them. The results reveal significant improvements in the system's operational efficiency while also addressing challenges related to data privacy and regulatory compliance. The findings contribute to the growing body of knowledge on AI risk management, offering a structured approach that organisations can adopt to mitigate associated risks effectively. By emphasising the importance of continuous monitoring and adaptive strategies, this research aims to promote the safer and more trustworthy development of AI systems, ensuring their resilience in an increasingly AI-driven environment.
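The qualitative assessment step described in the abstract can be illustrated with a minimal sketch: each identified risk receives ordinal likelihood and impact ratings, and their product places the risk on a low/medium/high scale. The risk entries, rating scales, and thresholds below are hypothetical examples for illustration only; they are not taken from the thesis's actual model.

```python
# Illustrative qualitative risk-assessment sketch (likelihood x impact).
# All risk names and thresholds are hypothetical, not from the thesis.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Qualitative risk score: likelihood rating times impact rating (1-9)."""
    return LEVELS[likelihood] * LEVELS[impact]

def classify(score: int) -> str:
    """Map a score onto a qualitative risk level for prioritisation."""
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Hypothetical risks for an AI-powered fraud detection system.
risks = [
    ("training-data poisoning", "medium", "high"),
    ("model drift on new fraud patterns", "high", "medium"),
    ("personal-data leakage", "low", "high"),
]

for name, likelihood, impact in risks:
    score = risk_score(likelihood, impact)
    print(f"{name}: score={score}, level={classify(score)}")
```

Risks rated "high" would then be prioritised for the mitigation or elimination measures that the framework prescribes, while "low" risks might simply be monitored.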

