Αντιμετώπιση των deepfakes μέσω προηγμένων τεχνολογιών κυβερνοασφάλειας
Cybersecurity advanced technologies for competing against deepfakes

Keywords
Deepfakes; Cybersecurity; AI; CNNs; GANs; Watermarking; Disinformation; Detection

Abstract
The rapid development of artificial intelligence algorithms in recent years has introduced a new threat to the online world: realistic synthetic media. These are media such as videos, images, and texts that have been produced by artificial intelligence and can hardly be distinguished from genuine content. The purpose of this paper is to demonstrate the importance of the problem posed by the distribution of synthetic videos and images, showing how society can be affected through disinformation, while also delving into the technical interventions that cybersecurity can offer to address these threats. First, the necessary background for defining deepfakes is given, citing examples of threats drawn from real past incidents. Then, the analysis of how deepfakes are created lays the groundwork for the rest of the work; more specifically, the basic algorithms that generate synthetic content, Generative Adversarial Networks and Autoencoders, are analyzed. Aiming at a system that constitutes a comprehensive solution for authenticating genuine, unprocessed material and ensuring its integrity, the most fundamental synthetic content detection techniques in the literature are analyzed, with a focus on the Xception model. It is then explained how modern watermarking techniques can achieve the authentication of material and the assurance of multimedia integrity. Building on these, code was developed that detects synthetic content with the Xception model and then embeds an encrypted watermark using the LSB (least significant bit) technique, through which the integrity and authenticity of the content are ensured. Finally, a brief analysis is provided of the data used to train artificial intelligence models and how such data can be protected under the GDPR and the Artificial Intelligence Act.
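To illustrate the pipeline described above (Xception-based detection followed by LSB embedding of an encrypted watermark), a minimal Python sketch is shown below. It is not the thesis implementation: the model file name xception_deepfake.h5, the single sigmoid output, the Fernet-based encryption, and the watermark text are all assumptions made for illustration.

# Illustrative sketch only: Xception-based deepfake check, then LSB embedding
# of an encrypted watermark. Model path, key handling, and payload are
# hypothetical placeholders, not the thesis code.
import numpy as np
from PIL import Image
from cryptography.fernet import Fernet
from tensorflow.keras.models import load_model
from tensorflow.keras.applications.xception import preprocess_input

MODEL_PATH = "xception_deepfake.h5"   # hypothetical fine-tuned checkpoint
KEY = Fernet.generate_key()           # in practice the key would be managed securely

def is_authentic(image_path: str, threshold: float = 0.5) -> bool:
    """Classify an image as genuine or synthetic with a fine-tuned Xception model."""
    model = load_model(MODEL_PATH)
    img = Image.open(image_path).convert("RGB").resize((299, 299))
    x = preprocess_input(np.expand_dims(np.asarray(img, dtype="float32"), 0))
    fake_prob = float(model.predict(x)[0][0])  # assumes a single sigmoid output
    return fake_prob < threshold

def embed_lsb_watermark(image_path: str, message: str, out_path: str) -> None:
    """Encrypt a watermark and hide it in the least significant bits of the pixels."""
    token = Fernet(KEY).encrypt(message.encode())  # encrypted watermark payload
    bits = "".join(f"{b:08b}" for b in len(token).to_bytes(4, "big") + token)
    img = np.array(Image.open(image_path).convert("RGB"))
    flat = img.flatten()
    if len(bits) > flat.size:
        raise ValueError("image too small for this watermark")
    for i, bit in enumerate(bits):  # overwrite one least significant bit per channel value
        flat[i] = (flat[i] & 0xFE) | int(bit)
    # A lossless format such as PNG is required so the embedded bits survive saving.
    Image.fromarray(flat.reshape(img.shape)).save(out_path)

if __name__ == "__main__":
    if is_authentic("input.png"):
        embed_lsb_watermark("input.png", "verified-original", "watermarked.png")

In such a design, the watermark is only embedded in content the detector judges to be genuine, so the presence of a valid, decryptable watermark can later serve as evidence of both authenticity and integrity.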