Content moderation by platforms and personal data protection

Keywords
Platforms; Content moderation; GDPR; DSA

Abstract
This paper addresses the issue of content moderation by digital platforms through the lens of personal data protection and the broader safeguarding of fundamental rights in the contemporary digital environment. In particular, it seeks to highlight the ways in which mechanisms of filtering, flagging, removal, and restriction of content, whether through human supervision or algorithmic processes, affect both the scope of freedom of expression and the privacy of users. The issue becomes even more significant when considering that platforms, through the collection and processing of vast amounts of personal data, concentrate a form of private but de facto institutional power, which is not always subject to sufficient accountability or transparency.
Within this context, the study analyzes the models of self-regulation and co-regulation adopted internationally, assessing their effectiveness in light of the principles of proportionality, necessity, and transparency. The research focuses on the transformation of content moderation from a mere technical process into an institutional challenge, directly related to the functioning of democracy in the digital sphere.
A central position in the analysis is held by the European regulatory framework, as shaped by Regulation (EU) 2022/2065 on Digital Services (Digital Services Act - DSA), which introduces new rules regarding content moderation, provider liability, system transparency, and the protection of users from abusive practices. At the same time, the study explores the relationship between the DSA and the protection of personal data, particularly the General Data Protection Regulation (EU) 2016/679 (GDPR) and Law 4624/2019, which establish the legal framework for the lawful processing of personal data and the rights of data subjects.
The research is based on the analysis of legislative and regulatory texts, the case law of the Court of Justice of the European Union (CJEU) and the European Court of Human Rights (ECtHR), as well as scientific literature that highlights the legal and ethical dimensions of the issue. Through this multi-layered approach, the study aims to capture the tension between the need to combat illegal or harmful content and the obligation to safeguard individual freedoms.
The overall assessment highlights the need to establish a coherent framework of cooperation between states, platforms, and independent authorities, one that combines technological innovation with the protection of fundamental rights. Strengthening transparency, introducing clear mechanisms of review and appeal, and consistently integrating data protection principles are identified as key pillars for achieving a fairer and more democratic model of digital governance. In this way, content moderation can evolve from a field of conflicting interests into a mechanism of balance between freedom of expression, digital security, and user privacy.

