Fake news detection on social media

Master Thesis
Author
Tzoras, Christos
Date
2025-09
Advisor
Pelekis, Nikolaos
Keywords
Fake news detection; Explainable AI (XAI); Large Language Models (LLMs); Multi-agent systems; Interpretable machine learning; Fact-checking; Source quality evaluation; Confidence scoring; Rule-based coordination; Transparency; AI; LIAR dataset; FakeNewsNet dataset
Abstract
This thesis explores the development of a novel, explainable system for fake news detection, leveraging the capabilities of large language models (LLMs) within a modular multi-agent architecture. The system integrates several specialized agents, such as a Fact Checker, Confidence Scorer, Source Quality Evaluator, and a Supervisor Agent, each contributing interpretable insights to the final verdict. Unlike traditional black-box models, this approach emphasizes transparency and human-aligned reasoning, enabling users to trace the rationale behind each classification.
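The abstract does not include the system's code, but the coordination it describes can be sketched as follows. All class, field, and function names here are hypothetical illustrations, not the thesis's actual implementation: a rule-based supervisor weighs each agent's confidence-scored vote and retains every rationale, so the final verdict remains traceable to the agents that produced it.

```python
from dataclasses import dataclass

@dataclass
class AgentReport:
    """Output of one specialist agent (hypothetical schema)."""
    agent: str
    verdict: str       # "fake" or "real"
    confidence: float  # 0.0 - 1.0
    rationale: str     # human-readable justification

def supervisor_verdict(reports: list[AgentReport]) -> dict:
    """Rule-based coordination: sum confidence-weighted votes
    ("fake" counts positive, "real" negative) and keep every
    agent's rationale as evidence for the final verdict."""
    score = sum(
        r.confidence if r.verdict == "fake" else -r.confidence
        for r in reports
    )
    return {
        "verdict": "fake" if score > 0 else "real",
        "score": round(score, 2),
        "evidence": [(r.agent, r.rationale) for r in reports],
    }

reports = [
    AgentReport("FactChecker", "fake", 0.9, "Claim contradicts cited source"),
    AgentReport("SourceQuality", "fake", 0.6, "Outlet has low reliability rating"),
    AgentReport("ConfidenceScorer", "real", 0.3, "Wording is heavily hedged"),
]
result = supervisor_verdict(reports)  # verdict "fake", with all three rationales attached
```

Because the structured output carries each agent's rationale alongside the verdict, a user can inspect exactly which agent drove the classification, which is the interpretability property the thesis emphasizes.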
To assess performance, the system was manually tested on 80 news claims (40 from the LIAR dataset and 40 from FakeNewsNet), using both titles and full texts. The system achieved an overall accuracy of 75%, with 76.6% accuracy on title-based entries. Beyond accuracy, the system offers structured outputs, evidence citations, confidence scores, and semantic justifications, marking a major improvement in interpretability over existing models such as BERT, LSTM, and hybrid deep learning frameworks.
The results indicate that LLM-based agents, when orchestrated through interpretable rule-based coordination, can offer competitive performance with unprecedented explainability. This thesis concludes by proposing directions for future work, including testing across more datasets, automating agent decision logic with mathematical functions, and exploring other LLM families.
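One proposed direction, automating agent decision logic with mathematical functions, could for instance replace hand-written rules with a parametric aggregator. The sketch below is one such assumption, not the thesis's method: a logistic function over confidence-weighted agent votes, whose weights could then be fit on labelled claims rather than tuned by hand.

```python
import math

def logistic_aggregate(confidences: list[float], votes: list[int],
                       weights: list[float], bias: float = 0.0) -> float:
    """Aggregate agent outputs with a logistic function.
    votes: +1 = "fake", -1 = "real"; confidences in [0, 1].
    Returns the modelled probability that the claim is fake."""
    z = bias + sum(w * v * c
                   for w, v, c in zip(weights, votes, confidences))
    return 1.0 / (1.0 + math.exp(-z))

# Three hypothetical agents: two vote "fake" (0.9, 0.6 confidence),
# one votes "real" (0.3 confidence), all weighted equally.
p = logistic_aggregate([0.9, 0.6, 0.3], [1, 1, -1], [1.0, 1.0, 1.0])
```

A learned aggregator of this kind would keep the per-agent structure (and hence the traceability) while removing the need to hand-craft coordination rules.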


