A comparative study on explainable machine learning models for fact checking
Master Thesis
Author
Gkatsis, Vasileios
Γκάτσης, Βασίλειος
Date
2022-02
Keywords
Artificial intelligence ; NLP ; Fact checking ; Claim-justification
Abstract
Fact checking, the task of assessing the validity of a claim or a piece of news, is a very important process for both journalists and the public, especially in the era of social media. A large amount of research has been directed towards finding automated solutions for this problem. Recent advancements in Artificial Intelligence and Machine Learning have provided tools and frameworks with very good results. Especially with recent improvements in hardware, the development of state-of-the-art algorithms and, most importantly, the availability of high-quality data, tremendous progress has been made. With the broader use of such methods, the demand for reliability has begun to emerge. That means that models should not appear as black boxes; their actions should be clear and understandable by humans. The two terms which describe that need are interpretability and explainability. Interpretability can be viewed as the ability of a machine learning model's actions to be transparent, while explainability is the ability of the model to use human-understandable means of providing explanations about its actions. Different approaches have been proposed in order to achieve such models, and discussions have arisen about the usefulness of certain methods. In this thesis we study two different explanation approaches. One uses the set of words that contributed the most to the fact checking decision, and the other uses short summaries extracted from ruling articles. We then propose a new high-level taxonomy of claim justifications which can serve as an evaluation method for the aforementioned approaches as well as a new means of explanation.