Responsible and Explainable Fact-Checking through Fine-Grained Factual Reasoning
The constant increase of misinformation around the world has become an urgent global problem. Most existing automated fact-checking addresses news at the article or claim level. Nevertheless, news credibility verification and fact-checking systems at scale require fine-grained prediction, since each news document comprises multiple sentences, which may mix media bias, facts, and fake information. Furthermore, due to the high complexity of interpreting machine learning models, there is a lack of transparency that poses unwanted risks for misinformation applications. To address these limitations and advance the effective countering of misinformation on the web and social media in Brazil, we have already created expert annotation schemas and accurate data resources, such as the FactNews dataset and the Central de Fatos repository, for sentence-level news source reliability estimation of media outlets and sentence-level factuality prediction of news articles. In this project, we aim to investigate and propose responsible and explainable methods for automated fact-checking that use factual reasoning to explain the reliability and veracity of news articles or claims at a fine-grained level. The proposed fact-checking system will compute an overall trustworthiness score for the entire document, taking into account sentence-level explanations of reliability and veracity, by predicting media bias (including propaganda), retrieving evidence from different repositories, and generating post-hoc explanations using different explainability methods.
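As a rough illustration of the aggregation described above, the sketch below combines hypothetical sentence-level reliability and veracity scores, with a penalty for detected bias, into a document-level trustworthiness score. All names, weights, and the averaging scheme are illustrative assumptions for exposition, not the project's actual method:

```python
def sentence_trust(reliability: float, veracity: float,
                   bias_penalty: float = 0.0) -> float:
    """Combine per-sentence signals into one trust score in [0, 1].

    The equal weighting and the subtractive bias penalty are
    illustrative choices, not part of the published approach.
    """
    score = 0.5 * reliability + 0.5 * veracity - bias_penalty
    return max(0.0, min(1.0, score))

def document_trustworthiness(sentences: list[dict]) -> float:
    """Average sentence-level trust scores over the whole article."""
    if not sentences:
        return 0.0
    scores = [
        sentence_trust(s["reliability"], s["veracity"],
                       s.get("bias_penalty", 0.0))
        for s in sentences
    ]
    return sum(scores) / len(scores)

# Two hypothetical sentences: one factual, one flagged as biased.
article = [
    {"reliability": 0.9, "veracity": 0.8},
    {"reliability": 0.6, "veracity": 0.4, "bias_penalty": 0.2},
]
print(round(document_trustworthiness(article), 3))  # prints 0.575
```

A simple mean is used here only to keep the sketch short; a real system could weight sentences by evidence strength or by the confidence of the bias and factuality predictors.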
Leader
- Francielle Vargas. Institute of Mathematics and Computer Sciences, University of São Paulo, Brazil
Team
- Ameeta Agrawal. College of Engineering and Computer Science, Portland State University, United States
- Kokil Jaidka. Centre for Trusted Internet and Community, National University of Singapore, Singapore
- Zohar Rabinovich. Viterbi School of Engineering, University of Southern California, United States
- Diego Alves. Department of Language Science and Technology, Saarland University, Germany
- Isadora Salles. Department of Computer Science, Federal University of Minas Gerais, Brazil
- Jonas D'Alessandro. Department of Linguistics, Federal University of Minas Gerais, Brazil
- Fabrício Benevenuto. Department of Computer Science, Federal University of Minas Gerais, Brazil
Publications
- Improving Explainable Fact-Checking via Sentence-Level Factual Reasoning
Francielle Vargas, Isadora Salles, Diego Alves, Ameeta Agrawal, Thiago Pardo, Fabrício Benevenuto
7th Fact Extraction and VERification Workshop (FEVER @ EMNLP 2024). Miami, United States. To appear.
- Predicting Sentence-Level Factuality of News and Bias of Media Outlets
Francielle Vargas, Kokil Jaidka, Thiago A.S. Pardo, Fabrício Benevenuto
Recent Advances in Natural Language Processing (RANLP 2023). pp. 1197–1206. Varna, Bulgaria.
- Rhetorical Structure Approach for Online Deception Detection: A Survey
Francielle Vargas, Jonas D'Alessandro, Zohar Rabinovich, Fabrício Benevenuto, Thiago A.S. Pardo
13th Conference on Language Resources and Evaluation (LREC 2022). pp. 5906–5915. Marseille, France.
- Toward Discourse-Aware Models for Multilingual Fake News Detection
Francielle Vargas, Fabrício Benevenuto, Thiago A.S. Pardo
Recent Advances in Natural Language Processing (RANLP 2021). pp. 210–218. Held Online.
Resources
New Methods (patents)
- SELFAR: Sentence-Level Factual Reasoning for Explainable Fact-Checking
Datasets
- FactNews: Sentence-Level Annotated Dataset to Predict Factuality and Media Bias
Software
- FACTual: A Fact-Checking and News Source Reliability Estimation System