Research

Research projects, papers, books, etc.


  • Paper

A Multi-Level Attention Model for Evidence-Based Fact Checking

Authors: Canasai Kruengkrai, Xin Wang, Junichi Yamagishi

  • #Natural language processing
  • #Automated fact checking

Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Evidence-based fact checking aims to verify the truthfulness of a claim against evidence extracted from textual sources. Learning a representation that effectively captures relations between a claim and evidence can be challenging. Recent state-of-the-art approaches have developed increasingly sophisticated models based on graph structures. We present a simple model that can be trained on sequence structures. Our model enables inter-sentence attentions at different levels and can benefit from joint training. Results on a large-scale dataset for Fact Extraction and VERification (FEVER) show that our model outperforms the graph-based approaches and yields 1.09% and 1.42% improvements in label accuracy and FEVER score, respectively, over the best published model. The code and model checkpoints are available at: https://github.com/nii-yamagishilab/mla.
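To make the idea of attention "at different levels" concrete, below is a minimal, hypothetical sketch of a two-level attention classifier over claim-evidence pairs: token-level attention within each pair, followed by sentence-level attention across the pooled evidence representations, feeding a three-way verdict head. This is not the authors' implementation (see the official repository linked above); all class names, dimensions, and pooling choices here are illustrative assumptions.

```python
# Hypothetical sketch of two-level (token- and sentence-level) attention
# for claim verification. Not the MLA implementation from the paper;
# see https://github.com/nii-yamagishilab/mla for the real code.
import torch
import torch.nn as nn

class TwoLevelAttentionSketch(nn.Module):
    def __init__(self, hidden=256, heads=4, num_labels=3):
        super().__init__()
        # Token-level attention within each (claim, evidence) pair.
        self.token_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        # Sentence-level attention across the pooled evidence sentences.
        self.sent_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        # Verdict head, e.g. SUPPORTS / REFUTES / NOT ENOUGH INFO.
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, pair_embeddings):
        # pair_embeddings: (batch, num_evidence, seq_len, hidden),
        # e.g. encoder outputs for each concatenated claim-evidence pair.
        b, e, t, h = pair_embeddings.shape
        tokens = pair_embeddings.view(b * e, t, h)
        tokens, _ = self.token_attn(tokens, tokens, tokens)   # intra-pair attention
        sent_repr = tokens.mean(dim=1).view(b, e, h)          # pool tokens -> sentence vectors
        sents, _ = self.sent_attn(sent_repr, sent_repr, sent_repr)  # cross-evidence attention
        return self.classifier(sents.mean(dim=1))             # claim-level label logits

if __name__ == "__main__":
    model = TwoLevelAttentionSketch()
    dummy = torch.randn(2, 5, 32, 256)  # 2 claims, 5 evidence sentences, 32 tokens each
    print(model(dummy).shape)           # torch.Size([2, 3])
```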