
Tackling Hallucinations in Neural Chart Summarization

Introduction

The trained models, investigations, and state-of-the-art (SOTA) improvements are detailed in the paper Tackling Hallucinations in Neural Chart Summarization. This repository contains the optimized input prompts and the summaries obtained after NLI filtering.

Abstract

Hallucinations in text generation occur when the system produces text that is not grounded in the input. In this work, we address the problem of hallucinations in neural chart summarization. Our analysis reveals that the target side of chart summarization training datasets often contains additional information, leading to hallucinations. We propose a natural language inference (NLI) based method to preprocess the training data and demonstrate through human evaluation that our approach significantly reduces hallucinations. Additionally, we found that shortening long-distance dependencies in the input sequence and adding chart-related information such as titles and legends enhances overall performance.
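The NLI cleaning step can be pictured roughly as follows. This is a minimal sketch, not the exact pipeline from the paper: it assumes an off-the-shelf MNLI model (roberta-large-mnli via the Hugging Face transformers pipeline) and an illustrative 0.5 entailment threshold, treating the linearized chart data as the premise and each summary sentence as the hypothesis.

```python
# Minimal sketch of NLI-based filtering of training summaries.
# Assumptions: roberta-large-mnli as the NLI model and a 0.5 score threshold;
# the paper's exact model and threshold may differ.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def filter_summary(chart_premise, summary_sentences, threshold=0.5):
    """Keep only the summary sentences the NLI model judges as entailed by the chart data."""
    kept = []
    for sentence in summary_sentences:
        # Premise: the linearized chart data. Hypothesis: one summary sentence.
        result = nli([{"text": chart_premise, "text_pair": sentence}])[0]
        if result["label"] == "ENTAILMENT" and result["score"] >= threshold:
            kept.append(sentence)
    return kept

# Toy example: the second sentence is not grounded in the chart and should be dropped.
premise = "Title: Smartphone market share 2020 | Brand A: 40% | Brand B: 35% | Brand C: 25%"
sentences = [
    "Brand A held the largest market share in 2020.",
    "Sales are expected to double next year.",
]
print(filter_summary(premise, sentences))
```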

Main Findings from the Paper

  • Enhanced Context Provision: Emphasizing the importance of providing more context and reducing long-distance dependencies in the input format (see the linearization sketch after this list).
  • NLI Cleaning Step: Introducing an NLI-based cleaning step to eliminate ungrounded information in the training data.
  • Reduction of Intrinsic Hallucinations: Demonstrating that reducing long-distance dependencies and adding more context leads to fewer intrinsic hallucinations.
  • Cause of Extrinsic Hallucinations: Identifying that extrinsic hallucinations are caused by ungrounded information in training summaries.
  • Human Evaluation Results: Showing that using NLI to filter training summaries significantly reduces hallucinations.
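
To make the context and long-distance-dependency points concrete, the sketch below shows one plausible way to linearize a chart: the title and legend are prepended, and each label is kept directly next to its value rather than listing all labels followed by all values. The separators and field names here are illustrative assumptions, not the exact format used in the paper.

```python
# Illustrative chart linearization: title and legend are prepended for context, and
# each label is paired with its value to avoid long-distance dependencies.
# Separators and field names are assumptions for illustration only.
def linearize_chart(title, legend, labels, values):
    pairs = " | ".join(f"{label}: {value}" for label, value in zip(labels, values))
    return f"Title: {title} | Legend: {legend} | {pairs}"

print(linearize_chart(
    title="Smartphone market share 2020",
    legend="share (%)",
    labels=["Brand A", "Brand B", "Brand C"],
    values=[40, 35, 25],
))
# Title: Smartphone market share 2020 | Legend: share (%) | Brand A: 40 | Brand B: 35 | Brand C: 25
```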

GitHub Link for the Original Chart-to-Text Data

vis-nlp/Chart-to-text

Optimized Prompt Dataset with NLI Filtering

Hallucinations-C2T Data
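
A hypothetical loading example is shown below. It assumes the filtered data ships as a CSV with "prompt" and "summary" columns; the actual file names and column layout may differ, so check the linked repository.

```python
# Hypothetical loading example: the file name and column names are assumptions;
# see the linked repository for the actual layout.
import pandas as pd

train = pd.read_csv("train_nli_filtered.csv")  # hypothetical file name
print(train.columns.tolist())
print(train.loc[0, "prompt"])   # optimized input prompt (chart data with title/legend)
print(train.loc[0, "summary"])  # NLI-filtered target summary
```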

Citations

If you use this data, kindly cite our work. Thank you.

BibTeX for the Tackling Hallucinations Paper

@inproceedings{obaid-ul-islam-etal-2023-tackling,
    title = {Tackling Hallucinations in Neural Chart Summarization},
    author = {Obaid ul Islam, Saad and Škrjanec, Iza and Dusek, Ondrej and Demberg, Vera},
    booktitle = {Proceedings of the 16th International Natural Language Generation Conference},
    month = sep,
    year = {2023},
    address = {Prague, Czechia},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2023.inlg-main.30},
    doi = {10.18653/v1/2023.inlg-main.30},
    pages = {414--423},
    abstract = {Hallucinations in text generation occur when the system produces text that is not grounded in the input. In this work, we tackle the problem of hallucinations in neural chart summarization. Our analysis shows that the target side of chart summarization training datasets often contains additional information, leading to hallucinations. We propose a natural language inference (NLI) based method to preprocess the training data and show through human evaluation that our method significantly reduces hallucinations. We also found that shortening long-distance dependencies in the input sequence and adding chart-related information like title and legends improves the overall performance.}
}

BibTeX for the Original Chart-to-Text Dataset Paper

@inproceedings{kantharaj-etal-2022-chart,
    title = {Chart-to-Text: A Large-Scale Benchmark for Chart Summarization},
    author = {Kantharaj, Shankar and Leong, Rixie Tiffany and Lin, Xiang and Masry, Ahmed and Thakkar, Megh and Hoque, Enamul and Joty, Shafiq},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.acl-long.277},
    doi = {10.18653/v1/2022.acl-long.277},
    pages = {4005--4023}
}