saadob12 committed on
Commit
cf5be75
·
verified ·
1 Parent(s): 81c11dc

Update README.md

Files changed (1): README.md (+74, -50)
README.md CHANGED
@@ -1,50 +1,74 @@
- This dataset consists only of the linearized underlying data tables of charts and their corresponding summaries.
-
- Model that uses this dataset: https://huggingface.co/saadob12/t5_C2T_big
-
- ## Created By:
- Kantharaj, S., Leong, R. T. K., Lin, X., Masry, A., Thakkar, M., Hoque, E., & Joty, S. (2022). Chart-to-Text: A Large-Scale Benchmark for Chart Summarization. arXiv preprint arXiv:2203.06486.
-
- **Paper**: https://arxiv.org/abs/2203.06486
-
- **Original GitHub repo**: https://github.com/vis-nlp/Chart-to-text
-
- # Abstract from the Paper
- Charts are commonly used for exploring data and communicating insights. Generating natural language summaries from charts can be very helpful for people in inferring key insights that would otherwise require a lot of cognitive and perceptual efforts. We present Chart-to-text, a large-scale benchmark with two datasets and a total of 44,096 charts covering a wide range of topics and chart types. We explain the dataset construction process and analyze the datasets. We also introduce a number of state-of-the-art neural models as baselines that utilize image captioning and data-to-text generation techniques to tackle two problem variations: one assumes the underlying data table of the chart is available while the other needs to extract data from chart images. Our analysis with automatic and human evaluation shows that while our best models usually generate fluent summaries and yield reasonable BLEU scores, they also suffer from hallucinations and factual errors as well as difficulties in correctly explaining complex patterns and trends in charts.
-
- ### Note
- The original paper published two sub-datasets, one collected from Statista and the other from Pew. The dataset uploaded here is the Statista one. Images can be downloaded from the GitHub repo mentioned above.
-
- # Language
-
- The data and the summaries are in English.
-
- # Dataset split
- | train | valid | test |
- |:---:|:---:|:---:|
- | 24367 | 5222 | 5222 |
-
- **Name of Contributor:** Saad Obaid ul Islam
+ # Tackling Hallucinations in Neural Chart Summarization
+
+ ## Introduction
+
+ The trained models and the state-of-the-art (SOTA) improvements are detailed in the paper [Tackling Hallucinations in Neural Chart Summarization](https://aclanthology.org/2023.inlg-main.30/).
+ This repo contains the optimized input prompts and the summaries after NLI filtering.
+
+ ### Abstract
+
+ Hallucinations in text generation occur when the system produces text that is not grounded in the input. In this work, we address the problem of hallucinations in neural chart summarization. Our analysis reveals that the target side of chart summarization training datasets often contains additional information, leading to hallucinations. We propose a natural language inference (NLI) based method to preprocess the training data and demonstrate through human evaluation that our approach significantly reduces hallucinations. Additionally, we found that shortening long-distance dependencies in the input sequence and adding chart-related information such as titles and legends enhances overall performance.
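As a rough illustration of the input-side idea above, here is a hypothetical linearization sketch (not the repository's exact prompt format): the chart title is prepended and the column header is repeated next to every value, so related tokens stay adjacent and long-distance dependencies are shortened. The function name and output format are assumptions for illustration only.

```python
def linearize_chart(title, columns, rows):
    """Flatten a chart's data table into a prompt string.

    Repeating each column name next to its value keeps related tokens
    adjacent, in the spirit of the paper's finding that shorter
    long-distance dependencies reduce intrinsic hallucinations.
    """
    cells = []
    for row in rows:
        cells.extend(f"{col} is {val}" for col, val in zip(columns, row))
    return f"Chart title: {title}. " + " ; ".join(cells)

prompt = linearize_chart(
    "Smartphone users in the U.S.",
    ["Year", "Users (millions)"],
    [("2019", "265.9"), ("2020", "275.7")],
)
```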
+
+ ## Main Findings from the Paper
+
+ - **Enhanced Context Provision:** Emphasizing the importance of providing more context and reducing long-distance dependencies in the input format.
+ - **NLI Cleaning Step:** Introducing an NLI-based cleaning step to eliminate ungrounded information in the training data.
+ - **Reduction of Intrinsic Hallucinations:** Demonstrating that reducing long-distance dependencies and adding more context leads to fewer intrinsic hallucinations.
+ - **Cause of Extrinsic Hallucinations:** Identifying that extrinsic hallucinations are caused by ungrounded information in training summaries.
+ - **Human Evaluation Results:** Showing that using NLI to filter training summaries significantly reduces hallucinations.
+
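The NLI cleaning step listed above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: `nli_filter` and `word_overlap` are hypothetical names, and `word_overlap` is only a toy stand-in for a real NLI model that scores how strongly the linearized table (premise) entails each summary sentence (hypothesis).

```python
def nli_filter(premise, sentences, entails, threshold=0.4):
    """Keep only the summary sentences that the chart data entails."""
    return [s for s in sentences if entails(premise, s) >= threshold]

# Toy stand-in for an NLI model: fraction of hypothesis tokens that
# also occur in the premise. A real pipeline would plug in an
# MNLI-finetuned transformer here instead.
def word_overlap(premise, hypothesis):
    p = set(premise.lower().split())
    h = hypothesis.lower().split()
    return sum(w in p for w in h) / max(len(h), 1)

table = "Year is 2020 ; Users is 275.7"
summary = [
    "There were 275.7 million users in 2020",
    "The growth was driven by marketing",
]
kept = nli_filter(table, summary, word_overlap)  # drops the ungrounded claim
```

Any entailment scorer can be swapped in for `word_overlap`; only sentences grounded in the table survive the filter.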
+ ### GitHub Link for the Original Chart-to-Text Data
+
+ [vis-nlp/Chart-to-text](https://github.com/vis-nlp/Chart-to-text)
+
+ ### Optimized Prompt Dataset with NLI Filtering
+
+ [Hallucinations-C2T Data](https://github.com/WorldHellow/Hallucinations-C2T/tree/main/data)
+
+ ## Citations
+
+ If you use this work, kindly cite the papers below. Thank you.
+ ### BibTeX for the Tackling Hallucinations Paper
+
+ ```
+ @inproceedings{obaid-ul-islam-etal-2023-tackling,
+     title = {Tackling Hallucinations in Neural Chart Summarization},
+     author = {Obaid ul Islam, Saad and Škrjanec, Iza and Dusek, Ondrej and Demberg, Vera},
+     booktitle = {Proceedings of the 16th International Natural Language Generation Conference},
+     month = sep,
+     year = {2023},
+     address = {Prague, Czechia},
+     publisher = {Association for Computational Linguistics},
+     url = {https://aclanthology.org/2023.inlg-main.30},
+     doi = {10.18653/v1/2023.inlg-main.30},
+     pages = {414--423},
+     abstract = {Hallucinations in text generation occur when the system produces text that is not grounded in the input. In this work, we tackle the problem of hallucinations in neural chart summarization. Our analysis shows that the target side of chart summarization training datasets often contains additional information, leading to hallucinations. We propose a natural language inference (NLI) based method to preprocess the training data and show through human evaluation that our method significantly reduces hallucinations. We also found that shortening long-distance dependencies in the input sequence and adding chart-related information like title and legends improves the overall performance.}
+ }
+ ```
+
+ ### Original Dataset Paper
+
+ ```
+ @inproceedings{kantharaj-etal-2022-chart,
+     title = "Chart-to-Text: A Large-Scale Benchmark for Chart Summarization",
+     author = "Kantharaj, Shankar and
+       Leong, Rixie Tiffany and
+       Lin, Xiang and
+       Masry, Ahmed and
+       Thakkar, Megh and
+       Hoque, Enamul and
+       Joty, Shafiq",
+     editor = "Muresan, Smaranda and
+       Nakov, Preslav and
+       Villavicencio, Aline",
+     booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+     month = may,
+     year = "2022",
+     address = "Dublin, Ireland",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2022.acl-long.277",
+     doi = "10.18653/v1/2022.acl-long.277",
+     pages = "4005--4023"
+ }
+ ```