---
license: mit
language:
- en
size_categories:
- n<1K
task_categories:
- question-answering
---

<style>
H1{color:Blue !important;}
H2{color:DarkOrange !important;}
p{color:Black !important;}
</style>

# Wikipedia Contradict Benchmark

<!-- Provide a quick summary of the dataset. -->  


<p align="center">
  <img src="./figs/Example.png" width=70%/>
</p>



Wikipedia Contradict Benchmark is a dataset of 253 high-quality, human-annotated instances designed to assess LLM performance when augmented with retrieved passages containing real-world knowledge conflicts.

This dataset card has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

Wikipedia Contradict Benchmark is a QA-based benchmark consisting of 253 human-annotated instances that cover different types of real-world knowledge conflicts. 

Each instance consists of a question, a pair of contradictory passages extracted from Wikipedia, and two distinct answers, each derived from one of the passages. The pair is annotated by a human annotator, who identifies where the conflicting information is and what type of conflict is observed. The annotator then produces a set of questions related to the passages, with different answers reflecting the conflicting sources of knowledge.

- **Curated by:** Yufang Hou, Alessandra Pascale, Javier Carnerero-Cano, Tigran Tchrakian, Radu Marinescu, Elizabeth Daly, Inkit Padhi, and Prasanna Sattigeri. All authors are employed by IBM Research.
<!-- - **Funded by [optional]:** There was no associated grant. -->
- **Shared by:** Yufang Hou, Alessandra Pascale, Javier Carnerero-Cano, Tigran Tchrakian, Radu Marinescu, Elizabeth Daly, Inkit Padhi, and Prasanna Sattigeri. 
- **Language(s) (NLP):** English.
- **License:** MIT.

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper:** https://arxiv.org/abs/2406.13805
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

The dataset has been used in the paper to assess LLM performance when augmented with retrieved passages containing real-world knowledge conflicts.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

N/A.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Wikipedia Contradict Benchmark is distributed in JSON format so researchers can easily use the data. There are 253 instances in total.
The description of each key (when the instance contains two questions) is as follows:

- **title:** Title of article.
- **url:** URL of article.
- **paragraph_A:** Paragraph automatically retrieved (containing the tag).
- **paragraph_A_clean:** Paragraph automatically retrieved (removing the tag).
- **tag:** Type of tag of the article (Inconsistent/Self-contradictory/Contradict-other). 
- **tagDate:** Date of the tag.
- **tagReason:** Reason for the tag.
- **wikitag_label_valid:** Valid or invalid tag (Valid/Invalid).
- **valid_comment:** Comment on the tag.
- **paragraphA_article:** Title of article containing passage 1.
- **paragraphA_information:** Relevant information of passage 1.
- **paragraphA_information_standalone:** Decontextualized relevant information of passage 1.
- **paragraphB_article:** Title of article containing passage 2.
- **paragraphB_information_standalone:** Decontextualized relevant information of passage 2.
- **wikitag_label_samepassage:** Boolean value stating whether passage 1 and passage 2 are the same (Same/Different).
- **relevantInfo_comment_A:** Comment on the information of passage 1.
- **relevantInfo_comment_B:** Comment on the information of passage 2.
- **Contradict type I:** Contradiction type I focuses on the fine-grained semantics of the contradiction, e.g., date/time, location, language, etc. 
- **Contradict type II:** Contradiction type II focuses on the modality of the contradiction. It describes whether the information in passage 1 and passage 2 comes from a piece of text, or from a row of an infobox or a table. 
- **Contradict type III:** Contradiction type III focuses on the source of the contradiction. It describes whether passage 1 and passage 2 are from the same article or not. 
- **Contradict type IV:** Contradiction type IV focuses on the reasoning aspect. It describes whether the contradiction is explicit or implicit (Explicit/Implicit). An implicit contradiction requires some reasoning to understand why passage 1 and passage 2 contradict each other. 
- **question1:** Question 1 inferred from the contradiction.
- **question1_answer1:** Gold answer to question 1 according to passage 1.
- **question1_answer2:** Gold answer to question 1 according to passage 2.
- **question2:** Question 2 inferred from the contradiction.
- **question2_answer1:** Gold answer to question 2 according to passage 1.
- **question2_answer2:** Gold answer to question 2 according to passage 2.    
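As a minimal sketch of how an instance with the keys above might be consumed, the snippet below extracts the question/answer pairs from one instance. The sample instance and the commented-out file name are our own illustrative assumptions, not part of the released data:

```python
import json

# Hypothetical: load the released JSON file (adjust the file name as needed).
# instances = json.load(open("wikiContradict.json"))

# A toy instance using a subset of the documented keys, for illustration only.
sample_instance = {
    "title": "Example article",
    "paragraphA_information_standalone": "The bridge opened in 1931.",
    "paragraphB_information_standalone": "The bridge opened in 1932.",
    "question1": "When did the bridge open?",
    "question1_answer1": "1931",
    "question1_answer2": "1932",
}

def qa_pairs(instance):
    """Yield (question, answer_from_passage_1, answer_from_passage_2) tuples,
    skipping questions that are absent from this instance."""
    for i in (1, 2):
        question = instance.get(f"question{i}")
        if question:  # some instances contain only one question
            yield (question,
                   instance[f"question{i}_answer1"],
                   instance[f"question{i}_answer2"])

for question, ans_a, ans_b in qa_pairs(sample_instance):
    print(f"{question} -> passage 1: {ans_a!r}, passage 2: {ans_b!r}")
```

Using `dict.get` for the question keys keeps the loop robust to instances that carry one question rather than two.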


## Usage of the Dataset

We provide the following starter code. Please refer to the [GitHub repository](https://github.com/) for more information about the functions ```load_testingdata``` and ```generateAnswers_bam_models```.


```python
from genai import Client, Credentials
import datetime
import pytz
import logging
import json
import copy
from dotenv import load_dotenv
from genai.text.generation import CreateExecutionOptions
from genai.schema import (
    DecodingMethod,
    LengthPenalty,
    ModerationParameters,
    ModerationStigma,
    TextGenerationParameters,
    TextGenerationReturnOptions,
)

try:
    from tqdm.auto import tqdm
except ImportError:
    print("Please install tqdm to run this example.")
    raise

load_dotenv()
client = Client(credentials=Credentials.from_env())

logging.getLogger("bampy").setLevel(logging.DEBUG)
fh = logging.FileHandler('bampy.log')
fh.setLevel(logging.DEBUG)
logging.getLogger("bampy").addHandler(fh)

parameters = TextGenerationParameters(
    max_new_tokens=250,
    min_new_tokens=1,
    decoding_method=DecodingMethod.GREEDY,
    return_options=TextGenerationReturnOptions(
        # if ordered is False, you can use return_options to retrieve the corresponding prompt
        input_text=True,
    ),
)


# load dataset
testingUnits = load_testingdata()
# test LLMs models
generateAnswers_bam_models(testingUnits)
```
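The paper's evaluation protocol is more involved; as a rough illustration of how one might score a generated answer against the two contradictory gold answers, a naive string-containment check could look like the following (the function name and logic are our own, not the paper's method):

```python
def classify_answer(generated: str, answer1: str, answer2: str) -> str:
    """Crude containment check: does the model's output mention one, both,
    or neither of the two contradictory gold answers?
    Illustrative only; the paper uses a more careful evaluation."""
    text = generated.lower()
    hits = [a for a in (answer1, answer2) if a.lower() in text]
    if len(hits) == 2:
        return "both"      # model surfaced the conflict
    if len(hits) == 1:
        return "single"    # model committed to one passage
    return "neither"

print(classify_answer("Sources disagree: it opened in 1931 or 1932.", "1931", "1932"))
```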



## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

Retrieval-augmented generation (RAG) has emerged as a promising solution to mitigate the limitations of large language models (LLMs), such as hallucinations and outdated information. However, it remains unclear how LLMs handle knowledge conflicts arising from different augmented retrieved passages, especially when these passages originate from the same source and have equal trustworthiness. In this regard, the motivation of Wikipedia Contradict Benchmark is to comprehensively evaluate LLM-generated answers to questions that have varying answers based on contradictory passages from Wikipedia, a dataset widely regarded as a high-quality pre-training resource for most LLMs.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

The raw data consists of text retrieved from Wikipedia articles containing inconsistent, self-contradictory, and contradict-other tags. The first two tags denote contradictory statements within the same article, whereas the third highlights instances where the content of one article contradicts that of another article. In total, we collected around 1,200 articles that contain these tags through the Wikipedia maintenance category “Wikipedia articles with content issues”. Given a content inconsistency tag provided by Wikipedia editors, the annotators verified whether the tag was valid by checking the relevant article content, the editor’s comment, and, if necessary, the information in the edit history and the article’s talk page.

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

Wikipedia contributors.

### Annotations

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

The annotation interface was developed using [Label Studio](https://labelstud.io/).

The annotators were required to slightly modify the original passages to make them stand-alone (decontextualization). Normally, this requires resolving coreference anaphors or bridging anaphors in the first sentence (see the annotation guidelines). In Wikipedia, the antecedents for these anaphors are often the article titles themselves.

For further information, see the annotation guidelines of the paper.

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

Yufang Hou, Alessandra Pascale, Javier Carnerero-Cano, Tigran Tchrakian, Radu Marinescu, Elizabeth Daly, Inkit Padhi, and Prasanna Sattigeri.

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

N/A.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Each annotation instance contains at least one question and two possible answers, but some instances may contain more than one question (with the corresponding two possible answers for each question). Some instances may not contain a value for **paragraph_A_clean**, **tagDate**, or **tagReason**.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Our data is downloaded from Wikipedia. As such, it is biased towards the original content and sources. Given that human data annotation involves some degree of subjectivity, we created a comprehensive 17-page annotation guidelines document to clarify important cases during the annotation process. The annotators were explicitly instructed not to let their personal feelings about a particular topic influence their annotations. Nevertheless, some degree of intrinsic subjectivity might have affected the annotations.

Since our dataset requires manual annotation, annotation noise is inevitably introduced. 


## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

If this dataset is utilized in your research, kindly cite the following paper:

**BibTeX:**

```
@article{hou2024wikicontradict,
  title={{WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia}},
  author={Hou, Yufang and Pascale, Alessandra and Carnerero-Cano, Javier and Tchrakian, Tigran and Marinescu, Radu and Daly, Elizabeth and Padhi, Inkit and Sattigeri, Prasanna},
  journal={arXiv preprint arXiv:2406.13805},
  year={2024}
}
```

**APA:**

Hou, Y., Pascale, A., Carnerero-Cano, J., Tchrakian, T., Marinescu, R., Daly, E., Padhi, I., & Sattigeri, P. (2024). WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia. arXiv preprint arXiv:2406.13805.

<!-- ## Glossary [optional] -->

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

<!-- [More Information Needed] -->

<!-- ## More Information [optional] -->

<!-- [More Information Needed] -->

## Dataset Card Authors

Yufang Hou, Alessandra Pascale, Javier Carnerero-Cano, Tigran Tchrakian, Radu Marinescu, Elizabeth Daly, Inkit Padhi, and Prasanna Sattigeri.

## Dataset Card Contact

Yufang Hou ([email protected]), Alessandra Pascale ([email protected]), Javier Carnerero-Cano ([email protected]), Tigran Tchrakian ([email protected]), Radu Marinescu ([email protected]), Elizabeth Daly ([email protected]), Inkit Padhi ([email protected]), and Prasanna Sattigeri ([email protected]).